WorldWideScience

Sample records for sample plot size

  1. The importance of plot size and the number of sampling seasons on capturing macrofungal species richness.

    Science.gov (United States)

    Li, Huili; Ostermann, Anne; Karunarathna, Samantha C; Xu, Jianchu; Hyde, Kevin D; Mortimer, Peter E

    2018-07-01

The species-area relationship is an important factor in the study of species diversity, conservation biology, and landscape ecology. A deeper understanding of this relationship is necessary in order to provide recommendations on how to improve the quality of data collection on macrofungal diversity in different land use systems in future studies; this requires a systematic assessment of methodological parameters, in particular optimal plot sizes. The species-area relationship of macrofungi in tropical and temperate climatic zones and four different land use systems was investigated by determining the macrofungal species richness in plot sizes ranging from 100 m² to 10,000 m² over two sampling seasons. We found that the effect of plot size on recorded species richness differed significantly between land use systems, with the exception of monoculture systems. For both climate zones, land use system needs to be considered when determining optimal plot size. Using an optimal plot size was more important than temporal replication (over two sampling seasons) in accurately recording species richness. Copyright © 2018 British Mycological Society. Published by Elsevier Ltd. All rights reserved.
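
    A minimal sketch of the species-area power law such plot-size studies build on, S = c·A^z, fitted by log-log least squares. The areas follow the abstract's 100-10,000 m² range; the richness values are invented for illustration, not data from the paper.

    ```python
    # Fit S = c * A^z from (plot area, species richness) pairs via a
    # linear regression in log space. Data are synthetic.
    import numpy as np

    areas = np.array([100, 400, 900, 2500, 10_000])   # plot sizes (m^2)
    richness = np.array([12, 21, 28, 39, 55])         # hypothetical counts

    z, log_c = np.polyfit(np.log(areas), np.log(richness), 1)
    print(f"z = {z:.2f}, c = {np.exp(log_c):.2f}")

    # Predicted richness for an intermediate plot size
    print(f"predicted S at 1600 m^2: {np.exp(log_c) * 1600**z:.1f}")
    ```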

  2. The Effect of Plot Size on Some Pratylenchus Penetrans ...

    African Journals Online (AJOL)

    Pratylenchus penetrans counts obtained from a rose field, sampled sequentially by decreasing the plot sizes were computed to obtain the respective sample means, variance and k-value of the negative binomial distribution. Plots 21 m x 80 m, 3.6 m x 3.6 m and 0.6 m x 0.6 m were sampled for the nematode. It is reported ...

  3. Effects of plot size on forest-type algorithm accuracy

    Science.gov (United States)

    James A. Westfall

    2009-01-01

    The Forest Inventory and Analysis (FIA) program utilizes an algorithm to consistently determine the forest type for forested conditions on sample plots. Forest type is determined from tree size and species information. Thus, the accuracy of results is often dependent on the number of trees present, which is highly correlated with plot area. This research examines the...

  4. Selecting the optimum plot size for a California design-based stream and wetland mapping program.

    Science.gov (United States)

    Lackey, Leila G; Stein, Eric D

    2014-04-01

Accurate estimates of the extent and distribution of wetlands and streams are the foundation of wetland monitoring, management, restoration, and regulatory programs. Traditionally, these estimates have relied on comprehensive mapping. However, this approach is prohibitively resource-intensive over large areas, making it both impractical and statistically unreliable. Probabilistic (design-based) approaches to evaluating status and trends provide a more cost-effective alternative because, compared with comprehensive mapping, overall extent is inferred from mapping a statistically representative, randomly selected subset of the target area. In this type of design, the size of sample plots has a significant impact on program costs and on statistical precision and accuracy; however, no consensus exists on the appropriate plot size for remote monitoring of stream and wetland extent. This study utilized simulated sampling to assess the performance of four plot sizes (1, 4, 9, and 16 km²) for three geographic regions of California. Simulation results showed smaller plot sizes (1 and 4 km²) were most efficient for achieving desired levels of statistical accuracy and precision. However, larger plot sizes were more likely to contain rare and spatially limited wetland subtypes. Balancing these considerations led to selection of 4 km² for the California status and trends program.
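
    A sketch of the simulated-sampling idea described above, under simplifying assumptions: a synthetic binary landscape stands in for the mapped wetland data, and square plots of several edge lengths are placed at random to compare the spread of the resulting extent estimates.

    ```python
    # Compare estimate spread across plot sizes on a synthetic landscape.
    import numpy as np

    rng = np.random.default_rng(42)
    land = (rng.random((400, 400)) < 0.05).astype(float)   # 5% "wetland" cells
    true_total = land.sum()

    def estimate(plot_edge, n_plots=50):
        """Mean of n_plots plot densities, scaled to the full landscape."""
        rows = rng.integers(0, land.shape[0] - plot_edge, n_plots)
        cols = rng.integers(0, land.shape[1] - plot_edge, n_plots)
        dens = [land[r:r+plot_edge, c:c+plot_edge].mean()
                for r, c in zip(rows, cols)]
        return np.mean(dens) * land.size

    for edge in (10, 20, 30, 40):                          # plot edge lengths
        est = [estimate(edge) for _ in range(200)]
        print(f"edge {edge:>2}: mean={np.mean(est):8.0f} "
              f"(true {true_total:.0f}), SE={np.std(est):6.0f}")
    ```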

  5. Assessment of errors associated with plot size and lateral movement of nitrogen-15 when studying fertilizer recovery under field conditions

    International Nuclear Information System (INIS)

    Sanchez, C.A.; Blackmer, A.M.; Horton, R.; Timmons, D.R.

    1987-01-01

The high cost of 15N-labeled fertilizers encourages the use of field plots having minimum size. If plot size is reduced too much, however, lateral movement of N near the plots by mass flow or diffusion within the soil or by translocation through plant roots can become a significant source of error in determinations of fertilizer N recovery. This study was initiated to assess the importance of lateral movement of labeled fertilizer when unconfined plots are used to determine recovery of fertilizer. Corn grain samples were collected at various positions inside and outside 15N plots, and the 15N contents of these samples were determined. The data were fit to mathematical models to estimate the extent to which lateral movement of fertilizer N caused errors in determined values of fertilizer recovery for the first, second, and third crops following fertilization. These models also were used to predict the plot size needed for similar 15N-tracer studies in the future. The results of these studies indicate that 15N plots having a size of 2 by 2 m are sufficiently large for determining recovery of fertilizer N for corn crops under most conditions. Where lateral movement of fertilizer N in soils is suspected to be a problem, we recommend collection of a few plant samples outside the 15N plots as insurance against misleading conclusions concerning fertilizer N recovery.

  6. Research Note Pilot survey to assess sample size for herbaceous ...

    African Journals Online (AJOL)

    A pilot survey to determine sub-sample size (number of point observations per plot) for herbaceous species composition assessments, using a wheel-point apparatus applying the nearest-plant method, was conducted. Three plots differing in species composition on the Zululand coastal plain were selected, and on each plot ...

  7. Nonlinear Dot Plots.

    Science.gov (United States)

    Rodrigues, Nils; Weiskopf, Daniel

    2018-01-01

    Conventional dot plots use a constant dot size and are typically applied to show the frequency distribution of small data sets. Unfortunately, they are not designed for a high dynamic range of frequencies. We address this problem by introducing nonlinear dot plots. Adopting the idea of nonlinear scaling from logarithmic bar charts, our plots allow for dots of varying size so that columns with a large number of samples are reduced in height. For the construction of these diagrams, we introduce an efficient two-way sweep algorithm that leads to a dense and symmetrical layout. We compensate aliasing artifacts at high dot densities by a specifically designed low-pass filtering method. Examples of nonlinear dot plots are compared to conventional dot plots as well as linear and logarithmic histograms. Finally, we include feedback from an expert review.
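
    The paper's two-way sweep layout and low-pass filtering are not reproduced here; the sketch below only illustrates the core scaling idea, letting dots in a high-frequency column shrink so that column height grows roughly logarithmically with frequency. Parameter names are invented.

    ```python
    # Nonlinear dot sizing: dots in a column of n samples share a total
    # height of unit * log2(n + 1), so tall columns are compressed.
    import math

    def dot_size(n, unit=10.0):
        """Per-dot size when a column of n dots spans unit*log2(n+1)."""
        return unit * math.log2(n + 1) / n

    for n in (1, 10, 100, 1000):
        size = dot_size(n)
        print(f"n={n:>4}: dot size {size:6.3f}, column height {size * n:6.1f}")
    ```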

  8. Visual search for tropical web spiders: the influence of plot length, sampling effort, and phase of the day on species richness.

    Science.gov (United States)

    Pinto-Leite, C M; Rocha, P L B

    2012-12-01

    Empirical studies using visual search methods to investigate spider communities were conducted with different sampling protocols, including a variety of plot sizes, sampling efforts, and diurnal periods for sampling. We sampled 11 plots ranging in size from 5 by 10 m to 5 by 60 m. In each plot, we computed the total number of species detected every 10 min during 1 hr during the daytime and during the nighttime (0630 hours to 1100 hours, both a.m. and p.m.). We measured the influence of time effort on the measurement of species richness by comparing the curves produced by sample-based rarefaction and species richness estimation (first-order jackknife). We used a general linear model with repeated measures to assess whether the phase of the day during which sampling occurred and the differences in the plot lengths influenced the number of species observed and the number of species estimated. To measure the differences in species composition between the phases of the day, we used a multiresponse permutation procedure and a graphical representation based on nonmetric multidimensional scaling. After 50 min of sampling, we noted a decreased rate of species accumulation and a tendency of the estimated richness curves to reach an asymptote. We did not detect an effect of plot size on the number of species sampled. However, differences in observed species richness and species composition were found between phases of the day. Based on these results, we propose guidelines for visual search for tropical web spiders.
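
    The first-order jackknife estimator used in the abstract has a simple closed form, S_jack1 = S_obs + Q1·(m−1)/m, where Q1 counts the species seen in exactly one sample. A toy incidence matrix illustrates it:

    ```python
    # First-order jackknife richness estimate from a samples-by-species
    # incidence matrix (rows = sampling intervals, 1 = species detected).
    import numpy as np

    incidence = np.array([
        [1, 1, 0, 0, 1],
        [1, 0, 1, 0, 0],
        [1, 1, 0, 1, 0],
    ])                                           # 3 samples, 5 species

    m = incidence.shape[0]                       # number of samples
    s_obs = incidence.any(axis=0).sum()          # observed richness
    q1 = (incidence.sum(axis=0) == 1).sum()      # species in exactly one sample

    jack1 = s_obs + q1 * (m - 1) / m
    print(f"S_obs = {s_obs}, Q1 = {q1}, Jackknife-1 = {jack1:.2f}")
    ```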

  9. Influence of Plot Size on Efficiency of Biomass Estimates in Inventories of Dry Tropical Forests Assisted by Photogrammetric Data from an Unmanned Aircraft System

    Directory of Open Access Journals (Sweden)

    Daud Jones Kachamba

    2017-06-01

Full Text Available Applications of unmanned aircraft systems (UASs) to assist in forest inventories have provided promising results in biomass estimation for different forest types. Recent studies demonstrating use of different types of remotely sensed data to assist in biomass estimation have shown that accuracy and precision of estimates are influenced by the size of field sample plots used to obtain reference values for biomass. The objective of this case study was to assess the influence of sample plot size on efficiency of UAS-assisted biomass estimates in the dry tropical miombo woodlands of Malawi. The results of a design-based field sample inventory assisted by three-dimensional point clouds obtained from aerial imagery acquired with a UAS showed that the root mean square errors as well as the standard error estimates of mean biomass decreased as sample plot sizes increased. Furthermore, relative efficiency values over different sample plot sizes were above 1.0 in a design-based and model-assisted inferential framework, indicating that UAS-assisted inventories were more efficient than purely field-based inventories. The results on relative costs for UAS-assisted and pure field-based sample plot inventories revealed that there is a trade-off between inventory costs and required precision. For example, in our study, if a standard error of less than approximately 3 Mg ha−1 was targeted, then a UAS-assisted forest inventory should be applied to ensure more cost-effective and precise estimates. Future studies should therefore focus on finding optimum plot sizes for particular applications, for example in projects under the Reducing Emissions from Deforestation and Forest Degradation, plus forest conservation, sustainable management of forests and enhancement of carbon stocks (REDD+) mechanism, at different geographical scales.

  10. Considerations in Forest Growth Estimation Between Two Measurements of Mapped Forest Inventory Plots

    Science.gov (United States)

    Michael T. Thompson

    2006-01-01

Several aspects of the enhanced Forest Inventory and Analysis (FIA) program's national plot design complicate change estimation. The design incorporates up to three separate plot sizes (microplot, subplot, and macroplot) to sample trees of different sizes. Because multiple plot sizes are involved, change estimators designed for polyareal plot sampling, such as those...

  11. Bridging scale gaps between regional maps of forest aboveground biomass and field sampling plots using TanDEM-X data

    Science.gov (United States)

    Ni, W.; Zhang, Z.; Sun, G.

    2017-12-01

Several large-scale maps of forest AGB have been released [1] [2] [3]. However, these existing global or regional datasets were only approximations based on combining land cover type and representative values instead of measurements of actual forest aboveground biomass or forest heights [4]. Rodríguez-Veiga et al. [5] reported obvious discrepancies of existing forest biomass stock maps with in-situ observations in Mexico. One of the biggest challenges to the credibility of these maps comes from the scale gaps between the size of the field sampling plots used to develop (or validate) estimation models and the pixel size of these maps, and from the scarcity of field sampling plots of sufficient size for the verification of these products [6]. It is time-consuming and labor-intensive to collect a sufficient number of field samples over plots as large as the resolution of regional maps, while smaller field sampling plots cannot fully represent the spatial heterogeneity of forest stands, as shown in Figure 1. Forest AGB is directly determined by forest heights, diameter at breast height (DBH) of each tree, forest density and tree species. What is measured in field sampling are the geometrical characteristics of forest stands, including the DBH, tree heights and forest densities. LiDAR data are considered the best dataset for the estimation of forest AGB, mainly because LiDAR can directly capture geometrical features of forest stands through its range detection capabilities. A remotely sensed dataset capable of direct measurement of forest spatial structures may therefore serve as a ladder to bridge the scale gaps between the pixel size of regional maps of forest AGB and field sampling plots. Several studies report that TanDEM-X data can be used to characterize forest spatial structures [7, 8]. In this study, the forest AGB map of northeast China was produced using ALOS/PALSAR data, taking TanDEM-X data as a bridge. The TanDEM-X InSAR data used in

  12. Experimental strategies in carrying out VCU for tobacco crop I: plot design and size.

    Science.gov (United States)

    Toledo, F H R B; Ramalho, M A P; Pulcinelli, C E; Bruzi, A T

    2013-09-19

    We aimed to establish standards for tobacco Valor de Cultivo e Uso (VCU) in Brazil. We obtained information regarding the size and design of plots of two varietal groups of tobacco (Virginia and Burley). Ten inbred lines of each varietal group were evaluated in a randomized complete block design with four replications. The plot contained 42 plants with six rows of seven columns each. For each experiment plant, considering the position of the respective plant in the plot (row and column) as a reference, cured leaf weight (g/plant), total sugar content (%), and total alkaloid content (%) were determined. The maximum curvature of the variations in coefficients was estimated. Trials with the number of plants per plot ranging from 2 to 41 were simulated. The use of a border was not justified because the interactions between inbred lines x position in the plots were never significant, showing that the behavior of the inbred lines coincided with the different positions. The plant performance varied according to the column position in the plot. To lessen the effect of this factor, the use of plots with more than one row is recommended. Experimental precision, evaluated by the CV%, increased with an increase in plot size; nevertheless, the maximum curvature of the variation coefficient method showed no expressive increase in precision if the number of plants was greater than seven. The result in identification of the best inbred line, in terms of the size of each plot, coincided with the maximum curvature method.

  13. [Effects of sampling plot number on tree species distribution prediction under climate change].

    Science.gov (United States)

    Liang, Yu; He, Hong-Shi; Wu, Zhi-Wei; Li, Xiao-Na; Luo, Xu

    2013-05-01

    Based on the neutral landscapes under different degrees of landscape fragmentation, this paper studied the effects of sampling plot number on the prediction of tree species distribution at landscape scale under climate change. The tree species distribution was predicted by the coupled modeling approach which linked an ecosystem process model with a forest landscape model, and three contingent scenarios and one reference scenario of sampling plot numbers were assumed. The differences between the three scenarios and the reference scenario under different degrees of landscape fragmentation were tested. The results indicated that the effects of sampling plot number on the prediction of tree species distribution depended on the tree species life history attributes. For the generalist species, the prediction of their distribution at landscape scale needed more plots. Except for the extreme specialist, landscape fragmentation degree also affected the effects of sampling plot number on the prediction. With the increase of simulation period, the effects of sampling plot number on the prediction of tree species distribution at landscape scale could be changed. For generalist species, more plots are needed for the long-term simulation.

  14. Standardized mean differences cause funnel plot distortion in publication bias assessments.

    Science.gov (United States)

    Zwetsloot, Peter-Paul; Van Der Naald, Mira; Sena, Emily S; Howells, David W; IntHout, Joanna; De Groot, Joris Ah; Chamuleau, Steven Aj; MacLeod, Malcolm R; Wever, Kimberley E

    2017-09-08

    Meta-analyses are increasingly used for synthesis of evidence from biomedical research, and often include an assessment of publication bias based on visual or analytical detection of asymmetry in funnel plots. We studied the influence of different normalisation approaches, sample size and intervention effects on funnel plot asymmetry, using empirical datasets and illustrative simulations. We found that funnel plots of the Standardized Mean Difference (SMD) plotted against the standard error (SE) are susceptible to distortion, leading to overestimation of the existence and extent of publication bias. Distortion was more severe when the primary studies had a small sample size and when an intervention effect was present. We show that using the Normalised Mean Difference measure as effect size (when possible), or plotting the SMD against a sample size-based precision estimate, are more reliable alternatives. We conclude that funnel plots using the SMD in combination with the SE are unsuitable for publication bias assessments and can lead to false-positive results.
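
    The distortion has a simple algebraic source: the usual large-sample standard error of the SMD contains the effect size itself, so bigger effects get bigger SEs at identical sample sizes. A sketch using a standard approximation (exact variance formulas vary slightly across references):

    ```python
    # SE of the SMD vs. a sample-size-based precision estimate that
    # ignores d, the alternative the abstract recommends plotting.
    import math

    def se_smd(d, n1, n2):
        """Approximate SE of the SMD: note the d**2 term."""
        return math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))

    def se_size_based(n1, n2):
        """Precision estimate that depends on sample size only."""
        return math.sqrt((n1 + n2) / (n1 * n2))

    for d in (0.0, 0.5, 1.0, 2.0):
        print(f"d={d:3.1f}: SE={se_smd(d, 20, 20):.3f}, "
              f"size-based SE={se_size_based(20, 20):.3f}")
    ```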

  15. Optimal plot size in the evaluation of papaya scions: proposal and comparison of methods

    Directory of Open Access Journals (Sweden)

    Humberto Felipe Celanti

Full Text Available ABSTRACT Evaluating the quality of scions is extremely important and it can be done using characteristics of shoots and roots. This experiment evaluated height of the aerial part, stem diameter, number of leaves, petiole length and length of roots of papaya seedlings. Analyses were performed on a blank trial with 240 seedlings of "Golden Pecíolo Curto". The determination of the optimum plot size was done by applying the maximum curvature method, the maximum curvature of the coefficient of variation method, and a newly proposed method, which incorporates bootstrap resampling simulation into the maximum curvature method. According to the results obtained, five is the optimal number of seedlings of papaya "Golden Pecíolo Curto" per plot. The proposed method of bootstrap simulation with replacement provides optimal plot sizes equal to or higher than those of the maximum curvature method, and the same plot size as the maximum curvature of the coefficient of variation method.
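
    A sketch of the bootstrap-plus-maximum-curvature idea under stated assumptions: single-plant values from a blank trial are resampled with replacement, the coefficient of variation is modeled as CV = a·X^(−b), and the critical plot size is taken from the Meier-Lessman maximum-curvature formula. Data and parameters are synthetic, not the papaya measurements.

    ```python
    # Bootstrap the maximum-curvature-of-CV plot size estimate.
    import numpy as np

    rng = np.random.default_rng(1)
    plants = rng.gamma(shape=8, scale=5, size=240)    # stand-in for 240 seedlings

    def cv_per_plot_size(values, sizes):
        cvs = []
        for k in sizes:
            n_plots = len(values) // k
            plots = values[:n_plots * k].reshape(n_plots, k).mean(axis=1)
            cvs.append(100 * plots.std(ddof=1) / plots.mean())
        return np.array(cvs)

    def critical_plot_size(values, sizes):
        cvs = cv_per_plot_size(values, sizes)
        # Fit CV = a * X^(-b) in log space, then the maximum-curvature point
        # (Meier & Lessman form): Xc = [a^2 b^2 (2b+1)/(b+2)]^(1/(2b+2)).
        slope, log_a = np.polyfit(np.log(sizes), np.log(cvs), 1)
        a, b = np.exp(log_a), -slope
        return (a**2 * b**2 * (2 * b + 1) / (b + 2)) ** (1 / (2 * b + 2))

    sizes = np.arange(1, 13)
    boot = [critical_plot_size(rng.choice(plants, size=240), sizes)
            for _ in range(500)]
    print(f"optimal plot size ~ {np.mean(boot):.1f} plants "
          f"(95% CI {np.percentile(boot, 2.5):.1f}-"
          f"{np.percentile(boot, 97.5):.1f})")
    ```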

  16. Statistical characterization of a large geochemical database and effect of sample size

    Science.gov (United States)

    Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.

    2005-01-01

    smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests, and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histogram, stem-and-leaf, and probability plots are recommended for rough judgement of probability distribution if needed. ?? 2005 Elsevier Ltd. All rights reserved.
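
    The abstract's point that large n defeats normality tests is easy to demonstrate: the same mildly skewed data may pass at a few hundred observations and fail at thousands. A sketch with an illustrative lognormal sample:

    ```python
    # Watch the normality-test p-value shrink as sample size grows.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    data = rng.lognormal(mean=0, sigma=0.1, size=20_000)   # mildly skewed

    for n in (200, 1000, 20_000):
        stat, p = stats.normaltest(data[:n])     # D'Agostino-Pearson test
        verdict = "reject" if p < 0.05 else "do not reject"
        print(f"n={n:>6}: p = {p:.2g} -> {verdict} normality")
    ```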

  17. Effects of field plot size on prediction accuracy of aboveground biomass in airborne laser scanning-assisted inventories in tropical rain forests of Tanzania.

    Science.gov (United States)

    Mauya, Ernest William; Hansen, Endre Hofstad; Gobakken, Terje; Bollandsås, Ole Martin; Malimbwi, Rogers Ernest; Næsset, Erik

    2015-12-01

Airborne laser scanning (ALS) has recently emerged as a promising tool to acquire auxiliary information for improving aboveground biomass (AGB) estimation in sample-based forest inventories. Under design-based and model-assisted inferential frameworks, the estimation relies on a model that relates the auxiliary ALS metrics to AGB estimated on ground plots. The size of the field plots has been identified as one source of model uncertainty because of the so-called boundary effects, which increase with decreasing plot size. Recent research in tropical forests has aimed to quantify the boundary effects on model prediction accuracy, but evidence of the consequences for the final AGB estimates is lacking. In this study we analyzed the effect of field plot size on model prediction accuracy and its implication when used in a model-assisted inferential framework. The results showed that the prediction accuracy of the model improved as the plot size increased. The adjusted R² increased from 0.35 to 0.74 while the relative root mean square error decreased from 63.6 to 29.2%. Indicators of boundary effects were identified and confirmed to have significant effects on the model residuals. Variance estimates of model-assisted mean AGB relative to corresponding variance estimates of pure field-based AGB decreased with increasing plot size in the range from 200 to 3000 m². The variance ratio of field-based estimates relative to model-assisted variance ranged from 1.7 to 7.7. This study showed that the relative improvement in precision of AGB estimation when increasing field-plot size was greater for an ALS-assisted inventory compared to that of a pure field-based inventory.

  18. Sampling Error in Relation to Cyst Nematode Population Density Estimation in Small Field Plots.

    Science.gov (United States)

    Župunski, Vesna; Jevtić, Radivoje; Jokić, Vesna Spasić; Župunski, Ljubica; Lalošević, Mirjana; Ćirić, Mihajlo; Ćurčić, Živko

    2017-06-01

Cyst nematodes are serious plant-parasitic pests which can cause severe yield losses and extensive damage. Since there is still very little information about the error of population density estimation in small field plots, this study contributes to the broad issue of population density assessment. It was shown that there was no significant difference between cyst counts of five or seven bulk samples taken per 1-m² plot, if the average cyst count per examined plot exceeds 75 cysts per 100 g of soil. Goodness of fit of the data to probability distributions, tested with the χ² test, confirmed a negative binomial distribution of cyst counts for 21 out of 23 plots. The recommended sampling precision of 17%, expressed through the coefficient of variation (cv), was achieved if plots of 1 m² contaminated with more than 90 cysts per 100 g of soil were sampled with 10-core bulk samples taken in five repetitions. If plots were contaminated with less than 75 cysts per 100 g of soil, 10-core bulk samples taken in seven repetitions gave a cv higher than 23%. This study indicates that more attention should be paid to the estimation of sampling error in experimental field plots to ensure more reliable estimation of the population density of cyst nematodes.
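
    Under a negative binomial count model with mean μ and dispersion k, the cv of the mean of n samples is sqrt((1/μ + 1/k)/n), which links the abstract's precision targets to sampling effort. The μ and k values below are purely illustrative (bulking cores changes the effective dispersion):

    ```python
    # Relate target cv to the number of bulk samples under an NB model.
    import math

    def cv_of_mean(mu, k, n):
        """cv of the mean of n counts with mean mu and NB dispersion k."""
        return math.sqrt((1 / mu + 1 / k) / n)

    def samples_needed(mu, k, target_cv):
        return math.ceil((1 / mu + 1 / k) / target_cv**2)

    for mu in (75, 90, 150):                      # cysts per 100 g of soil
        print(f"mu={mu:>3}: cv with n=5: {cv_of_mean(mu, 10, 5):.2f}, "
              f"n for cv<=0.17: {samples_needed(mu, 10, 0.17)}")
    ```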

  19. Productive variability, border use and plot size in trials with cherry tomato

    Directory of Open Access Journals (Sweden)

    Daniel Santos

    2018-02-01

Full Text Available ABSTRACT: Knowing the productive variability within protected environments is crucial for choosing the experimental design to be used under those conditions. Thus, the aim of the present study was to assess the variability of fruit production in a protected environment cultivated with cherry tomatoes and to verify the effect of borders and plot size in reducing this variability. To this end, data from a uniformity test carried out in a greenhouse with cherry tomato cv. 'Lili' were used. Total fresh mass of fruits per plant was considered, with the plants arranged in cropping rows parallel to the lateral openings of the greenhouse and, alternatively, the same plants arranged in columns perpendicular to these openings. To generate the borders, different scenarios were designed by excluding rows and columns and using different plot sizes. In each scenario, homogeneity of variances among the remaining rows and columns was tested. There is no variability of fruit production among rows or columns in trials with cherry tomatoes carried out in greenhouses, and the use of a border does not bring benefits in terms of reducing the coefficient of variation or the number of cases of variance heterogeneity among rows or columns. Plots with a size equal to or greater than two plants make it possible to use the completely randomized design in cherry tomato trials in greenhouses.

  20. Analysis of YBCO high temperature superconductor doped with silver nanoparticles and carbon nanotubes using Williamson-Hall and size-strain plot

    Science.gov (United States)

    Dadras, Sedigheh; Davoudiniya, Masoumeh

    2018-05-01

This paper sets out to investigate and compare the effects of Ag nanoparticle and carbon nanotube (CNT) doping on the mechanical properties of the Y1Ba2Cu3O7-δ (YBCO) high temperature superconductor. For this purpose, pure and doped YBCO samples were synthesized by the sol-gel method. The microstructural analysis of the samples was performed using X-ray diffraction (XRD). The crystallite size, lattice strain and stress of the pure and doped YBCO samples were estimated by modified forms of Williamson-Hall (W-H) analysis, namely the uniform deformation model (UDM) and uniform deformation stress model (UDSM), and by the size-strain plot method (SSP). The results show that the crystallite size, lattice strain and stress of the YBCO samples declined with Ag nanoparticle and CNT doping.

  1. Size Effect of the 2-D Bodies on the Geothermal Gradient and Q-A Plot

    Science.gov (United States)

    Thakur, M.; Blackwell, D. D.

    2009-12-01

Using numerical models we have investigated some of the criticisms of the Q-A plot related to the effect of the size of the body on the slope and reduced heat flow. The effects of horizontal conduction depend on the relative difference in radioactivity between the body and the country rock (assuming constant thermal conductivity). Horizontal heat transfer due to different 2-D bodies was numerically studied in order to quantify the resulting temperature differences at the Moho and the errors in the prediction of Qr (reduced heat flow). Using the two end-member distributions of radioactivity, the step model (thickness 10 km) and the exponential model, different 2-D models of horizontal scale (width) ranging from 10-500 km were investigated. Increasing the horizontal size of the body tends to move observations closer towards the 1-D solution. A temperature difference of 50 °C is produced (for the step model) at the Moho between models of width 10 km versus 500 km. In other words, the 1-D solution effectively provides large-scale averaging in terms of heat flow and the temperature field in the lithosphere. For bodies ≤ 100 km wide the geotherms at shallower levels are affected, but at depth they converge and are 50 °C lower than the infinite-plate model temperature. In the case of 2-D bodies, surface heat flow is decreased due to horizontal transfer of heat, which shifts the Q-A point vertically downward on the Q-A plot. The smaller the size of the body, the greater the deviation from the 1-D solution and the further the Q-A point moves downward on the Q-A plot. On the Q-A plot, points from bodies of different sizes with different radioactivity contrasts (for the step and exponential models) exactly reproduce the reduced heat flow Qr. Thus the size of the body can affect the slope on a Q-A plot, but Qr is not changed. Therefore, Qr ~ 32 mW m-2 obtained from the global terrain average Q-A plot represents the best estimate of stable continental mantle heat

  2. Plot-size for 15N-fertilizer recovery studies by tanzania-grass

    International Nuclear Information System (INIS)

    Martha Junior, Geraldo Bueno; Trivelin, Paulo Cesar Ocheuze; Corsi, Moacyr

    2009-01-01

The understanding of N dynamics in pasture ecosystems can be improved by studies using the 15N tracer technique. However, in these experiments it must be ensured that lateral movement of the labeled fertilizer does not interfere with the results. In this study the plot-size requirements for 15N-fertilizer recovery experiments with irrigated Panicum maximum cv. Tanzania were determined. Three grazing intensities (light, moderate and intensive grazing) in the winter, spring and summer seasons were considered. A 1-m² plot size, with a grass tussock in the center, was adequate, irrespective of the grazing intensity or season of the year. Increasing the distance from the area fertilized with 15N negatively affected the N derived from fertilizer (Npfm) recovered in herbage. The lowest decline in Npfm values was observed for the moderate and light grazing intensities. This might be explained by the vigorous growth characteristics of these plants. Increasing the grazing intensity decreased the tussock mass and, the smaller the tussock mass, the greater was the dependence on fertilizer nitrogen. (author)

  3. Strain and grain size of TiO2 nanoparticles from TEM, Raman spectroscopy and XRD: The revisiting of the Williamson-Hall plot method

    Science.gov (United States)

    Kibasomba, Pierre M.; Dhlamini, Simon; Maaza, Malik; Liu, Chuan-Pu; Rashad, Mohamed M.; Rayan, Diaa A.; Mwakikunga, Bonex W.

    2018-06-01

The Williamson-Hall (W-H) equation, which has been used since 1962 to obtain relative crystallite sizes and strains between samples, is revisited. A modified W-H equation is derived which takes into account the Scherrer equation, first published in 1918 (which traditionally gives a more absolute crystallite size prediction), and strain prediction from Raman spectra. It is found that W-H crystallite sizes are on average 2.11 ± 0.01 times smaller than the sizes from the Scherrer equation. Furthermore, the strains from the W-H plots, when compared to strains obtained from Raman spectral red-shifts, yield factors whose values depend on the phases in the materials - whether anatase, rutile or brookite. Two main phases are identified in the annealing temperatures (350 °C-700 °C) chosen herein - anatase and brookite. A transition temperature of 550 °C has been found for nano-TiO2 to irreversibly transform from brookite to anatase by plotting the Raman peak shifts against the annealing temperatures. The W-H underestimation of the strain in the brookite phase gives a W-H/Raman factor of 3.10 ± 0.05, whereas for the anatase phase one gets 2.46 ± 0.03. The new βtot² cos²θ versus sin θ plot, when fitted with a polynomial, yields less strain but much better matching with experimental TEM crystallite and agglomerate sizes than both the traditional Williamson-Hall and Scherrer methods. The improvement is greater when the model is linearized, that is, the βtot cos²θ versus sin θ plot rather than the βtot² cos²θ versus sin θ plot.

  4. Evaluation of species richness estimators based on quantitative performance measures and sensitivity to patchiness and sample grain size

    Science.gov (United States)

    Willie, Jacob; Petre, Charles-Albert; Tagg, Nikki; Lens, Luc

    2012-11-01

Data from forest herbaceous plants in a site of known species richness in Cameroon were used to test the performance of rarefaction and eight species richness estimators (ACE, ICE, Chao1, Chao2, Jack1, Jack2, Bootstrap and MM). Bias, accuracy, precision and sensitivity to patchiness and sample grain size were the evaluation criteria. An evaluation of the effects of sampling effort and patchiness on diversity estimation is also provided. Stems were identified and counted in linear series of 1-m² contiguous square plots distributed in six habitat types. Initially, 500 plots were sampled in each habitat type. The sampling process was monitored using rarefaction and a set of richness estimator curves. Curves from the first dataset suggested adequate sampling in riparian forest only. Additional plots ranging from 523 to 2143 were subsequently added in the undersampled habitats until most of the curves stabilized. Jack1 and ICE, the non-parametric richness estimators, performed better, being more accurate and less sensitive to patchiness and sample grain size, and significantly reducing biases that could not be detected by rarefaction and other estimators. This study confirms the usefulness of non-parametric incidence-based estimators, and recommends Jack1 or ICE alongside rarefaction while describing taxon richness and comparing results across areas sampled using similar or different grain sizes. As patchiness varied across habitat types, accurate estimations of diversity did not require the same number of plots. The number of samples needed to fully capture diversity is not necessarily the same across habitats, and can only be known when taxon sampling curves have indicated adequate sampling. Differences in observed species richness between habitats were generally due to differences in patchiness, except between two habitats where they resulted from differences in abundance. We suggest that communities should first be sampled thoroughly using appropriate taxon sampling

  5. A simple nomogram for sample size for estimating sensitivity and specificity of medical tests

    Directory of Open Access Journals (Sweden)

    Malhotra Rajeev

    2010-01-01

Full Text Available Sensitivity and specificity measure the inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce cost, risk, invasiveness, and time. An adequate sample size is a must to precisely estimate the validity of a diagnostic test. In practice, researchers generally decide on the sample size arbitrarily, either at their convenience or from previous literature. We have devised a simple nomogram that yields statistically valid sample sizes for anticipated sensitivity or anticipated specificity. MS Excel version 2007 was used to derive the values required to plot the nomogram, using varying absolute precision, known prevalence of disease, and a 95% confidence level, based on the formula already available in the literature. The nomogram plot was obtained by suitably arranging the lines and distances to conform to this formula. This nomogram can easily be used to determine the sample size for estimating the sensitivity or specificity of a diagnostic test with the required precision and 95% confidence level. Sample sizes at the 90% and 99% confidence levels can also be obtained by multiplying the number obtained for the 95% confidence level by 0.70 and 1.75, respectively. A nomogram instantly provides the required number of subjects by just moving a ruler and can be used repeatedly without redoing the calculations. It can also be applied for reverse calculations. This nomogram is not applicable to hypothesis-testing designs and applies only when both the diagnostic test and the gold standard yield dichotomous results.
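
    A common closed form behind such nomograms (a Buderer-type formula, shown here as an assumption rather than the authors' exact derivation) gives the subjects needed so that sensitivity or specificity is estimated within absolute precision d at a given prevalence:

    ```python
    # Sample size for estimating sensitivity/specificity with precision d.
    import math

    def n_for_sensitivity(se, d, prevalence, z=1.96):
        return math.ceil(z**2 * se * (1 - se) / (d**2 * prevalence))

    def n_for_specificity(sp, d, prevalence, z=1.96):
        return math.ceil(z**2 * sp * (1 - sp) / (d**2 * (1 - prevalence)))

    # Anticipated sensitivity 0.90, precision +/-0.05, prevalence 20%
    print(n_for_sensitivity(0.90, 0.05, 0.20))    # 692
    print(n_for_specificity(0.85, 0.05, 0.20))    # 245
    ```

    The abstract's 0.70 and 1.75 multipliers for the 90% and 99% confidence levels follow directly, since (1.645/1.96)² ≈ 0.70 and (2.576/1.96)² ≈ 1.73.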

  6. Optimizing variable radius plot size and LiDAR resolution to model standing volume in conifer forests

    Science.gov (United States)

    Ram Kumar Deo; Robert E. Froese; Michael J. Falkowski; Andrew T. Hudak

    2016-01-01

    The conventional approach to LiDAR-based forest inventory modeling depends on field sample data from fixed-radius plots (FRP). Because FRP sampling is cost intensive, combining variable-radius plot (VRP) sampling and LiDAR data has the potential to improve inventory efficiency. The overarching goal of this study was to evaluate the integration of LiDAR and VRP data....

  7. 36 CFR 9.42 - Well records and reports, plots and maps, samples, tests and surveys.

    Science.gov (United States)

    2010-07-01

Well records and reports, plots and maps, samples, tests and surveys. Any technical data gathered... (36 CFR 9.42; Title 36, Parks, Forests, and Public Property; 2010-07-01 edition)

  8. Estimation of lattice strain in nanocrystalline RuO2 by Williamson-Hall and size-strain plot methods

    Science.gov (United States)

    Sivakami, R.; Dhanuskodi, S.; Karvembu, R.

    2016-01-01

RuO2 nanoparticles (RuO2 NPs) have been successfully synthesized by the hydrothermal method. Structure and the particle size have been determined by X-ray diffraction (XRD), scanning electron microscopy (SEM), atomic force microscopy (AFM) and transmission electron microscopy (TEM). UV-Vis spectra reveal that the optical band gap of RuO2 nanoparticles is red shifted from 3.95 to 3.55 eV. BET measurements show a high specific surface area (SSA) of 118-133 m²/g and the pore diameter (10-25 nm) has been estimated by the Barret-Joyner-Halenda (BJH) method. The crystallite size and lattice strain in the samples have been investigated by Williamson-Hall (W-H) analysis assuming uniform deformation, deformation stress and deformation energy density, and by the size-strain plot method. All other relevant physical parameters including stress, strain and energy density have been calculated. The average crystallite size and the lattice strain evaluated from XRD measurements are in good agreement with the results of TEM.
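
    For reference, the uniform deformation model form of the Williamson-Hall analysis fits β·cosθ = Kλ/D + 4ε·sinθ, so a straight-line fit yields crystallite size D from the intercept and strain ε from the slope. The peak positions and widths below are invented, not the RuO2 data:

    ```python
    # Minimal uniform-deformation-model Williamson-Hall fit.
    import numpy as np

    K, lam = 0.9, 1.5406e-10                 # shape factor; Cu K-alpha (m)
    two_theta = np.radians([28.0, 35.1, 40.0, 54.3, 57.9])  # 2-theta (rad)
    beta = np.radians([0.30, 0.32, 0.35, 0.42, 0.45])       # FWHM (rad)

    theta = two_theta / 2
    slope, intercept = np.polyfit(np.sin(theta), beta * np.cos(theta), 1)

    D = K * lam / intercept                  # crystallite size (m)
    eps = slope / 4                          # lattice strain (dimensionless)
    print(f"D = {D * 1e9:.1f} nm, strain = {eps:.2e}")
    ```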

  9. [Comparative quality measurements part 3: funnel plots].

    Science.gov (United States)

    Kottner, Jan; Lahmann, Nils

    2014-02-01

Comparative quality measurements between organisations or institutions are common. Quality measures need to be standardised and risk-adjusted. Random error must also be taken adequately into account. Rankings that do not consider precision lead to flawed interpretations and encourage "gaming". Applying confidence intervals is one way to take chance variation into account. Funnel plots are modified control charts based on Statistical Process Control (SPC) theory. The quality measures are plotted against their sample size. Warning and control limits that are 2 or 3 standard deviations from the center line are added. With increasing group size the precision increases, so the control limits form a funnel. Data points within the control limits are considered to show common cause variation; data points outside them, special cause variation - avoiding the pitfalls of spurious rankings. Funnel plots offer data-based information about how to evaluate institutional performance within quality management contexts.
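
    A sketch of the funnel limits for a proportion indicator, assuming a binomial model so that the standard deviation shrinks as 1/sqrt(n); the target rate and group sizes are illustrative.

    ```python
    # Warning (2 SD) and control (3 SD) limits around a target rate p0.
    import math

    def funnel_limits(p0, n, z):
        sd = math.sqrt(p0 * (1 - p0) / n)
        return max(0.0, p0 - z * sd), min(1.0, p0 + z * sd)

    p0 = 0.10                                  # overall event rate (target)
    for n in (25, 100, 400, 1600):
        warn = funnel_limits(p0, n, 2)
        ctrl = funnel_limits(p0, n, 3)
        print(f"n={n:>4}: 2-SD {warn[0]:.3f}-{warn[1]:.3f}, "
              f"3-SD {ctrl[0]:.3f}-{ctrl[1]:.3f}")
    ```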

  10. Quality control of measurements made on fixed-area sample plots

    Science.gov (United States)

    Ola Lindgren

    2000-01-01

    The paper describes results from a large program for quality control of forest measurements. The performance of 87 surveyors was evaluated. Tree heights were usually measured well, whereas the counting of tree-rings on increment cores was a source of considerable bias for many surveyors. During tree count on sample plots, many surveyors had a tendency to forget trees,...

  11. Sample size methodology

    CERN Document Server

    Desu, M M

    2012-01-01

    One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria

  12. Estimation of lattice strain in nanocrystalline RuO2 by Williamson-Hall and size-strain plot methods.

    Science.gov (United States)

    Sivakami, R; Dhanuskodi, S; Karvembu, R

    2016-01-05

RuO2 nanoparticles (RuO2 NPs) have been successfully synthesized by the hydrothermal method. Structure and the particle size have been determined by X-ray diffraction (XRD), scanning electron microscopy (SEM), atomic force microscopy (AFM) and transmission electron microscopy (TEM). UV-Vis spectra reveal that the optical band gap of RuO2 nanoparticles is red shifted from 3.95 to 3.55 eV. BET measurements show a high specific surface area (SSA) of 118-133 m²/g and the pore diameter (10-25 nm) has been estimated by the Barret-Joyner-Halenda (BJH) method. The crystallite size and lattice strain in the samples have been investigated by Williamson-Hall (W-H) analysis assuming uniform deformation, deformation stress and deformation energy density, and by the size-strain plot method. All other relevant physical parameters including stress, strain and energy density have been calculated. The average crystallite size and the lattice strain evaluated from XRD measurements are in good agreement with the results of TEM. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Point and Fixed Plot Sampling Inventory Estimates at the Savannah River Site, South Carolina.

    Energy Technology Data Exchange (ETDEWEB)

Parresol, Bernard R.

    2004-02-01

    This report provides calculation of systematic point sampling volume estimates for trees greater than or equal to 5 inches diameter breast height (dbh) and fixed radius plot volume estimates for trees < 5 inches dbh at the Savannah River Site (SRS), Aiken County, South Carolina. The inventory of 622 plots was started in March 1999 and completed in January 2002 (Figure 1). Estimates are given in cubic foot volume. The analyses are presented in a series of Tables and Figures. In addition, a preliminary analysis of fuel levels on the SRS is given, based on depth measurements of the duff and litter layers on the 622 inventory plots plus line transect samples of down coarse woody material. Potential standing live fuels are also included. The fuels analyses are presented in a series of tables.

  14. Plot size recommendations for biomass estimation in a midwestern old-growth forest

    Science.gov (United States)

    Martin A. Spetich; George R Parker

    1998-01-01

    The authors examine the relationship between disturbance regime and plot size for woody biomass estimation in a midwestern old-growth deciduous forest from 1926 to 1992. Analysis was done on the core 19.6 ac of a 50.1 ac forest in which every tree 4 in. d.b.h. and greater has been tagged and mapped since 1926. Five windows of time are compared—1926, 1976, 1981, 1986...

  15. A sampling strategy for estimating plot average annual fluxes of chemical elements from forest soils

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.; Vries, de W.

    2010-01-01

    A sampling strategy for estimating spatially averaged annual element leaching fluxes from forest soils is presented and tested in three Dutch forest monitoring plots. In this method sampling locations and times (days) are selected by probability sampling. Sampling locations were selected by

  16. Determination of cluster size of Pratylenchus Penetrans ...

    African Journals Online (AJOL)

A nursery field 21 m x 80 m was sampled sequentially for Pratylenchus penetrans by decreasing the plot sizes systematically. Plot sizes of 3.6 m x 8 m, 3.6 m x 3.6 m and 0.6 m x 0.6 m were sampled. Nematode counts were computed to obtain the respective sample mean and variance. The sample mean and variance ...

  17. Plano amostral em parcelas de milho para avaliação de atributos de espigas Sampling plan in corn plots to evaluate ear characteristics

    Directory of Open Access Journals (Sweden)

    Thomas Newton Martin

    2005-12-01

in the experimental area of the Crop Science Department of the Federal University of Santa Maria (UFSM), Brazil. The main plot was divided into 96 subplots of one meter in length. In each plot, the spikes of five subplots were sampled, and the length, diameter, grain yield and mass of one hundred grains of each spike were measured or weighed. It was concluded that the sample size within plots is related to the characteristic to be evaluated in the maize spikes, and that there are genetic and environmental factors that interfere with the estimation of the sample size of maize spikes. A sample of 24 spikes, one for each split-plot, and six replications for each genotype results in adequate precision (error less than 10% of the mean) for length and width of spikes, grain yield and mass of one hundred grains.

  18. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    Science.gov (United States)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100%. Given that most previous
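
    For reference, the method-of-moments (Matheron) estimator the study evaluates averages half the squared increments per lag bin; a 1-D sketch with synthetic, spatially structured data (the robust and REML variants are not shown):

    ```python
    # Empirical semivariogram by the method of moments (Matheron).
    import numpy as np

    def empirical_variogram(coords, values, bins):
        """gamma(h) = mean of half squared increments per lag bin."""
        d = np.abs(coords[:, None] - coords[None, :])       # pairwise distances
        g = 0.5 * (values[:, None] - values[None, :]) ** 2  # half sq. increments
        iu = np.triu_indices(len(coords), k=1)              # each pair once
        d, g = d[iu], g[iu]
        idx = np.digitize(d, bins)
        return [g[idx == i].mean() for i in range(1, len(bins))]

    rng = np.random.default_rng(7)
    x = rng.uniform(0, 50, 150)                             # 150 sampling points
    z = np.sin(x / 5) + rng.normal(0, 0.3, x.size)          # structured data
    print(empirical_variogram(x, z, bins=np.arange(0, 25, 5)))
    ```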

  19. Variability of Measured Runoff and Soil Loss from Field Plots

    Directory of Open Access Journals (Sweden)

    F. Asadzadeh

    2016-02-01

Full Text Available Introduction: Field plots are widely used in studies related to the measurement of soil loss and the modeling of erosion processes. Research efforts are needed to investigate factors affecting the data quality of plots. Spatial scale or size of plots is one of these factors which directly affects measuring runoff and soil loss by means of field plots. The effect of plot size on measured runoff or soil loss from natural plots is known as the plot scale effect. On the other hand, variability of runoff and sediment yield from replicated field plots is a main source of uncertainty in measurement of erosion from plots which should be considered in plot data interpretation processes. Therefore, there is a demand for knowledge of soil erosion processes occurring in plots of different sizes and of factors that determine natural variability, as a basis for obtaining soil loss data of good quality. This study was carried out to investigate the combined effects of these two factors by measurement of runoff and soil loss from replicated plots of different sizes. Materials and Methods: In order to evaluate the variability of runoff and soil loss data, seven plots, differing in width and length, were constructed on a uniform slope of 9% in three replicates at Koohin Research Station in Qazvin province. The plots were ploughed up-to-down slope in September 2011. Each plot was isolated using soil beds with a height of 30 cm, to direct generated surface runoff to the lower part of the plots. Runoff collecting systems composed of gutters, pipes and tanks were installed at the end of each plot. During the two-year study period of 2011-2012, plots were maintained in bare conditions and runoff and soil loss were measured for each single event. Precipitation amounts and characteristics were directly measured by an automatic recording tipping-bucket rain gauge located about 200 m from the experimental plots. The entire runoff volume including eroded sediment was measured on

  20. SinaPlot: an enhanced chart for simple and truthful representation of single observations over multiple classes

    DEFF Research Database (Denmark)

    Sidiropoulos, Nikos; Sohi, Sina Hadi; Pedersen, Thomas Lin

    2017-01-01

    the representation of data sets with differing sample size we have developed a new type of plot overcoming limitations of current standard visualization charts. SinaPlot is inspired by the strip chart and the violin plot and operates by letting the normalized density of points restrict the jitter along the x...

  1. Isocratic and gradient impedance plot analysis and comparison of some recently introduced large size core-shell and fully porous particles.

    Science.gov (United States)

    Vanderheyden, Yoachim; Cabooter, Deirdre; Desmet, Gert; Broeckhoven, Ken

    2013-10-18

The intrinsic kinetic performance of three recently commercialized large-size (≥4 μm) core-shell particles packed in columns with different lengths has been measured and compared with that of standard fully porous particles of similar and smaller size (5 and 3.5 μm, respectively). The kinetic performance is compared in both absolute (plot of t0 versus the plate count N or the peak capacity np for isocratic and gradient elution, respectively) and dimensionless units. The latter is realized by switching to so-called impedance plots, a format which has been previously introduced (as a plot of t0/N² or E0 versus Nopt/N) and has in the present study been extended from isocratic to gradient elution (where the impedance plot corresponds to a plot of t0/np⁴ versus np,opt²/np²). Both the isocratic and gradient impedance plot yielded a very similar picture: the clustered impedance plot curves divide into two distinct groups, one for the core-shell particles (lowest values, i.e. best performance) and one for the fully porous particles (highest values), confirming the clear intrinsic kinetic advantage of core-shell particles. If used around their optimal flow rate, the core-shell particles displayed a minimal separation impedance that is about 40% lower than the fully porous particles. Even larger gains in separation speed can be achieved in the C-term regime. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

Full Text Available Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, confidence level, expected proportion of the outcome variable (for categorical variables) or standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) of the study. The more precision required, the greater the required sample size. Sampling Techniques: The probability sampling techniques applied in health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over nonprobability sampling techniques, because the results of the study can then be generalized to the target population.
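
    For a categorical outcome, the standard formula the abstract alludes to is n = Z²·p(1−p)/d²; a sketch, with an optional finite population correction added here as an assumption the abstract does not spell out:

    ```python
    # Sample size for estimating a proportion p within precision d.
    import math

    def n_proportion(p, d, z=1.96, population=None):
        n = z**2 * p * (1 - p) / d**2
        if population is not None:              # finite population correction
            n = n / (1 + (n - 1) / population)
        return math.ceil(n)

    print(n_proportion(0.30, 0.05))                    # 323
    print(n_proportion(0.30, 0.05, population=2000))   # 278
    ```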

  3. Generalized Probability-Probability Plots

    NARCIS (Netherlands)

    Mushkudiani, N.A.; Einmahl, J.H.J.

    2004-01-01

We introduce generalized Probability-Probability (P-P) plots in order to study the one-sample goodness-of-fit problem and the two-sample problem, for real-valued data. These plots, which are constructed by indexing with the class of closed intervals, globally preserve the properties of classical P-P

  4. The albatross plot: A novel graphical tool for presenting results of diversely reported studies in a systematic review.

    Science.gov (United States)

    Harrison, Sean; Jones, Hayley E; Martin, Richard M; Lewis, Sarah J; Higgins, Julian P T

    2017-09-01

    Meta-analyses combine the results of multiple studies of a common question. Approaches based on effect size estimates from each study are generally regarded as the most informative. However, these methods can only be used if comparable effect sizes can be computed from each study, and this may not be the case due to variation in how the studies were done or limitations in how their results were reported. Other methods, such as vote counting, are then used to summarize the results of these studies, but most of these methods are limited in that they do not provide any indication of the magnitude of effect. We propose a novel plot, the albatross plot, which requires only a 1-sided P value and a total sample size from each study (or equivalently a 2-sided P value, direction of effect and total sample size). The plot allows an approximate examination of underlying effect sizes and the potential to identify sources of heterogeneity across studies. This is achieved by drawing contours showing the range of effect sizes that might lead to each P value for given sample sizes, under simple study designs. We provide examples of albatross plots using data from previous meta-analyses, allowing for comparison of results, and an example from when a meta-analysis was not possible. Copyright © 2017 The Authors. Research Synthesis Methods Published by John Wiley & Sons Ltd.
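
    A sketch of how one such contour can be computed for one simple design, a test of a correlation coefficient, using the approximation z ≈ r·sqrt(n); each design in an albatross plot needs its own contour equation, and this choice is illustrative:

    ```python
    # Trace the two-sided P value along a fixed effect-size contour.
    from statistics import NormalDist

    phi = NormalDist().cdf

    def p_two_sided(r, n):
        """Approximate two-sided P for correlation r at sample size n."""
        return 2 * (1 - phi(abs(r) * n ** 0.5))

    for r in (0.1, 0.2, 0.3):                  # effect-size contours
        row = [f"n={n}: P={p_two_sided(r, n):.3f}" for n in (50, 200, 800)]
        print(f"r={r}: " + ", ".join(row))
    ```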

  5. Density Distribution Sunflower Plots

    Directory of Open Access Journals (Sweden)

    William D. Dupont

    2003-01-01

Full Text Available Density distribution sunflower plots are used to display high-density bivariate data. They are useful for data where a conventional scatter plot is difficult to read due to overstriking of the plot symbol. The x-y plane is subdivided into a lattice of regular hexagonal bins of width w specified by the user. The user also specifies the values of l, d, and k that affect the plot as follows. Individual observations are plotted when there are less than l observations per bin, as in a conventional scatter plot. Each bin with from l to d observations contains a light sunflower. Other bins contain a dark sunflower. In a light sunflower each petal represents one observation. In a dark sunflower, each petal represents k observations. (A dark sunflower with p petals represents between pk - k/2 and pk + k/2 observations.) The user can control the sizes and colors of the sunflowers. By selecting appropriate colors and sizes for the light and dark sunflowers, plots can be obtained that give both the overall sense of the data density distribution as well as the number of data points in any given region. The use of this graphic is illustrated with data from the Framingham Heart Study. A documented Stata program, called sunflower, is available to draw these graphs. It can be downloaded from the Statistical Software Components archive at http://ideas.repec.org/c/boc/bocode/s430201.html . (Journal of Statistical Software 2003; 8(3): 1-5. Posted at http://www.jstatsoft.org/index.php?vol=8 .)

  6. Choosing a suitable sample size in descriptive sampling

    International Nuclear Information System (INIS)

    Lee, Yong Kyun; Choi, Dong Hoon; Cha, Kyung Joon

    2010-01-01

    Descriptive sampling (DS) is an alternative to crude Monte Carlo sampling (CMCS) in finding solutions to structural reliability problems. It is known to be an effective sampling method in approximating the distribution of a random variable because it uses the deterministic selection of sample values and their random permutation. However, because this method is difficult to apply to complex simulations, the sample size is occasionally determined without thorough consideration. Input sample variability may cause the sample size to change between runs, leading to poor simulation results. This paper proposes a numerical method for choosing a suitable sample size for use in DS. Using this method, one can estimate a more accurate probability of failure in a reliability problem while running a minimal number of simulations. The method is then applied to several examples and compared with CMCS and conventional DS to validate its usefulness and efficiency.
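
    A minimal sketch of the idea, assuming standard normal inputs and a toy failure criterion chosen for illustration: sample values are taken deterministically from the inverse CDF at equally spaced probabilities and then randomly permuted.

    ```python
    # Descriptive sampling sketch: deterministic stratified values + random
    # permutation, compared with crude Monte Carlo on P(X1 + X2 < -4).
    import numpy as np
    from scipy.stats import norm

    def descriptive_sample(ppf, n, rng):
        u = (np.arange(n) + 0.5) / n      # deterministic, stratified probabilities
        return rng.permutation(ppf(u))    # random permutation breaks the ordering

    rng = np.random.default_rng(0)
    n = 1000
    x1 = descriptive_sample(norm.ppf, n, rng)
    x2 = descriptive_sample(norm.ppf, n, rng)
    p_ds = np.mean(x1 + x2 < -4.0)
    p_cmc = np.mean(rng.standard_normal(n) + rng.standard_normal(n) < -4.0)
    print(p_ds, p_cmc, norm.cdf(-4 / np.sqrt(2)))   # exact value for reference
    ```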

  7. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    Science.gov (United States)

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. To guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multipler methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
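
    The core estimator is N = M / P; the sketch below applies the delta method with an assumed design effect to illustrate how uncertainty in P propagates. All numbers are invented for illustration and are not from the Harare study.

    ```python
    # Multiplier-method sketch: N = M / P, with a delta-method standard error
    # inflated by an assumed RDS design effect.
    import math

    M = 5000          # unique objects distributed (known count)
    p_hat = 0.25      # proportion reporting receipt in the RDS survey
    n = 400           # RDS survey sample size
    deff = 2.0        # assumed design effect for the RDS estimate of P

    N_hat = M / p_hat
    var_p = deff * p_hat * (1 - p_hat) / n        # design-adjusted variance of P
    se_N = (M / p_hat**2) * math.sqrt(var_p)      # delta method: |dN/dP| * se(P)
    print(f"N = {N_hat:.0f}, approx 95% CI: "
          f"({N_hat - 1.96 * se_N:.0f}, {N_hat + 1.96 * se_N:.0f})")
    ```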

  8. Linear models for airborne-laser-scanning-based operational forest inventory with small field sample size and highly correlated LiDAR data

    Science.gov (United States)

    Junttila, Virpi; Kauranne, Tuomo; Finley, Andrew O.; Bradford, John B.

    2015-01-01

    Modern operational forest inventory often uses remotely sensed data that cover the whole inventory area to produce spatially explicit estimates of forest properties through statistical models. The data obtained by airborne light detection and ranging (LiDAR) correlate well with many forest inventory variables, such as the tree height, the timber volume, and the biomass. To construct an accurate model over thousands of hectares, LiDAR data must be supplemented with several hundred field sample measurements of forest inventory variables. This can be costly and time consuming. Different LiDAR-data-based and spatial-data-based sampling designs can reduce the number of field sample plots needed. However, problems arising from the features of the LiDAR data, such as a large number of predictors compared with the sample size (overfitting) or a strong correlation among predictors (multicollinearity), may decrease the accuracy and precision of the estimates and predictions. To overcome these problems, a Bayesian linear model with the singular value decomposition of predictors, combined with regularization, is proposed. The model performance in predicting different forest inventory variables is verified in ten inventory areas from two continents, where the number of field sample plots is reduced using different sampling designs. The results show that, with an appropriate field plot selection strategy and the proposed linear model, the total relative error of the predicted forest inventory variables is only 5%–15% larger using 50 field sample plots than the error of a linear model estimated with several hundred field sample plots when we sum up the error due to both the model noise variance and the model’s lack of fit.
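
    As a rough sketch of the non-Bayesian core of this idea, ridge-style shrinkage can be written directly on the singular value decomposition of the predictor matrix; the paper's actual model is Bayesian, so this only approximates the mechanism for handling multicollinearity and overfitting.

    ```python
    # SVD-based regularized regression: shrink each singular direction,
    # beta = V diag(s / (s^2 + lam)) U' y.
    import numpy as np

    def svd_ridge(X, y, lam):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return Vt.T @ ((s / (s**2 + lam)) * (U.T @ y))

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 80))           # few plots, many LiDAR predictors
    X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=50)   # strong multicollinearity
    y = X[:, 0] + rng.normal(scale=0.5, size=50)
    beta = svd_ridge(X - X.mean(0), y - y.mean(), lam=10.0)
    print(beta[:3])
    ```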

  9. Testing aggregation hypotheses among Neotropical trees and shrubs: results from a 50-ha plot over 20 years of sampling.

    Science.gov (United States)

    Myster, Randall W; Malahy, Michael P

    2012-09-01

    Spatial patterns of tropical trees and shrubs are important to understanding their interactions and the resultant structure of tropical rainforests. To assess this issue, we took advantage of previously collected data on Neotropical tree and shrub stems, identified to species and mapped for spatial coordinates in a 50-ha plot, sampled every five years over a 20-year period. These stem data were first placed into four groups, regardless of species, depending on their location in the vertical strata of the rainforest (shrubs, understory trees, mid-sized trees, tall trees) and then used to generate aggregation patterns for each sampling year. We found that shrubs and understory trees clumped at small spatial scales of a few meters in several of the years sampled. Alternatively, mid-sized trees and tall trees did not clump, nor did they show uniform (regular) patterns, during any sampling period. In general, (1) groups found higher in the canopy did not show aggregation on the ground, and (2) the spatial patterns of all four groups were similar among different sampling years, thereby supporting a "shifting mosaic" view of plant communities over large areas. Spatial analyses such as this one are critical to understanding and predicting tree spacing, tree-tree replacements, and the Neotropical forest patterns they produce, such as the biodiversity patterns needed for sustainability efforts.

  10. IPLOT, interactive MELCOR data plotting system

    International Nuclear Information System (INIS)

    2008-01-01

    1 - Description of program or function: IPLOT is an interactive MELCOR data plotting system. It provides several kinds of GUI interfaces for flexible data plotting. IPLOT capabilities include creation, saving, and loading of trend graphs of user-specified MELCOR variables. IPLOT can use one or several plot files to generate a graph, and the graphs can be placed either in one window or in several windows. In addition, IPLOT provides several convenient graph functions, such as zooming, resizing, and printing, for detailed analysis of severe accidents. 2 - Methods: Seeking trend values in a plot file is performed by a binary search method for fast performance. 3 - Restrictions on the complexity of the problem: MELCOR plot files are required for plotting

  11. SEGY to ASCII Conversion and Plotting Program 2.0

    Science.gov (United States)

    Goldman, Mark R.

    2005-01-01

    INTRODUCTION SEGY has long been a standard format for storing seismic data and header information. Almost every seismic processing package can read and write seismic data in SEGY format. In the data processing world, however, ASCII format is the 'universal' standard format. Very few general-purpose plotting or computation programs will accept data in SEGY format. The software presented in this report, referred to as SEGY to ASCII (SAC), converts seismic data written in SEGY format (Barry et al., 1975) to an ASCII data file, and then creates a postscript file of the seismic data using a general plotting package (GMT, Wessel and Smith, 1995). The resulting postscript file may be plotted by any standard postscript plotting program. There are two versions of SAC: one version for plotting a SEGY file that contains a single gather, such as a stacked CDP or migrated section, and a second version for plotting multiple gathers from a SEGY file containing more than one gather, such as a collection of shot gathers. Note that if a SEGY file has multiple gathers, then each gather must have the same number of traces per gather, and each trace must have the same sample interval and number of samples per trace. SAC will read several common standards of SEGY data, including SEGY files with sample values written in either IBM or IEEE floating-point format. In addition, utility programs are present to convert non-standard Seismic Unix (.sux) SEGY files and PASSCAL (.rsy) SEGY files to standard SEGY files. SAC allows complete user control over all plotting parameters including label size and font, tick mark intervals, trace scaling, and the inclusion of a title and descriptive text. SAC shell scripts create a postscript image of the seismic data in vector rather than bitmap format, using GMT's pswiggle command. Although this can produce a very large postscript file, the image quality is generally superior to that of a bitmap image, and commercial programs such as Adobe Illustrator

  12. The large sample size fallacy.

    Science.gov (United States)

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
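
    A quick simulated example of the fallacy, with invented numbers: a trivial standardized effect becomes extremely statistically significant once the sample is large enough.

    ```python
    # With n = 1,000,000 per group, a trivial effect (d = 0.005) is still
    # "highly significant" by the p-value alone.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n = 1_000_000
    a = rng.normal(0.000, 1.0, n)
    b = rng.normal(0.005, 1.0, n)               # trivial true effect
    t, p = stats.ttest_ind(a, b)
    d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    print(f"p = {p:.2e}, Cohen's d = {d:.4f}")  # tiny effect, extreme p-value
    ```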

  13. Sample size in qualitative interview studies

    DEFF Research Database (Denmark)

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit Kristiane

    2016-01-01

    Sample sizes must be ascertained in qualitative studies, as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is “saturation.” Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose...... the concept “information power” to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power...... and during data collection of a qualitative study is discussed....

  14. Concepts in sample size determination

    Directory of Open Access Journals (Sweden)

    Umadevi K Rao

    2012-01-01

    Full Text Available Investigators involved in clinical, epidemiological or translational research, have the drive to publish their results so that they can extrapolate their findings to the population. This begins with the preliminary step of deciding the topic to be studied, the subjects and the type of study design. In this context, the researcher must determine how many subjects would be required for the proposed study. Thus, the number of individuals to be included in the study, i.e., the sample size is an important consideration in the design of many clinical studies. The sample size determination should be based on the difference in the outcome between the two groups studied as in an analytical study, as well as on the accepted p value for statistical significance and the required statistical power to test a hypothesis. The accepted risk of type I error or alpha value, which by convention is set at the 0.05 level in biomedical research defines the cutoff point at which the p value obtained in the study is judged as significant or not. The power in clinical research is the likelihood of finding a statistically significant result when it exists and is typically set to >80%. This is necessary since the most rigorously executed studies may fail to answer the research question if the sample size is too small. Alternatively, a study with too large a sample size will be difficult and will result in waste of time and resources. Thus, the goal of sample size planning is to estimate an appropriate number of subjects for a given study design. This article describes the concepts in estimating the sample size.
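
    For the two-group comparison of means described above, the conventional formula (a textbook sketch, not the article's own code) is n per group = 2 (z_{1-α/2} + z_{1-β})² σ² / δ², with the α = 0.05 and 80% power conventions mentioned in the abstract.

    ```python
    # Standard two-group sample-size formula for comparing means.
    from scipy.stats import norm

    def n_per_group(delta, sigma, alpha=0.05, power=0.80):
        z = norm.isf(alpha / 2) + norm.ppf(power)
        return 2 * (z * sigma / delta) ** 2

    print(round(n_per_group(delta=5.0, sigma=10.0)))   # about 63 per group
    ```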

  15. Estimation of optimum plot sizes in field experiments with annatto

    Directory of Open Access Journals (Sweden)

    Anselmo Eloy Silveira Viana

    2002-08-01

    Full Text Available The objective of this study was to estimate the optimum plot size for experiments with annatto. The uniformity assay consisted of 12 rows with 12 plants in each row. The variety Bico de Pato was used, planted at 5 x 4 m spacing and evaluated at 5 years of age. Three methods were used: maximum curvature, modified maximum curvature, and the comparison of variances. The plot size estimate varied according to the methodology used and the characteristic analyzed. The adequate plot size was found to be 107.2 m² (5 plants) using the modified maximum curvature method, which resulted in more precise estimates, taking into consideration that the ideal plot should facilitate the efficient evaluation of all characteristics analyzed in this experiment.

  16. Contribution to the sample mean plot for graphical and numerical sensitivity analysis

    International Nuclear Information System (INIS)

    Bolado-Lavin, R.; Castaings, W.; Tarantola, S.

    2009-01-01

    The contribution to the sample mean plot, originally proposed by Sinclair, is revived and further developed as a practical tool for global sensitivity analysis. The potential of this simple and versatile graphical tool is discussed. Beyond the qualitative assessment provided by this approach, a statistical test is proposed for sensitivity analysis. A case study that simulates the transport of radionuclides through the geosphere from an underground disposal vault containing nuclear waste is considered as a benchmark. The new approach is tested against a very efficient sensitivity analysis method based on state-dependent parameter meta-modelling
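
    A compact sketch of the construction under its usual definition: runs are sorted by one input, and the normalized running sum of the output is plotted against the sample fraction; a curve far from the diagonal signals an influential input. The test function here is invented for illustration.

    ```python
    # Contribution to the sample mean (CSM) plot sketch.
    import numpy as np
    import matplotlib.pyplot as plt

    def csm_curve(x, y):
        order = np.argsort(x)
        frac = np.arange(1, len(x) + 1) / len(x)
        return frac, np.cumsum(y[order]) / np.sum(y)

    rng = np.random.default_rng(3)
    x1, x2 = rng.uniform(-1, 1, (2, 2000))
    y = 5 * x1**2 + 0.1 * x2                 # output driven mainly by x1
    for name, x in (("x1", x1), ("x2", x2)):
        plt.plot(*csm_curve(x, y), label=f"CSM for {name}")
    plt.plot([0, 1], [0, 1], "k--", label="no influence")
    plt.legend()
    plt.show()
    ```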

  17. Improved sample size determination for attributes and variables sampling

    International Nuclear Information System (INIS)

    Stirpe, D.; Picard, R.R.

    1985-01-01

    Earlier INMM papers have addressed the attributes/variables problem and, under conservative/limiting approximations, have reported analytical solutions for the attributes and variables sample sizes. Through computer simulation of this problem, we have calculated attributes and variables sample sizes as a function of falsification, measurement uncertainties, and required detection probability without using approximations. Using realistic assumptions for uncertainty parameters of measurement, the simulation results support the conclusions: (1) previously used conservative approximations can be expensive because they lead to larger sample sizes than needed; and (2) the optimal verification strategy, as well as the falsification strategy, are highly dependent on the underlying uncertainty parameters of the measurement instruments. 1 ref., 3 figs

  18. Experimental determination of size distributions: analyzing proper sample sizes

    International Nuclear Information System (INIS)

    Buffo, A; Alopaeus, V

    2016-01-01

    The measurement of various particle size distributions is a crucial aspect of many applications in the process industry. Size distribution is often related to the final product quality, as in crystallization or polymerization. In other cases it is related to the correct evaluation of heat and mass transfer, as well as reaction rates, which depend on the interfacial area between the different phases, or to the assessment of yield stresses of polycrystalline metal/alloy samples. The experimental determination of such distributions often involves laborious sampling procedures, and the statistical significance of the outcome is rarely investigated. In this work, we propose a novel rigorous tool, based on inferential statistics, to determine the number of samples needed to obtain reliable measurements of size distribution, according to specific requirements defined a priori. Such a methodology can be adopted regardless of the measurement technique used. (paper)

  19. Effects of sample size on estimation of rainfall extremes at high temperatures

    Science.gov (United States)

    Boessenkool, Berry; Bürger, Gerd; Heistermann, Maik

    2017-09-01

    High precipitation quantiles tend to rise with temperature, following the so-called Clausius-Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
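
    The contrast between the two estimators is easy to reproduce; note that scipy fits the GPD by maximum likelihood rather than the L-moments used in the study, so the sketch below only illustrates the small-sample effect, not the exact method.

    ```python
    # Empirical high quantile (bounded by the data) vs. a parametric GPD fit
    # in a small sample, as at high temperatures where rainfall is rare.
    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(7)
    true = genpareto(c=0.1, scale=5.0)
    sample = true.rvs(40, random_state=rng)      # small sample

    p = 0.99                                     # target quantile
    emp = np.quantile(sample, p)                 # empirical estimate
    c, loc, scale = genpareto.fit(sample, floc=0.0)
    par = genpareto.ppf(p, c, loc=loc, scale=scale)
    print(f"true {true.ppf(p):.1f}, empirical {emp:.1f}, GPD fit {par:.1f}")
    ```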


  1. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    Science.gov (United States)

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields like perception, cognition, or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not obtain large enough effect sizes would use larger samples to obtain significant results.

  2. Evaluating sampling strategy for DNA barcoding study of coastal and inland halo-tolerant Poaceae and Chenopodiaceae: A case study for increased sample size.

    Directory of Open Access Journals (Sweden)

    Peng-Cheng Yao

    Full Text Available Environmental conditions in coastal salt marsh habitats have led to the development of specialist genetic adaptations. We evaluated six DNA barcode loci of the 53 species of Poaceae and 15 species of Chenopodiaceae from China's coastal salt marsh area and inland area. Our results indicate that the optimum DNA barcode was ITS for coastal salt-tolerant Poaceae and matK for the Chenopodiaceae. Sampling strategies for ten common species of Poaceae and Chenopodiaceae were analyzed according to the optimum barcode. We found that by increasing the number of samples collected from the coastal salt marsh area on the basis of inland samples, the number of haplotypes of Arundinella hirta, Digitaria ciliaris, Eleusine indica, Imperata cylindrica, Setaria viridis, and Chenopodium glaucum increased, with a principal coordinate plot clearly showing increased distribution points. The results of a Mann-Whitney test showed that for Digitaria ciliaris, Eleusine indica, Imperata cylindrica, and Setaria viridis, the distribution of intraspecific genetic distances was significantly different when samples from the coastal salt marsh area were included (P < 0.01). These results suggest that increasing the sample size in specialist habitats can improve measurements of intraspecific genetic diversity, and will have a positive effect on the application of DNA barcodes in widely distributed species. The results of random sampling showed that when the sample size reached 11 for Chloris virgata, Chenopodium glaucum, and Dysphania ambrosioides, 13 for Setaria viridis, and 15 for Eleusine indica, Imperata cylindrica, and Chenopodium album, the average intraspecific distance tended to stabilize. These results indicate that the sample size for DNA barcoding of globally distributed species should be increased to 11-15.

  3. Sensitivity analysis using contribution to sample variance plot: Application to a water hammer model

    International Nuclear Information System (INIS)

    Tarantola, S.; Kopustinskas, V.; Bolado-Lavin, R.; Kaliatka, A.; Ušpuras, E.; Vaišnoras, M.

    2012-01-01

    This paper presents the "contribution to the sample variance plot", a natural extension of the "contribution to the sample mean plot", a graphical tool for global sensitivity analysis originally proposed by Sinclair. These graphical tools have great potential to display sensitivity information graphically, given a generic input sample and its related model realizations. The contribution to the sample variance can be obtained at no extra computational cost, i.e., from the same points used for deriving the contribution to the sample mean and/or scatter plots. The proposed approach effectively instructs the analyst on how to achieve a targeted reduction of the variance by operating on the extremes of the input parameters' ranges. The approach is tested against a known benchmark for sensitivity studies, the Ishigami test function, and a numerical model simulating the behaviour of a water hammer effect in a piping system.
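
    In terms of the CSM sketch given earlier, the contribution to the sample variance replaces the outputs by their squared deviations from the output mean, for example:

    ```python
    # Contribution to the sample variance (CSV) curve: same construction as the
    # CSM plot, accumulating squared deviations from the output mean instead.
    import numpy as np

    def csv_curve(x, y):
        order = np.argsort(x)
        contrib = (y[order] - y.mean()) ** 2
        frac = np.arange(1, len(x) + 1) / len(x)
        return frac, np.cumsum(contrib) / contrib.sum()
    ```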

  4. Sample size calculations for case-control studies

    Science.gov (United States)

    This R package can be used to calculate the required sample size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as binary, ordinal, or continuous exposures. The sample sizes can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.

  5. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    Science.gov (United States)

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the number of individuals within clusters (cluster size). Variable cluster sizes are common, and this variation alone may have a significant impact on study power. Previous approaches have taken this into account by either adjusting the total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test, use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes, and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for a trial with unequal cluster sizes to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions. Our measure
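
    For context, a widely used approximation for the impact of unequal cluster sizes (not the paper's noncentrality-based relative efficiency) is the coefficient-of-variation design effect, sketched below with invented numbers.

    ```python
    # Common approximation: DEFF = 1 + ((1 + CV^2) * m - 1) * ICC, where m is the
    # mean cluster size and CV its coefficient of variation.
    def design_effect(mean_cluster_size, cv_cluster_size, icc):
        return 1 + ((1 + cv_cluster_size**2) * mean_cluster_size - 1) * icc

    n_individual = 128                     # n required ignoring clustering
    deff = design_effect(mean_cluster_size=20, cv_cluster_size=0.6, icc=0.05)
    print(f"DEFF = {deff:.2f}, clustered n = {n_individual * deff:.0f}")
    ```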

  6. Neuromuscular dose-response studies: determining sample size.

    Science.gov (United States)

    Kopman, A F; Lien, C A; Naguib, M

    2011-02-01

    Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10-20%. The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasingly larger sample sizes (e.g., an 'n' of 37 for a ±12% error). On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance.
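
    A normal-approximation version of this calculation (the authors used a one-sample two-tailed t-test, so this sketch slightly understates n) reproduces the reported figure approximately:

    ```python
    # n ≈ ((z_{1-a/2} + z_{power}) * COV / relative_error)^2.
    import math
    from scipy.stats import norm

    def n_required(cov=0.25, rel_error=0.15, alpha=0.05, power=0.80):
        z = norm.isf(alpha / 2) + norm.ppf(power)
        return math.ceil((z * cov / rel_error) ** 2)

    print(n_required())   # about 22, close to the reported n of 24
    ```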

  7. BliP Plot: a plot for one-dimensional data distributions

    OpenAIRE

    Anisa Anisa; Indwiati Indwiati

    2014-01-01

    The BliP plot is a plot designed to display one-dimensional data. The plot basically consists of boxes, lines, and dots. Like other one-dimensional distribution plots, the BliP plot displays individual data values as dots or lines and grouped information as lines or boxes. In addition, the BliP plot offers many new features, such as variable-width plots and several choices of dot patterns. The main advantage of BliP ...

  8. Problems with sampling desert tortoises: A simulation analysis based on field data

    Science.gov (United States)

    Freilich, J.E.; Camp, R.J.; Duda, J.J.; Karl, A.E.

    2005-01-01

    The desert tortoise (Gopherus agassizii) was listed as a U.S. threatened species in 1990, based largely on population declines inferred from mark-recapture surveys of 2.59-km² (1-mi²) plots. Since then, several census methods have been proposed and tested, but all methods still pose logistical or statistical difficulties. We conducted computer simulations using actual tortoise location data from two 1-mi² plot surveys in southern California, USA, to identify strengths and weaknesses of current sampling strategies. We considered tortoise population estimates based on these plots as "truth" and then tested various sampling methods based on sampling smaller plots or transect lines passing through the mile squares. Data were analyzed using Schnabel's mark-recapture estimate and program CAPTURE. Experimental subsampling with replacement of the 1-mi² data using 1-km² and 0.25-km² plot boundaries produced data sets of smaller plot sizes, which we compared to estimates from the 1-mi² plots. We also tested distance sampling by saturating a 1-mi² site with computer-simulated transect lines, once again evaluating bias in density estimates. Subsampling estimates from 1-km² plots did not differ significantly from the estimates derived at 1 mi². The 0.25-km² subsamples significantly overestimated population sizes, chiefly because too few recaptures were made. Distance sampling simulations were biased 80% of the time and had high coefficient-of-variation-to-density ratios. Furthermore, a prospective power analysis suggested limited ability to detect population declines as high as 50%. We concluded that the poor performance and bias of both sampling procedures were driven by insufficient sample size, suggesting that all efforts must be directed at increasing the numbers found in order to produce reliable results. Our results suggest that present methods may not be capable of accurately estimating desert tortoise populations.
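
    For reference, a sketch of the Schnabel estimator named above, here with Chapman's +1 correction in the denominator (an assumption, not necessarily the authors' exact variant) and invented capture counts:

    ```python
    # Schnabel multiple-census estimator: N ≈ sum(C_t * M_t) / (sum(R_t) + 1),
    # where C_t are captures, R_t recaptures, and M_t previously marked animals.
    def schnabel(catches, recaptures):
        marked = 0
        num = den = 0
        for c, r in zip(catches, recaptures):
            num += c * marked            # M_t = animals marked before occasion t
            den += r
            marked += c - r              # newly marked animals join the pool
        return num / (den + 1)

    print(schnabel(catches=[30, 28, 33, 25], recaptures=[0, 6, 9, 11]))
    ```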

  9. Longleaf pine regeneration following Hurricane Ivan utilizing the RLGS plots

    Science.gov (United States)

    John C. Gilbert; John S. Kush

    2013-01-01

    On September 16, 2004, Hurricane Ivan hit the Alabama coast and severely impacted numerous plots in the U.S. Forest Service’s Regional Longleaf Growth Study (RLGS). The Escambia Experimental Forest (EEF) has 201 of the 325 RLGS plots. Nearly one-third of the EEF was impacted. Nine plots with pole-sized trees were entirely lost. Another 54 plots had some type of damage...

  10. Estimating Sample Size for Usability Testing

    Directory of Open Access Journals (Sweden)

    Alex Cazañas

    2017-02-01

    Full Text Available One strategy used to assure that an interface meets user requirements is to conduct usability testing. When conducting such testing, one of the unknowns is sample size. Since extensive testing is costly, minimizing the number of participants can contribute greatly to successful resource management of a project. Even though a significant number of models have been proposed to estimate sample size in usability testing, there is still no consensus on the optimal size. Several studies claim that 3 to 5 users suffice to uncover 80% of problems in a software interface. However, many other studies challenge this assertion. This study analyzed data collected from the user testing of a web application to verify the rule of thumb, commonly known as the "magic number 5". The outcomes of the analysis showed that the 5-user rule significantly underestimates the required sample size to achieve reasonable levels of problem detection.
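
    The rule of thumb rests on the binomial model P(detect) = 1 - (1 - p)^n; with the classic per-user detection rate of 0.31, five users find roughly 84% of problems, as the sketch below shows.

    ```python
    # Probability of seeing a problem at least once in n users.
    def p_detect(p_per_user, n_users):
        return 1 - (1 - p_per_user) ** n_users

    for n in (3, 5, 10, 15):
        print(n, round(p_detect(0.31, n), 3))
    ```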

  11. The pros and cons of funnel plots as an aid to risk communication and patient decision making.

    Science.gov (United States)

    Rakow, Tim; Wright, Rebecca J; Spiegelhalter, David J; Bull, Catherine

    2015-05-01

    Funnel plots, which simultaneously display a sample statistic and the corresponding sample size for multiple cases, have a range of applications. In medicine, they are used to display treatment outcome rates and caseload volume by institution, which can inform strategic decisions about health care delivery. We investigated lay people's understanding of such plots and explored their suitability as an aid to individual treatment decisions. In two studies, 172 participants answered objective questions about funnel plots representing the surgical outcomes (survival or mortality rates) of institutions varying in caseload, and indicated their preferred institutions. Accuracy for extracting objective information was high, unless question phrasing was inconsistent with the plot's survival/mortality framing, or participants had low numeracy levels. Participants integrated caseload-volume and outcome-rate data when forming preferences, but were influenced by reference lines on the plot to make inappropriate discriminations between institutions with similar outcome rates. With careful choice of accompanying language, funnel plots can be readily understood and are therefore a useful tool for communicating risk. However, they are less effective as a decision aid for individual patient's treatment decisions, and we recommend refinements to the standard presentation of the plots if they are to be used for that purpose. © 2014 The British Psychological Society.

  12. A Visual Basic program to plot sediment grain-size data on ternary diagrams

    Science.gov (United States)

    Poppe, L.J.; Eliason, A.H.

    2008-01-01

    Sedimentologic datasets are typically large and compiled into tables or databases, but pure numerical information can be difficult to understand and interpret. Thus, scientists commonly use graphical representations to reduce complexities, recognize trends and patterns in the data, and develop hypotheses. Of the graphical techniques, one of the most common methods used by sedimentologists is to plot the basic gravel, sand, silt, and clay percentages on equilateral triangular diagrams. This means of presenting data is simple and facilitates rapid classification of sediments and comparison of samples. The original classification scheme developed by Shepard (1954) used a single ternary diagram with sand, silt, and clay in the corners and 10 categories to graphically show the relative proportions among these three grades within a sample. This scheme, however, did not allow for sediments with significant amounts of gravel. Therefore, Shepard's classification scheme was later modified by the addition of a second ternary diagram with two categories to account for gravel and gravelly sediment (Schlee, 1973). The system devised by Folk (1954, 1974) is also based on two triangular diagrams, but it has 21 categories and uses the term mud (defined as silt plus clay). Patterns within the triangles of both systems differ, as does the emphasis placed on gravel. For example, in the system described by Shepard, gravelly sediments have more than 10% gravel; in Folk's system, slightly gravelly sediments have as little as 0.01% gravel. Folk's classification scheme stresses gravel because its concentration is a function of the highest current velocity at the time of deposition, as is the maximum grain size of the detritus that is available; Shepard's classification scheme emphasizes the ratios of sand, silt, and clay because they reflect sorting and reworking (Poppe et al., 2005). The program described herein (SEDPLOT) generates verbal equivalents and ternary diagrams to
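
    Plotting a composition on an equilateral ternary diagram reduces to a small linear map; the sketch below shows the coordinate conversion only and is not SEDPLOT's own code (which is built on GMT).

    ```python
    # Map a (sand, silt, clay) composition to (x, y) on an equilateral triangle
    # with unit sides: sand at (0, 0), silt at (1, 0), clay at the apex.
    import math

    def ternary_xy(sand, silt, clay):
        total = sand + silt + clay            # percentages need not sum to 100
        b, c = silt / total, clay / total
        x = b + c / 2.0                       # position between the vertices
        y = c * math.sqrt(3) / 2.0            # height toward the clay vertex
        return x, y

    print(ternary_xy(sand=60, silt=30, clay=10))
    ```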

  13. Sample Size Determination for One- and Two-Sample Trimmed Mean Tests

    Science.gov (United States)

    Luh, Wei-Ming; Olejnik, Stephen; Guo, Jiin-Huarng

    2008-01-01

    Formulas to determine the necessary sample sizes for parametric tests of group comparisons are available from several sources and appropriate when population distributions are normal. However, in the context of nonnormal population distributions, researchers recommend Yuen's trimmed mean test, but formulas to determine sample sizes have not been…

  14. Multi-reader ROC studies with split-plot designs: a comparison of statistical methods.

    Science.gov (United States)

    Obuchowski, Nancy A; Gallas, Brandon D; Hillis, Stephen L

    2012-12-01

    Multireader imaging trials often use a factorial design, in which study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of this design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper, the authors compare three methods of analysis for the split-plot design. Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean analysis-of-variance approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power, and confidence interval coverage of the three test statistics. The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% confidence intervals falls close to the nominal coverage for small and large sample sizes. The split-plot multireader, multicase study design can be statistically efficient compared to the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rates, similar power, and nominal confidence interval coverage, are available for this study design. Copyright © 2012 AUR. All rights reserved.

  15. SnagPRO: snag and tree sampling and analysis methods for wildlife

    Science.gov (United States)

    Lisa J. Bate; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough

    2008-01-01

    We describe sampling methods and provide software to accurately and efficiently estimate snag and tree densities at desired scales to meet a variety of research and management objectives. The methods optimize sampling effort by choosing a plot size appropriate for the specified forest conditions and sampling goals. Plot selection and data analyses are supported by...

  16. Estimating the Effective Sample Size of Tree Topologies from Bayesian Phylogenetic Analyses

    Science.gov (United States)

    Lanfear, Robert; Hua, Xia; Warren, Dan L.

    2016-01-01

    Bayesian phylogenetic analyses estimate posterior distributions of phylogenetic tree topologies and other parameters using Markov chain Monte Carlo (MCMC) methods. Before making inferences from these distributions, it is important to assess their adequacy. To this end, the effective sample size (ESS) estimates how many truly independent samples of a given parameter the output of the MCMC represents. The ESS of a parameter is frequently much lower than the number of samples taken from the MCMC because sequential samples from the chain can be non-independent due to autocorrelation. Typically, phylogeneticists use a rule of thumb that the ESS of all parameters should be greater than 200. However, we have no method to calculate an ESS of tree topology samples, despite the fact that the tree topology is often the parameter of primary interest and is almost always central to the estimation of other parameters. That is, we lack a method to determine whether we have adequately sampled one of the most important parameters in our analyses. In this study, we address this problem by developing methods to estimate the ESS for tree topologies. We combine these methods with two new diagnostic plots for assessing posterior samples of tree topologies, and compare their performance on simulated and empirical data sets. Combined, the methods we present provide new ways to assess the mixing and convergence of phylogenetic tree topologies in Bayesian MCMC analyses. PMID:27435794
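
    A standard autocorrelation-based ESS estimate for a real-valued trace is sketched below; the paper's contribution is turning tree topology samples into such traces, which this sketch takes as given.

    ```python
    # ESS = N / (1 + 2 * sum of positive autocorrelations), truncated at the
    # first non-positive autocorrelation.
    import numpy as np

    def effective_sample_size(trace):
        x = np.asarray(trace, dtype=float) - np.mean(trace)
        n = len(x)
        acf = np.correlate(x, x, mode="full")[n - 1:] / (np.arange(n, 0, -1) * x.var())
        s = 0.0
        for rho in acf[1:]:
            if rho <= 0:
                break
            s += rho
        return n / (1 + 2 * s)

    rng = np.random.default_rng(0)
    ar = np.zeros(10000)
    for t in range(1, ar.size):          # AR(1) chain with strong autocorrelation
        ar[t] = 0.9 * ar[t - 1] + rng.normal()
    print(effective_sample_size(ar))     # far below the nominal 10,000
    ```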

  17. Looking at large data sets using binned data plots

    Energy Technology Data Exchange (ETDEWEB)

    Carr, D.B.

    1990-04-01

    This report addresses the monumental challenge of developing exploratory analysis methods for large data sets. The goals of the report are to increase awareness of large data set problems and to contribute simple graphical methods that address some of them. The graphical methods focus on two- and three-dimensional data and common tasks such as finding outliers and tail structure, assessing central structure, and comparing central structures. The methods handle large-sample-size problems through binning, incorporate information from statistical models, and adapt image processing algorithms. Examples demonstrate the application of the methods to a variety of publicly available large data sets. The most novel application addresses the "too many plots to examine" problem by using cognostics, computer-guided diagnostics, to prioritize plots. The particular application prioritizes views of computational fluid dynamics solution sets on the fly. That is, as each time step of a solution set is generated on a parallel processor, the cognostics algorithms assess virtual plots based on the previous time step. Work in such areas is in its infancy and the examples suggest numerous challenges that remain. 35 refs., 15 figs.

  18. Sample size determination for mediation analysis of longitudinal data.

    Science.gov (United States)

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies, because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power, by simulation, under various sizes of the mediation effect, within-subject correlations, and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution-of-the-product method, and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution-of-the-product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation). A larger value of the ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice have also been published for convenient use. The extensive simulation study showed that the distribution-of-the-product and bootstrap methods are superior to Sobel's method, but the product method is recommended in practice because it requires less computation than bootstrapping. An R package has been developed for the product method of sample size determination in longitudinal mediation study design.
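
    Sobel's test referenced above has a closed form, z = ab / sqrt(b² se_a² + a² se_b²); a minimal sketch with invented path estimates:

    ```python
    # Sobel test statistic for the mediated effect a*b.
    import math
    from scipy.stats import norm

    def sobel_z(a, se_a, b, se_b):
        return (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

    z = sobel_z(a=0.40, se_a=0.10, b=0.35, se_b=0.12)
    print(f"z = {z:.2f}, two-sided p = {2 * norm.sf(abs(z)):.4f}")
    ```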

  19. Sample size of the reference sample in a case-augmented study.

    Science.gov (United States)

    Ghosh, Palash; Dewanji, Anup

    2017-05-01

    The case-augmented study, in which a case sample is augmented with a reference (random) sample from the source population with only covariates information known, is becoming popular in different areas of applied science such as pharmacovigilance, ecology, and econometrics. In general, the case sample is available from some source (for example, hospital database, case registry, etc.); however, the reference sample is required to be drawn from the corresponding source population. The required minimum size of the reference sample is an important issue in this regard. In this work, we address the minimum sample size calculation and discuss related issues. Copyright © 2017 John Wiley & Sons, Ltd.

  20. Plot size and shape in trials using strawberry cultivated with soil or using hydroponics

    Directory of Open Access Journals (Sweden)

    Carine Cocco

    2009-07-01

    Full Text Available The objective of this work was to estimate the optimal size and shape of plots to be used in experiments on strawberry (Fragaria x ananassa) cultivation in soil or using hydroponics. Two experiments were conducted, one in soil in low tunnels, and another in a hydroponic system. In each experiment, the effects of plot sizes and shapes on experimental accuracy were evaluated. Each plant was considered an experimental basic unit, and the number of plants per plot varied from 1 (48 plots) to 24 (two plots). Functions were adjusted to determine the coefficient of variation among plots and the variance per basic unit between plots. Plants grown in soil had higher experimental variability than plants grown in hydroponics. Increasing the number of plants per plot caused a strong reduction in experimental variability, especially when a rectangular plot shape was used. The estimated optimal plot size is ten plants for cultivation in soil and six plants for hydroponic cultivation.

  1. 40 CFR 80.127 - Sample size guidelines.

    Science.gov (United States)

    2010-07-01

    40 CFR § 80.127 - Sample size guidelines. Protection of Environment; Environmental Protection Agency (Continued); Air Programs (Continued); Regulation of Fuels and Fuel Additives; Attest Engagements (2010-07-01). In performing the...

  2. Detecting small-study effects and funnel plot asymmetry in meta-analysis of survival data: A comparison of new and existing tests.

    Science.gov (United States)

    Debray, Thomas P A; Moons, Karel G M; Riley, Richard D

    2018-03-01

    Small-study effects are a common threat in systematic reviews and may indicate publication bias. Their existence is often verified by visual inspection of the funnel plot. Formal tests to assess the presence of funnel plot asymmetry typically estimate the association between the reported effect sizes and their standard errors, the total sample size, or the inverse of the total sample size. In this paper, we demonstrate that the application of these tests may be less appropriate in meta-analysis of survival data, where censoring influences the statistical significance of the hazard ratio. We subsequently propose 2 new tests that are based on the total number of observed events and adopt a multiplicative variance component. We compare the performance of the various funnel plot asymmetry tests in an extensive simulation study where we varied the true hazard ratio (0.5 to 1), the number of published trials (N=10 to 100), the degree of censoring within trials (0% to 90%), and the mechanism leading to participant dropout (noninformative versus informative). Results demonstrate that previous well-known tests for detecting funnel plot asymmetry suffer from low power or excessive type-I error rates in meta-analysis of survival data, particularly when trials are affected by participant dropout. Because our novel test (adopting estimates of the asymptotic precision as study weights) yields reasonable power and maintains appropriate type-I error rates, we recommend its use to evaluate funnel plot asymmetry in meta-analysis of survival data. The use of funnel plot asymmetry tests should, however, be avoided when there are few trials available for any meta-analysis. © 2017 The Authors. Research Synthesis Methods Published by John Wiley & Sons, Ltd.
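
    In the spirit of these event-based tests (but not reproducing the authors' exact statistics), one can regress the log hazard ratio on the inverse of the total observed events, weighting by events, as in this hypothetical sketch:

    ```python
    # Weighted least-squares regression of log(HR) on 1/events; a nonzero slope
    # suggests funnel plot asymmetry. Data are invented for illustration.
    import numpy as np

    def events_based_asymmetry(log_hr, events):
        x = 1.0 / np.asarray(events, dtype=float)
        w = np.asarray(events, dtype=float)
        X = np.column_stack([np.ones_like(x), x])
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ np.asarray(log_hr))
        return beta[1]          # slope of the asymmetry regression

    print(events_based_asymmetry(log_hr=[-0.7, -0.4, -0.3, -0.1, 0.0],
                                 events=[40, 90, 150, 400, 800]))
    ```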

  3. Normal probability plots with confidence.

    Science.gov (United States)

    Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang

    2015-01-01

    Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points should fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals therefore provide an objective means to judge whether the plotted points fall close to the straight line: the plotted points fall close to the straight line if and only if all the points fall into the corresponding intervals. The powers of several normal probability plot based (graphical) tests and the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in what circumstances. An example is provided to illustrate the methods. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
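
    A sketch of the augmentation idea: because the CDF value of the i-th order statistic is Beta(i, n-i+1) distributed, pointwise intervals for each plotted point follow directly. The paper's intervals are simultaneous, which requires a further adjustment not shown here.

    ```python
    # Normal probability plot with pointwise 95% bands from the Beta
    # distribution of order-statistic CDF values.
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.stats import beta, norm

    rng = np.random.default_rng(5)
    n = 50
    sample = np.sort(rng.standard_normal(n))
    i = np.arange(1, n + 1)
    theo = norm.ppf((i - 0.375) / (n + 0.25))        # Blom plotting positions
    lo = norm.ppf(beta.ppf(0.025, i, n - i + 1))     # pointwise lower band
    hi = norm.ppf(beta.ppf(0.975, i, n - i + 1))     # pointwise upper band
    plt.plot(theo, sample, "o", ms=3)
    plt.plot(theo, lo, "r--")
    plt.plot(theo, hi, "r--")
    plt.plot(theo, theo, "k-")
    plt.xlabel("theoretical quantiles")
    plt.ylabel("sample quantiles")
    plt.show()
    ```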

  4. Importance of the Correlation between Width and Length in the Shape Analysis of Nanorods: Use of a 2D Size Plot To Probe Such a Correlation.

    Science.gov (United States)

    Zhao, Zhihua; Zheng, Zhiqin; Roux, Clément; Delmas, Céline; Marty, Jean-Daniel; Kahn, Myrtil L; Mingotaud, Christophe

    2016-08-22

    Analysis of nanoparticle size through a simple 2D plot is proposed in order to extract the correlation between length and width in a collection or a mixture of anisotropic particles. Compared to the usual statistics on the length associated with a second and independent statistical analysis of the width, this simple plot easily points out the various types of nanoparticles and their (an)isotropy. For each class of nano-objects, the relationship between width and length (i.e., the strong or weak correlations between these two parameters) may suggest information concerning the nucleation/growth processes. It allows one to follow the effect on the shape and size distribution of physical or chemical processes such as simple ripening. Various electron microscopy pictures from the literature or from the authors' own syntheses are used as examples to demonstrate the efficiency and simplicity of the proposed 2D plot combined with a multivariate analysis. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
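
    The plot itself is simply each particle placed at its (length, width) pair, which makes the correlation visible directly; a toy sketch with invented dimensions:

    ```python
    # 2D size plot for anisotropic particles: scatter of length vs. width plus
    # the Pearson correlation. Data are hypothetical.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(9)
    length = rng.normal(60, 10, 300)                  # nm, hypothetical nanorods
    width = 0.2 * length + rng.normal(0, 1.5, 300)    # width correlated with length
    plt.scatter(length, width, s=8, alpha=0.6)
    plt.xlabel("length (nm)")
    plt.ylabel("width (nm)")
    print("Pearson r =", round(np.corrcoef(length, width)[0, 1], 2))
    plt.show()
    ```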

  5. Determination of the optimal sample size for a clinical trial accounting for the population size.

    Science.gov (United States)

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach, either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint whose distribution is of one-parameter exponential family form, optimizing a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or its expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    Science.gov (United States)

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357
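
    The mechanism is easy to reproduce in simulation. The sketch below (hypothetical parameters throughout) draws two-group studies with a fixed true effect, 'publishes' only those reaching p < 0.05, and recovers a clearly negative correlation between published effect size and sample size:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        true_d, n_studies = 0.3, 5000        # assumed true effect and study count

        pub_d, pub_n = [], []
        for n in rng.integers(10, 200, size=n_studies):   # per-group sizes
            x = rng.normal(true_d, 1.0, n)   # treatment group
            y = rng.normal(0.0, 1.0, n)      # control group
            d = (x.mean() - y.mean()) / np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2)
            if stats.ttest_ind(x, y).pvalue < 0.05:       # publication filter
                pub_d.append(d)
                pub_n.append(n)

        r, _ = stats.pearsonr(pub_d, pub_n)
        print(f"effect size vs. sample size among 'published' studies: r = {r:.2f}")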

  7. [Practical aspects regarding sample size in clinical research].

    Science.gov (United States)

    Vega Ramos, B; Peraza Yanes, O; Herrera Correa, G; Saldívar Toraya, S

    1996-01-01

    Knowing the right sample size lets us judge whether the results published in medical papers rest on a suitable design and whether their conclusions are supported by the statistical analysis. To estimate the sample size we must consider the type I error, the type II error, the variance, the size of the effect, and the significance and power of the test. The choice of formula depends on the type of study: a prevalence study, a study of means, or a comparative one. In this paper we explain some basic topics of statistics and describe four simple examples of sample size estimation.
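
    For the comparative case with a continuous outcome, the usual normal-approximation formula is n = 2·sigma²·(z_{1-alpha/2} + z_{1-beta})² / delta² per group. A minimal Python version with illustrative inputs (an expected difference of 5 units and an SD of 10):

        import math
        from scipy.stats import norm

        def n_per_group(delta, sigma, alpha=0.05, power=0.80):
            # two-sided, two-sample comparison of means (normal approximation)
            z_a = norm.ppf(1 - alpha / 2)
            z_b = norm.ppf(power)
            return math.ceil(2 * (sigma * (z_a + z_b) / delta) ** 2)

        print(n_per_group(delta=5, sigma=10))   # 63 subjects per group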

  8. Chitosan-based hydrogel tissue scaffolds made by 3D plotting promotes osteoblast proliferation and mineralization.

    Science.gov (United States)

    Liu, I-Hsin; Chang, Shih-Hsin; Lin, Hsin-Yi

    2015-05-13

    A 3D plotting system was used to make chitosan-based tissue scaffolds with interconnected pores using pure chitosan (C) and chitosan cross-linked with pectin (CP) and genipin (CG). A freeze-dried chitosan scaffold (CF/D) was made to compare with C, to observe the effects of structural differences. The fiber size, pore size, porosity, compression strength, swelling ratio, drug release efficacy, and cumulative weight loss of the scaffolds were measured. Osteoblasts were cultured on the scaffolds and their proliferation, type I collagen production, alkaline phosphatase activity, calcium deposition, and morphology were observed. C had a lower swelling ratio, degradation, porosity and drug release efficacy and a higher compressional stiffness and cell proliferation compared to CF/D (p < 0.05). Among the 3D-plotted samples, cells on CP exhibited the highest degree of mineralization after 21 d (p < 0.05). In summary, the 3D-plotted scaffolds were stronger, less likely to degrade and better promoted osteoblast cell proliferation in vitro compared to the freeze-dried scaffolds. C, CP and CG were structurally similar, and the different crosslinking caused significant changes in their physical and biological performances.

  9. Sample size calculation in metabolic phenotyping studies.

    Science.gov (United States)

    Billoir, Elise; Navratil, Vincent; Blaise, Benjamin J

    2015-09-01

    The number of samples needed to identify significant effects is a key question in biomedical studies, with consequences on experimental designs, costs and potential discoveries. In metabolic phenotyping studies, sample size determination remains a complex step. This is due particularly to the multiple hypothesis-testing framework and the top-down, hypothesis-free approach, with no a priori known metabolic target. Until now, no standard procedure was available for this purpose. In this review, we discuss sample size estimation procedures for metabolic phenotyping studies. We release an automated implementation of the Data-driven Sample size Determination (DSD) algorithm for MATLAB and GNU Octave. Original research concerning DSD was published elsewhere. DSD allows the determination of an optimized sample size in metabolic phenotyping studies. The procedure uses analytical data only from a small pilot cohort to generate an expanded data set. The statistical recoupling of variables procedure is used to identify metabolic variables, and their intensity distributions are estimated by kernel smoothing or log-normal density fitting. Statistically significant metabolic variations are evaluated using the Benjamini-Yekutieli correction and processed for data sets of various sizes. Optimal sample size determination is achieved in a context of biomarker discovery (at least one statistically significant variation) or metabolic exploration (the maximum number of statistically significant variations). The DSD toolbox is encoded in MATLAB R2008A (Mathworks, Natick, MA) for kernel and log-normal estimates, and in GNU Octave for log-normal estimates (kernel density estimates are not robust enough in GNU Octave). It is available at http://www.prabi.fr/redmine/projects/dsd/repository, with a tutorial at http://www.prabi.fr/redmine/projects/dsd/wiki. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  10. Sample size determination and power

    CERN Document Server

    Ryan, Thomas P, Jr

    2013-01-01

    THOMAS P. RYAN, PhD, teaches online advanced statistics courses in sample size determination, design of experiments, engineering statistics, and regression analysis for Northwestern University and The Institute for Statistics Education.

  11. Experimental plot size in grain sorghum in different plant densities

    Directory of Open Access Journals (Sweden)

    Sidinei José Lopes

    2005-06-01

    The objective of this work was to determine the effect of plant arrangement on the estimate of the optimum plot size for grain yield trials of grain sorghum. The experimental design was a randomized complete block in a factorial scheme with two row spacings (0.50 m and 0.80 m), three sowing densities (100, 160 and 220 thousand plants ha-1) and four replications. The useful area of each replication comprised 12 basic units of 0.50 m of crop row. Models relating the variance or the coefficient of variation to four simulated plot sizes were fitted to estimate the optimum plot size. The estimated plot size for grain yield trials of grain sorghum is 3.2 m². Increasing the number of plants in the row did not increase grain yield, but it improved the quality of sorghum experiments. The estimate of the optimum plot size depends on the number of plants in the basic unit; row spacing does not affect it.

  12. Sample size determination in clinical trials with multiple endpoints

    CERN Document Server

    Sozu, Takashi; Hamasaki, Toshimitsu; Evans, Scott R

    2015-01-01

    This book integrates recent methodological developments for calculating the sample size and power in trials with more than one endpoint considered as multiple primary or co-primary, offering an important reference work for statisticians working in this area. The determination of sample size and the evaluation of power are fundamental and critical elements in the design of clinical trials. If the sample size is too small, important effects may go unnoticed; if the sample size is too large, it represents a waste of resources and unethically puts more participants at risk than necessary. Recently many clinical trials have been designed with more than one endpoint considered as multiple primary or co-primary, creating a need for new approaches to the design and analysis of these clinical trials. The book focuses on the evaluation of power and sample size determination when comparing the effects of two interventions in superiority clinical trials with multiple endpoints. Methods for sample size calculation in clin...

  13. Measuring larval nematode contamination on cattle pastures: Comparing two herbage sampling methods.

    Science.gov (United States)

    Verschave, S H; Levecke, B; Duchateau, L; Vercruysse, J; Charlier, J

    2015-06-15

    Assessing levels of pasture larval contamination is frequently used to study the population dynamics of the free-living stages of parasitic nematodes of livestock. Direct quantification of infective larvae (L3) on herbage is the most applied method to measure pasture larval contamination. However, herbage collection remains labour intensive and there is a lack of studies addressing the variation induced by the sampling method and the required sample size. The aim of this study was (1) to compare two different sampling methods in terms of pasture larval count results and time required to sample, (2) to assess the amount of variation in larval counts at the level of sample plot, pasture and season, respectively and (3) to calculate the required sample size to assess pasture larval contamination with a predefined precision using random plots across the pasture. Eight young stock pastures of different commercial dairy herds were sampled in three consecutive seasons during the grazing season (spring, summer and autumn). On each pasture, herbage samples were collected through both a double-crossed W-transect with samples taken every 10 steps (method 1) and four randomly located plots of 0.16 m(2) with collection of all herbage within the plot (method 2). The average (± standard deviation (SD)) pasture larval contamination using sampling methods 1 and 2 was 325 (± 479) and 305 (± 444) L3/kg dry herbage (DH), respectively. Large discrepancies in pasture larval counts of the same pasture and season were often seen between methods, but no significant difference (P = 0.38) in larval counts between methods was found. Less time was required to collect samples with method 2. This difference in collection time between methods was most pronounced for pastures with a surface area larger than 1 ha. The variation in pasture larval counts from samples generated by random plot sampling was mainly due to the repeated measurements on the same pasture in the same season (residual variance

  14. Scaling wood volume estimates from inventory plots to landscapes with airborne LiDAR in temperate deciduous forest

    Directory of Open Access Journals (Sweden)

    Shaun R. Levick

    2016-05-01

    Background Monitoring and managing carbon stocks in forested ecosystems requires accurate and repeatable quantification of the spatial distribution of wood volume at landscape to regional scales. Grid-based forest inventory networks have provided valuable records of forest structure and dynamics at individual plot scales, but in isolation they may not represent the carbon dynamics of heterogeneous landscapes encompassing diverse land-management strategies and site conditions. Airborne LiDAR has greatly enhanced forest structural characterisation and, in conjunction with field-based inventories, it provides avenues for monitoring carbon over broader spatial scales. Here we aim to enhance the integration of airborne LiDAR surveying with field-based inventories by exploring the effect of inventory plot size and number on the relationship between field-estimated and LiDAR-predicted wood volume in deciduous broad-leafed forest in central Germany. Results Estimation of wood volume from airborne LiDAR was most robust (R² = 0.92, RMSE = 50.57 m³ ha⁻¹, ~14.13 Mg C ha⁻¹) when trained and tested with 1 ha experimental plot data (n = 50). Predictions based on a more extensive (n = 1100) plot network with considerably smaller (0.05 ha) plots were inferior (R² = 0.68, RMSE = 101.01 m³ ha⁻¹, ~28.09 Mg C ha⁻¹). Differences between the 1 and 0.05 ha volume models from LiDAR were negligible, however, at the scale of individual land-management units. Sample size permutation tests showed that increasing the number of inventory plots above 350 for the 0.05 ha plots returned no improvement in R² and RMSE variability of the LiDAR-predicted wood volume model. Conclusions Our results from this study confirm the utility of LiDAR for estimating wood volume in deciduous broad-leafed forest, but highlight the challenges associated with field plot size and number in establishing robust relationships between airborne LiDAR and field derived wood volume. We

  15. Scaling wood volume estimates from inventory plots to landscapes with airborne LiDAR in temperate deciduous forest.

    Science.gov (United States)

    Levick, Shaun R; Hessenmöller, Dominik; Schulze, E-Detlef

    2016-12-01

    Monitoring and managing carbon stocks in forested ecosystems requires accurate and repeatable quantification of the spatial distribution of wood volume at landscape to regional scales. Grid-based forest inventory networks have provided valuable records of forest structure and dynamics at individual plot scales, but in isolation they may not represent the carbon dynamics of heterogeneous landscapes encompassing diverse land-management strategies and site conditions. Airborne LiDAR has greatly enhanced forest structural characterisation and, in conjunction with field-based inventories, it provides avenues for monitoring carbon over broader spatial scales. Here we aim to enhance the integration of airborne LiDAR surveying with field-based inventories by exploring the effect of inventory plot size and number on the relationship between field-estimated and LiDAR-predicted wood volume in deciduous broad-leafed forest in central Germany. Estimation of wood volume from airborne LiDAR was most robust (R² = 0.92, RMSE = 50.57 m³ ha⁻¹, ~14.13 Mg C ha⁻¹) when trained and tested with 1 ha experimental plot data (n = 50). Predictions based on a more extensive (n = 1100) plot network with considerably smaller (0.05 ha) plots were inferior (R² = 0.68, RMSE = 101.01 m³ ha⁻¹, ~28.09 Mg C ha⁻¹). Differences between the 1 and 0.05 ha volume models from LiDAR were negligible, however, at the scale of individual land-management units. Sample size permutation tests showed that increasing the number of inventory plots above 350 for the 0.05 ha plots returned no improvement in R² and RMSE variability of the LiDAR-predicted wood volume model. Our results from this study confirm the utility of LiDAR for estimating wood volume in deciduous broad-leafed forest, but highlight the challenges associated with field plot size and number in establishing robust relationships between airborne LiDAR and field derived wood volume. We are moving into a forest management era where

  16. Predicting sample size required for classification performance

    Directory of Open Access Journals (Sweden)

    Figueroa Rosa L

    2012-02-01

    Background Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness-of-fit measures. As control we used an un-weighted fitting method. Results A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 and 560 annotated samples to achieve mean absolute and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method (p < 0.05). Conclusions This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
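
    A generic version of such a fit can be written in a few lines with scipy (the learning-curve points and the precise inverse-power-law parameterization below are illustrative, not the authors'):

        import numpy as np
        from scipy.optimize import curve_fit

        def ipl(n, a, b, c):
            # inverse power law: performance approaches the plateau 'a'
            return a - b * n ** (-c)

        # hypothetical learning-curve points: sample size, accuracy, SD of accuracy
        n   = np.array([50, 100, 200, 400, 800])
        acc = np.array([0.71, 0.76, 0.80, 0.83, 0.85])
        sd  = np.array([0.050, 0.040, 0.030, 0.020, 0.015])

        # weighted nonlinear least squares: points with small SD count more
        popt, pcov = curve_fit(ipl, n, acc, p0=[0.9, 1.0, 0.5],
                               sigma=sd, absolute_sigma=True)
        print(f"predicted accuracy at n=5000: {ipl(5000, *popt):.3f}")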

  17. The rainfall plot: its motivation, characteristics and pitfalls.

    Science.gov (United States)

    Domanska, Diana; Vodák, Daniel; Lund-Andersen, Christin; Salvatore, Stefania; Hovig, Eivind; Sandve, Geir Kjetil

    2017-05-18

    A visualization referred to as rainfall plot has recently gained popularity in genome data analysis. The plot is mostly used for illustrating the distribution of somatic cancer mutations along a reference genome, typically aiming to identify mutation hotspots. In general terms, the rainfall plot can be seen as a scatter plot showing the location of events on the x-axis versus the distance between consecutive events on the y-axis. Despite its frequent use, the motivation for applying this particular visualization and the appropriateness of its usage have never been critically addressed in detail. We show that the rainfall plot allows visual detection even for events occurring at high frequency over very short distances. In addition, event clustering at multiple scales may be detected as distinct horizontal bands in rainfall plots. At the same time, due to the limited size of standard figures, rainfall plots might suffer from inability to distinguish overlapping events, especially when multiple datasets are plotted in the same figure. We demonstrate the consequences of plot congestion, which results in obscured visual data interpretations. This work provides the first comprehensive survey of the characteristics and proper usage of rainfall plots. We find that the rainfall plot is able to convey a large amount of information without any need for parameterization or tuning. However, we also demonstrate how plot congestion and the use of a logarithmic y-axis may result in obscured visual data interpretations. To aid the productive utilization of rainfall plots, we demonstrate their characteristics and potential pitfalls using both simulated and real data, and provide a set of practical guidelines for their proper interpretation and usage.
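
    Constructing the plot itself is straightforward: sort the event positions and plot each position against its distance to the preceding event, usually on a logarithmic y-axis. A minimal sketch with simulated mutation positions and one artificial hotspot:

        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(2)
        # simulated positions: uniform background plus a hotspot near 5 Mb
        pos = np.sort(np.concatenate([
            rng.integers(0, 10_000_000, 500),
            rng.normal(5_000_000, 2_000, 200).astype(int),
        ]))

        dist = np.diff(pos)                 # distance to the previous event
        plt.scatter(pos[1:], dist, s=5)
        plt.yscale("log")                   # the log axis discussed in the text
        plt.xlabel("genomic position")
        plt.ylabel("distance to previous mutation (bp)")
        plt.show()                          # the hotspot appears as a low band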

  18. Estimation of sample size and testing power (Part 4).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-01-01

    Sample size estimation is necessary for any experimental or survey research. An appropriate estimation of sample size based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation for difference tests in designs with one factor at two levels, covering both quantitative and qualitative data; it presents the estimation formulas and their realization based on the formulas and the POWER procedure of SAS software. In addition, this article presents worked examples, which will help researchers implement the repetition principle during the research design phase.

  19. Sample size determination for equivalence assessment with multiple endpoints.

    Science.gov (United States)

    Sun, Anna; Dong, Xiaoyu; Tsong, Yi

    2014-01-01

    Equivalence assessment between a reference and test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from a joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach for sample size determination in this case would select the largest of the sample sizes required for the individual endpoints. However, such a method ignores the correlation among endpoints. With the objective of rejecting all endpoints, and when the endpoints are uncorrelated, the power function is the product of the power functions for the individual endpoints. With correlated endpoints, the sample size and power should be adjusted for such a correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation under both crossover and parallel designs. We further discuss the differences in sample size between the naive method and the correlation-adjusted method, and illustrate them with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.
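
    The effect of the correlation on the joint power of several TOST procedures is easy to see by simulation. The toy below (a design based on per-subject differences, with invented SDs, a log(1.25) margin and two correlation levels; it is not the paper's exact power function) passes a trial only if both endpoints clear their two one-sided tests:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)

        def joint_tost_power(n, rho, sd=0.35, margin=np.log(1.25),
                             alpha=0.05, n_sim=4000):
            cov = [[sd**2, rho * sd**2], [rho * sd**2, sd**2]]
            crit = stats.t.ppf(1 - alpha, n - 1)
            wins = 0
            for _ in range(n_sim):
                d = rng.multivariate_normal([0.0, 0.0], cov, n)  # true ratio = 1
                ok = True
                for j in (0, 1):
                    m = d[:, j].mean()
                    se = d[:, j].std(ddof=1) / np.sqrt(n)
                    ok &= (m + margin) / se > crit and (m - margin) / se < -crit
                wins += ok
            return wins / n_sim

        for rho in (0.0, 0.8):
            print(f"rho={rho}: joint power at n=20 is {joint_tost_power(20, rho):.2f}")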

  20. Preeminence and prerequisites of sample size calculations in clinical trials

    OpenAIRE

    Richa Singhal; Rakesh Rana

    2015-01-01

    The key components while planning a clinical study are the study design, study duration, and sample size. These features are an integral part of planning a clinical trial efficiently, ethically, and cost-effectively. This article describes some of the prerequisites for sample size calculation. It also explains that sample size calculation is different for different study designs. The article describes in detail the sample size calculation for a randomized controlled trial when the primary outcome is a continuous variable and when it is a proportion or a qualitative variable.

  1. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

    Science.gov (United States)

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2009-05-01

    When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
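
    For the classical untrimmed analogue of this allocation problem, minimizing the total cost c1·n1 + c2·n2 for a fixed variance of the mean difference gives the textbook square-root rule n1/n2 = (sigma1/sigma2)·sqrt(c2/c1). A minimal illustration (all values assumed; Yuen's trimming is not reflected here):

        import math

        def optimal_allocation_ratio(sigma1, sigma2, c1, c2):
            # minimizes c1*n1 + c2*n2 subject to fixed sigma1^2/n1 + sigma2^2/n2
            return (sigma1 / sigma2) * math.sqrt(c2 / c1)

        sigma1, sigma2 = 2.0, 1.0    # group standard deviations (assumed)
        c1, c2 = 1.0, 4.0            # per-subject sampling costs (assumed)
        print(f"n1/n2 = {optimal_allocation_ratio(sigma1, sigma2, c1, c2):.1f}")
        # -> 4.0: oversample the cheaper, more variable group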

  2. Optimal sample size for probability of detection curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2013-01-01

    Highlights: • We investigate sample size requirements to develop probability of detection curves. • We develop simulations to determine effective inspection target sizes, number and distribution. • We summarize these findings and provide guidelines for the NDE practitioner. -- Abstract: The use of probability of detection curves to quantify the reliability of non-destructive examination (NDE) systems is common in the aeronautical industry, but relatively less so in the nuclear industry, at least in European countries. Due to the nature of the components being inspected, sample sizes tend to be much lower. This makes manufacturing test pieces with representative flaws, in numbers sufficient to draw statistical conclusions on the reliability of the NDT system under investigation, quite costly. The European Network for Inspection and Qualification (ENIQ) has developed an inspection qualification methodology, referred to as the ENIQ Methodology. It has become widely used in many European countries and provides assurance on the reliability of NDE systems, but only qualitatively. The need to quantify the output of inspection qualification has become more important as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. A measure of NDE reliability is necessary to quantify risk reduction after inspection, and probability of detection (POD) curves provide such a metric. The Joint Research Centre (Petten, The Netherlands) supported ENIQ by investigating the question of the sample size required to determine a reliable POD curve. As mentioned earlier, manufacturing test pieces with defects that are typically found in nuclear power plants (NPPs) is usually quite expensive. Thus there is a tendency to reduce sample sizes, which in turn increases the uncertainty associated with the resulting POD curve. The main question in conjunction with POD curves is the appropriate sample size.

  3. Sample size for morphological traits of pigeonpea

    Directory of Open Access Journals (Sweden)

    Giovani Facco

    2015-12-01

    The objectives of this study were to determine the sample size (i.e., number of plants) required to accurately estimate the average of morphological traits of pigeonpea (Cajanus cajan L.) and to check for variability in sample size between evaluation periods and seasons. Two uniformity trials (i.e., experiments without treatment) were conducted for two growing seasons. In the first season (2011/2012), the seeds were sown by broadcast seeding, and in the second season (2012/2013), the seeds were sown in rows spaced 0.50 m apart. The ground area in each experiment was 1,848 m2, and 360 plants were marked in the central area, in a 2 m × 2 m grid. Three morphological traits (e.g., number of nodes, plant height and stem diameter) were evaluated 13 times during the first season and 22 times in the second season. Measurements for all three morphological traits were normally distributed, as confirmed by the Kolmogorov-Smirnov test. Randomness was confirmed using the Run Test, and the descriptive statistics were calculated. For each trait, the sample size (n) was calculated for semiamplitudes of the confidence interval (i.e., estimation error) equal to 2, 4, 6, ..., 20% of the estimated mean, with a confidence coefficient (1-α) of 95%. Subsequently, n was fixed at 360 plants, and the estimation error of the estimated percentage of the average for each trait was calculated. Variability of the sample size for the pigeonpea culture was observed between the morphological traits evaluated, among the evaluation periods and between seasons. Therefore, to assess with an accuracy of 6% of the estimated average, at least 136 plants must be evaluated throughout the pigeonpea crop cycle to determine the sample size for the traits (e.g., number of nodes, plant height and stem diameter) in the different evaluation periods and between seasons.
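
    The sample-size rule used in such uniformity trials follows from requiring the confidence-interval half-width to equal a fraction d of the mean, which gives n = (z·CV/d)² under a normal approximation. A quick check with an assumed coefficient of variation:

        import math
        from scipy.stats import norm

        def n_required(cv, d, conf=0.95):
            # smallest n whose CI half-width is <= d (a fraction of the mean)
            z = norm.ppf(1 - (1 - conf) / 2)
            return math.ceil((z * cv / d) ** 2)

        cv = 0.35    # illustrative CV for a morphological trait
        for d in (0.02, 0.06, 0.10, 0.20):
            print(f"half-width {d:.0%} of the mean -> n = {n_required(cv, d)}")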

  4. Evaluating sampling strategy for DNA barcoding study of coastal and inland halo-tolerant Poaceae and Chenopodiaceae: A case study for increased sample size.

    Science.gov (United States)

    Yao, Peng-Cheng; Gao, Hai-Yan; Wei, Ya-Nan; Zhang, Jian-Hang; Chen, Xiao-Yong; Li, Hong-Qing

    2017-01-01

    Environmental conditions in coastal salt marsh habitats have led to the development of specialist genetic adaptations. We evaluated six DNA barcode loci of the 53 species of Poaceae and 15 species of Chenopodiaceae from China's coastal salt marsh area and inland area. Our results indicate that the optimum DNA barcode was ITS for coastal salt-tolerant Poaceae and matK for the Chenopodiaceae. Sampling strategies for ten common species of Poaceae and Chenopodiaceae were analyzed according to the optimum barcode. We found that by increasing the number of samples collected from the coastal salt marsh area on the basis of inland samples, the number of haplotypes of Arundinella hirta, Digitaria ciliaris, Eleusine indica, Imperata cylindrica, Setaria viridis, and Chenopodium glaucum increased, with a principal coordinate plot clearly showing increased distribution points. The results of a Mann-Whitney test showed that for Digitaria ciliaris, Eleusine indica, Imperata cylindrica, and Setaria viridis, the distribution of intraspecific genetic distances was significantly different when samples from the coastal salt marsh area were included (P < 0.05), while for Imperata cylindrica and Chenopodium album, average intraspecific distance tended to reach stability. These results indicate that the sample size for DNA barcoding of globally distributed species should be increased to 11-15.

  5. Towards the harmonization between National Forest Inventory and Forest Condition Monitoring. Consistency of plot allocation and effect of tree selection methods on sample statistics in Italy.

    Science.gov (United States)

    Gasparini, Patrizia; Di Cosmo, Lucio; Cenni, Enrico; Pompei, Enrico; Ferretti, Marco

    2013-07-01

    In the frame of a process aiming at harmonizing National Forest Inventory (NFI) and ICP Forests Level I Forest Condition Monitoring (FCM) in Italy, we investigated (a) the long-term consistency between FCM sample points (a subsample of the first NFI, 1985, NFI_1) and recent forest area estimates (after the second NFI, 2005, NFI_2) and (b) the effect of tree selection method (tree-based or plot-based) on sample composition and defoliation statistics. The two investigations were carried out on 261 and 252 FCM sites, respectively. Results show that some individual forest categories (larch and stone pine, Norway spruce, other coniferous, beech, temperate oaks and cork oak forests) are over-represented and others (hornbeam and hophornbeam, other deciduous broadleaved and holm oak forests) are under-represented in the FCM sample. This is probably due to a change in forest cover, which has increased by 1,559,200 ha from 1985 to 2005. In case of a shift from a tree-based to a plot-based selection method, 3,130 (46.7%) of the original 6,703 sample trees will be abandoned, and 1,473 new trees will be selected. The balance between exclusion of former sample trees and inclusion of new ones will be particularly unfavourable for conifers (with only 16.4% of excluded trees replaced by new ones) and less so for deciduous broadleaves (with 63.5% of excluded trees replaced). The total number of tree species surveyed will not be impacted, while the number of trees per species will, and the resulting (plot-based) sample composition will have a much larger frequency of deciduous broadleaved trees. The newly selected trees have, in general, smaller diameters at breast height (DBH) and defoliation scores. Given the larger rate of turnover, the deciduous broadleaved part of the sample will be more impacted. Our results suggest that both a revision of the FCM network to account for forest area change and a plot-based approach to permit statistical inference and avoid bias in the tree sample

  6. Polar plot representation of time-resolved fluorescence.

    Science.gov (United States)

    Eichorst, John Paul; Wen Teng, Kai; Clegg, Robert M

    2014-01-01

    Measuring changes in a molecule's fluorescence emission is a common technique to study complex biological systems such as cells and tissues. Although the steady-state fluorescence intensity is frequently used, measuring the average amount of time that a molecule spends in the excited state (the fluorescence lifetime) reveals more detailed information about its local environment. The lifetime is measured in the time domain by detecting directly the decay of fluorescence following excitation by a short pulse of light. The lifetime can also be measured in the frequency domain by recording the phase and amplitude of oscillation in the emitted fluorescence of the sample in response to repetitively modulated excitation light. In either the time or frequency domain, the analysis of data to extract lifetimes can be computationally intensive. For example, a variety of iterative fitting algorithms already exist to determine lifetimes from samples that contain multiple fluorescing species. However, a more recent method of analysis, referred to as the polar plot (or phasor plot), is a graphical tool that projects the time-dependent features of the sample's fluorescence in either the time or frequency domain into the Cartesian plane to characterize the sample's lifetime. The coordinate transformations of the polar plot require only the raw data, and hence there are no uncertainties from extensive corrections or time-consuming fitting in this analysis. In this chapter, the history and mathematical background of the polar plot will be presented along with examples that highlight how it can be used in both cuvette-based and imaging applications.
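
    In the frequency domain the polar-plot coordinates are g = ∫I(t)cos(ωt)dt / ∫I(t)dt and s = ∫I(t)sin(ωt)dt / ∫I(t)dt, and every single-exponential decay lands on the 'universal semicircle' of radius 1/2 centred at (1/2, 0). A numerical check (the modulation frequency and lifetimes are illustrative):

        import numpy as np

        omega = 2 * np.pi * 80e6           # 80 MHz modulation (illustrative)
        t = np.linspace(0, 60e-9, 6000)    # 60 ns acquisition window

        def phasor(decay):
            g = np.sum(decay * np.cos(omega * t)) / np.sum(decay)
            s = np.sum(decay * np.sin(omega * t)) / np.sum(decay)
            return g, s

        for tau in (1e-9, 3e-9, 8e-9):     # single-exponential lifetimes
            g, s = phasor(np.exp(-t / tau))
            r2 = (g - 0.5) ** 2 + s ** 2   # squared distance from (0.5, 0)
            print(f"tau={tau * 1e9:.0f} ns: g={g:.3f}, s={s:.3f}, "
                  f"on semicircle: {abs(r2 - 0.25) < 1e-2}")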

  7. Preeminence and prerequisites of sample size calculations in clinical trials

    Directory of Open Access Journals (Sweden)

    Richa Singhal

    2015-01-01

    The key components while planning a clinical study are the study design, study duration, and sample size. These features are an integral part of planning a clinical trial efficiently, ethically, and cost-effectively. This article describes some of the prerequisites for sample size calculation. It also explains that sample size calculation is different for different study designs. The article describes in detail the sample size calculation for a randomized controlled trial when the primary outcome is a continuous variable and when it is a proportion or a qualitative variable.

  8. Comparison of Three Plot Selection Methods for Estimating Change in Temporally Variable, Spatially Clustered Populations.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, William L. [Bonneville Power Administration, Portland, OR (US). Environment, Fish and Wildlife

    2001-07-01

    Monitoring population numbers is important for assessing trends and meeting various legislative mandates. However, sampling across time introduces a temporal aspect to survey design in addition to the spatial one. For instance, a sample that is initially representative may lose this attribute if there is a shift in numbers and/or spatial distribution in the underlying population that is not reflected in later sampled plots. Plot selection methods that account for this temporal variability will produce the best trend estimates. Consequently, I used simulation to compare bias and relative precision of estimates of population change among stratified and unstratified sampling designs based on permanent, temporary, and partial replacement plots under varying levels of spatial clustering, density, and temporal shifting of populations. Permanent plots produced more precise estimates of change than temporary plots across all factors. Further, permanent plots performed better than partial replacement plots except for high density (5 and 10 individuals per plot) and 25% - 50% shifts in the population. Stratified designs always produced less precise estimates of population change for all three plot selection methods, and often produced biased change estimates and greatly inflated variance estimates under sampling with partial replacement. Hence, stratification that remains fixed across time should be avoided when monitoring populations that are likely to exhibit large changes in numbers and/or spatial distribution during the study period. Key words: bias; change estimation; monitoring; permanent plots; relative precision; sampling with partial replacement; temporary plots.

  9. Revisiting sample size: are big trials the answer?

    Science.gov (United States)

    Lurati Buse, Giovanna A L; Botto, Fernando; Devereaux, P J

    2012-07-18

    The superiority of the evidence generated in randomized controlled trials over observational data is not only conditional to randomization. Randomized controlled trials require proper design and implementation to provide a reliable effect estimate. Adequate random sequence generation, allocation implementation, analyses based on the intention-to-treat principle, and sufficient power are crucial to the quality of a randomized controlled trial. Power, or the probability of the trial to detect a difference when a real difference between treatments exists, strongly depends on sample size. The quality of orthopaedic randomized controlled trials is frequently threatened by a limited sample size. This paper reviews basic concepts and pitfalls in sample-size estimation and focuses on the importance of large trials in the generation of valid evidence.

  10. Test of a sample container for shipment of small size plutonium samples with PAT-2

    International Nuclear Information System (INIS)

    Kuhn, E.; Aigner, H.; Deron, S.

    1981-11-01

    A light-weight container for the air transport of plutonium, to be designated PAT-2, has been developed in the USA and is presently undergoing licensing. The very limited effective space for bearing plutonium required the design of small size sample canisters to meet the needs of international safeguards for the shipment of plutonium samples. The applicability of a small canister for the sampling of small size powder and solution samples has been tested in an intralaboratory experiment. The results of the experiment, based on the concept of pre-weighed samples, show that the tested canister can successfully be used for the sampling of small size PuO 2 -powder samples of homogeneous source material, as well as for dried aliquands of plutonium nitrate solutions. (author)

  11. Causality in Statistical Power: Isomorphic Properties of Measurement, Research Design, Effect Size, and Sample Size

    Directory of Open Access Journals (Sweden)

    R. Eric Heidel

    2016-01-01

    Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.
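
    Once the first four components are fixed, the fifth (sample size) is what an a priori calculation solves for. For instance, with a continuous outcome, a two-group design and an assumed medium effect (Cohen's d = 0.5):

        from statsmodels.stats.power import TTestIndPower

        n = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                        power=0.80, alternative='two-sided')
        print(f"required sample size per group: {n:.0f}")   # about 64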

  12. Effects of pocket gopher burrowing on cesium-133 distribution on engineered test plots

    International Nuclear Information System (INIS)

    Gonzales, G.J.; Saladen, M.T.; Hakonson, T.E.

    1995-01-01

    Very low levels of radionuclides exist on soil surfaces. Biological factors including vegetation and animal burrowing can influence the fate of these surface contaminants. Animal burrowing introduces variability in radionuclide migration that confounds estimation of nuclide migration pathways, risk assessment, and assessment of waste burial performance. A field study on the surface and subsurface erosional transport of surface-applied 133Cs as affected by pocket gopher (Thomomys bottae) burrowing was conducted on simulated waste landfill caps at the Los Alamos National Laboratory in north central New Mexico. Surface loss of Cs, adhered to five soil particle size ranges, was measured several times over an 18-mo period while simulated rainfalls were in progress. Gophers reduced Cs surface loss by a significant amount, 43%. Cesium surface loss on plots with only gophers was 0.8 kg totalled for the study period. This compared with 1.4 kg for control plots, 0.5 kg for vegetated plots, and 0.2 kg for plots with both gophers and vegetation. The change in Cs surface loss over time was significant (P < 0.05). Vegetation-bearing plots had significantly more total subsurface Cs (μ = 1.7 g kg⁻¹) than plots without vegetation (μ = 0.8 g kg⁻¹). An average of 97% of the subsurface Cs in plots with vegetation was located in the upper 15 cm of soil (SDR1 + SDR2) compared with 67% for plots without vegetation. Vegetation moderated the influence of gopher activity on the transport of Cs to the soil subsurface, and stabilized subsurface Cs by concentrating it in the rhizosphere. Gopher activity may have caused Cs transport to depths below the sampled depth of 30 cm. The results provide distribution coefficients for models of contaminant migration where animal burrowing occurs. 35 refs., 2 figs., 3 tabs

  13. CT dose survey in adults: what sample size for what precision?

    International Nuclear Information System (INIS)

    Taylor, Stephen; Muylem, Alain van; Howarth, Nigel; Gevenois, Pierre Alain; Tack, Denis

    2017-01-01

    To determine variability of volume computed tomographic dose index (CTDIvol) and dose-length product (DLP) data, and propose a minimum sample size to achieve an expected precision. CTDIvol and DLP values of 19,875 consecutive CT acquisitions of abdomen (7268), thorax (3805), lumbar spine (3161), cervical spine (1515) and head (4106) were collected in two centers. Their variabilities were investigated according to sample size (10 to 1000 acquisitions) and patient body weight categories (no weight selection, 67-73 kg and 60-80 kg). The 95 % confidence interval in percentage of the median (CI95/med) value was calculated for increasing sample sizes. We deduced the sample size that set a 95 % CI lower than 10 % of the median (CI95/med ≤ 10 %). The sample size ensuring CI95/med ≤ 10 % ranged from 15 to 900, depending on the body region and the dose descriptor considered. In sample sizes recommended by regulatory authorities (i.e., from 10-20 patients), the mean CTDIvol and DLP of one sample ranged from 0.50 to 2.00 times their actual values extracted from 2000 samples. The sampling error in CTDIvol and DLP means is high in dose surveys based on small samples of patients. Sample size should be increased at least tenfold to decrease this variability. (orig.)
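
    The shrinking of CI95/med with sample size is easy to reproduce on synthetic data. The sketch below (a lognormal stand-in for a dose distribution; all parameters invented) resamples surveys of increasing size from a fixed population:

        import numpy as np

        rng = np.random.default_rng(4)
        pop = rng.lognormal(mean=6.0, sigma=0.5, size=20_000)  # synthetic DLP values

        def ci95_over_median(n, n_rep=1000):
            means = np.array([rng.choice(pop, n, replace=False).mean()
                              for _ in range(n_rep)])
            lo, hi = np.percentile(means, [2.5, 97.5])
            return (hi - lo) / np.median(pop) * 100

        for n in (10, 20, 100, 400, 900):
            print(f"n={n:>4}: CI95 = {ci95_over_median(n):5.1f}% of the median")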

  14. CT dose survey in adults: what sample size for what precision?

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Stephen [Hopital Ambroise Pare, Department of Radiology, Mons (Belgium); Muylem, Alain van [Hopital Erasme, Department of Pneumology, Brussels (Belgium); Howarth, Nigel [Clinique des Grangettes, Department of Radiology, Chene-Bougeries (Switzerland); Gevenois, Pierre Alain [Hopital Erasme, Department of Radiology, Brussels (Belgium); Tack, Denis [EpiCURA, Clinique Louis Caty, Department of Radiology, Baudour (Belgium)

    2017-01-15

    To determine variability of volume computed tomographic dose index (CTDIvol) and dose-length product (DLP) data, and propose a minimum sample size to achieve an expected precision. CTDIvol and DLP values of 19,875 consecutive CT acquisitions of abdomen (7268), thorax (3805), lumbar spine (3161), cervical spine (1515) and head (4106) were collected in two centers. Their variabilities were investigated according to sample size (10 to 1000 acquisitions) and patient body weight categories (no weight selection, 67-73 kg and 60-80 kg). The 95 % confidence interval in percentage of the median (CI95/med) value was calculated for increasing sample sizes. We deduced the sample size that set a 95 % CI lower than 10 % of the median (CI95/med ≤ 10 %). The sample size ensuring CI95/med ≤ 10 % ranged from 15 to 900, depending on the body region and the dose descriptor considered. In sample sizes recommended by regulatory authorities (i.e., from 10-20 patients), the mean CTDIvol and DLP of one sample ranged from 0.50 to 2.00 times their actual values extracted from 2000 samples. The sampling error in CTDIvol and DLP means is high in dose surveys based on small samples of patients. Sample size should be increased at least tenfold to decrease this variability. (orig.)

  15. Multiple plots in R

    DEFF Research Database (Denmark)

    Edwards, Stefan McKinnon

    2012-01-01

    In this chapter I will investigate how to combine multiple plots into a single one. The scenario is a dataset of a series of measurements, on three samples in three situations. There are many ways we can display this, e.g. 3D graphs or faceting. 3D graphs are not good for displaying static data, so we

  16. Sample-size dependence of diversity indices and the determination of sufficient sample size in a high-diversity deep-sea environment

    OpenAIRE

    Soetaert, K.; Heip, C.H.R.

    1990-01-01

    Diversity indices, although designed for comparative purposes, often cannot be used as such, due to their sample-size dependence. It is argued here that this dependence is more pronounced in high-diversity than in low-diversity assemblages and that indices more sensitive to rarer species require larger sample sizes to estimate diversity with reasonable precision than indices which put more weight on commoner species. This was tested for Hill's diversity numbers N0 to N∞ ...
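
    The claim can be reproduced with Hill's diversity numbers N0 (species richness), N1 (exponential Shannon) and N2 (inverse Simpson). In the sketch below (a synthetic 300-species community), N0 keeps climbing with sample size while N2, which weights common species, stabilizes quickly:

        import numpy as np

        rng = np.random.default_rng(5)
        p = rng.dirichlet(np.full(300, 0.2))   # skewed 300-species community

        def hill_numbers(counts):
            q = counts[counts > 0] / counts.sum()
            n0 = (counts > 0).sum()                 # species richness
            n1 = np.exp(-(q * np.log(q)).sum())     # exponential Shannon
            n2 = 1.0 / (q ** 2).sum()               # inverse Simpson
            return n0, n1, n2

        for size in (50, 200, 1000, 5000):
            n0, n1, n2 = hill_numbers(rng.multinomial(size, p))
            print(f"n={size:>5}: N0={n0:>4}, N1={n1:6.1f}, N2={n2:6.1f}")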

  17. Sample size calculation for comparing two negative binomial rates.

    Science.gov (United States)

    Zhu, Haiyuan; Lakkis, Hassan

    2014-02-10

    The negative binomial model has been increasingly used to model count data in recent clinical trials. It is frequently chosen over the Poisson model in cases of overdispersed count data that are commonly seen in clinical trials. One of the challenges of applying the negative binomial model in clinical trial design is sample size estimation. In practice, simulation methods have been frequently used for sample size estimation. In this paper, an explicit formula is developed to calculate sample size based on the negative binomial model. Depending on different approaches to estimate the variance under the null hypothesis, three variations of the sample size formula are proposed and discussed. Important characteristics of the formula include its accuracy and its ability to explicitly incorporate the dispersion parameter and exposure time. The performance of the formula with each variation is assessed using simulations. Copyright © 2013 John Wiley & Sons, Ltd.
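
    One common variant of such a formula sizes each arm from the variance of the log rate-ratio, Var ≈ 1/(t·r0) + 1/(t·r1) + 2k, where k is the dispersion parameter and t the exposure time; the sketch below may differ in detail from the paper's three variations:

        import math
        from scipy.stats import norm

        def nb_sample_size(r0, r1, k, t=1.0, alpha=0.05, power=0.90):
            # per-arm n for a Wald test of the log rate-ratio (one variant)
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            var = 1 / (t * r0) + 1 / (t * r1) + 2 * k
            return math.ceil(z ** 2 * var / math.log(r1 / r0) ** 2)

        # illustrative inputs: rates 1.0 vs 0.7 events per year, dispersion 0.8
        print(nb_sample_size(r0=1.0, r1=0.7, k=0.8))   # -> 333 per arm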

  18. Estimation of sample size and testing power (part 5).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-02-01

    Estimation of sample size and testing power is an important component of research design. This article introduces methods for estimating sample size and testing power for difference tests on quantitative and qualitative data in the single-group, paired and crossover designs. Specifically, it presents the corresponding formulas, their realization based on the formulas and the POWER procedure of SAS software, and worked examples, which will help researchers implement the repetition principle.

  19. Frictional behaviour of sandstone: A sample-size dependent triaxial investigation

    Science.gov (United States)

    Roshan, Hamid; Masoumi, Hossein; Regenauer-Lieb, Klaus

    2017-01-01

    Frictional behaviour of rocks, from the initial stage of loading to final shear displacement along the formed shear plane, has been widely investigated in the past. However, the effect of sample size on such frictional behaviour has not attracted much attention. This is mainly related to the limitations of rock testing facilities as well as the complex mechanisms involved in the sample-size dependent frictional behaviour of rocks. In this study, a suite of advanced triaxial experiments was performed on Gosford sandstone samples of different sizes and at different confining pressures. The post-peak response of the rock along the formed shear plane has been captured for the analysis, with particular interest in sample-size dependency. Several important phenomena have been observed from the results of this study: (a) the rate of transition from brittleness to ductility in rock is sample-size dependent, with relatively smaller samples showing a faster transition toward ductility at any confining pressure; (b) the sample size influences the angle of the formed shear band; and (c) the friction coefficient of the formed shear plane is sample-size dependent, with relatively smaller samples exhibiting a lower friction coefficient than larger samples. We interpret our results in terms of a thermodynamic approach in which the frictional properties for finite deformation are viewed as encompassing a multitude of ephemeral slipping surfaces prior to the formation of the through-going fracture. The final fracture itself is seen as a result of the self-organisation of a sufficiently large ensemble of micro-slip surfaces and is therefore consistent with the theory of thermodynamics. This assumption vindicates the use of classical rock mechanics experiments to constrain failure of pressure-sensitive rocks, and the future imaging of these micro-slips opens an exciting path for research into rock failure mechanisms.

  20. Effects of sample size on the second magnetization peak in ...

    Indian Academy of Sciences (India)

    the sample size decreases – a result that could be interpreted as a size effect in the order–disorder vortex matter phase transition. However, local magnetic measurements trace this effect to metastable disordered vortex states, revealing the same order–disorder transition induction in samples of different size.

  1. Sampling efficiency for species composition assessments using the ...

    African Journals Online (AJOL)

    A pilot survey was conducted on three plot sizes (20 x 20 m, 30 x 30 m, 40 x 40 m) at two sites in a semi-arid savanna to determine the sampling efficiency of the wheel-point method (using the nearest-plant technique) for assessing species composition, judged by replicate similarity in relation to sampling intensity and by total sampling time.

  2. Constrained statistical inference: sample-size tables for ANOVA and regression

    Directory of Open Access Journals (Sweden)

    Leonard Vanbrabant

    2015-01-01

    Researchers in the social and behavioral sciences often have clear expectations about the order/direction of the parameters in their statistical model. For example, a researcher might expect that regression coefficient beta1 is larger than beta2 and beta3. The corresponding hypothesis is H: beta1 > {beta2, beta3}, and this is known as an order-constrained hypothesis. A major advantage of testing such a hypothesis is that power can be gained, and inherently a smaller sample size is needed. This article discusses this gain in sample size reduction when an increasing number of constraints is included in the hypothesis. The main goal is to present sample-size tables for constrained hypotheses. A sample-size table contains the necessary sample size at a prespecified power (say, 0.80) for an increasing number of constraints. To obtain sample-size tables, two Monte Carlo simulations were performed, one for ANOVA and one for multiple regression. Three results are salient. First, in an ANOVA the needed sample size decreases by 30% to 50% when complete ordering of the parameters is taken into account. Second, small deviations from the imposed order have only a minor impact on the power. Third, at the maximum number of constraints, the linear regression results are comparable with the ANOVA results. However, in the case of fewer constraints, ordering the parameters (e.g., beta1 > beta2) results in a higher power than assigning a positive or a negative sign to the parameters (e.g., beta1 > 0).

  3. ABCASH plotting program users guide

    International Nuclear Information System (INIS)

    Troyer, G.L.

    1995-01-01

    The Automated Bar Coding of Air Samples at Hanford (ABCASH) system provides an integrated data collection, sample tracking, and data reporting system for radioactive particulate air filter samples. The ABCASH plotting program provides a graphical trend report of the performance of air sample results for ABCASH. This document provides an operational guide for using the program. Based on a sample location identifier and date range, a trend chart of the available data is generated. The trend chart shows radiological activity versus time. General indications of the directional trend of the concentrations in air over time may be discerned. Comparison limit set-point values are also shown, as derived from the ABCASH database.

  4. Sample Size in Qualitative Interview Studies: Guided by Information Power.

    Science.gov (United States)

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit

    2015-11-27

    Sample sizes must be ascertained in qualitative studies, as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is "saturation." Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept "information power" to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model where these elements of information and their relevant dimensions are related to information power. Application of this model in the planning and during data collection of a qualitative study is discussed. © The Author(s) 2015.

  5. Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.

    Science.gov (United States)

    Morgan, Timothy M; Case, L Douglas

    2013-07-05

    In the design of a randomized clinical trial with one pre-randomization and multiple post-randomization assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra' conservative and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations between the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced at least 44%, 56%, and 61% for repeated measures analysis of covariance by taking 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.

  6. Experimental design and sample size for hydroponic lettuce crop [Delineamento experimental e tamanho de amostra para alface cultivada em hidroponia]

    Directory of Open Access Journals (Sweden)

    Valéria Schimitz Marodim

    2000-10-01

    Full Text Available This study was carried out to establish the experimental design and sample size for a hydroponic lettuce (Lactuca sativa) crop under the nutrient film technique (NFT). The experiment was conducted in the Laboratory of Hydroponic Crops of the Horticulture Department of the Federal University of Santa Maria and was based on plant weight data. Under hydroponic conditions on concrete benches with six ducts, the most suitable experimental design for lettuce is randomized blocks when the experimental unit is a strip transversal to the ducts, and completely randomized when the bench is the experimental unit. For plant weight, the sample size should be 40 plants for a confidence-interval half-width equal to 5% of the mean (d = 5%) and 7 plants for d = 20%.
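
    The 40-plant figure is consistent with the classical coefficient-of-variation formula n = (t·CV/d)², solved iteratively because t depends on n. The CV of 15.6% below is an assumed value chosen to reproduce the d = 5% result, not a number taken from the article.

```python
# Smallest n satisfying n >= (t_{alpha/2, n-1} * CV / d)^2, where d is the
# CI half-width as a percentage of the mean and CV is in the same units.
from scipy.stats import t

def sample_size(cv, d, alpha=0.05):
    n = 2
    while n < (t.ppf(1 - alpha / 2, n - 1) * cv / d) ** 2:
        n += 1
    return n

for d in (5.0, 20.0):
    print(f"d={d:.0f}%: n = {sample_size(cv=15.6, d=d)}")
# gives 40 at d=5%; the small-n end (article: 7 plants at d=20%) is more
# sensitive to the assumed CV
```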

  7. FLOWCHART; a computer program for plotting flowcharts

    Science.gov (United States)

    Bender, Bernice

    1982-01-01

    The computer program FLOWCHART can be used to very quickly and easily produce flowcharts of high quality for publication. FLOWCHART centers each element or block of text that it processes on one of a set of (imaginary) vertical lines. It can enclose a text block in a rectangle, circle or other selected figure. It can draw a line connecting the midpoint of any side of any figure with the midpoint of any side of any other figure and insert an arrow pointing in the direction of flow. It can write 'yes' or 'no' next to the line joining two figures. FLOWCHART creates flowcharts using some basic plotting subroutines which permit plots to be generated interactively and inspected on a Tektronix compatible graphics screen or plotted in a deferred mode on a Houston Instruments 42-inch pen plotter. The size of the plot, character set and character height in inches are inputs to the program. Plots generated using the pen plotter can be up to 42 inches high, the larger size plots being directly usable as visual aids in a talk. FLOWCHART centers each block of text on an imaginary column line. (The number of columns and column width are specified as input.) The midpoint of the longest line of text within the block is defined to be the center of the block and is placed on the column line. The spacing of individual words within the block is not altered when the block is positioned. The program writes the first block of text in a designated column and continues placing each subsequent block below the previous block in the same column. A block of text may be placed in a different column by specifying the number of the column and an earlier block of text with which the new block is to be aligned. If block zero is given as the earlier block, the new text is placed in the new column continuing down the page below the previous block. Optionally a column and number of inches from the top of the page may be given for positioning the next block of text. The program will normally draw one of five

  8. The Power of Low Back Pain Trials: A Systematic Review of Power, Sample Size, and Reporting of Sample Size Calculations Over Time, in Trials Published Between 1980 and 2012.

    Science.gov (United States)

    Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin

    2017-06-01

    A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one within which the probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at the point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore the ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or the reporting of sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011. Sample sizes in low back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of Evidence: 3.
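
    A back-of-envelope check on what the review's average trial could detect, using the standard two-arm formula for the detectable standardized mean difference. Only the 153-person average comes from the abstract; the even allocation, 5% two-sided alpha, and 80% power are assumptions.

```python
# Detectable SMD for a two-arm trial: d = (z_{1-a/2} + z_{1-b}) * sqrt(2/n_arm)
from scipy.stats import norm

n_total = 153
n_arm = n_total // 2                      # assumed equal allocation
z = norm.ppf(0.975) + norm.ppf(0.80)      # two-sided alpha=0.05, power=80%
d = z * (2 / n_arm) ** 0.5
print(f"detectable standardized mean difference ~ {d:.2f}")  # ~0.45
```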

  9. Sample size choices for XRCT scanning of highly unsaturated soil mixtures

    Directory of Open Access Journals (Sweden)

    Smith Jonathan C.

    2016-01-01

    Full Text Available Highly unsaturated soil mixtures (clay, sand and gravel) are used as building materials in many parts of the world, and there is increasing interest in understanding their mechanical and hydraulic behaviour. In the laboratory, x-ray computed tomography (XRCT) is becoming more widely used to investigate the microstructures of soils; however, a crucial issue for such investigations is the choice of sample size, especially concerning the scanning of soil mixtures where there will be a range of particle and void sizes. In this paper we present a discussion (centred around a new set of XRCT scans) on sample sizing for the scanning of samples comprising soil mixtures, where a balance has to be made between realistic representation of the soil components and the desire for high resolution scanning. We also comment on the appropriateness of differing sample sizes in comparison to sample sizes used for other geotechnical testing. Void size distributions for the samples are presented, and from these some hypotheses are made as to the roles of inter- and intra-aggregate voids in the mechanical behaviour of highly unsaturated soils.

  10. Inventory implications of using sampling variances in estimation of growth model coefficients

    Science.gov (United States)

    Albert R. Stage; William R. Wykoff

    2000-01-01

    Variables based on stand densities or stocking have sampling errors that depend on the relation of tree size to plot size and on the spatial structure of the population. Ignoring the sampling errors of such variables, which include most measures of competition used in both distance-dependent and distance-independent growth models, can bias the predictions obtained from...

  11. SEGY to ASCII: Conversion and Plotting Program

    Science.gov (United States)

    Goldman, Mark R.

    1999-01-01

    This report documents a computer program to convert standard 4-byte, IBM floating point SEGY files to ASCII xyz format. The program then optionally plots the seismic data using the GMT plotting package. The material for this publication is contained in a standard tar file (of99-126.tar) that is uncompressed and 726 K in size. It can be downloaded by any Unix machine. Move the tar file to the directory you wish to use it in, then type 'tar xvf of99-126.tar'. The archive files (and diskette) contain a NOTE file, a README file, a version-history file, source code, a makefile for easy compilation, and an ASCII version of the documentation. The archive files (and diskette) also contain example test files, including a typical SEGY file along with the resulting ASCII xyz and postscript files. Compiling the source code into an executable requires a C++ compiler. The program has been successfully compiled using Gnu's g++ version 2.8.1, and use of other compilers may require modifications to the existing source code. The g++ compiler is a free, high quality C++ compiler and may be downloaded from the ftp site: ftp://ftp.gnu.org/gnu. Plotting the seismic data requires the GMT plotting package, which may be downloaded from the web site: http://www.soest.hawaii.edu/gmt/

  12. Product plots.

    Science.gov (United States)

    Wickham, Hadley; Hofmann, Heike

    2011-12-01

    We propose a new framework for visualising tables of counts, proportions and probabilities. We call our framework product plots, alluding to the computation of area as a product of height and width, and the statistical concept of generating a joint distribution from the product of conditional and marginal distributions. The framework, with extensions, is sufficient to encompass over 20 visualisations previously described in fields of statistical graphics and infovis, including bar charts, mosaic plots, treemaps, equal area plots and fluctuation diagrams. © 2011 IEEE

  13. Decision Support on Small size Passive Samples

    Directory of Open Access Journals (Sweden)

    Vladimir Popukaylo

    2018-05-01

    Full Text Available A technique was developed for constructing adequate mathematical models from small passive samples, under conditions in which classical probabilistic-statistical methods do not allow valid conclusions to be drawn.

  14. Simple and multiple linear regression: sample size considerations.

    Science.gov (United States)

    Hanley, James A

    2016-11-01

    The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. The Statistics and Mathematics of High Dimension Low Sample Size Asymptotics.

    Science.gov (United States)

    Shen, Dan; Shen, Haipeng; Zhu, Hongtu; Marron, J S

    2016-10-01

    The aim of this paper is to establish several deep theoretical properties of principal component analysis for multiple-component spike covariance models. Our new results reveal an asymptotic conical structure in critical sample eigendirections under the spike models with distinguishable (or indistinguishable) eigenvalues, when the sample size and/or the number of variables (or dimension) tend to infinity. The consistency of the sample eigenvectors relative to their population counterparts is determined by the ratio between the dimension and the product of the sample size with the spike size. When this ratio converges to a nonzero constant, the sample eigenvector converges to a cone, with a certain angle to its corresponding population eigenvector. In the High Dimension, Low Sample Size case, the angle between the sample eigenvector and its population counterpart converges to a limiting distribution. Several generalizations of the multi-spike covariance models are also explored, and additional theoretical results are presented.

  16. The attention-weighted sample-size model of visual short-term memory

    DEFF Research Database (Denmark)

    Smith, Philip L.; Lilburn, Simon D.; Corbett, Elaine A.

    2016-01-01

    exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items...

  17. Breaking Free of Sample Size Dogma to Perform Innovative Translational Research

    Science.gov (United States)

    Bacchetti, Peter; Deeks, Steven G.; McCune, Joseph M.

    2011-01-01

    Innovative clinical and translational research is often delayed or prevented by reviewers’ expectations that any study performed in humans must be shown in advance to have high statistical power. This supposed requirement is not justifiable and is contradicted by the reality that increasing sample size produces diminishing marginal returns. Studies of new ideas often must start small (sometimes even with an N of 1) because of cost and feasibility concerns, and recent statistical work shows that small sample sizes for such research can produce more projected scientific value per dollar spent than larger sample sizes. Renouncing false dogma about sample size would remove a serious barrier to innovation and translation. PMID:21677197

  18. Plot 3

    DEFF Research Database (Denmark)

    Lund, Inger-Lise; Gjessing, Susanne; Hermansen, Anne-Mette

    Plot 3 is the first release of a versatile Danish-language teaching system for the middle grades, in which digital media are integrated into everyday instruction.

  19. Sample size re-assessment leading to a raised sample size does not inflate type I error rate under mild conditions.

    Science.gov (United States)

    Broberg, Per

    2013-07-19

    One major concern with adaptive designs, such as sample size adjustable designs, has been the fear of inflating the type I error rate. However, in (Stat Med 23:1023-1038, 2004) it is proven that when observations follow a normal distribution and the interim result shows promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored, and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit a raise. The main result states that for normally distributed observations, raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
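
    A Monte Carlo sketch of the question the paper studies: does raising the sample size after a promising interim inflate the type I error of a naive pooled z-test? The promising rule used here (interim z > 0.5, a stand-in for conditional power above 50%) and all sizes are illustrative assumptions; per the paper's result, the estimate should come out at or below the nominal 0.05.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n_planned, n_raised, reps = 50, 100, 200, 100_000
rejections = 0
for _ in range(reps):
    stage1 = rng.normal(0.0, 1.0, n1)                 # H0 true: mean 0, sd 1
    z1 = stage1.mean() * np.sqrt(n1)                  # interim z-statistic
    n_final = n_raised if z1 > 0.5 else n_planned     # raise only if promising
    stage2 = rng.normal(0.0, 1.0, n_final - n1)
    z = np.concatenate([stage1, stage2]).mean() * np.sqrt(n_final)
    rejections += z > 1.645                           # one-sided alpha = 0.05
print(f"empirical type I error ~ {rejections / reps:.4f}")
```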

  20. Genome U-Plot: a whole genome visualization.

    Science.gov (United States)

    Gaitatzes, Athanasios; Johnson, Sarah H; Smadbeck, James B; Vasmatzis, George

    2018-05-15

    The ability to produce and analyze whole genome sequencing (WGS) data from samples with structural variations (SV) generated the need to visualize such abnormalities in simplified plots. Conventional two-dimensional representations of WGS data frequently use either circular or linear layouts. Each of these representations has its advantages, but their common disadvantage is that they do not use the two-dimensional space very efficiently. We propose a layout, termed the Genome U-Plot, which spreads the chromosomes on a two-dimensional surface and essentially quadruples the spatial resolution. We present the Genome U-Plot for producing clear and intuitive graphs that allow researchers to generate novel insights and hypotheses by visualizing SVs such as deletions, amplifications, and chromoanagenesis events. The main features of the Genome U-Plot are its layered layout, its high spatial resolution and its improved aesthetic qualities. We compare conventional visualization schemas with the Genome U-Plot using visualization metrics such as the number of line crossings and crossing angle resolution measures. Based on our metrics, we improve the readability of the resulting graph by at least 2-fold, making important features apparent and making it easy to identify important genomic changes. A whole genome visualization tool with high spatial resolution and improved aesthetic qualities. An implementation and documentation of the Genome U-Plot is publicly available at https://github.com/gaitat/GenomeUPlot. vasmatzis.george@mayo.edu. Supplementary data are available at Bioinformatics online.

  1. The Half-Half Plot

    NARCIS (Netherlands)

    Einmahl, J.H.J.; Gantner, M.

    2009-01-01

    The Half-Half (HH) plot is a new graphical method to investigate qualitatively the shape of a regression curve. The empirical HH-plot counts observations in the lower and upper quarter of a strip that moves horizontally over the scatter plot. The plot displays jumps clearly and reveals further

  2. Inferring the demographic history from DNA sequences: An importance sampling approach based on non-homogeneous processes.

    Science.gov (United States)

    Ait Kaci Azzou, S; Larribe, F; Froda, S

    2016-10-01

    In Ait Kaci Azzou et al. (2015) we introduced an Importance Sampling (IS) approach for estimating the demographic history of a sample of DNA sequences, the skywis plot. More precisely, we proposed a new nonparametric estimate of a population size that changes over time. We showed on simulated data that the skywis plot can work well in typical situations where the effective population size does not undergo very steep changes. In this paper, we introduce an iterative procedure which extends the previous method and gives good estimates under such rapid variations. In the iterative calibrated skywis plot we approximate the effective population size by a piecewise constant function, whose values are re-estimated at each step. These piecewise constant functions are used to generate the waiting times of non homogeneous Poisson processes related to a coalescent process with mutation under a variable population size model. Moreover, the present IS procedure is based on a modified version of the Stephens and Donnelly (2000) proposal distribution. Finally, we apply the iterative calibrated skywis plot method to a simulated data set from a rapidly expanding exponential model, and we show that the method based on this new IS strategy correctly reconstructs the demographic history. Copyright © 2016. Published by Elsevier Inc.

  3. Monitoring larval populations of the Douglas-fir tussock moth and the western spruce budworm on permanent plots: sampling methods and statistical properties of data

    Science.gov (United States)

    A.R. Mason; H.G. Paul

    1994-01-01

    Procedures for monitoring larval populations of the Douglas-fir tussock moth and the western spruce budworm are recommended based on many years' experience in sampling these species in eastern Oregon and Washington. It is shown that statistically reliable estimates of larval density can be made for a population by sampling host trees in a series of permanent plots in a...

  4. Sample Size and Saturation in PhD Studies Using Qualitative Interviews

    Directory of Open Access Journals (Sweden)

    Mark Mason

    2010-08-01

    Full Text Available A number of issues can affect sample size in qualitative research; however, the guiding principle should be the concept of saturation. This has been explored in detail by a number of authors but is still hotly debated, and some say little understood. A sample of PhD studies using qualitative approaches, with qualitative interviews as the method of data collection, was taken from theses.com and content-analysed for sample sizes. Five hundred and sixty studies were identified that fitted the inclusion criteria. Results showed that the mean sample size was 31; however, the distribution was non-random, with a statistically significant proportion of studies presenting sample sizes that were multiples of ten. These results are discussed in relation to saturation. They suggest a pre-meditated approach that is not wholly congruent with the principles of qualitative research. URN: urn:nbn:de:0114-fqs100387

  5. The impact of forest structure and spatial scale on the relationship between ground plot above ground biomass and GEDI lidar waveforms

    Science.gov (United States)

    Armston, J.; Marselis, S.; Hancock, S.; Duncanson, L.; Tang, H.; Kellner, J. R.; Calders, K.; Disney, M.; Dubayah, R.

    2017-12-01

    The NASA Global Ecosystem Dynamics Investigation (GEDI) will place a multi-beam waveform lidar instrument on the International Space Station (ISS) to provide measurements of forest vertical structure globally. These measurements of structure will underpin empirical modelling of above ground biomass density (AGBD) at the scale of individual GEDI lidar footprints (25 m diameter). The GEDI pre-launch calibration strategy for footprint-level models relies on linking AGBD estimates from ground plots with GEDI lidar waveforms simulated from coincident discrete-return airborne laser scanning data. Currently available ground plot data have variable and often large uncertainty at the spatial resolution of GEDI footprints due to poor colocation, allometric model error, sample size and plot edge effects. The relative importance of these sources of uncertainty partly depends on the quality of ground measurements and region. It is usually difficult to know the magnitude of these uncertainties a priori, so a common approach to mitigate their influence on model training is to aggregate ground plot and waveform lidar data to a coarser spatial scale (0.25-1 ha). Here we examine the impacts of these principal sources of uncertainty using a 3D simulation approach. Sets of realistic tree models generated from terrestrial laser scanning (TLS) data or parametric modelling matched to tree inventory data were assembled from four contrasting forest plots across tropical rainforest, deciduous temperate forest, and sclerophyll eucalypt woodland sites. These tree models were used to simulate geometrically explicit 3D scenes with variable tree density, size class and spatial distribution. GEDI lidar waveforms are simulated over ground plots within these scenes using Monte Carlo ray tracing, allowing the impact of varying ground plot and waveform colocation error, forest structure and edge effects on the relationship between ground plot AGBD and GEDI lidar waveforms to be directly assessed.

  6. Plot-size for {sup 15}N-fertilizer recovery studies by tanzania-grass; Tamanho da parcela para estudos de recuperacao de fertilizante-{sup 15}N por capim-tanzania

    Energy Technology Data Exchange (ETDEWEB)

    Martha Junior, Geraldo Bueno [EMBRAPA Cerrados, Planaltina, DF (Brazil)], e-mail: gbmartha@cpac.embrapa.br; Trivelin, Paulo Cesar Ocheuze [Centro de Energia Nuclear na Agricultura (CENA/USP), Piracicaba, SP (Brazil). Lab. de Isotopos Estaveis], e-mail: pcotrive@cena.usp.br; Corsi, Moacyr [Escola Superior de Agricultura Luiz de Queiroz (ESALQ/USP), Piracicaba, SP (Brazil). Dept. de Zootecnia], e-mail: moa@esalq.usp.br

    2009-07-01

    The understanding of the N dynamics in pasture ecosystems can be improved by studies using the {sup 15}N tracer technique. However, in these experiments it must be ensured that the lateral movement of the labeled fertilizer does not interfere with the results. In this study the plot-size requirements for {sup 15}N-fertilizer recovery experiments with irrigated Panicum maximum cv. Tanzania were determined. Three grazing intensities (light, moderate and intensive grazing) in the winter, spring and summer seasons were considered. A 1 m{sup 2} plot size, with a grass tussock in the center, was adequate, irrespective of the grazing intensity or season of the year. Increasing the distance from the area fertilized with {sup 15}N negatively affected the N derived from fertilizer (Npfm) recovered in herbage. The smallest decline in Npfm values was observed for the moderate and light grazing intensities. This fact might be explained by the vigorous growth characteristics of these plants. Increasing the grazing intensity decreased the tussock mass and, the smaller the tussock mass, the greater was the dependence on fertilizer nitrogen. (author)

  7. Trellis plots as visual aids for analyzing split plot experiments

    DEFF Research Database (Denmark)

    Kulahci, Murat; Menon, Anil

    2017-01-01

    The analysis of split plot experiments can be challenging due to a complicated error structure resulting from restrictions on complete randomization. Similarly, standard visualization methods do not provide the insight practitioners desire to understand the data, think of explanations, generate hypotheses, build models, or decide on next steps. This article demonstrates the effective use of trellis plots in the preliminary data analysis for split plot experiments to address this problem. Trellis displays help to visualize multivariate data by allowing for conditioning in a general way. They can

  8. Sample size allocation in multiregional equivalence studies.

    Science.gov (United States)

    Liao, Jason J Z; Yu, Ziji; Li, Yulan

    2018-06-17

    With the increasing globalization of drug development, the multiregional clinical trial (MRCT) has gained extensive use. The data from MRCTs can be accepted by regulatory authorities across regions and countries as the primary source of evidence to support global marketing approval of a drug in multiple regions simultaneously. The MRCT can speed up patient enrollment and drug approval, and it makes effective therapies available to patients all over the world at the same time. However, there are many operational and scientific challenges in conducting drug development globally. One of the many important questions in the design of a multiregional study is how to partition the sample size across the individual regions. In this paper, two systematic approaches are proposed for sample size allocation in a multiregional equivalence trial. A numerical evaluation and a biosimilar trial are used to illustrate the characteristics of the proposed approaches. Copyright © 2018 John Wiley & Sons, Ltd.

  9. Sampling strategies for estimating brook trout effective population size

    Science.gov (United States)

    Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher

    2012-01-01

    The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...

  10. Size and form of plots for the culture of the Italian pumpkin in plastic greenhouse [Tamanho e forma de parcelas para a cultura da abóbora italiana cultivada em estufa plástica]

    Directory of Open Access Journals (Sweden)

    Rodrigo Machado Mello

    2004-01-01

    Full Text Available Quality control is the guarantee that experimental error is kept at acceptable levels, and the definition of the proper size and form of experimental plots ensures accurate experimental planning. This paper aims to determine the proper plot size and shape for the culture of the Italian pumpkin in protected environments. Two experiments were set up in a plastic greenhouse in distinct crop seasons: the Summer-Fall season and the Winter-Spring season. Each experiment comprised eight 23-m long lines with 20 plants each, and fruit weight was considered the main performance parameter. Estimates of the best plot size and shape were obtained by the maximum curvature, variance comparison and Hatheway methods. The ideal plot size and shape varied with the season: according to the maximum curvature and Hatheway methods, they were eight plants (4 × 2 plot) for the Summer-Fall season and four plants (2 × 2 plot) for the Winter-Spring season.

  11. Sample Size Induced Brittle-to-Ductile Transition of Single-Crystal Aluminum Nitride

    Science.gov (United States)

    2015-08-01


  12. 137Cs profiles in erosion plots with different soil cultivation

    International Nuclear Information System (INIS)

    Andrello, A.C.; Appoloni, C.R.; Cassol, E.A.; Melquiades, F.L.

    2006-01-01

    Cesium-137 methodology has been successfully used to assess soil erosion. Seven erosion plots were sampled to determine the 137Cs profile and to assess the erosion rates. The 137Cs profile for the native pasture plot showed an exponential decline below 5 cm depth, with little 137Cs activity in the superficial layer (0-5 cm). The 137Cs profile for the wheat-soybean rotation plot under conventional tillage showed a uniform distribution with depth. For this plot, soil loss occurs more at the middle level than at the upper and lower levels. The 137Cs profiles for the wheat-soybean rotation and wheat-maize rotation plots under no-tillage showed a result similar to the native pasture, with minimal soil loss in the superficial layer. The 137Cs profiles for the bare soil and cultivated pasture plots are similar, with a soil erosion rate of 229 t ha^-1 year^-1. In the plots under conventional tillage, greater soil loss occurs at the middle level than at the upper and lower levels. In the no-tillage plots, soil loss occurs at the lower level, but no sign of soil loss or gain is observed at the upper level. Cesium-137 methodology is a good tool to assess soil erosion, and the 137Cs profile offers a way to understand soil erosion behavior in erosion plots. (author)

  13. Sample size determination for logistic regression on a logit-normal distribution.

    Science.gov (United States)

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.

  14. Sample size reassessment for a two-stage design controlling the false discovery rate.

    Science.gov (United States)

    Zehetmayer, Sonja; Graf, Alexandra C; Posch, Martin

    2015-11-01

    Sample size calculations for gene expression microarray and NGS-RNA-Seq experiments are challenging because the overall power depends on unknown quantities such as the proportion of true null hypotheses and the distribution of the effect sizes under the alternative. We propose a two-stage design with an adaptive interim analysis where these quantities are estimated from the interim data. The second stage sample size is chosen based on these estimates to achieve a specific overall power. The proposed procedure controls the power in all considered scenarios except for very low first stage sample sizes. The false discovery rate (FDR) is controlled despite the data-dependent choice of sample size. The two-stage design can be a useful tool to determine the sample size of high-dimensional studies if in the planning phase there is high uncertainty regarding the expected effect sizes and variability.

  15. Evaluating uncertainty in 7Be-based soil erosion estimates: an experimental plot approach

    Science.gov (United States)

    Blake, Will; Taylor, Alex; Abdelli, Wahid; Gaspar, Leticia; Barri, Bashar Al; Ryken, Nick; Mabit, Lionel

    2014-05-01

    Soil erosion remains a major concern for the international community, and there is a growing need to improve the sustainability of agriculture to support future food security. High resolution soil erosion data are a fundamental requirement for underpinning soil conservation and management strategies, but representative data on soil erosion rates are difficult to achieve by conventional means without interfering with farming practice and hence compromising the representativeness of results. Fallout radionuclide (FRN) tracer technology offers a solution, since FRN tracers are delivered to the soil surface by natural processes and, where irreversible binding can be demonstrated, redistributed in association with soil particles. While much work has demonstrated the potential of short-lived 7Be (half-life 53 days), particularly in quantification of short-term inter-rill erosion, less attention has focussed on sources of uncertainty in derived erosion measurements and sampling strategies to minimise these. This poster outlines and discusses potential sources of uncertainty in 7Be-based soil erosion estimates and the experimental design considerations taken to quantify these in the context of a plot-scale validation experiment. Traditionally, gamma counting statistics have been the main element of uncertainty propagated and reported, but recent work has shown that other factors may be more important, such as: (i) spatial variability in the relaxation mass depth that describes the shape of the 7Be depth distribution for an uneroded point; (ii) spatial variability in fallout (linked to rainfall patterns and shadowing) over both the reference site and the plot; (iii) particle size sorting effects; and (iv) preferential mobility of fallout over active runoff contributing areas. To explore these aspects in more detail, a plot of 4 x 35 m was ploughed and tilled to create a bare, sloped soil surface at the beginning of winter 2013/2014 in southwest UK. The lower edge of the plot was bounded by

  16. Rainfall-runoff-soil and nutrient loss relationships for plot size areas of bhetagad watershed in Central Himalaya, India

    Science.gov (United States)

    Kothyari, B. P.; Verma, P. K.; Joshi, B. K.; Kothyari, U. C.

    2004-06-01

    The Bhetagad watershed in the Kumaon Hills of Central Himalaya is representative of the hydro-meteorological conditions of the middle mountains of the Hindu Kush Himalayas. This study was conducted to assess the runoff, soil loss and subsequent nutrient losses from different prominent land uses in the Bhetagad watershed of the Central Himalayas. Four experimental natural plots, each 20 m long and 5 m wide, were delineated on the four most common land covers, viz. pine forest, tea plantation, rainfed agriculture and degraded land. Monthly values of runoff, soil loss and nutrient loss from these land uses were quantified for four successive years (1998-2001) following standard methodologies. The annual runoff in these plots ranged between 51 and 3593 m3/ha, while the annual soil loss varied between 0.06 and 5.47 tonnes/ha during the entire study period. The loss of organic matter was found to be maximum in the plot with pine forest, followed by the plot with tea plantation as the land cover. The annual loss of total N (6.24 kg/ha), total P (3.88 kg/ha) and total K (5.98 kg/ha), per unit loss of soil (tonnes/ha), was maximum from the plot with a rainfed agricultural crop as the land cover. The loss of total N ranged between 0.30 and 21.27 kg/ha, total P between 0.14 and 9.42 kg/ha, and total K from 0.12 to 11.31 kg/ha, whereas organic matter loss varied between 3.65 and 255.16 kg/ha across the different experimental plots. The findings will lead towards devising better conservation/management options for mountain land use systems.

  17. Size-exclusion chromatography of perfluorosulfonated ionomers.

    Science.gov (United States)

    Mourey, T H; Slater, L A; Galipo, R C; Koestner, R J

    2011-08-26

    A size-exclusion chromatography (SEC) method in N,N-dimethylformamide containing 0.1 M LiNO(3) is shown to be suitable for the determination of molar mass distributions of three classes of perfluorosulfonated ionomers, including Nafion(®). Autoclaving sample preparation is optimized to prepare molecular solutions free of aggregates, and a solvent exchange method concentrates the autoclaved samples to enable the use of molar-mass-sensitive detection. Calibration curves obtained from light scattering and viscometry detection suggest minor variation in the specific refractive index increment across the molecular size distributions, which introduces inaccuracies in the calculation of local absolute molar masses and intrinsic viscosities. Conformation plots that combine apparent molar masses from light scattering detection with apparent intrinsic viscosities from viscometry detection partially compensate for the variations in refractive index increment. The conformation plots are consistent with compact polymer conformations, and they provide Mark-Houwink-Sakurada constants that can be used to calculate molar mass distributions without molar-mass-sensitive detection. Unperturbed dimensions and characteristic ratios calculated from viscosity-molar mass relationships indicate unusually free rotation of the perfluoroalkane backbones and may suggest limitations to applying two-parameter excluded volume theories for these ionomers. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    Science.gov (United States)

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

    Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation that the level of agreement under a certain marginal prevalence is considered in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than a kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of a kappa statistic, and nomograms that eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
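
    The pivot from a kappa value to a simple proportion of agreement rests on the standard identity for two raters with a common marginal prevalence pi: p_o = p_e + kappa(1 − p_e), with chance agreement p_e = pi² + (1 − pi)². A sketch of that conversion, with arbitrary example values:

```python
# Convert a target kappa plus a marginal prevalence into the raw proportion
# of agreement that a sizing calculation could then work from.
def agreement_from_kappa(kappa, pi):
    p_e = pi**2 + (1 - pi) ** 2      # chance agreement, two raters, binary
    return p_e + kappa * (1 - p_e)

for pi in (0.5, 0.8, 0.95):
    print(f"pi={pi}: kappa=0.6 -> p_o={agreement_from_kappa(0.6, pi):.3f}")
# the same kappa maps to very different raw agreement as prevalence shifts --
# the mismatch behind the "kappa paradox" the authors mention
```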

  19. Sample size optimization in nuclear material control. 1

    International Nuclear Information System (INIS)

    Gladitz, J.

    1982-01-01

    Equations have been derived and exemplified which allow the determination of the minimum variables sample size for given false alarm and detection probabilities of nuclear material losses and diversions, respectively. (author)

  20. Impact of shoe size in a sample of elderly individuals

    Directory of Open Access Journals (Sweden)

    Daniel López-López

    Full Text Available Introduction: The use of an improper shoe size is common in older people and is believed to have a detrimental effect on the quality of life related to foot health. The objective is to describe and compare, in a sample of participants, the impact of shoes that fit properly or improperly, as well as analyze the scores related to foot health and overall health. Method: A sample of 64 participants, with a mean age of 75.3±7.9 years, attended an outpatient center where self-report data were recorded, foot and footwear sizes were measured, and scores were compared between the group wearing correctly sized shoes and the group wearing incorrectly sized shoes, using the Spanish version of the Foot Health Status Questionnaire. Results: The group wearing an improper shoe size showed poorer quality of life regarding overall health and specifically foot health. Differences between groups were evaluated using a t-test for independent samples and were statistically significant (p<0.05) for the dimensions of pain, function, footwear, overall foot health, and social function. Conclusion: Inadequate shoe size has a significant negative impact on quality of life related to foot health. The degree of negative impact seems to be associated with age, sex, and body mass index (BMI).

  1. The half-half plot

    NARCIS (Netherlands)

    Einmahl, J.H.J.; Gantner, M.

    2012-01-01

    The Half-Half (HH) plot is a new graphical method to investigate qualitatively the shape of a regression curve. The empirical HH-plot counts observations in the lower and upper quarter of a strip that moves horizontally over the scatterplot. The plot displays jumps clearly and reveals further

  2. Plasma physics plotting package

    International Nuclear Information System (INIS)

    Hyman, D.H.

    1981-02-01

    We describe a package of plotting routines that do up to six two- or three-dimensional plots on a frame with minimal loss of resolution. The package now runs on a PDP-10 with PLOT-10 TCS primitives and on a Control Data Corporation-7600 and a Cray-1 with TV80LIB primitives on the National Magnetic Fusion Energy Computer Center network. The package is portable to other graphics systems because only the primitive plot calls are used from the underlying system's graphics package

  3. MatrixPlot: visualizing sequence constraints

    DEFF Research Database (Denmark)

    Gorodkin, Jan; Stærfeldt, Hans Henrik; Lund, Ole

    1999-01-01

    MatrixPlot is a program for making high-quality matrix plots, such as mutual information plots of sequence alignments and distance matrices of sequences with known three-dimensional coordinates. The user can add information...

  4. Distributed plot-making

    DEFF Research Database (Denmark)

    Jensen, Lotte Groth; Bossen, Claus

    2016-01-01

    different socio-technical systems (paper-based and electronic patient records). Drawing on the theory of distributed cognition and narrative theory, primarily inspired by the work done within health care by Cheryl Mattingly, we propose that the creation of overview may be conceptualised as ‘distributed plot-making’. Distributed cognition focuses on the role of artefacts, humans and their interaction in information processing, while narrative theory focuses on how humans create narratives through plot construction. Hence, the concept of distributed plot-making highlights the distribution of information processing...

  5. Threshold-dependent sample sizes for selenium assessment with stream fish tissue

    Science.gov (United States)

    Hitt, Nathaniel P.; Smith, David R.

    2015-01-01

    Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased
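
    A parametric-bootstrap power sketch in the spirit of the study: fish Se modeled as a gamma distribution with an assumed mean-to-variance relationship (var = 0.25·mean², a placeholder rather than the fitted West Virginia relationship), with power estimated as the probability that a one-sided t-test flags a true mean above the threshold.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def power(true_mean, threshold, n_fish, alpha=0.05, reps=5000):
    var = 0.25 * true_mean**2                 # assumed mean-variance relation
    shape, scale = true_mean**2 / var, var / true_mean
    hits = 0
    for _ in range(reps):
        sample = rng.gamma(shape, scale, n_fish)
        t_stat = (sample.mean() - threshold) / (sample.std(ddof=1) / np.sqrt(n_fish))
        hits += t_stat > stats.t.ppf(1 - alpha, n_fish - 1)
    return hits / reps

# same 1 mg/kg excess, but power drops at the higher threshold because the
# variance grows with the mean -- the paper's central observation
print(power(true_mean=5.0, threshold=4.0, n_fish=8))
print(power(true_mean=9.0, threshold=8.0, n_fish=8))
```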

  6. Optimum sample size to estimate mean parasite abundance in fish parasite surveys

    Directory of Open Access Journals (Sweden)

    Shvydka S.

    2018-03-01

    Full Text Available To reach ethically and scientifically valid mean abundance values in parasitological and epidemiological studies, this paper considers analytic and simulation approaches for sample size determination. The sample size estimation was carried out by applying a mathematical formula with a predetermined precision level and a parameter of the negative binomial distribution estimated from the empirical data. A simulation approach to optimum sample size determination, aimed at estimating the true value of the mean abundance and its confidence interval (CI), was based on the Bag of Little Bootstraps (BLB). The abundances of two species of monogenean parasites, Ligophorus cephali and L. mediterraneus, from Mugil cephalus across Azov-Black Sea localities were subjected to the analysis. The dispersion pattern of both helminth species could be characterized as a highly aggregated distribution, with the variance being substantially larger than the mean abundance. The holistic approach applied here offers a wide range of appropriate methods for searching for the optimum sample size and an understanding of the expected precision level of the mean. Given the superior performance of the BLB relative to formulae, with its few assumptions, the bootstrap procedure is the preferred method. Two important assessments were performed in the present study: (i) based on CI width, a reasonable precision level for the mean abundance in parasitological surveys of Ligophorus spp. could be chosen between 0.8 and 0.5, corresponding to CI widths of 1.6 and 1 times the mean; and (ii) a sample size of 80 or more host individuals allows accurate and precise estimation of mean abundance. Meanwhile, for host sample sizes in the range between 25 and 40 individuals, the median estimates showed minimal bias but the sampling distribution was skewed toward low values; a sample size of 10 host individuals yielded unreliable estimates.
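
    A sketch of the analytic (formula-based) branch of the approach: for negative binomial counts with mean m and dispersion k, a classical sample size for target precision D (standard error as a fraction of the mean) is n = (1/D²)(1/m + 1/k). The m and k values below are placeholders, not the Ligophorus estimates.

```python
# n such that SE(mean)/mean = D for negative binomial counts: since
# Var = m + m^2/k, we get n = (1/D^2) * (1/m + 1/k).
import math

def nb_sample_size(m, k, D):
    return math.ceil((1 / D**2) * (1 / m + 1 / k))

m, k = 12.0, 0.4   # assumed mean abundance and aggregation parameter
for D in (0.1, 0.2, 0.3):
    print(f"D={D}: n = {nb_sample_size(m, k, D)}")
```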

  7. Confidence limits for contribution plots in multivariate statistical process control using bootstrap estimates.

    Science.gov (United States)

    Babamoradi, Hamid; van den Berg, Frans; Rinnan, Åsmund

    2016-02-18

    In Multivariate Statistical Process Control, when a fault is expected or detected in the process, contribution plots are essential for operators and optimization engineers in identifying those process variables that were affected by or might be the cause of the fault. The traditional way of interpreting a contribution plot is to examine the largest contributing process variables as the most probable faulty ones. This might result in false readings purely due to the differences in natural variation, measurement uncertainties, etc. It is more reasonable to compare variable contributions for new process runs with historical results achieved under Normal Operating Conditions, where confidence limits for contribution plots estimated from training data are used to judge new production runs. Asymptotic methods cannot provide confidence limits for contribution plots, leaving re-sampling methods as the only option. We suggest bootstrap re-sampling to build confidence limits for all contribution plots in online PCA-based MSPC. The new strategy to estimate CLs is compared to the previously reported CLs for contribution plots. An industrial batch process dataset was used to illustrate the concepts. Copyright © 2016 Elsevier B.V. All rights reserved.
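
    A minimal numpy sketch of the proposed strategy under stated assumptions: resample the Normal Operating Conditions data, refit PCA each time, recompute per-variable residual (SPE) contributions, and take a high percentile per variable as its limit. The synthetic data, component count, and the 95th percentile are illustrative; the paper's exact contribution definitions may differ.

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(100, 6)) @ rng.normal(size=(6, 6))  # synthetic NOC data
n_comp, B = 2, 500

def spe_contributions(X):
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_comp].T                  # PCA loadings
    E = Xc - Xc @ P @ P.T              # residual part of each observation
    return E**2                        # per-variable SPE contributions

boot = np.empty((B, X.shape[1]))
for b in range(B):
    Xb = X[rng.integers(0, len(X), len(X))]   # bootstrap resample of rows
    boot[b] = spe_contributions(Xb).mean(axis=0)

limits = np.percentile(boot, 95, axis=0)      # one limit per process variable
print(limits)
```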

  8. Sample size for post-marketing safety studies based on historical controls.

    Science.gov (United States)

    Wu, Yu-te; Makuch, Robert W

    2010-08-01

    As part of a drug's entire life cycle, post-marketing studies are an important part of the identification of rare, serious adverse events. Recently, the US Food and Drug Administration (FDA) has begun to implement new post-marketing safety mandates as a consequence of increased emphasis on safety. The purpose of this research is to provide an exact sample size formula for the proposed hybrid design, based on a two-group cohort study with incorporation of historical external data. An exact sample size formula based on the Poisson distribution is developed, because the detection of rare events is our outcome of interest. The performance of the exact method is compared to its approximate large-sample theory counterpart. The proposed hybrid design requires a smaller sample size compared to the standard two-group prospective study design. In addition, the exact method reduces the number of subjects required in the treatment group by up to 30% compared to the approximate method for the study scenarios examined. 2010 John Wiley & Sons, Ltd.
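
    An illustrative exact-Poisson sizing routine in the spirit of the paper (not its hybrid-design formula, which also incorporates the historical controls): find the smallest cohort size at which a one-sided exact test against the historical rate reaches the target power. Both event rates are assumed.

```python
from scipy.stats import poisson

def poisson_sample_size(lam0, lam1, alpha=0.05, power=0.80, n_max=100_000):
    for n in range(1, n_max):
        c = poisson.ppf(1 - alpha, n * lam0)     # exact critical count
        if 1 - poisson.cdf(c, n * lam1) >= power:
            return n
    raise ValueError("no n below n_max reaches the target power")

# e.g. a rare adverse event: historical 1 per 1000 person-years vs 3 per 1000
print(poisson_sample_size(lam0=0.001, lam1=0.003))
```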

  9. Sample size computation for association studies using case–parents ...

    Indian Academy of Sciences (India)

    Sample size needed to reach a given power has been studied by several authors (Knapp 1999; Schaid 1999; Chen and Deng 2001; Brown 2004). In their seminal paper, Risch and Merikangas (1996) showed that for a multiplicative mode of inheritance (MOI) for the susceptibility gene, sample size depends on two parameters: the frequency of the risk allele at the ...

  10. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization

    OpenAIRE

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the co...

  11. NPLOT - NASTRAN PLOT

    Science.gov (United States)

    Mcentire, K.

    1994-01-01

    NPLOT is an interactive computer graphics program for plotting undeformed and deformed NASTRAN finite element models (FEMs). Although there are many commercial codes already available for plotting FEMs, these have limited use due to their cost, speed, and lack of features to view BAR elements. NPLOT was specifically developed to overcome these limitations. On a vector type graphics device the two best ways to show depth are by hidden line plotting or haloed line plotting. A hidden line algorithm generates views of models with all hidden lines removed, and a haloed line algorithm displays views with aft lines broken in order to show depth while keeping the entire model visible. A haloed line algorithm is especially useful for plotting models composed of many line elements and few surface elements. The most important feature of NPLOT is its ability to create both hidden line and haloed line views accurately and much more quickly than with any other existing hidden or haloed line algorithms. NPLOT is also capable of plotting a normal wire frame view to display all lines of a model. NPLOT is able to aid in viewing all elements, but it has special features not generally available for plotting BAR elements. These features include plotting of TRUE LENGTH and NORMALIZED offset vectors and orientation vectors. Standard display operations such as rotation and perspective are possible, but different view planes such as X-Y, Y-Z, and X-Z may also be selected. Another display option is the Z-axis cut which allows a portion of the fore part of the model to be cut away to reveal details of the inside of the model. A zoom function is available to terminals with a locator (graphics cursor, joystick, etc.). The user interface of NPLOT is designed to make the program quick and easy to use. A combination of menus and commands with help menus for detailed information about each command allows experienced users greater speed and efficiency. Once a plot is on the screen the interface

  12. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

    Science.gov (United States)

    Algina, James; Olejnik, Stephen

    2000-01-01

    Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

  13. Sample size in psychological research over the past 30 years.

    Science.gov (United States)

    Marszalek, Jacob M; Barber, Carolyn; Kohlhart, Julie; Holmes, Cooper B

    2011-04-01

    The American Psychological Association (APA) Task Force on Statistical Inference was formed in 1996 in response to a growing body of research demonstrating methodological issues that threatened the credibility of psychological research, and made recommendations to address them. One issue was the small, even dramatically inadequate, size of samples used in studies published by leading journals. The present study assessed the progress made since the Task Force's final report in 1999. Sample sizes reported in four leading APA journals in 1955, 1977, 1995, and 2006 were compared using nonparametric statistics, while data from the last two waves were fit to a hierarchical generalized linear growth model for more in-depth analysis. Overall, results indicate that the recommendations for increasing sample sizes have not been integrated in core psychological research, although results slightly vary by field. This and other implications are discussed in the context of current methodological critique and practice.

  14. A flexible method for multi-level sample size determination

    International Nuclear Information System (INIS)

    Lu, Ming-Shih; Sanborn, J.B.; Teichmann, T.

    1997-01-01

    This paper gives a flexible method to determine sample sizes for both systematic and random error models (this pertains to sampling problems in nuclear safeguards). In addition, the method allows different attribute rejection limits. The new method could assist in achieving a higher detection probability and enhance inspection effectiveness

  15. Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.

    Science.gov (United States)

    Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham

    2017-12-01

    During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at two stages, that is, the blend stage and the tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for the process performance qualification (PPQ) and continued process verification (CPV) stages, by linking UDU to potential formulation and process risk factors. The Bayes success run theorem appeared to be the most appropriate approach among the various methods considered in this work for computing sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that at a low defect rate, the confidence to detect out-of-specification units decreases, which must be compensated by an increase in sample size to enhance the confidence in estimation. Based on the level of knowledge acquired during PPQ and the level of knowledge further required to comprehend the process, the sample size for CPV was calculated using Bayesian statistics to accomplish a reduced sampling design for CPV. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
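
    The quoted sample sizes follow from the success-run (zero-failure) theorem, n = ln(1 − C)/ln(R), for reliability R at confidence C. The abstract does not state the confidence level used; C = 0.95 is assumed below because it reproduces the reported 299/59/29 exactly.

```python
import math

def success_run_n(reliability, confidence=0.95):
    """Zero-failure (success-run) sample size: smallest n with
    1 - reliability**n >= confidence, i.e. n = ceil(ln(1-C) / ln(R))."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

for r in (0.99, 0.95, 0.90):      # high-, medium-, low-risk reliability levels
    print(r, success_run_n(r))    # -> 299, 59, 29, matching the abstract
```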

  16. Sample Size Calculation for Controlling False Discovery Proportion

    Directory of Open Access Journals (Sweden)

    Shulian Shang

    2012-01-01

    Full Text Available The false discovery proportion (FDP), the proportion of incorrect rejections among all rejections, is a direct measure of the abundance of false positive findings in multiple testing. Many methods have been proposed to control the FDP, but they are too conservative to be useful for power analysis. Study designs for controlling the mean of the FDP, which is the false discovery rate, have been commonly used. However, there has been little attempt to design studies with direct FDP control to achieve a certain level of efficiency. We provide a sample size calculation method using the variance formula of the FDP under weak-dependence assumptions to achieve the desired overall power. The relationship between design parameters and sample size is explored. The adequacy of the procedure is assessed by simulation. We illustrate the method using estimated correlations from a prostate cancer dataset.

  17. A normative inference approach for optimal sample sizes in decisions from experience

    Science.gov (United States)

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720

  18. D-GENIES: dot plot large genomes in an interactive, efficient and simple way.

    Science.gov (United States)

    Cabanettes, Floréal; Klopp, Christophe

    2018-01-01

    Dot plots are widely used to quickly compare sequence sets. They provide a synthetic similarity overview, highlighting repetitions, breaks and inversions. Different tools have been developed to easily generate genomic alignment dot plots, but they are often limited in the input sequence size. D-GENIES is a standalone and web application performing large genome alignments using the minimap2 software package and generating interactive dot plots. It enables users to sort query sequences along the reference, zoom in the plot and download several image, alignment or sequence files. D-GENIES is an easy-to-install, open-source software package (GPL) developed in Python and JavaScript. The source code is available at https://github.com/genotoul-bioinfo/dgenies and it can be tested at http://dgenies.toulouse.inra.fr/.

  19. EcoIS: An image serialization library for plot-based plant flowering phenology

    DEFF Research Database (Denmark)

    Granados, Joel A.; Bonnet, Philippe; Hansen, Lars Hostrup

    2013-01-01

    …they are produced by introducing an open-source Python (www.python.org) library called EcoIS that creates image series from unaligned pictures of specially equipped plots. We use EcoIS to sample flowering phenology plots in a high arctic environment and create image series that later generate phenophase counts...

  20. Validity of the t-plot method to assess microporosity in hierarchical micro/mesoporous materials.

    Science.gov (United States)

    Galarneau, Anne; Villemot, François; Rodriguez, Jeremy; Fajula, François; Coasne, Benoit

    2014-11-11

    The t-plot method is a well-known technique which allows determining the micro- and/or mesoporous volumes and the specific surface area of a sample by comparison with a reference adsorption isotherm of a nonporous material having the same surface chemistry. In this paper, the validity of the t-plot method is discussed in the case of hierarchical porous materials exhibiting both micro- and mesoporosities. Different hierarchical zeolites with MCM-41 type ordered mesoporosity are prepared using pseudomorphic transformation. For comparison, we also consider simple mechanical mixtures of microporous and mesoporous materials. We first show an intrinsic failure of the t-plot method; this method does not describe the fact that, for a given surface chemistry and pressure, the thickness of the film adsorbed in micropores or small mesopores […] depends on the pore size. The ability of the t-plot method to estimate the micro- and mesoporous volumes of hierarchical samples is then discussed, and an abacus is given to correct the microporous volume, which the t-plot method underestimates.

  1. A comparison of two sampling approaches for assessing the urban forest canopy cover from aerial photography.

    Science.gov (United States)

    Ucar Zennure; Pete Bettinger; Krista Merry; Jacek Siry; J.M. Bowker

    2016-01-01

    Two different sampling approaches for estimating urban tree canopy cover were applied to two medium-sized cities in the United States, in conjunction with two freely available remotely sensed imagery products. A random point-based sampling approach, which involved 1000 sample points, was compared against a plot/grid sampling (cluster sampling) approach that involved a...

  2. Rock sampling. [method for controlling particle size distribution

    Science.gov (United States)

    Blum, P. (Inventor)

    1971-01-01

    A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.

  3. The role of forest type in the variability of DOC in atmospheric deposition at forest plots in Italy.

    Science.gov (United States)

    Arisci, S; Rogora, M; Marchetto, A; Dichiaro, F

    2012-06-01

    Dissolved organic carbon (DOC) was studied in atmospheric deposition samples collected on a weekly basis in 2005-2009 at 10 forest plots in Italy. The plots covered a wide range of geographical attributes and were representative of the main forest types in Italy. Both spatial and temporal variations in DOC concentrations and fluxes are discussed, with the aim of identifying the main factors affecting DOC variability. DOC concentration increased from bulk to throughfall and stemflow water samples at all sites, as an effect of leaching from leaves and branches, going from 0.7–1.7 mg C L⁻¹ in bulk samples to 1.8–15.8 mg C L⁻¹ in throughfall and 4.2–10.7 mg C L⁻¹ in stemflow, with striking differences among the various plots. Low concentrations were found in runoff (0.5–2.0 mg C L⁻¹), showing that the export of DOC via running waters was limited. The seasonality of DOC in throughfall samples was evident, with the highest concentration in summer when biological activity is at a maximum, and minima in winter due to limited DOC production and leaching. Statistical analysis revealed that DOC had a close relationship with organic and total nitrogen, and with nutrient ions, and a negative correlation with precipitation amount. Forest type proved to be a major factor affecting DOC variability: concentration and, to a lesser extent, fluxes were lower in stands dominated by deciduous species. The character of evergreens, and the size and shape of their leaves and needles, which regulate the interception mechanism of dry deposition, are mainly responsible for this.

  4. Effects of sample size on the second magnetization peak in ...

    Indian Academy of Sciences (India)

    …8+ crystals are observed at low temperatures, above the temperature where the SMP totally disappears. In particular, the onset of the SMP shifts to lower fields as the sample size decreases – a result that could be interpreted as a size effect in ...

  5. Sample size for estimation of the Pearson correlation coefficient in cherry tomato tests

    Directory of Open Access Journals (Sweden)

    Bruno Giacomini Sari

    2017-09-01

    Full Text Available ABSTRACT: The aim of this study was to determine the required sample size for estimation of the Pearson coefficient of correlation between cherry tomato variables. Two uniformity tests were set up in a protected environment in the spring/summer of 2014. The variables observed in each plant were mean fruit length, mean fruit width, mean fruit weight, number of bunches, number of fruits per bunch, number of fruits, and total weight of fruits, and the Pearson correlation matrix between them was calculated. Sixty-eight sample sizes were planned for one greenhouse and 48 for the other, starting from an initial sample size of 10 plants and increasing in increments of five plants. For each planned sample size, 3000 estimates of the Pearson correlation coefficient were obtained through bootstrap re-samplings with replacement. The sample size for each correlation coefficient was determined as that at which the amplitude of the 95% confidence interval was less than or equal to 0.4. Estimating the Pearson correlation coefficient with high precision is difficult for parameters with a weak linear relation, so a larger sample size is necessary to estimate them. Linear relations involving variables dealing with the size and number of fruits per plant are estimated with less precision. To estimate the coefficient of correlation between productivity variables of cherry tomato with a 95% confidence interval amplitude of 0.4, it is necessary to sample 275 plants in a 250 m² greenhouse, and 200 plants in a 200 m² greenhouse.
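
    The resampling procedure described here is straightforward to reproduce. The following sketch uses synthetic bivariate data (true r ≈ 0.5) and illustrative thresholds, and finds the smallest planned sample size whose bootstrap 95% percentile interval for Pearson's r is at most 0.4 wide.

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic stand-in for the uniformity-trial data (true r ~ 0.5 assumed)
n_pop = 400
x = rng.normal(size=n_pop)
y = 0.5 * x + rng.normal(scale=np.sqrt(1 - 0.25), size=n_pop)

def ci_width(n, n_boot=3000):
    """Width of the 95% bootstrap percentile interval of r at planned size n."""
    rs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n_pop, size=n)       # resample with replacement
        rs[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    lo, hi = np.percentile(rs, [2.5, 97.5])
    return hi - lo

for n in range(10, n_pop + 1, 5):                  # planned sizes: 10, 15, 20, ...
    if ci_width(n) <= 0.4:
        print("required sample size:", n)          # around n = 55 for true r = 0.5
        break
```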

  6. Effect of sample size on bias correction performance

    Science.gov (United States)

    Reiter, Philipp; Gutjahr, Oliver; Schefczyk, Lukas; Heinemann, Günther; Casper, Markus C.

    2014-05-01

    The output of climate models often shows a bias when compared to observed data, so that preprocessing is necessary before using it as climate forcing in impact modeling (e.g. hydrology, species distribution). A common bias correction method is the quantile matching approach, which adapts the cumulative distribution function of the model output to the one of the observed data by means of a transfer function. Especially for precipitation we expect the bias correction performance to strongly depend on sample size, i.e. the length of the period used for calibration of the transfer function. We carry out experiments using the precipitation output of ten regional climate model (RCM) hindcast runs from the EU-ENSEMBLES project and the E-OBS observational dataset for the period 1961 to 2000. The 40 years are split into a 30 year calibration period and a 10 year validation period. In the first step, for each RCM, transfer functions are set up cell-by-cell, using the complete 30 year calibration period. The derived transfer functions are applied to the validation period of the respective RCM precipitation output and the mean absolute errors in reference to the observational dataset are calculated. These values are treated as "best fit" for the respective RCM. In the next step, this procedure is redone using subperiods out of the 30 year calibration period. The lengths of these subperiods are reduced from 29 years down to a minimum of 1 year, only considering subperiods of consecutive years. This leads to an increasing number of repetitions for smaller sample sizes (e.g. 2 for a length of 29 years). In the last step, the mean absolute errors are statistically tested against the "best fit" of the respective RCM to compare the performances. To analyze whether the intensity of the sample size effect depends on the chosen correction method, four variations of the quantile matching approach (PTF, QUANT/eQM, gQM, GQM) are applied in this study. The experiments are further
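
    The quantile matching approach itself is compact. Below is a minimal empirical variant, a sketch rather than any of the four specific implementations (PTF, QUANT/eQM, gQM, GQM) compared in the study; the toy gamma-distributed "precipitation" data are assumptions for illustration.

```python
import numpy as np

def quantile_map(model_cal, obs_cal, model_out, n_q=101):
    """Empirical quantile mapping: build a transfer function on the
    calibration period and apply it to new model output."""
    probs = np.linspace(0.0, 1.0, n_q)
    model_q = np.quantile(model_cal, probs)   # model CDF over calibration period
    obs_q = np.quantile(obs_cal, probs)       # observed CDF over calibration period
    # map each new model value onto the observed distribution
    return np.interp(model_out, model_q, obs_q)

# toy example: the model rains too lightly on average
rng = np.random.default_rng(0)
obs = rng.gamma(shape=2.0, scale=3.0, size=10_000)
mod = rng.gamma(shape=2.0, scale=2.0, size=10_000)
corrected = quantile_map(mod[:9_000], obs[:9_000], mod[9_000:])
print(obs.mean(), mod.mean(), corrected.mean())   # corrected mean approaches observed
```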

  7. The hexagon/panel system for selecting FIA plots under an annual inventory

    Science.gov (United States)

    Gary J. Brand; Mark D. Nelson; Daniel G. Wendt; Kevin K. Nimerfro

    2000-01-01

    Forest Inventory and Analysis (FIA) is changing to an annual nationwide forest inventory. This paper describes the sampling grid used to distribute FIA plots across the landscape and to allocate them to a particular measurement year. We also describe the integration of the FIA and Forest Health Monitoring (FHM) plot networks.

  8. Estimativas de tamanho de parcelas para avaliação de descritores fenotípicos em bananeira Estimates of plot size for the evaluation of phenotypic descriptors in banana

    Directory of Open Access Journals (Sweden)

    Sérgio Luiz Rodrigues Donato

    2008-08-01

    Full Text Available The objective of this work was to estimate the adequate size of experimental plots for the evaluation of phenotypic descriptors in banana, based on a uniformity trial with the cultivar Tropical, at a spacing of 3x2 m, in a useful area of nine rows of 40 plants. The following variables were evaluated: plant height; pseudostem perimeter; number of suckers emitted and number of live leaves, at flowering and at harvest; weight of the bunch and of the hands; number of hands and of fruits; weight of the second hand; and weight, length and diameter of the fruit, over two production cycles. The plants, taken as basic units, were combined to form plots of different sizes. The data were subjected to analysis of variance under a hierarchical model. Plot size was estimated by the maximum curvature, modified maximum curvature, and variance comparison methods. The soil heterogeneity index and the detectable difference between treatment means were determined. Variability increased between cycles, with effects on plot sizes, which varied with the method used, the variable evaluated, and the production cycle. The modified maximum curvature method gives the best-adjusted estimates. Plots of six basic units (36 m²) are appropriate for precise evaluation of phenotypic descriptors in banana.

  9. Surveillance of Site A and Plot M

    International Nuclear Information System (INIS)

    Golchert, N.W.

    1991-05-01

    The results of the environmental surveillance program conducted at Site A/Plot M in the Palos Forest Preserve area for CY 1990 are presented. The surveillance program is the ongoing remedial action that resulted from the 1976-1978 radiological characterization of the site. That study determined that very low levels of hydrogen-3 (as tritiated water) had migrated from the burial ground and were present in two nearby hand-pumped picnic wells. The current program consists of sample collection and analysis of air, surface and subsurface water, and bottom sediment. The results of the analyses are used to determine the migration pathway of water from the burial ground (Plot M) to the hand-pumped picnic wells, establish if buried radionuclides other than hydrogen-3 have migrated, and generally characterize the radiological environment of the area. Tritiated water continues to be detected in a number of wells, boreholes, dolomite holes, and a surface stream. Analyses since 1984 have indicated the presence of low levels of strontium-90 in water from a number of boreholes next to Plot M. The results of the surveillance program continue to indicate that the radioactivity remaining at Site A/Plot M does not endanger the health or safety of the public visiting the site or those living in the vicinity. 20 refs., 7 figs., 15 tabs

  10. In-situ polymerized PLOT columns III: divinylbenzene copolymers and dimethacrylate homopolymers

    Science.gov (United States)

    Shen, T. C.; Fong, M. M.

    1994-01-01

    Studies of divinylbenzene copolymers and dimethacrylate homopolymers indicate that the polymer pore size controls the separation of water and ammonia on porous-layer-open-tubular (PLOT) columns. To a lesser degree, the polarity of the polymers also affects the separation of a water-ammonia gas mixture. Our results demonstrate that the pore size can be regulated by controlling the cross-linking density or the chain length between the cross-linking functional groups. An optimum pore size will provide the best separation of water and ammonia.

  11. Overestimation of test performance by ROC analysis: Effect of small sample size

    International Nuclear Information System (INIS)

    Seeley, G.W.; Borgstrom, M.C.; Patton, D.D.; Myers, K.J.; Barrett, H.H.

    1984-01-01

    New imaging systems are often observer-rated by ROC techniques. For practical reasons the number of different images, or sample size (SS), is kept small. Any systematic bias due to small SS would bias system evaluation. The authors set about to determine whether the area under the ROC curve (AUC) would be systematically biased by small SS. Monte Carlo techniques were used to simulate observer performance in distinguishing signal (SN) from noise (N) on a 6-point scale; P(SN) = P(N) = .5. Four sample sizes (15, 25, 50 and 100 each of SN and N), three ROC slopes (0.8, 1.0 and 1.25), and three intercepts (0.8, 1.0 and 1.25) were considered. In each of the 36 combinations of SS, slope and intercept, 2000 runs were simulated. Results showed a systematic bias: the observed AUC exceeded the expected AUC in every one of the 36 combinations for all sample sizes, with the smallest sample sizes having the largest bias. This suggests that evaluations of imaging systems using ROC curves based on small sample size systematically overestimate system performance. The effect is consistent but subtle (maximum 10% of AUC standard deviation), and is probably masked by the s.d. in most practical settings. Although there is a statistically significant effect (F = 33.34, P<0.0001) due to sample size, none was found for either the ROC curve slope or intercept. Overestimation of test performance by small SS seems to be an inherent characteristic of the ROC technique that has not previously been described

  12. Splatterplots: overcoming overdraw in scatter plots.

    Science.gov (United States)

    Mayorga, Adrian; Gleicher, Michael

    2013-09-01

    We introduce Splatterplots, a novel presentation of scattered data that enables visualizations that scale beyond standard scatter plots. Traditional scatter plots suffer from overdraw (overlapping glyphs) as the number of points per unit area increases. Overdraw obscures outliers, hides data distributions, and makes the relationship among subgroups of the data difficult to discern. To address these issues, Splatterplots abstract away information such that the density of data shown in any unit of screen space is bounded, while allowing continuous zoom to reveal abstracted details. Abstraction automatically groups dense data points into contours and samples remaining points. We combine techniques for abstraction with perceptually based color blending to reveal the relationship between data subgroups. The resulting visualizations represent the dense regions of each subgroup of the data set as smooth closed shapes and show representative outliers explicitly. We present techniques that leverage the GPU for Splatterplot computation and rendering, enabling interaction with massive data sets. We show how Splatterplots can be an effective alternative to traditional methods of displaying scatter data, communicating data trends, outliers, and data set relationships much like traditional scatter plots, but scaling to data sets of higher density and up to millions of points on the screen.

  13. Test of methods for retrospective activity size distribution determination from filter samples

    International Nuclear Information System (INIS)

    Meisenberg, Oliver; Tschiersch, Jochen

    2015-01-01

    Determining the activity size distribution of radioactive aerosol particles requires sophisticated and heavy equipment, which makes measurements at a large number of sites difficult and expensive. Therefore, three methods for a retrospective determination of size distributions from aerosol filter samples in the laboratory were tested for their applicability. Extraction into a carrier liquid with subsequent nebulisation showed size distributions with a slight but correctable bias towards larger diameters compared with the original size distribution. Yields in the order of magnitude of 1% could be achieved. Sonication-assisted extraction into a carrier liquid caused a coagulation mode to appear in the size distribution. Sonication-assisted extraction into the air did not show acceptable results due to small yields. The method of extraction into a carrier liquid without sonication was applied to aerosol samples from Chernobyl in order to calculate inhalation dose coefficients for 137Cs based on the individual size distribution. The effective dose coefficient is about half of that calculated with a default reference size distribution. - Highlights: • Activity size distributions can be recovered after aerosol sampling on filters. • Extraction into a carrier liquid and subsequent nebulisation is appropriate. • This facilitates the determination of activity size distributions for individuals. • Size distributions from this method can be used for individual dose coefficients. • Dose coefficients were calculated for the workers at the new Chernobyl shelter

  14. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    Science.gov (United States)

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

    The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a median effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the
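
    The core finding is easy to reproduce by simulation. The sketch below takes the population SD of 44 from the abstract, assumes a pilot study of 25 subjects and the usual normal-approximation sizing formula, and estimates the probability that a trial sized from a single pilot SD actually reaches its planned 80% power.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sigma = 44.0                        # true population SD (from the abstract)
delta = 0.5 * sigma                 # median effect size, Cohen's d = 0.5
alpha, target = 0.05, 0.80
pilot_n = 25                        # assumed pilot-study size

def planned_n(sd):
    """Per-group n from the standard normal-approximation formula."""
    za, zb = stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(target)
    return max(int(np.ceil(2 * ((za + zb) * sd / delta) ** 2)), 2)

def true_power(n):
    """Exact power of the two-sample t test via the noncentral t distribution."""
    df = 2 * n - 2
    nc = delta / (sigma * np.sqrt(2.0 / n))
    return stats.nct.sf(stats.t.ppf(1 - alpha / 2, df), df, nc)

powers = [true_power(planned_n(np.std(rng.normal(0, sigma, pilot_n), ddof=1)))
          for _ in range(2000)]
# fraction of trials reaching planned power: typically just under 0.5,
# consistent with the abstract's "less than 50% chance"
print(np.mean(np.array(powers) >= target))
```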

  15. Field Plot Techniques for Black Sigatoka Evaluation in East African Highland Bananas

    Directory of Open Access Journals (Sweden)

    Okoro, JU.

    1997-01-01

    Full Text Available Number of plants per experimental unit and number of replications for the efficient and precise assessment of black sigatoka leaf spot disease caused by Mycosphaerella fijiensis in East African Highland bananas were determined. Two representative cultivars were used. Host response to black sigatoka infection was measured by recording the youngest leaf with necrotic spots. The number of plants per experimental unit was determined, using the methods of maximum curvature and comparison of variances, while the number of replications was estimated by Hatheway's method. The optimum experimental plot size was 3 plants (18 m²) for the beer banana cultivar 'Igitsiri', and 30 plants (180 m²) for the cooking banana cultivar 'Igisahira Gisanzwe', using the comparison of variances method. However, the optimum plot size was 15 plants (90 m²) for both cultivars using the method of maximum curvature. The latter statistical method was preferred because of the low precision of the estimates in the former method. Unreplicated trials with plots of 15 plants could be adequate to assess black sigatoka response in East African bananas if uniform disease pressure exists.

  16. Sample sizes and model comparison metrics for species distribution models

    Science.gov (United States)

    B.B. Hanberry; H.S. He; D.C. Dey

    2012-01-01

    Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be to produce an accurate model generally has been answered based on comparisons to maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial...

  17. Difference and ratio plots

    DEFF Research Database (Denmark)

    Svendsen, Anders Jørgen; Holmskov, U; Bro, Peter

    1995-01-01

    …hitherto unnoted differences between controls and patients with either rheumatoid arthritis or systemic lupus erythematosus. For this we use simple, but unconventional, graphic representations of the data, based on difference plots and ratio plots. Differences between patients with Burkitt's lymphoma and systemic lupus erythematosus from another previously published study (Macanovic, M. and Lachmann, P.J. (1979) Clin. Exp. Immunol. 38, 274) are also represented using ratio plots. Our observations indicate that analysis by regression may often be misleading.

  18. Graphical augmentations to the funnel plot assess the impact of additional evidence on a meta-analysis.

    Science.gov (United States)

    Langan, Dean; Higgins, Julian P T; Gregory, Walter; Sutton, Alexander J

    2012-05-01

    We aim to illustrate the potential impact of a new study on a meta-analysis, which gives an indication of the robustness of the meta-analysis. A number of augmentations are proposed to one of the most widely used of graphical displays, the funnel plot. Namely, 1) statistical significance contours, which define regions of the funnel plot in which a new study would have to be located to change the statistical significance of the meta-analysis; and 2) heterogeneity contours, which show how a new study would affect the extent of heterogeneity in a given meta-analysis. Several other features are also described, and the use of multiple features simultaneously is considered. The statistical significance contours suggest that one additional study, no matter how large, may have a very limited impact on the statistical significance of a meta-analysis. The heterogeneity contours illustrate that one outlying study can increase the level of heterogeneity dramatically. The additional features of the funnel plot have applications including 1) informing sample size calculations for the design of future studies eligible for inclusion in the meta-analysis; and 2) informing the updating prioritization of a portfolio of meta-analyses such as those prepared by the Cochrane Collaboration. Copyright © 2012 Elsevier Inc. All rights reserved.

  19. Influence of Sample Size on Automatic Positional Accuracy Assessment Methods for Urban Areas

    Directory of Open Access Journals (Sweden)

    Francisco J. Ariza-López

    2018-05-01

    Full Text Available In recent years, new approaches aimed at increasing the automation level of positional accuracy assessment processes for spatial data have been developed. However, in such cases, an aspect as significant as sample size has not yet been addressed. In this paper, we study the influence of sample size when estimating the planimetric positional accuracy of urban databases by means of an automatic assessment using polygon-based methodology. Our study is based on a simulation process, which extracts pairs of homologous polygons from the assessed and reference data sources and applies two buffer-based methods. The parameter used for determining the different sizes (which range from 5 km up to 100 km) has been the length of the polygons' perimeter, and for each sample size 1000 simulations were run. After completing the simulation process, the comparisons between the estimated distribution functions for each sample and the population distribution function were carried out by means of the Kolmogorov–Smirnov test. Results show a significant reduction in the variability of estimations when sample size increased from 5 km to 100 km.
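
    Stripped of the GIS specifics, the simulation logic reduces to drawing samples of increasing size from a population of positional discrepancies and comparing each estimated distribution against the population with a Kolmogorov–Smirnov test. A generic sketch, with synthetic lognormal "errors" and sample sizes standing in for the 5 km to 100 km perimeter classes:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
population = rng.lognormal(mean=0.0, sigma=0.6, size=100_000)  # stand-in error population

for size in (50, 200, 1000, 5000):           # stand-in for the 5 km ... 100 km samples
    ds = [stats.ks_2samp(rng.choice(population, size=size), population).statistic
          for _ in range(200)]
    # both the KS distance and its spread shrink as the sample grows
    print(size, np.mean(ds), np.std(ds))
```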

  20. Size effects on electrical properties of chemically grown zinc oxide nanoparticles

    Science.gov (United States)

    Rathod, K. N.; Joshi, Zalak; Dhruv, Davit; Gadani, Keval; Boricha, Hetal; Joshi, A. D.; Solanki, P. S.; Shah, N. A.

    2018-03-01

    In the present article, we study ZnO nanoparticles grown by a cost-effective sol–gel technique for various electrical properties. Structural studies performed by x-ray diffraction (XRD) revealed a hexagonal unit cell phase with no observed impurities. Transmission electron microscopy (TEM) and a particle size analyzer showed increased average particle size, due to agglomeration, with higher sintering. The dielectric constant (ε‧) decreases with increasing frequency because of the inability of the dipoles to follow the higher-frequency electric field. With higher sintering, the dielectric constant was reduced, owing to the increased formation of oxygen vacancy defects. Universal dielectric response (UDR) was verified by straight-line fitting of log (fε‧) versus log (f) plots. All samples exhibit UDR behavior, with a greater contribution from crystal cores at higher sintering temperatures. Impedance studies suggest an important role of boundary density, while Cole–Cole (Z″ versus Z‧) plots have been studied for the relaxation behavior of the samples. The average normalized change (ANC) in impedance has been studied for all the samples, wherein boundaries play an important role. Frequency-dependent electrical conductivity has been understood on the basis of Jonscher's universal power law. The Jonscher law fits suggest that charge-carrier conduction proceeds by the correlated barrier hopping (CBH) mechanism for the sample sintered at lower temperature, while for the ZnO samples sintered at higher temperatures a Maxwell–Wagner (M–W) relaxation process has been determined.
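
    Jonscher's universal power law, σ(ω) = σ_dc + Aω^s, is fitted to a conductivity spectrum in a few lines. The sketch below uses a synthetic spectrum (all parameter values assumed); an exponent 0 < s < 1 is the usual signature consistent with CBH-type conduction.

```python
import numpy as np
from scipy.optimize import curve_fit

def jonscher(omega, sigma_dc, A, s):
    """Jonscher universal power law: sigma(omega) = sigma_dc + A * omega**s."""
    return sigma_dc + A * omega ** s

# synthetic conductivity spectrum standing in for measured data
omega = np.logspace(2, 7, 60)                               # angular frequency, rad/s
rng = np.random.default_rng(3)
sigma = jonscher(omega, 1e-6, 1e-11, 0.75) * rng.normal(1.0, 0.02, omega.size)

popt, _ = curve_fit(jonscher, omega, sigma, p0=(1e-6, 1e-11, 0.8))
print("sigma_dc = %.2e, A = %.2e, s = %.2f" % tuple(popt))  # s near 0.75 recovered
```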

  1. Sample size determination for disease prevalence studies with partially validated data.

    Science.gov (United States)

    Qiu, Shi-Fang; Poon, Wai-Yin; Tang, Man-Lai

    2016-02-01

    Disease prevalence is an important topic in medical research, and its study is based on data that are obtained by classifying subjects according to whether a disease has been contracted. Classification can be conducted with high-cost gold standard tests or low-cost screening tests, but the latter are subject to the misclassification of subjects. As a compromise between the two, many research studies use partially validated datasets in which all data points are classified by fallible tests, and some of the data points are validated in the sense that they are also classified by the completely accurate gold-standard test. In this article, we investigate the determination of sample sizes for disease prevalence studies with partially validated data. We use two approaches. The first is to find sample sizes that can achieve a pre-specified power of a statistical test at a chosen significance level, and the second is to find sample sizes that can control the width of a confidence interval with a pre-specified confidence level. Empirical studies have been conducted to demonstrate the performance of various testing procedures with the proposed sample sizes. The applicability of the proposed methods is illustrated by a real-data example. © The Author(s) 2012.

  2. General Constraints on Sampling Wildlife on FIA Plots

    Science.gov (United States)

    Larissa L. Bailey; John R. Sauer; James D. Nichols; Paul H. Geissler

    2005-01-01

    This paper reviews the constraints to sampling wildlife populations at FIA points. Wildlife sampling programs must have well-defined goals and provide information adequate to meet those goals. Investigators should choose a state variable based on information needs and the spatial sampling scale. We discuss estimation-based methods for three state variables: species...

  3. Ultrasonic signal processing for sizing under-clad flaws

    International Nuclear Information System (INIS)

    Shankar, R.; Paradiso, T.J.; Lane, S.S.; Quinn, J.R.

    1985-01-01

    Ultrasonic digital data were collected from underclad cracks in sample pressure vessel specimen blocks. These blocks were weld-clad using different processes to simulate actual conditions in US pressurized water reactors. Each crack was represented by a flaw-echo dynamic curve, which is a plot of the transducer motion on the surface as a function of the ultrasonic response into the material. Crack depth sizing was performed by identifying in the dynamic curve the crack-tip diffraction signals from the upper and lower tips. This paper describes the experimental procedure, the digital signal processing methods used, and the algorithms developed for crack depth sizing

  4. Optimal Sample Size for Probability of Detection Curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2012-01-01

    The use of Probability of Detection (POD) curves to quantify NDT reliability is common in the aeronautical industry, but relatively less so in the nuclear industry. The European Network for Inspection Qualification's (ENIQ) Inspection Qualification Methodology is based on the concept of Technical Justification, a document assembling all the evidence to assure that the NDT system in focus is indeed capable of finding the flaws for which it was designed. This methodology has become widely used in many countries, but the assurance it provides is usually of qualitative nature. The need to quantify the output of inspection qualification has become more important, especially as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. To credit the inspections in structural reliability evaluations, a measure of the NDT reliability is necessary. A POD curve provides such metric. In 2010 ENIQ developed a technical report on POD curves, reviewing the statistical models used to quantify inspection reliability. Further work was subsequently carried out to investigate the issue of optimal sample size for deriving a POD curve, so that adequate guidance could be given to the practitioners of inspection reliability. Manufacturing of test pieces with cracks that are representative of real defects found in nuclear power plants (NPP) can be very expensive. Thus there is a tendency to reduce sample sizes and in turn reduce the conservatism associated with the POD curve derived. Not much guidance on the correct sample size can be found in the published literature, where often qualitative statements are given with no further justification. The aim of this paper is to summarise the findings of such work. (author)
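
    For hit/miss inspection data, a POD curve is most often fitted with a logistic model in log flaw size; that convention is assumed here rather than taken from the ENIQ report, and the synthetic trial data and a90 read-off below are illustrative only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)

# synthetic hit/miss inspection outcomes: detection improves with flaw size a (mm)
a = rng.uniform(0.5, 10.0, size=60)
p_true = 1.0 / (1.0 + np.exp(-(np.log(a) - np.log(3.0)) / 0.3))
hit = rng.binomial(1, p_true)

# logistic POD model in log flaw size
X = sm.add_constant(np.log(a))
fit = sm.GLM(hit, X, family=sm.families.Binomial()).fit()

b0, b1 = fit.params
a90 = np.exp((np.log(9.0) - b0) / b1)   # POD(a90) = 0.90, since logit(0.9) = ln 9
print("estimated a90 = %.2f mm (true value here is about 5.8 mm)" % a90)
```

    With only 60 hit/miss trials the estimated a90 scatters noticeably between random seeds, which is exactly the sample-size sensitivity the report addresses.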

  5. On Using a Pilot Sample Variance for Sample Size Determination in the Detection of Differences between Two Means: Power Consideration

    Science.gov (United States)

    Shieh, Gwowen

    2013-01-01

    The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…

  6. What is the optimum sample size for the study of peatland testate amoeba assemblages?

    Science.gov (United States)

    Mazei, Yuri A; Tsyganov, Andrey N; Esaulov, Anton S; Tychkov, Alexander Yu; Payne, Richard J

    2017-10-01

    Testate amoebae are widely used in ecological and palaeoecological studies of peatlands, particularly as indicators of surface wetness. To ensure data are robust and comparable it is important to consider methodological factors which may affect results. One significant question which has not been directly addressed in previous studies is how sample size (expressed here as number of Sphagnum stems) affects data quality. In three contrasting locations in a Russian peatland we extracted samples of differing size, analysed testate amoebae and calculated a number of widely-used indices: species richness, Simpson diversity, compositional dissimilarity from the largest sample and transfer function predictions of water table depth. We found that there was a trend for larger samples to contain more species across the range of commonly-used sample sizes in ecological studies. Smaller samples sometimes failed to produce counts of testate amoebae often considered minimally adequate. It seems likely that analyses based on samples of different sizes may not produce consistent data. Decisions about sample size need to reflect trade-offs between logistics, data quality, spatial resolution and the disturbance involved in sample extraction. For most common ecological applications we suggest that samples of more than eight Sphagnum stems are likely to be desirable. Copyright © 2017 Elsevier GmbH. All rights reserved.

  7. Hard magnetic property and δM(H) plot for sintered NdFeB magnet

    International Nuclear Information System (INIS)

    Gao, R.W.; Zhang, D.H.; Li, W.; Li, X.M.; Zhang, J.C.

    2000-01-01

    The hard magnetic properties and the interactions between the grains of sintered Nd₁₆Fe₇₃Co₅B₆ magnets are investigated by using the δM(H) plot technique. The results show that the δM(H) plot of a sintered NdFeB magnet can explain the effects of the microstructure (size, shape and orientation of the grains) and the intergrain interactions on the hard magnetic properties of the magnet. However, the value of δM(H) is positive when the applied field is not strong enough, which means that the common δM(H) plot theory is not completely applicable to sintered NdFeB magnets.
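
    A δM (Henkel) plot is computed from two normalized remanence curves measured on the same field grid, using the Wohlfarth relation m_d = 1 − 2m_r as the non-interacting baseline. A minimal sketch with toy curves (the measured data and the positive deviation are assumptions for illustration):

```python
import numpy as np

def delta_m(m_r, m_d):
    """Henkel/deltaM analysis: deltaM(H) = m_d(H) - (1 - 2*m_r(H)).
    Deviations from zero flag intergrain interactions (positive: exchange-like,
    negative: dipolar-like) relative to the non-interacting Wohlfarth line."""
    return np.asarray(m_d) - (1.0 - 2.0 * np.asarray(m_r))

# toy curves standing in for measured remanences of a sintered magnet
H = np.linspace(0.0, 2.0, 21)                         # applied field, arbitrary units
m_r = np.tanh(1.5 * H) ** 2                           # normalized IRM, runs 0 -> 1
m_d = 1.0 - 2.0 * m_r + 0.05 * np.sin(np.pi * m_r)    # slight positive deviation

print(np.round(delta_m(m_r, m_d), 3))   # positive bump at intermediate fields
```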

  8. Isodose plotting for pen plotters

    International Nuclear Information System (INIS)

    Rosen, I.I.

    1985-01-01

    A general algorithm for treatment plan isodose line plotting is described which is particularly useful for pen plotters. Unlike other methods of plotting isodose lines, this algorithm is designed specifically to reduce pen motion, thereby reducing plotting time and wear on the transport mechanism. Points with the desired dose value are extracted from the dose matrix and stored, sorted into continuous contours, and then plotted. This algorithm has been implemented on DEC PDP-11/60 and VAX-11/780 computers for use with two models of Houston Instrument pen plotters, two models of Tektronix vector graphics terminals, a DEC VT125 raster graphics terminal, and a DEC VS11 color raster graphics terminal. Its execution time is similar to simpler direct-plotting methods
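
    The pen-motion saving comes from the ordering step: once iso-dose points have been chained into continuous contours, the contours are drawn in a sequence that keeps pen-up moves short. Below is a greedy nearest-end ordering, an illustrative heuristic rather than the paper's exact sorting routine:

```python
import numpy as np

def order_for_plotting(contours, start=(0.0, 0.0)):
    """Greedily pick the contour whose nearest endpoint is closest to the
    current pen position, reversing it when its far end is nearer, so that
    pen-up travel (and transport-mechanism wear) stays small."""
    remaining = [np.asarray(c, dtype=float) for c in contours]
    pen = np.asarray(start, dtype=float)
    ordered = []
    while remaining:
        dists = []
        for c in remaining:
            d_head = np.hypot(*(c[0] - pen))    # distance to first point
            d_tail = np.hypot(*(c[-1] - pen))   # distance to last point
            dists.append((min(d_head, d_tail), d_tail < d_head))
        i = min(range(len(remaining)), key=lambda k: dists[k][0])
        c = remaining.pop(i)
        if dists[i][1]:                         # tail is closer: draw it reversed
            c = c[::-1]
        ordered.append(c)
        pen = c[-1]
    return ordered

# two toy contours; the second is drawn reversed to avoid a long pen-up move
paths = [np.array([[0, 0], [1, 0], [1, 1]]), np.array([[5, 5], [1, 2]])]
print([p.tolist() for p in order_for_plotting(paths)])
```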

  9. [Sample size calculation in clinical post-marketing evaluation of traditional Chinese medicine].

    Science.gov (United States)

    Fu, Yingkun; Xie, Yanming

    2011-10-01

    In recent years, as the Chinese government and public pay more attention to post-marketing research on Chinese medicine, a number of traditional Chinese medicine products have begun, or are about to begin, post-marketing evaluation studies. In post-marketing evaluation design, sample size calculation plays a decisive role. It not only ensures the accuracy and reliability of the post-marketing evaluation, but also assures that the intended trials will have the desired power for correctly detecting a clinically meaningful difference between the medicines under study if such a difference truly exists. Up to now, there has been no systematic method of sample size calculation tailored to traditional Chinese medicine. In this paper, according to the basic methods of sample size calculation and the characteristics of clinical evaluation of traditional Chinese medicine, sample size calculation methods for the efficacy and safety of Chinese medicine are discussed respectively. We hope the paper will be beneficial to medical researchers and pharmaceutical scientists who are engaged in Chinese medicine research.

  10. Surveillance of Site A and Plot M

    International Nuclear Information System (INIS)

    Golchert, N.W.

    1993-05-01

    The results of the environmental surveillance program conducted at Site A/Plot M in the Palos Forest Preserve area for CY 1992 are presented. The surveillance program is the ongoing remedial action that resulted from the 1976--1978 radiological characterization of the site. That study determined that very low levels of hydrogen-3 (as tritiated water) had migrated from the burial ground and were present in two nearby hand-pumped picnic wells. The current program consists of sample collection and analysis of air, surface and subsurface water, and bottom sediment. The results of the analyses are used to (1) determine the migration pathway of water from the burial ground (Plot M) to the hand-pumped picnic wells, (2) establish if buried radionuclides other than hydrogen-3 have migrated, and (3) generally characterize the radiological environment of the area. Hydrogen-3 in the Red Gate Woods picnic wells was still detected this year, but the average and maximum concentrations were significantly less than found earlier. Tritiated water continues to be detected in a number of wells, boreholes, dolomite holes, and a surface stream. For many years it was the only radionuclide found to have migrated in measurable quantities. Analyses since 1984 have indicated the presence of low levels of strontium-90 in water from a number of boreholes next to Plot M. The available data does not allow a firm conclusion as to whether the presence of this nuclide represents recent migration or movement that may have occurred before Plot M was capped. The results of the surveillance program continue to indicate that the radioactivity remaining at Site A/Plot M does not endanger the health or safety of the public visiting the site, using the picnic area, or living in the vicinity

  11. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Directory of Open Access Journals (Sweden)

    Ian J Fiske

    Full Text Available BACKGROUND: Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda; Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. METHODOLOGY/PRINCIPAL FINDINGS: Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. CONCLUSIONS/SIGNIFICANCE: We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high

  12. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Science.gov (United States)

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda; Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
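
    The Jensen's-inequality bias in lambda is easy to demonstrate: estimate vital rates from binomial samples of different sizes, rebuild the projection matrix, and compare the mean estimated lambda with the true one. A sketch with a hypothetical 2-stage matrix; all rates are assumptions for illustration, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

def lam(s_juv, s_adult, fec):
    """Dominant eigenvalue of a 2-stage (juvenile/adult) projection matrix."""
    A = np.array([[0.0, fec],
                  [s_juv, s_adult]])
    return np.max(np.abs(np.linalg.eigvals(A)))

s_juv, s_adult, fec = 0.3, 0.5, 2.0     # low survival, the paper's worst case
true_lambda = lam(s_juv, s_adult, fec)

for n in (10, 25, 50, 100, 500):        # individuals sampled per stage
    est = [lam(rng.binomial(n, s_juv) / n,
               rng.binomial(n, s_adult) / n,
               fec)                     # fecundity held fixed for simplicity
           for _ in range(5000)]
    print(n, np.mean(est) - true_lambda)   # bias shrinks as n grows
```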

  13. Size and shape characteristics of drumlins, derived from a large sample, and associated scaling laws

    Science.gov (United States)

    Clark, Chris D.; Hughes, Anna L. C.; Greenwood, Sarah L.; Spagnolo, Matteo; Ng, Felix S. L.

    2009-04-01

    Ice sheets flowing across a sedimentary bed usually produce a landscape of blister-like landforms streamlined in the direction of the ice flow and with each bump of the order of 10² to 10³ m in length and 10¹ m in relief. Such landforms, known as drumlins, have mystified investigators for over a hundred years. A satisfactory explanation for their formation, and thus an appreciation of their glaciological significance, has remained elusive. A recent advance has been in numerical modelling of the land-forming process. In anticipation of future modelling endeavours, this paper is motivated by the requirement for robust data on drumlin size and shape for model testing. From a systematic programme of drumlin mapping from digital elevation models and satellite images of Britain and Ireland, we used a geographic information system to compile a range of statistics on length L, width W, and elongation ratio E (where E = L/W) for a large sample. Mean L is found to be 629 m (n = 58,983), mean W is 209 m and mean E is 2.9 (n = 37,043). Most drumlins are between 250 and 1000 metres in length; between 120 and 300 metres in width; and between 1.7 and 4.1 times as long as they are wide. Analysis of such data and plots of drumlin width against length reveals some new insights. All frequency distributions are unimodal, from which we infer that the geomorphological label of 'drumlin' is fair in that this is a true single population of landforms, rather than an amalgam of different landform types. Drumlin size shows a clear minimum bound of around 100 m (horizontal). Maybe drumlins are generated at many scales and this is the minimum, or this value may be an indication of the fundamental scale of bump generation ('proto-drumlins') prior to them growing and elongating. A relationship between drumlin width and length is found (with r² = 0.48), which is approximately W = 7L^(1/2) when measured in metres. A surprising and sharply-defined line bounds the data cloud plotted in E–W ...

  14. Grain-size data from four cores from Walker Lake, Nevada

    International Nuclear Information System (INIS)

    Yount, J.C.; Quimby, M.F.

    1990-01-01

    A number of cores, taken from within and near Walker Lake, Nevada are being studied by various investigators in order to evaluate the late-Pleistocene paleoclimate of the west-central Great Basin. In particular, the cores provide records that can be interpreted in terms of past climate and compared to proposed numerical models of the region's climate. All of these studies are being carried out as part of an evaluation of the regional paleoclimatic setting of a proposed high-level nuclear waste storage facility at Yucca Mountain, Nevada. Changes in past climate often manifest themselves in changes in sedimentary processes or in changes in the volume of sediment transported by those processes. One fundamental sediment property that can be related to depositional processes is grain size. Grain size affects other physical properties of sediment such as porosity and permeability, which, in turn, affect the movement and chemistry of fluids. The purposes of this report are: (1) to document procedures of sample preparation and analysis, and (2) to summarize grain-size statistics for 659 samples from Walker Lake cores 84-4, 84-5, 84-8 and 85-2. Plots of mean particle diameter, percent sand, and the ratio of silt to clay are illustrated for various depth intervals within each core. Summary plots of mean grain size, sorting, and skewness parameters allow comparison of textural data between each core. 15 refs., 8 figs., 3 tabs

  15. Determining sample size for assessing species composition in ...

    African Journals Online (AJOL)

    Species composition is measured in grasslands for a variety of reasons. Commonly, observations are made using the wheel-point apparatus, but the problem of determining optimum sample size has not yet been satisfactorily resolved. In this study the wheel-point apparatus was used to record 2 000 observations in each of ...

  16. Sample size adjustments for varying cluster sizes in cluster randomized trials with binary outcomes analyzed with second-order PQL mixed logistic regression.

    Science.gov (United States)

    Candel, Math J J M; Van Breukelen, Gerard J P

    2010-06-30

    Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
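
    In planning terms, the abstract's findings reduce to two multipliers on a conventional equal-cluster-size calculation. The sketch below applies the worst-case values quoted above (14% more clusters to repair the efficiency loss from varying cluster sizes, and a variance conversion factor of at most 1.25 from first-order MQL to second-order PQL); it is a back-of-the-envelope illustration, not the paper's exact adjustment formulas.

        import math

        def clusters_adjusted(k_equal, varying_factor=1.14, pql_conversion=1.25):
            """Inflate a cluster count from a closed-form (first-order MQL,
            equal cluster sizes) calculation by the worst-case factors quoted
            in the abstract."""
            return math.ceil(k_equal * varying_factor * pql_conversion)

        print(clusters_adjusted(40))  # 40 clusters under ideal assumptions -> 57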

  17. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments.

    Science.gov (United States)

    Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello

    2013-10-26

    Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.
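
    The beta-binomial device mentioned above can be illustrated compactly. In the sketch below, clustering is folded into a single beta-binomial with an intra-cluster correlation rho, and the two misclassification risks of an accept/reject rule are read off its CDF; this is a simplified illustration of the idea, not the paper's design, which sizes clusters and individuals-per-cluster separately.

        from scipy.stats import betabinom

        def lqas_risks(n, d, p_high, p_low, rho):
            """Risks for an LQAS rule ('accept' when at least d of n sampled
            items are correct) under beta-binomial overdispersion."""
            def dist(p):
                a = p * (1 - rho) / rho
                b = (1 - p) * (1 - rho) / rho
                return betabinom(n, a, b)
            alpha = dist(p_high).cdf(d - 1)      # reject a truly high-quality lot
            beta = 1 - dist(p_low).cdf(d - 1)    # accept a truly low-quality lot
            return alpha, beta

        print(lqas_risks(n=50, d=40, p_high=0.9, p_low=0.7, rho=0.05))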

  18. The impact of sample size on the reproducibility of voxel-based lesion-deficit mappings.

    Science.gov (United States)

    Lorca-Puls, Diego L; Gajardo-Vidal, Andrea; White, Jitrachote; Seghier, Mohamed L; Leff, Alexander P; Green, David W; Crinion, Jenny T; Ludersdorfer, Philipp; Hope, Thomas M H; Bowman, Howard; Price, Cathy J

    2018-07-01

    This study investigated how sample size affects the reproducibility of findings from univariate voxel-based lesion-deficit analyses (e.g., voxel-based lesion-symptom mapping and voxel-based morphometry). Our effect of interest was the strength of the mapping between brain damage and speech articulation difficulties, as measured in terms of the proportion of variance explained. First, we identified a region of interest by searching on a voxel-by-voxel basis for brain areas where greater lesion load was associated with poorer speech articulation using a large sample of 360 right-handed English-speaking stroke survivors. We then randomly drew thousands of bootstrap samples from this data set that included either 30, 60, 90, 120, 180, or 360 patients. For each resample, we recorded effect size estimates and p values after conducting exactly the same lesion-deficit analysis within the previously identified region of interest and holding all procedures constant. The results show (1) how often small effect sizes in a heterogeneous population fail to be detected; (2) how effect size and its statistical significance vary with sample size; (3) how low-powered studies (due to small sample sizes) can greatly over-estimate as well as under-estimate effect sizes; and (4) how large sample sizes (N ≥ 90) can yield highly significant p values even when effect sizes are so small that they become trivial in practical terms. The implications of these findings for interpreting the results from univariate voxel-based lesion-deficit analyses are discussed. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.
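
    The resampling scheme described above is easy to emulate on a toy scale. The sketch below replaces the voxel-based analysis with a single univariate lesion-load-versus-deficit correlation in synthetic data, then draws bootstrap samples of each size and records the spread of effect sizes and how often p < 0.05; everything here (data, effect size, counts) is an illustrative assumption.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        N = 360                                   # full synthetic cohort
        lesion_load = rng.uniform(0, 1, N)
        deficit = 0.3 * lesion_load + rng.normal(0, 1, N)

        for n in (30, 60, 90, 120, 180, 360):
            r2, sig = [], 0
            for _ in range(2000):                 # bootstrap resamples of size n
                idx = rng.integers(0, N, n)
                r, p = stats.pearsonr(lesion_load[idx], deficit[idx])
                r2.append(r * r)
                sig += p < 0.05
            print(f"n={n:3d}  median R^2={np.median(r2):.3f}  "
                  f"range=({min(r2):.3f}, {max(r2):.3f})  detected {sig / 2000:.0%}")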

  19. Impact of animal waste application on runoff water quality in field experimental plots.

    Science.gov (United States)

    Hill, Dagne D; Owens, William E; Tchounwou, Paul B

    2005-08-01

    Animal waste from dairy and poultry operations is an economical and commonly used fertilizer in the state of Louisiana. The application of animal waste to pasture lands not only is a source of fertilizer, but also allows for a convenient method of waste disposal. The disposal of animal wastes on land is a potential nonpoint source of water degradation. Water degradation and human health are major concerns when considering the disposal of large quantities of animal waste. The objective of this research was to determine the effect of animal waste application on biological (fecal coliform, Enterobacter spp. and Escherichia coli) and physical/chemical (temperature, pH, nitrate nitrogen, ammonia nitrogen, phosphate, copper, zinc, and sulfate) characteristics of runoff water in experimental plots. The effects of the application of animal waste were evaluated by utilizing experimental plots and simulated rainfall events. Samples of runoff water were collected and analyzed for fecal coliforms, and the fecal coliforms isolated were identified to the species level. Analyses of temperature, ammonia nitrogen, nitrate nitrogen, iron, copper, phosphate, potassium, sulfate, zinc and bacterial levels were performed following standard test protocols as presented in Standard Methods for the Examination of Water and Wastewater [1]. In the experimental plots, less time was required in the tilled broiler litter plots for the measured chemicals to decrease below the initial pre-treatment levels. A decrease of over 50% was noted between the first and second rainfall events for sulfate levels. This decrease was seen after only four simulated rainfall events in tilled broiler litter plots whereas broiler litter plots required eight simulated rainfall events to show this same type of reduction. A reverse trend was seen in the broiler litter plots and the tilled broiler plots for potassium. Bacteria numbers

  20. Does increasing the size of bi-weekly samples of records influence results when using the Global Trigger Tool? An observational study of retrospective record reviews of two different sample sizes.

    Science.gov (United States)

    Mevik, Kjersti; Griffin, Frances A; Hansen, Tonje E; Deilkås, Ellen T; Vonen, Barthold

    2016-04-25

    To investigate the impact of increasing the sample of records reviewed bi-weekly with the Global Trigger Tool method to identify adverse events in hospitalised patients. Design: retrospective observational study. Setting: a Norwegian 524-bed general hospital trust. Records: 1920 medical records selected from 1 January to 31 December 2010. Outcome measures: rate, type and severity of adverse events identified in two different sample sizes of records, 10 and 70 records bi-weekly. In the large sample, 1.45 (95% CI 1.07 to 1.97) times more adverse events per 1000 patient days (39.3 adverse events/1000 patient days) were identified than in the small sample (27.2 adverse events/1000 patient days). Hospital-acquired infections were the most common category of adverse events in both samples, and the distributions of the other categories of adverse events did not differ significantly between the samples. The distribution of severity level of adverse events did not differ between the samples. The findings suggest that while the distribution of categories and severity are not dependent on the sample size, the rate of adverse events is. Further studies are needed to conclude if the optimal sample size may need to be adjusted based on the hospital size in order to detect a more accurate rate of adverse events. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  1. Predictors of Citation Rate in Psychology: Inconclusive Influence of Effect and Sample Size.

    Science.gov (United States)

    Hanel, Paul H P; Haase, Jennifer

    2017-01-01

    In the present article, we investigate predictors of how often a scientific article is cited. Specifically, we focus on the influence of two often neglected predictors of citation rate: effect size and sample size, using samples from two psychological topical areas. Both can be considered indicators of the importance of an article and of post hoc (or observed) statistical power, and should, especially in applied fields, predict citation rates. In Study 1, effect size did not have an influence on citation rates across a topical area, both with and without controlling for numerous variables that have been previously linked to citation rates. In contrast, sample size predicted citation rates, but only while controlling for other variables. In Study 2, sample size and, in part, effect size predicted citation rates, indicating that the relations vary even between scientific topical areas. Statistically significant results had more citations in Study 2 but not in Study 1. The results indicate that the importance (or power) of scientific findings may not be as strongly related to citation rate as is generally assumed.

  2. Plotting system for the MINCS code

    International Nuclear Information System (INIS)

    Watanabe, Tadashi

    1990-08-01

    The plotting system for the MINCS code is described. The transient two-phase flow analysis code MINCS has been developed to provide a computational tool for analysing various two-phase flow phenomena in one-dimensional ducts. Two plotting systems, namely the SPLPLOT system and the SDPLOT system, can be used as the plotting functions. The SPLPLOT system is used for plotting time transients of variables, while the SDPLOT system is for spatial distributions. The SPLPLOT system is based on the SPLPACK system, which is used as a general tool for plotting results of transient analysis codes or experiments. The SDPLOT is based on the GPLP program, which is also regarded as one of the general plotting programs. In the SPLPLOT and the SDPLOT systems, the standardized data format called the SPL format is used in reading data to be plotted. The output data format of MINCS is translated into the SPL format by using the conversion system called the MINTOSPL system. In this report, how to use the plotting functions is described. (author)

  3. Sample size calculation to externally validate scoring systems based on logistic regression models.

    Directory of Open Access Journals (Sweden)

    Antonio Palazón-Bru

    A sample size containing at least 100 events and 100 non-events has been suggested for validating a predictive model, regardless of the model being validated, even though certain factors (discrimination, parameterization and incidence) can influence calibration of the predictive model. Scoring systems based on binary logistic regression models are a specific type of predictive model. The aim of this study was to develop an algorithm to determine the sample size for validating a scoring system based on a binary logistic regression model and to apply it to a case study. The algorithm was based on bootstrap samples in which the area under the ROC curve, the observed event probabilities through smooth curves, and a measure of the lack of calibration (estimated calibration index) were calculated. To illustrate its use for interested researchers, the algorithm was applied to a scoring system, based on a binary logistic regression model, to determine mortality in intensive care units. In the case study provided, the algorithm obtained a sample size with 69 events, which is lower than the value suggested in the literature. An algorithm is provided for finding the appropriate sample size to validate scoring systems based on binary logistic regression models. This could be applied to determine the sample size in other similar cases.
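
    A minimal sketch of the bootstrap loop at the heart of such an algorithm: for a candidate validation sample size, repeatedly simulate validation sets, score them with the model, and summarize how precisely a performance measure is recovered. Only the AUC part is shown; the paper's estimated calibration index is not reproduced, and the toy logistic score, event rate and slope below are illustrative assumptions.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        def auc_precision(n, p_event, slope=1.0, n_boot=500, seed=0):
            """Bootstrap the AUC of a toy logistic scoring system on
            validation sets of size n."""
            rng = np.random.default_rng(seed)
            intercept = np.log(p_event / (1 - p_event))
            aucs = []
            for _ in range(n_boot):
                x = rng.normal(size=n)
                prob = 1 / (1 + np.exp(-(intercept + slope * x)))
                y = rng.binomial(1, prob)
                if y.min() == y.max():            # degenerate resample, skip
                    continue
                aucs.append(roc_auc_score(y, prob))
            return np.mean(aucs), np.std(aucs)

        for n in (100, 300, 1000):
            m, s = auc_precision(n, p_event=0.2)
            print(f"n={n:4d}  AUC = {m:.3f} +/- {s:.3f}")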

  4. Tracking Changes in Cardiac Output: Statistical Considerations on the 4-Quadrant Plot and the Polar Plot Methodology.

    Science.gov (United States)

    Saugel, Bernd; Grothe, Oliver; Wagner, Julia Y

    2015-08-01

    When comparing 2 technologies for measuring hemodynamic parameters with regard to their ability to track changes, 2 graphical tools are omnipresent in the literature: the 4-quadrant plot and the polar plot recently proposed by Critchley et al. The polar plot is thought to be the more advanced statistical tool, but care should be taken in its interpretation. The polar plot excludes possibly important measurements from the data. The polar plot transforms the data nonlinearly, which can make patterns in the data harder to see. In this article, we compare the 4-quadrant and the polar plot in detail and thoroughly describe the advantages and limitations of each. We also discuss pitfalls of both methods to prepare the researcher for their sound use. Finally, we briefly revisit the Bland-Altman plot for use in this context.
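
    As background, the usual summary statistic of the 4-quadrant plot is the concordance rate: the fraction of paired changes that agree in direction once pairs in which both methods report only a small change are dropped. The sketch below is a generic illustration of that calculation, not code from the article; the 0.5 L/min exclusion zone is a conventional choice for cardiac output, assumed here.

        import numpy as np

        def concordance_rate(delta_ref, delta_test, exclusion=0.5):
            """Fraction of paired changes agreeing in direction, after
            dropping pairs where both |changes| fall inside the central
            exclusion zone (one common variant of the rule)."""
            d_ref, d_test = np.asarray(delta_ref), np.asarray(delta_test)
            keep = (np.abs(d_ref) > exclusion) | (np.abs(d_test) > exclusion)
            agree = np.sign(d_ref[keep]) == np.sign(d_test[keep])
            return agree.mean(), int(keep.sum())

        rate, n_used = concordance_rate([1.2, -0.8, 0.2, 2.0], [0.9, -0.4, -0.3, 1.6])
        print(f"concordance {rate:.0%} on {n_used} retained pairs")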

  5. TREDRA, Minimal Cut Sets Fault Tree Plot Program

    International Nuclear Information System (INIS)

    Fussell, J.B.

    1983-01-01

    1 - Description of problem or function: TREDRA is a computer program for drafting report-quality fault trees. The input to TREDRA is similar to input for standard computer programs that find minimal cut sets from fault trees. Output includes fault tree plots containing all standard fault tree logic and event symbols, gate and event labels, and an output description for each event in the fault tree. TREDRA contains the following features: a variety of program options that allow flexibility in the program output; capability for automatic pagination of the output fault tree, when necessary; input groups which allow labeling of gates, events, and their output descriptions; a symbol library which includes standard fault tree symbols plus several less frequently used symbols; user control of character size and overall plot size; and extensive input error checking and diagnostic-oriented output. 2 - Method of solution: Fault trees are generated by user-supplied control parameters and a coded description of the fault tree structure consisting of the name of each gate, the gate type, the number of inputs to the gate, and the names of these inputs. 3 - Restrictions on the complexity of the problem: TREDRA can produce fault trees with a minimum of 3 and a maximum of 56 levels. The width of each level may range from 3 to 37. A total of 50 transfers are allowed during pagination.

  6. Size selective isocyanate aerosols personal air sampling using porous plastic foams

    International Nuclear Information System (INIS)

    Cong Khanh Huynh; Trinh Vu Duc

    2009-01-01

    As part of a European project (SMT4-CT96-2137), several European institutions specialized in occupational hygiene (BGIA, HSL, IOM, INRS, IST, Ambiente e Lavoro) established a programme of scientific collaboration to develop one or more prototypes of a European personal sampler for the simultaneous collection of three dust fractions: inhalable, thoracic and respirable. These samplers, based on existing sampling heads (IOM, GSP and cassettes), use Polyurethane Plastic Foam (PUF) plugs whose porosity provides both the collection substrate and the size separation of the particles. In this study, the authors present an original application of size-selective personal air sampling, using chemically impregnated PUF to capture and derivatize isocyanate aerosols in industrial spray-painting shops.

  7. An integrated approach for multi-level sample size determination

    International Nuclear Information System (INIS)

    Lu, M.S.; Teichmann, T.; Sanborn, J.B.

    1997-01-01

    Inspection procedures involving the sampling of items in a population often require steps of increasingly sensitive measurements, with correspondingly smaller sample sizes; these are referred to as multilevel sampling schemes. In the case of nuclear safeguards inspections verifying that there has been no diversion of Special Nuclear Material (SNM), these procedures have been examined often and increasingly complex algorithms have been developed to implement them. The aim in this paper is to provide an integrated approach, and, in so doing, to describe a systematic, consistent method that proceeds logically from level to level with increasing accuracy. The authors emphasize that the methods discussed are generally consistent with those presented in the references mentioned, and yield comparable results when the error models are the same. However, because of its systematic, integrated approach the proposed method elucidates the conceptual understanding of what goes on, and, in many cases, simplifies the calculations. In nuclear safeguards inspections, an important aspect of verifying nuclear items to detect any possible diversion of nuclear fissile materials is the sampling of such items at various levels of sensitivity. The first step usually is sampling by 'attributes', involving measurements of relatively low accuracy, followed by further levels of sampling involving greater accuracy. This process is discussed in some detail in the references given; also, the nomenclature is described. Here, the authors outline a coordinated step-by-step procedure for achieving such multilevel sampling, and they develop the relationships between the accuracy of measurement and the sample size required at each stage, i.e., at the various levels. The logic of the underlying procedures is carefully elucidated; the calculations involved and their implications are clearly described, and the process is put in a form that allows systematic generalization.

  8. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications.

    Directory of Open Access Journals (Sweden)

    Elias Chaibub Neto

    In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation on real and simulated data sets, bootstrapping Pearson's sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was considerably faster for small to moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling.
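
    The multinomial-weights formulation translates directly into array code. The sketch below (in Python/NumPy rather than the paper's R, an assumption made here for consistency with the other examples) bootstraps Pearson's correlation without materializing any resampled data set: each row of W holds one replication's multinomial counts scaled to weights, and the weighted first and second moments of all replications come from a few matrix products.

        import numpy as np

        def vectorized_bootstrap_corr(x, y, n_boot=10000, seed=0):
            """Multinomial-weights bootstrap of Pearson's correlation."""
            rng = np.random.default_rng(seed)
            n = len(x)
            W = rng.multinomial(n, np.full(n, 1.0 / n), size=n_boot) / n  # (B, n)
            mx, my = W @ x, W @ y                 # weighted means per replication
            mxx, myy, mxy = W @ (x * x), W @ (y * y), W @ (x * y)
            cov = mxy - mx * my
            return cov / np.sqrt((mxx - mx**2) * (myy - my**2))

        rng = np.random.default_rng(1)
        x = rng.normal(size=50)
        y = 0.6 * x + rng.normal(size=50)
        r = vectorized_bootstrap_corr(x, y)
        print(f"bootstrap r: mean = {r.mean():.3f}, "
              f"95% CI = ({np.quantile(r, 0.025):.3f}, {np.quantile(r, 0.975):.3f})")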

  9. Computing Confidence Bounds for Power and Sample Size of the General Linear Univariate Model

    OpenAIRE

    Taylor, Douglas J.; Muller, Keith E.

    1995-01-01

    The power of a test, the probability of rejecting the null hypothesis in favor of an alternative, may be computed using estimates of one or more distributional parameters. Statisticians frequently fix mean values and calculate power or sample size using a variance estimate from an existing study. Hence computed power becomes a random variable for a fixed sample size. Likewise, the sample size necessary to achieve a fixed power varies randomly. Standard statistical practice requires reporting ...
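
    The abstract's central point -- power computed from an estimated variance is itself a random quantity -- can be made concrete in the simplest setting. The sketch below is a one-sample t-test reduction of the idea (the paper treats the general linear univariate model): a chi-square confidence interval for the variance from a pilot study is pushed through the power function, whose monotone decrease in sigma turns variance bounds into power bounds. The function name and the example numbers are assumptions for illustration.

        import numpy as np
        from scipy import stats
        from statsmodels.stats.power import TTestPower

        def power_bounds(s2_pilot, df_pilot, delta, n, alpha=0.05, level=0.95):
            """Confidence bounds on one-sample t-test power when sigma^2 is
            estimated with df_pilot degrees of freedom; delta is the fixed
            mean difference, n the planned sample size."""
            lo_q, hi_q = (1 - level) / 2, 1 - (1 - level) / 2
            s2_hi = df_pilot * s2_pilot / stats.chi2.ppf(lo_q, df_pilot)
            s2_lo = df_pilot * s2_pilot / stats.chi2.ppf(hi_q, df_pilot)
            power = TTestPower()
            # Larger variance -> smaller standardized effect -> lower power.
            p_lo = power.power(effect_size=delta / np.sqrt(s2_hi), nobs=n, alpha=alpha)
            p_hi = power.power(effect_size=delta / np.sqrt(s2_lo), nobs=n, alpha=alpha)
            return p_lo, p_hi

        print(power_bounds(s2_pilot=4.0, df_pilot=19, delta=1.0, n=30))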

  10. Estimation of sample size and testing power (Part 3).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2011-12-01

    This article introduces the definition and sample size estimation of three special tests (namely, the non-inferiority test, equivalence test and superiority test) for qualitative data with the design of one factor with two levels having a binary response variable. A non-inferiority test refers to a research design whose objective is to verify that the efficacy of the experimental drug is not clinically inferior to that of the positive control drug. An equivalence test refers to a research design whose objective is to verify that the experimental drug and the control drug have clinically equivalent efficacy. A superiority test refers to a research design whose objective is to verify that the efficacy of the experimental drug is clinically superior to that of the control drug. Using specific examples, this article introduces sample size estimation formulas for the three special tests and their realization in SAS.
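
    The article's own formulas and SAS code are not reproduced here; as an indication of the form such calculations take, the sketch below implements one common closed-form per-group sample size for a non-inferiority comparison of two proportions (a Chow-Shao-Wang style normal-approximation formula), which may differ in detail from the article's.

        import math
        from scipy.stats import norm

        def n_noninferiority(p_t, p_c, margin, alpha=0.025, power=0.8):
            """Per-group n for H0: p_t - p_c <= -margin vs
            H1: p_t - p_c > -margin (one-sided alpha)."""
            z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
            variance = p_t * (1 - p_t) + p_c * (1 - p_c)
            return math.ceil((z_a + z_b) ** 2 * variance / (p_t - p_c + margin) ** 2)

        # Both arms truly 80% responders, 10-point margin, one-sided alpha 2.5%:
        print(n_noninferiority(0.80, 0.80, 0.10))  # -> 252 per group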

  11. Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.

    Science.gov (United States)

    Youssef, Noha H; Elshahed, Mostafa S

    2008-09-01

    Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that regardless of the utilized approach, species richness estimates obtained are dependent on the size of the analyzed clone libraries. We here propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (Maximum likelihood-based and rarefaction curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates obtained increased with the increase in library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.
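
    The extrapolation step of the proposed approach can be shown generically: compute richness estimates at a series of library sizes, fit a saturating curve, and read the asymptote off as the sample-size-unbiased richness. The sketch below uses made-up numbers and a Michaelis-Menten-style curve as stand-ins; the paper's actual estimators (ML-based, rarefaction-based, ACE-1) and fitted functional form are not reproduced.

        import numpy as np
        from scipy.optimize import curve_fit

        # Richness estimates from nested subsets of a clone library
        # (illustrative numbers, not the paper's data).
        library_size = np.array([1000, 2000, 4000, 8000, 13001])
        richness_est = np.array([4200, 7100, 10800, 14600, 17200])

        def saturating(n, s_max, k):
            """Michaelis-Menten-style saturation: -> s_max as n -> infinity."""
            return s_max * n / (k + n)

        (s_max, k), _ = curve_fit(saturating, library_size, richness_est,
                                  p0=(20000, 5000))
        print(f"sample-size-unbiased richness (asymptote): {s_max:.0f} species")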

  12. [Formal sample size calculation and its limited validity in animal studies of medical basic research].

    Science.gov (United States)

    Mayer, B; Muche, R

    2013-01-01

    Animal studies are highly relevant for basic medical research, although their use is discussed controversially in public. Thus, from a biometrical point of view, an optimal sample size should be sought for these projects. Statistical sample size calculation is usually the appropriate methodology for planning medical research projects. However, the required information is often not valid or is only available during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.

  13. Strategies for minimizing sample size for use in airborne LiDAR-based forest inventory

    Science.gov (United States)

    Junttila, Virpi; Finley, Andrew O.; Bradford, John B.; Kauranne, Tuomo

    2013-01-01

    Recently airborne Light Detection And Ranging (LiDAR) has emerged as a highly accurate remote sensing modality to be used in operational scale forest inventories. Inventories conducted with the help of LiDAR are most often model-based, i.e. they use variables derived from LiDAR point clouds as the predictive variables that are to be calibrated using field plots. The measurement of the necessary field plots is a time-consuming and statistically sensitive process. Because of this, current practice often presumes hundreds of plots to be collected. But since these plots are only used to calibrate regression models, it should be possible to minimize the number of plots needed by carefully selecting the plots to be measured. In the current study, we compare several systematic and random methods for calibration plot selection, with the specific aim that they be used in LiDAR based regression models for forest parameters, especially above-ground biomass. The primary criteria compared are based on both spatial representativity as well as on their coverage of the variability of the forest features measured. In the former case, it is important also to take into account spatial auto-correlation between the plots. The results indicate that choosing the plots in a way that ensures ample coverage of both spatial and feature space variability improves the performance of the corresponding models, and that adequate coverage of the variability in the feature space is the most important condition that should be met by the set of plots collected.

  14. Generating Random Samples of a Given Size Using Social Security Numbers.

    Science.gov (United States)

    Erickson, Richard C.; Brauchle, Paul E.

    1984-01-01

    The purposes of this article are (1) to present a method by which social security numbers may be used to draw cluster samples of a predetermined size and (2) to describe procedures used to validate this method of drawing random samples. (JOW)

  15. Impact of Animal Waste Application on Runoff Water Quality in Field Experimental Plots

    Directory of Open Access Journals (Sweden)

    Paul B. Tchounwou

    2005-08-01

    Animal waste from dairy and poultry operations is an economical and commonly used fertilizer in the state of Louisiana. The application of animal waste to pasture lands not only is a source of fertilizer, but also allows for a convenient method of waste disposal. The disposal of animal wastes on land is a potential nonpoint source of water degradation. Water degradation and human health are major concerns when considering the disposal of large quantities of animal waste. The objective of this research was to determine the effect of animal waste application on biological (fecal coliform, Enterobacter spp. and Escherichia coli) and physical/chemical (temperature, pH, nitrate nitrogen, ammonia nitrogen, phosphate, copper, zinc, and sulfate) characteristics of runoff water in experimental plots. The effects of the application of animal waste were evaluated by utilizing experimental plots and simulated rainfall events. Samples of runoff water were collected and analyzed for fecal coliforms, and the fecal coliforms isolated were identified to the species level. Analyses of temperature, ammonia nitrogen, nitrate nitrogen, iron, copper, phosphate, potassium, sulfate, zinc and bacterial levels were performed following standard test protocols as presented in Standard Methods for the Examination of Water and Wastewater [1]. In the experimental plots, less time was required in the tilled broiler litter plots for the measured chemicals to decrease below the initial pre-treatment levels. A decrease of over 50% was noted between the first and second rainfall events for sulfate levels. This decrease was seen after only four simulated rainfall events in tilled broiler litter plots whereas broiler litter plots required eight simulated rainfall events to show this same type of reduction. A reverse trend was seen in the broiler litter plots and the tilled broiler plots for potassium.

  16. Fifth International Symposium on Recurrence Plot

    CERN Document Server

    Riley, Michael; Giuliani, Alessandro; Webber, Charles L., Jr. (eds.): Translational Recurrences: From Mathematical Theory to Real-World Applications

    2014-01-01

    This book features 13 papers presented at the Fifth International Symposium on Recurrence Plots, held August 2013 in Chicago, IL. It examines recent applications and developments in recurrence plots and recurrence quantification analysis (RQA) with special emphasis on biological and cognitive systems and the analysis of coupled systems using cross-recurrence methods. Readers will discover new applications and insights into a range of systems provided by recurrence plot analysis and new theoretical and mathematical developments in recurrence plots. Recurrence plot based analysis is a powerful tool that operates on real-world complex systems that are nonlinear, non-stationary, noisy, of any statistical distribution, free of any particular model type, and not particularly long. Quantitative analyses promote the detection of system state changes, synchronized dynamical regimes, or classification of system states. The book will be of interest to an interdisciplinary audience of recurrence plot users and researchers.

  17. A first look at measurement error on FIA plots using blind plots in the Pacific Northwest

    Science.gov (United States)

    Susanna Melson; David Azuma; Jeremy S. Fried

    2002-01-01

    Measurement error in the Forest Inventory and Analysis work of the Pacific Northwest Station was estimated with a recently implemented blind plot measurement protocol. A small subset of plots was revisited by a crew having limited knowledge of the first crew's measurements. This preliminary analysis of the first 18 months' blind plot data indicates that...

  18. deltaPlotR: An R Package for Differential Item Functioning Analysis with Angoff's Delta Plot

    OpenAIRE

    David Magis; Bruno Facon

    2014-01-01

    Angoff's delta plot is a straightforward and not computationally intensive method to identify differential item functioning (DIF) among dichotomously scored items. This approach was recently improved by proposing an optimal threshold selection and by considering several item purification processes. Moreover, to support practical DIF analyses with the delta plot and these improvements, the R package deltaPlotR was also developed. The purpose of this paper is twofold: to outline the delta plot ...

  19. On sample size and different interpretations of snow stability datasets

    Science.gov (United States)

    Schirmer, M.; Mitterer, C.; Schweizer, J.

    2009-04-01

    Interpretations of snow stability variations need an assessment of the stability itself, independent of the scale investigated in the study. Studies on stability variations at a regional scale have often chosen stability tests such as the Rutschblock test or combinations of various tests in order to detect differences in aspect and elevation. The question arose: 'how capable are such stability interpretations in drawing conclusions?'. There are at least three possible error sources: (i) the variance of the stability test itself; (ii) the stability variance at an underlying slope scale; and (iii) the possibility that the stability interpretation is not directly related to the probability of skier triggering. Various stability interpretations have been proposed in the past that provide partly different results. We compared a subjective one based on expert knowledge with a more objective one based on a measure derived from comparing skier-triggered slopes vs. slopes that have been skied but not triggered. In this study, the uncertainties are discussed and their effects on regional scale stability variations are quantified in a pragmatic way. An existing dataset with very large sample sizes was revisited. This dataset contained the variance of stability at a regional scale for several situations. The stability in this dataset was determined using the subjective interpretation scheme based on expert knowledge. The question to be answered was how many measurements were needed to obtain similar results (mainly stability differences in aspect or elevation) as with the complete dataset. The optimal sample size was obtained in several ways: (i) assuming a nominal data scale, the sample size was determined for a given test, significance level and power, using the mean and standard deviation of the complete dataset; with this method it can also be determined whether the complete dataset itself constitutes an appropriate sample size. (ii) Smaller subsets were created with similar

  20. Support vector regression to predict porosity and permeability: Effect of sample size

    Science.gov (United States)

    Al-Anazi, A. F.; Gates, I. D.

    2012-02-01

    Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression or neural networks trained with core and geophysical logs suffer poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small sample size estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved using the structural risk minimization (SRM) inductive principle by matching the capability of the model to the available training data. One method that uses SRM is the support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. In particular, the impact of Vapnik's ɛ-insensitivity loss function and least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perceptron (MLP) neural network, a widely used regression method, which operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of the porosity and permeability with small sample size than the MLP method. Also, the performance of SVR depends on both kernel function
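
    As a generic illustration of the comparison described (not the paper's data or configuration), the sketch below pits scikit-learn's epsilon-SVR against a small multilayer perceptron on a synthetic log-to-porosity regression at several training sample sizes; every name and number here is an illustrative assumption.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.neural_network import MLPRegressor
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(0)

        def make_data(n):
            """Toy 'well logs -> porosity' problem standing in for core data."""
            logs = rng.normal(size=(n, 4))
            porosity = (0.2 + 0.05 * logs[:, 0] - 0.03 * logs[:, 1]
                        + rng.normal(0, 0.01, n))
            return logs, porosity

        X_test, y_test = make_data(500)
        for n_train in (15, 30, 100):             # deliberately small samples
            X, y = make_data(n_train)
            svr = SVR(kernel="rbf", C=1.0, epsilon=0.01).fit(X, y)
            mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                               random_state=0).fit(X, y)
            print(f"n={n_train:3d}  "
                  f"SVR MSE={mean_squared_error(y_test, svr.predict(X_test)):.5f}  "
                  f"MLP MSE={mean_squared_error(y_test, mlp.predict(X_test)):.5f}")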

  1. The PowerAtlas: a power and sample size atlas for microarray experimental design and research

    Directory of Open Access Journals (Sweden)

    Wang Jelai

    2006-02-01

    Background: Microarrays permit biologists to simultaneously measure the mRNA abundance of thousands of genes. An important issue facing investigators planning microarray experiments is how to estimate the sample size required for good statistical power. What is the projected sample size or number of replicate chips needed to address the multiple hypotheses with acceptable accuracy? Statistical methods exist for calculating power based upon a single hypothesis, using estimates of the variability in data from pilot studies. There is, however, a need for methods to estimate power and/or required sample sizes in situations where multiple hypotheses are being tested, such as in microarray experiments. In addition, investigators frequently do not have pilot data to estimate the sample sizes required for microarray studies. Results: To address this challenge, we have developed a Microarray PowerAtlas. The atlas enables estimation of statistical power by allowing investigators to appropriately plan studies by building upon previous studies that have similar experimental characteristics. Currently, there are sample sizes and power estimates based on 632 experiments from Gene Expression Omnibus (GEO). The PowerAtlas also permits investigators to upload their own pilot data and derive power and sample size estimates from these data. This resource will be updated regularly with new datasets from GEO and other databases such as The Nottingham Arabidopsis Stock Center (NASC). Conclusion: This resource provides a valuable tool for investigators who are planning efficient microarray studies and estimating required sample sizes.

  2. The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed.

    Science.gov (United States)

    Lee, Paul H; Tse, Andy C Y

    2017-05-01

    There are limited data on the quality of reporting of information essential for replication of sample size calculations, as well as on the accuracy of those calculations. We examined the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed, the variation in reporting across study design, study characteristics, and journal impact factor, and the targeted sample sizes reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and recalculated sample sizes was 0.0% (IQR −4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers provided a targeted sample size in trial registries and about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. The reporting of the sample size calculation in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.

  3. The role of shore crabs and mussel density in mussel losses at a commercial intertidal mussel plot after seeding

    NARCIS (Netherlands)

    Capelle, Jacob J.; Scheiberlich, Gerard; Wijsman, Jeroen W.M.; Smaal, Aad C.

    2016-01-01

    Mussel losses peak after relaying seed on culture plots. The present paper is an attempt to examine the role of shore crab predation and initial mussel density on mussel losses in mussel bottom culture using an intertidal culture plot as a case study. Because of their small size and loose

  4. Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains

    Science.gov (United States)

    Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.

    2013-12-01

    Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs, and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI such as digital hemispherical photography (DHP) or using a LI-COR 2200 Plant Canopy Analyzer. These LAI estimations can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a short grass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four, 300 meter transects, with clip harvests plots spaced every 50m, and LAI sub-transects spaced every 10m. LAI was measured at four points along 6m sub-transects running perpendicular to the 300m transect. Clip harvest plots were co-located 4m from corresponding LAI transects, and had dimensions of 0.1m by 2m. We conducted regression analyses

  5. Average Stand Age from Forest Inventory Plots Does Not Describe Historical Fire Regimes in Ponderosa Pine and Mixed-Conifer Forests of Western North America.

    Directory of Open Access Journals (Sweden)

    Jens T Stevens

    Quantifying historical fire regimes provides important information for managing contemporary forests. Historical fire frequency and severity can be estimated using several methods; each method has strengths and weaknesses and presents challenges for interpretation and verification. Recent efforts to quantify the timing of historical high-severity fire events in forests of western North America have assumed that the "stand age" variable from the US Forest Service Forest Inventory and Analysis (FIA) program reflects the timing of historical high-severity (i.e. stand-replacing) fire in ponderosa pine and mixed-conifer forests. To test this assumption, we re-analyze the dataset used in a previous analysis, and compare information from fire history records with information from co-located FIA plots. We demonstrate that 1) the FIA stand age variable does not reflect the large range of individual tree ages in the FIA plots: older trees comprised more than 10% of pre-stand age basal area in 58% of plots analyzed and more than 30% of pre-stand age basal area in 32% of plots, and 2) recruitment events are not necessarily related to high-severity fire occurrence. Because the FIA stand age variable is estimated from a sample of tree ages within the tree size class containing a plurality of canopy trees in the plot, it does not necessarily include the oldest trees, especially in uneven-aged stands. Thus, the FIA stand age variable does not indicate whether the trees in the predominant size class established in response to severe fire, or established during the absence of fire. FIA stand age was not designed to measure the time since a stand-replacing disturbance. Quantification of historical "mixed-severity" fire regimes must be explicit about the spatial scale of high-severity fire effects, which is not possible using FIA stand age data.

  6. Average Stand Age from Forest Inventory Plots Does Not Describe Historical Fire Regimes in Ponderosa Pine and Mixed-Conifer Forests of Western North America.

    Science.gov (United States)

    Stevens, Jens T; Safford, Hugh D; North, Malcolm P; Fried, Jeremy S; Gray, Andrew N; Brown, Peter M; Dolanc, Christopher R; Dobrowski, Solomon Z; Falk, Donald A; Farris, Calvin A; Franklin, Jerry F; Fulé, Peter Z; Hagmann, R Keala; Knapp, Eric E; Miller, Jay D; Smith, Douglas F; Swetnam, Thomas W; Taylor, Alan H

    Quantifying historical fire regimes provides important information for managing contemporary forests. Historical fire frequency and severity can be estimated using several methods; each method has strengths and weaknesses and presents challenges for interpretation and verification. Recent efforts to quantify the timing of historical high-severity fire events in forests of western North America have assumed that the "stand age" variable from the US Forest Service Forest Inventory and Analysis (FIA) program reflects the timing of historical high-severity (i.e. stand-replacing) fire in ponderosa pine and mixed-conifer forests. To test this assumption, we re-analyze the dataset used in a previous analysis, and compare information from fire history records with information from co-located FIA plots. We demonstrate that 1) the FIA stand age variable does not reflect the large range of individual tree ages in the FIA plots: older trees comprised more than 10% of pre-stand age basal area in 58% of plots analyzed and more than 30% of pre-stand age basal area in 32% of plots, and 2) recruitment events are not necessarily related to high-severity fire occurrence. Because the FIA stand age variable is estimated from a sample of tree ages within the tree size class containing a plurality of canopy trees in the plot, it does not necessarily include the oldest trees, especially in uneven-aged stands. Thus, the FIA stand age variable does not indicate whether the trees in the predominant size class established in response to severe fire, or established during the absence of fire. FIA stand age was not designed to measure the time since a stand-replacing disturbance. Quantification of historical "mixed-severity" fire regimes must be explicit about the spatial scale of high-severity fire effects, which is not possible using FIA stand age data.

  7. On-Line 1D and 2D PLOT/LC-ESI-MS Using 10 μm i.d. Poly(styrene–divinylbenzene) Porous Layer Open Tubular (PLOT) Columns For Ultrasensitive Proteomic Analysis

    Science.gov (United States)

    Luo, Quanzhou; Yue, Guihua; Valaskovic, Gary A; Gu, Ye; Wu, Shiaw-Lin; Karger, Barry L.

    2008-01-01

    Following on our recent work, on-line one-dimensional (1D) and two-dimensional (2D) PLOT/LC-ESI-MS platforms using 3.2 m × 10 μm i.d. poly(styrene-divinylbenzene) (PS-DVB) porous layer open tubular (PLOT) columns have been developed to provide robust, high-performance and ultrasensitive proteomic analysis. Using a PicoClear tee, the dead volume of the connection between a 50 μm i.d. PS-DVB monolithic microSPE column and the PLOT column was minimized. The microSPE/PLOT column assembly provided a separation performance similar to that obtained with direct injection onto the PLOT column at a mobile phase flow rate of 20 nL/min. The trace analysis potential of the platform was evaluated using an in-gel tryptic digest sample of a gel fraction (15 to 40 kDa) of a cervical cancer (SiHa) cell line. As an example of the sensitivity of the system, ∼2.5 ng of protein in 2 μL solution, an amount corresponding to 20 SiHa cells, was subjected to on-line microSPE-PLOT/LC-ESI-MS/MS analysis using a linear ion trap MS. 237 peptides associated with 163 unique proteins were identified from a single analysis when using stringent criteria associated with a false positive rate of less than 1%. The number of identified peptides and proteins increased to 638 and 343, respectively, as the injection amount was raised to ∼45 ng of protein, an amount corresponding to 350 SiHa cells. In comparison, only 338 peptides and 231 unique proteins were identified (false positive rate again less than 1%) from 750 ng of protein from the identical gel fraction, an amount corresponding to 6000 SiHa cells, using a typical 15 cm × 75 μm i.d. packed capillary column. The greater sensitivity, higher recovery, and higher resolving power of the PLOT column resulted in the increased number of identifications from only ∼5% of the injected sample amount. The resolving power of the microSPE/PLOT assembly was further extended by 2D chromatography via combination of the high-efficiency reversed-phase PLOT column

  8. [Effect of comprehensive control and prevention for chronic disease in demonstration plot of Chongqing].

    Science.gov (United States)

    Qi, Li; Ding, Xian-bin; Mao, De-qiang; Feng, Lian-gui; Wang, Yu-lin; Jiao, Yan; Zhang, Chun-hua; Lü, Xiao-yan; Li, Hong; Xia, Yi-yin

    2013-03-01

    To evaluate the effect of comprehensive control and prevention of chronic diseases in the demonstration plots of Chongqing. Residents were enrolled through a multi-stage stratified random sampling method from 17 districts or counties which had successfully established demonstration plots and 21 districts or counties which had not yet established demonstration plots (non-demonstration plots for short) in May 2012. A questionnaire was designed to survey awareness of health knowledge, health behaviors and utilization of health supportive tools. The results were analyzed with SPSS 15.0 software. We investigated 15 108 residents, 6156 of whom were in demonstration plots and the others (8951) were not. The findings revealed that the percentages of people aware of the national action on healthy lifestyle in demonstration and non-demonstration plots were 44.4% (2734/6157) and 40.2% (3598/8951), respectively; awareness that too much sodium is a risk factor for hypertension was 72.4% (4458/6156) and 67.5% (6042/8951), respectively; and awareness that obesity and overweight are risk factors for cardiovascular disease (CVD) was 77.2% (4753/6157) and 69.6% (6230/8951), respectively. Regarding residents' health behaviors in demonstration and non-demonstration plots, the utilization rates of salt-restriction scoops or pots were 23.5% (1447/6157) and 17.9% (1602/8951), and the utilization rates of oil-restriction pots were 16.7% (1028/6157) and 11.8% (1064/8951), respectively. In total, 33 of the 37 indexes were higher in demonstration plots than in non-demonstration plots (P < 0.05). The comprehensive control and prevention of chronic diseases in the demonstration plots was more effective, and a remarkable improvement in health knowledge and behaviors had been achieved in the demonstration plots.

  9. Differentiating gold nanorod samples using particle size and shape distributions from transmission electron microscope images

    Science.gov (United States)

    Grulke, Eric A.; Wu, Xiaochun; Ji, Yinglu; Buhr, Egbert; Yamamoto, Kazuhiro; Song, Nam Woong; Stefaniak, Aleksandr B.; Schwegler-Berry, Diane; Burchett, Woodrow W.; Lambert, Joshua; Stromberg, Arnold J.

    2018-04-01

    Size and shape distributions of gold nanorod samples are critical to their physico-chemical properties, especially their longitudinal surface plasmon resonance. This interlaboratory comparison study developed methods for measuring and evaluating size and shape distributions for gold nanorod samples using transmission electron microscopy (TEM) images. The objective was to determine whether two different samples, which had different performance attributes in their application, were different with respect to their size and/or shape descriptor distributions. Touching particles in the captured images were identified using a ruggedness shape descriptor. Nanorods could be distinguished from nanocubes using an elongational shape descriptor. A non-parametric statistical test showed that cumulative distributions of an elongational shape descriptor, that is, the aspect ratio, were statistically different between the two samples for all laboratories. While the scale parameters of size and shape distributions were similar for both samples, the width parameters of size and shape distributions were statistically different. This protocol fulfills an important need for a standardized approach to measure gold nanorod size and shape distributions for applications in which quantitative measurements and comparisons are important. Furthermore, the validated protocol workflow can be automated, thus providing consistent and rapid measurements of nanorod size and shape distributions for researchers, regulatory agencies, and industry.

  10. Bayesian sample size determination for cost-effectiveness studies with censored data.

    Directory of Open Access Journals (Sweden)

    Daniel P Beavers

    Cost-effectiveness models are commonly utilized to determine the combined clinical and economic impact of one treatment compared to another. However, most methods for sample size determination of cost-effectiveness studies assume fully observed costs and effectiveness outcomes, which presents challenges for survival-based studies in which censoring exists. We propose a Bayesian method for the design and analysis of cost-effectiveness data in which costs and effectiveness may be censored, and the sample size is approximated for both power and assurance. We explore two parametric models and demonstrate the flexibility of the approach to accommodate a variety of modifications to study assumptions.
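
    For readers unfamiliar with the assurance criterion mentioned above, the sketch below shows it in its simplest form, stripped of the paper's censoring and cost-effectiveness structure: assurance is the unconditional probability of trial success, i.e. power averaged over a prior on the true effect. All distributions and numbers are illustrative assumptions.

        import numpy as np
        from scipy import stats

        def assurance(n_per_arm, prior_mean, prior_sd, sigma=1.0, alpha=0.05,
                      n_sim=20000, seed=0):
            """Monte Carlo assurance for a two-arm comparison of means."""
            rng = np.random.default_rng(seed)
            effect = rng.normal(prior_mean, prior_sd, n_sim)  # truth from prior
            se = sigma * np.sqrt(2.0 / n_per_arm)
            z = rng.normal(effect / se, 1.0)                  # observed z-statistic
            return np.mean(z > stats.norm.ppf(1 - alpha / 2))

        for n in (50, 100, 200):
            print(n, round(assurance(n, prior_mean=0.3, prior_sd=0.2), 3))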

  11. A Comparison of Hazard Prediction and Assessment Capability (HPAC) Software Dose-Rate Contour Plots to a Sample of Local Fallout Data From Test Detonations in the Continental United States, 1945 - 1962

    National Research Council Canada - National Science Library

    Chancellor, Richard W

    2005-01-01

    A comparison of Hazard Prediction and Assessment Capability (HPAC) software dose-rate contour plots to a sample of local nuclear fallout data from test detonations in the continental United States, 1945 - 1962, is performed...

  12. Development of sample size allocation program using hypergeometric distribution

    International Nuclear Information System (INIS)

    Kim, Hyun Tae; Kwack, Eun Ho; Park, Wan Soo; Min, Kyung Soo; Park, Chan Sik

    1996-01-01

    The objective of this research is the development of a sample allocation program using the hypergeometric distribution, built with an object-oriented method. When the IAEA (International Atomic Energy Agency) performs inspection, it simply applies a standard binomial distribution, which describes sampling with replacement, instead of a hypergeometric distribution, which describes sampling without replacement, when allocating samples to up to three verification methods. The objective of the IAEA inspection is the timely detection of diversion of significant quantities of nuclear material, therefore game theory is applied to its sampling plan. It is necessary to use the hypergeometric distribution directly, or an approximating distribution, to secure statistical accuracy. The improved binomial approximation developed by Mr. J. L. Jaech and a correctly applied binomial approximation are closer to the hypergeometric distribution in sample size calculation than the simply applied binomial approximation of the IAEA. Object-oriented programs for (1) sample approximate-allocation with the correctly applied standard binomial approximation, (2) sample approximate-allocation with the improved binomial approximation, and (3) sample approximate-allocation with the hypergeometric distribution were developed with Visual C++, and corresponding programs were developed with EXCEL (using Visual Basic for Applications). 8 tabs., 15 refs. (Author)
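
    The with- versus without-replacement distinction at issue is easy to see numerically. The sketch below compares the exact hypergeometric probability of catching at least one defective item with its binomial (with-replacement) approximation; the population and sample numbers are illustrative, not the program's inputs.

        from scipy.stats import hypergeom

        def detection_probability(N, d, n):
            """P(at least one of d defectives appears in a sample of n drawn
            without replacement from N items); scipy's hypergeom takes
            (k, M=population, n=successes, N=draws)."""
            return 1.0 - hypergeom.pmf(0, N, d, n)

        def binomial_approx(N, d, n):
            """Sampling-with-replacement approximation, adequate when n << N."""
            return 1.0 - (1.0 - d / N) ** n

        N, d = 100, 10
        for n in (5, 10, 20):
            print(f"n={n:2d}  exact={detection_probability(N, d, n):.3f}  "
                  f"binomial={binomial_approx(N, d, n):.3f}")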

  13. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

    Science.gov (United States)

    Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

    2018-03-01

    To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ÊS) and a 95% CI (ÊS_L, ÊS_U) calculated on a mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [n_L(ÊS_U), n_U(ÊS_L)] were obtained on a post hoc sample size reflecting the uncertainty in ÊS. Sample size calculations were based on a one-sample t-test as the patient numbers needed to provide 80% power at α = 0.05 to reject a null hypothesis H0: ES = 0 versus alternative hypotheses H1: ES = ÊS, ES = ÊS_L and ES = ÊS_U. We aimed to provide point and interval estimates on projected sample sizes for future studies reflecting the uncertainty in our study ÊSs. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample size for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
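    The underlying sample size arithmetic is simple to reproduce; a sketch with statsmodels, using invented effect-size values in place of ÊS and its CI bounds:

```python
# Patients needed for 80% power at alpha = 0.05 (one-sample t-test),
# evaluated at a point estimate of Cohen's ES and at its 95% CI bounds.
# The three ES values are illustrative, not the study's estimates.
import math
from statsmodels.stats.power import TTestPower

solver = TTestPower()
for es in (0.62, 1.05, 0.18):         # hypothetical ES-hat, upper, lower
    n = solver.solve_power(effect_size=es, alpha=0.05, power=0.80,
                           alternative='two-sided')
    print(f"ES = {es:.2f}  ->  n = {math.ceil(n)}")
```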

  14. Effects of sample size on robustness and prediction accuracy of a prognostic gene signature

    Directory of Open Access Journals (Sweden)

    Kim Seon-Young

    2009-05-01

    Full Text Available Abstract Background Low overlap between independently developed gene signatures and poor inter-study applicability of gene signatures are two major concerns raised in the development of microarray-based prognostic gene signatures. One recent study suggested that thousands of samples are needed to generate a robust prognostic gene signature. Results A data set of 1,372 samples was generated by combining eight breast cancer gene expression data sets produced using the same microarray platform, and, using this data set, the effects of varying sample sizes on several performance measures of a prognostic gene signature were investigated. The overlap between independently developed gene signatures increased linearly with more samples, attaining an average overlap of 16.56% with 600 samples. The concordance between outcomes predicted by different gene signatures also increased with more samples, up to 94.61% with 300 samples. The accuracy of outcome prediction also increased with more samples. Finally, analysis using only Estrogen Receptor-positive (ER+) patients attained higher prediction accuracy than using all patients, suggesting that subtype-specific analysis can lead to the development of better prognostic gene signatures. Conclusion Increasing sample sizes generated a gene signature with better stability, better concordance in outcome prediction, and better prediction accuracy. However, the degree of performance improvement from an increased sample size differed between the degree of overlap and the degree of concordance in outcome prediction, suggesting that the sample size required for a study should be determined according to the specific aims of the study.
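    The first result, signature overlap growing with sample size, can be mimicked in a toy simulation on synthetic data; the gene counts, effect sizes and the simple t-like scoring rule below are invented, not the study's pipeline.

```python
# Toy simulation: overlap of top-k "signatures" from two independent
# subsamples grows with subsample size.  Synthetic data, invented settings.
import numpy as np

rng = np.random.default_rng(0)
n_genes, k = 2000, 50
effects = np.zeros(n_genes)
effects[:100] = 0.5                     # 100 genes carry a real group signal

def top_k(n_samples):
    """Draw a cohort, score genes by a two-group t-like statistic, keep top k."""
    labels = rng.integers(0, 2, n_samples)
    x = rng.normal(0, 1, (n_samples, n_genes)) + np.outer(labels, effects)
    score = (x[labels == 1].mean(0) - x[labels == 0].mean(0)) / x.std(0)
    return set(np.argsort(-np.abs(score))[:k])

for n in (50, 100, 200, 400):
    overlap = np.mean([len(top_k(n) & top_k(n)) / k for _ in range(20)])
    print(f"n = {n:3d}: mean signature overlap = {100 * overlap:.1f}%")
```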

  15. PLOT-3D, Graphics Subroutines for 3-D Surface Plots with Arbitrary Rotations

    International Nuclear Information System (INIS)

    Basinger, D.; Gvildys, J.

    1975-01-01

    1 - Description of problem or function: PLOT-3D is a package of subprograms designed to draw three-dimensional surfaces from arrays of points (x,y,z). The surfaces can be drawn after arbitrary rotations about the three coordinate axes. 2 - Method of solution: PLOT-3D is a computer program to plot any surface for which each coordinate pair (x,y) is associated with a unique z in the set of points (x,y,z). It uses matrix transformation of the points to generate different views of the surface after arbitrary rotations about the three coordinate axes. Four versions of PLOT-3D are available. Output of versions 1 and 3 is by film recorder; output of versions 2 and 4 is by CalComp plotter. Versions 3 and 4 do not draw lines which would be invisible to a viewer looking at an opaque surface, whereas versions 1 and 2 draw every line on the surface. 3 - Restrictions on the complexity of the problem: Versions 3 and 4 limit the number of rows and the number of columns in the arrays (x,y,z) to 100
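    The core method, matrix transformation of the (x,y,z) points to produce arbitrarily rotated views, is easy to sketch today with NumPy and Matplotlib standing in for the film recorder or CalComp plotter; like versions 1 and 2, this draws every line with no hidden-line removal.

```python
# Rotate a surface's points with a composite rotation matrix, then draw the
# wireframe (every line drawn, as in PLOT-3D versions 1 and 2).
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-3, 3, 40), np.linspace(-3, 3, 40))
z = np.exp(-(x**2 + y**2) / 4) * np.cos(2 * x)            # example surface

def rotation(ax_deg, ay_deg, az_deg):
    """Composite rotation about the three coordinate axes."""
    a, b, c = np.radians([ax_deg, ay_deg, az_deg])
    rx = np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
    ry = np.array([[np.cos(b), 0, np.sin(b)], [0, 1, 0], [-np.sin(b), 0, np.cos(b)]])
    rz = np.array([[np.cos(c), -np.sin(c), 0], [np.sin(c), np.cos(c), 0], [0, 0, 1]])
    return rz @ ry @ rx

pts = rotation(30, 0, 45) @ np.vstack([x.ravel(), y.ravel(), z.ravel()])
xr, yr, zr = (p.reshape(x.shape) for p in pts)
ax3d = plt.figure().add_subplot(projection='3d')
ax3d.plot_wireframe(xr, yr, zr, linewidth=0.4)
plt.show()
```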

  16. Grain size statistics and depositional pattern of the Ecca Group sandstones, Karoo Supergroup in the Eastern Cape Province, South Africa

    Directory of Open Access Journals (Sweden)

    Baiyegunhi Christopher

    2017-11-01

    Full Text Available Grain size analysis is a vital sedimentological tool used to unravel the hydrodynamic conditions, mode of transportation, and deposition of detrital sediments. In this study, detailed grain-size analysis was carried out on thirty-five sandstone samples from the Ecca Group in the Eastern Cape Province of South Africa. Grain-size statistical parameters, bivariate analysis, linear discriminant functions, Passega diagrams, and log-probability curves were used to reveal the depositional processes, sedimentation mechanisms, and hydrodynamic energy conditions, and to discriminate different depositional environments. The grain-size parameters show that most of the sandstones are very fine to fine grained, moderately well sorted, mostly near-symmetrical, and mesokurtic in nature. The abundance of very fine to fine grained sandstones indicates the dominance of a low-energy environment. The bivariate plots show that the samples are mostly grouped, except for the Prince Albert samples, which show a scattered trend due either to a mixture of two modes in equal proportion in bimodal sediments or to good sorting in unimodal sediments. The linear discriminant function analysis is dominantly indicative of turbidity current deposits under shallow marine environments for samples from the Prince Albert, Collingham and Ripon Formations, while the samples from the Fort Brown Formation are lacustrine or deltaic deposits. The C-M plots indicate that the sediments were deposited mainly by suspension and saltation, and graded suspension. Visher diagrams show that saltation is the major process of transportation, followed by suspension.
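    One standard way of computing such graphic grain-size parameters (mean, sorting, skewness, kurtosis) is the Folk and Ward (1957) percentile method; treating that as the method behind the abstract is an assumption, and the cumulative curve below is invented.

```python
# Folk & Ward (1957) graphic measures from a cumulative grain-size curve.
# The sieve data are made up for illustration.
import numpy as np

phi = np.array([0.0, 1.0, 2.0, 3.0, 4.0])        # phi classes (coarse -> fine)
cum = np.array([2.0, 15.0, 55.0, 90.0, 100.0])   # cumulative weight %

def pct(p):
    """phi value at the p-th cumulative percentile (linear interpolation)."""
    return np.interp(p, cum, phi)

p5, p16, p25, p50, p75, p84, p95 = (pct(p) for p in (5, 16, 25, 50, 75, 84, 95))
mean_gs  = (p16 + p50 + p84) / 3                                   # graphic mean
sorting  = (p84 - p16) / 4 + (p95 - p5) / 6.6                      # inclusive graphic SD
skewness = ((p16 + p84 - 2*p50) / (2*(p84 - p16))
            + (p5 + p95 - 2*p50) / (2*(p95 - p5)))                 # inclusive skewness
kurtosis = (p95 - p5) / (2.44 * (p75 - p25))                       # graphic kurtosis
print(mean_gs, sorting, skewness, kurtosis)
```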

  18. Plot 5

    DEFF Research Database (Denmark)

    Lund, Inger-Lise; Hermansen, Anne-Mette; Ferdinand, Trine

    The Danish-subject teacher's guide explains the ideas underlying Plot 5 and describes the Danish subject in light of recent research across its many subfields. The chapter guide introduces the book's texts, explains the purpose of the chapter exercises, and gives suggestions for teaching and for the use...

  19. Plot 4

    DEFF Research Database (Denmark)

    Lund, Inger-Lise; Hermansen, Anne-Mette; Ferdinand, Trine

    The Danish-subject teacher's guide explains the ideas underlying Plot 4 and describes the Danish subject in light of recent research across its many subfields. The chapter guide introduces the book's texts, explains the purpose of the chapter exercises, and gives suggestions for teaching and for the use...

  20. Correlation plot facility in the SLC control system

    International Nuclear Information System (INIS)

    Hendrickson, L.; Phinney, N.; Sanchez-Chopitea, L.

    1991-05-01

    The Correlation Plot facility is a powerful interactive tool for data acquisition and analysis throughout the SLC. A generalized interface allows the user to perform a wide variety of machine physics experiments without the need for specialized software. It has been used extensively during SLC commissioning and operation. The user may step one or two independent parameters such as magnet or feedback setpoints while measuring or calculating up to 160 others. Measured variables include all analog signals available to the control system as well as a variety of derived parameters such as beam size or emittance. Various fitting algorithms and display options are provided for data analysis. A software-callable interface is also provided. Applications based on this facility are used to phase klystrons, measure emittance and dispersion, minimize beam size at the interaction point and maintain beam collisions. 4 refs., 3 figs

  1. Correlation Plot facility in the SLC control system

    International Nuclear Information System (INIS)

    Hendrickson, L.; Phinney, N.; Sanchez-Chopitea, L.; Clark, S.

    1991-11-01

    The Correlation Plot facility is a powerful interactive tool for data acquisition and analysis throughout the SLC. This generalized interface allows the user to perform a range of operations or machine physics experiments without the need for any specialized analysis software. The user may step one or more independent parameters, such as magnet or feedback setpoints, while measuring or calculating up to 160 other parameters. Measured variables include all analog signals available to the control system, as well as calculated parameters such as beam size, luminosity, or emittance. Various fitting algorithms and display options are provided. A software-callable interface has been provided so that a host of applications can call this package for analysis and display. Such applications regularly phase klystrons, measure emittance and dispersion, minimize beam size, and maintain beam collisions at the interaction point. 4 refs., 5 figs
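    The facility's basic loop, step an independent parameter, measure dependent ones, fit, can be sketched generically; set_magnet and measure_beam_size below are hypothetical stand-ins, not SLC control-system calls.

```python
# Generic correlation-plot-style scan: step one setpoint, record a measured
# variable, fit a parabola to locate the minimum (as in beam-size scans).
import numpy as np

rng = np.random.default_rng(0)
_current = 0.0

def set_magnet(setpoint):
    """Stand-in for a control-system write (hypothetical)."""
    global _current
    _current = setpoint

def measure_beam_size():
    """Stand-in for a measurement: simulated parabola plus noise."""
    return 1.0 + 0.5 * (_current - 0.3) ** 2 + rng.normal(0.0, 0.02)

setpoints = np.linspace(-2.0, 2.0, 21)
sizes = []
for s in setpoints:
    set_magnet(s)                      # step the independent parameter
    sizes.append(measure_beam_size())  # record the dependent variable

a, b, c = np.polyfit(setpoints, sizes, 2)
print("setpoint minimizing beam size:", -b / (2 * a))
```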

  3. Volatile and non-volatile elements in grain-size separated samples of Apollo 17 lunar soils

    International Nuclear Information System (INIS)

    Giovanoli, R.; Gunten, H.R. von; Kraehenbuehl, U.; Meyer, G.; Wegmueller, F.; Gruetter, A.; Wyttenbach, A.

    1977-01-01

    Three samples of Apollo 17 lunar soils (75081, 72501 and 72461) were separated into 9 grain-size fractions between 540 and 1 μm mean diameter. In order to detect mineral fractionations caused during the separation procedures, major elements were determined by instrumental neutron activation analyses performed on small aliquots of the separated samples. Twenty elements were measured in each size fraction using instrumental and radiochemical neutron activation techniques. The concentration of the main elements in sample 75081 does not change with grain-size. Exceptions are Fe and Ti, which decrease slightly, and Al, which increases slightly, with decreasing grain-size. These changes in the main-element composition suggest a decrease in ilmenite and an increase in anorthite with decreasing grain-size. However, it can be concluded that the mineral composition of the fractions changes by less than a factor of 2. Samples 72501 and 72461 have not yet been analyzed for the main elements. (Auth.)

  4. A modified approach to estimating sample size for simple logistic regression with one continuous covariate.

    Science.gov (United States)

    Novikov, I; Fund, N; Freedman, L S

    2010-01-15

    Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
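    When formula-based estimates disagree, simulation settles the power question for a given scenario, which is essentially what the authors do. A sketch for one standard-normal covariate; the prevalence and odds ratio are illustrative, and note that the intercept below fixes the prevalence at the covariate mean, exactly the parameterization subtlety the abstract discusses.

```python
# Simulation-based power for simple logistic regression with one
# standard-normal covariate.  All parameter values are illustrative.
import numpy as np
import statsmodels.api as sm

def simulated_power(n, p0=0.3, odds_ratio=1.5, reps=500, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    beta1 = np.log(odds_ratio)
    beta0 = np.log(p0 / (1 - p0))   # prevalence at x = 0 (covariate mean)
    hits = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        p = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))
        y = (rng.random(n) < p).astype(float)
        fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
        hits += fit.pvalues[1] < alpha
    return hits / reps

print(simulated_power(200))
```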

  5. The scale effect on soil erosion. A plot approach to understand connectivity on slopes under cultivation at variable plot sizes and under Mediterranean climatic conditions

    Science.gov (United States)

    Cerdà, Artemi; Bagarello, Vicenzo; Ferro, Vito; Iovino, Massimo; Borja, Manuel Estaban Lucas; Francisco Martínez Murillo, Juan; González Camarena, Rafael

    2017-04-01

    It is well known that soil erosion changes over time and across seasons, and attention was paid to this issue in the past (González Hidalgo et al., 2010; 2012). However, although the scientific community knows that soil erosion is also a temporally and spatially scale-dependent process (Parsons et al., 1990; Cerdà et al., 2009; González Hidalgo et al., 2013; Sadeghi et al., 2015), very little has been done on this topic. This is due to the fact that different soil erosion mechanisms (splash, sheetflow, rill development) are active at different scales, and their rates change with the scale of measurement (Wainwright et al., 2002; López-Vicente et al., 2015). This makes research on soil erosion complex and difficult, and it is necessary to develop a conceptual framework, but also measurements, that will inform about soil erosion behaviour. Connectivity is the key concept for understanding how changes in scale result in different rates of soil and water losses (Parsons et al., 1996; Parsons et al., 2015; Poeppl et al., 2016). Most of the research developed around the connectivity concept was applied at watershed or basin scales (Galdino et al., 2016; Martínez-Casasnovas et al., 2016; López Vicente et al., 2016; Marchamalo et al., 2015; Masselink et al., 2016), but very little is known about connectivity at the slope scale (Cerdà and Jurgensen, 2011). The El Teularet (Eastern Iberian Peninsula) and Sparacia (Sicily) soil erosion experimental stations have been active for 15 years, and the data collected on different plot sizes can shed light on the effect of scale on runoff generation and soil losses and provide information for understanding how the transport of materials is determined by connectivity from the pedon to the slope scale (Cerdà et al., 2014; Bagarello et al., 2015a; 2015b). The comparison of the results of the two research stations will shed light on the rates of soil erosion and the mechanisms involved at different scales. Our

  6. Three-year-olds obey the sample size principle of induction: the influence of evidence presentation and sample size disparity on young children's generalizations.

    Science.gov (United States)

    Lawson, Chris A

    2014-07-01

    Three experiments with 81 3-year-olds (M = 3.62 years) examined the conditions that enable young children to use the sample size principle (SSP) of induction: the inductive rule that facilitates generalizations from large rather than small samples of evidence. In Experiment 1, children exhibited the SSP when exemplars were presented sequentially but not when exemplars were presented simultaneously. Results from Experiment 3 suggest that the advantage of sequential presentation is not due to the additional time to process the available input from the two samples but instead may be linked to better memory for specific individuals in the large sample. In addition, findings from Experiments 1 and 2 suggest that adherence to the SSP is mediated by the disparity between presented samples. Overall, these results reveal that the SSP appears early in development and is guided by basic cognitive processes triggered during the acquisition of input. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. Graphical Data Analysis on the Circle: Wrap-Around Time Series Plots for (Interrupted) Time Series Designs.

    Science.gov (United States)

    Rodgers, Joseph Lee; Beasley, William Howard; Schuelke, Matthew

    2014-01-01

    Many data structures, particularly time series data, are naturally seasonal, cyclical, or otherwise circular. Past graphical methods for time series have focused on linear plots. In this article, we move graphical analysis onto the circle. We focus on 2 particular methods, one old and one new. Rose diagrams are circular histograms and can be produced in several different forms using the RRose software system. In addition, we propose, develop, illustrate, and provide software support for a new circular graphical method, called Wrap-Around Time Series Plots (WATS Plots), which is a graphical method useful to support time series analyses in general but in particular in relation to interrupted time series designs. We illustrate the use of WATS Plots with an interrupted time series design evaluating the effect of the Oklahoma City bombing on birthrates in Oklahoma County during the 10 years surrounding the bombing of the Murrah Building in Oklahoma City. We compare WATS Plots with linear time series representations and overlay them with smoothing and error bands. Each method is shown to have advantages in relation to the other; in our example, the WATS Plots more clearly show the existence and effect size of the fertility differential.
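    The wrap-around idea is easy to sketch in Matplotlib: map month-of-year to angle so that successive years overlay on the circle. The series below is synthetic, not the Oklahoma birthrate data.

```python
# Synthetic monthly series with seasonality and a mid-series step change,
# wrapped around the circle so that years overlay (a minimal WATS-style plot).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
months = np.arange(120)                                   # 10 years, monthly
series = 100 + 10 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 2, 120)
series[60:] += 8                                          # "interruption"

theta = 2 * np.pi * (months % 12) / 12                    # month -> angle
ax = plt.subplot(projection='polar')
for year in range(10):
    sl = slice(12 * year, 12 * (year + 1))
    ax.plot(theta[sl], series[sl],
            color='C1' if year >= 5 else 'C0', alpha=0.7)  # after vs. before
ax.set_xticks(2 * np.pi * np.arange(12) / 12)
ax.set_xticklabels(['J', 'F', 'M', 'A', 'M', 'J', 'J', 'A', 'S', 'O', 'N', 'D'])
plt.show()
```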

  8. Chromatographic properties PLOT multicapillary columns.

    Science.gov (United States)

    Nikolaeva, O A; Patrushev, Y V; Sidelnikov, V N

    2017-03-10

    Multicapillary columns (MCCs) for gas chromatography make it possible to perform high-speed analysis of mixtures of gaseous and volatile substances with a relatively large loaded sample amount. The study was performed using PLOT MCCs for gas-solid chromatography (GSC) with different stationary phases (SP) based on alumina, silica and poly-(1-trimethylsilyl-1-propyne) (PTMSP) polymer, as well as the porous polymers divinylbenzene-styrene (DVB-St), divinylbenzene-vinylimidazole (DVB-VIm) and divinylbenzene-ethylene glycol dimethacrylate (DVB-EGD). These MCCs have an efficiency of 4000-10000 theoretical plates per meter (TP/m) and, at a column length of 25-30 cm, can separate within 10-20 s multicomponent mixtures of substances belonging to different classes of chemical compounds. The sample amount not overloading the column is 0.03-1 μg and depends on the features of the porous layer. Examples of separations on some of the studied columns are considered. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Sample size methods for estimating HIV incidence from cross-sectional surveys.

    Science.gov (United States)

    Konikoff, Jacob; Brookmeyer, Ron

    2015-12-01

    Understanding HIV incidence, the rate at which new infections occur in populations, is critical for tracking and surveillance of the epidemic. In this article, we derive methods for determining sample sizes for cross-sectional surveys to estimate incidence with sufficient precision. We further show how to specify sample sizes for two successive cross-sectional surveys to detect changes in incidence with adequate power. In these surveys biomarkers such as CD4 cell count, viral load, and recently developed serological assays are used to determine which individuals are in an early disease stage of infection. The total number of individuals in this stage, divided by the number of people who are uninfected, is used to approximate the incidence rate. Our methods account for uncertainty in the durations of time spent in the biomarker defined early disease stage. We find that failure to account for this uncertainty when designing surveys can lead to imprecise estimates of incidence and underpowered studies. We evaluated our sample size methods in simulations and found that they performed well in a variety of underlying epidemics. Code for implementing our methods in R is available with this article at the Biometrics website on Wiley Online Library. © 2015, The International Biometric Society.
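    A rough simulation of the design question, how the precision of a cross-sectional incidence estimate scales with survey size, under a deliberately simplified model with a fixed mean duration of the early-stage window; all parameter values are invented.

```python
# Simplified cross-sectional incidence survey: estimate = (# recent) /
# (mean window duration * # uninfected).  Illustrative parameters only.
import numpy as np

rng = np.random.default_rng(0)
prev, incidence, mu = 0.15, 0.02, 0.5   # prevalence; incidence/yr; window (yr)

def survey(n):
    n_neg = rng.binomial(n, 1 - prev)               # uninfected respondents
    n_recent = rng.poisson(incidence * mu * n_neg)  # early-stage positives
    return n_recent / (mu * n_neg)                  # incidence estimator

for n in (2000, 5000, 10000, 20000):
    est = np.array([survey(n) for _ in range(2000)])
    print(f"n={n:6d}  mean={est.mean():.4f}  CV={est.std() / est.mean():.2f}")
```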

  10. Evaluation of a Simple, Small-Plot Meteorological Technique for Measurement of Ammonia Emission: Feasibility, Costs, and Recommendations

    DEFF Research Database (Denmark)

    Pedersen, Simon Vilms; di Perta, Ester Scotto; Hafner, Sasha D.

    2018-01-01

    are regularly developed, and their efficacy needs to be tested using accurate methods. To date, a major obstacle to many available emission measurement techniques is the requirement of large plot sizes of homogeneous surface characteristics, which particularly is a challenge to the number of plot-level replicates that can be carried out on a field providing uniform surface characteristics throughout. The objectives of this research were to test three different methods for measuring NH3 flux when applied to small plots [...] and to determine [...] techniques, wind tunnels measuring gas-phase ammonia using ALPHA passive diffusion samplers and a flux chamber method using Dräger tubes for measurements of ammonia concentration (DTM) were used. As an inexpensive alternative small-plot method, we studied the feasibility of applying ALPHA passive diffusion...

  11. Reporting of perinatal health indicators for international comparisons: enhancing the appearance of geographical plots

    NARCIS (Netherlands)

    Lack, N.; Blondel, B.; Mohangoo, A.D.; Sakkeus, L.; Cans, C.; Bouvier-Colle, M.H.; Macfarlane, A.; Zeitlin, J.

    2013-01-01

    Background: Tabulating annual national health indicators sorted by outcome may be misleading for two reasons. The implied rank order is largely a result of heterogeneous population sizes. Distinctions between geographically adjacent regions are not visible. Methods: Regional data are plotted in a

  12. Representing uncertainty on model analysis plots

    Directory of Open Access Journals (Sweden)

    Trevor I. Smith

    2016-09-01

    Full Text Available Model analysis provides a mechanism for representing student learning as measured by standard multiple-choice surveys. The model plot contains information regarding both how likely students in a particular class are to choose the correct answer and how likely they are to choose an answer consistent with a well-documented conceptual model. Unfortunately, Bao’s original presentation of the model plot did not include a way to represent uncertainty in these measurements. I present details of a method to add error bars to model plots by expanding the work of Sommer and Lindell. I also provide a template for generating model plots with error bars.

  13. Sample size for half-sib family evaluation in maize

    Directory of Open Access Journals (Sweden)

    EDWIN CAMACHO PALOMINO

    2000-07-01

    Full Text Available This work aimed to verify the effect of the number of plants per plot in the evaluation of half-sib families of maize. For this, 25 families from the CMS-39 population were evaluated in a 5 x 5 lattice design with two replications. Each plot consisted of three rows 10 m in length and was subdivided into 1-m strata of five plants each; by combining contiguous strata it was possible to obtain plot sizes ranging from 5 to 135 plants. Using these different plot sizes, 270 analyses of variance were carried out for the trait dehusked ear weight. From these analyses, genetic and phenotypic parameters were estimated with their respective errors, and the expected response to selection was simulated for each sample size. It was found that the larger the number of plants, the more precise the experiments; that plots containing the same number of plants but distributed over two or three rows provide greater experimental precision; and that the expected response to selection decreases as the number of plants per plot increases.

  14. Sample size calculations for cluster randomised crossover trials in Australian and New Zealand intensive care research.

    Science.gov (United States)

    Arnup, Sarah J; McKenzie, Joanne E; Pilcher, David; Bellomo, Rinaldo; Forbes, Andrew B

    2018-06-01

    The cluster randomised crossover (CRXO) design provides an opportunity to conduct randomised controlled trials to evaluate low risk interventions in the intensive care setting. Our aim is to provide a tutorial on how to perform a sample size calculation for a CRXO trial, focusing on the meaning of the elements required for the calculations, with application to intensive care trials. We use all-cause in-hospital mortality from the Australian and New Zealand Intensive Care Society Adult Patient Database clinical registry to illustrate the sample size calculations. We show sample size calculations for a two-intervention, two 12-month period, cross-sectional CRXO trial. We provide the formulae, and examples of their use, to determine the number of intensive care units required to detect a risk ratio (RR) with a designated level of power between two interventions for trials in which the elements required for sample size calculations remain constant across all ICUs (unstratified design); and in which there are distinct groups (strata) of ICUs that differ importantly in the elements required for sample size calculations (stratified design). The CRXO design markedly reduces the sample size requirement compared with the parallel-group, cluster randomised design for the example cases. The stratified design further reduces the sample size requirement compared with the unstratified design. The CRXO design enables the evaluation of routinely used interventions that can bring about small, but important, improvements in patient care in the intensive care setting.

  15. Presenting simulation results in a nested loop plot.

    Science.gov (United States)

    Rücker, Gerta; Schwarzer, Guido

    2014-12-12

    Statisticians investigate new methods in simulations to evaluate their properties for future real data applications. Results are often presented in a number of figures, e.g., Trellis plots. We had conducted a simulation study on six statistical methods for estimating the treatment effect in binary outcome meta-analyses, where selection bias (e.g., publication bias) was suspected because of apparent funnel plot asymmetry. We varied five simulation parameters: true treatment effect, extent of selection, event proportion in control group, heterogeneity parameter, and number of studies in meta-analysis. In combination, this yielded a total number of 768 scenarios. To present all results using Trellis plots, 12 figures were needed. Choosing bias as criterion of interest, we present a 'nested loop plot', a diagram type that aims to have all simulation results in one plot. The idea was to bring all scenarios into a lexicographical order and arrange them consecutively on the horizontal axis of a plot, whereas the treatment effect estimate is presented on the vertical axis. The plot illustrates how parameters simultaneously influenced the estimate. It can be combined with a Trellis plot in a so-called hybrid plot. Nested loop plots may also be applied to other criteria such as the variance of estimation. The nested loop plot, similar to a time series graph, summarizes all information about the results of a simulation study with respect to a chosen criterion in one picture and provides a suitable alternative or an addition to Trellis plots.
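    The construction itself is mostly bookkeeping: put all scenario combinations in lexicographical order along the horizontal axis, plot the criterion, and draw each looped parameter as a step trace beneath it. A minimal sketch with synthetic results:

```python
# Bare-bones nested loop plot on synthetic simulation results.
import itertools
import numpy as np
import matplotlib.pyplot as plt

effects = [0.0, 0.5, 1.0]
tau2s = [0.0, 0.1, 0.3]
ks = [5, 10, 20, 40]
scenarios = list(itertools.product(effects, tau2s, ks))   # lexicographic order

rng = np.random.default_rng(0)
bias = np.array([0.05 * tau2 - 0.2 / k + rng.normal(0, 0.005)
                 for _, tau2, k in scenarios])            # stand-in results

x = np.arange(len(scenarios))
fig, ax = plt.subplots(figsize=(9, 4))
ax.step(x, bias, where='mid')
ax.axhline(0, color='grey', lw=0.5)

# one step trace per looped parameter, drawn below the results
for i, (name, vals) in enumerate([('effect', effects), ('tau2', tau2s), ('k', ks)]):
    level = np.array([vals.index(s[i]) for s in scenarios], float)
    trace = -0.12 - 0.08 * i + 0.05 * level / (len(vals) - 1)
    ax.step(x, trace, where='mid', lw=0.8)
    ax.text(x[-1] + 1, trace[-1], name, fontsize=8, va='center')
ax.set_xlabel('scenario (lexicographic order)')
plt.show()
```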

  16. Evaluation of pump pulsation in respirable size-selective sampling: part II. Changes in sampling efficiency.

    Science.gov (United States)

    Lee, Eun Gyung; Lee, Taekhee; Kim, Seung Won; Lee, Larry; Flemmer, Michael M; Harper, Martin

    2014-01-01

    This second, and concluding, part of this study evaluated changes in sampling efficiency of respirable size-selective samplers due to air pulsations generated by the selected personal sampling pumps characterized in Part I (Lee E, Lee L, Möhlmann C et al. Evaluation of pump pulsation in respirable size-selective sampling: Part I. Pulsation measurements. Ann Occup Hyg 2013). Nine particle sizes of monodisperse ammonium fluorescein (from 1 to 9 μm mass median aerodynamic diameter) were generated individually by a vibrating orifice aerosol generator from dilute solutions of fluorescein in aqueous ammonia and then injected into an environmental chamber. To collect these particles, 10-mm nylon cyclones, also known as Dorr-Oliver (DO) cyclones, were used with five medium volumetric flow rate pumps. Those were the Apex IS, HFS513, GilAir5, Elite5, and Basic5 pumps, which were found in Part I to generate pulsations of 5% (the lowest), 25%, 30%, 56%, and 70% (the highest), respectively. GK2.69 cyclones were used with the Legacy [pump pulsation (PP) = 15%] and Elite12 (PP = 41%) pumps for collection at high flows. The DO cyclone was also used to evaluate changes in sampling efficiency due to pulse shape. The HFS513 pump, which generates a more complex pulse shape, was compared to a single sine wave fluctuation generated by a piston. The luminescent intensity of the fluorescein extracted from each sample was measured with a luminescence spectrometer. Sampling efficiencies were obtained by dividing the intensity of the fluorescein extracted from the filter placed in a cyclone with the intensity obtained from the filter used with a sharp-edged reference sampler. Then, sampling efficiency curves were generated using a sigmoid function with three parameters and each sampling efficiency curve was compared to that of the reference cyclone by constructing bias maps. In general, no change in sampling efficiency (bias under ±10%) was observed until pulsations exceeded 25% for the
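    Fitting a three-parameter sigmoid to sampling-efficiency points, as the abstract describes, is a short exercise with SciPy; the diameter/efficiency data below are invented for illustration.

```python
# Three-parameter sigmoid fit to (diameter, efficiency) data with curve_fit.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(d, top, d50, slope):
    """Efficiency vs. aerodynamic diameter: top / (1 + exp((d - d50)/slope))."""
    return top / (1 + np.exp((d - d50) / slope))

diam = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9], float)      # micrometers
eff = np.array([0.97, 0.95, 0.88, 0.66, 0.40, 0.20, 0.09, 0.04, 0.02])

params, cov = curve_fit(sigmoid, diam, eff, p0=(1.0, 4.0, 1.0))
print("top=%.2f  d50=%.2f um  slope=%.2f" % tuple(params))
```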

  17. Contour plotting programs for printer and Calcomp plotter

    International Nuclear Information System (INIS)

    Moller, P.

    1980-07-01

    Contour plotting programs for plotting contour diagrams on printers or Calcomp plotters are described. The subroutines also exist in versions that are useful for the special application of finding minima and saddlepoints of nuclear potential energy surfaces generated by the subroutine PETR3 of another program package. For the general user, however, the most interesting aspect of the plotting package is probably the possibility of generating printer contour plots. The plotting of printer contour plots is a very fast and convenient way of displaying two-dimensional functions. 3 figures
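    The printer contour plot idea, rendering a two-dimensional function as characters on a line printer, survives almost unchanged in a few lines of Python:

```python
# ASCII "printer contour plot": map z values to a character ramp on a grid.
import numpy as np

chars = " .:-=+*#%@"                       # low -> high z
x, y = np.meshgrid(np.linspace(-3, 3, 72), np.linspace(-3, 3, 28))
z = np.exp(-(x**2 + y**2) / 3) * np.cos(1.5 * x)
idx = ((z - z.min()) / (z.max() - z.min()) * (len(chars) - 1)).astype(int)
for row in idx[::-1]:                      # print top row first
    print("".join(chars[i] for i in row))
```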

  18. Sample-size effects in fast-neutron gamma-ray production measurements: solid-cylinder samples

    International Nuclear Information System (INIS)

    Smith, D.L.

    1975-09-01

    The effects of geometry, absorption and multiple scattering in (n,Xγ) reaction measurements with solid-cylinder samples are investigated. Both analytical and Monte-Carlo methods are employed in the analysis. Geometric effects are shown to be relatively insignificant except in definition of the scattering angles. However, absorption and multiple-scattering effects are quite important; accurate microscopic differential cross sections can be extracted from experimental data only after a careful determination of corrections for these processes. The results of measurements performed using several natural iron samples (covering a wide range of sizes) confirm validity of the correction procedures described herein. It is concluded that these procedures are reliable whenever sufficiently accurate neutron and photon cross section and angular distribution information is available for the analysis. (13 figures, 5 tables) (auth)

  19. Subclinical delusional ideation and appreciation of sample size and heterogeneity in statistical judgment.

    Science.gov (United States)

    Galbraith, Niall D; Manktelow, Ken I; Morris, Neil G

    2010-11-01

    Previous studies demonstrate that people high in delusional ideation exhibit a data-gathering bias on inductive reasoning tasks. The current study set out to investigate the factors that may underpin such a bias by examining healthy individuals, classified as either high or low scorers on the Peters et al. Delusions Inventory (PDI). More specifically, whether high PDI scorers have a relatively poor appreciation of sample size and heterogeneity when making statistical judgments. In Expt 1, high PDI scorers made higher probability estimates when generalizing from a sample of 1 with regard to the heterogeneous human property of obesity. In Expt 2, this effect was replicated and was also observed in relation to the heterogeneous property of aggression. The findings suggest that delusion-prone individuals are less appreciative of the importance of sample size when making statistical judgments about heterogeneous properties; this may underpin the data gathering bias observed in previous studies. There was some support for the hypothesis that threatening material would exacerbate high PDI scorers' indifference to sample size.

  20. Plot til lyst

    DEFF Research Database (Denmark)

    Sandvik, Kjetil

    A well-crafted crime story facilitates this double plot reading by enabling a particular form of playful and exploratory interaction between reader and plot: by laying out clues and keeping interpretations and possible solutions open, it allows us to take on and carry out the detective work side..., which invites us into the very space and course of the action and gives us different possibilities for action in relation to these. In the book, this particular form of plot is called the amusement plot, with reference to the amusement park and the particular form of interactive storytelling that we find there: a...

  1. Page sample size in web accessibility testing: how many pages is enough?

    NARCIS (Netherlands)

    Velleman, Eric Martin; van der Geest, Thea

    2013-01-01

    Various countries and organizations use a different sampling approach and sample size of web pages in accessibility conformance tests. We are conducting a systematic analysis to determine how many pages is enough for testing whether a website is compliant with standard accessibility guidelines. This

  2. Sensitivity of Mantel Haenszel Model and Rasch Model as Viewed From Sample Size

    OpenAIRE

    ALWI, IDRUS

    2011-01-01

    The aim of this research is to compare the sensitivity of the Mantel-Haenszel and Rasch models for detecting differential item functioning (DIF), viewed in terms of sample size. The two DIF methods were compared using simulated binary item response data sets of varying sample sizes; 200 and 400 examinees were used in the analyses, with DIF detection based on gender difference. These test conditions were replicated 4 tim...

  3. Four soil orders on a Vermont mountaintop – one-third of the world's soil orders in a 2500-square-meter research plot

    Science.gov (United States)

    Thomas R. Villars; Scott W. Bailey; Donald S. Ross

    2015-01-01

    As part of the Vermont Long-Term Soil Monitoring Project, five 50 x 50 m plots were established on protected forestland across Vermont. In 2002, ten randomly selected subplots at each monitoring plot were sampled. The 10 pedons sampled at the high-elevation spruce-fir “Forehead” plot on Mount Mansfield were found to include soils of four taxonomic Orders: Entisols,...

  4. Radiological site assessment at Sun Rose Claim utilizing ScanPlot℠ technology

    Energy Technology Data Exchange (ETDEWEB)

    Downey, H., E-mail: heath.downey@amecfw.com [Amec Foster Wheeler, Portland, ME (United States)

    2015-07-01

    The ScanPlot℠ gamma spectroscopy land survey system was utilized for the overland survey of uranium at the Sun Rose Claim in the Northwest Territories. The Sun Rose Claim is a former uranium exploration site, and previous investigations had identified uranium ore and waste rock. ScanPlot℠ radiological scan surveys were performed utilizing a backpack system. The ScanPlot℠ platform utilized spectroscopy-grade sodium iodide detectors configured for optimal spatial coverage and radiation detection. Survey locations were recorded using an on-board global positioning system (GPS). The radiological spectral data from the radiation detectors are automatically logged and linked with the GPS coordinates on an on-board computer to create isocontour figures using a color scale to represent radioactivity levels. The advantage of utilizing the ScanPlot℠ system for this assessment is that the nature and extent of uranium is determined without having to collect and assay a large number of samples. (author)

  5. Multiple hypothesis clustering in radar plot extraction

    NARCIS (Netherlands)

    Huizing, A.G.; Theil, A.; Dorp, Ph. van; Ligthart, L.P.

    1995-01-01

    False plots and plots with inaccurate range and Doppler estimates may severely degrade the performance of tracking algorithms in radar systems. This paper describes how a multiple hypothesis clustering technique can be applied to mitigate the problems involved in plot extraction. The measures of

  6. Storytelling in Earth sciences: The eight basic plots

    Science.gov (United States)

    Phillips, Jonathan

    2012-11-01

    Reporting results and promoting ideas in science in general, and Earth science in particular, is treated here as storytelling. Just as in literature and drama, storytelling in Earth science is characterized by a small number of basic plots. Though the list is not exhaustive, and acknowledging that multiple or hybrid plots and subplots are possible in a single piece, eight standard plots are identified, and examples provided: cause-and-effect, genesis, emergence, destruction, metamorphosis, convergence, divergence, and oscillation. The plots of Earth science stories are not those of literary traditions, nor those of persuasion or moral philosophy, and deserve separate consideration. Earth science plots do not conform to those of storytelling more generally, implying that Earth scientists may have fundamentally different motivations than other storytellers, and that the basic plots of Earth science derive from the characteristics and behaviors of Earth systems. In some cases, preference or affinity for different plots results in fundamentally different interpretations and conclusions from the same evidence. In other situations, exploration of additional plots could help resolve scientific controversies. Thus explicit acknowledgement of plots can yield direct scientific benefits. Consideration of plots and storytelling devices may also assist in the interpretation of published work, and can help scientists improve their own storytelling.

  7. Experimental burn plot trial in the Kruger National Park: history, experimental design and suggestions for data analysis

    Directory of Open Access Journals (Sweden)

    R. Biggs

    2003-12-01

    Full Text Available The experimental burn plot (EBP) trial initiated in 1954 is one of few ongoing long-term fire ecology research projects in Africa. The trial aims to assess the impacts of different fire regimes in the Kruger National Park. Recent studies on the EBPs have raised questions as to the experimental design of the trial, and the appropriate model specification when analysing data. Archival documentation reveals that the original design was modified on several occasions, related to changes in the park's fire policy. These modifications include the addition of extra plots, subdivision of plots and changes in treatments over time, and have resulted in a design which is only partially randomised. The representativity of the trial plots has been questioned on account of their relatively small size, the concentration of herbivores on especially the frequently burnt plots, and soil variation between plots. It is suggested that these factors be included as covariates in explanatory models or that certain plots be excluded from data analysis based on results of independent studies of these factors. Suggestions are provided for the specification of the experimental design when analysing data using Analysis of Variance. It is concluded that there is no practical alternative to treating the trial as a fully randomised complete block design.

  8. Vessel Sampling and Blood Flow Velocity Distribution With Vessel Diameter for Characterizing the Human Bulbar Conjunctival Microvasculature.

    Science.gov (United States)

    Wang, Liang; Yuan, Jin; Jiang, Hong; Yan, Wentao; Cintrón-Colón, Hector R; Perez, Victor L; DeBuc, Delia C; Feuer, William J; Wang, Jianhua

    2016-03-01

    This study determined (1) how many vessels (i.e., the vessel sampling) are needed to reliably characterize the bulbar conjunctival microvasculature and (2) if characteristic information can be obtained from the distribution histogram of the blood flow velocity and vessel diameter. A functional slitlamp biomicroscope was used to image hundreds of venules per subject. The bulbar conjunctiva in five healthy human subjects was imaged at six different locations in the temporal bulbar conjunctiva. The histograms of the diameter and velocity were plotted to examine whether the distribution was normal. Standard errors were calculated from the standard deviation and vessel sample size. The ratio of the standard error of the mean over the population mean was used to determine the sample size cutoff. The velocity was plotted as a function of the vessel diameter to display the distribution of the diameter and velocity. The results showed that the sampling size was approximately 15 vessels, which generated a standard error equivalent to 15% of the population mean from the total vessel population. The distributions of the diameter and velocity were not only unimodal, but also somewhat positively skewed and not normal. The blood flow velocity was related to the vessel diameter (r = 0.23, P < 0.05). This study determined the sampling size of the vessels and the distribution histogram of the blood flow velocity and vessel diameter, which may lead to a better understanding of the human microvascular system of the bulbar conjunctiva.
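    The cutoff rule, the smallest n for which the standard error of the mean is at most 15% of the mean, follows directly from SEM = SD/√n; the velocity mean and SD below are placeholders, not the study's values.

```python
# Smallest sample size with SEM <= 15% of the mean, since SEM = SD / sqrt(n).
import math

mean_v, sd_v = 0.52, 0.28             # hypothetical velocity mean, SD (mm/s)
target = 0.15 * mean_v                # SEM threshold: 15% of the mean
n = math.ceil((sd_v / target) ** 2)
print(n)                              # ~13 vessels for these invented numbers
```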

  9. Measurement of the ω → π+π-π0 Dalitz plot distribution

    Science.gov (United States)

    Adlarson, P.; Augustyniak, W.; Bardan, W.; Bashkanov, M.; Bergmann, F. S.; Berłowski, M.; Bhatt, H.; Bondar, A.; Büscher, M.; Calén, H.; Ciepał, I.; Clement, H.; Czerwiński, E.; Demmich, K.; Engels, R.; Erven, A.; Erven, W.; Eyrich, W.; Fedorets, P.; Föhl, K.; Fransson, K.; Goldenbaum, F.; Goswami, A.; Grigoryev, K.; Gullström, C.-O.; Heijkenskjöld, L.; Hejny, V.; Hüsken, N.; Jarczyk, L.; Johansson, T.; Kamys, B.; Kemmerling, G.; Khan, F. A.; Khatri, G.; Khoukaz, A.; Khreptak, O.; Kirillov, D. A.; Kistryn, S.; Kleines, H.; Kłos, B.; Krzemień, W.; Kulessa, P.; Kupść, A.; Kuzmin, A.; Lalwani, K.; Lersch, D.; Lorentz, B.; Magiera, A.; Maier, R.; Marciniewski, P.; Mariański, B.; Morsch, H.-P.; Moskal, P.; Ohm, H.; Perez del Rio, E.; Piskunov, N. M.; Prasuhn, D.; Pszczel, D.; Pysz, K.; Pyszniak, A.; Ritman, J.; Roy, A.; Rudy, Z.; Rundel, O.; Sawant, S.; Schadmand, S.; Schätti-Ozerianska, I.; Sefzick, T.; Serdyuk, V.; Shwartz, B.; Sitterberg, K.; Skorodko, T.; Skurzok, M.; Smyrski, J.; Sopov, V.; Stassen, R.; Stepaniak, J.; Stephan, E.; Sterzenbach, G.; Stockhorst, H.; Ströher, H.; Szczurek, A.; Trzciński, A.; Varma, R.; Wolke, M.; Wrońska, A.; Wüstner, P.; Yamamoto, A.; Zabierowski, J.; Zieliński, M. J.; Złomańczuk, J.; Żuprański, P.; Żurek, M.; Kubis, B.; Leupold, S.

    2017-07-01

    Using the production reactions pd → 3He ω and pp → ppω, the Dalitz plot distribution for the ω → π+π-π0 decay is studied with the WASA detector at COSY, based on a combined data sample of (4.408 ± 0.042) × 10^4 events. The Dalitz plot density is parametrised by a product of the P-wave phase space and a polynomial expansion in the normalised polar Dalitz plot variables Z and ϕ. For the first time, a deviation from pure P-wave phase space is observed, with a significance of 4.1σ. The deviation is parametrised by a linear term 1 + 2αZ, with α determined to be +0.147 ± 0.036, consistent with the expectations of ρ-meson-type final-state interactions of the P-wave pion pairs.

  10. Vegetation resurvey is robust to plot location uncertainty

    Science.gov (United States)

    Kopecký, Martin; Macek, Martin

    2017-01-01

    Aim: Resurveys of historical vegetation plots are increasingly used for the assessment of decadal changes in plant species diversity and composition. However, historical plots are usually relocated only approximately. This potentially inflates temporal changes and undermines results. Location: Temperate deciduous forests in Central Europe. Methods: To explore if robust conclusions can be drawn from resurvey studies despite location uncertainty, we compared temporal changes in species richness, frequency, composition and compositional heterogeneity between exactly and approximately relocated plots. We hypothesized that compositional changes should be lower and changes in species richness should be less variable on exactly relocated plots, because pseudo-turnover inflates temporal changes on approximately relocated plots. Results: Temporal changes in species richness were not more variable, and temporal changes in species composition and compositional heterogeneity were not higher, on approximately relocated plots. Moreover, the frequency of individual species changed similarly on both plot types. Main conclusions: The resurvey of historical vegetation plots is robust to uncertainty in original plot location and, when done properly, provides reliable evidence of decadal changes in plant communities. This provides important background for other resurvey studies and opens up the possibility for large-scale assessments of plant community change. PMID:28503083

  11. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate, if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if in addition a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that are sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate in any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example fixing the sample size of the control group, leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
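    The "worst case" search can be reproduced numerically in the simplest setting: one-sided test, a naive final z-test that ignores the adaptation, and a second-stage size n2 chosen at the interim to maximize the conditional type 1 error. This is a generic sketch of the idea, not the paper's multiarmed calculation.

```python
# Worst-case type 1 error of a naive two-stage z-test under data-driven n2.
import numpy as np
from scipy import stats

alpha, n1 = 0.025, 50                  # one-sided level, stage-1 sample size
z_crit = stats.norm.ppf(1 - alpha)
n2_grid = np.arange(1, 1001)           # allowed second-stage sizes

def worst_conditional_error(z1):
    """Max over n2 of P(naive final z > z_crit | interim z1) under the null.
    Final statistic: (sqrt(n1)*z1 + sqrt(n2)*z2) / sqrt(n1 + n2)."""
    cutoff = (z_crit * np.sqrt(n1 + n2_grid) - np.sqrt(n1) * z1) / np.sqrt(n2_grid)
    return stats.norm.sf(cutoff).max()

# integrate the worst-case conditional error over the null law of z1
z1s = np.linspace(-5, 5, 2001)
worst = np.array([worst_conditional_error(z) for z in z1s])
inflated = np.trapz(worst * stats.norm.pdf(z1s), z1s)
print(f"nominal one-sided level {alpha} -> worst-case size {inflated:.3f}")
```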

  12. Estimating sample size for a small-quadrat method of botanical ...

    African Journals Online (AJOL)

    Reports the results of a study conducted to determine an appropriate sample size for a small-quadrat method of botanical survey for application in the Mixed Bushveld of South Africa. Species density and grass density were measured using a small-quadrat method in eight plant communities in the Nylsvley Nature Reserve.

  13. Norm Block Sample Sizes: A Review of 17 Individually Administered Intelligence Tests

    Science.gov (United States)

    Norfolk, Philip A.; Farmer, Ryan L.; Floyd, Randy G.; Woods, Isaac L.; Hawkins, Haley K.; Irby, Sarah M.

    2015-01-01

    The representativeness, recency, and size of norm samples strongly influence the accuracy of inferences drawn from their scores. Inadequate norm samples may lead to inflated or deflated scores for individuals and poorer prediction of developmental and academic outcomes. The purpose of this study was to apply Kranzler and Floyd's method for…

  14. Sampling the Mouse Hippocampal Dentate Gyrus

    Directory of Open Access Journals (Sweden)

    Lisa Basler

    2017-12-01

    Full Text Available Sampling is a critical step in procedures that generate quantitative morphological data in the neurosciences. Samples need to be representative to allow statistical evaluations, and samples need to deliver a precision that makes statistical evaluations not only possible but also meaningful. Sampling-generated variability should, e.g., not be able to hide significant group differences from statistical detection if they are present. Estimators of the coefficient of error (CE) have been developed to provide tentative answers to the question of whether sampling has been "good enough" to provide meaningful statistical outcomes. We tested the performance of the commonly used Gundersen-Jensen CE estimator, using the layers of the mouse hippocampal dentate gyrus as an example (molecular layer, granule cell layer and hilus). We found that this estimator provided useful estimates of the precision that can be expected from samples of different sizes. For all layers, we found that a smoothness factor (m) of 0 generally provided better estimates than an m of 1. Only for the combined layers, i.e., the entire dentate gyrus, could better CE estimates be obtained using an m of 1. The orientation of the sections impacted CE sizes. Frontal (coronal) sections are typically most efficient, providing the smallest CEs for a given amount of work. Applying the estimator to 3D-reconstructed layers and using very intense sampling, we observed CE size plots with m = 0 to m = 1 transitions that should also be expected but are not often observed in real section series. The data we present also allow the reader to approximate the sampling intervals in frontal, horizontal or sagittal sections that provide CEs of specified sizes for the layers of the mouse dentate gyrus.

  15. Precision of quantization of the hall conductivity in a finite-size sample: Power law

    International Nuclear Information System (INIS)

    Greshnov, A. A.; Kolesnikova, E. N.; Zegrya, G. G.

    2006-01-01

    A microscopic calculation of the conductivity in the integer quantum Hall effect (IQHE) mode is carried out. The precision of quantization is analyzed for finite-size samples. The precision of quantization shows a power-law dependence on the sample size. A new scaling parameter describing this dependence is introduced. It is also demonstrated that the precision of quantization linearly depends on the ratio between the amplitude of the disorder potential and the cyclotron energy. The data obtained are compared with the results of magnetotransport measurements in mesoscopic samples

  16. Sample size for monitoring sirex populations and their natural enemies

    Directory of Open Access Journals (Sweden)

    Susete do Rocio Chiarello Penteado

    2016-09-01

    Full Text Available The woodwasp Sirex noctilio Fabricius (Hymenoptera: Siricidae) was introduced to Brazil in 1988 and became the main pest of pine plantations. It has spread over about 1,000,000 ha, at different population levels, in the states of Rio Grande do Sul, Santa Catarina, Paraná, São Paulo and Minas Gerais. Control is done mainly by using a nematode, Deladenus siricidicola Bedding (Nematoda: Neothylenchidae). The evaluation of the efficiency of natural enemies has been difficult because there are no appropriate sampling systems. This study tested a hierarchical sampling system to define the sample size for monitoring the S. noctilio population and the efficiency of its natural enemies, and the system was found to be perfectly adequate.

  17. 9 CFR 108.3 - Preparation of plot plans.

    Science.gov (United States)

    2010-01-01

    Title 9, Animals and Animal Products; Part 108, Licensed Establishments; § 108.3 Preparation of plot plans. Plot plans shall show all of the buildings on a ... on the plot plan the use of immediately adjacent properties, such as residential area, pasture, box ...

  18. Collection of size fractionated particulate matter sample for neutron activation analysis in Japan

    International Nuclear Information System (INIS)

    Otoshi, Tsunehiko; Nakamatsu, Hiroaki; Oura, Yasuji; Ebihara, Mitsuru

    2004-01-01

    According to the decision of the 2001 Workshop on Utilization of Research Reactors (Neutron Activation Analysis (NAA) Section), collection of size-fractionated particulate matter for NAA was started in 2002 at two sites in Japan. The two monitoring sites, 'Tokyo' and 'Sakata', were classified as 'urban' and 'rural', respectively. At each site, two size fractions, namely PM2-10 and PM2 particles (aerodynamic particle size between 2 and 10 micrometers and less than 2 micrometers, respectively), were collected every month on polycarbonate membrane filters. Average concentrations of PM10 (the sum of the PM2-10 and PM2 samples) during the common sampling period of August to November 2002 were 0.031 mg/m3 in Tokyo and 0.022 mg/m3 in Sakata. (author)

  19. Assessing the precision of a time-sampling-based study among GPs: balancing sample size and measurement frequency.

    Science.gov (United States)

    van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald

    2017-12-04

    Our research is based on a time-sampling technique, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In that study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The required sample size for such a study is important for health workforce planners to know if they want to apply the method to target groups that are hard to reach or if fewer resources are available. In this time-sampling method, however, a standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and the frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for various numbers of GPs included in the dataset and for various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 h to 3 h as the number of GPs increased from one to 50. Beyond that point, precision continued to improve, but each additional GP yielded a smaller gain. Likewise, the analyses showed how the number of participants required decreases if more measurements per participant are taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the
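    The two sources of uncertainty described above, fluctuation between participants and fluctuation of the repeated measurements within each participant, can be sketched with a simple two-level variance model. The variance components and the one-sided z-quantile below are illustrative assumptions, not values estimated from the study:

```python
import math

def ci_halfwidth_hours(n_gps, k_measurements,
                       sigma_between=10.0, sigma_within=12.0, z=1.645):
    """One-sided 95% CI half-width for mean weekly hours under a
    two-level model: subject-level variance plus measurement-level
    variance that shrinks with more measurements per subject.
    The sigma_* values are illustrative, not taken from the study."""
    var_mean = (sigma_between ** 2 / n_gps
                + sigma_within ** 2 / (n_gps * k_measurements))
    return z * math.sqrt(var_mean)

# 56 = one SMS per 3-h slot: 8 slots/day * 7 days
for n in (10, 50, 100, 300):
    print(n, round(ci_halfwidth_hours(n, k_measurements=56), 2))
```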

  20. FLOWPLOT2, 2-D, 3-D Fluid Dynamic Plots

    International Nuclear Information System (INIS)

    Cobb, C.K.; Tunstall, J.N.

    1989-01-01

    1 - Description of program or function: FLOWPLOT2 is a plotting program used with numerical or analytical fluid dynamics codes to create velocity vector plots, contour plots of up to three fluid parameters (e.g. pressure, density, and temperature), two-dimensional profile plots, three-dimensional curve plots, and/or three-dimensional surface plots for either the u or v velocity components. If the fluid dynamics code computes a transient or simulated time-related solution, FLOWPLOT2 can also be used to generate these plots for any specified time interval. Multiple cases generating different plots for different time intervals may be run in one execution of the program. In addition, plots can be created for selected two-dimensional planes of three-dimensional steady-state problems. The user has the option of producing plots on CalComp or Versatec plotters or microfiche and of creating a compressed dataset before plotting. 2 - Method of solution: FLOWPLOT2 reads a dataset written by the fluid dynamics code. This dataset must be written in a specified format and must contain parametric data at the nodal points of a uniform or non-uniform rectangular grid formed by the intersection of the grid lines of the model. 3 - Restrictions on the complexity of the problem - Maxima of: 2500 nodes, 40 y-values for 2-D profile plots and 3-D curve plots, 20 contour values, 3 fluid parameters

  1. A two-stage Bayesian design with sample size reestimation and subgroup analysis for phase II binary response trials.

    Science.gov (United States)

    Zhong, Wei; Koopmeiners, Joseph S; Carlin, Bradley P

    2013-11-01

    Frequentist sample size determination for binary outcome data in a two-arm clinical trial requires initial guesses of the event probabilities for the two treatments. Misspecification of these event rates may lead to a poor estimate of the necessary sample size. In contrast, the Bayesian approach, which treats the treatment effect as a random variable with some distribution, may offer a better, more flexible alternative. The Bayesian sample size proposed by Whitehead et al. (2008) for exploratory studies on efficacy justifies the acceptable minimum sample size by a "conclusiveness" condition. In this work, we introduce a new two-stage Bayesian design with sample size reestimation at the interim stage. Our design inherits the properties of good interpretation and easy implementation from Whitehead et al. (2008), generalizes their method to a two-sample setting, and uses a fully Bayesian predictive approach to reduce an overly large initial sample size when necessary. Moreover, our design can be extended to allow patient-level covariates via logistic regression, now adjusting the sample size within each subgroup based on interim analyses. We illustrate the benefits of our approach with a design in non-Hodgkin lymphoma with a simple binary covariate (patient gender), offering an initial step toward within-trial personalized medicine. Copyright © 2013 Elsevier Inc. All rights reserved.
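    The predictive machinery behind this kind of interim reestimation can be sketched with conjugate beta-binomial updating: scan candidate second-stage sizes and keep the smallest one whose predictive probability of a conclusive final result is acceptable. Everything below (priors, thresholds, interim counts) is an illustrative assumption, not the cited design:

```python
import numpy as np

rng = np.random.default_rng(0)

def predictive_prob_success(x_e, n_e, x_c, n_c, n_add, threshold=0.95,
                            a=1.0, b=1.0, n_sim=4000):
    """Monte Carlo predictive probability that, after n_add further
    patients per arm, the posterior P(p_E > p_C | data) exceeds
    `threshold`. Beta(a, b) priors on both response rates; all tuning
    values here are illustrative, not those of the cited design."""
    hits = 0
    for _ in range(n_sim):
        # draw rates from current posteriors, then simulate future data
        p_e = rng.beta(a + x_e, b + n_e - x_e)
        p_c = rng.beta(a + x_c, b + n_c - x_c)
        y_e = rng.binomial(n_add, p_e)
        y_c = rng.binomial(n_add, p_c)
        # posterior comparison at the final analysis (nested Monte Carlo)
        pe = rng.beta(a + x_e + y_e, b + n_e + n_add - x_e - y_e, size=500)
        pc = rng.beta(a + x_c + y_c, b + n_c + n_add - x_c - y_c, size=500)
        hits += np.mean(pe > pc) > threshold
    return hits / n_sim

# Interim: 12/20 responses on experimental vs 8/20 on control;
# reestimation would scan n_add and pick the smallest acceptable value.
print(predictive_prob_success(12, 20, 8, 20, n_add=30))
```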

  2. EVALPLOT2007, ENDF Plots Cross Section, Angular Distribution and Energy Distribution

    International Nuclear Information System (INIS)

    2007-01-01

    1 - Description of program or function: EVALPLOT is designed to plot evaluated cross sections in the ENDF/B format. The program plots cross sections, angular distributions, energy distributions and other parameters. IAEA1322/16: This version includes the updates up to January 30, 2007. Changes in ENDF/B-VII Format and procedures, as well as the evaluations themselves, make it impossible for versions of the ENDF/B pre-processing codes earlier than PREPRO 2007 (2007 Version) to accurately process current ENDF/B-VII evaluations. The present code can handle all existing ENDF/B-VI evaluations through release 8, which will be the last release of ENDF/B-VI. Modifications from previous versions: Evalplot Vers. 2007-1 (Jan. 2007): - checked against all ENDF/B-VII; - increased page size from 600,000 to 2,400,000; - increased the number of energies vs. Legendre coefficients from 20,000 to 80,000 (must be 1/30 page size); - added (n,remainder) to first plot. 2 - Method of solution: In the case of processing neutron and photon cross sections (MF=3 or 23) and parameters (MF=1 or 27), all data in a file (MF) is read, grouped together by type, and plotted. All reactions of a data type appear on the same plot. The data types for MF=1 and 3 (neutrons) are: (1) total, elastic, capture, fission and total inelastic; (2) (n,2n), (n,3n) and (n,n' charged particle); (3) (n,charged particle); (4) particle production (proton, deuteron, etc.) and damage; (5) total, first, second, etc. chance fission; (6) total inelastic, inelastic discrete levels and continuum; (7) (n,p) total and levels (only if levels are given); (8) (n,d) total and levels (only if levels are given); (9) (n,t) total and levels (only if levels are given); (10) (n, 3 He) total and levels (only if levels are given); (11) (n, 4 He) total and levels (only if levels are given); (12) parameters mu-bar, xi and gamma; (13) nu-bar - total, prompt and delayed. The data types for MF=23 and 27 (photons) are: (14) total, coherent

  3. Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use

    Science.gov (United States)

    Arthur, Steve M.; Schwartz, Charles C.

    1999-01-01

    We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km2 (mean = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km2 (mean = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km2 (mean = 224) for radiotracking data and 16-130 km2 (mean = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with the number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate (change in area [...] bears. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. Investigators that use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve accuracy and precision of home range estimates.
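    The MCP estimator itself is simple: the area of the convex hull of the observed locations. A minimal sketch with hypothetical planar coordinates in km (note that scipy's ConvexHull reports 2-D area in its `volume` attribute) illustrates the asymptotic growth of MCP area with sample size described above:

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)

def mcp_area(locations):
    """Area of the minimum convex polygon around x/y locations
    (planar coordinates, e.g. UTM in km); in 2-D, ConvexHull's
    `volume` is the enclosed area and `area` is the perimeter."""
    return ConvexHull(np.asarray(locations)).volume

# Hypothetical bear locations: MCP area grows with sample size and
# then saturates, mirroring the asymptote reported for the GPS data.
true_locs = rng.normal(0.0, 8.0, size=(400, 2))
for n in (15, 60, 120, 400):
    subset = true_locs[rng.choice(400, size=n, replace=False)]
    print(n, round(mcp_area(subset), 1), "km^2")
```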

  4. Modified FlowCAM procedure for quantifying size distribution of zooplankton with sample recycling capacity.

    Directory of Open Access Journals (Sweden)

    Esther Wong

    Full Text Available We have developed a modified FlowCAM procedure for efficiently quantifying the size distribution of zooplankton. The modified method offers the following new features: (1) it prevents animals from settling and clogging, with constant bubbling in the sample container; (2) it prevents damage to sample animals and facilitates recycling by replacing the built-in peristaltic pump with an external syringe pump which, drawing air from the receiving conical flask to generate negative pressure (i.e. acting as a vacuum pump), creates a steady flow and transfers plankton from the sample container through the main flowcell of the imaging system and finally into the receiving flask; (3) it aligns samples in advance of imaging and prevents clogging with an additional flowcell placed ahead of the main flowcell. These modifications were designed to overcome the difficulties of applying the standard FlowCAM procedure to studies where the number of individuals per sample is small, given that the FlowCAM can only image a subset of a sample. Our effective recycling procedure allows users to pass the same sample through the FlowCAM many times (i.e. bootstrapping the sample) in order to generate a good size distribution. Although more advanced FlowCAM models are equipped with syringe pumps and Field of View (FOV) flowcells which can image all particles passing through the flow field, these advanced setups are very expensive, offer limited syringe and flowcell sizes, and do not guarantee recycling. In contrast, our modifications are inexpensive and flexible. Finally, we compared the biovolumes estimated by automated FlowCAM image analysis with conventional manual measurements, and found that the size of an individual zooplankter can be estimated by the FlowCAM image system after ground truthing.

  5. Evaluating Plot Designs for the Tropics

    Science.gov (United States)

    Paul C. van Deusen; Bruce Bayle

    1991-01-01

    Theory and procedures are reviewed for determining the best type of plot for a given forest inventory. A general methodology is given that clarifies the relationship between different plot designs and the associated methods to produce the inventory estimates.

  6. Emotional characters for automatic plot creation

    NARCIS (Netherlands)

    Theune, Mariet; Rensen, S.; op den Akker, Hendrikus J.A.; Heylen, Dirk K.J.; Nijholt, Antinus; Göbel, S.; Spierling, U.; Hoffmann, A.; Iurgel, I.; Schneider, O.; Dechau, J.; Feix, A.

    The Virtual Storyteller is a multi-agent framework for automatic story generation. In this paper we describe how plots emerge from the actions of semi-autonomous character agents, focusing on the influence of the characters’ emotions on plot development.

  7. The interpretation of quartz optically stimulated luminescence equivalent dose versus time plots

    International Nuclear Information System (INIS)

    Bailey, R.M.

    2000-01-01

    Numerical modelling has shown that the form of the quartz OSL shine plateau (hereafter the 'D_e(t) plot') is influenced by the effects of phototransferred TL in the ∼110 deg. C region. It is suggested also that the presence of multiple OSL components (as described in: Partial bleaching and the decay form characteristics of quartz OSL, Radiat. Meas. 27, 123-136; and The form of the optically stimulated luminescence signal of quartz: implications for dating, unpublished PhD thesis, University of London) affects the form of the D_e(t) plot. Laboratory measurements of a fully reset and artificially dosed sample yielded non-flat D_e(t) plots, the deviation being greater for the larger of the two simulated palaeodoses, in accordance with theoretical predictions. It is suggested that the so-called 'shine plateau' test is of limited use in assessing the bleaching history of quartz sediments

  8. Estimation of sample size and testing power (part 6).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-03-01

    The design of one factor with k levels (k ≥ 3) refers to research that involves only one experimental factor, with k levels (k ≥ 3), and no arrangement of other important non-experimental factors. This paper introduces the estimation of sample size and testing power for quantitative data and for qualitative data with a binary response variable under the design of one factor with k levels (k ≥ 3).
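    For the quantitative-data case, the usual one-way ANOVA machinery gives the flavor of such a calculation. A sketch using statsmodels, assuming a medium standardized effect in Cohen's f (per statsmodels' convention, solve_power here returns the total number of observations across the k groups):

```python
from statsmodels.stats.power import FTestAnovaPower

# One factor with k = 3 levels; Cohen's f = 0.25 ("medium") is an
# illustrative assumption, as are alpha and the target power.
n_total = FTestAnovaPower().solve_power(effect_size=0.25, k_groups=3,
                                        alpha=0.05, power=0.80)
print(round(n_total))  # total N; roughly 52-53 subjects per group
```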

  9. On the Structure of Cortical Microcircuits Inferred from Small Sample Sizes.

    Science.gov (United States)

    Vegué, Marina; Perin, Rodrigo; Roxin, Alex

    2017-08-30

    The structure in cortical microcircuits deviates from what would be expected in a purely random network, which has been seen as evidence of clustering. To address this issue, we sought to reproduce the nonrandom features of cortical circuits by considering several distinct classes of network topology, including clustered networks, networks with distance-dependent connectivity, and those with broad degree distributions. To our surprise, we found that all of these qualitatively distinct topologies could account equally well for all reported nonrandom features despite being easily distinguishable from one another at the network level. This apparent paradox was a consequence of estimating network properties given only small sample sizes. In other words, networks that differ markedly in their global structure can look quite similar locally. This makes inferring network structure from small sample sizes, a necessity given the technical difficulty inherent in simultaneous intracellular recordings, problematic. We found that a network statistic called the sample degree correlation (SDC) overcomes this difficulty. The SDC depends only on parameters that can be estimated reliably given small sample sizes and is an accurate fingerprint of every topological family. We applied the SDC criterion to data from rat visual and somatosensory cortex and discovered that the connectivity was not consistent with any of these main topological classes. However, we were able to fit the experimental data with a more general network class, of which all previous topologies were special cases. The resulting network topology could be interpreted as a combination of physical spatial dependence and nonspatial, hierarchical clustering. SIGNIFICANCE STATEMENT The connectivity of cortical microcircuits exhibits features that are inconsistent with a simple random network. Here, we show that several classes of network models can account for this nonrandom structure despite qualitative differences in

  10. Particle Sampling and Real Time Size Distribution Measurement in H2/O2/TEOS Diffusion Flame

    International Nuclear Information System (INIS)

    Ahn, K.H.; Jung, C.H.; Choi, M.; Lee, J.S.

    2001-01-01

    Growth characteristics of silica particles have been studied experimentally using an in situ particle sampling technique in an H2/O2/tetraethylorthosilicate (TEOS) diffusion flame with a carefully devised sampling probe. Particle morphology and size are compared between particles sampled by the local thermophoretic method from inside the flame and by the electrostatic collector method after a dilution sampling probe. The Transmission Electron Microscope (TEM) image-processed data from these two sampling techniques are compared with Scanning Mobility Particle Sizer (SMPS) measurements and show good agreement. The effects of flame conditions and TEOS flow rates on silica particle size distributions are also investigated using the new particle dilution sampling probe. It is found that the particle size distribution and morphology are mostly governed by the coagulation and sintering processes in the flame. As the flame temperature increases, coalescence or sintering becomes an important particle growth mechanism that counteracts coagulation. However, if the flame temperature is not high enough to sinter the aggregated particles, coagulation is the dominant particle growth mechanism. Under certain flame conditions, secondary particle formation is observed, resulting in a bimodal particle size distribution.

  11. Comparison of fish-community size spectra based on length ...

    African Journals Online (AJOL)

    Estimates of fish-community size spectra are promising indicators of the impact of fishing on fish assemblages. Size spectra consist of logarithmic graphs of abundance plotted against fish body size. Size spectra may either be constructed from length frequency data or estimated from the mean sizes and abundances of the ...
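    As background to this record: a size spectrum is usually summarized by the slope of a log-log regression of abundance on body size. A minimal sketch of one common construction (a normalized spectrum from length data; binning and normalization conventions vary across studies, and the data here are synthetic):

```python
import numpy as np

def size_spectrum_slope(lengths, bins):
    """Slope of the normalized size spectrum: linear regression of
    log(abundance / bin width) on log(bin midpoint)."""
    counts, edges = np.histogram(lengths, bins=bins)
    widths = np.diff(edges)
    mids = 0.5 * (edges[:-1] + edges[1:])
    keep = counts > 0  # drop empty bins before taking logs
    slope, _intercept = np.polyfit(np.log(mids[keep]),
                                   np.log(counts[keep] / widths[keep]), 1)
    return slope

# Synthetic length-frequency data (cm) with logarithmic size bins
lengths = np.random.default_rng(2).lognormal(mean=3.0, sigma=0.5, size=2000)
print(size_spectrum_slope(lengths, bins=np.logspace(0.8, 2.2, 12)))
```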

  12. Split-plot designs for robotic serial dilution assays.

    Science.gov (United States)

    Buzas, Jeffrey S; Wager, Carrie G; Lansky, David M

    2011-12-01

    This article explores effective implementation of split-plot designs in serial dilution bioassay using robots. We show that the shortest path for a robot to fill plate wells for a split-plot design is equivalent to the shortest common supersequence problem in combinatorics. We develop an algorithm for finding the shortest common supersequence, provide an R implementation, and explore the distribution of the number of steps required to implement split-plot designs for bioassay through simulation. We also show how to construct collections of split plots that can be filled in a minimal number of steps, thereby demonstrating that split-plot designs can be implemented with nearly the same effort as strip-plot designs. Finally, we provide guidelines for modeling data that result from these designs. © 2011, The International Biometric Society.
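    The paper's R implementation is not reproduced here, but the underlying idea can be sketched for the two-sequence case (the general k-sequence problem is NP-hard) with a standard dynamic program:

```python
def shortest_common_supersequence(a, b):
    """Dynamic-programming SCS of two sequences: the shortest sequence
    containing both a and b as subsequences. In the bioassay setting,
    a and b would be per-row filling orders and the SCS length bounds
    the number of robot passes; this two-sequence DP is a sketch only."""
    m, n = len(a), len(b)
    # dp[i][j] = SCS length of the suffixes a[i:] and b[j:]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m, -1, -1):
        for j in range(n, -1, -1):
            if i == m:
                dp[i][j] = n - j
            elif j == n:
                dp[i][j] = m - i
            elif a[i] == b[j]:
                dp[i][j] = 1 + dp[i + 1][j + 1]
            else:
                dp[i][j] = 1 + min(dp[i + 1][j], dp[i][j + 1])
    # reconstruct one optimal supersequence by walking the table
    out, i, j = [], 0, 0
    while i < m and j < n:
        if a[i] == b[j]:
            out.append(a[i]); i += 1; j += 1
        elif dp[i + 1][j] <= dp[i][j + 1]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:]); out.extend(b[j:])
    return out

print("".join(shortest_common_supersequence("ABCBDAB", "BDCABA")))  # length 9
```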

  13. The Sample Size Influence in the Accuracy of the Image Classification of the Remote Sensing

    Directory of Open Access Journals (Sweden)

    Thomaz C. e C. da Costa

    2004-12-01

    Full Text Available Land-use/land-cover maps produced by classification of remote sensing images incorporate uncertainty. This uncertainty is measured by accuracy indices computed from reference samples. The size of the reference sample is often defined by a binomial approximation without the use of a pilot sample; in this way, accuracy is not estimated but fixed a priori. If the estimated accuracy diverges from the a priori value, the actual sampling error will deviate from the expected error. Determining the size using a pilot sample (the theoretically correct procedure) is justified when no accuracy estimate exists for the study area, considering the utility of the remote sensing product.
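    The binomial shortcut that the abstract criticizes is easy to state. A sketch of the standard formula, with an assumed a priori accuracy and tolerance (the paper's point being that p_expected is only a guess unless a pilot sample is used):

```python
import math

def reference_sample_size(p_expected=0.85, half_width=0.05, z=1.96):
    """Binomial approximation for the number of reference samples needed
    to estimate map accuracy p to within +/- half_width at ~95% confidence.
    p_expected and half_width are illustrative assumptions; fixing p a
    priori is exactly the shortcut the abstract warns about."""
    return math.ceil(z * z * p_expected * (1.0 - p_expected)
                     / half_width ** 2)

print(reference_sample_size())                  # 196 reference samples
print(reference_sample_size(p_expected=0.70))   # 323: lower accuracy needs more
```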

  14. Application of mapped plots for single-owner forest surveys

    Science.gov (United States)

    Paul C. Van Deusen; Francis Roesch

    2009-01-01

    Mapped plots are used for the national forest inventory conducted by the U.S. Forest Service. Mapped plots are also useful for single-ownership inventories. Mapped plots can handle boundary overlap and can provide less variable estimates for specified forest conditions. Mapping is a good fit for fixed-plot inventories where the fixed-area plot is used for both mapping...

  15. TORCAPP: time-dependent cyclotron orbit calculation and plotting package

    International Nuclear Information System (INIS)

    Maddox, L.B.; McNeilly, G.S.

    1979-11-01

    TORCAPP calculates the motion of charged particles in electromagnetic fields with time as the independent variable, and produces a variety of printed and plotted output of results. Finite-size beam behavior is studied conveniently by following groups of particles which define an appropriate phase space area. Since time is the independent variable, general motion in the near-median-plane region may be followed. This includes, for example, loops not enclosing the origin and strongly radial motions. Thus, TORCAPP is particularly useful for injection studies for isochronous cyclotrons, or other devices with near-median-plane charged particle motion

  16. Surveillance of Site A and Plot M report for 1991

    International Nuclear Information System (INIS)

    Golchert, N.W.

    1992-05-01

    The results of the environmental surveillance program conducted at Site A/Plot M in the Palos Forest Preserve area for CY 1991 are presented. The surveillance program is the ongoing remedial action that resulted from the 1976-1978 radiological characterization of the site. That study determined that very low levels of hydrogen-3 (as tritiated water) had migrated from the burial ground and were present in two nearby hand-pumped picnic wells. The current program consists of sample collection and analysis of air, surface and subsurface water, and bottom sediment. The results of the analyses are used to (1) determine the migration pathway of water from the burial ground (Plot M) to the hand-pumped picnic wells, (2) establish if buried radionuclides other than hydrogen-3 have migrated, and (3) generally characterize the radiological environment of the area. Hydrogen-3 in the Red Gate Woods picnic wells was still detected this year, but the average and maximum concentrations were significantly less than found earlier. Tritiated water continues to be detected in a number of wells, boreholes, dolomite holes, and a surface stream. For many years it was the only radionuclide found to have migrated in measurable quantities. Analyses since 1984 have indicated the presence of low levels of strontium-90 in water from a number of boreholes next to Plot M. The available data do not allow a firm conclusion as to whether the presence of this nuclide represents recent migration or movement that may have occurred before Plot M was capped. The results of the surveillance program continue to indicate that the radioactivity remaining at Site A/Plot M does not endanger the health or safety of the public visiting the site or those living in the vicinity.

  17. Surveillance of Site A and Plot M. Report for 1996

    International Nuclear Information System (INIS)

    Golchert, N.W.

    1997-05-01

    The results of the environmental surveillance program conducted at Site A/Plot M in the Palos Forest Preserve area for 1996 are presented. The surveillance program is the ongoing remedial action that resulted from the 1976-1978 radiological characterization of the site. That study determined that very low levels of hydrogen-3 (as tritiated water) had migrated from the burial ground and were present in two nearby hand-pumped picnic wells. The current program consists of sample collection and analysis of air, surface and subsurface water, and bottom sediment. The results of the analyses are used to (1) monitor the migration pathway of water from the burial ground (Plot M) to the hand-pumped picnic wells, (2) establish if buried radionuclides other than hydrogen-3 have migrated, and (3) generally characterize the radiological environment of the area. Hydrogen-3 in the Red Gate Woods picnic wells was still detected this year, but the average and maximum concentrations were significantly less than found earlier. Tritiated water continues to be detected in a number of wells, boreholes, dolomite holes, and a surface stream. For many years it was the only radionuclide found to have migrated in measurable quantities. Analyses since 1984 have indicated the presence of low levels of strontium-90 in water from a number of boreholes next to Plot M. The available data does not allow a firm conclusion as to whether the presence of this nuclide represents recent migration or movement that may have occurred before Plot M was capped. The results of the surveillance program continue to indicate that the radioactivity remaining at Site A/Plot M does not endanger the health or safety of the public visiting the site, using the picnic area, or living in the vicinity

  18. Unimodal tree size distributions possibly result from relatively strong conservatism in intermediate size classes.

    Directory of Open Access Journals (Sweden)

    Yue Bin

    Full Text Available Tree size distributions have long been of interest to ecologists and foresters because they reflect fundamental demographic processes. Previous studies have assumed that size distributions are often associated with population trends or with the degree of shade tolerance. We tested these associations for 31 tree species in a 20 ha plot in a Dinghushan south subtropical forest in China. These species varied widely in growth form and shade-tolerance. We used 2005 and 2010 census data from that plot. We found that 23 species had reversed J shaped size distributions, and eight species had unimodal size distributions in 2005. On average, modal species had lower recruitment rates than reversed J species, while showing no significant difference in mortality rates, per capita population growth rates or shade-tolerance. We compared the observed size distributions with the equilibrium distributions projected from observed size-dependent growth and mortality. We found that observed distributions generally had the same shape as predicted equilibrium distributions in both unimodal and reversed J species, but there were statistically significant, important quantitative differences between observed and projected equilibrium size distributions in most species, suggesting that these populations are not at equilibrium and that this forest is changing over time. Almost all modal species had U-shaped size-dependent mortality and/or growth functions, with turning points of both mortality and growth at intermediate size classes close to the peak in the size distribution. These results show that modal size distributions do not necessarily indicate either population decline or shade-intolerance. Instead, the modal species in our study were characterized by a life history strategy of relatively strong conservatism in an intermediate size class, leading to very low growth and mortality in that size class, and thus to a peak in the size distribution at intermediate sizes.

  19. Box Plots in the Australian Curriculum

    Science.gov (United States)

    Watson, Jane M.

    2012-01-01

    This article compares the definition of "box plot" as used in the "Australian Curriculum: Mathematics" with other definitions used in the education community; describes the difficulties students experience when dealing with box plots; and discusses the elaboration that is necessary to enable teachers to develop the knowledge…

  20. Hypothetical Outcome Plots Outperform Error Bars and Violin Plots for Inferences about Reliability of Variable Ordering.

    Science.gov (United States)

    Hullman, Jessica; Resnick, Paul; Adar, Eytan

    2015-01-01

    Many visual depictions of probability distributions, such as error bars, are difficult for users to accurately interpret. We present and study an alternative representation, Hypothetical Outcome Plots (HOPs), that animates a finite set of individual draws. In contrast to the statistical background required to interpret many static representations of distributions, HOPs require relatively little background knowledge to interpret. Instead, HOPs enables viewers to infer properties of the distribution using mental processes like counting and integration. We conducted an experiment comparing HOPs to error bars and violin plots. With HOPs, people made much more accurate judgments about plots of two and three quantities. Accuracy was similar with all three representations for most questions about distributions of a single quantity.

  1. Measurement of the ω→π+π−π0 Dalitz plot distribution

    Directory of Open Access Journals (Sweden)

    P. Adlarson

    2017-07-01

    Full Text Available Using the production reactions pd → 3He ω and pp → pp ω, the Dalitz plot distribution for the ω → π+π−π0 decay is studied with the WASA detector at COSY, based on a combined data sample of (4.408 ± 0.042) × 10^4 events. The Dalitz plot density is parametrised by a product of the P-wave phase space and a polynomial expansion in the normalised polar Dalitz plot variables Z and ϕ. For the first time, a deviation from pure P-wave phase space is observed, with a significance of 4.1σ. The deviation is parametrised by a linear term 1 + 2αZ, with α determined to be +0.147 ± 0.036, consistent with the expectations of ρ-meson-type final-state interactions of the P-wave pion pairs.

  2. Finding Your Way out of the Forest without a Trail of Bread Crumbs: Development and Evaluation of Two Novel Displays of Forest Plots

    Science.gov (United States)

    Schild, Anne H. E.; Voracek, Martin

    2015-01-01

    Research has shown that forest plots are a gold standard in the visualization of meta-analytic results. However, research on the general interpretation of forest plots and the role of researchers' meta-analysis experience and field of study is still unavailable. Additionally, the traditional display of effect sizes, confidence intervals, and…

  3. The current state of taxation and regulation of land plot size of industrial areas in Ukraine

    Directory of Open Access Journals (Sweden)

    М.А. Malashevskyy

    2016-04-01

    Full Text Available An analysis of the domestic legal and regulatory framework for land taxation has been performed. The share of industrial land within the land resources of Ukraine has been investigated. The legal basis for determining the land plot areas necessary for industrial activities has been considered, and the legal regime of industrial land examined. The paper concludes that methods for determining excess areas for taxation purposes need improvement and that land use in populated areas should be optimized.

  4. Assessing terpene content variability of whitebark pine in order to estimate representative sample size

    Directory of Open Access Journals (Sweden)

    Stefanović Milena

    2013-01-01

    Full Text Available In studies of population variability, particular attention has to be paid to the selection of a representative sample. The aim of this study was to assess the size of a new representative sample on the basis of the variability of the chemical content of the initial sample, using a whitebark pine population as an example. The statistical analysis covered the content of 19 characteristics (terpene hydrocarbons and their derivatives) in an initial sample of 10 elements (trees). It was determined that the new sample should contain 20 trees so that the mean value calculated from it represents the basic set with a probability higher than 95%. Determining the lower limit of the representative sample size that guarantees a satisfactory reliability of generalization proved to be very important for the cost efficiency of the research. [Project of the Ministry of Science of the Republic of Serbia, Nos. OI-173011, TR-37002 and III-43007]
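    The computation behind such a result is typically the classic pilot-sample formula n = (t · s / E)^2, iterated because the t quantile itself depends on n. A sketch with purely illustrative pilot statistics (not the terpene data):

```python
import math
from scipy import stats

def representative_sample_size(std_dev, mean, rel_error=0.1, conf=0.95):
    """Pilot-sample formula n = (t * s / E)^2, with the allowed absolute
    error E expressed as a fraction of the pilot mean. Iterated because
    the t quantile depends on n; all inputs here are illustrative."""
    E = rel_error * mean
    n = 2
    for _ in range(50):  # fixed-point iteration usually converges quickly
        t = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)
        n_new = max(math.ceil((t * std_dev / E) ** 2), 2)
        if n_new == n:
            break
        n = n_new
    return n

print(representative_sample_size(std_dev=4.2, mean=12.0))  # hypothetical content (%)
```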

  5. Methodology for sample preparation and size measurement of commercial ZnO nanoparticles

    Directory of Open Access Journals (Sweden)

    Pei-Jia Lu

    2018-04-01

    Full Text Available This study discusses strategies for sample preparation to acquire images of sufficient quality for size characterization by scanning electron microscopy (SEM), using two commercial ZnO nanoparticles of different surface properties as a demonstration. The central idea is that micrometer-sized aggregates of ZnO in powdered form need first to be broken down to nanosized particles through an appropriate process, generating a nanoparticle dispersion, before being deposited on a flat surface for SEM observation. Analytical tools such as contact angle, dynamic light scattering and zeta potential measurements have been utilized to optimize the procedure for sample preparation and to check the quality of the results. Meanwhile, measurement of zeta potential values on flat surfaces also provides critical information and saves considerable time and effort in selecting a suitable substrate that attracts and keeps particles of different properties on the surface without further aggregation. This simple, low-cost methodology can be applied generally to size characterization of commercial ZnO nanoparticles with limited information from vendors. Keywords: Zinc oxide, Nanoparticles, Methodology

  6. Evaluation of Approaches to Analyzing Continuous Correlated Eye Data When Sample Size Is Small.

    Science.gov (United States)

    Huang, Jing; Huang, Jiayan; Chen, Yong; Ying, Gui-Shuang

    2018-02-01

    To evaluate the performance of commonly used statistical methods for analyzing continuous correlated eye data when the sample size is small. We simulated correlated continuous data from two designs: (1) two eyes of a subject in two comparison groups; (2) two eyes of a subject in the same comparison group, under various sample sizes (5-50), inter-eye correlations (0-0.75) and effect sizes (0-0.8). Simulated data were analyzed using the paired t-test, the two-sample t-test, the Wald test and score test based on generalized estimating equations (GEE), and the F-test based on a linear mixed effects model (LMM). We compared type I error rates and statistical power, and illustrated the analysis approaches by analyzing two real datasets. In design 1, the paired t-test and LMM perform better than GEE, with nominal type I error rates and higher statistical power. In design 2, no test performs uniformly well: the two-sample t-test (using the average of the two eyes or a random eye) achieves better control of the type I error but yields lower statistical power. In both designs, the GEE Wald test inflates the type I error rate and the GEE score test has lower power. When the sample size is small, some commonly used statistical methods do not perform well. The paired t-test and LMM perform best when the two eyes of a subject are in two different comparison groups, and the t-test using the average of the two eyes performs best when the two eyes are in the same comparison group. The study design should be considered when selecting the appropriate analysis approach.
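    The design-1 simulation is easy to reproduce in outline. The sketch below (illustrative parameters, not the paper's full factorial of settings) generates bivariate-normal eye pairs under the null and contrasts the paired t-test with a naive two-sample t-test that ignores the inter-eye correlation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def type1_error_design1(n_subjects=10, rho=0.5, n_sim=5000, alpha=0.05):
    """Design 1: the two eyes of each subject fall in different
    comparison groups; under H0 both eyes share mean 0 with inter-eye
    correlation rho. Compares the paired t-test (correct pairing) with
    a two-sample t-test that ignores the correlation."""
    cov = [[1.0, rho], [rho, 1.0]]
    rej_paired = rej_twosample = 0
    for _ in range(n_sim):
        eyes = rng.multivariate_normal([0.0, 0.0], cov, size=n_subjects)
        rej_paired += stats.ttest_rel(eyes[:, 0], eyes[:, 1]).pvalue < alpha
        rej_twosample += stats.ttest_ind(eyes[:, 0], eyes[:, 1]).pvalue < alpha
    return rej_paired / n_sim, rej_twosample / n_sim

# Paired test stays near the nominal 5%; the naive two-sample test is
# conservative here because positive correlation is ignored.
print(type1_error_design1())
```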

  7. The Application of TD/GC/NICI-MS with an Al2O3-PLOT-S Column for the Determination of Perfluoroalkylcycloalkanes in the Atmosphere.

    Science.gov (United States)

    Ren, Yu; Schlager, Hans; Martin, Damien

    2014-01-01

    A modified method for the quantitative determination of atmospheric perfluoroalkylcycloalkanes (PFCs), using thermal desorption coupled with gas chromatography and detection by negative ion chemical ionization-mass spectrometry, was developed. In the optimized analytical system, a commercially available Al2O3 porous layer open tubular (PLOT) capillary column (30 m × 0.25 mm) deactivated with Na2SO4 was used for the separation of PFCs. Improvements in the separation and identification of PFCs and in the limits of detection achieved with this method and column are presented. The method was successfully applied to determine atmospheric background concentrations of a range of PFCs in samples collected at a rural site in Germany. The results of this study suggest that the method, using the Al2O3-PLOT-S capillary column, has good sensitivity and selectivity and can be deployed as a routine laboratory procedure for the analysis of PFCs in future research. In addition, the ability of this column to separate the isomers of one of the lower-boiling PFCs (perfluorodimethylcyclobutane) and to resolve perfluoroethylcyclohexane offers the opportunity for single-column analysis of multiple PFCs.

  8. Split-plot fractional designs: Is minimum aberration enough?

    DEFF Research Database (Denmark)

    Kulahci, Murat; Ramirez, Jose; Tobias, Randy

    2006-01-01

    Split-plot experiments are commonly used in industry for product and process improvement. Recent articles on designing split-plot experiments concentrate on minimum aberration as the design criterion. Minimum aberration has been criticized as a design criterion for completely randomized fractional factorial designs, and alternative criteria, such as the maximum number of clear two-factor interactions, have been suggested (Wu and Hamada, 2000). The need for alternatives to minimum aberration is even more acute for split-plot designs. In a standard split-plot design, there are several types of two-factor interactions [...] for completely randomized designs. Consequently, we provide a modified version of the maximum number of clear two-factor interactions design criterion to be used for split-plot designs.

  9. Spatial trends in leaf size of Amazonian rainforest trees

    Science.gov (United States)

    Malhado, A. C. M.; Malhi, Y.; Whittaker, R. J.; Ladle, R. J.; Ter Steege, H.; Phillips, O. L.; Butt, N.; Aragão, L. E. O. C.; Quesada, C. A.; Araujo-Murakami, A.; Arroyo, L.; Peacock, J.; Lopez-Gonzalez, G.; Baker, T. R.; Anderson, L. O.; Almeida, S.; Higuchi, N.; Killeen, T. J.; Monteagudo, A.; Neill, D.; Pitman, N.; Prieto, A.; Salomão, R. P.; Vásquez-Martínez, R.; Laurance, W. F.

    2009-08-01

    Leaf size influences many aspects of tree function such as rates of transpiration and photosynthesis and, consequently, often varies in a predictable way in response to environmental gradients. The recent development of pan-Amazonian databases based on permanent botanical plots has now made it possible to assess trends in leaf size across environmental gradients in Amazonia. Previous plot-based studies have shown that the community structure of Amazonian trees breaks down into at least two major ecological gradients corresponding with variations in soil fertility (decreasing from southwest to northeast) and length of the dry season (increasing from northwest to south and east). Here we describe the geographic distribution of leaf size categories based on 121 plots distributed across eight South American countries. We find that the Amazon forest is predominantly populated by tree species and individuals in the mesophyll size class (20.25-182.25 cm2). The geographic distribution of species and individuals with large leaves (>20.25 cm2) is complex but is generally characterized by a higher proportion of such trees in the northwest of the region. Spatially corrected regressions reveal weak correlations between the proportion of large-leaved species and metrics of water availability. We also find a significant negative relationship between leaf size and wood density.

  10. A new method to study ferroelectrics using the remanent Henkel plots

    Science.gov (United States)

    Vopson, Melvin M.

    2018-05-01

    Analysis of experimental curves constructed from dc demagnetization and isothermal remanent magnetization measurements, known as Henkel and delta-M plots, has served for over 53 years as an important tool for characterizing interactions in ferromagnets. In this article we address the question of whether the same experimental technique can be applied to the study of ferroelectric systems. The successful measurement of the equivalent dc depolarization and isothermal remanent polarization curves, and the construction of Henkel and delta-P plots for ferroelectrics, is reported here. A full measurement protocol is provided, together with experimental examples for two ferroelectric ceramic samples. This new measurement technique is an invaluable experimental tool that could be used to further advance our understanding of ferroelectric materials and their applications.
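    The standard magnetic construction that this work transfers to ferroelectrics is the Wohlfarth/Henkel relation: for non-interacting single-domain particles the dc-demagnetization remanence satisfies m_d(H) = 1 - 2·m_r(H), so the deviation delta_M(H) = m_d(H) - (1 - 2·m_r(H)) diagnoses interactions. A sketch with hypothetical normalized remanence curves (the ferroelectric delta-P analogue substitutes remanent polarizations):

```python
import numpy as np

def delta_m(m_r, m_d):
    """Wohlfarth/Henkel construction: deviation from the non-interacting
    prediction m_d(H) = 1 - 2*m_r(H). Positive values are usually read
    as magnetizing interactions, negative as demagnetizing. m_r and m_d
    are IRM and dc-demagnetization remanences normalized to saturation
    remanence; the delta-P analogue uses remanent polarizations."""
    return np.asarray(m_d) - (1.0 - 2.0 * np.asarray(m_r))

h   = np.array([0.2, 0.4, 0.6, 0.8, 1.0])        # field, arbitrary units
m_r = np.array([0.05, 0.20, 0.55, 0.85, 1.00])   # hypothetical measured curves
m_d = np.array([0.95, 0.55, -0.20, -0.75, -1.00])
print(np.column_stack((h, delta_m(m_r, m_d))))
```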

  11. Color palette: Plotting guide for use with GSMAP and GSDRAW digital cartographic software

    International Nuclear Information System (INIS)

    Schilling, S.P.; Thompson, R.A.

    1989-01-01

    Guidelines for plotting a variety of colors and patterns using the GSMAP and GSDRAW digital cartographic programs have been developed. These color and pattern variations can be used to fill polygons (areas) on maps, charts, or diagrams. A batch processing file for plotting a sample color/pattern palette on a Hewlett Packard 7585B 8-pen plotter using GSDRAW software is provided on the disk. The detailed instructions, batch processing files, and variables used to construct the palette give the user ready access to 99 fill patterns and aid in designing other useful combinations. 2 refs., 2 figs

  12. Hypothetical Outcome Plots Outperform Error Bars and Violin Plots for Inferences about Reliability of Variable Ordering.

    Directory of Open Access Journals (Sweden)

    Jessica Hullman

    Full Text Available Many visual depictions of probability distributions, such as error bars, are difficult for users to accurately interpret. We present and study an alternative representation, Hypothetical Outcome Plots (HOPs), that animates a finite set of individual draws. In contrast to the statistical background required to interpret many static representations of distributions, HOPs require relatively little background knowledge to interpret. Instead, HOPs enables viewers to infer properties of the distribution using mental processes like counting and integration. We conducted an experiment comparing HOPs to error bars and violin plots. With HOPs, people made much more accurate judgments about plots of two and three quantities. Accuracy was similar with all three representations for most questions about distributions of a single quantity.

  13. Impact of sample size on principal component analysis ordination of an environmental data set: effects on eigenstructure

    Directory of Open Access Journals (Sweden)

    Shaukat S. Shahid

    2016-06-01

    Full Text Available In this study, we used bootstrap simulation of a real data set to investigate the impact of sample size (N = 20, 30, 40 and 50) on the eigenvalues and eigenvectors resulting from principal component analysis (PCA). For each sample size, 100 bootstrap samples were drawn from an environmental data matrix of water quality variables (p = 22) from a small data set comprising 55 samples (stations from which water samples were collected). Because data sets in ecology and the environmental sciences are invariably small, owing to the high cost of collecting and analyzing samples, we restricted our study to relatively small sample sizes. We focused attention on comparison of the first 6 eigenvectors and the first 10 eigenvalues. Data sets were compared using agglomerative cluster analysis with Ward's method, which does not require any stringent distributional assumptions.
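    The bootstrap scheme is straightforward to sketch. The code below (synthetic placeholder data of the same 55 × 22 shape; PCA taken on the correlation matrix, which is an assumption) resamples stations and tracks how the spread of the leading eigenvalues shrinks as the sample size grows:

```python
import numpy as np

rng = np.random.default_rng(4)

def bootstrap_eigenvalues(X, n_boot=100, sample_size=30, n_keep=10):
    """Draw bootstrap samples of a given size from the rows of X
    (stations x variables), run PCA on the correlation matrix of each,
    and return the mean and spread of the leading eigenvalues."""
    eigs = np.empty((n_boot, n_keep))
    for b in range(n_boot):
        idx = rng.choice(len(X), size=sample_size, replace=True)
        R = np.corrcoef(X[idx].T)                # PCA via correlation matrix
        vals = np.sort(np.linalg.eigvalsh(R))[::-1]
        eigs[b] = vals[:n_keep]
    return eigs.mean(axis=0), eigs.std(axis=0)

X = rng.normal(size=(55, 22))                    # placeholder for the 55 x 22 data
for n in (20, 30, 40, 50):
    _mean_eigs, sd_eigs = bootstrap_eigenvalues(X, sample_size=n)
    print(n, np.round(sd_eigs[:3], 3))           # leading eigenvalues stabilize with n
```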

  14. The Heuristic Interpretation of Box Plots

    Science.gov (United States)

    Lem, Stephanie; Onghena, Patrick; Verschaffel, Lieven; Van Dooren, Wim

    2013-01-01

    Box plots are frequently used, but are often misinterpreted by students. Especially the area of the box in box plots is often misinterpreted as representing number or proportion of observations, while it actually represents their density. In a first study, reaction time evidence was used to test whether heuristic reasoning underlies this…

  15. B-graph sampling to estimate the size of a hidden population

    NARCIS (Netherlands)

    Spreen, M.; Bogaerts, S.

    2015-01-01

    Link-tracing designs are often used to estimate the size of hidden populations by utilizing the relational links between their members. A major problem in studies of hidden populations is the lack of a convenient sampling frame. The most frequently applied design in studies of hidden populations is

  16. System for histogram entry, retrieval, and plotting

    International Nuclear Information System (INIS)

    Kellogg, M.; Gallup, J.M.; Shlaer, S.; Spencer, N.

    1977-10-01

    This manual describes the systems for producing histograms and dot plots that were designed for use in connection with the Q general-purpose data-acquisition system. These systems allow for the creation of histograms; the entry, retrieval, and plotting of data in the form of histograms; and the dynamic display of scatter plots as data are acquired. Although the systems are designed for use with Q, they can also be used as a part of other applications. 3 figures

  17. Surveillance of Site A and Plot M, Report for 2008.

    Energy Technology Data Exchange (ETDEWEB)

    Golchert, N. W.

    2009-05-07

    The results of the environmental surveillance program conducted at Site A/Plot M in the Palos Forest Preserve area for Calendar Year 2008 are presented. Based on the results of the 1976-1978 radiological characterization of the site, a determination was made that a surveillance program be established. The characterization study determined that very low levels of hydrogen-3 (as tritiated water) had migrated from the burial ground and were present in two nearby hand-pumped picnic wells. The current surveillance program began in 1980 and consists of sample collection and analysis of surface and subsurface water. The results of the analyses are used to (1) monitor the migration pathway of hydrogen-3 contaminated water from the burial ground (Plot M) to the hand-pumped picnic wells, (2) establish if other buried radionuclides have migrated, and (3) monitor for the presence of radioactive materials in the environment of the area. Hydrogen-3 in the Red Gate Woods picnic wells was still detected this year, but the average and maximum concentrations were significantly less than found earlier. Hydrogen-3 continues to be detected in a number of wells, boreholes, dolomite holes, and a surface stream. Analyses since 1984 have indicated the presence of low levels of strontium-90 in water from a number of boreholes next to Plot M. The results of the surveillance program continue to indicate that the radioactivity remaining at Site A/Plot M does not endanger the health or safety of the public visiting the site, using the picnic area, or living in the vicinity.

  18. Surveillance of Site A and Plot M, Report for 2009.

    Energy Technology Data Exchange (ETDEWEB)

    Golchert, N. W.

    2010-04-21

    The results of the environmental surveillance program conducted at Site A/Plot M in the Palos Forest Preserve area for Calendar Year 2009 are presented. Based on the results of the 1976-1978 radiological characterization of the site, a determination was made that a surveillance program be established. The characterization study determined that very low levels of hydrogen-3 (as tritiated water) had migrated from the burial ground and were present in two nearby hand-pumped picnic wells. The current surveillance program began in 1980 and consists of sample collection and analysis of surface and subsurface water. The results of the analyses are used to monitor the migration pathway of hydrogen-3 contaminated water from the burial ground (Plot M) to the hand-pumped picnic wells and monitor for the presence of radioactive materials in the environment of the area. Hydrogen-3 in the Red Gate Woods picnic wells was still detected this year, but the average and maximum concentrations were significantly less than found earlier. Hydrogen-3 continues to be detected in a number of wells, boreholes, dolomite holes, and a surface stream. Analyses since 1984 have indicated the presence of low levels of strontium-90 in water from a number of boreholes next to Plot M. The results of the surveillance program continue to indicate that the radioactivity remaining at Site A/Plot M does not endanger the health or safety of the public visiting the site, using the picnic area, or living in the vicinity.

  19. Surveillance of Site A and Plot M report for 2010.

    Energy Technology Data Exchange (ETDEWEB)

    Golchert, N. W. (ESQ)

    2011-05-31

    The results of the environmental surveillance program conducted at Site A/Plot M in the Palos Forest Preserve area for Calendar Year 2010 are presented. Based on the results of the 1976-1978 radiological characterization of the site, a determination was made that a surveillance program be established. The characterization study determined that very low levels of hydrogen-3 (as tritiated water) had migrated from the burial ground and were present in two nearby hand-pumped picnic wells. The current surveillance program began in 1980 and consists of sample collection and analysis of surface and subsurface water. The results of the analyses are used to monitor the migration pathway of hydrogen-3 contaminated water from the burial ground (Plot M) to the hand-pumped picnic wells and monitor for the presence of radioactive materials in the environment of the area. Hydrogen-3 in the Red Gate Woods picnic wells was still detected this year, but the average and maximum concentrations were significantly less than found earlier. Hydrogen-3 continues to be detected in a number of wells, boreholes, dolomite holes, and a surface stream. Analyses since 1984 have indicated the presence of low levels of strontium-90 in water from a number of boreholes next to Plot M. The results of the surveillance program continue to indicate that the radioactivity remaining at Site A/Plot M does not endanger the health or safety of the public visiting the site, using the picnic area, or living in the vicinity.

  20. Surveillance of site A and plot M, report for 2007.

    Energy Technology Data Exchange (ETDEWEB)

    Golchert, N. W.; ESH/QA Oversight

    2008-03-25

    The results of the environmental surveillance program conducted at Site A/Plot M in the Palos Forest Preserve area for Calendar Year 2007 are presented. Based on the results of the 1976-1978 radiological characterization of the site, a determination was made that a surveillance program be established. The characterization study determined that very low levels of hydrogen-3 (as tritiated water) had migrated from the burial ground and were present in two nearby hand-pumped picnic wells. The current surveillance program began in 1980 and consists of sample collection and analysis of surface and subsurface water. The results of the analyses are used to: (1) monitor the migration pathway of water from the burial ground (Plot M) to the hand-pumped picnic wells, (2) establish if other buried radionuclides have migrated, and (3) monitor the presence of radioactive materials in the environment of the area. Hydrogen-3 in the Red Gate Woods picnic wells was still detected this year, but the average and maximum concentrations were significantly less than found earlier. Hydrogen-3 continues to be detected in a number of wells, boreholes, dolomite holes, and a surface stream. Analyses since 1984 have indicated the presence of low levels of strontium-90 in water from a number of boreholes next to Plot M. The results of the surveillance program continue to indicate that the radioactivity remaining at Site A/Plot M does not endanger the health or safety of the public visiting the site, using the picnic area, or living in the vicinity.

  1. Surveillance of Site A and Plot M - Report for 2006.

    Energy Technology Data Exchange (ETDEWEB)

    Golchert, N. W.; ESH/QA Oversight

    2007-05-07

    The results of the environmental surveillance program conducted at Site A/Plot M in the Palos Forest Preserve area for Calendar Year 2006 are presented. Based on the results of the 1976-1978 radiological characterization of the site, a determination was made that a surveillance program be established. The characterization study determined that very low levels of hydrogen-3 (as tritiated water) had migrated from the burial ground and were present in two nearby hand-pumped picnic wells. The current surveillance program began in 1980 and consists of sample collection and analysis of surface and subsurface water. The results of the analyses are used to (1) monitor the migration pathway of water from the burial ground (Plot M) to the hand-pumped picnic wells, (2) establish if buried radionuclides other than hydrogen-3 have migrated, and (3) monitor the presence of radioactive and chemically hazardous materials in the environment of the area. Hydrogen-3 in the Red Gate Woods picnic wells was still detected this year, but the average and maximum concentrations were significantly less than found earlier. Hydrogen-3 continues to be detected in a number of wells, boreholes, dolomite holes, and a surface stream. Analyses since 1984 have indicated the presence of low levels of strontium-90 in water from a number of boreholes next to Plot M. The results of the surveillance program continue to indicate that the radioactivity remaining at Site A/Plot M does not endanger the health or safety of the public visiting the site, using the picnic area, or living in the vicinity.

  2. Surveillance of Site A and Plot M - Report for 2005.

    Energy Technology Data Exchange (ETDEWEB)

    Golchert, N. W.; ESH/QA Oversight

    2006-04-10

    The results of the environmental surveillance program conducted at Site A/Plot M in the Palos Forest Preserve area for Calendar Year 2005 are presented. Based on the results of the 1976-1978 radiological characterization of the site, a determination was made that a surveillance program be established. The characterization study determined that very low levels of hydrogen-3 (as tritiated water) had migrated from the burial ground and were present in two nearby hand-pumped picnic wells. The current surveillance program began in 1980 and consists of sample collection and analysis of surface and subsurface water. The results of the analyses are used to (1) monitor the migration pathway of water from the burial ground (Plot M) to the hand-pumped picnic wells, (2) establish if buried radionuclides other than hydrogen-3 have migrated, and (3) monitor the presence of radioactive and chemically hazardous materials in the environment of the area. Hydrogen-3 in the Red Gate Woods picnic wells was still detected this year, but the average and maximum concentrations were significantly less than found earlier. Hydrogen-3 continues to be detected in a number of wells, boreholes, dolomite holes, and a surface stream. Analyses since 1984 have indicated the presence of low levels of strontium-90 in water from a number of boreholes next to Plot M. The results of the surveillance program continue to indicate that the radioactivity remaining at Site A/Plot M does not endanger the health or safety of the public visiting the site, using the picnic area, or living in the vicinity.

  3. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    Science.gov (United States)

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
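
    The mechanics of blinded reassessment are easy to simulate. The Python sketch below is only an illustration: it implements the standard lumped-variance blinded rule rather than the worst-case rule derived in the paper, all numeric settings are invented, and it checks the empirical type I error of a two-stage trial whose second-stage size depends on a blinded variance estimate.

        import numpy as np

        rng = np.random.default_rng(1)
        n1 = 50                 # assumed per-arm stage-1 sample size
        delta_plan = 0.5        # assumed planned effect size
        z_a, z_b = 1.96, 0.84   # normal quantiles for one-sided alpha ~ 0.025 and 80% power

        n_sim, rejections = 20000, 0
        for _ in range(n_sim):
            # Null hypothesis true for the primary endpoint: both arms N(0, 1)
            x1, y1 = rng.normal(0, 1, n1), rng.normal(0, 1, n1)
            s2 = np.concatenate([x1, y1]).var(ddof=1)   # blinded (lumped) variance estimate
            n = max(n1, int(np.ceil(2 * s2 * (z_a + z_b) ** 2 / delta_plan ** 2)))
            x = np.concatenate([x1, rng.normal(0, 1, n - n1)])
            y = np.concatenate([y1, rng.normal(0, 1, n - n1)])
            z = (x.mean() - y.mean()) / np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / n)
            rejections += z > z_a
        print("empirical one-sided type I error:", rejections / n_sim)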

  4. Sample sizing of biological materials analyzed by energy dispersion X-ray fluorescence

    International Nuclear Information System (INIS)

    Paiva, Jose D.S.; Franca, Elvis J.; Magalhaes, Marcelo R.L.; Almeida, Marcio E.S.; Hazin, Clovis A.

    2013-01-01

    Analytical portions used in chemical analyses are usually less than 1 g. Errors resulting from sampling are rarely evaluated, since this type of study is a time-consuming procedure with high costs for the chemical analysis of a large number of samples. Energy dispersion X-ray fluorescence (EDXRF) is a non-destructive and fast analytical technique with the possibility of determining several chemical elements. Therefore, the aim of this study was to provide information on the minimum analytical portion for quantification of chemical elements in biological matrices using EDXRF. Three species were sampled in mangroves of Pernambuco, Brazil. Tree leaves were washed with distilled water, oven-dried at 60 °C and milled to a 0.5 mm particle size. Ten test portions of approximately 500 mg for each species were transferred to vials sealed with polypropylene film. The quality of the analytical procedure was evaluated using the reference materials IAEA V10 Hay Powder and SRM 2976 Apple Leaves. After energy calibration, all samples were analyzed under vacuum for 100 seconds for each group of chemical elements. The voltage used was 15 kV for chemical elements of atomic number lower than 22 and 50 kV for the others. For the best analytical conditions, EDXRF was capable of estimating the sample size uncertainty for further determination of chemical elements in leaves. (author)

  5. Sample sizing of biological materials analyzed by energy dispersion X-ray fluorescence

    Energy Technology Data Exchange (ETDEWEB)

    Paiva, Jose D.S.; Franca, Elvis J.; Magalhaes, Marcelo R.L.; Almeida, Marcio E.S.; Hazin, Clovis A., E-mail: dan-paiva@hotmail.com, E-mail: ejfranca@cnen.gov.br, E-mail: marcelo_rlm@hotmail.com, E-mail: maensoal@yahoo.com.br, E-mail: chazin@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)]

    2013-07-01

    Analytical portions used in chemical analyses are usually less than 1 g. Errors resulting from sampling are rarely evaluated, since this type of study is a time-consuming procedure with high costs for the chemical analysis of a large number of samples. Energy dispersion X-ray fluorescence (EDXRF) is a non-destructive and fast analytical technique with the possibility of determining several chemical elements. Therefore, the aim of this study was to provide information on the minimum analytical portion for quantification of chemical elements in biological matrices using EDXRF. Three species were sampled in mangroves of Pernambuco, Brazil. Tree leaves were washed with distilled water, oven-dried at 60 °C and milled to a 0.5 mm particle size. Ten test portions of approximately 500 mg for each species were transferred to vials sealed with polypropylene film. The quality of the analytical procedure was evaluated using the reference materials IAEA V10 Hay Powder and SRM 2976 Apple Leaves. After energy calibration, all samples were analyzed under vacuum for 100 seconds for each group of chemical elements. The voltage used was 15 kV for chemical elements of atomic number lower than 22 and 50 kV for the others. For the best analytical conditions, EDXRF was capable of estimating the sample size uncertainty for further determination of chemical elements in leaves. (author)

  6. Recurrence plots of exchange rates of currencies

    OpenAIRE

    Sparavigna, Amelia Carolina

    2014-01-01

    Used to investigate the presence of distinctive recurrent behaviours in natural processes, recurrence plots can also be applied to the analysis of economic data and, in particular, to the characterization of exchange rates of currencies. In this paper, we will show that these plots are able to characterize the periods of oscillation and random walk of currencies and to highlight their response to news and events by means of texture transitions. The examples of recurrence plots given here are obtained from time series of exchange rates.

  7. Sample size calculation while controlling false discovery rate for differential expression analysis with RNA-sequencing experiments.

    Science.gov (United States)

    Bi, Ran; Liu, Peng

    2016-03-31

    RNA-Sequencing (RNA-seq) experiments have been popularly applied to transcriptome studies in recent years. Such experiments are still relatively costly. As a result, RNA-seq experiments often employ a small number of replicates. Power analysis and sample size calculation are challenging in the context of differential expression analysis with RNA-seq data. One challenge is that there are no closed-form formulae to calculate power for the popularly applied tests for differential expression analysis. In addition, false discovery rate (FDR), instead of family-wise type I error rate, is controlled for the multiple testing error in RNA-seq data analysis. So far, there are very few proposals on sample size calculation for RNA-seq experiments. In this paper, we propose a procedure for sample size calculation while controlling FDR for RNA-seq experimental design. Our procedure is based on the weighted linear model analysis facilitated by the voom method which has been shown to have competitive performance in terms of power and FDR control for RNA-seq differential expression analysis. We derive a method that approximates the average power across the differentially expressed genes, and then calculate the sample size to achieve a desired average power while controlling FDR. Simulation results demonstrate that the actual power of several popularly applied tests for differential expression is achieved and is close to the desired power for RNA-seq data with sample size calculated based on our method. Our proposed method provides an efficient algorithm to calculate sample size while controlling FDR for RNA-seq experimental design. We also provide an R package ssizeRNA that implements our proposed method and can be downloaded from the Comprehensive R Archive Network (http://cran.r-project.org).
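
    The overall recipe lends itself to a simulation sketch. The Python below is a hedged illustration, not the paper's voom-based method or the ssizeRNA implementation: per-gene two-sample t-tests stand in for the RNA-seq analysis, Benjamini-Hochberg provides the FDR control, and the per-group sample size is increased until the average power over truly differentially expressed genes reaches the target. All settings (number of genes, effect size, null proportion) are assumptions.

        import numpy as np
        from scipy import stats

        def average_power(n, n_genes=2000, pi0=0.8, lfc=1.0, sd=1.0, q=0.05, seed=0):
            """Average power over DE genes for per-group sample size n (one simulation)."""
            rng = np.random.default_rng(seed)
            n_de = int(n_genes * (1 - pi0))
            means = np.zeros(n_genes)
            means[:n_de] = lfc                        # the first n_de genes are truly DE
            a = rng.normal(0.0, sd, (n_genes, n))
            b = rng.normal(means[:, None], sd, (n_genes, n))
            _, p = stats.ttest_ind(a, b, axis=1)
            # Benjamini-Hochberg step-up procedure
            order = np.argsort(p)
            passed = p[order] <= q * np.arange(1, n_genes + 1) / n_genes
            k = passed.nonzero()[0].max() + 1 if passed.any() else 0
            rejected = np.zeros(n_genes, dtype=bool)
            rejected[order[:k]] = True
            return rejected[:n_de].mean()             # fraction of DE genes detected

        n = 3
        while average_power(n) < 0.80:                # target average power of 80%
            n += 1
        print("per-group sample size:", n)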

  8. Estimating sample size for landscape-scale mark-recapture studies of North American migratory tree bats

    Science.gov (United States)

    Ellison, Laura E.; Lukacs, Paul M.

    2014-01-01

    Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but sample size requirements to produce reliable estimates have not been estimated. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program, we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture probabilities and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately determine 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% in survival over four years, sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses revealed that increasing recovery of dead marked individuals may be more valuable than increasing capture probability of marked individuals. Because of the extraordinary effort that would be required, we advise caution should such a mark-recapture effort be initiated, given the difficulty in attaining reliable estimates. We make recommendations for which techniques show the most promise for mark-recapture studies of bats, because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.

  9. Sample size determination for a three-arm equivalence trial of Poisson and negative binomial responses.

    Science.gov (United States)

    Chang, Yu-Wei; Tsong, Yi; Zhao, Zhigen

    2017-01-01

    Assessing equivalence or similarity has drawn much attention recently as many drug products have lost or will lose their patents in the next few years, especially certain best-selling biologics. To claim equivalence between the test treatment and the reference treatment when assay sensitivity is well established from historical data, one has to demonstrate both superiority of the test treatment over placebo and equivalence between the test treatment and the reference treatment. Thus, there is urgency for practitioners to derive a practical way to calculate sample size for a three-arm equivalence trial. The primary endpoints of a clinical trial may not always be continuous, but may be discrete. In this paper, the authors derive the power function and discuss the sample size requirement for a three-arm equivalence trial with Poisson and negative binomial clinical endpoints. In addition, the authors examine the effect of the dispersion parameter on the power and the sample size by varying its coefficient from small to large. In extensive numerical studies, the authors demonstrate that the required sample size heavily depends on the dispersion parameter. Therefore, misusing a Poisson model for negative binomial data may easily lose up to 20% power, depending on the value of the dispersion parameter.

  10. PLOTTAB, Curve and Point Plotting with Error Bars

    International Nuclear Information System (INIS)

    1999-01-01

    1 - Description of program or function: PLOTTAB is designed to plot any combination of continuous curves and/or discrete points (with associated error bars) using user-supplied titles and X and Y axis labels and units. If curves are plotted, the first curve may be used as a standard; the data and the ratio of the data to the standard will be plotted. 2 - Method of solution: The program has no knowledge of what data are being plotted, yet by supplying titles, X and Y axis labels, and units the user can produce any number of plots, with each plot containing almost any combination of curves and points and each plot properly identified. In order to define a continuous curve between tabulated points, the program must know how to interpolate between points. By input the user may specify either the default option of linear x versus linear y interpolation or, alternatively, log x and/or log y interpolation. In all cases, regardless of the interpolation specified, the program will always interpolate the data to the plane of the plot (linear or log x and y plane) in order to present the true variation of the data between tabulated points, based on the user-specified interpolation law. Tabulated points should be given at a sufficient number of x values to ensure that the difference between the specified interpolation and the 'true' variation of a curve between tabulated values is relatively small. 3 - Restrictions on the complexity of the problem: A combination of up to 30 curves and sets of discrete points may appear on each plot. If the user wishes to use this program to compare different sets of data, all of the data must be in the same units.
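
    The interpolation-to-the-plot-plane behaviour described above can be sketched compactly (Python here purely for illustration; the actual program is not reproduced and the function name is invented): each tabulated segment is subdivided in the transformed coordinates implied by the chosen interpolation law, so the drawn curve follows that law in any plotting plane.

        import numpy as np

        def densify(x, y, pts=50, logx=False, logy=False):
            """Subdivide each tabulated segment under the stated interpolation law."""
            fx, gx = (np.log, np.exp) if logx else (lambda v: v, lambda v: v)
            fy, gy = (np.log, np.exp) if logy else (lambda v: v, lambda v: v)
            xs, ys = [], []
            for i in range(len(x) - 1):
                t = np.linspace(0.0, 1.0, pts, endpoint=False)
                xs.append(gx(fx(x[i]) + t * (fx(x[i + 1]) - fx(x[i]))))
                ys.append(gy(fy(y[i]) + t * (fy(y[i + 1]) - fy(y[i]))))
            xs.append(np.atleast_1d(np.asarray(x, float)[-1]))
            ys.append(np.atleast_1d(np.asarray(y, float)[-1]))
            return np.concatenate(xs), np.concatenate(ys)

        # Log-log interpolation: the curve follows a power law between points.
        xd, yd = densify([1.0, 10.0, 100.0], [1.0, 100.0, 10000.0], logx=True, logy=True)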

  11. The impact of sample size and marker selection on the study of haplotype structures

    Directory of Open Access Journals (Sweden)

    Sun Xiao

    2004-03-01

    Several studies of haplotype structures in the human genome in various populations have found that the human chromosomes are structured such that each chromosome can be divided into many blocks, within which there is limited haplotype diversity. In addition, only a few genetic markers in a putative block are needed to capture most of the diversity within a block. There has been no systematic empirical study of the effects of sample size and marker set on the identified block structures and representative marker sets, however. The purpose of this study was to conduct a detailed empirical study to examine such impacts. Towards this goal, we have analysed three representative autosomal regions from a large genome-wide study of haplotypes with samples consisting of African-Americans and samples consisting of Japanese and Chinese individuals. For both populations, we have found that the sample size and marker set have significant impact on the number of blocks and the total number of representative markers identified. The marker set in particular has very strong impacts, and our results indicate that the marker density in the original datasets may not be adequate to allow a meaningful characterisation of haplotype structures. In general, we conclude that we need a relatively large sample size and a very dense marker panel in the study of haplotype structures in human populations.

  12. Worm plot to diagnose fit in quantile regression

    NARCIS (Netherlands)

    Buuren, S. van

    2007-01-01

    The worm plot is a series of detrended Q-Q plots, split by covariate levels. The worm plot is a diagnostic tool for visualizing how well a statistical model fits the data, for finding locations at which the fit can be improved, and for comparing the fit of different models. This paper shows how the worm plot can be used to diagnose and improve the fit of quantile regression models.
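
    A minimal worm-plot sketch in Python (assuming a unit-normal reference distribution and matplotlib for display, and showing a single panel rather than the full split-by-covariate display):

        import numpy as np
        import matplotlib.pyplot as plt
        from scipy import stats

        def worm_plot(z, ax):
            """Detrended Q-Q plot: deviation of empirical from theoretical quantiles."""
            n = len(z)
            p = (np.arange(1, n + 1) - 0.5) / n
            q_theo = stats.norm.ppf(p)          # unit-normal reference quantiles
            ax.axhline(0.0, color="grey", lw=1)
            ax.plot(q_theo, np.sort(z) - q_theo, ".")
            ax.set(xlabel="unit normal quantile", ylabel="deviation")

        rng = np.random.default_rng(2)
        x = rng.gamma(4.0, size=300)            # deliberately skewed residuals
        z = (x - x.mean()) / x.std()
        fig, ax = plt.subplots()
        worm_plot(z, ax)
        plt.show()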

  13. The Shorth Plot

    NARCIS (Netherlands)

    Einmahl, J.H.J.; Gantner, M.; Sawitzki, G.

    2008-01-01

    The shorth plot is a tool to investigate probability mass concentration. It is a graphical representation of the length of the shorth, the shortest interval covering a certain fraction of the distribution, localized by forcing the intervals considered to contain a given point x. It is easy to compute and gives a direct visual impression of where probability mass concentrates.
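
    The localized shorth is simple to compute by brute force; a Python sketch (sorting plus a linear scan, with the convention that the interval must both contain the point x and cover a fraction alpha of the sample):

        import numpy as np

        def shorth_length(data, x, alpha=0.5):
            """Length of the shortest interval covering a fraction alpha of the
            sample among intervals containing x (inf if no such interval exists)."""
            s = np.sort(np.asarray(data, float))
            n = len(s)
            k = int(np.ceil(alpha * n))         # points the interval must cover
            best = np.inf
            for i in range(n - k + 1):
                lo, hi = s[i], s[i + k - 1]
                if lo <= x <= hi:               # localization constraint
                    best = min(best, hi - lo)
            return best

        rng = np.random.default_rng(0)
        sample = rng.normal(size=1000)
        # Longer shorths away from the mode reflect lower mass concentration there.
        print([round(shorth_length(sample, x), 2) for x in (-2.0, 0.0, 2.0)])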

  14. Pore size control of pitch-based activated carbon fibers by pyrolytic deposition of propylene

    International Nuclear Information System (INIS)

    Xie Jinchuan; Wang Xuhui; Deng Jiyong; Zhang Lixing

    2005-01-01

    In this paper, we attempted to narrow the pore size of pitch-based activated carbon fiber (Pitch-ACF) by chemical vapor deposition (CVD) of propylene at 700 °C. The BET equation was used to estimate the specific surface areas. The micropore volumes were determined using the DR equation, the t-plot and the αs-plot, and mesopore surface areas were determined by the t-plot and the αs-plot. The pore size distributions (PSD) of micropores and mesopores were investigated by the micropore analysis (MP) method and the MK method, respectively. The relation between the graphite-like crystal interlayer distance and pore size was analyzed by X-ray diffraction (XRD). The results showed that the pore size of the Pitch-ACF was gradually narrowed with increasing deposition time. Catalytic activation by Ni was also attempted while the Pitch-ACF was simultaneously modified by pyrolysis of propylene. The results obtained from the analysis of the PSD of micropores, mesopores and macropores in the Ni-P-ACF by density functional theory (DFT) showed that the pore structure and surface chemistry were greatly changed by the introduction of the nickel catalyst.

  15. Crystallite size variation of TiO2 samples depending on heat treatment time

    International Nuclear Information System (INIS)

    Galante, A.G.M.; Paula, F.R. de; Montanhera, M.A.; Pereira, E.A.; Spada, E.R.

    2016-01-01

    Titanium dioxide (TiO2) is an oxide semiconductor that may be found in mixed phase or in distinct phases: brookite, anatase and rutile. In this work, the influence of the residence time at a given heat-treatment temperature on the physical properties of TiO2 powder was studied. After the powder synthesis, the samples were divided and heat treated at 650 °C with a ramp of 3 °C/min and residence times ranging from 0 to 20 hours, and subsequently characterized by X-ray diffraction. Analyzing the obtained diffraction patterns, it was observed that from a 5-hour residence time onward two distinct phases coexist: anatase and rutile. The average crystallite size of each sample was also calculated. The results showed an increase in average crystallite size with increasing residence time of the heat treatment. (author)

  16. Sizes of vanadyl petroporphyrins and asphaltene aggregates in toluene

    Energy Technology Data Exchange (ETDEWEB)

    Dechaine, Greg Paul; Gray, Murray R. [Department of Chemical and Materials Engineering, University of Alberta (Canada)], email: gpd@ualberta.ca

    2010-07-01

    This work focuses on the importance of removing vanadyl porphyrin components from crude oils and on the methodology for doing so. The diffusion of asphaltene and vanadium components in diluted toluene was measured using a stirred diaphragm diffusion cell equipped with a number of cellulosic membranes of different pore sizes. In-situ UV/visible spectroscopy was used to observe the filtrates of the process. The effective diffusivity of asphaltene structures was plotted for membranes of different pore sizes. It was noticed that asphaltene concentrations increased with increasing pore size, particularly at a pore diameter of 5 nm. Moreover, the effects of temperature and mass concentration were also investigated in this study. It was shown that increasing the temperature of the toluene causes the mobility of asphaltene to increase as well. Nevertheless, decreasing the concentration of asphaltene does not affect its mobility. It was also shown that samples from different sources showed different mobility.

  17. How Sample Size Affects a Sampling Distribution

    Science.gov (United States)

    Mulekar, Madhuri S.; Siegel, Murray H.

    2009-01-01

    If students are to understand inferential statistics successfully, they must have a profound understanding of the nature of the sampling distribution. Specifically, they must comprehend the determination of the expected value and standard error of a sampling distribution as well as the meaning of the central limit theorem. Many students in a high school statistics course struggle to develop this understanding.
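
    The core fact is easy to demonstrate by simulation; a classroom-style Python sketch (population and settings invented) showing the standard error of the sample mean shrinking like sigma/sqrt(n):

        import numpy as np

        rng = np.random.default_rng(0)
        pop = rng.exponential(scale=2.0, size=100_000)   # a skewed population
        for n in (5, 25, 100):
            # 10,000 samples of size n, drawn with replacement from the population
            means = pop[rng.integers(0, pop.size, (10_000, n))].mean(axis=1)
            print(f"n={n:3d}  mean of means={means.mean():.3f}  "
                  f"SE={means.std():.3f}  sigma/sqrt(n)={pop.std() / np.sqrt(n):.3f}")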

  18. Thickening the Plot.

    Science.gov (United States)

    Rose, Brian

    1979-01-01

    Examines the content of daytime serial dramas to determine how the narrative structure promotes a sense of involvement in viewers. Competing plot lines, the lack of a concrete sense of resolution, the pattern of episodes, and the audience's awareness of information kept secret from characters all contribute to audience involvement. (JMF)

  19. Sample Size Requirements for Assessing Statistical Moments of Simulated Crop Yield Distributions

    NARCIS (Netherlands)

    Lehmann, N.; Finger, R.; Klein, T.; Calanca, P.

    2013-01-01

    Mechanistic crop growth models are becoming increasingly important in agricultural research and are extensively used in climate change impact assessments. In such studies, statistics of crop yields are usually evaluated without the explicit consideration of sample size requirements. The purpose of this study was therefore to assess the sample sizes required for reliable estimation of the statistical moments of simulated crop yield distributions.

  1. The Wally plot approach to assess the calibration of clinical prediction models.

    Science.gov (United States)

    Blanche, Paul; Gerds, Thomas A; Ekstrøm, Claus T

    2017-12-06

    A prediction model is calibrated if, roughly, for any percentage x we can expect that x subjects out of 100 experience the event among all subjects that have a predicted risk of x%. Typically, the calibration assumption is assessed graphically, but in practice it is often challenging to judge whether a "disappointing" calibration plot is the consequence of a departure from the calibration assumption or just "bad luck" due to sampling variability. To address this issue, we propose a graphical approach which enables the visualization of how much a calibration plot agrees with the calibration assumption. The approach is mainly based on the idea of generating new plots which mimic the available data under the calibration assumption. The method handles the common non-trivial situations in which the data contain censored observations and occurrences of competing events. This is done by building on ideas from constrained non-parametric maximum likelihood estimation methods. Two examples from large cohort data illustrate our proposal. The 'wally' R package is provided to make the methodology easily usable.
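
    The central idea (new plots that mimic the data under the calibration assumption) is easy to sketch for the simple uncensored binary case, which sidesteps the constrained-NPMLE machinery the paper develops for censoring and competing risks. A Python illustration with invented risks:

        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(3)
        risk = rng.beta(2, 5, size=500)             # predicted risks for 500 subjects

        def calibration_curve(risk, outcome, bins=10):
            edges = np.quantile(risk, np.linspace(0, 1, bins + 1))
            idx = np.clip(np.digitize(risk, edges[1:-1]), 0, bins - 1)
            pred = np.array([risk[idx == b].mean() for b in range(bins)])
            obs = np.array([outcome[idx == b].mean() for b in range(bins)])
            return pred, obs

        # Each panel: outcomes regenerated from the risks themselves, i.e. what a
        # perfectly calibrated model looks like under pure sampling variability.
        fig, axes = plt.subplots(2, 4, sharex=True, sharey=True)
        for ax in axes.ravel():
            y = rng.binomial(1, risk)
            ax.plot(*calibration_curve(risk, y), "o-")
            ax.plot([0, 1], [0, 1], "--", color="grey")
        plt.show()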

  2. Computer Programs for Calculating and Plotting the Stability Characteristics of a Balloon Tethered in a Wind

    Science.gov (United States)

    Bennett, R. M.; Bland, S. R.; Redd, L. T.

    1973-01-01

    Computer programs for calculating the stability characteristics of a balloon tethered in a steady wind are presented. Equilibrium conditions, characteristic roots, and modal ratios are calculated for a range of discrete values of velocity for a fixed tether-line length. Separate programs are used: (1) to calculate longitudinal stability characteristics, (2) to calculate lateral stability characteristics, (3) to plot the characteristic roots versus velocity, (4) to plot the characteristic roots in root-locus form, (5) to plot the longitudinal modes of motion, and (6) to plot the lateral modes of motion. The basic equations, program listings, and the input and output data for sample cases are presented, with a brief discussion of the overall operation and limitations. The programs are based on a linearized, stability-derivative type of analysis, including balloon aerodynamics, apparent mass, buoyancy effects, and static forces which result from the tether line.

  3. PIXE–PIGE analysis of size-segregated aerosol samples from remote areas

    Energy Technology Data Exchange (ETDEWEB)

    Calzolai, G., E-mail: calzolai@fi.infn.it [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Chiari, M.; Lucarelli, F.; Nava, S.; Taccetti, F. [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Becagli, S.; Frosini, D.; Traversi, R.; Udisti, R. [Department of Chemistry, University of Florence, Via della Lastruccia 3, 50019 Sesto Fiorentino (Italy)

    2014-01-01

    The chemical characterization of size-segregated samples is helpful to study the aerosol effects on both human health and the environment. Sampling with multi-stage cascade impactors (e.g., the Small Deposit area Impactor, SDI) produces inhomogeneous samples, with a multi-spot geometry and a non-negligible particle stratification. At LABEC (Laboratory of nuclear techniques for the Environment and the Cultural Heritage), an external beam line is fully dedicated to PIXE–PIGE analysis of aerosol samples. PIGE is routinely used as a sidekick of PIXE to correct the underestimation of PIXE in quantifying the concentration of the lightest detectable elements, like Na or Al, due to X-ray absorption inside the individual aerosol particles. In this work PIGE has been used to study proper attenuation correction factors for SDI samples: relevant attenuation effects have been observed also for stages collecting smaller particles, and the consequent implications for the retrieved aerosol modal structure have been highlighted.

  4. The one-sample PARAFAC approach reveals molecular size distributions of fluorescent components in dissolved organic matter

    DEFF Research Database (Denmark)

    Wünsch, Urban; Murphy, Kathleen R.; Stedmon, Colin

    2017-01-01

    Molecular size plays an important role in dissolved organic matter (DOM) biogeochemistry, but its relationship with the fluorescent fraction of DOM (FDOM) remains poorly resolved. Here high-performance size exclusion chromatography (HPSEC) was coupled to fluorescence excitation-emission matrix (EEM) spectroscopy ... but not their spectral properties. Thus, in contrast to absorption measurements, bulk fluorescence is unlikely to reliably indicate the average molecular size of DOM. The one-sample approach enables robust and independent cross-site comparisons without large-scale sampling efforts and introduces new analytical opportunities for elucidating the origins and biogeochemical properties of FDOM.

  5. Estimation of Missing Observations in Two-Level Split-Plot Designs

    DEFF Research Database (Denmark)

    Almimi, Ashraf A.; Kulahci, Murat; Montgomery, Douglas C.

    2008-01-01

    Inserting estimates for the missing observations from split-plot designs restores their balanced or orthogonal structure and alleviates the difficulties in the statistical analysis. In this article, we extend a method due to Draper and Stoneman to estimate the missing observations from unreplicated two-level factorial and fractional factorial split-plot (FSP and FFSP) designs. The missing observations, which can either be from the same whole plot, from different whole plots, or comprise entire whole plots, are estimated by equating to zero a number of specific contrast columns equal to the number of the missing observations. These estimates are inserted into the design table and the estimates for the remaining effects (or alias chains of effects, as is the case with FFSP designs) are plotted on two half-normal plots: one for the whole-plot effects and the other for the subplot effects.
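
    One reading of the estimation step admits a compact sketch (Python; X is the ±1 model matrix of the two-level design, and the caller chooses which contrast columns, typically ones associated with negligible high-order interactions, to equate to zero, one per missing run):

        import numpy as np

        def estimate_missing(X, y, missing_idx, contrast_cols):
            """Fill y[missing_idx] so the chosen contrasts are exactly zero.
            Requires len(contrast_cols) == len(missing_idx)."""
            y = np.asarray(y, float).copy()
            known = [i for i in range(len(y)) if i not in missing_idx]
            # For each chosen contrast c: sum_i X[i, c] * y[i] = 0, i.e.
            # X[missing, c].T @ y[missing] = -X[known, c].T @ y[known]
            A = X[np.ix_(missing_idx, contrast_cols)].T
            b = -X[np.ix_(known, contrast_cols)].T @ y[known]
            y[missing_idx] = np.linalg.solve(A, b)
            return y

    The filled-in response vector restores the orthogonal structure, after which the remaining effects can be placed on the two half-normal plots described above.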

  6. Plot Description (PD)

    Science.gov (United States)

    Robert E. Keane

    2006-01-01

    The Plot Description (PD) form is used to describe general characteristics of the FIREMON macroplot to provide ecological context for data analyses. The PD data characterize the topographical setting, geographic reference point, general plant composition and cover, ground cover, fuels, and soils information. This method provides the general ecological data that can be...

  7. iCanPlot: visual exploration of high-throughput omics data using interactive Canvas plotting.

    Directory of Open Access Journals (Sweden)

    Amit U Sinha

    Increasing use of high throughput genomic scale assays requires effective visualization and analysis techniques to facilitate data interpretation. Moreover, existing tools often require programming skills, which discourages bench scientists from examining their own data. We have created iCanPlot, a compelling platform for visual data exploration based on the latest technologies. Using the recently adopted HTML5 Canvas element, we have developed a highly interactive tool to visualize tabular data and identify interesting patterns in an intuitive fashion without the need of any specialized computing skills. A module for geneset overlap analysis has been implemented on the Google App Engine platform: when the user selects a region of interest in the plot, the genes in the region are analyzed on the fly. The visualization and analysis are amalgamated for a seamless experience. Further, users can easily upload their data for analysis--which also makes it simple to share the analysis with collaborators. We illustrate the power of iCanPlot by showing an example of how it can be used to interpret histone modifications in the context of gene expression.

  8. 14CO2 analysis of soil gas: Evaluation of sample size limits and sampling devices

    Science.gov (United States)

    Wotte, Anja; Wischhöfer, Philipp; Wacker, Lukas; Rethemeyer, Janet

    2017-12-01

    Radiocarbon (14C) analysis of CO2 respired from soils or sediments is a valuable tool to identify different carbon sources. The collection and processing of the CO2, however, is challenging and prone to contamination. We thus continuously improve our handling procedures and present a refined method for the collection of even small amounts of CO2 in molecular sieve cartridges (MSCs) for accelerator mass spectrometry 14C analysis. Using a modified vacuum rig and an improved desorption procedure, we were able to increase the CO2 recovery from the MSC (95%) as well as the sample throughput compared to our previous study. By processing series of different sample size, we show that our MSCs can be used for CO2 samples of as small as 50 μg C. The contamination by exogenous carbon determined in these laboratory tests, was less than 2.0 μg C from fossil and less than 3.0 μg C from modern sources. Additionally, we tested two sampling devices for the collection of CO2 samples released from soils or sediments, including a respiration chamber and a depth sampler, which are connected to the MSC. We obtained a very promising, low process blank for the entire CO2 sampling and purification procedure of ∼0.004 F14C (equal to 44,000 yrs BP) and ∼0.003 F14C (equal to 47,000 yrs BP). In contrast to previous studies, we observed no isotopic fractionation towards lighter δ13C values during the passive sampling with the depth samplers.

  9. Modeling Short-Range Soil Variability and its Potential Use in Variable-Rate Treatment of Experimental Plots

    Directory of Open Access Journals (Sweden)

    A Moameni

    2011-02-01

    In Iran, the experimental plots under fertilizer trials are managed in such a way that the whole plot area uniformly receives agricultural inputs. This could lead to biased research results and hence to suppressing the efforts made by the researchers. This research was conducted at a selected site belonging to the Gonbad Agricultural Research Station, located in the semiarid region of northeastern Iran. The aim was to characterize the short-range spatial variability of the inherent and management-dependent soil properties and to determine if this variation is large and can be managed at practical scales. The soils were sampled using a grid of points 55 m apart. In total, 100 composite soil samples were collected from the topsoil (0-30 cm) and were analyzed for calcium carbonate equivalent, organic carbon, clay, available phosphorus, available potassium, iron, copper, zinc and manganese. Descriptive statistics were applied to check data trends. Geostatistical analysis was applied for variography, model fitting and contour mapping. Sampling at 55 m made it possible to split the area of the selected experimental plot into relatively uniform areas that allow application of agricultural inputs with variable rates. Keywords: Short-range soil variability, Within-field soil variability, Interpolation, Precision agriculture, Geostatistics

  10. The attention-weighted sample-size model of visual short-term memory: Attention capture predicts resource allocation and memory load.

    Science.gov (United States)

    Smith, Philip L; Lilburn, Simon D; Corbett, Elaine A; Sewell, David K; Kyllingsbæk, Søren

    2016-09-01

    We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource comprised of a finite number of noisy stimulus samples. The model predicts the invariance of Σd′², the sum of squared sensitivities across items, for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
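
    The invariance prediction has a one-line arithmetic core: if a fixed pool of S noisy samples is divided evenly among m display items, each item's sensitivity scales as d' proportional to sqrt(S/m), so the sum of squared sensitivities stays constant. A toy Python check (constants invented):

        import numpy as np

        S, c = 120, 0.2      # total sample pool and scaling constant (invented)
        for m in (1, 2, 4, 8):
            d_prime = c * np.sqrt(S / m) * np.ones(m)   # equal split across m items
            print(f"set size {m}: sum of squared sensitivities = {np.sum(d_prime**2):.2f}")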

  11. Tibetan tectonics from 40Ar/39Ar analysis of a single K-feldspar sample

    International Nuclear Information System (INIS)

    Richter, F.M.; Lovera, O.M.; Harrison, T.M.; Copeland, P.

    1991-01-01

    40Ar/39Ar data on an alkali feldspar sample from the Quxu pluton, Gangdese batholith, southern Tibet, allow a detailed assessment of unroofing and uplift history between 35 and 18 Ma. The 39Ar Arrhenius plot for this sample shows departures from a linear relationship between the effective diffusion parameter, log(D/r²), and reciprocal temperature, which we interpret to be the result of a distribution of distinct diffusion-domain sizes. We use an alternative way of plotting the Arrhenius data that exhibits domain size versus cumulative % 39Ar released during step heating. The 40Ar/39Ar age spectrum of the sample has features, such as local age plateaus, that are most easily explained in terms of the distinctive closure age of particular domains. The fact that the same distribution of diffusion-domain sizes explains both the Arrhenius data and the age spectrum is an indication that the diffusion properties operating in the laboratory are those of the sample while it was in its natural environment. Modelling of the age spectrum with a distribution of domain sizes results in the recovery of a continuous cooling-history segment rather than a single time-temperature datum. We demonstrate the robustness of the cooling-curve determination by showing the large misfits to the age spectrum that arise from relatively small changes in the cooling history. The best-fit cooling curve for the Quxu sample shows a decreasing rate of cooling in the time interval 35-18 Ma, followed by very rapid cooling beginning at about 18 Ma. We have used a thermal model for the conductive cooling of an unroofing pluton to estimate the rate of unroofing required to explain the Quxu cooling curve, and find that in the 35-20 Ma time interval the primary control on the thermal evolution is the conductive loss of magmatic heat with little or no unroofing (unroofing rates of approximately 0.05 mm/yr), followed by a brief period (<5 Ma) of very rapid unroofing, with rates of order 2 mm/yr.
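
    The linear portion of such an Arrhenius plot encodes the kinetic parameters via log10(D/r²) = log10(D0/r²) - E/(2.303·R·T). A small Python sketch recovering them from synthetic step-heating values (all numbers invented):

        import numpy as np

        R = 8.314e-3                                            # gas constant, kJ/(mol K)
        T = np.array([800.0, 900.0, 1000.0, 1100.0]) + 273.15   # step temperatures, K
        E_true, log_D0 = 180.0, 5.0                             # invented kinetic parameters
        log_D = log_D0 - E_true / (2.303 * R * T)               # synthetic log10(D/r^2) values

        slope, intercept = np.polyfit(1.0 / T, log_D, 1)
        print(f"E = {-slope * 2.303 * R:.1f} kJ/mol, log10(D0/r^2) = {intercept:.2f}")

    Departures of the measured points from such a single line are what motivate the multi-domain interpretation described in the abstract.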

  12. Split-plot designs for multistage experimentation

    DEFF Research Database (Denmark)

    Kulahci, Murat; Tyssedal, John

    2016-01-01

    ... at the same time will be more efficient. However, there have been only a few attempts in the literature to provide an adequate and easy-to-use approach for this problem. In this paper, we present a novel methodology for constructing two-level split-plot and multistage experiments. The methodology is based ... be accommodated in each stage. Furthermore, split-plot designs for multistage experiments with good projective properties are also provided.

  13. A note on power and sample size calculations for the Kruskal-Wallis test for ordered categorical data.

    Science.gov (United States)

    Fan, Chunpeng; Zhang, Donghui

    2012-01-01

    Although the Kruskal-Wallis test has been widely used to analyze ordered categorical data, power and sample size methods for this test have been investigated to a much lesser extent when the underlying multinomial distributions are unknown. This article generalizes the power and sample size procedures proposed by Fan et al. (2011) for continuous data to ordered categorical data, when estimates from a pilot study are used in the place of knowledge of the true underlying distribution. Simulations show that the proposed power and sample size formulas perform well. A myelin oligodendrocyte glycoprotein (MOG) induced experimental autoimmune encephalomyelitis (EAE) mouse study is used to demonstrate the application of the methods.

  14. The isometric log-ratio (ilr)-ion plot: A proposed alternative to the Piper diagram

    Science.gov (United States)

    Shelton, Jenna L.; Engle, Mark A.; Buccianti, Antonella; Blondes, Madalyn S.

    2018-01-01

    The Piper diagram has been a staple for the analysis of water chemistry data since its introduction in 1944. It was conceived as a method for water classification, determination of potential water mixing between end-members, and an aid in the identification of chemical reactions controlling a sample set. This study uses the information gleaned over the years since the release of the Piper diagram and proposes an alternative to it, capturing the strengths of the original diagram while adding new ideas to increase its robustness. The new method uses compositional data analysis to create 4 isometric log-ratio coordinates for the 6 major chemical species analyzed in the Piper diagram and transforms the data to a 4-field bi-plot, the ilr-ion plot. This ilr-ion plot conveys all of the information in the Piper diagram (water mixing, water types, and chemical reactions) while also visualizing additional information, namely the ability to examine Ca²⁺/Mg²⁺ versus Cl⁻/SO₄²⁻. The Piper diagram and the ilr-ion plot were also compared using multiple synthetic and real datasets in order to illustrate the caveats and the advantages of using either diagram to analyze water chemistry data. Although there are challenges with using the ilr-ion plot (e.g., zero values in the dataset must be imputed by positive real numbers), it appears that the use of compositional data analysis coupled with the ilr-ion plot provides a more in-depth and complete analysis of water quality data compared to the original Piper diagram.
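
    The ilr transform itself is standard compositional machinery. A minimal Python sketch using a Helmert-type orthonormal basis (the paper's specific partition of the six ion groups, which yields its four plotted coordinates, is not reproduced here; the ion values below are invented):

        import numpy as np

        def ilr(x):
            """Isometric log-ratio coordinates of a positive composition x
            (D parts map to D-1 coordinates)."""
            x = np.asarray(x, float)
            clr = np.log(x) - np.mean(np.log(x))   # centered log-ratio
            D = len(x)
            V = np.zeros((D, D - 1))               # Helmert-type orthonormal contrasts
            for j in range(D - 1):
                V[: j + 1, j] = 1.0 / (j + 1)
                V[j + 1, j] = -1.0
                V[:, j] *= np.sqrt((j + 1) / (j + 2))
            return clr @ V

        # e.g. milliequivalent fractions of Ca, Mg, Na+K, Cl, SO4, HCO3 (invented)
        print(ilr([0.30, 0.15, 0.05, 0.20, 0.10, 0.20]))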

  15. Gridsampler – A Simulation Tool to Determine the Required Sample Size for Repertory Grid Studies

    Directory of Open Access Journals (Sweden)

    Mark Heckmann

    2017-01-01

    The repertory grid is a psychological data collection technique that is used to elicit qualitative data in the form of attributes as well as quantitative ratings. A common approach for evaluating multiple repertory grid data is sorting the elicited bipolar attributes (so-called constructs) into mutually exclusive categories by means of content analysis. An important question when planning this type of study is determining the sample size needed to (a) discover all attribute categories relevant to the field and (b) yield a predefined minimal number of attributes per category. For most applied researchers who collect multiple repertory grid data, programming a numeric simulation to answer these questions is not feasible. The gridsampler software facilitates determining the required sample size by providing a GUI for conducting the necessary numerical simulations. Researchers can supply a set of parameters suitable for the specific research situation, determine the required sample size, and easily explore the effects of changes in the parameter set.
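
    The kind of simulation that gridsampler wraps in a GUI can be sketched directly (Python; category frequencies, constructs per respondent, and targets below are invented assumptions):

        import numpy as np

        rng = np.random.default_rng(4)
        p = np.array([0.30, 0.20, 0.15, 0.12, 0.10, 0.08, 0.05])  # category probabilities
        per_person = 10        # constructs elicited per respondent
        k, target = 3, 0.95    # want every category observed >= k times with prob >= target

        def prob_saturated(n, n_sim=2000):
            counts = rng.multinomial(n * per_person, p, size=n_sim)
            return np.mean((counts >= k).all(axis=1))

        n = 1
        while prob_saturated(n) < target:
            n += 1
        print("required number of respondents:", n)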

  16. Anomalies in the detection of change: When changes in sample size are mistaken for changes in proportions.

    Science.gov (United States)

    Fiedler, Klaus; Kareev, Yaakov; Avrahami, Judith; Beier, Susanne; Kutzner, Florian; Hütter, Mandy

    2016-01-01

    Detecting changes, in performance, sales, markets, risks, social relations, or public opinions, constitutes an important adaptive function. In a sequential paradigm devised to investigate detection of change, every trial provides a sample of binary outcomes (e.g., correct vs. incorrect student responses). Participants have to decide whether the proportion of a focal feature (e.g., correct responses) in the population from which the sample is drawn has decreased, remained constant, or increased. Strong and persistent anomalies in change detection arise when changes in proportional quantities vary orthogonally to changes in absolute sample size. Proportional increases are readily detected and nonchanges are erroneously perceived as increases when absolute sample size increases. Conversely, decreasing sample size facilitates the correct detection of proportional decreases and the erroneous perception of nonchanges as decreases. These anomalies are however confined to experienced samples of elementary raw events from which proportions have to be inferred inductively. They disappear when sample proportions are described as percentages in a normalized probability format. To explain these challenging findings, it is essential to understand the inductive-learning constraints imposed on decisions from experience.

  17. Comparison of sizes and distances of subplots applied in the process of cluster sampling

    Directory of Open Access Journals (Sweden)

    Oberdan Müller M. das Flores

    2012-09-01

    A study of cluster sampling units with a cross-shaped structure was carried out using a 100% forest inventory database of an Annual Production Unit (UPA-08), located on the Martins farm in the upper Pacajá, municipality of Portel, Pará State, Brazil, to study the effects of variation between and within clusters with reference to three response variables: (i) VT = total wood volume of the 77 forest species found in the study area; (ii) VO = total volume of the 10 forest species with the greatest occurrence and volume in the study area; and (iii) VC = total volume of the 10 forest species most commercialized by the Cikel company. The objective was to determine the ideal subplot size and distance using the Maximum Curvature Method; once the size was defined, its area was fixed and five different values were then studied to determine the ideal distance from the cluster center to the subplots and between the subplots. The results allowed the inference that a subplot size of 0.10 ha, a distance of 50 m between subplots, and a cross-shaped structure with 8 subplots are efficient for estimating the parameters studied.

  18. On sample size of the Kruskal-Wallis test with application to a mouse peritoneal cavity study.

    Science.gov (United States)

    Fan, Chunpeng; Zhang, Donghui; Zhang, Cun-Hui

    2011-03-01

    As the nonparametric generalization of the one-way analysis of variance model, the Kruskal-Wallis test applies when the goal is to test the difference between multiple samples and the underlying population distributions are nonnormal or unknown. Although the Kruskal-Wallis test has been widely used for data analysis, power and sample size methods for this test have been investigated to a much lesser extent. This article proposes new power and sample size calculation methods for the Kruskal-Wallis test based on the pilot study in either a completely nonparametric model or a semiparametric location model. No assumption is made on the shape of the underlying population distributions. Simulation results show that, in terms of sample size calculation for the Kruskal-Wallis test, the proposed methods are more reliable and preferable to some more traditional methods. A mouse peritoneal cavity study is used to demonstrate the application of the methods. © 2010, The International Biometric Society.
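
    A simulation-based stand-in for the proposed calculation (Python; the paper derives its power expressions from pilot estimates analytically, whereas this sketch simply resamples invented pilot data under a semiparametric location-shift model):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        pilot = rng.lognormal(size=30)        # stand-in for pilot measurements
        shifts = [0.0, 0.5, 1.0]              # location effects under the alternative

        def power(n, n_sim=1000, alpha=0.05):
            hits = 0
            for _ in range(n_sim):
                groups = [rng.choice(pilot, n, replace=True) + s for s in shifts]
                hits += stats.kruskal(*groups).pvalue < alpha
            return hits / n_sim

        n = 5
        while power(n) < 0.80:                # search for roughly 80% power
            n += 5
        print("approximate per-group sample size:", n)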

  19. Mammals of medium and large size in Santa Rita do Sapucaí, Minas Gerais, southeastern Brazil

    Directory of Open Access Journals (Sweden)

    Eduardo, A. A.

    2009-01-01

    The diversity of Brazilian vertebrates is regarded as among the highest in the world. However, this biological diversity is still mostly unknown and a good part of it is seriously threatened by human activities. This study aimed to inventory the medium and large size mammals present in the Reserva Biológica de Santa Rita do Sapucaí, an Atlantic forest reserve located in Santa Rita do Sapucaí, southeastern Brazil. Sand plots, photographic traps and searches for animal tracks on pre-existing trails in the area were carried out once every two months between May 2006 and February 2007. The sand plots and tracks were inspected during five consecutive days per sampling. We obtained 108 records of 15 species, mostly of carnivorans. Two confirmed species are threatened with extinction in Brazil (Callithrix aurita and Leopardus pardalis). The results suggest that the sampled reserve has high species richness and plays an important role in the conservation of mammals in this landscape, including species threatened with extinction.

  20. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

    Directory of Open Access Journals (Sweden)

    Simon Boitard

    2016-03-01

    Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
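
    One of the two summary-statistic classes, the folded allele frequency spectrum, is straightforward to compute from unpolarized genotype data; a Python sketch with an invented 0/1/2 genotype matrix:

        import numpy as np

        def folded_sfs(genotypes):
            """Folded SFS from an (individuals x SNPs) matrix of 0/1/2 allele counts."""
            n_chrom = 2 * genotypes.shape[0]
            ac = genotypes.sum(axis=0)                 # alternate-allele counts per SNP
            mac = np.minimum(ac, n_chrom - ac)         # fold to the minor allele count
            return np.bincount(mac, minlength=n_chrom // 2 + 1)

        g = np.random.default_rng(6).integers(0, 3, size=(25, 1000))  # 25 diploid genomes
        print(folded_sfs(g))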

  1. Variability, plot size and border effect in lettuce trials in protected environment

    Directory of Open Access Journals (Sweden)

    Daniel Santos

    2018-03-01

    The variability within cultivation rows may reduce the accuracy of experiments conducted in a randomized complete block design if the rows are considered as blocks; however, little is known about this variability in protected environments. Thus, our aim was to study the variability of the fresh mass of lettuce shoots grown in a protected environment, and to verify the border effect and the size of the experimental unit in minimizing the productive variability. Data from two uniformity trials carried out in a greenhouse in autumn and spring growing seasons were used. In the statistical analyses, rows running parallel to the lateral openings of the greenhouse and columns perpendicular to those openings were considered. Different scenarios were simulated by excluding rows and columns to generate several border arrangements and to test different sizes of the experimental unit. For each scenario, a test of homogeneity of variances between the remaining rows and columns was performed, and the variance and coefficient of variation were calculated. There is variability among rows in trials with lettuce in plastic greenhouses, and the use of borders does not bring benefits in terms of reducing the coefficient of variation or minimizing the cases of heterogeneous variances among rows. In experiments with lettuce in a plastic greenhouse, the use of an experimental unit size greater than or equal to two plants provides homogeneity of variances among rows and columns and, therefore, allows the use of a completely randomized design.
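
    The experimental-unit-size computation used in such uniformity trials can be sketched in a few lines (Python; the yield matrix below is simulated, not the trial data): adjacent basic units within a row are pooled into units of size k and the coefficient of variation is tracked.

        import numpy as np

        rng = np.random.default_rng(7)
        yields = rng.gamma(20.0, 10.0, size=(12, 24))   # rows x plants, invented fresh masses
        for k in (1, 2, 3, 4):
            trimmed = yields[:, : (yields.shape[1] // k) * k]
            units = trimmed.reshape(yields.shape[0], -1, k).sum(axis=2)
            cv = 100.0 * units.std(ddof=1) / units.mean()
            print(f"experimental unit of {k} plant(s): CV = {cv:.1f}%")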

  2. Atmospheric aerosol sampling campaign in Budapest and K-puszta. Part 1. Elemental concentrations and size distributions

    International Nuclear Information System (INIS)

    Dobos, E.; Borbely-Kiss, I.; Kertesz, Zs.; Szabo, Gy.; Salma, I.

    2004-01-01

    Atmospheric aerosol samples were collected in a sampling campaign from 24 July to 1 August 2003 in Hungary. Sampling was performed at two sites simultaneously: in Budapest (urban site) and at K-puszta (remote area). Two PIXE International 7-stage cascade impactors, which separate the aerosol into 7 size ranges, were used for aerosol sampling with 24-hour duration. The elemental concentrations of the samples were obtained by proton-induced X-ray emission (PIXE) analysis. Size distributions of S, Si, Ca, W, Zn, Pb and Fe were investigated at K-puszta and in Budapest. Average fractions of the elemental concentrations (in %, shown in Table 1) were calculated for each stage from the obtained distributions. The elements can be grouped into two groups on the basis of these data. The majority of the particles containing Fe, Si, Ca and (Ti) are in the 2-8 μm size range (first group). These soil-origin elements were usually found in higher concentrations in Budapest than at K-puszta (Fig. 1). The second group consists of S, Pb and (W). The majority of these elements was found in the 0.25-1 μm size range, with much higher concentrations in Budapest than at K-puszta. W was measured only in samples collected in Budapest. Zn has a uniform distribution in Budapest and does not belong to the above-mentioned groups. This work was supported by the National Research and Development Program (NRDP 3/005/2001). (author)

  3. Size Distributions and Characterization of Native and Ground Samples for Toxicology Studies

    Science.gov (United States)

    McKay, David S.; Cooper, Bonnie L.; Taylor, Larry A.

    2010-01-01

    This slide presentation shows charts and graphs that review the particle size distribution and characterization of natural and ground samples for toxicology studies. There are graphs which show the volume distribution versus the number distribution for naturally occurring dust, jet mill ground dust, and ball mill ground dust.

  4. Size Matters: Assessing Optimum Soil Sample Size for Fungal and Bacterial Community Structure Analyses Using High Throughput Sequencing of rRNA Gene Amplicons

    Directory of Open Access Journals (Sweden)

    Christopher Ryan Penton

    2016-06-01

    We examined the effect of different soil sample sizes obtained from an agricultural field, under a single cropping system uniform in soil properties and aboveground crop responses, on bacterial and fungal community structure and microbial diversity indices. DNA extracted from soil sample sizes of 0.25, 1, 5 and 10 g using MoBIO kits and from 10 and 100 g sizes using a bead-beating method (SARDI) was used as template for high-throughput sequencing of 16S and 28S rRNA gene amplicons for bacteria and fungi, respectively, on the Illumina MiSeq and Roche 454 platforms. Sample size significantly affected overall bacterial and fungal community structure, replicate dispersion and the number of operational taxonomic units (OTUs) retrieved. Richness, evenness and diversity were also significantly affected. The largest diversity estimates were always associated with the 10 g MoBIO extractions, with a corresponding reduction in replicate dispersion. For the fungal data, smaller MoBIO extractions identified more unclassified Eukaryota incertae sedis and unclassified Glomeromycota, while the SARDI method retrieved more abundant OTUs containing unclassified Pleosporales and the fungal genera Alternaria and Cercophora. Overall, these findings indicate that a 10 g soil DNA extraction is most suitable for both soil bacterial and fungal communities, retrieving optimal diversity while still capturing rarer taxa and decreasing replicate variation.

  5. Adaptive clinical trial designs with pre-specified rules for modifying the sample size: understanding efficient types of adaptation.

    Science.gov (United States)

    Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S

    2013-04-15

    Adaptive clinical trial design has been proposed as a promising new approach that may improve the drug discovery process. Proponents of adaptive sample size re-estimation promote its ability to avoid 'up-front' commitment of resources, better address the complicated decisions faced by data monitoring committees, and minimize accrual to studies having delayed ascertainment of outcomes. We investigate aspects of adaptation rules, such as timing of the adaptation analysis and magnitude of sample size adjustment, that lead to greater or lesser statistical efficiency. Owing in part to the recent Food and Drug Administration guidance that promotes the use of pre-specified sampling plans, we evaluate alternative approaches in the context of well-defined, pre-specified adaptation. We quantify the relative costs and benefits of fixed sample, group sequential, and pre-specified adaptive designs with respect to standard operating characteristics such as type I error, maximal sample size, power, and expected sample size under a range of alternatives. Our results build on others' prior research by demonstrating in realistic settings that simple and easily implemented pre-specified adaptive designs provide only very small efficiency gains over group sequential designs with the same number of analyses. In addition, we describe optimal rules for modifying the sample size, providing efficient adaptation boundaries on a variety of scales for the interim test statistic for adaptation analyses occurring at several different stages of the trial. We thus provide insight into what are good and bad choices of adaptive sampling plans when the added flexibility of adaptive designs is desired. Copyright © 2012 John Wiley & Sons, Ltd.

  6. Determining Sample Size with a Given Range of Mean Effects in One-Way Heteroscedastic Analysis of Variance

    Science.gov (United States)

    Shieh, Gwowen; Jan, Show-Li

    2013-01-01

    The authors examined 2 approaches for determining the required sample size of Welch's test for detecting equality of means when the greatest difference between any 2 group means is given. It is shown that the actual power obtained with the sample size of the suggested approach is consistently at least as great as the nominal power. However, the…
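
    The comparison of nominal versus actual power described above can be checked by Monte Carlo simulation. The sketch below estimates the actual power of Welch's test for a candidate sample size, shown for the two-group case (Welch's t-test) with hypothetical means and unequal standard deviations; it is not the authors' method, only an empirical check of the same quantity.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def welch_power(n_per_group, means, sds, alpha=0.05, reps=5000):
        """Monte Carlo estimate of the actual power of Welch's test."""
        hits = 0
        for _ in range(reps):
            a = rng.normal(means[0], sds[0], n_per_group)
            b = rng.normal(means[1], sds[1], n_per_group)
            # equal_var=False gives Welch's t-test
            if stats.ttest_ind(a, b, equal_var=False).pvalue < alpha:
                hits += 1
        return hits / reps

    # Hypothetical: greatest mean difference 0.5, unequal SDs
    print(welch_power(n_per_group=64, means=(0.0, 0.5), sds=(1.0, 1.5)))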

  7. SUPERIMPOSED MESH PLOTTING IN MCNP

    Energy Technology Data Exchange (ETDEWEB)

    J. HENDRICKS

    2001-02-01

    The capability to plot superimposed meshes has been added to MCNP™. MCNP4C featured a superimposed mesh weight window generator which enabled users to set up geometries without having to subdivide geometric cells for variance reduction. The variance reduction was performed with weight windows on a rectangular or cylindrical mesh superimposed over the physical geometry. Experience with the new capability was favorable but also indicated that a number of enhancements would be very beneficial, particularly a means of visualizing the mesh and its values. The mathematics for plotting the mesh and its values is described here along with a description of other upgrades.

  8. Richness and Abundance of Lianas with Different Diameter Classes in Permanent Plots in the Amazon in Mato Grosso

    Directory of Open Access Journals (Sweden)

    D. G. Ferraz

    2013-11-01

    Full Text Available Lianas are an important component of the structure and diversity of tropical forests, and the Amazon biome is one of the few natural protected areas that still support the highest levels of biodiversity in the world. Generally, higher densities of lianas are found in disturbed forests than in mature forests. The aim of this study is to investigate the richness among families and the abundance of lianas in different diameter classes in permanent plots in the Amazon of Mato Grosso. For the survey, 8 plots of 40 x 250 m were placed in a forest fragment that has been under management for 30 years, where we sampled liana species with diameter at breast height (DBH) ≥ 1 cm. A total of 3,970 stems were sampled in the permanent plots; the two most abundant plots were plots 2 and 6, with 594 and 573 individuals respectively. The richest families were Sapindaceae, Dilleniaceae, Menispermaceae and Fabaceae. These results confirm the hypothesis that disturbed areas have a higher density of lianas with small DBH.

  9. SplicePlot: a utility for visualizing splicing quantitative trait loci.

    Science.gov (United States)

    Wu, Eric; Nance, Tracy; Montgomery, Stephen B

    2014-04-01

    RNA sequencing has provided unprecedented resolution of alternative splicing and splicing quantitative trait loci (sQTL). However, there are few tools available for visualizing the genotype-dependent effects of splicing at a population level. SplicePlot is a simple command line utility that produces intuitive visualization of sQTLs and their effects. SplicePlot takes mapped RNA sequencing reads in BAM format and genotype data in VCF format as input and outputs publication-quality Sashimi plots, hive plots and structure plots, enabling better investigation and understanding of the role of genetics on alternative splicing and transcript structure. Source code and detailed documentation are available at http://montgomerylab.stanford.edu/spliceplot/index.html under Resources and at Github. SplicePlot is implemented in Python and is supported on Linux and Mac OS. A VirtualBox virtual machine running Ubuntu with SplicePlot already installed is also available.

  10. Reaction Order Ambiguity in Integrated Rate Plots

    Science.gov (United States)

    Lee, Joe

    2008-01-01

    Integrated rate plots are frequently used in reaction kinetics to determine orders of reactions. It is often emphasised, when using this methodology in practice, that it is necessary to monitor the reaction to a substantial fraction of completion for these plots to yield unambiguous orders. The present article gives a theoretical and statistical…

  11. Modelos de regressão com platô na estimativa do tamanho de parcelas em experimento de conservação in vitro de maracujazeiro [Plateau regression models for estimating plot size in in vitro conservation experiments with passion fruit]

    Directory of Open Access Journals (Sweden)

    Ana Patricia Bastos Peixoto

    2011-11-01

    Full Text Available Determining plot size is a practical part of experimental planning, and its optimized characterization, together with control of the experimental material, allows results of greater precision and quality. In this work, plot size was determined for in vitro conservation experiments with passion fruit, in ten uniformity assays with the species Passiflora Giberti N. E. Brown, using the segmented linear regression with plateau model and the segmented quadratic regression with plateau model, which apply the plateau-response technique to models that possess a minimum. The uniformity assays came from an experiment conducted in a completely randomized design with 20 basic units (bu), with treatments arranged in a factorial scheme with three concentrations of sucrose, three concentrations of sorbitol and a control; each treatment was considered a uniformity assay. Data were collected 60 days after incubation by measuring shoot length. The estimated plot sizes varied with the method used: plots of six explants were obtained with the segmented linear regression with plateau model, and of ten explants with the segmented quadratic regression with plateau model.
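
    A segmented linear regression with plateau of the kind used in this record can be fitted with ordinary nonlinear least squares. The sketch below, on hypothetical uniformity-assay data (coefficient of variation versus plot size), estimates the breakpoint that is read as the optimal plot size; it is a minimal illustration, not the authors' implementation.

    import numpy as np
    from scipy.optimize import curve_fit

    def linear_plateau(x, beta0, beta1, x0):
        """Linear decline until the breakpoint x0, constant plateau after;
        the breakpoint is taken as the optimal plot size."""
        return np.where(x < x0, beta0 + beta1 * x, beta0 + beta1 * x0)

    # Hypothetical uniformity-assay data: CV (%) vs plot size (basic units)
    x = np.arange(1, 21, dtype=float)
    cv = 30 / np.sqrt(x) + np.random.default_rng(0).normal(0, 0.5, x.size)

    p0 = (30.0, -2.0, 6.0)  # rough starting values for beta0, beta1, x0
    (beta0, beta1, x0), _ = curve_fit(linear_plateau, x, cv, p0=p0)
    print(f"breakpoint (optimal plot size) ≈ {x0:.1f} basic units; "
          f"plateau CV ≈ {beta0 + beta1 * x0:.1f}%")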

  12. Sampling procedures for inventory of commercial volume tree species in Amazon Forest.

    Science.gov (United States)

    Netto, Sylvio P; Pelissari, Allan L; Cysneiros, Vinicius C; Bonazza, Marcelo; Sanquetta, Carlos R

    2017-01-01

    The spatial distribution of tropical tree species can affect the consistency of the estimators in commercial forest inventories; therefore, appropriate sampling procedures are required to survey species with different spatial patterns in the Amazon Forest. To this end, the present study aims to evaluate conventional sampling procedures and to introduce adaptive cluster sampling for volumetric inventories of Amazonian tree species, considering the hypotheses that density, spatial distribution and zero-plots affect the consistency of the estimators, and that adaptive cluster sampling allows more accurate volumetric estimation. We use data from a census carried out in Jamari National Forest, Brazil, where trees with diameters equal to or greater than 40 cm were measured in 1,355 plots. Species with different spatial patterns were selected and sampled with simple random sampling, systematic sampling, linear cluster sampling and adaptive cluster sampling, whereby the accuracy of the volumetric estimation and the presence of zero-plots were evaluated. The sampling procedures applied to the species were affected by the low density of trees and the large number of zero-plots; the adaptive clusters allowed the sampling effort to be concentrated in plots containing trees and thus aggregated more representative samples for estimating the commercial volume.

  13. In Situ Sampling of Relative Dust Devil Particle Loads and Their Vertical Grain Size Distributions.

    Science.gov (United States)

    Raack, Jan; Reiss, Dennis; Balme, Matthew R; Taj-Eddine, Kamal; Ori, Gian Gabriele

    2017-04-19

    During a field campaign in the Sahara Desert in southern Morocco in spring 2012, we sampled the vertical grain size distribution of two active dust devils that exhibited different dimensions and intensities. With these in situ samples of grains in the vortices, it was possible to derive detailed vertical grain size distributions and measurements of the lifted relative particle load. Measurements of the two dust devils show that the majority of all lifted particles were lifted only within the first meter (∼46.5% and ∼61% of all particles; ∼76.5 wt % and ∼89 wt % of the relative particle load). Furthermore, ∼69% and ∼82% of all lifted sand grains occurred in the first meter of the dust devils, indicating the occurrence of "sand skirts." Both sampled dust devils were relatively small (∼15 m and ∼4-5 m in diameter) compared to dust devils in surrounding regions; nevertheless, measurements show that ∼58.5% to 73.5% of all lifted particles were small enough to go into suspension (grain size classification). This relatively high proportion represents only ∼0.05 to 0.15 wt % of the lifted particle load. Larger dust devils probably entrain larger amounts of fine-grained material into the atmosphere, which can have an influence on the climate. Furthermore, our results indicate that the composition of the surface on which the dust devils evolved also had an influence on the particle load composition of the dust devil vortices. The internal particle load structure of both sampled dust devils was comparable with respect to their vertical grain size distribution and relative particle load, although the two dust devils differed in dimensions and intensities. A general trend of decreasing grain size with height was also detected. Key Words: Mars-Dust devils-Planetary science-Desert soils-Atmosphere-Grain sizes. Astrobiology 17, xxx-xxx.

  14. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    Science.gov (United States)

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and to assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30. Applying nonparametric methods (or a Box-Cox transformation) to all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RIs. © 2016 American Society for Veterinary Clinical Pathology.
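
    A miniature version of this simulation design, under one common labeling of outcomes (specificity: a Gaussian sample is not rejected; sensitivity: a lognormal sample is rejected), using the Shapiro-Wilk test at n = 30; all population parameters here are illustrative.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n, reps, alpha = 30, 1000, 0.05

    # Specificity: fraction of Gaussian samples NOT rejected as non-normal
    spec = np.mean([stats.shapiro(rng.normal(0, 1, n)).pvalue >= alpha
                    for _ in range(reps)])
    # Sensitivity: fraction of lognormal samples correctly rejected
    sens = np.mean([stats.shapiro(rng.lognormal(0, 0.5, n)).pvalue < alpha
                    for _ in range(reps)])
    print(f"Shapiro-Wilk at n={n}: specificity={spec:.2f}, sensitivity={sens:.2f}")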

  15. Contemplation on Plot and Personification in Tehran Makhuf

    OpenAIRE

    Mirjalil Akrami; Mohammad Pashaei

    2013-01-01

    Abstract Story is one of the important and influential literary genres and deserves research on its structure and content. By studying and analyzing a story, the reader can comprehend its textual messages and change his attitude toward life and different issues. After defining the social novel, the author tries to analyze plot and personification in the novel "Tehran-e-Makhuf", to respond to questions on the methods employed for plot and personification, and to analyze it from plot…

  16. PET kinetic analysis --pitfalls and a solution for the Logan plot.

    Science.gov (United States)

    Kimura, Yuichi; Naganawa, Mika; Shidahara, Miho; Ikoma, Yoko; Watabe, Hiroshi

    2007-01-01

    The Logan plot is a widely used algorithm for the quantitative analysis of neuroreceptors using PET because it is easy to use and simple to implement. The Logan plot is also suitable for receptor imaging because its algorithm is fast. However, use of the Logan plot, and interpretation of the formed receptor images should be regarded with caution, because noise in PET data causes bias in the Logan plot estimates. In this paper, we describe the basic concept of the Logan plot in detail and introduce three algorithms for the Logan plot. By comparing these algorithms, we demonstrate the pitfalls of the Logan plot and discuss the solution.
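
    The Logan plot itself reduces to a linear regression on integrated time-activity curves. The sketch below shows the basic ordinary-least-squares variant (whose noise-induced bias the authors discuss), on hypothetical one-tissue-compartment data; it is not one of the three algorithms compared in the paper.

    import numpy as np

    def cumtrapz0(y, t):
        """Cumulative trapezoidal integral of y over t, starting at 0."""
        return np.concatenate(([0.0],
                               np.cumsum(np.diff(t) * (y[1:] + y[:-1]) / 2)))

    def logan_slope(t, ct, cp, t_star=30.0):
        """Logan graphical analysis (basic OLS variant): regress
        y = int_0^t ct dt' / ct on x = int_0^t cp dt' / ct for t >= t_star;
        the slope estimates the total distribution volume V_T."""
        mask = t >= t_star
        x = cumtrapz0(cp, t)[mask] / ct[mask]
        y = cumtrapz0(ct, t)[mask] / ct[mask]
        slope, _intercept = np.polyfit(x, y, 1)
        return slope

    # Hypothetical one-tissue-compartment data: true V_T = K1/k2 = 2.0
    t = np.linspace(0.5, 90.0, 40)               # minutes
    cp = 100.0 * np.exp(-0.1 * t)                # plasma input function
    K1, k2 = 0.1, 0.05
    # ct(t) = K1 * int_0^t exp(-k2 (t - s)) cp(s) ds (discrete convolution)
    ct = np.array([cumtrapz0(np.exp(-k2 * (ti - t[: i + 1])) * cp[: i + 1],
                             t[: i + 1])[-1] * K1 for i, ti in enumerate(t)])
    print(f"Logan V_T estimate: {logan_slope(t, ct, cp):.2f} (true 2.0)")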

  17. SCALPLO, Plotting of Flux Output from SCALE Program

    International Nuclear Information System (INIS)

    Hersman, A.; De Leege, P.F.A.; Hoogenboom, J.E.

    1993-01-01

    1 - Description of program or function: SCALPLO is a plot program, designed to plot flux, power and spectrum information. Data exchange between SCALE modules and SCALPLO is via CCCC-interface files. As not all modules can produce these files, there are special routines supplied with SCALPLO that can produce CCCC-like files. These routines can be included in the code and for XSDRPM, CITATION, ANISN and DOT, the place to include these routines is supplied. 2 - Method of solution: SCALPLO consists of two sections. Firstly the pre-processor, which selects and reads the required data. Secondly the plot section which produces the plot on the selected output device. 3 - Restrictions on the complexity of the problem: SCALPLO requires DISSPLA version 11.0 or higher. The choice of output device depends on the devices installed

  18. A drawback and an improvement of the classical Weibull probability plot

    International Nuclear Information System (INIS)

    Jiang, R.

    2014-01-01

    The classical Weibull Probability Paper (WPP) plot has been widely used to identify a model for fitting a given dataset. It is based on a match in shape between the WPP plots of the model and the data. This paper carries out an analysis of the Weibull transformations that create the WPP plot and shows that the shape of the WPP plot of data randomly generated from a distribution model can differ significantly from the shape of the WPP plot of the model itself, due to the high non-linearity of the Weibull transformations. As such, choosing a model based on the shape of the WPP plot of the data can be unreliable. A cdf-based weighted least squares method is proposed to improve the parameter estimation accuracy, and an improved WPP plot is suggested to avoid the drawback of the classical WPP plot. The appropriateness and usefulness of the proposed estimation method and probability plot are illustrated by simulation and real-world examples.
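
    For reference, the classical WPP coordinates are ln(x) against ln(-ln(1 - F)), with F estimated from median ranks; a Weibull sample then falls near a line whose slope is the shape parameter. A minimal sketch of the classical plot (not the authors' cdf-based weighted least squares method):

    import numpy as np

    def wpp_coordinates(data):
        """Classical Weibull probability plot coordinates. A Weibull sample
        falls roughly on a line with slope = shape parameter and
        intercept = -shape * ln(scale)."""
        x = np.sort(np.asarray(data, dtype=float))
        n = x.size
        f = (np.arange(1, n + 1) - 0.3) / (n + 0.4)  # median-rank estimate
        return np.log(x), np.log(-np.log(1 - f))

    rng = np.random.default_rng(7)
    sample = 50 * rng.weibull(2.0, size=200)         # shape 2, scale 50
    u, v = wpp_coordinates(sample)
    slope, intercept = np.polyfit(u, v, 1)
    print(f"WPP slope (shape) ≈ {slope:.2f}, "
          f"scale ≈ {np.exp(-intercept / slope):.1f}")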

  19. Evaluating the performance of species richness estimators: sensitivity to sample grain size

    DEFF Research Database (Denmark)

    Hortal, Joaquín; Borges, Paulo A. V.; Gaspar, Clara

    2006-01-01

    Data obtained with standardized sampling of 78 transects in natural forest remnants of five islands were aggregated in seven different grains (i.e. ways of defining a single sample): islands, natural areas, transects, pairs of traps, traps, database records and individuals, to assess the effect of grain size on estimator performance. Several recent estimators [proposed by Rosenzweig et al. (Conservation Biology, 2003, 17, 864-874) and Ugland et al. (Journal of Animal Ecology, 2003, 72, 888-897)] performed poorly. Estimations developed using the smaller grain sizes (pair of traps, traps, records and individuals) presented similar…

  20. Considerations for Sample Preparation Using Size-Exclusion Chromatography for Home and Synchrotron Sources.

    Science.gov (United States)

    Rambo, Robert P

    2017-01-01

    The success of a SAXS experiment for structural investigations depends on two precise measurements, the sample and the buffer background. Buffer matching between the sample and background can be achieved using dialysis methods but in biological SAXS of monodisperse systems, sample preparation is routinely being performed with size exclusion chromatography (SEC). SEC is the most reliable method for SAXS sample preparation as the method not only purifies the sample for SAXS but also almost guarantees ideal buffer matching. Here, I will highlight the use of SEC for SAXS sample preparation and demonstrate using example proteins that SEC purification does not always provide for ideal samples. Scrutiny of the SEC elution peak using quasi-elastic and multi-angle light scattering techniques can reveal hidden features (heterogeneity) of the sample that should be considered during SAXS data analysis. In some cases, sample heterogeneity can be controlled using a small molecule additive and I outline a simple additive screening method for sample preparation.

  1. Development of TRatioPlot in ROOT

    CERN Document Server

    Gessinger-Befurt, Paul

    2016-01-01

    The ROOT data analysis and visualization framework is a software package which is widely used in physics, especially in high energy physics. A common visualization which has so far been lacking a direct implementation is the ratio plot, as well as a few similar types of plots. The scope and goal of the summer student project at CERN was to implement a class in ROOT itself, that can take care of the most common types of calculations, and produces high quality visuals.
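
    A minimal usage sketch of the resulting class via PyROOT, assuming a ROOT release (6.08 or later) with Python bindings installed; histogram names and contents are purely illustrative.

    import ROOT

    h1 = ROOT.TH1D("h1", "data;x;entries", 50, -4, 4)
    h2 = ROOT.TH1D("h2", "model;x;entries", 50, -4, 4)
    h1.FillRandom("gaus", 10000)
    h2.FillRandom("gaus", 10000)

    c = ROOT.TCanvas("c", "ratio plot", 800, 600)
    rp = ROOT.TRatioPlot(h1, h2)   # default mode: division with errors
    rp.Draw()
    c.SaveAs("ratio.png")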

  2. Digital data collection in forest dynamics plots

    Science.gov (United States)

    Faith Inman-Narahari; Christian Giardina; Rebecca Ostertag; Susan Cordell; Lawren Sack

    2010-01-01

    Summary 1. Computers are widely used in all aspects of research but their application to in-field data collection for forest plots has rarely been evaluated. 2. We developed digital data collection methods using ESRI mapping software and ruggedized field computers to map and measure ~30 000 trees in two 4-ha forest dynamics plots in wet and dry...

  3. The study of the sample size on the transverse magnetoresistance of bismuth nanowires

    International Nuclear Information System (INIS)

    Zare, M.; Layeghnejad, R.; Sadeghi, E.

    2012-01-01

    The effects of sample size on the galvanomagnetic properties of semimetal nanowires are theoretically investigated. Transverse magnetoresistance (TMR) ratios have been calculated within a Boltzmann Transport Equation (BTE) approach using the specular reflection approximation. The temperature and radius dependence of the transverse magnetoresistance of cylindrical bismuth nanowires is given. The obtained values are in good agreement with the experimental results reported by Heremans et al. - Highlights: ► Effects of sample size on the galvanomagnetic properties of Bi nanowires are explained via the Parrott theorem by solving the Boltzmann Transport Equation. ► Transverse magnetoresistance (TMR) ratios are calculated in the specular reflection approximation. ► The temperature and radius dependence of the transverse magnetoresistance of cylindrical bismuth nanowires is given. ► The obtained values are in good agreement with the experimental results reported by Heremans et al.

  4. Prototyping chips in minutes: Direct Laser Plotting (DLP) of functional microfluidic structures

    KAUST Repository

    Wang, Limu

    2013-10-10

    We report a fast and simple prototyping method to fabricate polymer-based microfluidic chips using the Direct Laser Plotting (DLP) technique, by which various functional micro-structures can be realized within minutes, in a mask-free and out-of-cleanroom fashion. A 2D Computer-Aided Design (CAD) package was employed to lay out the required micro-structures and micro-channels; a CO2 laser plotter was then used to construct the microstructures. The desired patterns can be plotted directly on PDMS substrates and bio-compatible polymer films by manipulating the strength and density of the laser pulses. With the DLP technique, chip-embedded micro-electrodes, micro-mixers and 3D microfluidic chips with 5 layers, which normally require several days of work in a cleanroom facility, can be fabricated in minutes in a common laboratory. This method can produce microfluidic channels with an average feature size of 100 μm, while a feature size of 50 μm or smaller is achievable by making use of the interference effect of the laser pulses. In this report, we present the optimized parameters for successful fabrication of 3D microchannels, micro-mixers and microfluidic chips for protein concentration measurements (Bovine Serum Albumin (BSA) test), and a novel procedure to pattern flexible embedded electrodes on PDMS-based microfluidic chips. DLP offers a convenient and low-cost alternative to the conventional microfluidic channel fabrication technique, which relies on complicated and hazardous soft lithography processes.

  5. Discrepancies in sample size calculations and data analyses reported in randomised trials: comparison of publications with protocols

    DEFF Research Database (Denmark)

    Chan, A.W.; Hrobjartsson, A.; Jorgensen, K.J.

    2008-01-01

    OBJECTIVE: To evaluate how often sample size calculations and methods of statistical analysis are pre-specified or changed in randomised trials. DESIGN: Retrospective cohort study. Data source: Protocols and journal publications of published randomised parallel group trials initially approved in 1994-5 by the scientific-ethics committees for Copenhagen and Frederiksberg, Denmark (n=70). MAIN OUTCOME MEASURE: Proportion of protocols and publications that did not provide key information about sample size calculations and statistical methods; proportion of trials with discrepancies between protocols and publications. … The method of handling missing data was described in 16 protocols and 49 publications. 39/49 protocols and 42/43 publications reported the statistical test used to analyse primary outcome measures. Unacknowledged discrepancies between protocols and publications were found for sample size calculations (18/34 trials…

  6. A Web-based Simulator for Sample Size and Power Estimation in Animal Carcinogenicity Studies

    Directory of Open Access Journals (Sweden)

    Hojin Moon

    2002-12-01

    Full Text Available A Web-based statistical tool for sample size and power estimation in animal carcinogenicity studies is presented in this paper. It can be used to provide a design with sufficient power for detecting a dose-related trend in the occurrence of a tumor of interest when competing risks are present. The tumors of interest typically are occult tumors for which the time to tumor onset is not directly observable. It is applicable to rodent tumorigenicity assays that have either a single terminal sacrifice or multiple (interval) sacrifices. The design is achieved by varying sample size per group, number of sacrifices, number of sacrificed animals at each interval, if any, and scheduled time points for sacrifice. Monte Carlo simulation is carried out in this tool to simulate experiments of rodent bioassays because no closed-form solution is available. It takes design parameters for sample size and power estimation as inputs through the World Wide Web. The core program is written in C and executed in the background. It communicates with the Web front end via a Component Object Model interface passing an Extensible Markup Language string. The proposed statistical tool is illustrated with an animal study in lung cancer prevention research.

  7. True versus perturbed forest inventory plot locations for modeling: a simulation study

    Science.gov (United States)

    John W. Coulston; Kurt H. Riitters; Ronald E. McRoberts; William D. Smith

    2006-01-01

    USDA Forest Service Forest Inventory and Analysis plot information is widely used for timber inventories, forest health assessments, and environmental risk analyses. With few exceptions, true plot locations are not revealed; the plot coordinates are manipulated to obscure the location of field plots and thereby preserve plot integrity. The influence of perturbed plot...

  8. Dalitz plot analysis of B0 → D̄0π+π− decays

    NARCIS (Netherlands)

    Aaij, R.; Adeva, B.; Adinolfi, M.; Affolder, A.; Ajaltouni, Z.; Akar, S.; Albrecht, J.; Alessio, F.; Alexander, M.; Ali, S.; Alkhazov, G.; Cartelle, P. Alvarez; Alves, A. A.; Amato, S.; Amerio, S.; Amhis, Y.; An, L.; Anderlini, L.; Andreassen, R.; Andreotti, M.; Andrews, J. E.; Appleby, R. B.; Gutierrez, O. Aquines; Archilli, F.; d'Argent, P.; Artamonov, A.; Artuso, M.; Aslanides, E.; Auriemma, G.; Baalouch, M.; Bachmann, S.; Back, J. J.; Badalov, A.; Baesso, C.; Baldini, W.; Barlow, R. J.; Barschel, C.; Barsuk, S.; Barter, W.; Batozskaya, V.; Battista, V.; Beaucourt, L.; Beddow, J.; Bedeschi, F.; Bediaga, I.; Bel, L. J.; Belogurov, S.; Onderwater, C. J. G.; Pellegrino, A.; Tolk, S.

    2015-01-01

    The resonant substructures of B0 → D̄0π+π− decays are studied with the Dalitz plot technique. In this study a data sample corresponding to an integrated luminosity of 3.0 fb−1 of pp collisions collected by the LHCb detector is used. The branching fraction of the B0 → D̄…

  9. Generalized procedures for determining inspection sample sizes (related to quantitative measurements). Vol. 1: Detailed explanations

    International Nuclear Information System (INIS)

    Jaech, J.L.; Lemaire, R.J.

    1986-11-01

    Generalized procedures have been developed to determine sample sizes in connection with the planning of inspection activities. These procedures are based on different measurement methods. They are applied mainly to Bulk Handling Facilities and Physical Inventory Verifications. The present report attempts (i) to assign to appropriate statistical testers (viz. testers for gross, partial and small defects) the measurement methods to be used, and (ii) to associate the measurement uncertainties with the sample sizes required for verification. Working papers are also provided to assist in the application of the procedures. This volume contains the detailed explanations concerning the above mentioned procedures

  10. (I Can't Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research.

    Science.gov (United States)

    van Rijnsoever, Frank J

    2017-01-01

    I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: "random chance," which is based on probability sampling, "minimal information," which yields at least one new code per sampling step, and "maximum information," which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario.
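
    The "random chance" scenario above resembles a weighted coupon-collector problem. The sketch below estimates the mean sample size to saturation under the simplifying assumption that each sampled source reveals exactly one code (the paper's sources can hold several); the code probabilities are hypothetical.

    import numpy as np

    rng = np.random.default_rng(3)

    def steps_to_saturation(code_probs, reps=2000):
        """Random-chance scenario: each sampling step draws one code with
        the given probabilities; count draws until all codes are seen."""
        k = len(code_probs)
        steps = []
        for _ in range(reps):
            seen, n = set(), 0
            while len(seen) < k:
                seen.add(rng.choice(k, p=code_probs))
                n += 1
            steps.append(n)
        return np.mean(steps)

    # Hypothetical population: 10 codes, several of them rare
    probs = np.array([0.2] * 3 + [0.05] * 7)
    probs = probs / probs.sum()
    print(f"mean sample size to saturation: {steps_to_saturation(probs):.0f}")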

  11. Determination of a representative volume element based on the variability of mechanical properties with sample size in bread.

    Science.gov (United States)

    Ramírez, Cristian; Young, Ashley; James, Bryony; Aguilera, José M

    2010-10-01

    Quantitative analysis of food structure is commonly obtained by image analysis of a small portion of the material that may not be representative of the whole sample. In order to quantify structural parameters (air cells) of 2 types of bread (bread and bagel), the concept of representative volume element (RVE) was employed. The RVE for bread, bagel, and gelatin-gel (used as control) was obtained from the relationship between sample size and the coefficient of variation, calculated from the apparent Young's modulus measured on 25 replicates. The RVE was obtained when the coefficient of variation for different sample sizes converged to a constant value. In the 2 types of bread tested, the coefficient of variation tended to decrease as the sample size increased, while in the homogeneous gelatin-gel it remained constant at around 2.3% to 2.4%. The RVE turned out to be cubes with sides of 45 mm for bread, 20 mm for bagels, and 10 mm for gelatin-gel (the smallest sample tested). The quantitative image analysis as well as visual observation demonstrated that bread presented the largest dispersion of air-cell sizes. Moreover, both the ratio of maximum air-cell area to image area and of maximum air-cell height to image height were greater for bread (values of 0.05 and 0.30, respectively) than for bagels (0.03 and 0.20, respectively). Therefore, the size and the size variation of the air cells present in the structure determined the size of the RVE. It was concluded that the RVE is highly dependent on the heterogeneity of the structure of the types of baked products.
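
    The convergence criterion described above can be stated compactly: compute the coefficient of variation of the measured modulus for each sample size and take the smallest size whose CV is within a tolerance of the converged value. A sketch with hypothetical measurements (the toy dispersion model and tolerance are illustrative, not the study's values):

    import numpy as np

    rng = np.random.default_rng(5)

    def cv(values):
        return np.std(values, ddof=1) / np.mean(values)

    # Hypothetical apparent Young's modulus, 25 replicates per sample size;
    # scatter shrinks as the sample grows past the structural heterogeneity.
    sizes_mm = np.array([10, 20, 30, 45, 60])
    dispersion = 0.15 / np.sqrt(sizes_mm / 10) + 0.023   # toy model
    cvs = np.array([cv(rng.normal(1.0, d, size=25)) for d in dispersion])

    # RVE: smallest size whose CV is within tolerance of the converged value
    tol = 0.02
    rve = sizes_mm[np.argmax(np.abs(cvs - cvs[-1]) < tol)]
    print(dict(zip(sizes_mm.tolist(), np.round(cvs, 3))), "-> RVE ≈", rve, "mm")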

  12. Effects of insecticide exposure on movement and population size estimates of predatory ground beetles (Coleoptera: Carabidae).

    Science.gov (United States)

    Prasifka, Jarrad R; Lopez, Miriam D; Hellmich, Richard L; Prasifka, Patricia L

    2008-01-01

    Estimates of arthropod population size may paradoxically increase following insecticide applications. Research with ground beetles (Coleoptera: Carabidae) suggests that such unusual results reflect increased arthropod movement and capture in traps rather than real changes in population size. However, it is unclear whether direct (hyperactivity) or indirect (prey-mediated) mechanisms produce the increased movement. Video tracking of Scarites quadriceps Chaudoir indicated that brief exposure to lambda-cyhalothrin or tefluthrin increased total distance moved, maximum velocity and percentage of time moving. Repeated measurements on individual beetles indicated that movement decreased 240 min after initial lambda-cyhalothrin exposure, but increased again following a second exposure, suggesting that hyperactivity could lead to increased trap captures in the field. Two field experiments in which ground beetles were collected after lambda-cyhalothrin or permethrin application attempted to detect increases in population size estimates as a result of hyperactivity. Field trials used mark-release-recapture methods in small plots and natural carabid populations in larger plots, but found no significant short-term (<6 day) increases in beetle trap captures. The disagreement between laboratory and field results suggests that mechanisms other than hyperactivity may better explain unusual changes in population size estimates. When traps are used as a primary sampling tool, unexpected population-level effects should be interpreted carefully or with additional data less influenced by arthropod activity.

  13. Analysis of femtogram-sized plutonium samples by thermal ionization mass spectrometry

    International Nuclear Information System (INIS)

    Smith, D.H.; Duckworth, D.C.; Bostick, D.T.; Coleman, R.M.; McPherson, R.L.; McKown, H.S.

    1994-01-01

    The goal of this investigation was to extend the ability to perform isotopic analysis of plutonium to samples as small as possible. Plutonium ionizes thermally with quite good efficiency (first ionization potential 5.7 eV). Sub-nanogram sized samples can be analyzed on a near-routine basis given the necessary instrumentation. Efforts in this laboratory have been directed at rhenium-carbon systems; solutions of carbon in rhenium provide surfaces with work functions higher than pure rhenium (5.8 vs. ∼ 5.4 eV). Using a single resin bead as a sample loading medium both concentrates the sample nearly to a point and, due to its interaction with rhenium, produces the desired composite surface. Earlier work in this area showed that a layer of rhenium powder slurried in solution containing carbon substantially enhanced precision of isotopic measurements for uranium. Isotopic fractionation was virtually eliminated, and ionization efficiencies 2-5 times better than previously measured were attained for both Pu and U (1.7 and 0.5%, respectively). The other side of this coin should be the ability to analyze smaller samples, which is the subject of this report

  14. Macroecology of Australian Tall Eucalypt Forests: Baseline Data from a Continental-Scale Permanent Plot Network

    Science.gov (United States)

    Wood, Sam W.; Prior, Lynda D.; Stephens, Helen C.; Bowman, David M. J. S.

    2015-01-01

    Tracking the response of forest ecosystems to climate change demands large (≥1 ha) monitoring plots that are repeatedly measured over long time frames and arranged across macro-ecological gradients. Continental scale networks of permanent forest plots have identified links between climate and carbon fluxes by monitoring trends in tree growth, mortality and recruitment. The relationship between tree growth and climate in Australia has been recently articulated through analysis of data from smaller forest plots, but conclusions were limited by (a) absence of data on recruitment and mortality, (b) exclusion of non-eucalypt species, and (c) lack of knowledge of stand age or disturbance histories. To remedy these gaps we established the Ausplots Forest Monitoring Network: a continental scale network of 48 1 ha permanent plots in highly productive tall eucalypt forests in the mature growth stage. These plots are distributed across cool temperate, Mediterranean, subtropical and tropical climates (mean annual precipitation 850 to 1900 mm per year; mean annual temperature 6 to 21°C). Aboveground carbon stocks (AGC) in these forests are dominated by eucalypts (90% of AGC) whilst non-eucalypts in the understorey dominated species diversity and tree abundance (84% of species; 60% of stems). Aboveground carbon stocks were negatively related to mean annual temperature, with forests at the warm end of the temperature range storing approximately half the amount of carbon as forests at the cool end of the temperature range. This may reflect thermal constraints on tree growth detected through other plot networks and physiological studies. Through common protocols and careful sampling design, the Ausplots Forest Monitoring Network will facilitate the integration of tall eucalypt forests into established global forest monitoring initiatives. In the context of projections of rapidly warming and drying climates in Australia, this plot network will enable detection of links between

  15. Sample Size and Robustness of Inferences from Logistic Regression in the Presence of Nonlinearity and Multicollinearity

    OpenAIRE

    Bergtold, Jason S.; Yeager, Elizabeth A.; Featherstone, Allen M.

    2011-01-01

    The logistic regression model has been widely used in the social and natural sciences, and results from studies using this model can have significant impact. Thus, confidence in the reliability of inferences drawn from these models is essential. The robustness of such inferences is dependent on sample size. The purpose of this study is to examine the impact of sample size on the mean estimated bias and efficiency of parameter estimation and inference for the logistic regression model. A number…

  16. Bias in segmented gamma scans arising from size differences between calibration standards and assay samples

    International Nuclear Information System (INIS)

    Sampson, T.E.

    1991-01-01

    Recent advances in segmented gamma scanning have emphasized software corrections for gamma-ray self-absorption in particulates or lumps of special nuclear material in the sample. Another feature of this software is an attenuation correction factor formalism that explicitly accounts for differences in sample container size and composition between the calibration standards and the individual items being measured. Software without this container-size correction produces biases when the unknowns are not packaged in the same containers as the calibration standards. This new software allows the use of different size and composition containers for standards and unknowns, an enormous savings considering the expense of multiple calibration standard sets otherwise needed. This paper presents calculations of the bias resulting from not using this new formalism. These calculations may be used to estimate bias corrections for segmented gamma scanners that do not incorporate these advanced concepts.

  17. Sample Size Estimation for Negative Binomial Regression Comparing Rates of Recurrent Events with Unequal Follow-Up Time.

    Science.gov (United States)

    Tang, Yongqiang

    2015-01-01

    A sample size formula is derived for negative binomial regression for the analysis of recurrent events, in which subjects can have unequal follow-up time. We obtain sharp lower and upper bounds on the required size, which is easy to compute. The upper bound is generally only slightly larger than the required size, and hence can be used to approximate the sample size. The lower and upper size bounds can be decomposed into two terms. The first term relies on the mean number of events in each group, and the second term depends on two factors that measure, respectively, the extent of between-subject variability in event rates, and follow-up time. Simulation studies are conducted to assess the performance of the proposed method. An application of our formulae to a multiple sclerosis trial is provided.

  18. MMS control system analysis using automated root-locus plot generation

    International Nuclear Information System (INIS)

    Hefler, J.W.

    1987-01-01

    Use of the Modular Modeling System (MMS) for control systems improvement has been impeded by the need to plot eigenvalues manually. This problem has been solved by an automatic eigenvalue plotting routine. A practical procedure for control systems analysis based upon automatically generated root-locus plots has been developed using the Advanced Continuous Simulation Language (ACSL)-based version of the Modular Modeling System. Examples are given of typical ACSL run-time statements. Actual root-locus and time history plots are shown for simple models (4 state variables). More complex models are discussed. The plots show the control systems response before and after the determination of tuning parameters using the methods described

  19. Air Data - Tile Plot

    Science.gov (United States)

    This tool plots daily AQI values for a specific location and time period. Each square or “tile” represents one day of the year and is color-coded based on the AQI level for that day. The legend tallies the number of days in each AQI category.

  20. Rule-of-thumb adjustment of sample sizes to accommodate dropouts in a two-stage analysis of repeated measurements.

    Science.gov (United States)

    Overall, John E; Tonidandel, Scott; Starbuck, Robert R

    2006-01-01

    Recent contributions to the statistical literature have provided elegant model-based solutions to the problem of estimating sample sizes for testing the significance of differences in mean rates of change across repeated measures in controlled longitudinal studies with differentially correlated error and missing data due to dropouts. However, the mathematical complexity and model specificity of these solutions make them generally inaccessible to most applied researchers who actually design and undertake treatment evaluation research in psychiatry. In contrast, this article relies on a simple two-stage analysis in which dropout-weighted slope coefficients fitted to the available repeated measurements for each subject separately serve as the dependent variable for a familiar ANCOVA test of significance for differences in mean rates of change. This article addresses how a sample size that is estimated or calculated to provide desired power for testing that hypothesis without considering dropouts can be adjusted appropriately to take dropouts into account. Empirical results support the conclusion that, whatever reasonable level of power would be provided by a given sample size in the absence of dropouts, essentially the same power can be realized in the presence of dropouts simply by adding to the original dropout-free sample size the number of subjects who would be expected to drop from a sample of that original size under conditions of the proposed study.

  1. Uncertainty budget in internal monostandard NAA for small and large size samples analysis

    International Nuclear Information System (INIS)

    Dasari, K.B.; Acharya, R.

    2014-01-01

    Total uncertainty budget evaluation on a determined concentration value is important under a quality assurance programme. Concentration calculation in NAA is carried out by relative NAA or by the k0-based internal monostandard NAA (IM-NAA) method. The IM-NAA method has been used for the analysis of small and large samples of clay potteries. An attempt was made to identify the uncertainty components in IM-NAA, and the uncertainty budget for La in both small and large size samples has been evaluated and compared. (author)

  2. Split-Plot Designs with Mirror Image Pairs as Subplots

    DEFF Research Database (Denmark)

    Tyssedal, John; Kulahci, Murat; Bisgaard, Soren

    2011-01-01

    In this article we investigate two-level split-plot designs where the sub-plots consist of only two mirror image trials. Assuming third and higher order interactions negligible, we show that these designs divide the estimated effects into two orthogonal sub-spaces, separating sub-plot main effects … The designs are appealing, with effects of major interest free from full aliasing, assuming that third and higher order interactions are negligible.

  3. A contemporary decennial global Landsat sample of changing agricultural field sizes

    Science.gov (United States)

    White, Emma; Roy, David

    2014-05-01

    Agriculture has caused significant human induced Land Cover Land Use (LCLU) change, with dramatic cropland expansion in the last century and significant increases in productivity over the past few decades. Satellite data have been used for agricultural applications including cropland distribution mapping, crop condition monitoring, crop production assessment and yield prediction. Satellite based agricultural applications are less reliable when the sensor spatial resolution is small relative to the field size. However, to date, studies of agricultural field size distributions and their change have been limited, even though this information is needed to inform the design of agricultural satellite monitoring systems. Moreover, the size of agricultural fields is a fundamental description of rural landscapes and provides an insight into the drivers of rural LCLU change. In many parts of the world field sizes may have increased. Increasing field sizes cause a subsequent decrease in the number of fields and therefore decreased landscape spatial complexity with impacts on biodiversity, habitat, soil erosion, plant-pollinator interactions, and impacts on the diffusion of herbicides, pesticides, disease pathogens, and pests. The Landsat series of satellites provide the longest record of global land observations, with 30m observations available since 1982. Landsat data are used to examine contemporary field size changes in a period (1980 to 2010) when significant global agricultural changes have occurred. A multi-scale sampling approach is used to locate global hotspots of field size change by examination of a recent global agricultural yield map and literature review. Nine hotspots are selected where significant field size change is apparent and where change has been driven by technological advancements (Argentina and U.S.), abrupt societal changes (Albania and Zimbabwe), government land use and agricultural policy changes (China, Malaysia, Brazil), and/or constrained by

  4. Addressing small sample size bias in multiple-biomarker trials: Inclusion of biomarker-negative patients and Firth correction.

    Science.gov (United States)

    Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette

    2018-03-01

    In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments are multiple-biomarker trials, which aim to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine if the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. Additional to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further but small improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size

    Directory of Open Access Journals (Sweden)

    Zhihua Wang

    2014-01-01

    Full Text Available Reasonable prediction is of significant practical value for the analysis of stochastic and unstable time series with small or limited sample size. Motivated by the rolling idea in grey theory and the practical relevance of very-short-term forecasting or 1-step-ahead prediction, a novel autoregressive (AR) prediction approach with a rolling mechanism is proposed. In the modeling procedure, a newly developed AR equation, which can be used to model nonstationary time series, is constructed in each prediction step. Meanwhile, the data window for the next step-ahead forecast rolls on by adding the most recently derived prediction result while deleting the first value of the previously used sample data set. This rolling mechanism is an efficient technique owing to its advantages of improved forecasting accuracy, applicability in the case of limited and unstable data situations, and the requirement of little computational effort. The general performance, influence of sample size, nonlinear dynamic mechanism, and significance of the observed trends, as well as innovation variance, are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
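
    A minimal sketch of the rolling mechanism (refit, forecast one step, append the forecast, drop the oldest value), here using a standard AR model from statsmodels rather than the authors' newly developed AR equation; the short series is hypothetical.

    import numpy as np
    from statsmodels.tsa.ar_model import AutoReg

    def rolling_ar_forecast(series, window, steps, lags=1):
        """Rolling-mechanism AR forecasting: at each step, fit an AR model
        on the current window, take the 1-step-ahead forecast, then roll
        the window by appending the forecast and dropping the oldest value."""
        window_data = list(series[-window:])
        forecasts = []
        for _ in range(steps):
            fit = AutoReg(np.asarray(window_data), lags=lags, trend="c").fit()
            yhat = fit.forecast(1)[0]                # 1-step-ahead prediction
            forecasts.append(yhat)
            window_data = window_data[1:] + [yhat]   # roll the window
        return forecasts

    # Hypothetical short settlement-like series (small sample size)
    y = np.array([2.0, 2.6, 3.1, 3.5, 3.8, 4.0, 4.15, 4.27, 4.36, 4.43])
    print(rolling_ar_forecast(y, window=8, steps=3))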

  6. Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA.

    Science.gov (United States)

    Kelly, Brendan J; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D; Collman, Ronald G; Bushman, Frederic D; Li, Hongzhe

    2015-08-01

    The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence-absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω2). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
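
    Outside the authors' R package, the simulation-based power logic can be sketched as follows, assuming scikit-bio is available for PERMANOVA; here groups are separated by a simple mean shift in a hypothetical feature space rather than by the ω²-based distance-matrix simulation the paper describes.

    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from skbio.stats.distance import DistanceMatrix, permanova

    rng = np.random.default_rng(11)

    def permanova_power(n_per_group, shift, dims=20, reps=200, alpha=0.05):
        """Fraction of simulated studies in which PERMANOVA rejects H0;
        the group effect is a mean shift in an abstract feature space."""
        grouping = ["a"] * n_per_group + ["b"] * n_per_group
        hits = 0
        for _ in range(reps):
            a = rng.normal(0.0, 1.0, (n_per_group, dims))
            b = rng.normal(shift, 1.0, (n_per_group, dims))
            dm = DistanceMatrix(squareform(pdist(np.vstack([a, b]))))
            if permanova(dm, grouping, permutations=199)["p-value"] < alpha:
                hits += 1
        return hits / reps

    print(permanova_power(n_per_group=15, shift=0.4))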

  7. The quantitative LOD score: test statistic and sample size for exclusion and linkage of quantitative traits in human sibships.

    Science.gov (United States)

    Page, G P; Amos, C I; Boerwinkle, E

    1998-04-01

    We present a test statistic, the quantitative LOD (QLOD) score, for the testing of both linkage and exclusion of quantitative-trait loci in randomly selected human sibships. As with the traditional LOD score, the boundary values of 3, for linkage, and -2, for exclusion, can be used for the QLOD score. We investigated the sample sizes required for inferring exclusion and linkage, for various combinations of linked genetic variance, total heritability, recombination distance, and sibship size, using fixed-size sampling. The sample sizes required for both linkage and exclusion were not qualitatively different and depended on the percentage of variance being linked or excluded and on the total genetic variance. Information regarding linkage and exclusion in sibships larger than size 2 increased as approximately all possible pairs n(n-1)/2 up to sibships of size 6. Increasing the recombination (theta) distance between the marker and the trait loci reduced empirically the power for both linkage and exclusion, as a function of approximately (1-2theta)4.

  8. PLOT3D Export Tool for Tecplot

    Science.gov (United States)

    Alter, Stephen

    2010-01-01

    The PLOT3D export tool for Tecplot solves the problem of modified data being impossible to output for use by another computational science solver. The PLOT3D Exporter add-on enables the use of the most commonly available visualization tools to engineers for output of a standard format. The exportation of PLOT3D data from Tecplot has far reaching effects because it allows for grid and solution manipulation within a graphical user interface (GUI) that is easily customized with macro language-based and user-developed GUIs. The add-on also enables the use of Tecplot as an interpolation tool for solution conversion between different grids of different types. This one add-on enhances the functionality of Tecplot so significantly, it offers the ability to incorporate Tecplot into a general suite of tools for computational science applications as a 3D graphics engine for visualization of all data. Within the PLOT3D Export Add-on are several functions that enhance the operations and effectiveness of the add-on. Unlike Tecplot output functions, the PLOT3D Export Add-on enables the use of the zone selection dialog in Tecplot to choose which zones are to be written by offering three distinct options - output of active, inactive, or all zones (grid blocks). As the user modifies the zones to output with the zone selection dialog, the zones to be written are similarly updated. This enables the use of Tecplot to create multiple configurations of a geometry being analyzed. For example, if an aircraft is loaded with multiple deflections of flaps, by activating and deactivating different zones for a specific flap setting, new specific configurations of that aircraft can be easily generated by only writing out specific zones. Thus, if ten flap settings are loaded into Tecplot, the PLOT3D Export software can output ten different configurations, one for each flap setting.

  9. Construction of an experimental plot seeder of wheat planting and compare it by imported one

    Directory of Open Access Journals (Sweden)

    I Eskandari

    2016-09-01

    Full Text Available Introduction Researchers frequently include multiple cultivars and fertility levels in field experiments. The experiment's sowing operation must therefore represent a considerable saving in time and labor compared to hand sowing. Greater flexibility in experimental design and setup can be achieved with equipment that enables quick changes in cultivar and fertilizer rate from one plot to the next. A satisfactory seed drill must distribute a given quantity of seed evenly over a predetermined length of coulter row, the coulters must be spaced at exact intervals, and the depth of sowing must be uniform. In a self-propelled plot seeder, no coulter should run in a wheel track, as the compaction of the soil can cause observable differences in vigor between plants in such a row and those in un-compacted rows. The machine should sow in succession from a tray in which a series of seed pockets are clearly separated and are put into the distributor funnel by an assistant operator, the length of gap being varied according to the nature and purpose of the plot. The objectives of this experiment were (1) to design and construct a local self-propelled plot seeder and (2) to compare it with the imported (Wintersteiger) plot seeder in cereal breeding programs. Materials and Methods A small-plot seeder was designed and constructed to meet this objective. The unit consists of the following basic components: a toolbar for pulling a set of six blade coulters, an air compressor for lifting and lowering the openers, a metering transmission drive wheel, an operator's chair and work rack, and a belt seed distributor. A cone-celled rotor seed distributor is used for seed distribution to the openers. The cone system is connected to the gearbox and allows great flexibility in changing cultivars, crop species, and plot length; it is driven by a separate drive wheel. The cone-celled distributor sows all the seed of the sample in making one complete turn. The…

  10. PETRO.CALC.PLOT, Microsoft Excel macros to aid petrologic interpretation

    Science.gov (United States)

    Sidder, G.B.

    1994-01-01

    PETRO.CALC.PLOT is a package of macros which normalizes whole-rock oxide data to 100%, calculates the cation percentages and molecular proportions used for normative mineral calculations, computes the apices for ternary diagrams, determines sums and ratios of specific elements of petrologic interest, and plots 33 X-Y graphs and five ternary diagrams. PETRO.CALC.PLOT also may be used to create other diagrams as desired by the user. The macros run in Microsoft Excel 3.0 and 4.0 for Macintosh computers and in Microsoft Excel 3.0 and 4.0 for Windows. Macros provided in PETRO.CALC.PLOT minimize repetition and the time required to recalculate and plot whole-rock oxide data for petrologic analysis. © 1994.

  11. Re-estimating sample size in cluster randomized trials with active recruitment within clusters

    NARCIS (Netherlands)

    van Schie, Sander; Moerbeek, Mirjam

    2014-01-01

    Often only a limited number of clusters can be obtained in cluster randomised trials, although many potential participants can be recruited within each cluster. Thus, active recruitment is feasible within the clusters. To obtain an efficient sample size in a cluster randomised trial, the cluster

  12. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    Science.gov (United States)

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
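
    To make the construction concrete, here is a rough numerical sketch of the logistic-regression case as the abstract describes it (my reading, not the authors' code): the slope beta and covariate standard deviation define a logit difference of 2*beta*sd(x) between two hypothetical groups, the group probabilities are anchored so the overall event probability is preserved, and a standard two-proportion formula is then applied:

    ```python
    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import norm

    def logistic_sample_size(beta, sd_x, p_overall, alpha=0.05, power=0.8):
        """Approximate total N for testing slope beta in logistic regression
        via an equivalent two-sample comparison of proportions."""
        expit = lambda t: 1.0 / (1.0 + np.exp(-t))
        delta = 2.0 * beta * sd_x          # logit difference between the two groups
        # pick group probabilities whose mean equals the overall event probability
        c = brentq(lambda t: 0.5 * (expit(t) + expit(t + delta)) - p_overall, -20, 20)
        p1, p2 = expit(c), expit(c + delta)
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        n_per_group = z**2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2)**2
        return int(np.ceil(2 * n_per_group))

    print(logistic_sample_size(beta=0.3, sd_x=1.0, p_overall=0.2))
    ```

    Per the abstract, the Cox-regression analogue keeps the overall expected number of events unchanged instead of the overall response probability.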

  13. Plotting of bathythermograph transect data on a printer

    Science.gov (United States)

    Reynolds, James B.; McLain, Douglas R.

    1971-01-01

    A program for plotting bathythermograph transect data on a computer (IBM 1130) printer is available from the Great Lakes Fishery Laboratory. Temperature values are printed in positions proportional to their depths and distances from shore. Contour lines are drawn manually through the plotted points.

  14. Privatization of Land Plot Under Integral Real Estate Complex

    Directory of Open Access Journals (Sweden)

    Maruchek A. A.

    2014-10-01

    Full Text Available The article deals with the questions concerning the privatization of a land plot under an integral real estate complex. The authors come to the conclusion that a number of legislative norms relating to the privatization of a land plot do not take into account the construction of an integral real estate complex, which could cause problems in realizing the right to privatize the land plot.

  15. Plotting and Analyzing Data Trends in Ternary Diagrams Made Easy

    Science.gov (United States)

    John, Cédric M.

    2004-04-01

    Ternary plots are used in many fields of science to characterize a system based on three components. Triangular plotting is thus useful to a broad audience in the Earth sciences and beyond. Unfortunately, it is typically the most expensive commercial software packages that offer the option to plot data in ternary diagrams, and they lack features that are paramount to the geosciences, such as the ability to plot data directly into a standardized diagram and the possibility to analyze temporal and stratigraphic trends within this diagram. To address these issues, δPlot was developed with a strong emphasis on ease of use, community orientation, and availability free of charge. This "freeware" supports a fully graphical user interface where data can be imported as text files, or by copying and pasting. A plot is automatically generated, and any standard diagram can be selected for plotting in the background using a simple pull-down menu. Standard diagrams are stored in an external database of PDF files that currently holds some 30 diagrams that deal with different fields of the Earth sciences. Using any drawing software supporting PDF, one can easily produce new standard diagrams to be used with δPlot by simply adding them to the library folder. An independent column of values, commonly stratigraphic depths or ages, can be used to sort the data sets.

  16. PET/CT in cancer: moderate sample sizes may suffice to justify replacement of a regional gold standard

    DEFF Research Database (Denmark)

    Gerke, Oke; Poulsen, Mads Hvid; Bouchelouche, Kirsten

    2009-01-01

    PURPOSE: For certain cancer indications, the current patient evaluation strategy is a perfect but locally restricted gold standard procedure. If positron emission tomography/computed tomography (PET/CT) can be shown to be reliable within the gold standard region and if it can be argued that PET/CT also performs well in adjacent areas, then sample sizes in accuracy studies can be reduced. PROCEDURES: Traditional standard power calculations for demonstrating sensitivities of both 80% and 90% are shown. The argument is then described in general terms and demonstrated by an ongoing study of metastasized prostate cancer. RESULTS: An added value in accuracy of PET/CT in adjacent areas can outweigh a downsized target level of accuracy in the gold standard region, justifying smaller sample sizes. CONCLUSIONS: If PET/CT provides an accuracy benefit in adjacent regions, then sample sizes can be reduced.

  17. Slope Stability of Geosynthetic Clay Liner Test Plots

    Science.gov (United States)

    Fourteen full-scale field test plots containing five types of geosynthetic clay liners (GCLs) were constructed on 2H:1V and 3H:1V slopes for the purpose of assessing slope stability. The test plots were designed to simulate typical final cover systems for landfills. Slides occurr...

  18. (I Can’t Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research

    Science.gov (United States)

    2017-01-01

    I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: “random chance,” which is based on probability sampling, “minimal information,” which yields at least one new code per sampling step, and “maximum information,” which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario. PMID:28746358
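
    The "random chance" scenario is essentially a coupon-collector process, which makes it easy to simulate. Below is a minimal sketch under assumed parameters (a hypothetical population in which each source holds each code independently with fixed probability; not the paper's exact populations):

    ```python
    import random

    def sample_size_to_saturation(population, n_codes, rng=random):
        """Randomly sample sources without replacement until every code
        in the population has been observed at least once."""
        seen, n_drawn = set(), 0
        for source in rng.sample(population, len(population)):
            seen |= source
            n_drawn += 1
            if len(seen) == n_codes:
                return n_drawn
        return None  # the population itself never expresses all codes

    # Hypothetical population: 200 sources, 30 codes, each source holding
    # each code with probability 0.15.
    random.seed(1)
    n_codes = 30
    population = [{c for c in range(n_codes) if random.random() < 0.15}
                  for _ in range(200)]
    runs = [sample_size_to_saturation(population, n_codes) for _ in range(500)]
    runs = [r for r in runs if r is not None]
    print(sum(runs) / len(runs))   # mean sample size needed to reach saturation
    ```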

  19. Validation Of Intermediate Large Sample Analysis (With Sizes Up to 100 G) and Associated Facility Improvement

    International Nuclear Information System (INIS)

    Bode, P.; Koster-Ammerlaan, M.J.J.

    2018-01-01

    Pragmatic rather than physical correction factors for neutron and gamma-ray shielding were studied for samples of intermediate size, i.e. up to the 10-100 gram range. It was found that for most biological and geological materials, the neutron self-shielding is less than 5 % and the gamma-ray self-attenuation can easily be estimated. A trueness control material of 1 kg size was made based on use of left-overs of materials, used in laboratory intercomparisons. A design study for a large sample pool-side facility, handling plate-type volumes, had to be stopped because of a reduction in human resources, available for this CRP. The large sample NAA facilities were made available to guest scientists from Greece and Brazil. The laboratory for neutron activation analysis participated in the world’s first laboratory intercomparison utilizing large samples. (author)

  20. Chain Plot: A Tool for Exploiting Bivariate Temporal Structures

    OpenAIRE

    Taylor, CC; Zempeni, A

    2004-01-01

    In this paper we present a graphical tool useful for visualizing the cyclic behaviour of bivariate time series. We investigate its properties and link it to the asymmetry of the two variables concerned. We also suggest adding approximate confidence bounds to the points on the plot and investigate the effect of lagging on the chain plot. We conclude our paper with some standard Fourier analysis, relating and comparing it to the chain plot.

  1. 9 CFR 108.2 - Plot plans, blueprints, and legends required.

    Science.gov (United States)

    2010-01-01

    9 CFR, Animals and Animal Products, Requirements for Licensed Establishments, § 108.2 (Plot plans, blueprints, and legends required): Each applicant for an establishment license shall prepare a plot plan showing all buildings for each particular land...

  2. Application of Tryptophan Fluorescence Bandwidth-Maximum Plot in Analysis of Monoclonal Antibody Structure.

    Science.gov (United States)

    Huang, Cheng-Yen; Hsieh, Ming-Ching; Zhou, Qinwei

    2017-04-01

    Monoclonal antibodies have become the fastest growing protein therapeutics in recent years. The stability and heterogeneity pertaining to their physical and chemical structures remain a big challenge. Tryptophan fluorescence has been proven to be a versatile tool to monitor protein tertiary structure. By modeling the tryptophan fluorescence emission envelope with log-normal distribution curves, a quantitative measure can be applied to the routine characterization of overall monoclonal antibody tertiary structure. Furthermore, the log-normal deconvolution results can be presented as a two-dimensional plot with tryptophan emission bandwidth vs. emission maximum to enhance the resolution when comparing samples or as a function of applied perturbations. We demonstrate this by studying four different monoclonal antibodies, which show the distinction on the emission bandwidth-maximum plot despite their similarity in overall amino acid sequences and tertiary structures. This strategy is also used to demonstrate the tertiary structure comparability between different lots manufactured for one of the monoclonal antibodies (mAb2). In addition, in the unfolding transition studies of mAb2 as a function of guanidine hydrochloride concentration, the evolution of the tertiary structure can be clearly traced in the emission bandwidth-maximum plot.

  3. Effect of dislocation pile-up on size-dependent yield strength in finite single-crystal micro-samples

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Bo; Shibutani, Yoji, E-mail: sibutani@mech.eng.osaka-u.ac.jp [Department of Mechanical Engineering, Osaka University, Suita 565-0871 (Japan); Zhang, Xu [State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace, Xi'an Jiaotong University, Xi'an 710049 (China); School of Mechanics and Engineering Science, Zhengzhou University, Zhengzhou 450001 (China); Shang, Fulin [State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace, Xi'an Jiaotong University, Xi'an 710049 (China)

    2015-07-07

    Recent research has explained that the steeply increasing yield strength in metals depends on decreasing sample size. In this work, we derive a statistical physical model of the yield strength of finite single-crystal micro-pillars that depends on single-ended dislocation pile-up inside the micro-pillars. We show that this size effect can be explained almost completely by considering the stochastic lengths of the dislocation source and the dislocation pile-up length in the single-crystal micro-pillars. The Hall–Petch-type relation holds even in a microscale single-crystal, which is characterized by its dislocation source lengths. Our quantitative conclusions suggest that the number of dislocation sources and pile-ups are significant factors for the size effect. They also indicate that starvation of dislocation sources is another reason for the size effect. Moreover, we investigated the explicit relationship between the stacking fault energy and the dislocation “pile-up” effect inside the sample: materials with low stacking fault energy exhibit an obvious dislocation pile-up effect. Our proposed physical model predicts a sample strength that agrees well with experimental data, and our model can give a more precise prediction than the current single arm source model, especially for materials with low stacking fault energy.
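
    For context, the classical Hall–Petch form that this pile-up model echoes can be written as below; in the authors' microscale setting, the characteristic dislocation-source length plays the role usually taken by the grain size d. This is the textbook relation, not an equation quoted from the paper:

    ```latex
    \sigma_y = \sigma_0 + k\,d^{-1/2}
    ```

    Here σ_y is the yield strength, σ_0 a friction stress, and k the Hall–Petch coefficient.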

  4. User manual for two simple postscript output FORTRAN plotting routines

    Science.gov (United States)

    Nguyen, T. X.

    1991-01-01

    Graphics is one of the important tools in engineering analysis and design. However, plotting routines that generate output on high quality laser printers normally come in graphics packages, which tend to be expensive and system dependent. These factors become important for small computer systems or desktop computers, especially when only some form of a simple plotting routine is sufficient. With the Postscript language becoming popular, there are more and more Postscript laser printers now available. Simple, versatile, low cost plotting routines that can generate output on high quality laser printers are needed and standard FORTRAN language plotting routines using output in Postscript language seems logical. The purpose here is to explain two simple FORTRAN plotting routines that generate output in Postscript language.

  5. Maximum size-density relationships for mixed-hardwood forest stands in New England

    Science.gov (United States)

    Dale S. Solomon; Lianjun Zhang

    2000-01-01

    Maximum size-density relationships were investigated for two mixed-hardwood ecological types (sugar maple-ash and beech-red maple) in New England. Plots meeting type criteria and undergoing self-thinning were selected for each habitat. Using reduced major axis regression, no differences were found between the two ecological types. Pure species plots (the species basal...
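
    Reduced major axis regression, the estimator used here to fit the maximum size-density lines, has a simple closed form worth sketching. The Python below is illustrative only; the data values are invented, not taken from the New England plots:

    ```python
    import numpy as np

    def rma_fit(x, y):
        """Reduced major axis (geometric mean) regression of y on x.
        The slope is sign(r) * sd(y)/sd(x); the line passes through the means."""
        r = np.corrcoef(x, y)[0, 1]
        slope = np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)
        intercept = np.mean(y) - slope * np.mean(x)
        return slope, intercept

    # e.g. log(trees per ha) vs log(quadratic mean diameter) on self-thinning plots
    log_dq = np.log([10.0, 15.0, 20.0, 28.0, 35.0])
    log_n  = np.log([3300.0, 1700.0, 950.0, 500.0, 330.0])
    print(rma_fit(log_dq, log_n))   # slope and intercept of the size-density line
    ```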

  6. Intelligence Constraints on Terrorist Network Plots

    Science.gov (United States)

    Woo, Gordon

    Since 9/11, the western intelligence and law enforcement services have managed to interdict the great majority of planned attacks against their home countries. Network analysis shows that there are important intelligence constraints on the number and complexity of terrorist plots. If too many terrorists are involved in plots at a given time, a tipping point is reached whereby it becomes progressively easier for the dots to be joined, for the conspirators to be arrested, and for the aggregate evidence to secure convictions. Implications of this analysis are presented for the campaign to win hearts and minds.

  7. Multilocational evaluation of white yam genotypes using GGE bi-plot ...

    African Journals Online (AJOL)

    Five new white yam genotypes were evaluated in different locations of major yam producing areas; Umudike, Nsukka, Ubiaja, Abuja and Katsina-Ala, to test the performance and stability of these genotypes across the environments using GGE bi-plot software. The GGE bi-plot generated several graphic bi-plots which ...

  8. On-plot drinking water supplies and health: A systematic review.

    Science.gov (United States)

    Overbo, Alycia; Williams, Ashley R; Evans, Barbara; Hunter, Paul R; Bartram, Jamie

    2016-07-01

    Many studies have found that household access to water supplies near or within the household plot can reduce the probability of diarrhea, trachoma, and other water-related diseases, and it is generally accepted that on-plot water supplies produce health benefits for households. However, the body of research literature has not been analyzed to weigh the evidence supporting this. A systematic review was conducted to investigate the impacts of on-plot water supplies on diarrhea, trachoma, child growth, and water-related diseases, to further examine the relationship between household health and distance to water source and to assess whether on-plot water supplies generate health gains for households. Studies provide evidence that households with on-plot water supplies experience fewer diarrheal and helminth infections and greater child height. Findings suggest that water-washed (hygiene associated) diseases are more strongly impacted by on-plot water access than waterborne diseases. Few studies analyzed the effects of on-plot water access on quantity of domestic water used, hygiene behavior, and use of multiple water sources, and the lack of evidence for these relationships reveals an important gap in current literature. The review findings indicate that on-plot water access is a useful health indicator and benchmark for the progressive realization of the Sustainable Development Goal target of universal safe water access as well as the human right to safe water. Copyright © 2016 Elsevier GmbH. All rights reserved.

  9. Development of biochemical properties in anthropic soil (the study at Třinec–Jahodná plot

    Directory of Open Access Journals (Sweden)

    Karel Marosz

    2013-01-01

    Full Text Available This study examines the properties of the anthropic soils formed at the sludge bed "Třinec-Jahodná", which originated from long-term deposition of fly-ash and slag layers. The anthropic soil properties therefore derive both from the character of the layered substrate and from the management of the local land reclamation. The paper deals with the intensity of the biological and biochemical soil processes responsible for meeting plant nutrition demands, and with the temporal aspects of local soil development. A set of enzymatic and biological measurements was applied to soil bodies sampled throughout 2007–2008. Study plots inside the sludge bed and a control plot were sampled, and the properties of particular horizons were studied. The results showed that twenty years of soil development created suitable conditions for plant nutrition. This positive finding, nevertheless, is directly linked to the presence of trees and shrubs. The vegetation appears to be one of the crucial factors for the status of the site and the maintenance of soil productivity: it moderates temperature amplitudes, sensitivity to erosion, the redistribution of soil water, and the accumulation of humic compounds. The statistical analyses showed significantly different results on the study plots with a shorter development and a lower rate of vegetation cover.

  10. 6th International Symposium on Recurrence Plots

    CERN Document Server

    Jr, Jr; Ioana, Cornel; Marwan, Norbert

    2016-01-01

    The chapters in this book originate from the research work and contributions presented at the Sixth International Symposium on Recurrence Plots held in Grenoble, France in June 2015. Scientists from numerous disciplines gathered to exchange knowledge on recent applications and developments in recurrence plots and recurrence quantification analysis. This meeting was remarkable because of the obvious expansion of recurrence strategies (theory) and applications (practice) into ever-broadening fields of science. It discusses real-world systems from various fields, including mathematics, strange attractors, applied physics, physiology, medicine, environmental and earth sciences, as well as psychology and linguistics. Even readers not actively researching any of these particular systems will benefit from discovering how other scientists are finding practical non-linear solutions to specific problems. The book is of interest to an interdisciplinary audience of recurrence plot users and researchers interested in time...

  11. Size-Resolved Penetration Through High-Efficiency Filter Media Typically Used for Aerosol Sampling

    Czech Academy of Sciences Publication Activity Database

    Zíková, Naděžda; Ondráček, Jakub; Ždímal, Vladimír

    2015-01-01

    Vol. 49, No. 4 (2015), pp. 239-249 ISSN 0278-6826 R&D Projects: GA ČR(CZ) GBP503/12/G147 Institutional support: RVO:67985858 Keywords: filters * size-resolved penetration * atmospheric aerosol sampling Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 1.953, year: 2015

  12. A simple sample size formula for analysis of covariance in cluster randomized trials.

    NARCIS (Netherlands)

    Teerenstra, S.; Eldridge, S.; Graff, M.J.; Hoop, E. de; Borm, G.F.

    2012-01-01

    For cluster randomized trials with a continuous outcome, the sample size is often calculated as if an analysis of the outcomes at the end of the treatment period (follow-up scores) would be performed. However, often a baseline measurement of the outcome is available or feasible to obtain. An

  13. Analysis of small sample size studies using nonparametric bootstrap test with pooled resampling method.

    Science.gov (United States)

    Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A

    2017-06-30

    Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests to be too conservative and less powerful and thus preferred using parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has limitations with small sample sizes. We used a pooled method in a nonparametric bootstrap test that may overcome the problem related to small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling against the corresponding parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means as compared with the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test while maintaining the type I error probability for any conditions except for Cauchy and extreme variable lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also provided better performance than other alternatives. The nonparametric bootstrap test provided benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling for comparing paired or unpaired means and for validating one way analysis of variance test results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
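
    The pooled-resampling idea is straightforward to sketch: both bootstrap samples are drawn from the combined data, which enforces the null hypothesis of equal means. A bare-bones illustration of the general technique (not the authors' exact algorithm; the data values are invented) follows:

    ```python
    import numpy as np

    def pooled_bootstrap_t_test(x, y, n_boot=10000, seed=0):
        """Two-sided bootstrap test of equal means using pooled resampling:
        both bootstrap samples are drawn from the combined data."""
        rng = np.random.default_rng(seed)
        x, y = np.asarray(x, float), np.asarray(y, float)

        def t_stat(a, b):
            return (a.mean() - b.mean()) / np.sqrt(
                a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

        t_obs = t_stat(x, y)
        pooled = np.concatenate([x, y])
        t_null = np.empty(n_boot)
        for i in range(n_boot):
            xb = rng.choice(pooled, size=len(x), replace=True)
            yb = rng.choice(pooled, size=len(y), replace=True)
            t_null[i] = t_stat(xb, yb)
        return np.mean(np.abs(t_null) >= abs(t_obs))   # two-sided p-value

    print(pooled_bootstrap_t_test([4.1, 5.2, 6.3, 5.8, 4.9], [6.5, 7.1, 8.0, 7.4]))
    ```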

  14. arXiv Laura++ : a Dalitz plot fitter

    CERN Document Server

    Back, John; Harrison, Paul; Latham, Thomas; O'Hanlon, Daniel; Qian, Wenbin; del Amo Sanchez, Pablo; Craik, Daniel; Ilic, Jelena; Otalora Goicochea, Juan; Puccio, Eugenia; Silva Coutinho, Rafael; Whitehead, Mark

    The Dalitz plot analysis technique has become an increasingly important method in heavy flavour physics. The Laura++ fitter has been developed as a flexible tool that can be used for Dalitz plot analyses in different experimental environments. Explicitly designed for three-body decays of heavy-flavoured mesons to spinless final state particles, it is optimised in order to describe all possible resonant or nonresonant contributions, and to accommodate possible CP violation effects.

  15. Sample sizes to control error estimates in determining soil bulk density in California forest soils

    Science.gov (United States)

    Youzhi Han; Jianwei Zhang; Kim G. Mattson; Weidong Zhang; Thomas A. Weber

    2016-01-01

    Characterizing forest soil properties with high variability is challenging, sometimes requiring large numbers of soil samples. Soil bulk density is a standard variable needed along with element concentrations to calculate nutrient pools. This study aimed to determine the optimal sample size, the number of observations (n), for predicting the soil bulk density with a...

  16. Effects of tree-to-tree variations on sap flux-based transpiration estimates in a forested watershed

    Science.gov (United States)

    Kume, Tomonori; Tsuruta, Kenji; Komatsu, Hikaru; Kumagai, Tomo'omi; Higashi, Naoko; Shinohara, Yoshinori; Otsuki, Kyoichi

    2010-05-01

    To estimate forest stand-scale water use, we assessed how sample sizes affect confidence of stand-scale transpiration (E) estimates calculated from sap flux (Fd) and sapwood area (AS_tree) measurements of individual trees. In a Japanese cypress plantation, we measured Fd and AS_tree in all trees (n = 58) within a 20 × 20 m study plot, which was divided into four 10 × 10 subplots. We calculated E from stand AS_tree (AS_stand) and mean stand Fd (JS) values. Using Monte Carlo analyses, we examined potential errors associated with sample sizes in E, AS_stand, and JS by using the original AS_tree and Fd data sets. Consequently, we defined optimal sample sizes of 10 and 15 for AS_stand and JS estimates, respectively, in the 20 × 20 m plot. Sample sizes greater than the optimal sample sizes did not decrease potential errors. The optimal sample sizes for JS changed according to plot size (e.g., 10 × 10 m and 10 × 20 m), while the optimal sample sizes for AS_stand did not. As well, the optimal sample sizes for JS did not change in different vapor pressure deficit conditions. In terms of E estimates, these results suggest that the tree-to-tree variations in Fd vary among different plots, and that plot size to capture tree-to-tree variations in Fd is an important factor. This study also discusses planning balanced sampling designs to extrapolate stand-scale estimates to catchment-scale estimates.
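
    The Monte Carlo procedure described amounts to repeated sub-sampling of the measured trees. A compact sketch of that idea, with hypothetical sap flux data standing in for the study's 58 measured trees:

    ```python
    import numpy as np

    def error_vs_sample_size(fd, n_rep=2000, seed=0):
        """For each sample size n, repeatedly draw n trees without replacement
        and record the 95th-percentile relative error of the sub-sample mean
        sap flux against the full-census mean."""
        rng = np.random.default_rng(seed)
        true_mean = fd.mean()
        out = {}
        for n in range(2, len(fd)):
            means = [rng.choice(fd, size=n, replace=False).mean()
                     for _ in range(n_rep)]
            out[n] = np.percentile(np.abs(np.array(means) / true_mean - 1), 95)
        return out

    # Hypothetical sap flux values for 58 trees (lognormal is just an assumption)
    fd = np.random.default_rng(1).lognormal(mean=3.0, sigma=0.4, size=58)
    errors = error_vs_sample_size(fd)
    optimal = min(n for n, e in errors.items() if e < 0.10)  # first n within 10%
    print(optimal)
    ```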

  17. Size-segregated urban aerosol characterization by electron microscopy and dynamic light scattering and influence of sample preparation

    Science.gov (United States)

    Marvanová, Soňa; Kulich, Pavel; Skoupý, Radim; Hubatka, František; Ciganek, Miroslav; Bendl, Jan; Hovorka, Jan; Machala, Miroslav

    2018-04-01

    Size-segregated particulate matter (PM) is frequently used in chemical and toxicological studies. Nevertheless, toxicological in vitro studies working with the whole particles often lack a proper evaluation of the real PM size distribution and characterization of agglomeration under the experimental conditions. In this study, changes in particle size distributions during PM sample manipulation, as well as the semiquantitative elemental composition of single particles, were evaluated. Coarse (1-10 μm), upper accumulation (0.5-1 μm), lower accumulation (0.17-0.5 μm), and ultrafine (<0.17 μm) fractions were studied in water and in cell culture media. The PM suspension of the lower accumulation fraction in water agglomerated after freezing/thawing the sample, and the agglomerates were disrupted by subsequent sonication. The ultrafine fraction did not agglomerate after freezing/thawing the sample. Both lower accumulation and ultrafine fractions were stable in cell culture media with fetal bovine serum, while high agglomeration occurred in media without fetal bovine serum, as measured during 24 h.

  18. THERAPIE - THErmix-RAps-Plot-InterfacE. A graphic software for representation of THERMIX-2D results with the interactive plot program RAPS

    International Nuclear Information System (INIS)

    Duensing, P.; Jahn, W.; Rehm, W.

    1986-09-01

    The performance of safety analyses for gas-cooled high temperature reactor power plants requires efficient plot codes for the evaluation and representation of computer results. The report describes the coupling between the thermodynamic simulation code THERMIX and the graphic plot code RAPS via the interface program THERAPIE. In particular, the structure and handling of the interface program are explained, as well as the dialogue with the plot code. Further options of the colour graphic system are demonstrated for the representation of temperature distributions in components of HTR concepts (HTR-500). (orig.)

  19. Grain size dependent electrical studies on nanocrystalline SnO2

    International Nuclear Information System (INIS)

    Bose, A. Chandra; Thangadurai, P.; Ramasamy, S.

    2006-01-01

    Nanocrystalline tin oxide (n-SnO2) with different grain sizes was synthesized by a chemical precipitation method. Size variation was achieved by changing the hydrolysis processing time. Structural phases of the nanocrystalline SnO2 were identified by X-ray diffraction (XRD). The grain sizes of the prepared n-SnO2 were found to be in the range 5-20 nm, estimated using the Scherrer formula and confirmed by transmission electron microscopy (TEM) measurements. The electrical properties of nanocrystalline SnO2 were studied using impedance spectroscopy. The impedance spectroscopy results showed that, in the temperature range between 25 and 650 °C, the conductivity has contributions from two different mechanisms, attributed to different conduction mechanisms in the grain and the grain boundary regions; this is because of the different relaxation times available for the conduction species in those regions. However, for temperatures above 300 °C, there is not much difference between these two relaxation times. The Arrhenius plots gave the activation energies for the conduction process in all the samples.
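
    The two standard relations behind these estimates, the Scherrer formula for crystallite size and the Arrhenius form used to extract activation energies from conductivity data, are, in their textbook forms (not equations quoted from the paper):

    ```latex
    D = \frac{K\lambda}{\beta\cos\theta}, \qquad
    \sigma = \sigma_0 \exp\!\left(-\frac{E_a}{k_B T}\right)
    ```

    where D is the crystallite size, K ≈ 0.9 a shape factor, λ the X-ray wavelength, β the peak width (FWHM), θ the Bragg angle, and E_a the activation energy obtained from the slope of ln σ versus 1/T.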

  20. Clustering for high-dimension, low-sample size data using distance vectors

    OpenAIRE

    Terada, Yoshikazu

    2013-01-01

    In high-dimension, low-sample size (HDLSS) data, it is not always true that closeness of two objects reflects a hidden cluster structure. We point out the important fact that it is not the closeness, but the "values" of distance that contain information of the cluster structure in high-dimensional space. Based on this fact, we propose an efficient and simple clustering approach, called distance vector clustering, for HDLSS data. Under the assumptions given in the work of Hall et al. (2005), w...

  1. Iron status determination in pregnancy using the Thomas plot.

    Science.gov (United States)

    Weyers, R; Coetzee, M J; Nel, M

    2016-04-01

    Physiological changes during pregnancy affect routine tests for iron deficiency. The reticulocyte haemoglobin equivalent (RET-He) and serum-soluble transferrin receptor (sTfR) assay are newer diagnostic parameters for the detection of iron deficiency, combined in the Thomas diagnostic plot. We used this plot to determine the iron status of pregnant women presenting for their first visit to an antenatal clinic in Bloemfontein, South Africa. Routine laboratory tests (serum ferritin, full blood count and C-reactive protein) and RET-He and sTfR were performed. The iron status was determined using the Thomas plot. For this study, 103 pregnant women were recruited. According to the Thomas plot, 72.8% of the participants had normal iron stores and erythropoiesis. Iron-deficient erythropoiesis was detected in 12.6%. A third of participants were anaemic. Serum ferritin showed excellent sensitivity but poor specificity for detecting depleted iron stores. HIV status had no influence on the iron status of the participants. Our findings reiterate that causes other than iron deficiency should be considered in anaemic individuals. When compared with the Thomas plot, a low serum ferritin is a sensitive but nonspecific indicator of iron deficiency. The Thomas plot may provide useful information to identify pregnant individuals in whom haematologic parameters indicate limited iron availability for erythropoiesis. © 2015 John Wiley & Sons Ltd.

  2. Myth Structure and Media Fiction Plot: An Exploration.

    Science.gov (United States)

    Harless, James D.

    Based on the general research of Joseph Campbell in adventure plots from mythology, the author explores the simplified monomyth plots currently in frequent use in mass media programing. The close relationship of media fiction to mythic stories is established through the analysis of more than 25 stories resulting from media broadcasting. The media…

  3. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    Science.gov (United States)

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…

  4. Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient

    Science.gov (United States)

    Krishnamoorthy, K.; Xia, Yanping

    2008-01-01

    The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…

  5. Type-II generalized family-wise error rate formulas with application to sample size determination.

    Science.gov (United States)

    Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie

    2016-07-20

    Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on the CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. Comparison with a Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.

  6. Sample Size Calculation: Inaccurate A Priori Assumptions for Nuisance Parameters Can Greatly Affect the Power of a Randomized Controlled Trial.

    Directory of Open Access Journals (Sweden)

    Elsa Tavernier

    Full Text Available We aimed to examine the extent to which inaccurate assumptions for nuisance parameters used to calculate sample size can affect the power of a randomized controlled trial (RCT). In a simulation study, we separately considered an RCT with continuous, dichotomous or time-to-event outcomes, with associated nuisance parameters of standard deviation, success rate in the control group and survival rate in the control group at some time point, respectively. For each type of outcome, we calculated a required sample size N for a hypothesized treatment effect, an assumed nuisance parameter and a nominal power of 80%. We then assumed a nuisance parameter associated with a relative error at the design stage. For each type of outcome, we randomly drew 10,000 relative errors of the associated nuisance parameter (from empirical distributions derived from a previously published review). Then, retro-fitting the sample size formula, we derived, for the pre-calculated sample size N, the real power of the RCT, taking into account the relative error for the nuisance parameter. In total, 23%, 0% and 18% of RCTs with continuous, binary and time-to-event outcomes, respectively, were underpowered (i.e., the real power fell below the nominal 80%) or overpowered (i.e., the real power exceeded 90%). Even with proper calculation of sample size, a substantial number of trials are underpowered or overpowered because of imprecise knowledge of nuisance parameters. Such findings raise questions about how sample size for RCTs should be determined.

  7. An imputation/copula-based stochastic individual tree growth model for mixed species Acadian forests: a case study using the Nova Scotia permanent sample plot network

    Directory of Open Access Journals (Sweden)

    John A. KershawJr

    2017-09-01

    Full Text Available Background A novel approach to modelling individual tree growth dynamics is proposed. The approach combines multiple imputation and copula sampling to produce a stochastic individual tree growth and yield projection system. Methods The Nova Scotia, Canada permanent sample plot network is used as a case study to develop and test the modelling approach. Predictions from this model are compared to predictions from the Acadian variant of the Forest Vegetation Simulator, a widely used statistical individual tree growth and yield model. Results Diameter and height growth rates were predicted with error rates consistent with those produced using statistical models. Mortality and ingrowth error rates were higher than those observed for diameter and height, but also were within the bounds produced by traditional approaches for predicting these rates. Ingrowth species composition was very poorly predicted. The model was capable of reproducing a wide range of stand dynamic trajectories and in some cases reproduced trajectories that the statistical model was incapable of reproducing. Conclusions The model has potential to be used as a benchmarking tool for evaluating statistical and process models and may provide a mechanism to separate signal from noise and improve our ability to analyze and learn from large regional datasets that often have underlying flaws in sample design.
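
    Copula sampling of the kind described, drawing correlated growth increments while preserving each variable's empirical marginal distribution, can be sketched with a Gaussian copula. This is a generic illustration under that assumption, not the authors' implementation; the data here are invented:

    ```python
    import numpy as np
    from scipy.stats import norm, rankdata

    def gaussian_copula_sample(data, n, seed=0):
        """Draw n synthetic rows preserving each column's empirical marginal
        and the rank correlation structure of `data` (rows = trees,
        columns = e.g. diameter growth and height growth)."""
        rng = np.random.default_rng(seed)
        # rank-transform each column to normal scores, estimate correlation
        u = (np.apply_along_axis(rankdata, 0, data) - 0.5) / len(data)
        corr = np.corrcoef(norm.ppf(u), rowvar=False)
        # sample correlated normals, map back through empirical quantiles
        z = rng.multivariate_normal(np.zeros(data.shape[1]), corr, size=n)
        u_new = norm.cdf(z)
        cols = [np.quantile(data[:, j], u_new[:, j]) for j in range(data.shape[1])]
        return np.column_stack(cols)

    obs = np.random.default_rng(2).lognormal(size=(200, 2))  # hypothetical growth data
    print(gaussian_copula_sample(obs, 5))
    ```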

  8. Variability of carotid artery measurements on 3-Tesla MRI and its impact on sample size calculation for clinical research.

    Science.gov (United States)

    Syed, Mushabbar A; Oshinski, John N; Kitchen, Charles; Ali, Arshad; Charnigo, Richard J; Quyyumi, Arshed A

    2009-08-01

    Carotid MRI measurements are increasingly being employed in research studies for atherosclerosis imaging. The majority of carotid imaging studies use 1.5 T MRI. Our objective was to investigate intra-observer and inter-observer variability in carotid measurements using high resolution 3 T MRI. We performed 3 T carotid MRI on 10 patients (age 56 ± 8 years, 7 male) with atherosclerosis risk factors and ultrasound intima-media thickness ≥0.6 mm. A total of 20 transverse images of both right and left carotid arteries were acquired using a T2-weighted black-blood sequence. The lumen and outer wall of the common carotid and internal carotid arteries were manually traced; vessel wall area, vessel wall volume, and average wall thickness measurements were then assessed for intra-observer and inter-observer variability. Pearson and intraclass correlations were used in these assessments, along with Bland-Altman plots. For inter-observer variability, Pearson correlations ranged from 0.936 to 0.996 and intraclass correlations from 0.927 to 0.991. For intra-observer variability, Pearson correlations ranged from 0.934 to 0.954 and intraclass correlations from 0.831 to 0.948. Calculations showed that inter-observer variability and other sources of error would inflate sample size requirements for a clinical trial by no more than 7.9%, indicating that 3 T MRI is nearly optimal in this respect. In patients with subclinical atherosclerosis, 3 T carotid MRI measurements are highly reproducible and have important implications for clinical trial design.
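
    Agreement statistics of this kind are commonly summarized with a Pearson correlation plus Bland-Altman limits of agreement. A small illustrative computation with invented wall-area values (not the study data):

    ```python
    import numpy as np

    def bland_altman(obs1, obs2):
        """Mean bias and 95% limits of agreement between two observers."""
        obs1, obs2 = np.asarray(obs1, float), np.asarray(obs2, float)
        diff = obs1 - obs2
        bias, sd = diff.mean(), diff.std(ddof=1)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    wall_area_obs1 = [31.2, 28.4, 40.1, 35.7, 29.9]  # hypothetical areas, mm^2
    wall_area_obs2 = [30.8, 29.1, 39.4, 36.2, 30.5]
    r = np.corrcoef(wall_area_obs1, wall_area_obs2)[0, 1]
    print(r, bland_altman(wall_area_obs1, wall_area_obs2))
    ```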

  9. Abundance and Size Distribution of Cavity Trees in Second-Growth and Old-Growth Central Hardwood Forests

    Science.gov (United States)

    Zhaofei Fan; Stephen R. Shifley; Martin A. Spetich; Frank R. Thompson III; David R. Larsen

    2005-01-01

    In central hardwood forests, mean cavity-tree abundance increases with increasing stand-size class (seedling/sapling, pole, sawtimber, old-growth). However, within a size class, the number of cavity trees is highly variable among 0.1-ha inventory plots. Plots in young stands are most likely to have no cavity trees, but some plots may have more than 50 cavity trees/ha....

  11. Sample Size Bounding and Context Ranking as Approaches to the Human Error Quantification Problem

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B

    2004-03-01

    The paper describes a technique denoted as Sub-Sample-Size Bounding (SSSB), which is useable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasised in the presentation of this technique. Exemplified by a sample of 180 abnormal event sequences, the manner in which SSSB can provide viable input for the quantification of errors of commission (EOCs) are outlined. (author)

  13. Application of SWAT99.2 to sensitivity analysis of water balance components in unique plots in a hilly region

    Directory of Open Access Journals (Sweden)

    Jun-feng Dai

    2017-07-01

    Full Text Available Although many sensitivity analyses using the soil and water assessment tool (SWAT) in complex watersheds have been conducted, little attention has been paid to the application potential of the model in unique plots. In addition, sensitivity analysis of percolation and evapotranspiration with SWAT has seldom been undertaken. In this study, SWAT99.2 was calibrated to simulate water balance components for unique plots in Southern China from 2000 to 2001, including surface runoff, percolation, and evapotranspiration. Twenty-one parameters classified into four categories (meteorological conditions, topographical characteristics, soil properties, and vegetation attributes) were used for sensitivity analysis through one-at-a-time (OAT) sampling to identify the factor that contributed most to the variance in water balance components. The results differed between plots, with parameter sensitivity indices and ranks varying for different water balance components. Water balance components in the broad-leaved forest and natural grass plots were most sensitive to meteorological conditions, less sensitive to vegetation attributes and soil properties, and least sensitive to topographical characteristics. Compared to those in the natural grass plot, water balance components in the broad-leaved forest plot demonstrated higher sensitivity to the maximum stomatal conductance (GSI) and the maximum leaf area index (BLAI).
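
    One-at-a-time sampling itself is easy to sketch: perturb each parameter around its baseline while holding the others fixed, and rank parameters by a dimensionless sensitivity index. In the Python below, `model` is a toy stand-in for a SWAT run and all parameter names and values are hypothetical:

    ```python
    def oat_sensitivity(model, baseline, rel_step=0.1):
        """Relative sensitivity index S = (dY/Y) / (dp/p) for each parameter,
        computed by a central one-at-a-time perturbation."""
        y0 = model(baseline)
        indices = {}
        for name, p in baseline.items():
            hi = dict(baseline, **{name: p * (1 + rel_step)})
            lo = dict(baseline, **{name: p * (1 - rel_step)})
            indices[name] = ((model(hi) - model(lo)) / y0) / (2 * rel_step)
        return dict(sorted(indices.items(), key=lambda kv: -abs(kv[1])))

    # Toy stand-in for a water-balance output (hypothetical response surface)
    model = lambda p: p["GSI"] ** 0.8 * p["BLAI"] ** 0.5 / (1 + p["slope"])
    print(oat_sensitivity(model, {"GSI": 0.007, "BLAI": 4.0, "slope": 0.2}))
    ```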

  14. Dalitz plot analysis of B_s^0 → D̄^0 K^- π^+ decays

    NARCIS (Netherlands)

    Aaij, R.; Adeva, B.; Adinolfi, M.; Affolder, A.; Ajaltouni, Z.; Akar, S.; Albrecht, J.; Alessio, F.; Alexander, M.; Ali, S.; Alkhazov, G.; Alvarez Cartelle, P.; Alves, A. A.; Amato, S.; Amerio, S.; Amhis, Y.; An, L.; Anderlini, L.; Anderson, J.; Andreassen, R.; Andreotti, M.; Andrews, J. E.; Appleby, R. B.; Gutierrez, O. Aquines; Archilli, F.; Artamonov, A.; Artuso, M.; Aslanides, E.; Auriemma, G.; Baalouch, M.; Bachmann, S.; Back, J. J.; Badalov, A.; Baesso, C.; Baldini, W.; Barlow, R. J.; Barschel, C.; Barsuk, S.; Barter, W.; Batozskaya, V.; Battista, V.; Bay, A.; Beaucourt, L.; Beddow, J.; Bedeschi, F.; Bediaga, I.; Belogurov, S.; Belous, K.; Onderwater, G.; Pellegrino, A.

    2014-01-01

    The resonant substructure of B_s^0 → D̄^0 K^- π^+ decays is studied with the Dalitz plot analysis technique. The study is based on a data sample corresponding to an integrated luminosity of 3.0 fb^-1 of pp collision data recorded by LHCb. A structure at m(D̄^0 K^-)

  15. Efficient inference of population size histories and locus-specific mutation rates from large-sample genomic variation data.

    Science.gov (United States)

    Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S

    2015-02-01

    With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.

  16. The 2002 RPA Plot Summary database users manual

    Science.gov (United States)

    Patrick D. Miles; John S. Vissage; W. Brad Smith

    2004-01-01

    Describes the structure of the RPA 2002 Plot Summary database and provides information on generating estimates of forest statistics from these data. The RPA 2002 Plot Summary database provides a consistent framework for storing forest inventory data across all ownerships throughout the entire United States. The data represent the best available data as of October 2001....

  17. Online plot services for paleomagnetism and rock magnetism

    Science.gov (United States)

    Hatakeyama, T.

    2017-12-01

    In paleomagnetism and rock magnetism, many specialized plot types are used for measurement data. Researchers in paleomagnetism therefore often use not only general-purpose plotting programs such as Microsoft Excel but also single-purpose tools. A large benefit of the latter is that they produce publication-quality figures for a specific kind of data. However, such programs require a specific environment to run: a particular type of hardware and platform, a particular operating system and version, libraries for execution, and so on. This makes it difficult to share results and graphics among collaborators who use different environments on their PCs. One of the best solutions is therefore a program that runs in a widely available environment, and the most widely available environment is the web: almost all current operating systems ship with a web browser, and everyone uses one regularly. We now provide a web-based service for easily plotting paleomagnetic results. We developed original programs with a command-line interface (non-GUI) and prepared web pages for entering the measured data and options, together with a wrapper script that passes the entered values to the programs. The analyzed values and plotted graphs are shown on an HTML page and can be downloaded. Our plot services are available at http://mage-p.org/mageplot/. In this talk, we introduce our program and service and discuss the philosophy and efficiency of these services.

  18. PRP: a FORTRAN IV interactive plotting program

    Science.gov (United States)

    Andrew, A. S.; Linde, J.

    A computer program, PRP, has been designed to plot any arithmetic combination selected from a set of major and trace element data on a y-x graph. y and x are defined and entered as a program string (y, x) which is interpreted sequentially. Operators (+, -, *, /, unary minus, square root, log10, ln, antilog10, exponential, integer, absolute value, and parentheses) and integer or real numbers may be included. Axis lengths and scales are determined by the user. Five different plotting symbols are available.

  19. Analysing spatially extended high-dimensional dynamics by recurrence plots

    Energy Technology Data Exchange (ETDEWEB)

    Marwan, Norbert, E-mail: marwan@pik-potsdam.de [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Kurths, Jürgen [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Humboldt Universität zu Berlin, Institut für Physik (Germany); Nizhny Novgorod State University, Department of Control Theory, Nizhny Novgorod (Russian Federation); Foerster, Saskia [GFZ German Research Centre for Geosciences, Section 1.4 Remote Sensing, Telegrafenberg, 14473 Potsdam (Germany)

    2015-05-08

    Recurrence plot based measures of complexity are powerful tools for characterizing complex dynamics. In this letter we show the potential of selected recurrence plot measures for the investigation of even high-dimensional dynamics. We apply this method to spatially extended chaos, such as that derived from the Lorenz96 model, and show that the recurrence plot based measures can qualitatively characterize typical dynamical properties such as chaotic or periodic dynamics. Moreover, we demonstrate its power by analysing satellite image time series of vegetation cover with contrasting dynamics as a spatially extended and potentially high-dimensional example from the real world. - Highlights: • We use recurrence plots for analysing spatially extended dynamics. • We investigate the high-dimensional chaos of the Lorenz96 model. • The approach distinguishes different spatio-temporal dynamics. • We use the method for studying vegetation cover time series.

  20. Effects of growth rate, size, and light availability on tree survival across life stages: a demographic analysis accounting for missing values and small sample sizes.

    Science.gov (United States)

    Moustakas, Aristides; Evans, Matthew R

    2015-02-28

    Plant survival is a key factor in forest dynamics, and survival probabilities often vary across life stages. Studies specifically aimed at assessing tree survival are unusual, so data initially designed for other purposes often need to be used; such data are more likely to contain errors than data collected for this specific purpose. We investigate the survival rates of ten tree species in a dataset designed to monitor growth rates. As some individuals were not included in the census at some time points, we use capture-mark-recapture methods both to account for missing individuals and to estimate relocation probabilities. Growth rates, size, and light availability were included as covariates in the model predicting survival rates. The study demonstrates that tree mortality is best described as constant between years, size-dependent at early life stages, and size-independent at later life stages for most UK hardwood species. We have demonstrated that even with a twenty-year dataset it is possible to discern variability both between individuals and between species. Our work illustrates the potential utility of the method applied here for calculating plant population dynamics parameters in time-replicated datasets with small sample sizes and missing individuals, without any loss of sample size and while including explanatory covariates.

  1. Development of a sampling strategy and sample size calculation to estimate the distribution of mammographic breast density in Korean women.

    Science.gov (United States)

    Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

    2012-01-01

    Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
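
    The essence of the evaluated design, proportional allocation of a fixed total sample across three strata, repeated to check the stability of the estimate, can be sketched as follows. The stratum sizes and density prevalences here are invented for illustration, and the repetition count is reduced for brevity:

    ```python
    import numpy as np

    def stratified_estimate(pop_by_stratum, n_total, rng):
        """Proportionally allocate n_total across strata, sample each stratum
        without replacement, and return the weighted estimate of the
        dense-breast proportion."""
        N = sum(len(p) for p in pop_by_stratum.values())
        est = 0.0
        for pop in pop_by_stratum.values():
            n_h = round(n_total * len(pop) / N)
            draw = rng.choice(pop, size=n_h, replace=False)
            est += (len(pop) / N) * draw.mean()
        return est

    rng = np.random.default_rng(0)
    # Hypothetical 0/1 indicators of dense breasts in three strata
    pop = {"metropolitan": (rng.random(600_000) < 0.55).astype(int),
           "urban":        (rng.random(500_000) < 0.50).astype(int),
           "rural":        (rng.random(240_000) < 0.45).astype(int)}
    estimates = [stratified_estimate(pop, 4000, rng) for _ in range(100)]
    print(np.mean(estimates), np.std(estimates))
    ```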

  2. The 3D model: explaining densification and deformation mechanisms by using 3D parameter plots.

    Science.gov (United States)

    Picker, Katharina M

    2004-04-01

    The aim of the study was to analyze very differently deforming materials using 3D parameter plots and consequently to gain deeper insights into the densification and deformation process described with the 3D model in order to define an ideal tableting excipient. The excipients used were dicalcium phosphate dihydrate (DCPD), sodium chloride (NaCl), microcrystalline cellulose (MCC), xylitol, mannitol, alpha-lactose monohydrate, maltose, hydroxypropyl methylcellulose (HPMC), sodium carboxymethylcellulose (NaCMC), cellulose acetate (CAC), maize starch, potato starch, pregelatinized starch, and maltodextrine. All of the materials were tableted to graded maximum relative densities (rhorel, max) using an eccentric tableting machine. The data which resulted, namely force, displacement, and time, were analyzed by the application of 3D modeling. Different particle size fractions of DCPD, CAC, and MCC were analyzed in addition. Brittle deforming materials such as DCPD exhibited a completely different 3D parameter plot, with low time plasticity, d, and low pressure plasticity, e, and a strong decrease in omega values when densification increased, in contrast to the plastically deforming MCC, which had much higher d, e, and omega values. e and omega values changed only slightly when densification increased for MCC. NaCl showed less of a decrease in omega values than DCPD did, and the d and e values were between those of MCC and DCPD. The sugar alcohols, xylitol and mannitol, behaved in a similar fashion to sodium chloride. This is also valid for the crystalline sugars, alpha-lactose monohydrate, and maltose. However, the sugars are more brittle than the sugar alcohols. The cellulose derivatives, HPMC, NaCMC, and CAC, are as plastic as MCC, however, their elasticity depends on substitution indicated by lower (more elastic) or higher (less elastic) omega values. The native starches, maize starch and potato starch, are very elastic, and pregelatinized starch and maltodextrine are

  3. Sample size calculations based on a difference in medians for positively skewed outcomes in health care studies

    Directory of Open Access Journals (Sweden)

    Aidan G. O’Keeffe

    2017-12-01

    Background: In healthcare research, outcomes with skewed probability distributions are common. Sample size calculations for such outcomes are typically based on estimates on a transformed scale (e.g. log), which may sometimes be difficult to obtain. In contrast, estimates of the median and variance on the untransformed scale are generally easier to pre-specify. The aim of this paper is to describe how to calculate a sample size for a two-group comparison based on median and untransformed variance estimates for log-normal outcome data. Methods: A log-normal distribution for the outcome data is assumed, and a sample size calculation approach for a two-sample t-test that compares log-transformed outcome data is demonstrated, where the change of interest is specified as a difference in median values on the untransformed scale. A simulation study is used to compare the method with a non-parametric alternative (the Mann-Whitney U test) in a variety of scenarios, and the method is applied to a real example in neurosurgery. Results: The method attained nominal power in simulation studies and compared favourably with the Mann-Whitney U test and with a two-sample t-test of untransformed outcomes. In addition, the method can be adjusted for use in some situations where the outcome distribution is not strictly log-normal. Conclusions: We recommend this sample size calculation approach for outcome data that are expected to be positively skewed and where a two-group comparison on a log-transformed scale is planned. An advantage of this method over the usual calculations based on estimates on the log-transformed scale is that it allows clinical efficacy to be specified as a difference in medians and requires a variance estimate on the untransformed scale. Such estimates are often easier to obtain and more interpretable than those for log-transformed outcomes.
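    A minimal sketch of the general idea, using standard log-normal identities rather than the authors' exact derivation: raw-scale medians and a raw-scale variance are converted to log-scale parameters, and the usual two-sample normal-approximation sample size formula is applied on the log scale. A common log-scale variance is assumed for both groups.

        from math import log, sqrt
        from scipy.stats import norm

        def lognormal_sigma2(median, raw_var):
            """Solve V = m**2 * exp(s2) * (exp(s2) - 1) for the log-scale variance s2."""
            x = (1 + sqrt(1 + 4 * raw_var / median**2)) / 2   # x = exp(s2)
            return log(x)

        def n_per_group(median1, median2, raw_var, alpha=0.05, power=0.9):
            delta = log(median2) - log(median1)      # mean difference on the log scale
            s2 = lognormal_sigma2(median1, raw_var)  # shared log-scale variance
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            return 2 * s2 * z**2 / delta**2

        # e.g. medians of 10 vs 14 and an untransformed variance of 60:
        print(round(n_per_group(10, 14, 60)))        # about 65 per group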

  4. Laser-induced breakdown spectroscopy for detection of heavy metals in environmental samples

    Science.gov (United States)

    Wisbrun, Richard W.; Schechter, Israel; Niessner, Reinhard; Schroeder, Hartmut

    1993-03-01

    The application of LIBS technology as a sensor for heavy metals in solid environmental samples has been studied. This specific application introduces some new problems into the LIBS analysis. Some are related to the particular distribution of contaminants in grained samples; others are related to the mechanical properties of the samples and to general matrix effects, such as the water and organic fiber content of the sample. An attempt has been made to optimize the experimental set-up over the various parameters involved. Understanding these factors has enabled the technique to be adapted to the substrates of interest. The special importance of grain size and of laser-induced aerosol production is pointed out. Calibration plots for the analysis of heavy metals in diverse sand and soil samples have been constructed. The detection limits are shown to be generally below the concentration limits set by recent regulations.
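    For illustration, the sketch below shows how a detection limit is commonly read off a calibration plot of this kind, using the familiar 3-sigma criterion (LOD = 3 x blank standard deviation / calibration slope). All intensities and concentrations are invented placeholders, not values from the study.

        import numpy as np

        conc = np.array([0.0, 50.0, 100.0, 200.0, 400.0])        # mg/kg in standards
        intensity = np.array([12.0, 58.0, 101.0, 195.0, 389.0])  # line intensity (a.u.)

        slope, intercept = np.polyfit(conc, intensity, 1)  # linear calibration fit
        s_blank = 4.0                    # std dev of repeated blank measurements
        lod = 3 * s_blank / slope        # 3-sigma detection limit
        print(f"slope = {slope:.3f} a.u./(mg/kg), LOD = {lod:.1f} mg/kg")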

  5. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range.

    Science.gov (United States)

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-12-19

    In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of trials, however, report their results as the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is often less than satisfactory in practice. Inspired by this, we propose a new estimation method that incorporates the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other settings in which the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios. For the first two scenarios, our method greatly improves on existing methods, providing a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and offer some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods for estimating the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spreadsheet including all formulas) that serves as comprehensive guidance for performing meta-analysis in different
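    A sketch of two of the estimators in this family, following the formulas given in Wan et al. (2014) as best as can be reproduced here; the paper's own summary spreadsheet remains the authoritative reference. Scenario 1 assumes {min, median, max, n} is reported; scenario 2 assumes {first quartile, median, third quartile, n}.

        from scipy.stats import norm

        def mean_sd_from_range(a, m, b, n):
            """Scenario 1: min (a), median (m), max (b), sample size n."""
            mean = (a + 2 * m + b) / 4
            sd = (b - a) / (2 * norm.ppf((n - 0.375) / (n + 0.25)))
            return mean, sd

        def mean_sd_from_iqr(q1, m, q3, n):
            """Scenario 2: quartiles q1 and q3, median m, sample size n."""
            mean = (q1 + m + q3) / 3
            sd = (q3 - q1) / (2 * norm.ppf((0.75 * n - 0.125) / (n + 0.25)))
            return mean, sd

        print(mean_sd_from_range(2.0, 5.0, 9.0, 50))   # mean 5.25 plus SD estimate
        print(mean_sd_from_iqr(3.5, 5.0, 6.8, 50))     # mean 5.1 plus SD estimate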

  6. The Roles of Macrobenthic Mollusks as Bioindicator in Response to Environmental Disturbance : Cumulative k-dominance curves and bubble plots ordination approaches

    Science.gov (United States)

    Putro, Sapto P.; Muhammad, Fuad; Aininnur, Amalia; Widowati; Suhartana

    2017-02-01

    Floating net cages are one of the aquaculture practices operated in Indonesian coastal areas, and their use has grown rapidly over the last two decades. This study aimed to assess the role of macrobenthic mollusks as bioindicators of environmental disturbance caused by fish farming activities, and to compare samples within the locations using graphical methods. The research was done at the floating net cage fish farming area in Awerange Gulf, South Sulawesi, Indonesia, at coordinates between 79°0500‧-79°1500‧ S and 953°1500‧-953°2000‧ E, in a polyculture area and a reference area located 1 km away from the farming area. Sampling was conducted between October 2014 and June 2015. Sediment samples were taken from the two locations at two sampling times, with three replicates, using a Van Veen grab for biotic and abiotic assessment. Mollusks, the biotic parameter, were sieved on a 1 mm mesh, fixed in 4% formalin solution, and preserved in 70% ethanol solution. Fifteen species of macrobenthic mollusks were found, representing 14 families and 2 classes (gastropods and bivalves). In the cumulative k-dominance analysis projected for each station, the curves for stations K3T1 (reference area, first sampling time) and KJAB P3T2 (polyculture area, second sampling time) lie below the other curves, indicating the highest evenness and diversity among the stations, whereas the curves for stations K2T1 (reference area, first sampling time) and K3T2 (polyculture area, second sampling time) lie on top, indicating the lowest evenness and diversity. In the bubble-plot NMDS ordination, the four dominant taxa did not clearly drive or shift the ordination positions of the stations, with the exception of T. agilis. However, two species showed involvement in driving/shifting the ordination position of two reference-area stations from the first sampling time by Rynoclavis sordidula
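    For readers unfamiliar with the graphical method used above, the sketch below builds cumulative k-dominance curves from two invented abundance vectors: species are ranked by decreasing abundance and cumulative percentage dominance is plotted against log rank, so curves lying lower indicate higher evenness and diversity.

        import numpy as np
        import matplotlib.pyplot as plt

        stations = {
            "even community":      [12, 10, 9, 8, 8, 7, 6, 5, 4, 3],
            "dominated community": [55, 9, 4, 3, 2, 2, 1, 1, 1, 1],
        }
        for label, counts in stations.items():
            abundance = np.sort(np.asarray(counts, float))[::-1]  # rank species
            rank = np.arange(1, abundance.size + 1)
            plt.plot(rank, 100 * np.cumsum(abundance) / abundance.sum(),
                     marker="o", label=label)
        plt.xscale("log")
        plt.xlabel("species rank (log scale)")
        plt.ylabel("cumulative dominance (%)")
        plt.legend()
        plt.show()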

  7. Surface radiological investigations at environmental research area 11, 137Cs- and 60Co-contaminated plots at Oak Ridge National Laboratory, Oak Ridge, Tennessee

    International Nuclear Information System (INIS)

    Uziel, M.S.; Tiner, P.F.; Williams, J.K.

    1993-02-01

    A surface radiological investigation at the 137Cs- and 60Co-contaminated forest area (Chestnut Ridge east and west plots) was conducted from January 1992 through August 1992. Results of the survey revealed numerous spots and small areas of surface contamination that followed the original placement of feeders used for 60Co- and 137Cs-labeled seeds in a 1969-1970 study. Surface gamma exposure rates reached 380 μR/h at the east plot and 400 μR/h at the west plot, but approximately one-half and one-third, respectively, of the identified anomalies did not exceed 39 μR/h. Results of soil sample analyses demonstrated that 137Cs and 60Co were responsible for the elevated radiation levels. Radionuclides were found below the surface at soil sample locations, in some cases at depths below 18 in. The same pattern of subsurface contamination may be present at other elevated surface spots at both plots. These survey results show that current radiological conditions at the site remain an environmental problem. Recommendations for corrective actions are included.

  8. Selection of Plot Remeasurement in an Annual Inventory

    Science.gov (United States)

    Mark H. Hansen; Hans T. Schreuder; Dave Heinzen

    2000-01-01

    A plot selection approach is proposed based on experience from the Annual Forest Inventory System (AFIS) in the Aspen-Birch Unit of northeastern Minnesota. The emphasis is on a mixture of strategies. Although the Agricultural Act of 1998 requires that a fixed 20 percent of plots be measured each year in each state, sooner or later we will need to vary the scheme to...

  9. Linear titration plot for the determination of boron in the primary coolant of a pressurized water reactor

    International Nuclear Information System (INIS)

    Midgley, D.; Gatford, C.

    1992-01-01

    A linear titration plot method has been devised for the determination of boron, as boric acid, in partly neutralized solutions such as occur in the primary coolant of pressurized water reactors. The total boron and the alkali in the sample are determined simultaneously. Although it is not essential to add mannitol in this method, the method is more accurate when the solution is saturated with mannitol. Comparisons are made with other modes of titration: Gran plots, first- and second-differential potentiometric titrations, and indicator titrations. None of these gives the total boron directly in partly neutralized solutions. (author)

  10. Manual transportation within the plot and physical damages to bananas

    Directory of Open Access Journals (Sweden)

    Magalhães Mário Jorge Maia de

    2004-01-01

    The manual transportation of banana bunches within plots causes physical damage to the fruit, compromising quality. To assess the influence of the distance that banana bunches travel on the shoulders of harvesters within the plot on the incidence of physical damage to the peel of fruits of the Nanicão cultivar, two experiments were carried out in the Vale do Ribeira region (SP), in sites with slope < 1%. Each experiment divided the plot into distance bands, two of which were included in this study: one located far from the collection roads (the 30-50 m and 80-100 m bands) and another in an intermediate position (the 70-80 m and 130-150 m bands). For each distance band, six banana bunches bearing fruits of 36 mm gauge were randomly sampled. Four banana hands were cut from the middle region of each bunch and ten fruits were assessed per hand, totaling 240 fruits per treatment. Bunches were harvested at the same maturity, and those serving as controls were not transported. A total of 1,440 fruits were assessed in the two experiments. Physical damage on the fruit surface was graded on a scale with six divisions: 0-0.25 cm²; 0.25-0.5 cm²; 0.5-1.0 cm²; 1.0-1.5 cm²; 1.5-2.0 cm²; 2.0-2.5 cm². Bunches transported on the shoulders of harvesters over distances greater than 70 m showed an increased damaged area (P < 0.01). Most damage covered areas of up to 0.5 cm².

  11. In vitro rumen feed degradability assessed with DaisyII and batch culture: effect of sample size

    Directory of Open Access Journals (Sweden)

    Stefano Schiavon

    2010-01-01

    In vitro degradability with the DaisyII (D) equipment is commonly performed with 0.5 g of feed sample in each filter bag. The literature reports that reducing the ratio of sample size to bag surface could facilitate the release of soluble or fine particulate matter, so a reduction of sample size to 0.25 g could improve the correlation between the measurements provided by D and by conventional batch culture (BC). This hypothesis was tested by analysing the results of two trials. In trial 1, 7 feeds were incubated for 48 h with rumen fluid (3 runs x 4 replications) both with D (0.5 g/bag) and with BC; the regressions between the mean values provided for the various feeds in each run by the two methods, for NDF degradability (NDFd) and in vitro true DM degradability (IVTDMD), had R² of 0.75 and 0.92 and RSD of 10.9 and 4.8%, respectively. In trial 2, 4 feeds were incubated (2 runs x 8 replications) with D (0.25 g/bag) and with BC; the corresponding regressions for NDFd and IVTDMD showed R² of 0.94 and 0.98 and RSD of 3.0 and 1.3%, respectively. A sample size of 0.25 g improved the precision of the measurements obtained with D.

  12. SutraPlot, a graphical post-processor for SUTRA, a model for ground-water flow with solute or energy transport

    Science.gov (United States)

    Souza, W.R.

    1999-01-01

    This report documents a graphical display post-processor (SutraPlot) for the U.S. Geological Survey Saturated-Unsaturated flow and solute or energy TRAnsport simulation model SUTRA, Version 2D3D.1. This version of SutraPlot is an upgrade of SutraPlot for the 2D-only SUTRA model (Souza, 1987). It has been modified to add 3D functionality, a graphical user interface (GUI), and enhanced graphic output options. Graphical options for 2D (2-dimensional) SUTRA simulations include: drawing the 2D finite-element mesh, mesh boundary, and velocity vectors; plots of contours for pressure, saturation, concentration, and temperature within the model region; 2D finite-element based gridding and interpolation; and 2D gridded data export files. Graphical options for 3D (3-dimensional) SUTRA simulations include: drawing the 3D finite-element mesh; plots of contours for pressure, saturation, concentration, and temperature in 2D sections of the 3D model region; 3D finite-element based gridding and interpolation; drawing selected regions of velocity vectors (projected on principal coordinate planes); and 3D gridded data export files. Installation instructions and a description of all graphic options are presented. A sample SUTRA problem is described and three step-by-step SutraPlot applications are provided. In addition, the methodology and numerical algorithms for the 2D and 3D finite-element based gridding and interpolation, developed for SutraPlot, are described.

  13. Surface stabilization and revegetation test plots

    International Nuclear Information System (INIS)

    Sackschewsky, M.R.; Kemp, C.J.; Hayward, W.M.

    1993-09-01

    The Westinghouse Hanford Company Decommissioning and Decontamination Engineering Group and Environmental Technology and Assessment Groups are developing new technologies to improve revegetation techniques for interim stabilization control over underground waste sites within the Radiation Area Remedial Action Program. Successful revegetation is an integral aspect of the waste isolation strategy. Unfortunately, revegetation can be very difficult to achieve on the Hanford Site due to several factors: low annual precipitation, unpredictable timing of precipitation, low fertility of available soils, and the coarse physical texture of soils covering waste sites. The tests in this report were performed during fiscal years 1992 and 1993 and include the use of two soil sealants in combination with bare soil and a soil/compost mixture, and a comparison of a wheatgrass mixture and a native seed mixture. Hydroprobe access ports were placed in one-half of the test plots and moisture data were collected. Soil fertility and plant community characteristics were monitored during the two years of the test. During the first year, all sites with compost showed additional fertility and retained greater amounts of soil moisture than noncomposted sites, and the Enduraseal soil fixative provided greater soil moisture than the Aerospray-77 soil fixative. During the second year, the compost and soil fixatives had a lesser effect on soil moisture, and during late summer all treatments had very similar soil moisture profiles. The use of compost greatly increased vegetative cover and soil fertility in comparison with sites to which no compost was added. Testing of the seed mixtures found that Siberian wheatgrass and Sandberg's bluegrass were the most dominant of the seeded species observed. All plots exhibited a dominant plant cover of volunteer cheatgrass. Biomass production was significantly greater on composted plots than on noncomposted plots.

  14. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firths approach under different sample size

    Science.gov (United States)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of a binary probit regression model are commonly estimated using the maximum likelihood estimation (MLE) method. However, MLE has a limitation when the binary data contain separation. Separation is the condition in which one or more independent variables perfectly separate the categories of the binary response. It causes the MLE estimators to fail to converge, so that they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims: first, to quantify the chance of separation occurring in binary probit regression under the MLE method and under Firth's approach; second, to compare the performance of the binary probit regression estimators obtained by the MLE method and by Firth's approach using the RMSE criterion. Both are assessed by simulation under different sample sizes. The results showed that for small sample sizes the chance of separation under the MLE method is higher than under Firth's approach, while for larger sample sizes the probability decreases and is nearly identical for the two methods. Meanwhile, Firth's estimators have smaller RMSE than the MLE estimators, especially for smaller sample sizes; for larger sample sizes the RMSEs are not much different. Thus, Firth's estimators outperform the MLE estimators.
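    The sketch below illustrates the separation phenomenon itself: with one continuous predictor, complete separation can be detected directly by checking whether the predictor ranges of the two response groups overlap, and its frequency falls as the sample size grows. The probit data-generating model and all parameter values are assumptions for illustration; quasi-complete separation is not handled.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(1)

        def separation_rate(n, beta=1.5, reps=2000):
            hits = 0
            for _ in range(reps):
                x = rng.normal(size=n)
                y = rng.random(n) < norm.cdf(beta * x)  # probit model for y
                if y.all() or (~y).all():               # degenerate: one class only
                    hits += 1
                elif x[y].min() > x[~y].max() or x[y].max() < x[~y].min():
                    hits += 1                           # complete separation
            return hits / reps

        for n in (10, 20, 40, 80):
            print(n, separation_rate(n))                # rate drops with sample size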

  15. The Use of Plackett-Burman Designs to Construct Split Plot Designs.

    NARCIS (Netherlands)

    Kulahci, M.; Bisgaard, S.

    2005-01-01

    When some factors are hard to change and others are relatively easy to change, split-plot experiments are often an economical alternative to fully randomized designs. Split-plot experiments, with their structure of subplot arrays embedded within whole-plot arrays, have a tendency to become large,

  16. The confounding effect of population structure on bayesian skyline plot inferences of demographic history

    DEFF Research Database (Denmark)

    Heller, Rasmus; Chikhi, Lounes; Siegismund, Hans

    2013-01-01

    Many coalescent-based methods aiming to infer the demographic history of populations assume a single, isolated and panmictic population (i.e. a Wright-Fisher model). While this assumption may be reasonable under many conditions, several recent studies have shown that the results can be misleading ... when it is violated. Among the most widely applied demographic inference methods are Bayesian skyline plots (BSPs), which are used across a range of biological fields. Violations of the panmixia assumption are to be expected in many biological systems, but the consequences for skyline plot inferences ... the best scheme for inferring demographic change over a typical time scale. Analyses of data from a structured African buffalo population demonstrate how BSP results can be strengthened by simulations. We recommend that sample selection should be carefully considered in relation to population structure ...

  17. Patterns and dynamics of Cs-137 soil contamination on the plot scale of the Bryansk Region (Russia): the role of processes, connectivity

    Science.gov (United States)

    Linnik, Vitaly; Sokolov, Alexander; Saveliev, Anatoly

    2014-05-01

    The character of surface and subsurface water flow was studied using 137Cs as a marker on a 50 x 70 m forest plot in the western part of the Bryansk Region, situated in the lower part of a south-facing slope drained by a stream. The altitude of the plot ranges from 152.68 to 154.68 m. The plot was surveyed with a terrain contour interval of 20 cm, and the survey data were used to build a digital elevation model (DEM). The plot has an undulating relief with a general surface slope to the south and southeast, with some depressions ranging in extent from tens of centimeters to several meters, 20-40 cm deep, in which groundwater comes up to the surface in spring. The 137Cs distribution was investigated by field radiometric survey at different step sizes: 10 m for the whole plot, 2 m for two local plots of 10 x 10 m, and 0.5 m for a subplot of 3 x 4 m; in total, more than 200 points were measured. Over the whole plot, the 137Cs mean value was 950 kBq/m2, with a minimum of 463 kBq/m2 and a maximum of 1706 kBq/m2. The local plot in the depression was characterized by mean, maximum, and minimum 137Cs levels of 682, 1280, and 281 kBq/m2, respectively. In the initial period after the accident at the Chernobyl NPP (April-May 1986), the water-soluble fraction of 137Cs could reach 50%, so 137Cs could have been transported by surface and subsurface water flow. The dependence of the 137Cs distribution on microrelief was examined. Values of the Laplace operator obtained for a detailed grid (step of 0.1 m, Laplace1) and a generalized grid (step of 0.25 m, Laplace2), as well as altitude, were taken as the parameters controlling 137Cs redistribution. A negative Laplacian corresponds to wash-out zones (convex microrelief), while a positive Laplacian corresponds to accumulation zones (concave microrelief). To determine the relation of the 137Cs distribution to the mentioned relief
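    The relief classification described above is easy to reproduce on a gridded DEM: a discrete Laplace operator is applied, and negative values are read as convex (wash-out) microrelief, positive values as concave (accumulation) zones. The synthetic DEM below is an invented stand-in for the survey data.

        import numpy as np
        from scipy.ndimage import laplace

        # Synthetic 50 x 70 m plot gridded at 0.25 m (illustrative values only)
        y, x = np.mgrid[0:200, 0:280] * 0.25
        dem = 154.0 - 0.01 * y - 0.005 * x + 0.2 * np.sin(x / 7) * np.cos(y / 9)

        lap = laplace(dem)                 # discrete Laplacian of the elevation grid
        washout = lap < 0                  # convex microrelief: wash-out zones
        accumulation = lap > 0             # concave microrelief: accumulation zones
        print(f"wash-out: {washout.mean():.0%}, accumulation: {accumulation.mean():.0%}")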

  18. Sample size estimation to substantiate freedom from disease for clustered binary data with a specific risk profile

    DEFF Research Database (Denmark)

    Kostoulas, P.; Nielsen, Søren Saxmose; Browne, W. J.

    2013-01-01

    ... and power when applied to these groups. We propose the use of the variance partition coefficient (VPC), which measures the clustering of infection/disease for individuals with a common risk profile. Sample size estimates are obtained separately for those groups that exhibit markedly different heterogeneity ..., thus optimizing resource allocation. A VPC-based predictive simulation method for sample size estimation to substantiate freedom from disease is presented. To illustrate the benefits of the proposed approach, we give two examples with the analysis of data from a risk factor study on Mycobacterium avium
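    As a pointer to what the VPC looks like in practice, the sketch below computes the latent-scale VPC for a random-intercept logistic model, where the logistic residual variance is pi^2/3. The link function and the example variance values are assumptions, since the abstract does not specify them.

        from math import pi

        def vpc_logistic(sigma2_between):
            """Latent-scale VPC for a random-intercept logistic model."""
            return sigma2_between / (sigma2_between + pi**2 / 3)

        # Hypothetical risk-profile groups with different estimated clustering:
        for label, s2 in [("low-risk herds", 0.4), ("high-risk herds", 1.8)]:
            print(f"{label}: VPC = {vpc_logistic(s2):.2f}")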

  19. Checking the Adequacy of Fit of Models from Split-Plot Designs

    DEFF Research Database (Denmark)

    Almini, A. A.; Kulahci, Murat; Montgomery, D. C.

    2009-01-01

    One of the main features that distinguish split-plot experiments from other experiments is that they involve two types of experimental errors: the whole-plot (WP) error and the subplot (SP) error. Taking this into consideration is very important when computing measures of adequacy of fit for split-plot models. In this article, we propose the computation of two R², R²-adjusted, prediction error sum of squares (PRESS), and R²-prediction statistics to measure the adequacy of fit for the WP and the SP submodels in a split-plot design. This is complemented with the graphical analysis of the two types of errors to check for any violation of the underlying assumptions and the adequacy of fit of split-plot models. Using examples, we show how computing two measures of model adequacy of fit for each split-plot design model is appropriate and useful, as they reveal whether the correct WP and SP effects have...
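    A sketch of the core computation named above for an ordinary least-squares fit: PRESS from leave-one-out residuals via the hat matrix, and the corresponding R²-prediction. In a split-plot analysis this would be applied separately to the WP and SP submodels; the data here are random placeholders.

        import numpy as np

        def press_r2_pred(X, y):
            H = X @ np.linalg.solve(X.T @ X, X.T)        # hat matrix
            e = y - H @ y                                # ordinary residuals
            press = np.sum((e / (1 - np.diag(H)))**2)    # leave-one-out residuals
            sst = np.sum((y - y.mean())**2)
            return press, 1 - press / sst                # PRESS, R^2-prediction

        rng = np.random.default_rng(0)
        X = np.column_stack([np.ones(30), rng.normal(size=(30, 2))])
        y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=0.5, size=30)
        print(press_r2_pred(X, y))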

  20. Analysis of time series and size of equivalent sample

    International Nuclear Information System (INIS)

    Bernal, Nestor; Molina, Alicia; Pabon, Daniel; Martinez, Jorge

    2004-01-01

    In a meteorological context, a first approach to the modeling of time series is to use models of autoregressive type. This makes it possible to take the meteorological persistence, or temporal behavior, into account, thereby identifying the memory of the analyzed process. This article presents the concept of the size of an equivalent sample, which helps to identify sub-periods with a similar structure within a data series. Moreover, we examine the alternative of adjusting the variance of a series while keeping its temporal structure in mind, as well as an adjustment to the covariance of two time series. Two examples are presented: the first corresponds to seven simulated series with a first-order autoregressive structure, and the second to seven meteorological series of surface air temperature anomalies in two Colombian regions.
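    For a series with first-order autoregressive structure, a familiar large-sample expression for the equivalent (effective) sample size is n_eff = n(1 - r1)/(1 + r1), where r1 is the lag-1 autocorrelation; the sketch below applies it to a simulated AR(1) series. The simulation parameters are illustrative, and the article's exact formulation may differ.

        import numpy as np

        rng = np.random.default_rng(7)
        phi, n = 0.6, 500
        x = np.zeros(n)
        for t in range(1, n):                  # simulate an AR(1) process
            x[t] = phi * x[t - 1] + rng.normal()

        xc = x - x.mean()
        r1 = (xc[:-1] @ xc[1:]) / (xc @ xc)    # lag-1 autocorrelation estimate
        n_eff = n * (1 - r1) / (1 + r1)        # equivalent sample size
        print(f"r1 = {r1:.2f}, equivalent sample size = {n_eff:.0f} of {n}")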