WorldWideScience

Sample records for sampling large numbers

  1. Radioimmunoassay of h-TSH - methodological suggestions for dealing with medium to large numbers of samples

    International Nuclear Information System (INIS)

    Mahlstedt, J.

    1977-01-01

    The article deals with practical aspects of establishing a TSH-RIA for patients, with particular regard to predetermined quality criteria. Methodological suggestions are made for medium to large numbers of samples with the target of reducing monotonous precision working steps by means of simple aids. The quality criteria required are well met, while the test procedure is well adapted to the rhythm of work and may be carried out without loss of precision even with large numbers of samples. (orig.)

  2. Number of core samples: Mean concentrations and confidence intervals

    International Nuclear Information System (INIS)

    Jensen, L.; Cromar, R.D.; Wilmarth, S.R.; Heasler, P.G.

    1995-01-01

    This document provides estimates of how well the mean concentration of analytes is known as a function of the number of core samples, composite samples, and replicate analyses. The estimates are based upon core composite data from nine recently sampled single-shell tanks. The results can be used when determining the number of core samples needed to "characterize" the waste from similar single-shell tanks. A standard way of expressing uncertainty in the estimate of a mean is with a 95% confidence interval (CI). The authors investigate how the width of a 95% CI on the mean concentration decreases as the number of observations increases. Specifically, the tables and figures show how the relative half-width (RHW) of a 95% CI decreases as the number of core samples increases. The RHW of a CI is a unit-less measure of uncertainty. The general conclusions are as follows: (1) the RHW decreases dramatically as the number of core samples is increased; the decrease is much smaller when the number of composited samples or the number of replicate analyses is increased; (2) if the mean concentration of an analyte needs to be estimated with a small RHW, then a large number of core samples is required. The estimated numbers of core samples given in the tables and figures were determined by specifying different sizes of the RHW. Four nominal sizes were examined: 10%, 25%, 50%, and 100% of the observed mean concentration. For a majority of analytes the number of core samples required to achieve an accuracy within 10% of the mean concentration is extremely large. In many cases, however, two or three core samples are sufficient to achieve an RHW of approximately 50 to 100%. Because many of the analytes in the data have small concentrations, this level of accuracy may be satisfactory for some applications.
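
    The relative half-width calculation behind these tables can be illustrated with a short sketch. The snippet below assumes independent core samples with a known relative standard deviation (RSD); the RSD values and sample counts are illustrative, not taken from the tank data.

```python
# Relative half-width (RHW) of a 95% confidence interval on a mean versus the
# number of core samples, for an assumed relative standard deviation (RSD).
from scipy import stats

def relative_half_width(n_samples, rsd):
    """RHW = t_{0.975, n-1} * RSD / sqrt(n), expressed as a fraction of the mean."""
    t = stats.t.ppf(0.975, df=n_samples - 1)
    return t * rsd / n_samples ** 0.5

for rsd in (0.25, 0.50, 1.00):                      # 25%, 50%, 100% relative std. dev.
    widths = {n: round(relative_half_width(n, rsd), 2) for n in (2, 3, 5, 10, 30)}
    print(f"RSD {rsd:.0%}: {widths}")
```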

  3. Sampling large random knots in a confined space

    International Nuclear Information System (INIS)

    Arsuaga, J; Blackstone, T; Diao, Y; Hinson, K; Karadayi, E; Saito, M

    2007-01-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is at the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way in sampling large (prime) knots, which can be useful in various applications.
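
    The O(n^2) growth of the average crossing number can be checked numerically with a small sketch of the uniform random polygon model: draw n vertices uniformly in the unit cube, close the polygon, project it onto a plane and count segment crossings. This is an illustration only, not the authors' code.

```python
# Uniform random polygon: n vertices uniform in the unit cube, joined in order.
# Projecting onto the xy-plane and counting crossings illustrates the quadratic
# growth of the average crossing number with n.
import random

def uniform_random_polygon(n):
    return [(random.random(), random.random(), random.random()) for _ in range(n)]

def segments_cross(p, q, r, s):
    """True if the 2D segments pq and rs properly intersect."""
    def orient(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (orient(p, q, r) * orient(p, q, s) < 0 and
            orient(r, s, p) * orient(r, s, q) < 0)

def projected_crossings(poly):
    pts = [(x, y) for x, y, _ in poly]                      # project onto the xy-plane
    edges = [(pts[i], pts[(i + 1) % len(pts)]) for i in range(len(pts))]
    count = 0
    for i in range(len(edges)):
        for j in range(i + 2, len(edges)):
            if i == 0 and j == len(edges) - 1:              # skip adjacent edges around the cycle
                continue
            if segments_cross(*edges[i], *edges[j]):
                count += 1
    return count

for n in (10, 20, 40, 80):
    avg = sum(projected_crossings(uniform_random_polygon(n)) for _ in range(20)) / 20
    print(f"n = {n:3d}  average crossings in projection = {avg:.1f}")
```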

  4. Sampling large random knots in a confined space

    Science.gov (United States)

    Arsuaga, J.; Blackstone, T.; Diao, Y.; Hinson, K.; Karadayi, E.; Saito, M.

    2007-09-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is at the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way in sampling large (prime) knots, which can be useful in various applications.

  5. Sampling large random knots in a confined space

    Energy Technology Data Exchange (ETDEWEB)

    Arsuaga, J [Department of Mathematics, San Francisco State University, 1600 Holloway Ave, San Francisco, CA 94132 (United States); Blackstone, T [Department of Computer Science, San Francisco State University, 1600 Holloway Ave., San Francisco, CA 94132 (United States); Diao, Y [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Hinson, K [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Karadayi, E [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States); Saito, M [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States)

    2007-09-28

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is at the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way in sampling large (prime) knots, which can be useful in various applications.

  6. Large number discrimination by mosquitofish.

    Directory of Open Access Journals (Sweden)

    Christian Agrillo

    BACKGROUND: Recent studies have demonstrated that fish display rudimentary numerical abilities similar to those observed in mammals and birds. The mechanisms underlying the discrimination of small quantities (<4) were recently investigated while, to date, no study has examined the discrimination of large numerosities in fish. METHODOLOGY/PRINCIPAL FINDINGS: Subjects were trained to discriminate between two sets of small geometric figures using social reinforcement. In the first experiment mosquitofish were required to discriminate 4 from 8 objects with or without experimental control of the continuous variables that co-vary with number (area, space, density, total luminance). Results showed that fish can use the sole numerical information to compare quantities but that they preferentially use cumulative surface area as a proxy of the number when this information is available. A second experiment investigated the influence of the total number of elements to discriminate large quantities. Fish proved to be able to discriminate up to 100 vs. 200 objects, without showing any significant decrease in accuracy compared with the 4 vs. 8 discrimination. The third experiment investigated the influence of the ratio between the numerosities. Performance was found to decrease when decreasing the numerical distance. Fish were able to discriminate numbers when ratios were 1:2 or 2:3 but not when the ratio was 3:4. The performance of a sample of undergraduate students, tested non-verbally using the same sets of stimuli, largely overlapped that of fish. CONCLUSIONS/SIGNIFICANCE: Fish are able to use pure numerical information when discriminating between quantities larger than 4 units. As observed in human and non-human primates, the numerical system of fish appears to have virtually no upper limit while the numerical ratio has a clear effect on performance. These similarities further reinforce the view of a common origin of non-verbal numerical systems in all

  7. Automated flow cytometric analysis across large numbers of samples and cell types.

    Science.gov (United States)

    Chen, Xiaoyi; Hasan, Milena; Libri, Valentina; Urrutia, Alejandra; Beitz, Benoît; Rouilly, Vincent; Duffy, Darragh; Patin, Étienne; Chalmond, Bernard; Rogge, Lars; Quintana-Murci, Lluis; Albert, Matthew L; Schwikowski, Benno

    2015-04-01

    Multi-parametric flow cytometry is a key technology for characterization of immune cell phenotypes. However, robust high-dimensional post-analytic strategies for automated data analysis in large numbers of donors are still lacking. Here, we report a computational pipeline, called FlowGM, which minimizes operator input, is insensitive to compensation settings, and can be adapted to different analytic panels. A Gaussian Mixture Model (GMM)-based approach was utilized for initial clustering, with the number of clusters determined using Bayesian Information Criterion. Meta-clustering in a reference donor permitted automated identification of 24 cell types across four panels. Cluster labels were integrated into FCS files, thus permitting comparisons to manual gating. Cell numbers and coefficient of variation (CV) were similar between FlowGM and conventional gating for lymphocyte populations, but notably FlowGM provided improved discrimination of "hard-to-gate" monocyte and dendritic cell (DC) subsets. FlowGM thus provides rapid high-dimensional analysis of cell phenotypes and is amenable to cohort studies. Copyright © 2015. Published by Elsevier Inc.
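
    The core clustering step (a Gaussian mixture fitted for several candidate cluster counts, with BIC selecting the number of components) can be sketched with scikit-learn on synthetic data. FlowGM's meta-clustering and FCS integration are not shown; the data and cluster counts below are made up.

```python
# GMM clustering with the number of components chosen by BIC (sketch only).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(loc, 0.3, size=(500, 2))      # three synthetic "cell populations"
                  for loc in ([0, 0], [2, 2], [0, 3])])

bics = {}
for k in range(1, 8):                                       # candidate numbers of clusters
    gmm = GaussianMixture(n_components=k, random_state=0).fit(data)
    bics[k] = gmm.bic(data)

best_k = min(bics, key=bics.get)                            # lowest BIC wins
labels = GaussianMixture(n_components=best_k, random_state=0).fit_predict(data)
print("chosen k =", best_k, "cluster sizes:", np.bincount(labels))
```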

  8. A course in mathematical statistics and large sample theory

    CERN Document Server

    Bhattacharya, Rabi; Patrangenaru, Victor

    2016-01-01

    This graduate-level textbook is primarily aimed at graduate students of statistics, mathematics, science, and engineering who have had an undergraduate course in statistics, an upper division course in analysis, and some acquaintance with measure theoretic probability. It provides a rigorous presentation of the core of mathematical statistics. Part I of this book constitutes a one-semester course on basic parametric mathematical statistics. Part II deals with the large sample theory of statistics — parametric and nonparametric, and its contents may be covered in one semester as well. Part III provides brief accounts of a number of topics of current interest for practitioners and other disciplines whose work involves statistical methods. Large Sample theory with many worked examples, numerical calculations, and simulations to illustrate theory Appendices provide ready access to a number of standard results, with many proofs Solutions given to a number of selected exercises from Part I Part II exercises with ...

  9. Product-selective blot: a technique for measuring enzyme activities in large numbers of samples and in native electrophoresis gels

    International Nuclear Information System (INIS)

    Thompson, G.A.; Davies, H.M.; McDonald, N.

    1985-01-01

    A method termed product-selective blotting has been developed for screening large numbers of samples for enzyme activity. The technique is particularly well suited to detection of enzymes in native electrophoresis gels. The principle of the method was demonstrated by blotting samples from glutaminase or glutamate synthase reactions into an agarose gel embedded with ion-exchange resin under conditions favoring binding of product (glutamate) over substrates and other substances in the reaction mixture. After washes to remove these unbound substances, the product was measured using either fluorometric staining or radiometric techniques. Glutaminase activity in native electrophoresis gels was visualized by a related procedure in which substrates and products from reactions run in the electrophoresis gel were blotted directly into a resin-containing image gel. Considering the selective-binding materials available for use in the image gel, along with the possible detection systems, this method has potentially broad application

  10. Importance sampling large deviations in nonequilibrium steady states. I

    Science.gov (United States)

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T.

    2018-03-01

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
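
    The sampling problem the paper addresses can be seen in a brute-force baseline (not one of the paper's trajectory-ensemble methods): estimate the scaled cumulant generating function of a biased Brownian walker by direct averaging over independent trajectories. For this toy model the exact answer is psi(lambda) = v*lambda + D*lambda^2, and the estimate degrades as lambda grows because the average is dominated by exponentially rare trajectories.

```python
# Direct (non-importance-sampled) estimate of psi(lam) = (1/T) ln <exp(lam * x_T)>
# for a Brownian walker with drift v and diffusivity D; illustrative parameters.
import numpy as np

rng = np.random.default_rng(1)
v, D, T, dt, n_traj = 0.5, 1.0, 10.0, 0.01, 20_000
steps = int(T / dt)

# x_T is the sum of drift plus diffusive increments over each trajectory
increments = v * dt + np.sqrt(2 * D * dt) * rng.standard_normal((n_traj, steps))
x_T = increments.sum(axis=1)

for lam in (0.2, 0.5, 1.0, 1.5):
    psi_est = np.log(np.mean(np.exp(lam * x_T))) / T
    psi_exact = v * lam + D * lam ** 2
    print(f"lam = {lam:3.1f}   estimate = {psi_est:6.3f}   exact = {psi_exact:6.3f}")
```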

  11. Importance sampling large deviations in nonequilibrium steady states. I.

    Science.gov (United States)

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T

    2018-03-28

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.

  12. 21 CFR 203.38 - Sample lot or control numbers; labeling of sample units.

    Science.gov (United States)

    2010-04-01

    ... numbers; labeling of sample units. (a) Lot or control number required on drug sample labeling and sample... identifying lot or control number that will permit the tracking of the distribution of each drug sample unit... 21 Food and Drugs 4 2010-04-01 2010-04-01 false Sample lot or control numbers; labeling of sample...

  13. Monitoring a large number of pesticides and transformation products in water samples from Spain and Italy.

    Science.gov (United States)

    Rousis, Nikolaos I; Bade, Richard; Bijlsma, Lubertus; Zuccato, Ettore; Sancho, Juan V; Hernandez, Felix; Castiglioni, Sara

    2017-07-01

    Assessing the presence of pesticides in environmental waters is particularly challenging because of the huge number of substances used which may end up in the environment. Furthermore, the occurrence of pesticide transformation products (TPs) and/or metabolites makes this task even harder. Most studies dealing with the determination of pesticides in water include only a small number of analytes and in many cases no TPs. The present study applied a screening method for the determination of a large number of pesticides and TPs in wastewater (WW) and surface water (SW) from Spain and Italy. Liquid chromatography coupled to high-resolution mass spectrometry (HRMS) was used to screen a database of 450 pesticides and TPs. Detection and identification were based on specific criteria, i.e. mass accuracy, fragmentation, and comparison of retention times when reference standards were available, or a retention time prediction model when standards were not available. Seventeen pesticides and TPs from different classes (fungicides, herbicides and insecticides) were found in WW in Italy and Spain, and twelve in SW. Generally, in both countries more compounds were detected in effluent WW than in influent WW, and in SW than WW. This might be due to the analytical sensitivity in the different matrices, but also to the presence of multiple sources of pollution. HRMS proved a good screening tool to determine a large number of substances in water and identify some priority compounds for further quantitative analysis. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Thermal convection for large Prandtl numbers

    NARCIS (Netherlands)

    Grossmann, Siegfried; Lohse, Detlef

    2001-01-01

    The Rayleigh-Bénard theory by Grossmann and Lohse [J. Fluid Mech. 407, 27 (2000)] is extended towards very large Prandtl numbers Pr. The Nusselt number Nu is found here to be independent of Pr. However, for fixed Rayleigh numbers Ra a maximum in the Nu(Pr) dependence is predicted. We moreover offer

  15. Large number discrimination in newborn fish.

    Directory of Open Access Journals (Sweden)

    Laura Piffer

    Quantitative abilities have been reported in a wide range of species, including fish. Recent studies have shown that adult guppies (Poecilia reticulata) can spontaneously select the larger number of conspecifics. In particular, the evidence collected in the literature suggests the existence of two distinct systems of number representation: a precise system up to 4 units, and an approximate system for larger numbers. Spontaneous numerical abilities, however, seem to be limited to 4 units at birth and it is currently unclear whether or not the large number system is absent during the first days of life. In the present study, we investigated whether newborn guppies can be trained to discriminate between large quantities. Subjects were required to discriminate between groups of dots with a 0.50 ratio (e.g., 7 vs. 14) in order to obtain a food reward. To dissociate the roles of number and continuous quantities that co-vary with numerical information (such as cumulative surface area, space and density), three different experiments were set up: in Exp. 1 number and continuous quantities were simultaneously available. In Exp. 2 we controlled for continuous quantities and only numerical information was available; in Exp. 3 numerical information was made irrelevant and only continuous quantities were available. Subjects successfully solved the tasks in Exp. 1 and 2, providing the first evidence of large number discrimination in newborn fish. No discrimination was found in experiment 3, meaning that number acuity is better than spatial acuity. A comparison with the onset of numerical abilities observed in shoal-choice tests suggests that training procedures can promote the development of numerical abilities in guppies.

  16. Forecasting the Number of Soil Samples Required to Reduce Remediation Cost Uncertainty

    OpenAIRE

    Demougeot-Renard, Hélène; de Fouquet, Chantal; Renard, Philippe

    2008-01-01

    Sampling scheme design is an important step in the management of polluted sites. It largely controls the accuracy of remediation cost estimates. In practice, however, sampling is seldom designed to comply with a given level of remediation cost uncertainty. In this paper, we present a new technique that allows one to estimate the number of samples that should be taken at a given stage of investigation to reach a forecasted level of accuracy. The uncertainty is expressed both in terms of vol...

  17. Large sample neutron activation analysis of a reference inhomogeneous sample

    International Nuclear Information System (INIS)

    Vasilopoulou, T.; Athens National Technical University, Athens; Tzika, F.; Stamatelatos, I.E.; Koster-Ammerlaan, M.J.J.

    2011-01-01

    A benchmark experiment was performed for Neutron Activation Analysis (NAA) of a large inhomogeneous sample. The reference sample was developed in-house and consisted of SiO2 matrix and an Al-Zn alloy 'inhomogeneity' body. Monte Carlo simulations were employed to derive appropriate correction factors for neutron self-shielding during irradiation as well as self-attenuation of gamma rays and sample geometry during counting. The large sample neutron activation analysis (LSNAA) results were compared against reference values and the trueness of the technique was evaluated. An agreement within ±10% was observed between LSNAA and reference elemental mass values, for all matrix and inhomogeneity elements except Samarium, provided that the inhomogeneity body was fully simulated. However, in cases that the inhomogeneity was treated as not known, the results showed a reasonable agreement for most matrix elements, while large discrepancies were observed for the inhomogeneity elements. This study provided a quantification of the uncertainties associated with inhomogeneity in large sample analysis and contributed to the identification of the needs for future development of LSNAA facilities for analysis of inhomogeneous samples. (author)

  18. A review of methods for sampling large airborne particles and associated radioactivity

    International Nuclear Information System (INIS)

    Garland, J.A.; Nicholson, K.W.

    1990-01-01

    Radioactive particles, tens of μm or more in diameter, are unlikely to be emitted directly from nuclear facilities with exhaust gas cleansing systems, but may arise in the case of an accident or where resuspension from contaminated surfaces is significant. Such particles may dominate deposition and, according to some workers, may contribute to inhalation doses. Quantitative sampling of large airborne particles is difficult because of their inertia and large sedimentation velocities. The literature describes conditions for unbiased sampling and the magnitude of sampling errors for idealised sampling inlets in steady winds. However, few air samplers for outdoor use have been assessed for adequacy of sampling. Many size selective sampling methods are found in the literature but few are suitable at the low concentrations that are often encountered in the environment. A number of approaches for unbiased sampling of large particles have been found in the literature. Some are identified as meriting further study, for application in the measurement of airborne radioactivity. (author)

  19. Method for the radioimmunoassay of large numbers of samples using quantitative autoradiography of multiple-well plates

    International Nuclear Information System (INIS)

    Luner, S.J.

    1978-01-01

    A double antibody assay for thyroxine using 125I as label was carried out on 10-μl samples in Microtiter V-plates. After an additional centrifugation to compact the precipitates the plates were placed in contact with x-ray film overnight and the spots were scanned. In the 20 to 160 ng/ml range the average coefficient of variation for thyroxine concentration determined on the basis of film spot optical density was 11 percent compared to 4.8 percent obtained using a standard gamma counter. Eliminating the need for each sample to spend on the order of 1 min in a crystal well detector makes the method convenient for large-scale applications involving more than 3000 samples per day

  20. Large Sample Neutron Activation Analysis of Heterogeneous Samples

    International Nuclear Information System (INIS)

    Stamatelatos, I.E.; Vasilopoulou, T.; Tzika, F.

    2018-01-01

    A Large Sample Neutron Activation Analysis (LSNAA) technique was developed for non-destructive analysis of heterogeneous bulk samples. The technique incorporated collimated scanning and combining experimental measurements and Monte Carlo simulations for the identification of inhomogeneities in large volume samples and the correction of their effect on the interpretation of gamma-spectrometry data. Corrections were applied for the effect of neutron self-shielding, gamma-ray attenuation, geometrical factor and heterogeneous activity distribution within the sample. A benchmark experiment was performed to investigate the effect of heterogeneity on the accuracy of LSNAA. Moreover, a ceramic vase was analyzed as a whole demonstrating the feasibility of the technique. The LSNAA results were compared against results obtained by INAA and a satisfactory agreement between the two methods was observed. This study showed that LSNAA is a technique capable to perform accurate non-destructive, multi-elemental compositional analysis of heterogeneous objects. It also revealed the great potential of the technique for the analysis of precious objects and artefacts that need to be preserved intact and cannot be damaged for sampling purposes. (author)

  1. Sampling strategy for a large scale indoor radiation survey - a pilot project

    International Nuclear Information System (INIS)

    Strand, T.; Stranden, E.

    1986-01-01

    Optimisation of a stratified random sampling strategy for large scale indoor radiation surveys is discussed. It is based on the results from a small scale pilot project where variances in dose rates within different categories of houses were assessed. By selecting a predetermined precision level for the mean dose rate in a given region, the number of measurements needed can be optimised. The results of a pilot project in Norway are presented together with the development of the final sampling strategy for a planned large scale survey. (author)
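
    The optimisation described here amounts to standard stratified sample-size planning. The sketch below uses Neyman allocation with invented per-category standard deviations and a chosen target standard error; it is an illustration of the idea, not the survey's actual numbers.

```python
# Number of measurements and per-stratum allocation for a target precision of the
# regional mean dose rate (Neyman allocation, finite-population correction ignored).
import math

strata = {                      # stratum: (share of houses W_h, pilot std dev S_h in nGy/h)
    "wood":     (0.55, 30.0),
    "concrete": (0.30, 55.0),
    "brick":    (0.15, 40.0),
}
target_se = 5.0                 # desired standard error of the regional mean (nGy/h)

weighted_sd = sum(w * s for w, s in strata.values())
n_total = math.ceil(weighted_sd ** 2 / target_se ** 2)          # n = (sum W_h S_h)^2 / SE^2
allocation = {h: math.ceil(n_total * w * s / weighted_sd) for h, (w, s) in strata.items()}
print("total measurements:", n_total, "allocation:", allocation)
```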

  2. Getting DNA copy numbers without control samples.

    Science.gov (United States)

    Ortiz-Estevez, Maria; Aramburu, Ander; Rubio, Angel

    2012-08-16

    The selection of the reference to scale the data in a copy number analysis has paramount importance to achieve accurate estimates. Usually this reference is generated using control samples included in the study. However, these control samples are not always available and in these cases, an artificial reference must be created. A proper generation of this signal is crucial in terms of both noise and bias. We propose NSA (Normality Search Algorithm), a scaling method that works with and without control samples. It is based on the assumption that genomic regions enriched in SNPs with identical copy numbers in both alleles are likely to be normal. These normal regions are predicted for each sample individually and used to calculate the final reference signal. NSA can be applied to any CN data regardless of the microarray technology and preprocessing method. It also finds an optimal weighting of the samples minimizing possible batch effects. Five human datasets (a subset of HapMap samples, Glioblastoma Multiforme (GBM), Ovarian, Prostate and Lung Cancer experiments) have been analyzed. It is shown that using only tumoral samples, NSA is able to remove the bias in the copy number estimation, to reduce the noise and therefore, to increase the ability to detect copy number aberrations (CNAs). These improvements allow NSA to also detect recurrent aberrations more accurately than other state of the art methods. NSA provides a robust and accurate reference for scaling probe signals data to CN values without the need of control samples. It minimizes the problems of bias, noise and batch effects in the estimation of CNs. Therefore, NSA scaling approach helps to better detect recurrent CNAs than current methods. The automatic selection of references makes it useful to perform bulk analysis of many GEO or ArrayExpress experiments without the need of developing a parser to find the normal samples or possible batches within the data. The method is available in the open-source R package

  3. CO2 isotope analyses using large air samples collected on intercontinental flights by the CARIBIC Boeing 767

    NARCIS (Netherlands)

    Assonov, S.S.; Brenninkmeijer, C.A.M.; Koeppel, C.; Röckmann, T.

    2009-01-01

    Analytical details for 13C and 18O isotope analyses of atmospheric CO2 in large air samples are given. The large air samples of nominally 300 L were collected during the passenger aircraft-based atmospheric chemistry research project CARIBIC and analyzed for a large number of trace gases and

  4. Forecasting distribution of numbers of large fires

    Science.gov (United States)

    Haiganoush K. Preisler; Jeff Eidenshink; Stephen Howard; Robert E. Burgan

    2015-01-01

    Systems to estimate forest fire potential commonly utilize one or more indexes that relate to expected fire behavior; however they indicate neither the chance that a large fire will occur, nor the expected number of large fires. That is, they do not quantify the probabilistic nature of fire danger. In this work we use large fire occurrence information from the...

  5. Getting DNA copy numbers without control samples

    Directory of Open Access Journals (Sweden)

    Ortiz-Estevez Maria

    2012-08-01

    Background: The selection of the reference to scale the data in a copy number analysis has paramount importance to achieve accurate estimates. Usually this reference is generated using control samples included in the study. However, these control samples are not always available and in these cases, an artificial reference must be created. A proper generation of this signal is crucial in terms of both noise and bias. We propose NSA (Normality Search Algorithm), a scaling method that works with and without control samples. It is based on the assumption that genomic regions enriched in SNPs with identical copy numbers in both alleles are likely to be normal. These normal regions are predicted for each sample individually and used to calculate the final reference signal. NSA can be applied to any CN data regardless of the microarray technology and preprocessing method. It also finds an optimal weighting of the samples minimizing possible batch effects. Results: Five human datasets (a subset of HapMap samples, Glioblastoma Multiforme (GBM), Ovarian, Prostate and Lung Cancer experiments) have been analyzed. It is shown that using only tumoral samples, NSA is able to remove the bias in the copy number estimation, to reduce the noise and therefore, to increase the ability to detect copy number aberrations (CNAs). These improvements allow NSA to also detect recurrent aberrations more accurately than other state of the art methods. Conclusions: NSA provides a robust and accurate reference for scaling probe signals data to CN values without the need of control samples. It minimizes the problems of bias, noise and batch effects in the estimation of CNs. Therefore, NSA scaling approach helps to better detect recurrent CNAs than current methods. The automatic selection of references makes it useful to perform bulk analysis of many GEO or ArrayExpress experiments without the need of developing a parser to find the normal samples or possible batches within the

  6. Optimal sampling designs for large-scale fishery sample surveys in Greece

    Directory of Open Access Journals (Sweden)

    G. BAZIGOS

    2007-12-01

    The paper deals with the optimization of the following three large scale sample surveys: biological sample survey of commercial landings (BSCL), experimental fishing sample survey (EFSS), and commercial landings and effort sample survey (CLES).

  7. The large sample size fallacy.

    Science.gov (United States)

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
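
    The point is easy to demonstrate numerically: with a very large sample, a trivially small effect yields an extreme p-value, so the effect size has to be examined separately. The snippet below is an illustration with simulated data.

```python
# Large sample size fallacy: tiny effect, huge n, "significant" p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 200_000
group_a = rng.normal(0.00, 1.0, n)           # control group
group_b = rng.normal(0.02, 1.0, n)           # true difference of only 0.02 SD

t, p = stats.ttest_ind(group_a, group_b)
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
cohens_d = (group_b.mean() - group_a.mean()) / pooled_sd

print(f"p-value   = {p:.2e}   (statistically 'significant')")
print(f"Cohen's d = {cohens_d:.3f}   (trivial in practice)")
```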

  8. Thermocapillary Bubble Migration: Thermal Boundary Layers for Large Marangoni Numbers

    Science.gov (United States)

    Balasubramaniam, R.; Subramanian, R. S.

    1996-01-01

    The migration of an isolated gas bubble in an immiscible liquid possessing a temperature gradient is analyzed in the absence of gravity. The driving force for the bubble motion is the shear stress at the interface which is a consequence of the temperature dependence of the surface tension. The analysis is performed under conditions for which the Marangoni number is large, i.e. energy is transferred predominantly by convection. Velocity fields in the limit of both small and large Reynolds numbers are used. The thermal problem is treated by standard boundary layer theory. The outer temperature field is obtained in the vicinity of the bubble. A similarity solution is obtained for the inner temperature field. For both small and large Reynolds numbers, the asymptotic values of the scaled migration velocity of the bubble in the limit of large Marangoni numbers are calculated. The results show that the migration velocity has the same scaling for both low and large Reynolds numbers, but with a different coefficient. Higher order thermal boundary layers are analyzed for the large Reynolds number flow field and the higher order corrections to the migration velocity are obtained. Results are also presented for the momentum boundary layer and the thermal wake behind the bubble, for large Reynolds number conditions.

  9. The Application Law of Large Numbers That Predicts The Amount of Actual Loss in Insurance of Life

    Science.gov (United States)

    Tinungki, Georgina Maria

    2018-03-01

    The law of large numbers is a statistical concept that uses the average number of events or risks in a sample or population to make predictions. The larger the population considered, the more accurate the prediction. In the field of insurance, the law of large numbers is used to predict the risk of loss or claims among participants so that the premium can be calculated appropriately. For example, if on average one of every 100 insurance participants files an accident claim, then the premiums collected from 100 participants should be able to provide the sum assured for at least one accident claim. The more insurance participants are included in the calculation, the more precise the prediction and the calculation of the premium. Life insurance, as a tool for spreading risk, can only work if a life insurance company is able to bear the same risk in large numbers; this is where the law of large numbers applies. The law of large numbers states that as the amount of exposure to losses increases, the predicted loss comes closer to the actual loss. The use of the law of large numbers therefore allows the number of losses to be predicted better.
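
    The claim-frequency example in this abstract can be simulated directly: as the number of policies grows, the observed loss per policy converges to the expected loss used to set the premium. The figures below are illustrative.

```python
# Law of large numbers in insurance: observed vs expected loss per policy.
import numpy as np

rng = np.random.default_rng(7)
claim_prob, sum_assured = 0.01, 100_000      # 1 claim per 100 participants, illustrative payout

for n_policies in (100, 1_000, 10_000, 100_000, 1_000_000):
    claims = rng.random(n_policies) < claim_prob
    observed = claims.sum() * sum_assured / n_policies      # actual loss per policy
    expected = claim_prob * sum_assured                     # predicted loss per policy
    print(f"n = {n_policies:>9,}   observed = {observed:8.1f}   expected = {expected:8.1f}")
```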

  10. Sample preparation for large-scale bioanalytical studies based on liquid chromatographic techniques.

    Science.gov (United States)

    Medvedovici, Andrei; Bacalum, Elena; David, Victor

    2018-01-01

    The quality of the analytical data obtained in large-scale, long-term bioanalytical studies based on liquid chromatography depends on a number of experimental factors, including the choice of sample preparation method. This review discusses this tedious part of bioanalytical studies, as applied to large-scale samples and using liquid chromatography coupled with different detector types as the core analytical technique. The main sample preparation methods covered are protein precipitation, liquid-liquid extraction, solid-phase extraction, derivatization and their variants. They are discussed in terms of analytical performance, fields of application, advantages and disadvantages. The cited literature mainly covers the analytical achievements of the last decade, although several earlier papers that have proven valuable over time are also included. Copyright © 2017 John Wiley & Sons, Ltd.

  11. Sampling Large Graphs for Anticipatory Analytics

    Science.gov (United States)

    2015-05-15

    low. C. Random Area Sampling. Random area sampling [8] is a "snowball" sampling method in which a set of random seed vertices are selected and areas... Sampling Large Graphs for Anticipatory Analytics. Lauren Edwards, Luke Johnson, Maja Milosavljevic, Vijay Gadepally, Benjamin A. Miller, Lincoln... systems, greater human-in-the-loop involvement, or through complex algorithms. We are investigating the use of sampling to mitigate these challenges
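
    A plain-Python sketch of the "snowball" idea mentioned in this snippet (seed vertices plus neighbour expansion on an adjacency dict) is shown below; it is not the report's code and the graph is synthetic.

```python
# Snowball-style graph sampling: start from random seeds, absorb neighbours
# until the target sample size is reached.
import random

def snowball_sample(adj, n_seeds, target_size, seed=0):
    rng = random.Random(seed)
    sampled = set(rng.sample(list(adj), n_seeds))
    frontier = list(sampled)
    while frontier and len(sampled) < target_size:
        vertex = frontier.pop(0)
        for neighbour in adj[vertex]:
            if neighbour not in sampled and len(sampled) < target_size:
                sampled.add(neighbour)
                frontier.append(neighbour)
    return sampled

# small random graph for demonstration
rng = random.Random(1)
adj = {v: set() for v in range(200)}
for _ in range(600):
    a, b = rng.sample(range(200), 2)
    adj[a].add(b)
    adj[b].add(a)

print("sampled", len(snowball_sample(adj, n_seeds=3, target_size=50)), "vertices")
```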

  12. Sample-path large deviations in credit risk

    NARCIS (Netherlands)

    Leijdekker, V.J.G.; Mandjes, M.R.H.; Spreij, P.J.C.

    2011-01-01

    The event of large losses plays an important role in credit risk. As these large losses are typically rare, and portfolios usually consist of a large number of positions, large deviation theory is the natural tool to analyze the tail asymptotics of the probabilities involved. We first derive a

  13. A fast learning method for large scale and multi-class samples of SVM

    Science.gov (United States)

    Fan, Yu; Guo, Huiming

    2017-06-01

    A fast learning method for multi-class SVM (Support Vector Machine) classification based on a binary tree is presented to address the low learning efficiency of SVM when processing large-scale multi-class samples. The paper adopts a bottom-up method to set up the binary tree hierarchy; according to this hierarchy, a sub-classifier learns from the corresponding samples of each node. During learning, several class clusters are generated by a first clustering of the training samples. Central points are first extracted from those clusters containing only one type of sample. For clusters containing two types of samples, the numbers of positive and negative sub-clusters are set according to their degree of mixture, a secondary clustering is performed, and central points are then extracted from the resulting sub-clusters. Sub-classifiers are obtained by learning from the reduced sample set formed by combining the extracted central points. Simulation experiments show that this fast learning method, based on multi-level clustering, maintains high classification accuracy, greatly reduces the number of samples and effectively improves learning efficiency.
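
    The sample-reduction step (replacing each class's training points by cluster centres before fitting the SVM) can be sketched as below. This shows only that step, not the paper's binary-tree hierarchy or mixture-degree rule; the dataset and cluster counts are arbitrary.

```python
# Reduce a large training set to cluster centres, then fit an SVM on the centres.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=20_000, n_features=10, n_informative=6,
                           n_classes=2, random_state=0)

centres, centre_labels = [], []
for cls in np.unique(y):
    km = KMeans(n_clusters=50, n_init=4, random_state=0).fit(X[y == cls])
    centres.append(km.cluster_centers_)
    centre_labels.append(np.full(50, cls))

X_red = np.vstack(centres)                 # 100 centres stand in for 20,000 samples
y_red = np.concatenate(centre_labels)

clf = SVC(kernel="rbf").fit(X_red, y_red)
print("accuracy on the full data:", round(clf.score(X, y), 3))
```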

  14. A Chain Perspective on Large-scale Number Systems

    NARCIS (Netherlands)

    Grijpink, J.H.A.M.

    2012-01-01

    As large-scale number systems gain significance in social and economic life (electronic communication, remote electronic authentication), the correct functioning and the integrity of public number systems take on crucial importance. They are needed to uniquely indicate people, objects or phenomena

  15. Large numbers hypothesis. II - Electromagnetic radiation

    Science.gov (United States)

    Adams, P. J.

    1983-01-01

    This paper develops the theory of electromagnetic radiation in the units covariant formalism incorporating Dirac's large numbers hypothesis (LNH). A direct field-to-particle technique is used to obtain the photon propagation equation which explicitly involves the photon replication rate. This replication rate is fixed uniquely by requiring that the form of a free-photon distribution function be preserved, as required by the 2.7 K cosmic radiation. One finds that with this particular photon replication rate the units covariant formalism developed in Paper I actually predicts that the ratio of photon number to proton number in the universe varies as t^{1/4}, precisely in accord with LNH. The cosmological red-shift law is also derived and it is shown to differ considerably from the standard form νR = const.

  16. Lepton number violation in theories with a large number of standard model copies

    International Nuclear Information System (INIS)

    Kovalenko, Sergey; Schmidt, Ivan; Paes, Heinrich

    2011-01-01

    We examine lepton number violation (LNV) in theories with a saturated black hole bound on a large number of species. Such theories have been advocated recently as a possible solution to the hierarchy problem and an explanation of the smallness of neutrino masses. On the other hand, the violation of the lepton number can be a potential phenomenological problem of this N-copy extension of the standard model as due to the low quantum gravity scale black holes may induce TeV scale LNV operators generating unacceptably large rates of LNV processes. We show, however, that this issue can be avoided by introducing a spontaneously broken U(1)_{B-L}. Then, due to the existence of a specific compensation mechanism between contributions of different Majorana neutrino states, LNV processes in the standard model copy become extremely suppressed with rates far beyond experimental reach.

  17. Random sampling of elementary flux modes in large-scale metabolic networks.

    Science.gov (United States)

    Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel

    2012-09-15

    The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well distributed sample, that is representative of the complete set of EMs, should be suitable to most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. dmachado@deb.uminho.pt Supplementary data are available at Bioinformatics online.

  18. Fatal crashes involving large numbers of vehicles and weather.

    Science.gov (United States)

    Wang, Ying; Liang, Liming; Evans, Leonard

    2017-12-01

    Adverse weather has been recognized as a significant threat to traffic safety. However, relationships between fatal crashes involving large numbers of vehicles and weather are rarely studied because such crashes occur so infrequently. By using all 1,513,792 fatal crashes in the Fatality Analysis Reporting System (FARS) data, 1975-2014, we successfully described these relationships. We found: (a) fatal crashes involving more than 35 vehicles are most likely to occur in snow or fog; (b) fatal crashes in rain are three times as likely to involve 10 or more vehicles as fatal crashes in good weather; (c) fatal crashes in snow [or fog] are 24 times [35 times] as likely to involve 10 or more vehicles as fatal crashes in good weather. If the example had used 20 vehicles, the risk ratios would be 6 for rain, 158 for snow, and 171 for fog. To reduce the risk of involvement in fatal crashes with large numbers of vehicles, drivers should slow down more than they currently do under adverse weather conditions. Driver deaths per fatal crash increase slowly with increasing numbers of involved vehicles when it is snowing or raining, but more steeply when clear or foggy. We conclude that in order to reduce risk of involvement in crashes involving large numbers of vehicles, drivers must reduce speed in fog, and in snow or rain, reduce speed by even more than they already do. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.

  19. Prediction of the number of 14 MeV neutron elastically scattered from large sample of aluminium using Monte Carlo simulation method

    International Nuclear Information System (INIS)

    Husin Wagiran; Wan Mohd Nasir Wan Kadir

    1997-01-01

    In neutron scattering processes, the effect of multiple scattering is to cause an effective increase in the measured cross-sections, due to the increased probability of neutron scattering interactions in the sample. Analysis of how the effective cross-section varies with thickness is very complicated because of complicated sample geometries and the variation of the scattering cross-section with energy. The Monte Carlo method is one possible method for treating multiple scattering processes in an extended sample. In this method many approximations have to be made and accurate data on microscopic cross-sections are needed at various angles. In the present work, a Monte Carlo simulation programme suitable for a small computer was developed. The programme was capable of predicting the number of neutrons scattered from aluminium samples of various thicknesses at all angles between 0° and 360° in 10° increments. In order to keep the programme simple and capable of being run on a microcomputer in reasonable time, the calculations were done in a two-dimensional coordinate system. The numbers of neutrons predicted by this model show good agreement with previous experimental results.
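
    A heavily simplified 2D sketch of this kind of simulation is given below: isotropic elastic scattering, an exponential free path with a made-up mean free path, and exit directions tallied in 10-degree bins. The real programme uses measured angular cross-sections; none of the values here are from the paper.

```python
# Toy 2D Monte Carlo of neutrons multiply scattering in a slab of given thickness.
import math, random

def run(thickness_cm, mean_free_path_cm, n_neutrons=50_000, seed=0):
    rng = random.Random(seed)
    bins = [0] * 36                                    # 10-degree angular bins
    for _ in range(n_neutrons):
        x, y, angle = 0.0, 0.0, 0.0                    # neutron enters along +x
        while True:
            step = -mean_free_path_cm * math.log(rng.random())
            x += step * math.cos(angle)
            y += step * math.sin(angle)
            if x < 0.0 or x > thickness_cm:            # neutron has left the slab
                bins[int(math.degrees(angle) % 360) // 10] += 1
                break
            angle = rng.uniform(0.0, 2.0 * math.pi)    # crude isotropic elastic scatter
    return bins

for thickness in (1.0, 3.0, 6.0):
    counts = run(thickness, mean_free_path_cm=3.0)
    print(f"{thickness} cm slab: forward 0-10 deg = {counts[0]}, backward 170-180 deg = {counts[17]}")
```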

  20. Support, shape and number of replicate samples for tree foliage analysis

    NARCIS (Netherlands)

    Luyssaert, Sebastiaan; Mertens, Jan; Raitio, Hannu

    Many fundamental features of a sampling program are determined by the heterogeneity of the object under study and the settings for the error (α), the power (β), the effect size (ES), the number of replicate samples, and sample support, which is a feature that is often overlooked. The number of

  1. Does Decision Quality (Always) Increase with the Size of Information Samples? Some Vicissitudes in Applying the Law of Large Numbers

    Science.gov (United States)

    Fiedler, Klaus; Kareev, Yaakov

    2006-01-01

    Adaptive decision making requires that contingencies between decision options and their relative assets be assessed accurately and quickly. The present research addresses the challenging notion that contingencies may be more visible from small than from large samples of observations. An algorithmic account for such a seemingly paradoxical effect…

  2. On a strong law of large numbers for monotone measures

    Czech Academy of Sciences Publication Activity Database

    Agahi, H.; Mohammadpour, A.; Mesiar, Radko; Ouyang, Y.

    2013-01-01

    Vol. 83, No. 4 (2013), pp. 1213-1218 ISSN 0167-7152 R&D Projects: GA ČR GAP402/11/0378 Institutional support: RVO:67985556 Keywords: capacity * Choquet integral * strong law of large numbers Subject RIV: BA - General Mathematics Impact factor: 0.531, year: 2013 http://library.utia.cas.cz/separaty/2013/E/mesiar-on a strong law of large numbers for monotone measures.pdf

  3. The large numbers hypothesis and a relativistic theory of gravitation

    International Nuclear Information System (INIS)

    Lau, Y.K.; Prokhovnik, S.J.

    1986-01-01

    A way to reconcile Dirac's large numbers hypothesis and Einstein's theory of gravitation was recently suggested by Lau (1985). It is characterized by the conjecture of a time-dependent cosmological term and gravitational term in Einstein's field equations. Motivated by this conjecture and the large numbers hypothesis, we formulate here a scalar-tensor theory in terms of an action principle. The cosmological term is required to be spatially dependent as well as time dependent in general. The theory developed is applied to a cosmological model compatible with the large numbers hypothesis. The time-dependent form of the cosmological term and the scalar potential are then deduced. A possible explanation of the smallness of the cosmological term is also given and the possible significance of the scalar field is speculated

  4. Rotating thermal convection at very large Rayleigh numbers

    Science.gov (United States)

    Weiss, Stephan; van Gils, Dennis; Ahlers, Guenter; Bodenschatz, Eberhard

    2016-11-01

    The large scale thermal convection systems in geo- and astrophysics are usually influenced by Coriolis forces caused by the rotation of their celestial bodies. To better understand the influence of rotation on the convective flow field and the heat transport at these conditions, we study Rayleigh-Bénard convection, using pressurized sulfur hexafluoride (SF6) at up to 19 bars in a cylinder of diameter D=1.12 m and a height of L=2.24 m. The gas is heated from below and cooled from above and the convection cell sits on a rotating table inside a large pressure vessel (the "Uboot of Göttingen"). With this setup Rayleigh numbers of up to Ra = 10^15 can be reached, while Ekman numbers as low as Ek = 10^-8 are possible. The Prandtl number in these experiments is kept constant at Pr = 0.8. We report on heat flux measurements (expressed by the Nusselt number Nu) as well as measurements from more than 150 temperature probes inside the flow. We thank the Deutsche Forschungsgemeinschaft (DFG) for financial support through SFB963: "Astrophysical Flow Instabilities and Turbulence". The work of GA was supported in part by the US National Science Foundation through Grant DMR11-58514.

  5. Software engineering the mixed model for genome-wide association studies on large samples.

    Science.gov (United States)

    Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J

    2009-11-01

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.
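
    The association test itself can be sketched with generalized least squares once the kinship matrix and variance components are in hand (real packages estimate the variance components, e.g. by REML; here they are simply assumed). All data below are synthetic.

```python
# Mixed-model style association test: GLS with covariance V = sg2*K + se2*I.
import numpy as np

rng = np.random.default_rng(3)
n_ind, n_snp = 400, 300
genotypes = rng.integers(0, 3, size=(n_ind, n_snp)).astype(float)   # 0/1/2 allele counts

G = (genotypes - genotypes.mean(0)) / (genotypes.std(0) + 1e-9)
K = G @ G.T / n_snp                                                  # genomic kinship matrix

sg2, se2 = 0.5, 0.5                                                  # assumed-known variance components
y = rng.multivariate_normal(np.zeros(n_ind), sg2 * K + se2 * np.eye(n_ind))
y += 0.4 * genotypes[:, 0]                                           # SNP 0 carries a true effect

V_inv = np.linalg.inv(sg2 * K + se2 * np.eye(n_ind))

def gls_z(snp):
    X = np.column_stack([np.ones(n_ind), snp])
    XtVi = X.T @ V_inv
    beta_cov = np.linalg.inv(XtVi @ X)
    beta = beta_cov @ XtVi @ y
    return beta[1] / np.sqrt(beta_cov[1, 1])           # z-score for the SNP effect

print("causal SNP z =", round(gls_z(genotypes[:, 0]), 2),
      "  null SNP z =", round(gls_z(genotypes[:, 1]), 2))
```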

  6. Large area synchrotron X-ray fluorescence mapping of biological samples

    International Nuclear Information System (INIS)

    Kempson, I.; Thierry, B.; Smith, E.; Gao, M.; De Jonge, M.

    2014-01-01

    Large area mapping of inorganic material in biological samples has suffered severely from prohibitively long acquisition times. With the advent of new detector technology we can now generate statistically relevant information for studying cell populations, inter-variability and bioinorganic chemistry in large specimen. We have been implementing ultrafast synchrotron-based XRF mapping afforded by the MAIA detector for large area mapping of biological material. For example, a 2.5 million pixel map can be acquired in 3 hours, compared to a typical synchrotron XRF set-up needing over 1 month of uninterrupted beamtime. Of particular focus to us is the fate of metals and nanoparticles in cells, 3D tissue models and animal tissues. The large area scanning has for the first time provided statistically significant information on sufficiently large numbers of cells to provide data on intercellular variability in uptake of nanoparticles. Techniques such as flow cytometry generally require analysis of thousands of cells for statistically meaningful comparison, due to the large degree of variability. Large area XRF now gives comparable information in a quantifiable manner. Furthermore, we can now image localised deposition of nanoparticles in tissues that would be highly improbable to 'find' by typical XRF imaging. In addition, the ultra fast nature also makes it viable to conduct 3D XRF tomography over large dimensions. This technology avails new opportunities in biomonitoring and understanding metal and nanoparticle fate ex-vivo. Following from this is extension to molecular imaging through specific anti-body targeted nanoparticles to label specific tissues and monitor cellular process or biological consequence

  7. Forecasting distribution of numbers of large fires

    Science.gov (United States)

    Eidenshink, Jeffery C.; Preisler, Haiganoush K.; Howard, Stephen; Burgan, Robert E.

    2014-01-01

    Systems to estimate forest fire potential commonly utilize one or more indexes that relate to expected fire behavior; however they indicate neither the chance that a large fire will occur, nor the expected number of large fires. That is, they do not quantify the probabilistic nature of fire danger. In this work we use large fire occurrence information from the Monitoring Trends in Burn Severity project, and satellite and surface observations of fuel conditions in the form of the Fire Potential Index, to estimate two aspects of fire danger: 1) the probability that a 1 acre ignition will result in a 100+ acre fire, and 2) the probabilities of having at least 1, 2, 3, or 4 large fires within a Predictive Services Area in the forthcoming week. These statistical processes are the main thrust of the paper and are used to produce two daily national forecasts that are available from the U.S. Geological Survey, Earth Resources Observation and Science Center and via the Wildland Fire Assessment System. A validation study of our forecasts for the 2013 fire season demonstrated good agreement between observed and forecasted values.
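
    One simple way to turn a per-ignition probability into the "at least k large fires" probabilities mentioned here is a binomial tail; the independence assumption and the numbers below are ours, not necessarily the authors' statistical model.

```python
# Probability of at least k large fires in a week, given m ignitions that each
# become a 100+ acre fire with probability p (illustrative m and p).
from scipy import stats

m_ignitions, p_large = 40, 0.03
for k in (1, 2, 3, 4):
    prob = stats.binom.sf(k - 1, m_ignitions, p_large)   # P(X >= k)
    print(f"P(at least {k} large fires) = {prob:.3f}")
```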

  8. Teaching Multiplication of Large Positive Whole Numbers Using ...

    African Journals Online (AJOL)

    This study investigated the teaching of multiplication of large positive whole numbers using the grating method and the effect of this method on students' performance in junior secondary schools. The study was conducted in Obio Akpor Local Government Area of Rivers State. It was quasi-experimental. Two research ...

  9. Lovelock inflation and the number of large dimensions

    CERN Document Server

    Ferrer, Francesc

    2007-01-01

    We discuss an inflationary scenario based on Lovelock terms. These higher order curvature terms can lead to inflation when there are more than three spatial dimensions. Inflation will end if the extra dimensions are stabilised, so that at most three dimensions are free to expand. This relates graceful exit to the number of large dimensions.

  10. Strong Law of Large Numbers for Hidden Markov Chains Indexed by an Infinite Tree with Uniformly Bounded Degrees

    Directory of Open Access Journals (Sweden)

    Huilin Huang

    2014-01-01

    We study strong limit theorems for hidden Markov chain fields indexed by an infinite tree with uniformly bounded degrees. We mainly establish the strong law of large numbers for such hidden Markov chain fields and give the strong limit law of the conditional sample entropy rate.

  11. Waardenburg syndrome: Novel mutations in a large Brazilian sample.

    Science.gov (United States)

    Bocángel, Magnolia Astrid Pretell; Melo, Uirá Souto; Alves, Leandro Ucela; Pardono, Eliete; Lourenço, Naila Cristina Vilaça; Marcolino, Humberto Vicente Cezar; Otto, Paulo Alberto; Mingroni-Netto, Regina Célia

    2018-06-01

    This paper deals with the molecular investigation of Waardenburg syndrome (WS) in a sample of 49 clinically diagnosed probands (most from southeastern Brazil), 24 of them having the type 1 (WS1) variant (10 familial and 14 isolated cases) and 25 being affected by the type 2 (WS2) variant (five familial and 20 isolated cases). Sequential Sanger sequencing of all coding exons of PAX3, MITF, EDN3, EDNRB, SOX10 and SNAI2 genes, followed by CNV detection by MLPA of PAX3, MITF and SOX10 genes in selected cases revealed many novel pathogenic variants. Molecular screening, performed in all patients, revealed 19 causative variants (19/49 = 38.8%), six of them being large whole-exon deletions detected by MLPA, seven (four missense and three nonsense substitutions) resulting from single nucleotide substitutions (SNV), and six representing small indels. A pair of dizygotic affected female twins presented the c.430delC variant in SOX10, but the mutation, imputed to gonadal mosaicism, was not found in their unaffected parents. At least 10 novel causative mutations, described in this paper, were found in this Brazilian sample. Copy-number-variation detected by MLPA identified the causative mutation in 12.2% of our cases, corresponding to 31.6% of all causative mutations. In the majority of cases, the deletions were sporadic, since they were not present in the parents of isolated cases. Our results, as a whole, reinforce the fact that the screening of copy-number-variants by MLPA is a powerful tool to identify the molecular cause in WS patients. Copyright © 2018 Elsevier Masson SAS. All rights reserved.

  12. Analysis of large soil samples for actinides

    Science.gov (United States)

    Maxwell, Sherrod L., III [Aiken, SC]

    2009-03-24

    A method of analyzing relatively large soil samples for actinides employs a separation process based on cerium fluoride precipitation, which removes the soil matrix while carrying plutonium, americium, and curium with the cerium and hydrofluoric acid; these actinides are then separated using chromatography cartridges.

  13. [Effects of sampling plot number on tree species distribution prediction under climate change].

    Science.gov (United States)

    Liang, Yu; He, Hong-Shi; Wu, Zhi-Wei; Li, Xiao-Na; Luo, Xu

    2013-05-01

    Based on the neutral landscapes under different degrees of landscape fragmentation, this paper studied the effects of sampling plot number on the prediction of tree species distribution at landscape scale under climate change. The tree species distribution was predicted by the coupled modeling approach which linked an ecosystem process model with a forest landscape model, and three contingent scenarios and one reference scenario of sampling plot numbers were assumed. The differences between the three scenarios and the reference scenario under different degrees of landscape fragmentation were tested. The results indicated that the effects of sampling plot number on the prediction of tree species distribution depended on the tree species life history attributes. For the generalist species, the prediction of their distribution at landscape scale needed more plots. Except for the extreme specialist, landscape fragmentation degree also affected the effects of sampling plot number on the prediction. With the increase of simulation period, the effects of sampling plot number on the prediction of tree species distribution at landscape scale could be changed. For generalist species, more plots are needed for the long-term simulation.

  14. Support, shape and number of replicate samples for tree foliage analysis.

    Science.gov (United States)

    Luyssaert, Sebastiaan; Mertens, Jan; Raitio, Hannu

    2003-06-01

    Many fundamental features of a sampling program are determined by the heterogeneity of the object under study and the settings for the error (alpha), the power (beta), the effect size (ES), the number of replicate samples, and sample support, which is a feature that is often overlooked. The number of replicates, alpha, beta, ES, and sample support are interconnected. The effect of the sample support and its shape on the required number of replicate samples was investigated by means of a resampling method. The method was applied to a simulated distribution of Cd in the crown of a Salix fragilis L. tree. Increasing the dimensions of the sample support results in a decrease in the variance of the element concentration under study. Analysis of the variance is often the foundation of statistical tests, therefore, valid statistical testing requires the use of a fixed sample support during the experiment. This requirement might be difficult to meet in time-series analyses and long-term monitoring programs. Sample supports have their largest dimension in the direction with the largest heterogeneity, i.e. the direction representing the crown height, and this will give more accurate results than supports with other shapes. Taking the relationships between the sample support and the variance of the element concentrations in tree crowns into account provides guidelines for sampling efficiency in terms of precision and costs. In terms of time, the optimal support to test whether the average Cd concentration of the crown exceeds a threshold value is 0.405 m3 (alpha = 0.05, beta = 0.20, ES = 1.0 mg kg(-1) dry mass). The average weight of this support is 23 g dry mass, and 11 replicate samples need to be taken. It should be noted that in this case the optimal support applies to Cd under conditions similar to those of the simulation, but not necessarily all the examinations for this tree species, element, and hypothesis test.
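
    For orientation, a generic normal-approximation power calculation (not the paper's resampling procedure) shows how the number of replicate samples follows from alpha, beta, the effect size and the variance implied by a given support; the standard deviation used below is an assumed value:

        # Generic power-analysis sketch (not the paper's resampling procedure): the
        # number of replicate samples needed to detect an effect of size ES with
        # error rates alpha/beta, given the sampling variance implied by a support.
        from scipy.stats import norm

        def n_replicates(sigma, es, alpha=0.05, beta=0.20, two_sided=True):
            """Normal-approximation sample size for a one-sample mean test."""
            z_a = norm.ppf(1 - alpha / 2) if two_sided else norm.ppf(1 - alpha)
            z_b = norm.ppf(1 - beta)
            return ((z_a + z_b) * sigma / es) ** 2

        # assumed standard deviation of Cd concentration for a given support (mg/kg dry mass)
        print(n_replicates(sigma=1.1, es=1.0))   # about 9.5, i.e. round up to 10 replicates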

  15. [Dual process in large number estimation under uncertainty].

    Science.gov (United States)

    Matsumuro, Miki; Miwa, Kazuhisa; Terai, Hitoshi; Yamada, Kento

    2016-08-01

    According to dual process theory, there are two systems in the mind: an intuitive and automatic System 1 and a logical and effortful System 2. While many previous studies about number estimation have focused on simple heuristics and automatic processes, the deliberative System 2 process has not been sufficiently studied. This study focused on the System 2 process for large number estimation. First, we described an estimation process based on participants’ verbal reports. The task, corresponding to the problem-solving process, consisted of creating subgoals, retrieving values, and applying operations. Second, we investigated the influence of such deliberative process by System 2 on intuitive estimation by System 1, using anchoring effects. The results of the experiment showed that the System 2 process could mitigate anchoring effects.

  16. Factors associated with number of duodenal samples obtained in suspected celiac disease.

    Science.gov (United States)

    Shamban, Leonid; Sorser, Serge; Naydin, Stan; Lebwohl, Benjamin; Shukr, Mousa; Wiemann, Charlotte; Yevsyukov, Daniel; Piper, Michael H; Warren, Bradley; Green, Peter H R

    2017-12-01

    Many people with celiac disease are undiagnosed and there is evidence that insufficient duodenal samples may contribute to underdiagnosis. The aims of this study were to investigate whether more samples lead to a greater likelihood of a diagnosis of celiac disease and to elucidate factors that influence the number of samples collected. We identified patients from two community hospitals who were undergoing duodenal biopsy for indications (as identified by International Classification of Diseases code) compatible with possible celiac disease. Three cohorts were evaluated: no celiac disease (NCD, normal villi), celiac disease (villous atrophy, Marsh score 3), and possible celiac disease (PCD, Marsh score 1-2). Patients with celiac disease had a median of 4 specimens collected. The percentage of patients diagnosed with celiac disease with one sample was 0.3% compared with 12.8% of those with six samples (P = 0.001). Patient factors that positively correlated with the number of samples collected were endoscopic features, demographic details, and indication (P = 0.001). Endoscopist factors that positively correlated with the number of samples collected were absence of a trainee, pediatric gastroenterologist, and outpatient setting. The likelihood of diagnosing celiac disease significantly increased with six samples. Multiple factors influenced whether adequate biopsies were taken. Adherence to guidelines may increase the diagnosis rate of celiac disease.

  17. Sample-based Attribute Selective AnDE for Large Data

    DEFF Research Database (Denmark)

    Chen, Shenglei; Martinez, Ana; Webb, Geoffrey

    2017-01-01

    More and more applications have come with large data sets in the past decade. However, existing algorithms cannot guarantee to scale well on large data. Averaged n-Dependence Estimators (AnDE) allows for flexible learning from out-of-core data, by varying the value of n (number of super parents). Hence ...

  18. Gibbs sampling on large lattice with GMRF

    Science.gov (United States)

    Marcotte, Denis; Allard, Denis

    2018-02-01

    Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions naturally occur when associating latent Gaussian fields to category fields obtained by discrete simulation methods like multipoint, sequential indicator simulation and object-based simulation. The latent Gaussians are often used in data assimilation and history matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfactory, as it can diverge and it does not reproduce exactly the desired covariance. A better approach is to use Gaussian Markov Random Fields (GMRF), which enable the conditional distributions at any point to be computed without having to compute and invert the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be efficiently computed by convolution and applied with an acceptance/rejection method in the truncated case. We study empirically the speed of convergence and the effects of the choice of boundary conditions, of the correlation range and of GMRF smoothness. We show that the convergence is slower in the Gaussian case on the torus than for the finite case studied in the literature. However, in the truncated Gaussian case, we show that short scale correlation is quickly restored and the conditioning categories at each lattice point imprint the long scale correlation. Hence our approach makes it possible to realistically apply Gibbs sampling on large 2D or 3D lattices with the desired GMRF covariance.
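
    A minimal sketch of the coding-set idea, assuming a simple first-order GMRF on a torus with an assumed interaction weight (not the paper's exact model or its truncated acceptance/rejection step): all sites of one checkerboard colour are conditionally independent given the other colour, so each colour can be updated simultaneously.

        # Minimal sketch: simultaneous Gibbs update of one "coding set" (checkerboard colour)
        # for an assumed first-order GMRF on a torus; sites of one colour are conditionally
        # independent given the other colour, so they can all be drawn at once.
        import numpy as np

        rng = np.random.default_rng(1)
        n, beta = 64, 0.24                 # lattice size, assumed spatial-interaction weight (4*beta < 1)
        x = rng.normal(size=(n, n))
        color = (np.add.outer(np.arange(n), np.arange(n)) % 2).astype(bool)  # checkerboard mask

        def gibbs_sweep(x):
            for mask in (color, ~color):
                nbr_sum = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                           np.roll(x, 1, 1) + np.roll(x, -1, 1))
                cond_mean = beta * nbr_sum          # conditional mean of the first-order GMRF
                cond_sd = 1.0                       # assumed unit conditional variance
                x[mask] = rng.normal(cond_mean[mask], cond_sd)
            return x

        for _ in range(100):
            x = gibbs_sweep(x)
        print(x.mean(), x.std())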

  19. Statistical characterization of a large geochemical database and effect of sample size

    Science.gov (United States)

    Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.

    2005-01-01

    smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests, and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histogram, stem-and-leaf, and probability plots are recommended for rough judgement of probability distribution if needed. © 2005 Elsevier Ltd. All rights reserved.

  20. A spinner magnetometer for large Apollo lunar samples

    Science.gov (United States)

    Uehara, M.; Gattacceca, J.; Quesnel, Y.; Lepaulard, C.; Lima, E. A.; Manfredi, M.; Rochette, P.

    2017-10-01

    We developed a spinner magnetometer to measure the natural remanent magnetization of large Apollo lunar rocks in the storage vault of the Lunar Sample Laboratory Facility (LSLF) of NASA. The magnetometer mainly consists of a commercially available three-axial fluxgate sensor and a hand-rotating sample table with an optical encoder recording the rotation angles. The distance between the sample and the sensor is adjustable according to the sample size and magnetization intensity. The sensor and the sample are placed in a two-layer mu-metal shield to measure the sample natural remanent magnetization. The magnetic signals are acquired together with the rotation angle to obtain stacking of the measured signals over multiple revolutions. The developed magnetometer has a sensitivity of 5 × 10⁻⁷ Am² at the standard sensor-to-sample distance of 15 cm. This sensitivity is sufficient to measure the natural remanent magnetization of almost all the lunar basalt and breccia samples with mass above 10 g in the LSLF vault.

  2. Generating Random Samples of a Given Size Using Social Security Numbers.

    Science.gov (United States)

    Erickson, Richard C.; Brauchle, Paul E.

    1984-01-01

    The purposes of this article are (1) to present a method by which social security numbers may be used to draw cluster samples of a predetermined size and (2) to describe procedures used to validate this method of drawing random samples. (JOW)

  3. A Simple Sampling Method for Estimating the Accuracy of Large Scale Record Linkage Projects.

    Science.gov (United States)

    Boyd, James H; Guiver, Tenniel; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Anderson, Phil; Dickinson, Teresa

    2016-05-17

    Record linkage techniques allow different data collections to be brought together to provide a wider picture of the health status of individuals. Ensuring high linkage quality is important to guarantee the quality and integrity of research. Current methods for measuring linkage quality typically focus on precision (the proportion of accepted links that are correct), given the difficulty of measuring the proportion of false negatives. The aim of this work is to introduce and evaluate a sampling based method to estimate both precision and recall following record linkage. In the sampling based method, record-pairs from each threshold (including those below the identified cut-off for acceptance) are sampled and clerically reviewed. These results are then applied to the entire set of record-pairs, providing estimates of false positives and false negatives. This method was evaluated on a synthetically generated dataset, where the true match status (which records belonged to the same person) was known. The sampled estimates of linkage quality were relatively close to actual linkage quality metrics calculated for the whole synthetic dataset. The precision and recall measures for seven reviewers were very consistent with little variation in the clerical assessment results (overall agreement using the Fleiss kappa statistic was 0.601). This method presents as a possible means of accurately estimating matching quality and refining linkages in population level linkage studies. The sampling approach is especially important for large project linkages where the number of record pairs produced may be very large, often running into millions.
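
    The scaling step of the sampling idea can be sketched as follows (the numbers and bin structure are hypothetical, not the synthetic dataset of the paper): clerically reviewed samples from each score bin estimate the bin's match rate, which is then scaled to the full bin counts to estimate precision and recall.

        # Hedged sketch of the sampling idea (all counts are hypothetical): record-pairs
        # are binned by linkage score; a clerical sample from each bin estimates the match
        # rate, which is then scaled up to estimate true/false positives and negatives.
        bins = [
            # (total pairs in bin, sampled pairs, sampled pairs judged true matches, accepted?)
            (100_000, 200, 198, True),    # above the acceptance cut-off
            (40_000,  200, 60,  False),   # below the cut-off
            (500_000, 200, 1,   False),
        ]

        tp = fp = fn = 0.0
        for total, sampled, true_in_sample, accepted in bins:
            est_true = total * true_in_sample / sampled   # scale the sample rate to the bin
            if accepted:
                tp += est_true
                fp += total - est_true
            else:
                fn += est_true                             # true matches that were rejected

        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        print(round(precision, 3), round(recall, 3))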

  4. Large Sample Neutron Activation Analysis: A Challenge in Cultural Heritage Studies

    International Nuclear Information System (INIS)

    Stamatelatos, I.E.; Tzika, F.

    2007-01-01

    Large sample neutron activation analysis complements and significantly extends the analytical tools available for cultural heritage and authentication studies, providing unique applications of non-destructive, multi-element analysis of materials that are too precious to damage for sampling purposes, representative sampling of heterogeneous materials or even analysis of whole objects. In this work, correction factors for neutron self-shielding, gamma-ray attenuation and volume distribution of the activity in large volume samples composed of iron and ceramic material were derived. Moreover, the effect of inhomogeneity on the accuracy of the technique was examined.

  5. Investigating the Randomness of Numbers

    Science.gov (United States)

    Pendleton, Kenn L.

    2009-01-01

    The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…

  6. On Independence for Capacities with Law of Large Numbers

    OpenAIRE

    Huang, Weihuan

    2017-01-01

    This paper introduces new notions of Fubini independence and Exponential independence of random variables under capacities to fit Ellsberg's model, and finds out the relationships between Fubini independence, Exponential independence, MacCheroni and Marinacci's independence and Peng's independence. As an application, we give a weak law of large numbers for capacities under Exponential independence.

  7. Automatic trajectory measurement of large numbers of crowded objects

    Science.gov (United States)

    Li, Hui; Liu, Ye; Chen, Yan Qiu

    2013-06-01

    Complex motion patterns of natural systems, such as fish schools, bird flocks, and cell groups, have attracted great attention from scientists for years. Trajectory measurement of individuals is vital for quantitative and high-throughput study of their collective behaviors. However, such data are rare mainly due to the challenges of detection and tracking of large numbers of objects with similar visual features and frequent occlusions. We present an automatic and effective framework to measure trajectories of large numbers of crowded oval-shaped objects, such as fish and cells. We first use a novel dual ellipse locator to detect the coarse position of each individual and then propose a variance minimization active contour method to obtain the optimal segmentation results. For tracking, cost matrix of assignment between consecutive frames is trainable via a random forest classifier with many spatial, texture, and shape features. The optimal trajectories are found for the whole image sequence by solving two linear assignment problems. We evaluate the proposed method on many challenging data sets.
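
    The frame-to-frame linking step can be illustrated with a plain linear assignment solve; a simple Euclidean-distance cost is used here in place of the authors' trained random-forest cost, and the detections are simulated:

        # Illustrative sketch (not the authors' trained cost model): linking detections
        # between two consecutive frames by solving a linear assignment problem, here
        # with a simple Euclidean-distance cost instead of the learned random-forest cost.
        import numpy as np
        from scipy.optimize import linear_sum_assignment

        rng = np.random.default_rng(2)
        prev = rng.uniform(0, 100, size=(30, 2))              # detections in frame t
        curr = prev + rng.normal(scale=1.5, size=prev.shape)  # detections in frame t+1

        cost = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=2)
        rows, cols = linear_sum_assignment(cost)              # optimal one-to-one links
        links = list(zip(rows, cols))
        print(links[:5])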

  8. A full picture of large lepton number asymmetries of the Universe

    Energy Technology Data Exchange (ETDEWEB)

    Barenboim, Gabriela [Departament de Física Teòrica and IFIC, Universitat de València-CSIC, C/ Dr. Moliner, 50, Burjassot, E-46100 (Spain)]; Park, Wan-Il, E-mail: Gabriela.Barenboim@uv.es, E-mail: wipark@jbnu.ac.kr [Department of Science Education (Physics), Chonbuk National University, 567 Baekje-daero, Jeonju, 561-756 (Korea, Republic of)]

    2017-04-01

    A large lepton number asymmetry of O(0.1−1) in the present Universe might not only be allowed but also necessary for consistency among cosmological data. We show that, if a sizeable lepton number asymmetry were produced before the electroweak phase transition, the requirement of not producing too much baryon number asymmetry through sphaleron processes forces the high scale lepton number asymmetry to be larger than about 03. Therefore a mild entropy release causing O(10-100) suppression of the pre-existing particle density should take place when the background temperature of the Universe is around T = O(10⁻²-10²) GeV, for a large but experimentally consistent asymmetry to be present today. We also show that such a mild entropy production can be obtained by the late-time decays of the saxion, constraining the parameters of the Peccei-Quinn sector such as the mass and the vacuum expectation value of the saxion field to be m_φ ≳ O(10) TeV and φ_0 ≳ O(10¹⁴) GeV, respectively.

  9. Evidence for Knowledge of the Syntax of Large Numbers in Preschoolers

    Science.gov (United States)

    Barrouillet, Pierre; Thevenot, Catherine; Fayol, Michel

    2010-01-01

    The aim of this study was to provide evidence for knowledge of the syntax governing the verbal form of large numbers in preschoolers long before they are able to count up to these numbers. We reasoned that if such knowledge exists, it should facilitate the maintenance in short-term memory of lists of lexical primitives that constitute a number…

  10. Large scale sample management and data analysis via MIRACLE

    DEFF Research Database (Denmark)

    Block, Ines; List, Markus; Pedersen, Marlene Lemvig

    Reverse-phase protein arrays (RPPAs) allow sensitive quantification of relative protein abundance in thousands of samples in parallel. In the past years the technology has advanced, based on improved methods and protocols concerning sample preparation and printing, antibody selection, optimization of staining conditions and mode of signal analysis. However, sample management and data analysis still pose challenges because of the high number of samples, sample dilutions, customized array patterns, and the various programs necessary for array construction and data processing. We developed a comprehensive and user-friendly web application called MIRACLE (MIcroarray R-based Analysis of Complex Lysate Experiments), which bridges the gap between sample management and array analysis by conveniently keeping track of the sample information from lysate preparation, through array construction and signal...

  11. Similarities between 2D and 3D convection for large Prandtl number

    Indian Academy of Sciences (India)

    2016-06-18

    For Rayleigh-Bénard convection (RBC), we perform a comparative study of the spectra and fluxes of energy and entropy, and the scaling of large-scale quantities for large and infinite Prandtl numbers in two (2D) and three (3D) dimensions. We observe close ...

  12. The three-large-primes variant of the number field sieve

    NARCIS (Netherlands)

    S.H. Cavallar

    2002-01-01

    The Number Field Sieve (NFS) is the asymptotically fastest known factoring algorithm for large integers. This method was proposed by John Pollard in 1988. Since then several variants have been implemented with the objective of improving the siever, which is the most time consuming part of ...

  13. 105-DR Large Sodium Fire Facility decontamination, sampling, and analysis plan

    International Nuclear Information System (INIS)

    Knaus, Z.C.

    1995-01-01

    This is the decontamination, sampling, and analysis plan for the closure activities at the 105-DR Large Sodium Fire Facility at Hanford Reservation. This document supports the 105-DR Large Sodium Fire Facility Closure Plan, DOE-RL-90-25. The 105-DR LSFF, which operated from about 1972 to 1986, was a research laboratory that occupied the former ventilation supply room on the southwest side of the 105-DR Reactor facility in the 100-D Area of the Hanford Site. The LSFF was established to investigate fire fighting and safety associated with alkali metal fires in the liquid metal fast breeder reactor facilities. The decontamination, sampling, and analysis plan identifies the decontamination procedures, sampling locations, any special handling requirements, quality control samples, required chemical analysis, and data validation needed to meet the requirements of the 105-DR Large Sodium Fire Facility Closure Plan in compliance with the Resource Conservation and Recovery Act

  14. Multivariate statistics high-dimensional and large-sample approximations

    CERN Document Server

    Fujikoshi, Yasunori; Shimizu, Ryoichi

    2010-01-01

    A comprehensive examination of high-dimensional analysis of multivariate methods and their real-world applications Multivariate Statistics: High-Dimensional and Large-Sample Approximations is the first book of its kind to explore how classical multivariate methods can be revised and used in place of conventional statistical tools. Written by prominent researchers in the field, the book focuses on high-dimensional and large-scale approximations and details the many basic multivariate methods used to achieve high levels of accuracy. The authors begin with a fundamental presentation of the basic

  15. Large sample NAA facility and methodology development

    International Nuclear Information System (INIS)

    Roth, C.; Gugiu, D.; Barbos, D.; Datcu, A.; Aioanei, L.; Dobrea, D.; Taroiu, I. E.; Bucsa, A.; Ghinescu, A.

    2013-01-01

    A Large Sample Neutron Activation Analysis (LSNAA) facility has been developed at the TRIGA- Annular Core Pulsed Reactor (ACPR) operated by the Institute for Nuclear Research in Pitesti, Romania. The central irradiation cavity of the ACPR core can accommodate a large irradiation device. The ACPR neutron flux characteristics are well known and spectrum adjustment techniques have been successfully applied to enhance the thermal component of the neutron flux in the central irradiation cavity. An analysis methodology was developed by using the MCNP code in order to estimate counting efficiency and correction factors for the major perturbing phenomena. Test experiments, comparison with classical instrumental neutron activation analysis (INAA) methods and international inter-comparison exercise have been performed to validate the new methodology. (authors)

  16. Secret Sharing Schemes with a large number of players from Toric Varieties

    DEFF Research Database (Denmark)

    Hansen, Johan P.

    A general theory for constructing linear secret sharing schemes over a finite field F_q from toric varieties is introduced. The number of players can be as large as (q-1)^r - 1 for r ≥ 1. We present general methods for obtaining the reconstruction and privacy thresholds as well as conditions for multiplication on the associated secret sharing schemes. In particular we apply the method on certain toric surfaces. The main results are ideal linear secret sharing schemes where the number of players can be as large as (q-1)^2 - 1. We determine bounds for the reconstruction and privacy thresholds...

  17. Sample preparation method for ICP-MS measurement of 99Tc in a large amount of environmental samples

    International Nuclear Information System (INIS)

    Kondo, M.; Seki, R.

    2002-01-01

    A sample preparation method for measurement of 99 Tc in large amounts of soil and water samples by ICP-MS has been developed using 95m Tc as a yield tracer. This method is based on the conventional method for a small amount of soil samples using incineration, acid digestion, extraction chromatography (TEVA resin) and ICP-MS measurement. Preliminary concentration of Tc was introduced by co-precipitation with ferric oxide. Matrix materials in the large samples were removed more thoroughly than in the previous method, while keeping the high recovery of Tc. The recovery of Tc was 70-80% for 100 g soil samples and 60-70% for 500 g of soil and 500 L of water samples. The detection limit of this method was evaluated as 0.054 mBq/kg in 500 g soil and 0.032 μBq/L in 500 L water. The determined value of 99 Tc in the IAEA-375 (soil sample collected near the Chernobyl Nuclear Reactor) was 0.25 ± 0.02 Bq/kg. (author)

  18. Quasi-isodynamic configuration with large number of periods

    International Nuclear Information System (INIS)

    Shafranov, V.D.; Isaev, M.Yu.; Mikhailov, M.I.; Subbotin, A.A.; Cooper, W.A.; Kalyuzhnyj, V.N.; Kasilov, S.V.; Nemov, V.V.; Kernbichler, W.; Nuehrenberg, C.; Nuehrenberg, J.; Zille, R.

    2005-01-01

    It has been previously reported that quasi-isodynamic (qi) stellarators with poloidal direction of the contours of B on magnetic surface can exhibit very good fast-particle collisionless confinement. In addition, approaching the quasi-isodynamicity condition leads to diminished neoclassical transport and small bootstrap current. The calculations of local-mode stability show that there is a tendency toward an increasing beta limit with increasing number of periods. The consideration of the quasi-helically symmetric systems has demonstrated that with increasing aspect ratio (and number of periods) the optimized configuration approaches the straight symmetric counterpart, for which the optimal parameters and highest beta values were found by optimization of the boundary magnetic surface cross-section. The qi system considered here with zero net toroidal current does not have a symmetric analogue in the limit of large aspect ratio and finite rotational transform. Thus, it is not clear whether some invariant structure of the configuration period exists in the limit of negligible toroidal effect and what are the best possible parameters for it. In the present paper the results of an optimization of the configuration with N = 12 periods are presented. Such properties as fast-particle confinement, effective ripple, structural factor of bootstrap current and MHD stability are considered. It is shown that the MHD stability limit here is larger than in configurations with a smaller number of periods considered earlier. Nevertheless, the toroidal effect in this configuration is still significant so that a simple increase of the number of periods and proportional growth of aspect ratio do not conserve favourable neoclassical transport and ideal local-mode stability properties. (author)

  19. Utilization of AHWR critical facility for research and development work on large sample NAA

    International Nuclear Information System (INIS)

    Acharya, R.; Dasari, K.B.; Pujari, P.K.; Swain, K.K.; Reddy, A.V.R.; Verma, S.K.; De, S.K.

    2014-01-01

    The graphite reflector position of the AHWR critical facility (CF) was utilized for analysis of large size (g-kg scale) samples using internal mono standard neutron activation analysis (IM-NAA). The reactor position was characterized by the cadmium ratio method, using an In monitor for the total flux and the sub-cadmium to epithermal flux ratio (f). Large sample neutron activation analysis (LSNAA) work was carried out for samples of stainless steel, ancient and new clay potteries and dross. Large as well as non-standard geometry samples (1 g - 0.5 kg) were irradiated. Radioactive assay was carried out using high resolution gamma ray spectrometry. Concentration ratios obtained by IM-NAA were used for a provenance study of 30 clay potteries, obtained from excavated Buddhist sites of AP, India. Concentrations of Au and Ag were determined in three large, not-so-homogeneous samples of dross. An X-Z rotary scanning unit has been installed for counting large and not so homogeneous samples. (author)

  20. CHRONICITY OF DEPRESSION AND MOLECULAR MARKERS IN A LARGE SAMPLE OF HAN CHINESE WOMEN.

    Science.gov (United States)

    Edwards, Alexis C; Aggen, Steven H; Cai, Na; Bigdeli, Tim B; Peterson, Roseann E; Docherty, Anna R; Webb, Bradley T; Bacanu, Silviu-Alin; Flint, Jonathan; Kendler, Kenneth S

    2016-04-25

    Major depressive disorder (MDD) has been associated with changes in mean telomere length and mitochondrial DNA (mtDNA) copy number. This study investigates if clinical features of MDD differentially impact these molecular markers. Data from a large, clinically ascertained sample of Han Chinese women with recurrent MDD were used to examine whether symptom presentation, severity, and comorbidity were related to salivary telomere length and/or mtDNA copy number (maximum N = 5,284 for both molecular and phenotypic data). Structural equation modeling revealed that duration of longest episode was positively associated with mtDNA copy number, while earlier age of onset of most severe episode and a history of dysthymia were associated with shorter telomeres. Other factors, such as symptom presentation, family history of depression, and other comorbid internalizing disorders, were not associated with these molecular markers. Chronicity of depressive symptoms is related to more pronounced telomere shortening and increased mtDNA copy number among individuals with a history of recurrent MDD. As these molecular markers have previously been implicated in physiological aging and morbidity, individuals who experience prolonged depressive symptoms are potentially at greater risk of adverse medical outcomes. © 2016 Wiley Periodicals, Inc.

  1. Fluid Mechanics of Aquatic Locomotion at Large Reynolds Numbers

    OpenAIRE

    Govardhan, RN; Arakeri, JH

    2011-01-01

    There exists a huge range of fish species, besides other aquatic organisms like squids and salps, that locomote in water at large Reynolds numbers, a regime of flow where inertial forces dominate viscous forces. In the present review, we discuss the fluid mechanics governing the locomotion of such organisms. Most fishes propel themselves by periodic undulatory motions of the body and tail, and the typical classification of their swimming modes is based on the fraction of their body...

  2. Combining large number of weak biomarkers based on AUC.

    Science.gov (United States)

    Yan, Li; Tian, Lili; Liu, Song

    2015-12-20

    Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. Copyright © 2015 John Wiley & Sons, Ltd.
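
    For reference, the AUC of any given linear combination can be evaluated empirically via the Mann-Whitney statistic; the sketch below (simulated weak markers and assumed equal weights) illustrates the quantity being maximized, not the paper's pairwise algorithm:

        # Illustration (not the paper's pairwise algorithm): computing the empirical AUC
        # of a given linear combination of biomarkers via the Mann-Whitney statistic.
        import numpy as np

        def auc(score, label):
            """Empirical AUC: probability that a random case scores higher than a random control."""
            pos, neg = score[label == 1], score[label == 0]
            wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
            return wins / (len(pos) * len(neg))

        rng = np.random.default_rng(3)
        n, p = 300, 40                            # many weak markers
        X = rng.normal(size=(n, p))
        y = rng.integers(0, 2, size=n)
        X[y == 1] += 0.15                         # each marker is only weakly informative

        w = np.ones(p)                            # assumed combination weights (e.g. from training)
        print(auc(X @ w, y))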

  3. Exploring Technostress: Results of a Large Sample Factor Analysis

    OpenAIRE

    Jonušauskas, Steponas; Raišienė, Agota Giedrė

    2016-01-01

    With reference to the results of a large sample factor analysis, the article aims to propose a frame for examining technostress in a population. A survey and principal component analysis of a sample of 1013 individuals who use ICT in their everyday work were implemented in the research. Thirteen factors combine 68 questions and explain 59.13 per cent of the dispersion of answers. Based on the factor analysis, the questionnaire was reframed and prepared to reasonably analyze the respondents’ an...

  4. 105-DR Large sodium fire facility soil sampling data evaluation report

    International Nuclear Information System (INIS)

    Adler, J.G.

    1996-01-01

    This report evaluates the soil sampling activities, soil sample analysis, and soil sample data associated with the closure activities at the 105-DR Large Sodium Fire Facility. The evaluation compares these activities to the regulatory requirements for meeting clean closure. The report concludes that there is no soil contamination from the waste treatment activities

  5. Very Large Data Volumes Analysis of Collaborative Systems with Finite Number of States

    Science.gov (United States)

    Ivan, Ion; Ciurea, Cristian; Pavel, Sorin

    2010-01-01

    The collaborative system with finite number of states is defined. A very large database is structured. Operations on large databases are identified. Repetitive procedures for collaborative systems operations are derived. The efficiency of such procedures is analyzed. (Contains 6 tables, 5 footnotes and 3 figures.)

  6. Optimal number of coarse-grained sites in different components of large biomolecular complexes.

    Science.gov (United States)

    Sinitskiy, Anton V; Saunders, Marissa G; Voth, Gregory A

    2012-07-26

    The computational study of large biomolecular complexes (molecular machines, cytoskeletal filaments, etc.) is a formidable challenge facing computational biophysics and biology. To achieve biologically relevant length and time scales, coarse-grained (CG) models of such complexes usually must be built and employed. One of the important early stages in this approach is to determine an optimal number of CG sites in different constituents of a complex. This work presents a systematic approach to this problem. First, a universal scaling law is derived and numerically corroborated for the intensity of the intrasite (intradomain) thermal fluctuations as a function of the number of CG sites. Second, this result is used for derivation of the criterion for the optimal number of CG sites in different parts of a large multibiomolecule complex. In the zeroth-order approximation, this approach validates the empirical rule of taking one CG site per fixed number of atoms or residues in each biomolecule, previously widely used for smaller systems (e.g., individual biomolecules). The first-order corrections to this rule are derived and numerically checked by the case studies of the Escherichia coli ribosome and Arp2/3 actin filament junction. In different ribosomal proteins, the optimal number of amino acids per CG site is shown to differ by a factor of 3.5, and an even wider spread may exist in other large biomolecular complexes. Therefore, the method proposed in this paper is valuable for the optimal construction of CG models of such complexes.

  7. Operability test report for rotary mode core sampling system number 3

    International Nuclear Information System (INIS)

    Corbett, J.E.

    1996-01-01

    This report documents the successful completion of operability testing for the Rotary Mode Core Sampling (RMCS) system #3. The report includes the test procedure (WHC-SD-WM-OTP-174), exception resolutions, data sheets, and a test report summary.

  8. Improving CASINO performance for models with large number of electrons

    International Nuclear Information System (INIS)

    Anton, L.; Alfe, D.; Hood, R.Q.; Tanqueray, D.

    2009-01-01

    Quantum Monte Carlo calculations have at their core algorithms based on statistical ensembles of multidimensional random walkers which are straightforward to use on parallel computers. Nevertheless, some computations have reached the limit of the memory resources for models with more than 1000 electrons because of the need to store a large amount of electronic-orbital-related data. Besides that, for systems with a large number of electrons, it is interesting to study whether the evolution of one configuration of random walkers can be done faster in parallel. We present a comparative study of two ways to solve these problems: (1) distributed orbital data done with MPI or Unix inter-process communication tools, and (2) second-level parallelism for configuration computation.

  9. Characterization of General TCP Traffic under a Large Number of Flows Regime

    National Research Council Canada - National Science Library

    Tinnakornsrisuphap, Peerapol; La, Richard J; Makowski, Armand M

    2002-01-01

    .... Accurate traffic modeling of a large number of short-lived TCP flows is extremely difficult due to the interaction between session, transport, and network layers, and the explosion of the size...

  10. Associations between sociodemographic, sampling and health factors and various salivary cortisol indicators in a large sample without psychopathology

    NARCIS (Netherlands)

    Vreeburg, Sophie A.; Kruijtzer, Boudewijn P.; van Pelt, Johannes; van Dyck, Richard; DeRijk, Roel H.; Hoogendijk, Witte J. G.; Smit, Johannes H.; Zitman, Frans G.; Penninx, Brenda

    Background: Cortisol levels are increasingly often assessed in large-scale psychosomatic research. Although determinants of different salivary cortisol indicators have been described, they have not yet been systematically studied within the same study with a large sample size. Sociodemographic,

  11. Sampling strategies in antimicrobial resistance monitoring: evaluating how precision and sensitivity vary with the number of animals sampled per farm.

    Directory of Open Access Journals (Sweden)

    Takehisa Yamamoto

    Because antimicrobial resistance in food-producing animals is a major public health concern, many countries have implemented antimicrobial monitoring systems at a national level. When designing a sampling scheme for antimicrobial resistance monitoring, it is necessary to consider both cost effectiveness and statistical plausibility. In this study, we examined how sampling scheme precision and sensitivity can vary with the number of animals sampled from each farm, while keeping the overall sample size constant to avoid additional sampling costs. Five sampling strategies were investigated. These employed 1, 2, 3, 4 or 6 animal samples per farm, with a total of 12 animals sampled in each strategy. A total of 1,500 Escherichia coli isolates from 300 fattening pigs on 30 farms were tested for resistance against 12 antimicrobials. The performance of each sampling strategy was evaluated by bootstrap resampling from the observational data. In the bootstrapping procedure, farms, animals, and isolates were selected randomly with replacement, and a total of 10,000 replications were conducted. For each antimicrobial, we observed that the standard deviation and 2.5-97.5 percentile interval of resistance prevalence were smallest in the sampling strategy that employed 1 animal per farm. The proportion of bootstrap samples that included at least 1 isolate with resistance was also evaluated as an indicator of the sensitivity of the sampling strategy to previously unidentified antimicrobial resistance. The proportion was greatest with 1 sample per farm and decreased with larger numbers of samples per farm. We concluded that when the total number of samples is pre-specified, the most precise and sensitive sampling strategy involves collecting 1 sample per farm.
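
    A rough sketch of the bootstrap comparison follows (simplified and with assumed farm-level resistance probabilities rather than the study's E. coli data): prevalence is estimated from 12 isolates drawn as k isolates from each of 12/k resampled farms, and the spread of the estimates indicates the precision of each strategy.

        # Rough sketch of the bootstrap comparison (assumed farm-level resistance
        # probabilities, not the study's data): estimate the spread of prevalence
        # estimates when 12 isolates are drawn as k isolates from 12//k farms.
        import numpy as np

        rng = np.random.default_rng(4)
        n_farms, isolates_per_farm = 30, 50
        farm_p = rng.beta(0.5, 4.0, size=n_farms)            # assumed between-farm clustering
        data = [rng.binomial(1, p, size=isolates_per_farm) for p in farm_p]

        def bootstrap_prevalence(k, reps=10_000):
            """Prevalence estimates using k isolates from each of 12//k farms (12 isolates total)."""
            ests = np.empty(reps)
            for r in range(reps):
                farms = rng.choice(n_farms, size=12 // k, replace=True)
                draws = [rng.choice(data[f], size=k, replace=True) for f in farms]
                ests[r] = np.concatenate(draws).mean()
            return ests

        for k in (1, 2, 3, 4, 6):
            print(k, round(bootstrap_prevalence(k).std(), 4))   # smaller SD = more precise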

  12. Evaluation of single and two-stage adaptive sampling designs for estimation of density and abundance of freshwater mussels in a large river

    Science.gov (United States)

    Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.

    2011-01-01

    Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.

  13. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Science.gov (United States)

    2010-07-01

    40 CFR Protection of Environment (2010-07-01), § 761.79(b)(3), § 761.308 Sample selection by random number generation on any two-dimensional square grid... area created in accordance with paragraph (a) of this section, select two random numbers: one each for...
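
    The selection rule amounts to drawing one random coordinate per axis of the grid; a toy sketch (grid dimensions are hypothetical, not taken from the regulation) is:

        # Toy illustration of the idea in § 761.308 (grid size is hypothetical):
        # pick a sampling square by drawing one random number for each axis of the grid.
        import random

        random.seed(0)
        n_rows, n_cols = 20, 20                 # assumed dimensions of the square grid
        row = random.randrange(n_rows)          # one random number for the vertical axis
        col = random.randrange(n_cols)          # one random number for the horizontal axis
        print(f"sample the grid cell at row {row}, column {col}")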

  14. Modified large number theory with constant G

    International Nuclear Information System (INIS)

    Recami, E.

    1983-01-01

    The inspiring "numerology" uncovered by Dirac, Eddington, Weyl, et al. can be explained and derived when it is slightly modified so as to connect the "gravitational world" (cosmos) with the "strong world" (hadron), rather than with the electromagnetic one. The aim of this note is to show the following. In the present approach to the "Large Number Theory," cosmos and hadrons are considered to be (finite) similar systems, so that the ratio R-bar/r-bar of the cosmos typical length R-bar to the hadron typical length r-bar is constant in time (for instance, if both cosmos and hadrons undergo an expansion/contraction cycle, according to the "cyclical big-bang" hypothesis, then R-bar and r-bar can be chosen to be the maximum radii, or the average radii). As a consequence, the gravitational constant G results to be independent of time. The present note is based on work done in collaboration with P. Caldirola, G. D. Maccarrone, and M. Pavsic.

  15. Timoides agassizii Bigelow, 1904, little-known hydromedusa (Cnidaria), appears briefly in large numbers off Oman, March 2011, with additional notes about species of the genus Timoides.

    Science.gov (United States)

    Purushothaman, Jasmine; Kharusi, Lubna Al; Mills, Claudia E; Ghielani, Hamed; Marzouki, Mohammad Al

    2013-12-11

    A bloom of the hydromedusan jellyfish, Timoides agassizii, occurred in February 2011 off the coast of Sohar, Al Batinah, Sultanate of Oman, in the Gulf of Oman. This species was first observed in 1902 in great numbers off Haddummati Atoll in the Maldive Islands in the Indian Ocean and has rarely been seen since. The species appeared briefly in large numbers off Oman in 2011 and subsequent observation of our 2009 samples of zooplankton from Sohar revealed that it was also present in low numbers (two collected) in one sample in 2009; these are the first records in the Indian Ocean north of the Maldives. Medusae collected off Oman were almost identical to those recorded previously from the Maldive Islands, Papua New Guinea, the Marshall Islands, Guam, the South China Sea, and Okinawa. T. agassizii is a species that likely lives for several months. It was present in our plankton samples together with large numbers of the oceanic siphonophore Physalia physalis only during a single month's samples, suggesting that the temporary bloom off Oman was likely due to the arrival of mature, open ocean medusae into nearshore waters. We see no evidence that T. agassizii has established a new population along Oman, since if so, it would likely have been present in more than one sample period. We are unable to deduce further details of the life cycle of this species from blooms of many mature individuals nearshore, about a century apart. Examination of a single damaged T. agassizii medusa from Guam, calls into question the existence of its congener, T. latistyla, known only from a single specimen.

  16. Breakdown of the law of large numbers in Josephson junction series arrays

    International Nuclear Information System (INIS)

    Dominguez, D.; Cerdeira, H.A.

    1994-01-01

    We study underdamped Josephson junction series arrays that are globally coupled through a resistive shunting load and driven by an rf bias current. We find that they can be an experimental realization of many phenomena currently studied in globally coupled logistic maps. We find coherent, ordered, partially ordered and turbulent phases in the IV characteristics of the array. The ordered phase corresponds to giant Shapiro steps. In the turbulent phase there is a saturation of the broad band noise for a large number of junctions. This corresponds to a breakdown of the law of large numbers as seen in the globally coupled maps. Coexisting with this, we find an emergence of novel pseudo-steps in the IV characteristics. This effect can be experimentally distinguished from the Shapiro steps, which do not have broad band noise emission. (author). 21 refs, 5 figs

  17. A Characterization of Hypergraphs with Large Domination Number

    Directory of Open Access Journals (Sweden)

    Henning Michael A.

    2016-05-01

    Let H = (V, E) be a hypergraph with vertex set V and edge set E. A dominating set in H is a subset of vertices D ⊆ V such that for every vertex v ∈ V \ D there exists an edge e ∈ E for which v ∈ e and e ∩ D ≠ ∅. The domination number γ(H) is the minimum cardinality of a dominating set in H. It is known [Cs. Bujtás, M.A. Henning and Zs. Tuza, Transversals and domination in uniform hypergraphs, European J. Combin. 33 (2012) 62-71] that for k ≥ 5, if H is a hypergraph of order n and size m with all edges of size at least k and with no isolated vertex, then γ(H) ≤ (n + ⌊(k − 3)/2⌋m)/⌊3(k − 1)/2⌋. In this paper, we apply a recent result of the authors on hypergraphs with large transversal number [M.A. Henning and C. Löwenstein, A characterization of hypergraphs that achieve equality in the Chvátal-McDiarmid Theorem, Discrete Math. 323 (2014) 69-75] to characterize the hypergraphs achieving equality in this bound.

  18. Double sampling with multiple imputation to answer large sample meta-research questions: Introduction and illustration by evaluating adherence to two simple CONSORT guidelines

    Directory of Open Access Journals (Sweden)

    Patrice L. Capers

    2015-03-01

    Full Text Available BACKGROUND: Meta-research can involve manual retrieval and evaluation of research, which is resource intensive. Creation of high throughput methods (e.g., search heuristics, crowdsourcing has improved feasibility of large meta-research questions, but possibly at the cost of accuracy. OBJECTIVE: To evaluate the use of double sampling combined with multiple imputation (DS+MI to address meta-research questions, using as an example adherence of PubMed entries to two simple Consolidated Standards of Reporting Trials (CONSORT guidelines for titles and abstracts. METHODS: For the DS large sample, we retrieved all PubMed entries satisfying the filters: RCT; human; abstract available; and English language (n=322,107. For the DS subsample, we randomly sampled 500 entries from the large sample. The large sample was evaluated with a lower rigor, higher throughput (RLOTHI method using search heuristics, while the subsample was evaluated using a higher rigor, lower throughput (RHITLO human rating method. Multiple imputation of the missing-completely-at-random RHITLO data for the large sample was informed by: RHITLO data from the subsample; RLOTHI data from the large sample; whether a study was an RCT; and country and year of publication. RESULTS: The RHITLO and RLOTHI methods in the subsample largely agreed (phi coefficients: title=1.00, abstract=0.92. Compliance with abstract and title criteria has increased over time, with non-US countries improving more rapidly. DS+MI logistic regression estimates were more precise than subsample estimates (e.g., 95% CI for change in title and abstract compliance by Year: subsample RHITLO 1.050-1.174 vs. DS+MI 1.082-1.151. As evidence of improved accuracy, DS+MI coefficient estimates were closer to RHITLO than the large sample RLOTHI. CONCLUSIONS: Our results support our hypothesis that DS+MI would result in improved precision and accuracy. This method is flexible and may provide a practical way to examine large corpora of

  19. Loss of locality in gravitational correlators with a large number of insertions

    Science.gov (United States)

    Ghosh, Sudip; Raju, Suvrat

    2017-09-01

    We review lessons from the AdS/CFT correspondence that indicate that the emergence of locality in quantum gravity is contingent upon considering observables with a small number of insertions. Correlation functions, where the number of insertions scales with a power of the central charge of the CFT, are sensitive to nonlocal effects in the bulk theory, which arise from a combination of the effects of the bulk Gauss law and a breakdown of perturbation theory. To examine whether a similar effect occurs in flat space, we consider the scattering of massless particles in the bosonic string and the superstring in the limit, where the number of external particles, n, becomes very large. We use estimates of the volume of the Weil-Petersson moduli space of punctured Riemann surfaces to argue that string amplitudes grow factorially in this limit. We verify this factorial behavior through an extensive numerical analysis of string amplitudes at large n. Our numerical calculations rely on the observation that, in the large n limit, the string scattering amplitude localizes on the Gross-Mende saddle points, even though individual particle energies are small. This factorial growth implies the breakdown of string perturbation theory for n ∼ (M_pl/E)^(d-2) in d dimensions, where E is the typical individual particle energy. We explore the implications of this breakdown for the black hole information paradox. We show that the loss of locality suggested by this breakdown is precisely sufficient to resolve the cloning and strong subadditivity paradoxes.

  20. Calculation of large Reynolds number two-dimensional flow using discrete vortices with random walk

    International Nuclear Information System (INIS)

    Milinazzo, F.; Saffman, P.G.

    1977-01-01

    The numerical calculation of two-dimensional rotational flow at large Reynolds number is considered. The method of replacing a continuous distribution of vorticity by a finite number, N, of discrete vortices is examined, where the vortices move under their mutually induced velocities plus a random component to simulate effects of viscosity. The accuracy of the method is studied by comparison with the exact solution for the decay of a circular vortex. It is found, and analytical arguments are produced in support, that the quantitative error is significant unless N is large compared with a characteristic Reynolds number. The mutually induced velocities are calculated both by direct summation and by the "cloud in cell" technique. The latter method is found to produce comparable error and to be much faster.

  1. A large number of stepping motor network construction by PLC

    Science.gov (United States)

    Mei, Lin; Zhang, Kai; Hongqiang, Guo

    2017-11-01

    In flexible automated production lines the equipment is complex and the control modes are varied, so realizing the information interaction and orderly control of a large number of stepping and servo motors becomes a difficult control problem. Based on an existing flexible production line, this paper makes a comparative study of its network strategies. Following this research, an Ethernet-plus-PROFIBUS communication configuration based on PROFINET IO and PROFIBUS is proposed, which can effectively improve the efficiency and stability of data interaction among the equipment.

  2. An open-flow pulse ionization chamber for alpha spectrometry of large-area samples

    International Nuclear Information System (INIS)

    Johansson, L.; Roos, B.; Samuelsson, C.

    1992-01-01

    The presented open-flow pulse ionization chamber was developed to make alpha spectrometry on large-area surfaces easy. One side of the chamber is left open, where the sample is to be placed. The sample acts as a chamber wall and thereby defines the detector volume. The sample area can be as large as 400 cm². To prevent air from entering the volume there is a constant gas flow through the detector, entering at the bottom of the chamber and leaking out at the sides of the sample. The method gives good energy resolution and has considerable applicability in retrospective radon research. Alpha spectra obtained in the retrospective measurements originate from 210Po, built up in the sample from radon daughters recoiled into the glass surface. (au)

  3. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number.

    Science.gov (United States)

    Klewicki, J C; Chini, G P; Gibson, J F

    2017-03-13

    Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier-Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted.This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  4. The large number hypothesis and Einstein's theory of gravitation

    International Nuclear Information System (INIS)

    Yun-Kau Lau

    1985-01-01

    In an attempt to reconcile the large number hypothesis (LNH) with Einstein's theory of gravitation, a tentative generalization of Einstein's field equations with time-dependent cosmological and gravitational constants is proposed. A cosmological model consistent with the LNH is deduced. The coupling formula of the cosmological constant with matter is found, and as a consequence, the time-dependent formulae of the cosmological constant and the mean matter density of the Universe at the present epoch are then found. Einstein's theory of gravitation, whether with a zero or nonzero cosmological constant, becomes a limiting case of the new generalized field equations after the early epoch

  5. Fast concentration of dissolved forms of cesium radioisotopes from large seawater samples

    International Nuclear Information System (INIS)

    Jan Kamenik; Henrieta Dulaiova; Ferdinand Sebesta; Kamila St'astna; Czech Technical University, Prague

    2013-01-01

    The method developed for cesium concentration from large freshwater samples was tested and adapted for the analysis of cesium radionuclides in seawater. Concentration of dissolved forms of cesium from large seawater samples (about 100 L) was performed using the composite absorbers AMP-PAN and KNiFC-PAN, with ammonium molybdophosphate and potassium–nickel hexacyanoferrate(II) as active components, respectively, and polyacrylonitrile as the binding polymer. A specially designed chromatography column with a bed volume (BV) of 25 mL allowed fast seawater flow rates (up to 1,200 BV h⁻¹). The recovery yields were determined by ICP-MS analysis of stable cesium added to the seawater samples. Both absorbers proved suitable for cesium concentration from large seawater samples. The KNiFC-PAN material was slightly more effective for cesium concentration from acidified seawater (recovery yield around 93% at 700 BV h⁻¹) and showed similar efficiency for natural seawater. The activity concentrations of 137Cs determined in seawater from the central Pacific Ocean were 1.5 ± 0.1 and 1.4 ± 0.1 Bq m⁻³ for an offshore (January 2012) and a coastal (February 2012) locality, respectively; 134Cs activities were below the detection limit. (author)

  6. A self-sampling method to obtain large volumes of undiluted cervicovaginal secretions.

    Science.gov (United States)

    Boskey, Elizabeth R; Moench, Thomas R; Hees, Paul S; Cone, Richard A

    2003-02-01

    Studies of vaginal physiology and pathophysiology sometimes require larger volumes of undiluted cervicovaginal secretions than can be obtained by current methods. A convenient method for self-sampling these secretions outside a clinical setting can facilitate such studies of reproductive health. The goal was to develop a vaginal self-sampling method for collecting large volumes of undiluted cervicovaginal secretions. A menstrual collection device (the Instead cup) was inserted briefly into the vagina to collect secretions that were then retrieved from the cup by centrifugation in a 50-ml conical tube. All 16 women asked to perform this procedure found it feasible and acceptable. Among 27 samples, an average of 0.5 g of secretions (range, 0.1-1.5 g) was collected. This is a rapid and convenient self-sampling method for obtaining relatively large volumes of undiluted cervicovaginal secretions. It should prove suitable for a wide range of assays, including those involving sexually transmitted diseases, microbicides, vaginal physiology, immunology, and pathophysiology.

  7. Calculating Confidence, Uncertainty, and Numbers of Samples When Using Statistical Sampling Approaches to Characterize and Clear Contaminated Areas

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.; Amidan, Brett G.

    2013-04-27

    This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating (1) the number of samples required to achieve a specified confidence in characterization and clearance decisions, and (2) the confidence in making characterization and clearance decisions for a specified number of samples, for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, which is commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are (1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and (2) combined judgment and random (CJR) sampling during the clearance phase. Typically, if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1. qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account
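
    To make the relationship among confidence, number of samples, and false negative rate concrete, the following is a simplified, textbook-style sketch; it is not the report's exact hotspot or CJR formulas, and the assumed contaminated fraction is illustrative.

```python
# Simplified illustration of the relationship among confidence, number of
# samples, and the false negative rate (FNR). These are textbook-style
# formulas, not the report's exact hotspot or CJR equations.
import math

def confidence_all_negative(n, p_contaminated, fnr=0.0):
    """Confidence that at least one positive would have occurred if a fraction
    p_contaminated of sampling locations were truly contaminated."""
    p_detect_one = p_contaminated * (1.0 - fnr)
    return 1.0 - (1.0 - p_detect_one) ** n

def samples_needed(confidence, p_contaminated, fnr=0.0):
    """Smallest n giving the requested confidence when all results are negative."""
    p_detect_one = p_contaminated * (1.0 - fnr)
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_detect_one))

print(samples_needed(0.95, p_contaminated=0.01, fnr=0.0))   # ~299 samples
print(samples_needed(0.95, p_contaminated=0.01, fnr=0.1))   # a nonzero FNR raises n
```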

  8. Uncertainty budget in internal monostandard NAA for small and large size samples analysis

    International Nuclear Information System (INIS)

    Dasari, K.B.; Acharya, R.

    2014-01-01

    Evaluation of the total uncertainty budget of a determined concentration value is important under a quality assurance programme. Concentration calculations in NAA are carried out by relative NAA or by the k0-based internal monostandard NAA (IM-NAA) method. The IM-NAA method has been used for the analysis of small and large samples of clay pottery. An attempt was made to identify the uncertainty components in IM-NAA, and the uncertainty budget for La in both small and large size samples was evaluated and compared. (author)

  9. Hierarchies in Quantum Gravity: Large Numbers, Small Numbers, and Axions

    Science.gov (United States)

    Stout, John Eldon

    Our knowledge of the physical world is mediated by relatively simple, effective descriptions of complex processes. By their very nature, these effective theories obscure any phenomena outside their finite range of validity, discarding information crucial to understanding the full, quantum gravitational theory. However, we may gain enormous insight into the full theory by understanding how effective theories with extreme characteristics--for example, those which realize large-field inflation or have disparate hierarchies of scales--can be naturally realized in consistent theories of quantum gravity. The work in this dissertation focuses on understanding the quantum gravitational constraints on these "extreme" theories in well-controlled corners of string theory. Axion monodromy provides one mechanism for realizing large-field inflation in quantum gravity. These models spontaneously break an axion's discrete shift symmetry and, assuming that the corrections induced by this breaking remain small throughout the excursion, create a long, quasi-flat direction in field space. This weakly-broken shift symmetry has been used to construct a dynamical solution to the Higgs hierarchy problem, dubbed the "relaxion." We study this relaxion mechanism and show that--without major modifications--it can not be naturally embedded within string theory. In particular, we find corrections to the relaxion potential--due to the ten-dimensional backreaction of monodromy charge--that conflict with naive notions of technical naturalness and render the mechanism ineffective. The super-Planckian field displacements necessary for large-field inflation may also be realized via the collective motion of many aligned axions. However, it is not clear that string theory provides the structures necessary for this to occur. We search for these structures by explicitly constructing the leading order potential for C4 axions and computing the maximum possible field displacement in all compactifications of

  10. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number

    Science.gov (United States)

    Klewicki, J. C.; Chini, G. P.; Gibson, J. F.

    2017-01-01

    Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier–Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167585

  11. Law of Large Numbers: the Theory, Applications and Technology-based Education.

    Science.gov (United States)

    Dinov, Ivo D; Christou, Nicolas; Gould, Robert

    2009-03-01

    Modern approaches for technology-based blended education utilize a variety of recently developed novel pedagogical, computational and network resources. Such attempts employ technology to deliver integrated, dynamically-linked, interactive-content and heterogeneous learning environments, which may improve student comprehension and information retention. In this paper, we describe one such innovative effort of using technological tools to expose students in probability and statistics courses to the theory, practice and usability of the Law of Large Numbers (LLN). We base our approach on integrating pedagogical instruments with the computational libraries developed by the Statistics Online Computational Resource (www.SOCR.ucla.edu). To achieve this merger we designed a new interactive Java applet and a corresponding demonstration activity that illustrate the concept and the applications of the LLN. The LLN applet and activity have common goals - to provide graphical representation of the LLN principle, build lasting student intuition and present the common misconceptions about the law of large numbers. Both the SOCR LLN applet and activity are freely available online to the community to test, validate and extend (Applet: http://socr.ucla.edu/htmls/exp/Coin_Toss_LLN_Experiment.html, and Activity: http://wiki.stat.ucla.edu/socr/index.php/SOCR_EduMaterials_Activities_LLN).
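
    The behaviour demonstrated by the SOCR coin-toss applet can be reproduced in a few lines. The sketch below is an assumed minimal analogue, not the applet's own code.

```python
# A small numerical illustration of the Law of Large Numbers in the spirit of
# the SOCR coin-toss applet: the running sample proportion of heads converges
# to the true probability as the number of tosses grows.
import numpy as np

rng = np.random.default_rng(42)
p_true, n_tosses = 0.5, 100_000

tosses = rng.binomial(1, p_true, size=n_tosses)
running_mean = np.cumsum(tosses) / np.arange(1, n_tosses + 1)

for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"n={n:>6d}  sample proportion={running_mean[n - 1]:.4f}")
# The absolute deviation |proportion - 0.5| shrinks roughly like 1/sqrt(n).
```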

  12. Exploring Technostress: Results of a Large Sample Factor Analysis

    Directory of Open Access Journals (Sweden)

    Steponas Jonušauskas

    2016-06-01

    Full Text Available With reference to the results of a large sample factor analysis, the article aims to propose a framework for examining technostress in a population. A survey and principal component analysis of a sample of 1013 individuals who use ICT in their everyday work were carried out in the research. Thirteen factors combine 68 questions and explain 59.13 per cent of the answer dispersion. Based on the factor analysis, the questionnaire was reframed and prepared so that the respondents’ answers could be analyzed soundly, revealing technostress causes and consequences as well as technostress prevalence in the population in a statistically validated pattern. The key elements of technostress identified by the factor analysis can serve for the construction of technostress measurement scales in further research.
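
    As a rough illustration of the kind of dimensionality reduction underlying such a factor analysis, the sketch below runs a principal component analysis on simulated Likert-style responses (all sizes and loadings are made up) and reports how many components are needed to explain roughly 59% of the variance.

```python
# Minimal sketch (simulated survey responses, not the study's data) of the
# dimensionality-reduction step behind a factor analysis of a questionnaire.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_respondents, n_items, n_latent = 1013, 68, 13

# Responses driven by a handful of latent "technostress" factors plus noise.
latent = rng.normal(size=(n_respondents, n_latent))
loadings = rng.normal(size=(n_latent, n_items))
responses = latent @ loadings + rng.normal(scale=2.0, size=(n_respondents, n_items))

pca = PCA().fit(responses)
cum = np.cumsum(pca.explained_variance_ratio_)
print("components needed for 59% of variance:", int(np.searchsorted(cum, 0.59) + 1))
```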

  13. Break down of the law of large numbers in Josephson junction series arrays

    International Nuclear Information System (INIS)

    Dominguez, D.; Cerdeira, H.A.

    1995-01-01

    We study underdamped Josephson junction series arrays that are globally coupled through a resistive shunting load and driven by an rf bias current. We find that they can be an experimental realization of many phenomena currently studied in globally coupled logistic maps. We find coherent, ordered, partially ordered and turbulent phases in the IV characteristics of the array. The ordered phase corresponds to giant Shapiro steps. In the turbulent phase there is a saturation of the broad band noise for a large number of junctions. This corresponds to a break down of the law of large numbers as seen in globally coupled maps. Coexisting with this, we find an emergence of novel pseudo-steps in the IV characteristics. This effect can be experimentally distinguished from the true Shapiro steps, which do not have broad band noise emission. (author). 21 refs, 5 figs
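
    The breakdown of the law of large numbers mentioned here is commonly illustrated with globally coupled logistic maps, which the authors use as an analogy. The following toy sketch (assumed parameter values, not the Josephson-array model itself) shows that the variance of the mean field does not decay like 1/N.

```python
# Toy illustration (assumed parameters) of the "breakdown of the law of large
# numbers" seen in globally coupled logistic maps: the variance of the mean
# field saturates instead of decaying like 1/N, mirroring the saturation of
# broad-band noise reported for large Josephson junction arrays.
import numpy as np

def mean_field_variance(N, a=1.99, eps=0.1, steps=5000, transient=1000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, size=N)
    h_values = []
    for t in range(steps):
        fx = 1.0 - a * x**2                 # local logistic dynamics
        h = fx.mean()                       # global (mean-field) coupling
        x = (1.0 - eps) * fx + eps * h
        if t >= transient:
            h_values.append(h)
    return np.var(h_values)

for N in (100, 1000, 10_000):
    print(N, mean_field_variance(N))
# For statistically independent elements the variance would fall like 1/N;
# here it stays of comparable size, signalling hidden coherence.
```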

  14. Influence of sampling interval and number of projections on the quality of SR-XFMT reconstruction

    International Nuclear Information System (INIS)

    Deng Biao; Yu Xiaohan; Xu Hongjie

    2007-01-01

    Synchrotron Radiation based X-ray Fluorescent Microtomography (SR-XFMT) is a nondestructive technique for detecting elemental composition and distribution inside a specimen with high spatial resolution and sensitivity. In this paper, computer simulation of SR-XFMT experiment is performed. The influence of the sampling interval and the number of projections on the quality of SR-XFMT image reconstruction is analyzed. It is found that the sampling interval has greater effect on the quality of reconstruction than the number of projections. (authors)
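
    A rough feel for the effect of the number of projections can be obtained with a generic transmission-CT phantom, which is only a stand-in for the SR-XFMT simulation described in the paper.

```python
# Rough illustration, using a generic CT phantom rather than SR-XFMT data, of
# how the number of projections affects reconstruction quality.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)      # 200x200 phantom

for n_proj in (18, 45, 90, 180):
    theta = np.linspace(0.0, 180.0, n_proj, endpoint=False)
    sinogram = radon(image, theta=theta)
    recon = iradon(sinogram, theta=theta)        # filtered backprojection
    rmse = np.sqrt(np.mean((recon - image) ** 2))
    print(f"projections={n_proj:>3d}  RMSE={rmse:.4f}")
# Fewer projections leave streak artefacts and raise the error; a similar sweep
# over the detector sampling interval can be run by downsampling the sinogram.
```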

  15. The lore of large numbers: some historical background to the anthropic principle

    International Nuclear Information System (INIS)

    Barrow, J.D.

    1981-01-01

    A description is given of how the study of numerological coincidences in physics and cosmology led first to the Large Numbers Hypothesis of Dirac and then to the suggestion of the Anthropic Principle in a variety of forms. The early history of 'coincidences' is discussed together with the work of Weyl, Eddington and Dirac. (author)

  16. Conformal window in QCD for large numbers of colors and flavors

    International Nuclear Information System (INIS)

    Zhitnitsky, Ariel R.

    2014-01-01

    We conjecture that the phase transitions in QCD at large number of colors N≫1 is triggered by the drastic change in the instanton density. As a result of it, all physical observables also experience some sharp modification in the θ behavior. This conjecture is motivated by the holographic model of QCD where confinement–deconfinement phase transition indeed happens precisely at temperature T=T c where θ-dependence of the vacuum energy experiences a sudden change in behavior: from N 2 cos(θ/N) at T c to cosθexp(−N) at T>T c . This conjecture is also supported by recent lattice studies. We employ this conjecture to study a possible phase transition as a function of κ≡N f /N from confinement to conformal phase in the Veneziano limit N f ∼N when number of flavors and colors are large, but the ratio κ is finite. Technically, we consider an operator which gets its expectation value solely from non-perturbative instanton effects. When κ exceeds some critical value κ>κ c the integral over instanton size is dominated by small-size instantons, making the instanton computations reliable with expected exp(−N) behavior. However, when κ c , the integral over instanton size is dominated by large-size instantons, and the instanton expansion breaks down. This regime with κ c corresponds to the confinement phase. We also compute the variation of the critical κ c (T,μ) when the temperature and chemical potential T,μ≪Λ QCD slightly vary. We also discuss the scaling (x i −x j ) −γ det in the conformal phase

  17. Automated, feature-based image alignment for high-resolution imaging mass spectrometry of large biological samples

    NARCIS (Netherlands)

    Broersen, A.; Liere, van R.; Altelaar, A.F.M.; Heeren, R.M.A.; McDonnell, L.A.

    2008-01-01

    High-resolution imaging mass spectrometry of large biological samples is the goal of several research groups. In mosaic imaging, the most common method, the large sample is divided into a mosaic of small areas that are then analyzed with high resolution. Here we present an automated alignment

  18. Sample preparation

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    Sample preparation prior to HPLC analysis is certainly one of the most important steps to consider in trace or ultratrace analysis. For many years scientists have tried to simplify the sample preparation process. It is rarely possible to inject a neat liquid sample, or a sample whose preparation is no more complex than dissolution in a given solvent. The latter process alone can remove insoluble materials, which is especially helpful with samples in complex matrices if other interactions do not affect extraction. Here, it is very likely that a large number of components will not dissolve and are therefore eliminated by a simple filtration process. In most cases, the process of sample preparation is not as simple as dissolution of the component of interest. At times, enrichment is necessary, that is, the component of interest is present in a very large volume or mass of material. It needs to be concentrated in some manner so that a small volume of the concentrated or enriched sample can be injected into HPLC. 88 refs

  19. Sampling large landscapes with small-scale stratification-User's Manual

    Science.gov (United States)

    Bart, Jonathan

    2011-01-01

    This manual explains procedures for partitioning a large landscape into plots, assigning the plots to strata, and selecting plots in each stratum to be surveyed. These steps are referred to as the "sampling large landscapes (SLL) process." We assume that users of the manual have a moderate knowledge of ArcGIS and Microsoft® Excel. The manual is written for a single user but in many cases, some steps will be carried out by a biologist designing the survey and some steps will be carried out by a quantitative assistant. Thus, the manual essentially may be passed back and forth between these users. The SLL process primarily has been used to survey birds, and we refer to birds as subjects of the counts. The process, however, could be used to count any objects.
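
    The core SLL steps of assigning plots to strata and drawing a random sample within each stratum can be sketched outside ArcGIS/Excel as follows; the stratification variable and allocation below are hypothetical.

```python
# Minimal sketch (hypothetical strata and plot attributes) of the core SLL
# steps: partition a landscape into plots, assign plots to strata, and draw a
# random sample of plots within each stratum.
import numpy as np

rng = np.random.default_rng(1)
n_plots = 10_000                                # plots covering the landscape
elevation = rng.uniform(0, 3000, size=n_plots)  # attribute used for stratification

# Assign plots to strata, e.g. low / mid / high elevation
strata = np.digitize(elevation, bins=[1000, 2000])   # values 0, 1, 2

sample_per_stratum = {0: 40, 1: 30, 2: 30}           # allocation chosen by the user
selected = {}
for s, n_s in sample_per_stratum.items():
    plots_in_stratum = np.flatnonzero(strata == s)
    selected[s] = rng.choice(plots_in_stratum, size=n_s, replace=False)
    print(f"stratum {s}: {plots_in_stratum.size} plots, {n_s} sampled")
```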

  20. Test sample handling apparatus

    International Nuclear Information System (INIS)

    1981-01-01

    A test sample handling apparatus using automatic scintillation counting for gamma detection, for use in such fields as radioimmunoassay, is described. The apparatus automatically and continuously counts large numbers of samples rapidly and efficiently by the simultaneous counting of two samples. By means of sequential ordering of non-sequential counting data, it is possible to obtain precisely ordered data while utilizing sample carrier holders having a minimum length. (U.K.)

  1. Absolute activity determinations on large volume geological samples independent of self-absorption effects

    International Nuclear Information System (INIS)

    Wilson, W.E.

    1980-01-01

    This paper describes a method for measuring the absolute activity of large volume samples by γ-spectroscopy independent of self-absorption effects using Ge detectors. The method yields accurate matrix independent results at the expense of replicative counting of the unknown sample. (orig./HP)

  2. Procedure for plutonium analysis of large (100g) soil and sediment samples

    International Nuclear Information System (INIS)

    Meadows, J.W.T.; Schweiger, J.S.; Mendoza, B.; Stone, R.

    1975-01-01

    A method for the complete dissolution of large soil or sediment samples is described. This method is in routine usage at Lawrence Livermore Laboratory for the analysis of fall-out levels of Pu in soils and sediments. Intercomparison with partial dissolution (leach) techniques shows the complete dissolution method to be superior for the determination of plutonium in a wide variety of environmental samples. (author)

  3. Effects of the number of people on efficient capture and sample collection: A lion case study

    Directory of Open Access Journals (Sweden)

    Sam M. Ferreira

    2013-05-01

    Full Text Available Certain carnivore research projects and approaches depend on successful capture of individuals of interest. The number of people present at a capture site may determine success of a capture. In this study 36 lion capture cases in the Kruger National Park were used to evaluate whether the number of people present at a capture site influenced lion response rates and whether the number of people at a sampling site influenced the time it took to process the collected samples. The analyses suggest that when nine or fewer people were present, lions appeared faster at a call-up locality compared with when there were more than nine people. The number of people, however, did not influence the time it took to process the lions. It is proposed that efficient lion capturing should spatially separate capture and processing sites and minimise the number of people at a capture site.

  4. Effects of the number of people on efficient capture and sample collection: a lion case study.

    Science.gov (United States)

    Ferreira, Sam M; Maruping, Nkabeng T; Schoultz, Darius; Smit, Travis R

    2013-05-24

    Certain carnivore research projects and approaches depend on successful capture of individuals of interest. The number of people present at a capture site may determine success of a capture. In this study 36 lion capture cases in the Kruger National Park were used to evaluate whether the number of people present at a capture site influenced lion response rates and whether the number of people at a sampling site influenced the time it took to process the collected samples. The analyses suggest that when nine or fewer people were present, lions appeared faster at a call-up locality compared with when there were more than nine people. The number of people, however, did not influence the time it took to process the lions. It is proposed that efficient lion capturing should spatially separate capture and processing sites and minimise the number of people at a capture site.

  5. A modified large number theory with constant G

    Science.gov (United States)

    Recami, Erasmo

    1983-03-01

    The inspiring “numerology” uncovered by Dirac, Eddington, Weyl, et al. can be explained and derived when it is slightly modified so as to connect the “gravitational world” (cosmos) with the “strong world” (hadron), rather than with the electromagnetic one. The aim of this note is to show the following. In the present approach to the “Large Number Theory,” cosmos and hadrons are considered to be (finite) similar systems, so that the ratio R̄/r̄ of the cosmos typical length R̄ to the hadron typical length r̄ is constant in time (for instance, if both cosmos and hadrons undergo an expansion/contraction cycle—according to the “cyclical big-bang” hypothesis—then R̄ and r̄ can be chosen to be the maximum radii, or the average radii). As a consequence, the gravitational constant G turns out to be independent of time. The present note is based on work done in collaboration with P. Caldirola, G. D. Maccarrone, and M. Pavšič.

  6. CopyNumber450kCancer: baseline correction for accurate copy number calling from the 450k methylation array.

    Science.gov (United States)

    Marzouka, Nour-Al-Dain; Nordlund, Jessica; Bäcklin, Christofer L; Lönnerholm, Gudmar; Syvänen, Ann-Christine; Carlsson Almlöf, Jonas

    2016-04-01

    The Illumina Infinium HumanMethylation450 BeadChip (450k) is widely used for the evaluation of DNA methylation levels in large-scale datasets, particularly in cancer. The 450k design allows copy number variant (CNV) calling using existing bioinformatics tools. However, in cancer samples, numerous large-scale aberrations cause shifting in the probe intensities and may thereby result in erroneous CNV calling, so a baseline correction process is needed. We suggest using the maximum peak of the probe segment density to correct the intensity shift in cancer samples. CopyNumber450kCancer is implemented as an R package. The package with examples can be downloaded at http://cran.r-project.org. Contact: nour.marzouka@medsci.uu.se. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.

  7. The challenge for genetic epidemiologists: how to analyze large numbers of SNPs in relation to complex diseases.

    Science.gov (United States)

    Heidema, A Geert; Boer, Jolanda M A; Nagelkerke, Nico; Mariman, Edwin C M; van der A, Daphne L; Feskens, Edith J M

    2006-04-21

    Genetic epidemiologists have taken the challenge to identify genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods have been developed for analyzing the relation between large numbers of genetic and environmental predictors to disease or disease-related variables in genetic association studies. In this commentary we discuss logistic regression analysis, neural networks, including the parameter decreasing method (PDM) and genetic programming optimized neural networks (GPNN) and several non-parametric methods, which include the set association approach, combinatorial partitioning method (CPM), restricted partitioning method (RPM), multifactor dimensionality reduction (MDR) method and the random forests approach. The relative strengths and weaknesses of these methods are highlighted. Logistic regression and neural networks can handle only a limited number of predictor variables, depending on the number of observations in the dataset. Therefore, they are less useful than the non-parametric methods to approach association studies with large numbers of predictor variables. GPNN on the other hand may be a useful approach to select and model important predictors, but its performance to select the important effects in the presence of large numbers of predictors needs to be examined. Both the set association approach and random forests approach are able to handle a large number of predictors and are useful in reducing these predictors to a subset of predictors with an important contribution to disease. The combinatorial methods give more insight in combination patterns for sets of genetic and/or environmental predictor variables that may be related to the outcome variable. As the non-parametric methods have different strengths and weaknesses we conclude that to approach genetic association
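
    As a small illustration of one of the non-parametric approaches mentioned (the random forests approach), the sketch below ranks simulated SNPs by importance for a binary outcome; the data and effect sizes are invented, and nothing here reproduces the commentary's analyses.

```python
# Sketch of the random forests approach for screening a large number of SNPs:
# rank simulated genotypes by importance for a binary disease outcome.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_subjects, n_snps = 1000, 500
X = rng.integers(0, 3, size=(n_subjects, n_snps))    # genotypes coded 0/1/2

# Only the first two SNPs (and their interaction) influence disease risk here.
logit = -1.0 + 0.6 * X[:, 0] + 0.6 * X[:, 1] + 0.4 * (X[:, 0] * X[:, 1] == 4)
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
top = np.argsort(forest.feature_importances_)[::-1][:10]
print("top-ranked SNP indices:", top)   # the causal SNPs 0 and 1 should surface
```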

  8. Fast sampling from a Hidden Markov Model posterior for large data

    DEFF Research Database (Denmark)

    Bonnevie, Rasmus; Hansen, Lars Kai

    2014-01-01

    Hidden Markov Models are of interest in a broad set of applications including modern data driven systems involving very large data sets. However, approximate inference methods based on Bayesian averaging are precluded in such applications as each sampling step requires a full sweep over the data...

  9. 17 CFR Appendix B to Part 420 - Sample Large Position Report

    Science.gov (United States)

    2010-04-01

    17 CFR Part 420, Appendix B—Sample Large Position Report (Commodity and Securities Exchanges, Department of the Treasury, as of 2010-04-01). The excerpted portion of the sample form covers amounts held as collateral for financial derivatives and other securities transactions, a total, and memorandum items.

  10. Sampling of charged liquid radwaste stored in large tanks

    International Nuclear Information System (INIS)

    Tchemitcheff, E.; Domage, M.; Bernard-Bruls, X.

    1995-01-01

    The final safe disposal of radwaste, in France and elsewhere, entails, for liquid effluents, their conversion to a stable solid form, hence implying their conditioning. The production of conditioned waste with the requisite quality, traceability of the characteristics of the packages produced, and safe operation of the conditioning processes, implies at least the accurate knowledge of the chemical and radiochemical properties of the effluents concerned. The problem in sampling the normally charged effluents is aggravated for effluents that have been stored for several years in very large tanks, without stirring and retrieval systems. In 1992, SGN was asked by Cogema to study the retrieval and conditioning of LL/ML chemical sludge and spent ion-exchange resins produced in the operation of the UP2 400 plant at La Hague, and stored temporarily in rectangular silos and tanks. The sampling aspect was crucial for validating the inventories, identifying the problems liable to arise in the aging of the effluents, dimensioning the retrieval systems and checking the transferability and compatibility with the downstream conditioning process. Two innovative self-contained systems were developed and built for sampling operations, positioned above the tanks concerned. Both systems have been operated in active conditions and have proved totally satisfactory for taking representative samples. Today SGN can propose industrially proven overall solutions, adaptable to the various constraints of many spent fuel cycle operators

  11. Vicious random walkers in the limit of a large number of walkers

    International Nuclear Information System (INIS)

    Forrester, P.J.

    1989-01-01

    The vicious random walker problem on a line is studied in the limit of a large number of walkers. The multidimensional integral representing the probability that the p walkers will survive a time t (denoted P_t^(p)) is shown to be analogous to the partition function of a particular one-component Coulomb gas. By assuming the existence of the thermodynamic limit for the Coulomb gas, one can deduce asymptotic formulas for P_t^(p) in the large-p, large-t limit. A straightforward analysis gives rigorous asymptotic formulas for the probability that after a time t the walkers are in their initial configuration (this event is termed a reunion). Consequently, asymptotic formulas for the conditional probability of a reunion, given that all walkers survive, are derived. Also, an asymptotic formula for the conditional probability density that any walker will arrive at a particular point in time t, given that all p walkers survive, is calculated in the limit t >> p.

  12. Efficiently sampling conformations and pathways using the concurrent adaptive sampling (CAS) algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Surl-Hee; Grate, Jay W.; Darve, Eric F.

    2017-08-21

    Molecular dynamics (MD) simulations are useful in obtaining thermodynamic and kinetic properties of bio-molecules but are limited by the timescale barrier, i.e., we may be unable to efficiently obtain properties because we need to run microseconds or longer simulations using femtosecond time steps. While there are several existing methods to overcome this timescale barrier and efficiently sample thermodynamic and/or kinetic properties, problems remain in regard to being able to sample unknown systems, deal with high-dimensional spaces of collective variables, and focus the computational effort on slow timescales. Hence, a new sampling method, called the “Concurrent Adaptive Sampling (CAS) algorithm,” has been developed to tackle these three issues and efficiently obtain conformations and pathways. The method is not constrained to use only one or two collective variables, unlike most reaction coordinate-dependent methods. Instead, it can use a large number of collective variables and uses macrostates (a partition of the collective variable space) to enhance the sampling. The exploration is done by running a large number of short simulations, and a clustering technique is used to accelerate the sampling. In this paper, we introduce the new methodology and show results from two-dimensional models and bio-molecules, such as penta-alanine and triazine polymer
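
    The run-many-short-trajectories-then-cluster idea can be caricatured on a one-dimensional double well. The sketch below is not the CAS algorithm itself (there are no macrostate weights and no statistical reweighting); it only illustrates how clustering endpoints along a collective variable and restarting walkers from every cluster spreads the sampling effort.

```python
# Toy caricature (not the CAS algorithm: no macrostate weights, no reweighting)
# of running many short trajectories and clustering them along a collective
# variable, for a particle in a 1D double well.
import numpy as np
from sklearn.cluster import KMeans

def short_trajectory(x0, n_steps=200, dt=1e-3, kT=0.5, rng=None):
    """Overdamped Langevin dynamics in V(x) = (x^2 - 1)^2."""
    x = x0
    for _ in range(n_steps):
        force = -4.0 * x * (x**2 - 1.0)
        x = x + force * dt + np.sqrt(2.0 * kT * dt) * rng.normal()
    return x

rng = np.random.default_rng(0)
walkers = np.full(50, -1.0)                      # all start in the left well

for rnd in range(10):
    endpoints = np.array([short_trajectory(x, rng=rng) for x in walkers])
    # Cluster endpoints along the collective variable and restart an equal
    # number of walkers from each cluster, so sparsely visited regions keep
    # receiving simulation effort.
    k = min(5, len(np.unique(np.round(endpoints, 2))))
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
        endpoints.reshape(-1, 1))
    walkers = np.concatenate([
        rng.choice(endpoints[labels == c], size=len(walkers) // k)
        for c in range(k)])
    print(f"round {rnd}: fraction in right well = {(walkers > 0).mean():.2f}")
```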

  13. Development of Large Sample Neutron Activation Technique for New Applications in Thailand

    International Nuclear Information System (INIS)

    Laoharojanaphand, S.; Tippayakul, C.; Wonglee, S.; Channuie, J.

    2018-01-01

    The development of Large Sample Neutron Activation Analysis (LSNAA) in Thailand is presented in this paper. The technique was first developed with rice as the test subject. The Thai Research Reactor-1/Modification 1 (TRR-1/M1) was used as the neutron source. The first step was to select and characterize an appropriate irradiation facility for the research. An out-core irradiation facility (A4 position) was attempted first. The results obtained with the A4 facility were then used as guides for the subsequent experiments with the thermal column facility. The characterization of the thermal column was performed with Cu wire to determine the spatial flux distribution with and without the rice sample. The flux depression without the rice sample was observed to be less than 30%, while with the rice sample it increased to as much as 60%. Flux monitors placed inside the rice sample were used to determine the average flux over the sample. The gamma-ray self-shielding effect during gamma measurement was corrected using Monte Carlo simulation. The ratio between the efficiencies of the volume source and the point source for each energy point was calculated with the MCNPX code. The research team adopted the k0-NAA methodology to calculate the element concentrations. The k0-NAA program developed by the IAEA was set up to simulate the conditions of the irradiation and measurement facilities used in this research. The element concentrations in the bulk rice sample were then calculated, taking into account the flux depression and gamma efficiency corrections. At the moment, the results still show large discrepancies with respect to the reference values; further validation work will be performed to identify the sources of error. Moreover, this LSNAA technique was introduced for the activation analysis of the IAEA archaeological mock-up. The results are provided in this report. (author)

  14. The problem of large samples. An activation analysis study of electronic waste material

    International Nuclear Information System (INIS)

    Segebade, C.; Goerner, W.; Bode, P.

    2007-01-01

    Large-volume instrumental photon activation analysis (IPAA) was used for the investigation of shredded electronic waste material. Sample masses from 1 to 150 grams were analyzed to obtain an estimate of the minimum sample size that must be taken to achieve a representativeness of the results that is satisfactory for a defined investigation task. Furthermore, the influence of irradiation and measurement parameters on the quality of the analytical results was studied. Finally, the analytical data obtained from IPAA and instrumental neutron activation analysis (INAA), both carried out in a large-volume mode, were compared. Only part of the values was found to be in satisfactory agreement. (author)

  15. System for high-voltage control of detectors with a large number of photomultipliers

    International Nuclear Information System (INIS)

    Donskov, S.V.; Kachanov, V.A.; Mikhajlov, Yu.V.

    1985-01-01

    A simple and inexpensive on-line system for high-voltage control, designed for detectors with a large number of photomultipliers, has been developed and manufactured. It has been developed for the GAMC type hodoscopic electromagnetic calorimeters, comprising up to 4 thousand photomultipliers. High-voltage variation is performed by a high-speed potentiometer rotated by a microengine. Block diagrams of the computer control electronics are presented. The high-voltage control system has been used for five years in the IHEP and CERN accelerator experiments. The operating experience has shown that it is quite simple and convenient in operation. With about 6 thousand controlled channels in the two experiments, no potentiometer or microengine failures were observed.

  16. Rayleigh- and Prandtl-number dependence of the large-scale flow-structure in weakly-rotating turbulent thermal convection

    Science.gov (United States)

    Weiss, Stephan; Wei, Ping; Ahlers, Guenter

    2015-11-01

    Turbulent thermal convection under rotation shows a remarkable variety of flow states. The Nusselt number (Nu) at slow rotation rates (expressed through the dimensionless inverse Rossby number 1/Ro), for example, is not a monotonic function of 1/Ro. Different 1/Ro ranges can be observed with different slopes ∂Nu/∂(1/Ro). Some of these ranges are connected by sharp transitions where ∂Nu/∂(1/Ro) changes discontinuously. We investigate the different regimes in cylindrical samples of aspect ratio Γ = 1 by measuring temperatures at the sidewall of the sample for various Prandtl numbers in a range beginning at 3. Work supported by the Deutsche Forschungsgemeinschaft.

  17. Large sample neutron activation analysis: establishment at CDTN/CNEN, Brazil

    Energy Technology Data Exchange (ETDEWEB)

    Menezes, Maria Angela de B.C., E-mail: menezes@cdtn.b [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil); Jacimovic, Radojko, E-mail: radojko.jacimovic@ijs.s [Jozef Stefan Institute, Ljubljana (Slovenia). Dept. of Environmental Sciences. Group for Radiochemistry and Radioecology

    2011-07-01

    In order to improve the application of the neutron activation technique at CDTN/CNEN, large sample instrumental neutron activation analysis is being established under the IAEA BRA 14798 and FAPEMIG APQ-01259-09 projects. This procedure, LS-INAA, usually requires special facilities for the activation as well as for the detection. However, the TRIGA Mark I IPR R1 reactor at CDTN/CNEN has not been adapted for such irradiation, and the usual gamma spectrometry has been carried out. To start the establishment of the LS-INAA, a 5 g sample of the IAEA/Soil 7 reference material was analyzed by the k0-standardized method. This paper addresses the detector efficiency over the volume source using the KayWin v2.23 and ANGLE V3.0 software. (author)

  18. Numerical and analytical approaches to an advection-diffusion problem at small Reynolds number and large Péclet number

    Science.gov (United States)

    Fuller, Nathaniel J.; Licata, Nicholas A.

    2018-05-01

    Obtaining a detailed understanding of the physical interactions between a cell and its environment often requires information about the flow of fluid surrounding the cell. Cells must be able to effectively absorb and discard material in order to survive. Strategies for nutrient acquisition and toxin disposal, which have been evolutionarily selected for their efficacy, should reflect knowledge of the physics underlying this mass transport problem. Motivated by these considerations, in this paper we discuss the results from an undergraduate research project on the advection-diffusion equation at small Reynolds number and large Péclet number. In particular, we consider the problem of mass transport for a Stokesian spherical swimmer. We approach the problem numerically and analytically through a rescaling of the concentration boundary layer. A biophysically motivated first-passage problem for the absorption of material by the swimming cell demonstrates quantitative agreement between the numerical and analytical approaches. We conclude by discussing the connections between our results and the design of smart toxin disposal systems.

  19. Big Data, Small Sample.

    Science.gov (United States)

    Gerlovina, Inna; van der Laan, Mark J; Hubbard, Alan

    2017-05-20

    Multiple comparisons and small sample size, common characteristics of many types of "Big Data" including those that are produced by genomic studies, present specific challenges that affect reliability of inference. Use of multiple testing procedures necessitates calculation of very small tail probabilities of a test statistic distribution. Results based on large deviation theory provide a formal condition that is necessary to guarantee error rate control given practical sample sizes, linking the number of tests and the sample size; this condition, however, is rarely satisfied. Using methods that are based on Edgeworth expansions (relying especially on the work of Peter Hall), we explore the impact of departures of sampling distributions from typical assumptions on actual error rates. Our investigation illustrates how far the actual error rates can be from the declared nominal levels, suggesting potentially wide-spread problems with error rate control, specifically excessive false positives. This is an important factor that contributes to "reproducibility crisis". We also review some other commonly used methods (such as permutation and methods based on finite sampling inequalities) in their application to multiple testing/small sample data. We point out that Edgeworth expansions, providing higher order approximations to the sampling distribution, offer a promising direction for data analysis that could improve reliability of studies relying on large numbers of comparisons with modest sample sizes.
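
    The calibration problem described here is easy to reproduce numerically. In the sketch below every null hypothesis is true, the data are skewed, and the sample size is tiny; the rejection rates at small nominal levels then tend to drift away from their nominal values (the distributions and sizes are illustrative choices, not the paper's examples).

```python
# Quick simulation in the spirit of the paper: with tiny samples, the extreme
# tail probabilities needed for multiple-testing corrections can be badly
# mis-calibrated. Every null hypothesis is true, the data are skewed
# (exponential), and the one-sample t-test is applied with n = 5.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_tests, n = 200_000, 5
x = rng.exponential(1.0, size=(n_tests, n))          # true mean is exactly 1
_, pvals = stats.ttest_1samp(x, popmean=1.0, axis=1)

for nominal in (1e-2, 1e-3, 1e-4):
    observed = (pvals < nominal).mean()
    print(f"nominal level {nominal:g}: observed rejection rate {observed:.2e}")
# The smaller the nominal level, the larger the relative discrepancy tends to
# be, which is exactly the regime multiple-testing procedures rely on.
```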

  20. Sampling Number Effects in 2D and Range Imaging of Range-gated Acquisition

    International Nuclear Information System (INIS)

    Kwon, Seong-Ouk; Park, Seung-Kyu; Baik, Sung-Hoon; Cho, Jai-Wan; Jeong, Kyung-Min

    2015-01-01

    In this paper, we analyzed the effect of the number of sampled images on the construction of a 2D image and a range image from acquired RGI images, using an RGI vision system. As a result, 2D image quality did not depend much on the number of sampled images but rather on how well effective RGI images were extracted. However, the number of RGI images was important for making a range image, because range image quality was proportional to the number of RGI images. Image acquisition in nuclear-industry monitoring areas is an important function for safety inspection and for preparing appropriate control plans. To overcome the loss of visibility caused by airborne obstacle particles, vision systems should have extra functions, such as active illumination that penetrates the airborne disturbance particles. One of these powerful active vision systems is the range-gated imaging system. A vision system based on range-gated imaging can acquire image data in rainy or smoky environments. Range-gated imaging (RGI) is a direct active visualization technique using a highly sensitive image sensor and a high-intensity illuminant. Currently, the range-gated imaging technique, providing 2D and 3D images, is one of the emerging active vision technologies. The range-gated imaging system obtains vision information by summing time-sliced vision images. In the RGI system, a high-intensity illuminant flashes for an ultra-short time and a highly sensitive image sensor is gated with an ultra-short exposure time so that only the illumination light is captured. Here, the illuminant illuminates objects by flashing strong light through the airborne disturbance particles. Thus, in contrast to passive conventional vision systems, the RGI active vision technology is robust in low-visibility environments.

  1. Sampling Number Effects in 2D and Range Imaging of Range-gated Acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Seong-Ouk; Park, Seung-Kyu; Baik, Sung-Hoon; Cho, Jai-Wan; Jeong, Kyung-Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    In this paper, we analyzed the effect of the number of sampled images on the construction of a 2D image and a range image from acquired RGI images, using an RGI vision system. As a result, 2D image quality did not depend much on the number of sampled images but rather on how well effective RGI images were extracted. However, the number of RGI images was important for making a range image, because range image quality was proportional to the number of RGI images. Image acquisition in nuclear-industry monitoring areas is an important function for safety inspection and for preparing appropriate control plans. To overcome the loss of visibility caused by airborne obstacle particles, vision systems should have extra functions, such as active illumination that penetrates the airborne disturbance particles. One of these powerful active vision systems is the range-gated imaging system. A vision system based on range-gated imaging can acquire image data in rainy or smoky environments. Range-gated imaging (RGI) is a direct active visualization technique using a highly sensitive image sensor and a high-intensity illuminant. Currently, the range-gated imaging technique, providing 2D and 3D images, is one of the emerging active vision technologies. The range-gated imaging system obtains vision information by summing time-sliced vision images. In the RGI system, a high-intensity illuminant flashes for an ultra-short time and a highly sensitive image sensor is gated with an ultra-short exposure time so that only the illumination light is captured. Here, the illuminant illuminates objects by flashing strong light through the airborne disturbance particles. Thus, in contrast to passive conventional vision systems, the RGI active vision technology is robust in low-visibility environments.

  2. An individual urinary proteome analysis in normal human beings to define the minimal sample number to represent the normal urinary proteome

    Directory of Open Access Journals (Sweden)

    Liu Xuejiao

    2012-11-01

    Full Text Available Abstract Background The urinary proteome has been widely used for biomarker discovery. A urinary proteome database from normal humans can provide a background for discovery proteomics and candidate proteins/peptides for targeted proteomics. Therefore, it is necessary to define the minimum number of individuals required for sampling to represent the normal urinary proteome. Methods In this study, inter-individual and inter-gender variations of the urinary proteome were taken into consideration to achieve a representative database. An individual analysis was performed on overnight urine samples from 20 normal volunteers (10 males and 10 females) by 1DLC/MS/MS. To obtain a representative result for each sample, a replicate 1DLC/MS/MS analysis was performed. The minimal sample number was estimated by statistical analysis. Results For qualitative analysis, fewer than 5% new proteins/peptides were identified in a male/female normal group by adding a new sample once the sample number exceeded nine. In addition, in a normal group, the percentage of newly identified proteins/peptides was less than 5% upon adding a new sample when the sample number reached 10. Furthermore, a statistical analysis indicated that urinary proteomes from normal males and females showed different patterns. For quantitative analysis, the variation of protein abundance was determined by spectral count and western blotting methods, and the minimal sample number for quantitative proteomic analysis was then identified. Conclusions For qualitative analysis, when considering the inter-individual and inter-gender variations, the minimum sample number is 10 and requires a balanced number of males and females in order to obtain a representative normal human urinary proteome. For quantitative analysis, the minimal sample number is much greater than that for qualitative analysis and depends on the experimental methods used for quantification.
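
    The qualitative saturation criterion (a new sample contributing less than 5% previously unseen proteins) can be mimicked on simulated presence/absence data, as in the sketch below; the detection model and sizes are invented, not the study's spectra.

```python
# Small sketch (simulated presence/absence data, not the study's MS data) of
# the saturation criterion: keep adding individuals until a new sample
# contributes fewer than 5% previously unseen proteins.
import numpy as np

rng = np.random.default_rng(0)
n_individuals, n_proteins = 20, 3000
# Each protein has its own detection probability; common proteins dominate.
detect_prob = rng.beta(0.3, 1.0, size=n_proteins)
observed = rng.random((n_individuals, n_proteins)) < detect_prob

seen = np.zeros(n_proteins, dtype=bool)
for i in range(n_individuals):
    new = np.count_nonzero(observed[i] & ~seen)
    seen |= observed[i]
    frac_new = new / max(seen.sum(), 1)
    print(f"sample {i + 1:2d}: {new:4d} new proteins "
          f"({100 * frac_new:.1f}% of cumulative total)")
# The sample number at which frac_new stays below 5% is the minimal
# representative sample size under this (simulated) detection model.
```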

  3. Relationship between accuracy and number of samples on statistical quantity and contour map of environmental gamma-ray dose rate. Example of random sampling

    International Nuclear Information System (INIS)

    Matsuda, Hideharu; Minato, Susumu

    2002-01-01

    The accuracy of statistical quantities, such as the mean value and the contour map, obtained by measurement of the environmental gamma-ray dose rate was evaluated by random sampling of five model distribution maps constructed with the mean slope, -1.3, of power spectra calculated from actually measured values. The values were derived from 58 natural gamma dose rate data sets reported worldwide, with means ranging over 10-100 Gy/h and areas ranging over 10⁻³-10⁷ km². The accuracy of the mean value was found to be around ±7% even for 60 or 80 samplings (the most frequent numbers), and the standard deviation had an accuracy of less than 1/4-1/3 of the mean. The correlation coefficient of the frequency distribution was found to be 0.860 or more for 200-400 samplings (the most frequent numbers), but that of the contour map was 0.502-0.770. (K.H.)
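
    A rough numerical analogue of this sampling experiment can be run on a synthetic random field; the field construction and parameters below are arbitrary stand-ins for the model distribution maps used in the study.

```python
# Rough numerical analogue (synthetic random field, arbitrary parameters) of
# the study's question: how does the accuracy of the estimated mean dose rate
# depend on the number of randomly sampled points?
import numpy as np

rng = np.random.default_rng(3)

# Build a smooth synthetic "dose-rate map" by low-pass filtering white noise.
white = rng.normal(size=(256, 256))
kx = np.fft.fftfreq(256)[:, None]
ky = np.fft.fftfreq(256)[None, :]
power = 1.0 / (1e-4 + kx**2 + ky**2)          # red spectrum, long-range structure
field = np.real(np.fft.ifft2(np.fft.fft2(white) * np.sqrt(power)))
field = 50 + 10 * (field - field.mean()) / field.std()   # arbitrary mean of 50

true_mean = field.mean()
for n_samples in (20, 60, 200, 600):
    errs = []
    for _ in range(500):                       # repeat the random sampling
        idx = rng.integers(0, 256, size=(n_samples, 2))
        est = field[idx[:, 0], idx[:, 1]].mean()
        errs.append(abs(est - true_mean) / true_mean)
    print(f"n={n_samples:4d}: mean relative error {100 * np.mean(errs):.1f}%")
```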

  4. Validation Of Intermediate Large Sample Analysis (With Sizes Up to 100 G) and Associated Facility Improvement

    International Nuclear Information System (INIS)

    Bode, P.; Koster-Ammerlaan, M.J.J.

    2018-01-01

    Pragmatic rather than physical correction factors for neutron and gamma-ray shielding were studied for samples of intermediate size, i.e. up to the 10-100 gram range. It was found that for most biological and geological materials the neutron self-shielding is less than 5 % and the gamma-ray self-attenuation can easily be estimated. A trueness control material of 1 kg size was made from left-overs of materials used in laboratory intercomparisons. A design study for a large sample pool-side facility handling plate-type volumes had to be stopped because of a reduction in the human resources available for this CRP. The large sample NAA facilities were made available to guest scientists from Greece and Brazil. The laboratory for neutron activation analysis participated in the world’s first laboratory intercomparison utilizing large samples. (author)

  5. Tracking of large-scale structures in turbulent channel with direct numerical simulation of low Prandtl number passive scalar

    Science.gov (United States)

    Tiselj, Iztok

    2014-12-01

    Channel flow DNS (Direct Numerical Simulation) at friction Reynolds number 180 and with passive scalars of Prandtl numbers 1 and 0.01 was performed in various computational domains. The "normal" size domain was ˜2300 wall units long and ˜750 wall units wide; size taken from the similar DNS of Moser et al. The "large" computational domain, which is supposed to be sufficient to describe the largest structures of the turbulent flows was 3 times longer and 3 times wider than the "normal" domain. The "very large" domain was 6 times longer and 6 times wider than the "normal" domain. All simulations were performed with the same spatial and temporal resolution. Comparison of the standard and large computational domains shows the velocity field statistics (mean velocity, root-mean-square (RMS) fluctuations, and turbulent Reynolds stresses) that are within 1%-2%. Similar agreement is observed for Pr = 1 temperature fields and can be observed also for the mean temperature profiles at Pr = 0.01. These differences can be attributed to the statistical uncertainties of the DNS. However, second-order moments, i.e., RMS temperature fluctuations of standard and large computational domains at Pr = 0.01 show significant differences of up to 20%. Stronger temperature fluctuations in the "large" and "very large" domains confirm the existence of the large-scale structures. Their influence is more or less invisible in the main velocity field statistics or in the statistics of the temperature fields at Prandtl numbers around 1. However, these structures play visible role in the temperature fluctuations at low Prandtl number, where high temperature diffusivity effectively smears the small-scale structures in the thermal field and enhances the relative contribution of large-scales. These large thermal structures represent some kind of an echo of the large scale velocity structures: the highest temperature-velocity correlations are not observed between the instantaneous temperatures and

  6. Do neutron stars disprove multiplicative creation in Dirac's large number hypothesis

    International Nuclear Information System (INIS)

    Qadir, A.; Mufti, A.A.

    1980-07-01

    Dirac's cosmology, based on his large number hypothesis, took the gravitational coupling to be decreasing with time and matter to be created as the square of time. Since the effects predicted by Dirac's theory are very small, it is difficult to find a ''clean'' test for it. Here we show that the observed radiation from pulsars is inconsistent with Dirac's multiplicative creation model, in which the matter created is proportional to the density of matter already present. Of course, this discussion makes no comment on the ''additive creation'' model, or on the revised version of Dirac's theory. (author)

  7. Evaluation of environmental sampling methods for detection of Salmonella enterica in a large animal veterinary hospital.

    Science.gov (United States)

    Goeman, Valerie R; Tinkler, Stacy H; Hammac, G Kenitra; Ruple, Audrey

    2018-04-01

    Environmental surveillance for Salmonella enterica can be used for early detection of contamination; thus routine sampling is an integral component of infection control programs in hospital environments. At the Purdue University Veterinary Teaching Hospital (PUVTH), the technique regularly employed in the large animal hospital for sample collection uses sterile gauze sponges for environmental sampling, which has proven labor-intensive and time-consuming. Alternative sampling methods use Swiffer brand electrostatic wipes for environmental sample collection, which are reportedly effective and efficient. It was hypothesized that use of Swiffer wipes for sample collection would be more efficient and less costly than the use of gauze sponges. A head-to-head comparison between the 2 sampling methods was conducted in the PUVTH large animal hospital and relative agreement, cost-effectiveness, and sampling efficiency were compared. There was fair agreement in culture results between the 2 sampling methods, but Swiffer wipes required less time and less physical effort to collect samples and were more cost-effective.

  8. A methodology for the synthesis of heat exchanger networks having large numbers of uncertain parameters

    International Nuclear Information System (INIS)

    Novak Pintarič, Zorka; Kravanja, Zdravko

    2015-01-01

    This paper presents a robust computational methodology for the synthesis and design of flexible HEN (Heat Exchanger Networks) having large numbers of uncertain parameters. This methodology combines several heuristic methods which progressively lead to a flexible HEN design at a specific level of confidence. During the first step, a HEN topology is generated under nominal conditions followed by determining those points critical for flexibility. A significantly reduced multi-scenario model for flexible HEN design is formulated at the nominal point with the flexibility constraints at the critical points. The optimal design obtained is tested by stochastic Monte Carlo optimization and the flexibility index through solving one-scenario problems within a loop. This presented methodology is novel regarding the enormous reduction of scenarios in HEN design problems, and computational effort. Despite several simplifications, the capability of designing flexible HENs with large numbers of uncertain parameters, which are typical throughout industry, is not compromised. An illustrative case study is presented for flexible HEN synthesis comprising 42 uncertain parameters. - Highlights: • Methodology for HEN (Heat Exchanger Network) design under uncertainty is presented. • The main benefit is solving HENs having large numbers of uncertain parameters. • Drastically reduced multi-scenario HEN design problem is formulated through several steps. • Flexibility of HEN is guaranteed at a specific level of confidence.

  9. The large numbers hypothesis and the Einstein theory of gravitation

    International Nuclear Information System (INIS)

    Dirac, P.A.M.

    1979-01-01

    A study of the relations between large dimensionless numbers leads to the belief that G, expressed in atomic units, varies with the epoch while the Einstein theory requires G to be constant. These two requirements can be reconciled by supposing that the Einstein theory applies with a metric that differs from the atomic metric. The theory can be developed with conservation of mass by supposing that the continual increase in the mass of the observable universe arises from a continual slowing down of the velocity of recession of the galaxies. This leads to a model of the Universe that was first proposed by Einstein and de Sitter (the E.S. model). The observations of the microwave radiation fit in with this model. The static Schwarzschild metric has to be modified to fit in with the E.S. model for large r. The modification is worked out, and also the motion of planets with the new metric. It is found that there is a difference between ephemeris time and atomic time, and also that there should be an inward spiralling of the planets, referred to atomic units, superposed on the motion given by ordinary gravitational theory. These are effects that can be checked by observation, but there is no conclusive evidence up to the present. (author)

  10. Variable screening and ranking using sampling-based sensitivity measures

    International Nuclear Information System (INIS)

    Wu, Y-T.; Mohanty, Sitakanta

    2006-01-01

    This paper presents a methodology for screening out insignificant random variables and ranking the significant random variables using sensitivity measures, including two cumulative distribution function (CDF)-based and two mean-response-based measures. The methodology features (1) using random samples to compute sensitivities and (2) using acceptance limits, derived from a test of hypothesis, to classify significant and insignificant random variables. Because no approximation is needed in either the form of the performance functions or the type of continuous distribution functions representing input variables, the sampling-based approach can handle highly nonlinear functions with non-normal variables. The main characteristics and effectiveness of the sampling-based sensitivity measures are investigated using both simple and complex examples. Because the number of samples needed does not depend on the number of variables, the methodology appears to be particularly suitable for problems with large, complex models that have large numbers of random variables but relatively few significant random variables.
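
    A minimal sketch of one sampling-based screening scheme: rank inputs by the absolute sample correlation with the output and screen out variables whose correlation falls inside an approximate acceptance limit from a hypothesis test. The toy performance function, sample sizes and the Fisher-z threshold are assumptions for illustration, not the measures used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy performance function with many inputs, few of which matter (assumption).
def performance(x):
    return 3.0 * x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.01 * x[:, 2:].sum(axis=1)

n, d = 500, 20
x = rng.normal(size=(n, d))
y = performance(x)

# Approximate 5% acceptance limit on |r| from the Fisher z-transform.
crit = np.tanh(1.96 / np.sqrt(n - 3))

corr = np.array([np.corrcoef(x[:, j], y)[0, 1] for j in range(d)])
order = np.argsort(-np.abs(corr))
for j in order[:5]:
    flag = "significant" if abs(corr[j]) > crit else "screened out"
    print(f"x[{j}]  |r| = {abs(corr[j]):.3f}  -> {flag}")

# Note: a purely quadratic input (x[1] here) is invisible to a linear-correlation
# measure; that is why CDF-based measures are also used in practice.
```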

  11. Superposition of elliptic functions as solutions for a large number of nonlinear equations

    International Nuclear Information System (INIS)

    Khare, Avinash; Saxena, Avadh

    2014-01-01

    For a large number of nonlinear equations, both discrete and continuum, we demonstrate a kind of linear superposition. We show that whenever a nonlinear equation admits solutions in terms of both Jacobi elliptic functions cn(x, m) and dn(x, m) with modulus m, then it also admits solutions in terms of their sum as well as difference. We have checked this in the case of several nonlinear equations such as the nonlinear Schrödinger equation, MKdV, a mixed KdV-MKdV system, a mixed quadratic-cubic nonlinear Schrödinger equation, the Ablowitz-Ladik equation, the saturable nonlinear Schrödinger equation, λϕ⁴, the discrete MKdV as well as for several coupled field equations. Further, for a large number of nonlinear equations, we show that whenever a nonlinear equation admits a periodic solution in terms of dn²(x, m), it also admits solutions in terms of dn²(x, m) ± √m cn(x, m) dn(x, m), even though cn(x, m)dn(x, m) is not a solution of these nonlinear equations. Finally, we also obtain superposed solutions of various forms for several coupled nonlinear equations.
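
    The two superposition statements can be restated compactly in LaTeX (the notation is ours, and the amplitude and width parameters generally have to be rescaled case by case, which is assumed here rather than quoted from the paper):

```latex
% Schematic form of the superposed solutions (notation assumed, not quoted from the paper).
\begin{align*}
u_{\pm}(x) &\propto \operatorname{cn}(x,m) \pm \operatorname{dn}(x,m),
  && \text{when } \operatorname{cn} \text{ and } \operatorname{dn} \text{ individually solve the equation},\\
v_{\pm}(x) &\propto \operatorname{dn}^{2}(x,m) \pm \sqrt{m}\,\operatorname{cn}(x,m)\,\operatorname{dn}(x,m),
  && \text{when } \operatorname{dn}^{2} \text{ is a periodic solution}.
\end{align*}
```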

  12. The holographic dual of a Riemann problem in a large number of dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Herzog, Christopher P.; Spillane, Michael [C.N. Yang Institute for Theoretical Physics, Department of Physics and Astronomy,Stony Brook University, Stony Brook, NY 11794 (United States); Yarom, Amos [Department of Physics, Technion,Haifa 32000 (Israel)

    2016-08-22

    We study properties of a non-equilibrium steady state generated when two heat baths are initially in contact with one another. The dynamics of the system we study are governed by holographic duality in a large number of dimensions. We discuss the “phase diagram” associated with the steady state, the dual, dynamical, black hole description of this problem, and its relation to the fluid/gravity correspondence.

  13. Thermal neutron self-shielding correction factors for large sample instrumental neutron activation analysis using the MCNP code

    International Nuclear Information System (INIS)

    Tzika, F.; Stamatelatos, I.E.

    2004-01-01

    Thermal neutron self-shielding within large samples was studied using the Monte Carlo neutron transport code MCNP. The code enabled a three-dimensional modeling of the actual source and geometry configuration including reactor core, graphite pile and sample. Neutron flux self-shielding correction factors derived for a set of materials of interest for large sample neutron activation analysis are presented and evaluated. Simulations were experimentally verified by measurements performed using activation foils. The results of this study can be applied in order to determine neutron self-shielding factors of unknown samples from the thermal neutron fluxes measured at the surface of the sample

  14. Exploration of large, rare copy number variants associated with psychiatric and neurodevelopmental disorders in individuals with anorexia nervosa.

    Science.gov (United States)

    Yilmaz, Zeynep; Szatkiewicz, Jin P; Crowley, James J; Ancalade, NaEshia; Brandys, Marek K; van Elburg, Annemarie; de Kovel, Carolien G F; Adan, Roger A H; Hinney, Anke; Hebebrand, Johannes; Gratacos, Monica; Fernandez-Aranda, Fernando; Escaramis, Georgia; Gonzalez, Juan R; Estivill, Xavier; Zeggini, Eleftheria; Sullivan, Patrick F; Bulik, Cynthia M

    2017-08-01

    Anorexia nervosa (AN) is a serious and heritable psychiatric disorder. To date, studies of copy number variants (CNVs) have been limited and inconclusive because of small sample sizes. We conducted a case-only genome-wide CNV survey in 1,983 female AN cases included in the Genetic Consortium for Anorexia Nervosa. Following stringent quality control procedures, we investigated whether pathogenic CNVs in regions previously implicated in psychiatric and neurodevelopmental disorders were present in AN cases. We observed two instances of the well-established pathogenic CNVs in AN cases. In addition, one case had a deletion in the 13q12 region, overlapping with a deletion reported previously in two AN cases. As a secondary aim, we also examined our sample for CNVs over 1 Mbp in size. Out of the 40 instances of such large CNVs that were not implicated previously for AN or neuropsychiatric phenotypes, two of them contained genes with previous neuropsychiatric associations, and only five of them had no associated reports in public CNV databases. Although ours is the largest study of its kind in AN, larger datasets are needed to comprehensively assess the role of CNVs in the etiology of AN.

  15. ON AN EXPONENTIAL INEQUALITY AND A STRONG LAW OF LARGE NUMBERS FOR MONOTONE MEASURES

    Czech Academy of Sciences Publication Activity Database

    Agahi, H.; Mesiar, Radko

    2014-01-01

    Roč. 50, č. 5 (2014), s. 804-813 ISSN 0023-5954 Institutional support: RVO:67985556 Keywords : Choquet expectation * a strong law of large numbers * exponential inequality * monotone probability Subject RIV: BA - General Mathematics Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2014/E/mesiar-0438052.pdf

  16. A comment on "bats killed in large numbers at United States wind energy facilities"

    Science.gov (United States)

    Huso, Manuela M.P.; Dalthorp, Dan

    2014-01-01

    Widespread reports of bat fatalities caused by wind turbines have raised concerns about the impacts of wind power development. Reliable estimates of the total number killed and the potential effects on populations are needed, but it is crucial that they be based on sound data. In a recent BioScience article, Hayes (2013) estimated that over 600,000 bats were killed at wind turbines in the United States in 2012. The scientific errors in the analysis are numerous, with the two most serious being that the included sites constituted a convenience sample, not a representative sample, and that the individual site estimates are derived from such different methodologies that they are inherently not comparable. This estimate is almost certainly inaccurate, but whether the actual number is much smaller, much larger, or about the same is uncertain. An accurate estimate of total bat fatality is not currently possible, given the shortcomings of the available data.

  17. Elemental mapping of large samples by external ion beam analysis with sub-millimeter resolution and its applications

    Science.gov (United States)

    Silva, T. F.; Rodrigues, C. L.; Added, N.; Rizzutto, M. A.; Tabacniks, M. H.; Mangiarotti, A.; Curado, J. F.; Aguirre, F. R.; Aguero, N. F.; Allegro, P. R. P.; Campos, P. H. O. V.; Restrepo, J. M.; Trindade, G. F.; Antonio, M. R.; Assis, R. F.; Leite, A. R.

    2018-05-01

    The elemental mapping of large areas using ion beam techniques is a desired capability for several scientific communities, involved in topics ranging from geoscience to cultural heritage. Usually, the constraints for large-area mapping are not met in setups employing micro- and nano-probes implemented all over the world. A novel setup for mapping large-sized samples in an external beam was recently built at the University of São Paulo employing a broad MeV-proton probe with sub-millimeter dimension, coupled to a high-precision large-range XYZ robotic stage (60 cm range in all axes and precision of 5 μm ensured by optical sensors). An important issue in large-area mapping is how to deal with the irregularities of the sample's surface, which may introduce artifacts in the images due to the variation of the measuring conditions. In our setup, we implemented an automatic system based on machine vision to correct the position of the sample to compensate for its surface irregularities. As an additional benefit, a 3D digital reconstruction of the scanned surface can also be obtained. Using this new and unique setup, we have produced large-area elemental maps of ceramics, stones, fossils, and other sorts of samples.

  18. Modelling high Reynolds number wall-turbulence interactions in laboratory experiments using large-scale free-stream turbulence.

    Science.gov (United States)

    Dogan, Eda; Hearst, R Jason; Ganapathisubramani, Bharathram

    2017-03-13

    A turbulent boundary layer subjected to free-stream turbulence is investigated in order to ascertain the scale interactions that dominate the near-wall region. The results are discussed in relation to a canonical high Reynolds number turbulent boundary layer because previous studies have reported considerable similarities between these two flows. Measurements were acquired simultaneously from four hot wires mounted to a rake which was traversed through the boundary layer. Particular focus is given to two main features of both canonical high Reynolds number boundary layers and boundary layers subjected to free-stream turbulence: (i) the footprint of the large scales in the logarithmic region on the near-wall small scales, specifically the modulating interaction between these scales, and (ii) the phase difference in amplitude modulation. The potential for a turbulent boundary layer subjected to free-stream turbulence to 'simulate' high Reynolds number wall-turbulence interactions is discussed. The results of this study have encouraging implications for future investigations of the fundamental scale interactions that take place in high Reynolds number flows as it demonstrates that these can be achieved at typical laboratory scales. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  19. Large Eddy Simulation of an SD7003 Airfoil: Effects of Reynolds number and Subgrid-scale modeling

    DEFF Research Database (Denmark)

    Sarlak Chivaee, Hamid

    2017-01-01

    This paper presents results of a series of numerical simulations in order to study the aerodynamic characteristics of the low Reynolds number Selig-Donovan airfoil, SD7003. The Large Eddy Simulation (LES) technique is used for all computations at chord-based Reynolds numbers of 10,000, 24,000 and 60,000. ... the Reynolds number, and the effect is visible even at a relatively low chord-Reynolds number of 60,000. Among the tested models, the dynamic Smagorinsky gives the poorest predictions of the flow, with overprediction of lift and a larger separation on the airfoil's suction side. Among various models, the implicit ...

  20. Rapid separation method for ²³⁷Np and Pu isotopes in large soil samples

    Energy Technology Data Exchange (ETDEWEB)

    Maxwell, Sherrod L., E-mail: sherrod.maxwell@srs.go [Savannah River Nuclear Solutions, LLC, Building 735-B, Aiken, SC 29808 (United States); Culligan, Brian K.; Noyes, Gary W. [Savannah River Nuclear Solutions, LLC, Building 735-B, Aiken, SC 29808 (United States)

    2011-07-15

    A new rapid method for the determination of ²³⁷Np and Pu isotopes in soil and sediment samples has been developed at the Savannah River Site Environmental Lab (Aiken, SC, USA) that can be used for large soil samples. The new soil method utilizes acid leaching, iron/titanium hydroxide precipitation, a lanthanum fluoride soil matrix removal step, and a rapid column separation process with TEVA Resin. The large soil matrix is removed easily and rapidly using these two simple precipitations with high chemical recoveries and effective removal of interferences. Vacuum box technology and rapid flow rates are used to reduce analytical time.

  1. Matrix Sampling of Items in Large-Scale Assessments

    Directory of Open Access Journals (Sweden)

    Ruth A. Childs

    2003-07-01

    Matrix sampling of items, that is, division of a set of items into different versions of a test form, is used by several large-scale testing programs. Like other test designs, matrixed designs have both advantages and disadvantages. For example, testing time per student is less than if each student received all the items, but the comparability of student scores may decrease. Also, curriculum coverage is maintained, but reporting of scores becomes more complex. In this paper, matrixed designs are compared with more traditional designs in nine categories of costs: development costs, materials costs, administration costs, educational costs, scoring costs, reliability costs, comparability costs, validity costs, and reporting costs. In choosing among test designs, a testing program should examine the costs in light of its mandate(s), the content of the tests, and the financial resources available, among other considerations.

  2. Particle creation and Dirac's large number hypothesis; and Reply

    International Nuclear Information System (INIS)

    Canuto, V.; Adams, P.J.; Hsieh, S.H.; Tsiang, E.; Steigman, G.

    1976-01-01

    The claim made by Steigman (Nature; 261:479 (1976)), that the creation of matter as postulated by Dirac (Proc. R. Soc.; A338:439 (1974)) is unnecessary, is here shown to be incorrect. It is stated that Steigman's claim that Dirac's Large Number Hypothesis (LNH) does not require particle creation is wrong because he has assumed that which he was seeking to prove, namely that ρ does not contain matter creation. Steigman's claim that Dirac's LNH leads to nonsensical results in the very early Universe is superficially correct, but this only supports Dirac's contention that the LNH may not be valid in the very early Universe. In a reply Steigman points out that in Dirac's original cosmology R varies approximately as t^(1/3), and that using this model the results and conclusions of the present author's paper do apply, but that using a variation chosen by Canuto et al (T approximately t) Dirac's LNH cannot apply. Additionally it is observed that a cosmological theory which only predicts the present epoch is of questionable value. (U.K.)

  3. Strong Laws of Large Numbers for Arrays of Rowwise NA and LNQD Random Variables

    Directory of Open Access Journals (Sweden)

    Jiangfeng Wang

    2011-01-01

    Some strong laws of large numbers and strong convergence properties for arrays of rowwise negatively associated and linearly negative quadrant dependent random variables are obtained. The results obtained not only generalize the result of Hu and Taylor to negatively associated and linearly negative quadrant dependent random variables, but also improve it.

  4. Law of large numbers and central limit theorem for randomly forced PDE's

    CERN Document Server

    Shirikyan, A

    2004-01-01

    We consider a class of dissipative PDE's perturbed by an external random force. Under the condition that the distribution of perturbation is sufficiently non-degenerate, a strong law of large numbers (SLLN) and a central limit theorem (CLT) for solutions are established and the corresponding rates of convergence are estimated. It is also shown that the estimates obtained are close to being optimal. The proofs are based on the property of exponential mixing for the problem in question and some abstract SLLN and CLT for mixing-type Markov processes.

  5. A model for estimating the minimum number of offspring to sample in studies of reproductive success.

    Science.gov (United States)

    Anderson, Joseph H; Ward, Eric J; Carlson, Stephanie M

    2011-01-01

    Molecular parentage permits studies of selection and evolution in fecund species with cryptic mating systems, such as fish, amphibians, and insects. However, there exists no method for estimating the number of offspring that must be assigned parentage to achieve robust estimates of reproductive success when only a fraction of offspring can be sampled. We constructed a 2-stage model that first estimated the mean (μ) and variance (v) in reproductive success from published studies on salmonid fishes and then sampled offspring from reproductive success distributions simulated from the μ and v estimates. Results provided strong support for modeling salmonid reproductive success via the negative binomial distribution and suggested that few offspring samples are needed to reject the null hypothesis of uniform offspring production. However, the sampled reproductive success distributions deviated significantly (χ2 goodness-of-fit test p value < 0.05) from the known reproductive success distribution at rates often >0.05 and as high as 0.24, even when hundreds of offspring were assigned parentage. In general, reproductive success patterns were less accurate when offspring were sampled from cohorts with larger numbers of parents and greater variance in reproductive success. Our model can be reparameterized with data from other species and will aid researchers in planning reproductive success studies by providing explicit sampling targets required to accurately assess reproductive success.
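
    A rough illustration of the two-stage idea (simulate reproductive success, then subsample offspring): the sketch below draws per-parent offspring counts from a negative binomial with a given mean and variance and samples a fraction of the pooled offspring. All parameter values are assumptions, not those estimated in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_offspring_sample(n_parents, mu, var, sample_fraction):
    """Draw per-parent offspring counts ~ NegBin(mean=mu, variance=var), then
    sample a fraction of the pooled offspring and count how many came from
    each parent (illustrative parameterization; requires var > mu)."""
    p = mu / var                      # numpy's (n, p) parameterization
    n = mu ** 2 / (var - mu)
    counts = rng.negative_binomial(n, p, size=n_parents)

    parents_of_offspring = np.repeat(np.arange(n_parents), counts)
    k = int(sample_fraction * parents_of_offspring.size)
    sampled = rng.choice(parents_of_offspring, size=k, replace=False)
    return counts, np.bincount(sampled, minlength=n_parents)

true_counts, sampled_counts = simulate_offspring_sample(
    n_parents=50, mu=6.0, var=30.0, sample_fraction=0.2)
print("true mean/variance:", true_counts.mean(), true_counts.var())
print("offspring assigned parentage:", sampled_counts.sum())
```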

  6. MZDASoft: a software architecture that enables large-scale comparison of protein expression levels over multiple samples based on liquid chromatography/tandem mass spectrometry.

    Science.gov (United States)

    Ghanat Bari, Mehrab; Ramirez, Nelson; Wang, Zhiwei; Zhang, Jianqiu Michelle

    2015-10-15

    Without accurate peak linking/alignment, only the expression levels of a small percentage of proteins can be compared across multiple samples in Liquid Chromatography/Mass Spectrometry/Tandem Mass Spectrometry (LC/MS/MS) due to the selective nature of tandem MS peptide identification. This greatly hampers biomedical research that aims at finding biomarkers for disease diagnosis, treatment, and the understanding of disease mechanisms. A recent algorithm, PeakLink, has allowed the accurate linking of LC/MS peaks without tandem MS identifications to their corresponding ones with identifications across multiple samples collected from different instruments, tissues and labs, which greatly enhanced the ability of comparing proteins. However, PeakLink cannot be implemented practically for large numbers of samples based on existing software architectures, because it requires access to peak elution profiles from multiple LC/MS/MS samples simultaneously. We propose a new architecture based on parallel processing, which extracts LC/MS peak features, and saves them in database files to enable the implementation of PeakLink for multiple samples. The software has been deployed in High-Performance Computing (HPC) environments. The core part of the software, MZDASoft Parallel Peak Extractor (PPE), can be downloaded with a user and developer's guide, and it can be run on HPC centers directly. The quantification applications, MZDASoft TandemQuant and MZDASoft PeakLink, are written in Matlab, which are compiled with a Matlab runtime compiler. A sample script that incorporates all necessary processing steps of MZDASoft for LC/MS/MS quantification in a parallel processing environment is available. The project webpage is http://compgenomics.utsa.edu/zgroup/MZDASoft. The proposed architecture enables the implementation of PeakLink for multiple samples. Significantly more (100%-500%) proteins can be compared over multiple samples with better quantification accuracy in test cases. MZDASoft

  7. Study of a large rapid ashing apparatus and a rapid dry ashing method for biological samples and its application

    International Nuclear Information System (INIS)

    Jin Meisun; Wang Benli; Liu Wencang

    1988-04-01

    A large rapid dry-ashing apparatus and a rapid ashing method for biological samples are described. The apparatus consists of a specially made ashing furnace, a gas supply system and a temperature-programming control cabinet. The following advantages have been shown by ashing experiments with the above apparatus: (1) high speed of ashing and saving of electric energy; (2) the apparatus can ash a large number of samples at a time; (3) the ashed sample is pure white (or spotless), loose and easily soluble, with little residual char; (4) fresh samples can also be ashed directly. The apparatus is suitable for ashing a large amount of environmental samples containing low-level radioactive trace elements as well as medical, food and agricultural research samples

  8. Non-uniform sampling and wide range angular spectrum method

    International Nuclear Information System (INIS)

    Kim, Yong-Hae; Byun, Chun-Won; Oh, Himchan; Lee, JaeWon; Pi, Jae-Eun; Heon Kim, Gi; Lee, Myung-Lae; Ryu, Hojun; Chu, Hye-Yong; Hwang, Chi-Sun

    2014-01-01

    A novel method is proposed for simulating free space field propagation from a source plane to a destination plane that is applicable for both small and large propagation distances. The angular spectrum method (ASM) was widely used for simulating near field propagation, but it caused a numerical error when the propagation distance was large because of aliasing due to under sampling. Band limited ASM satisfied the Nyquist condition on sampling by limiting a bandwidth of a propagation field to avoid an aliasing error so that it could extend the applicable propagation distance of the ASM. However, the band limited ASM also made an error due to the decrease of an effective sampling number in a Fourier space when the propagation distance was large. In the proposed wide range ASM, we use a non-uniform sampling in a Fourier space to keep a constant effective sampling number even though the propagation distance is large. As a result, the wide range ASM can produce simulation results with high accuracy for both far and near field propagation. For non-paraxial wave propagation, we applied the wide range ASM to a shifted destination plane as well. (paper)
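
    A minimal numpy sketch of the plain (uniformly sampled) angular spectrum method with a simple evanescent-wave cutoff; it is not the band-limited or wide-range non-uniform variant described above, and the grid size, wavelength and propagation distance are assumed values for illustration.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field over distance z with the plain ASM."""
    n = field.shape[0]
    f = np.fft.fftfreq(n, d=dx)                  # spatial frequencies (cycles / length)
    fx, fy = np.meshgrid(f, f, indexing="ij")
    kz_sq = 1.0 / wavelength**2 - fx**2 - fy**2
    propagating = kz_sq > 0                      # mask out evanescent components
    kz = np.sqrt(np.where(propagating, kz_sq, 0.0))
    transfer = np.where(propagating, np.exp(2j * np.pi * z * kz), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Example: propagate a circular aperture (all numbers are assumptions).
n, dx, wavelength = 512, 1e-6, 0.5e-6            # 512 x 512 grid, 1 um pixels, 500 nm light
x = (np.arange(n) - n / 2) * dx
xx, yy = np.meshgrid(x, x, indexing="ij")
field = (xx**2 + yy**2 < (20e-6) ** 2).astype(complex)
out = angular_spectrum_propagate(field, wavelength, dx, z=200e-6)
print("energy before:", float(np.sum(np.abs(field) ** 2)),
      "after:", float(np.sum(np.abs(out) ** 2)))
```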

  9. On the Behavior of ECN/RED Gateways Under a Large Number of TCP Flows: Limit Theorems

    National Research Council Canada - National Science Library

    Tinnakornsrisuphap, Peerapol; Makowski, Armand M

    2005-01-01

    .... As the number of competing flows becomes large, the asymptotic queue behavior at the gateway can be described by a simple recursion and the throughput behavior of individual TCP flows becomes asymptotically independent...

  10. Development of digital gamma-activation autoradiography for analysis of samples of large area

    International Nuclear Information System (INIS)

    Kolotov, V.P.; Grozdov, D.S.; Dogadkin, N.N.; Korobkov, V.I.

    2011-01-01

    Gamma-activation autoradiography is a prospective method for screening detection of inclusions of precious metals in geochemical samples. Its characteristics allow analysis of thin sections of large size (tens of cm2), that favourably distinguishes it among the other methods for local analysis. At the same time, the activating field of the accelerator bremsstrahlung, displays a sharp intensity decrease relative to the distance along the axis. A method for activation dose ''equalization'' during irradiation of the large size thin sections has been developed. The method is based on the usage of a hardware-software system. This includes a device for moving the sample during the irradiation, a program for computer modelling of the acquired activating dose for the chosen kinematics of the sample movement and a program for pixel-by pixel correction of the autoradiographic images. For detection of inclusions of precious metals, a method for analysis of the acquired dose dynamics during sample decay has been developed. The method is based on the software processing pixel by pixel a time-series of coaxial autoradiographic images and generation of the secondary meta-images allowing interpretation regarding the presence of interesting inclusions based on half-lives. The method is tested for analysis of copper-nickel polymetallic ores. The developed solutions considerably expand the possible applications of digital gamma-activation autoradiography. (orig.)

  11. Development of digital gamma-activation autoradiography for analysis of samples of large area

    Energy Technology Data Exchange (ETDEWEB)

    Kolotov, V.P.; Grozdov, D.S.; Dogadkin, N.N.; Korobkov, V.I. [Russian Academy of Sciences, Moscow (Russian Federation). Vernadsky Inst. of Geochemistry and Analytical Chemistry

    2011-07-01

    Gamma-activation autoradiography is a prospective method for screening detection of inclusions of precious metals in geochemical samples. Its characteristics allow analysis of thin sections of large size (tens of cm2), that favourably distinguishes it among the other methods for local analysis. At the same time, the activating field of the accelerator bremsstrahlung, displays a sharp intensity decrease relative to the distance along the axis. A method for activation dose ''equalization'' during irradiation of the large size thin sections has been developed. The method is based on the usage of a hardware-software system. This includes a device for moving the sample during the irradiation, a program for computer modelling of the acquired activating dose for the chosen kinematics of the sample movement and a program for pixel-by pixel correction of the autoradiographic images. For detection of inclusions of precious metals, a method for analysis of the acquired dose dynamics during sample decay has been developed. The method is based on the software processing pixel by pixel a time-series of coaxial autoradiographic images and generation of the secondary meta-images allowing interpretation regarding the presence of interesting inclusions based on half-lives. The method is tested for analysis of copper-nickel polymetallic ores. The developed solutions considerably expand the possible applications of digital gamma-activation autoradiography. (orig.)

  12. The Effect of Sample Size and Data Numbering on Precision of Calibration Model to predict Soil Properties

    Directory of Open Access Journals (Sweden)

    H Mohamadi Monavar

    2017-10-01

    Introduction: Precision agriculture (PA) is a technology that measures and manages within-field variability, such as the physical and chemical properties of soil. Nondestructive and rapid VIS-NIR spectroscopy detects significant correlations between reflectance spectra and the physical and chemical properties of soil. Quantitative prediction of soil factors such as nitrogen, carbon, cation exchange capacity and clay content is very important in precision farming. The emphasis of this paper is on comparing different techniques for choosing calibration samples, such as random selection, selection based on chemical data, and selection based on PCA. Since increasing the number of samples is usually time-consuming and costly, this study sought the best sampling approach among the available methods for building calibration models. In addition, the effect of sample size on the accuracy of the calibration and validation models was analyzed. Materials and Methods: Two hundred and ten soil samples were collected from a cultivated farm located in Avarzaman in Hamedan province, Iran. The crop rotation was mostly potato and wheat. Samples were collected from a depth of 20 cm, passed through a 2 mm sieve and air dried at room temperature. Chemical analysis was performed in the soil science laboratory, Faculty of Agriculture Engineering, Bu-Ali Sina University, Hamadan, Iran. Two spectrometers (AvaSpec-ULS 2048 UV-VIS and FT-NIR100N) were used to measure the spectral bands covering the UV-Vis and NIR region (220-2200 nm). Each soil sample was uniformly tiled in a petri dish and was scanned 20 times. The pre-processing methods of multivariate scatter correction (MSC) and baseline correction (BC) were then applied to the raw signals using Unscrambler software. The samples were divided into two groups: one group of 105 samples for calibration and the second group for validation. Each time, 15 samples were selected randomly and tested the accuracy of

  13. Relationship of fish indices with sampling effort and land use change in a large Mediterranean river.

    Science.gov (United States)

    Almeida, David; Alcaraz-Hernández, Juan Diego; Merciai, Roberto; Benejam, Lluís; García-Berthou, Emili

    2017-12-15

    Fish are invaluable ecological indicators in freshwater ecosystems but have been less used for ecological assessments in large Mediterranean rivers. We evaluated the effects of sampling effort (transect length) on fish metrics, such as species richness and two fish indices (the new European Fish Index EFI+ and a regional index, IBICAT2b), in the mainstem of a large Mediterranean river. For this purpose, we sampled by boat electrofishing five sites each with 10 consecutive transects corresponding to a total length of 20 times the river width (European standard required by the Water Framework Directive) and we also analysed the effect of sampling area on previous surveys. Species accumulation curves and richness extrapolation estimates in general suggested that species richness was reasonably estimated with transect lengths of 10 times the river width or less. The EFI+ index was significantly affected by sampling area, both for our samplings and previous data. Surprisingly, EFI+ values in general decreased with increasing sampling area, despite the higher observed richness, likely because the expected values of metrics were higher. By contrast, the regional fish index was not dependent on sampling area, likely because it does not use a predictive model. Both fish indices, but particularly the EFI+, decreased with less forest cover percentage, even within the smaller disturbance gradient in the river type studied (mainstem of a large Mediterranean river, where environmental pressures are more general). Although the two fish-based indices are very different in terms of their development, methodology, and metrics used, they were significantly correlated and provided a similar assessment of ecological status. Our results reinforce the importance of standardization of sampling methods for bioassessment and suggest that predictive models that use sampling area as a predictor might be more affected by differences in sampling effort than simpler biotic indices. Copyright
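
    A small sketch of the kind of species accumulation curve used to judge whether extra transects add new species; the per-transect catch lists below are made-up values, not data from the study.

```python
import numpy as np

# Species caught in each of 10 consecutive transects (hypothetical data).
transects = [
    {"barbel", "chub"}, {"chub", "bleak"}, {"barbel", "carp"},
    {"chub"}, {"bleak", "eel"}, {"carp", "chub"}, {"barbel"},
    {"chub", "bleak"}, {"eel"}, {"chub", "carp"},
]

def accumulation_curve(transects, n_perm=1000, seed=0):
    """Mean cumulative species richness versus number of transects,
    averaged over random orderings of the transects."""
    rng = np.random.default_rng(seed)
    k = len(transects)
    curve = np.zeros(k)
    for _ in range(n_perm):
        seen = set()
        for i, idx in enumerate(rng.permutation(k)):
            seen |= transects[idx]
            curve[i] += len(seen)
    return curve / n_perm

for i, r in enumerate(accumulation_curve(transects), start=1):
    print(f"{i:2d} transects -> {r:.2f} species on average")
```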

  14. Spatio-temporal foreshock activity during stick-slip experiments of large rock samples

    Science.gov (United States)

    Tsujimura, Y.; Kawakata, H.; Fukuyama, E.; Yamashita, F.; Xu, S.; Mizoguchi, K.; Takizawa, S.; Hirano, S.

    2016-12-01

    Foreshock activity has sometimes been reported for large earthquakes, and has been roughly classified into the following two classes. For shallow intraplate earthquakes, foreshocks occurred in the vicinity of the mainshock hypocenter (e.g., Doi and Kawakata, 2012; 2013). For interplate subduction earthquakes, foreshock hypocenters migrated toward the mainshock hypocenter (Kato et al., 2012; Yagi et al., 2014). To understand how foreshocks occur, it is useful to investigate the spatio-temporal activities of foreshocks in laboratory experiments under controlled conditions. We have conducted stick-slip experiments using a large-scale biaxial friction apparatus at NIED in Japan (e.g., Fukuyama et al., 2014). Our previous results showed that stick-slip events repeatedly occurred in a run, but only the later events were preceded by foreshocks. Kawakata et al. (2014) inferred that the gouge generated during the run was an important key for foreshock occurrence. In this study, we proceeded to carry out stick-slip experiments on large rock samples whose interface (fault plane) is 1.5 m long and 0.5 m wide. After some runs to generate fault gouge on the interface, we investigated the spatio-temporal activities of foreshocks in the current experiments. We detected foreshocks from waveform records of a 3D array of piezoelectric sensors. Our new results showed that more than three foreshocks (typically about twenty) occurred during each stick-slip event, in contrast to the few foreshocks observed during previous experiments without pre-existing gouge. Next, we estimated the hypocenter locations of the stick-slip events and found that they were located near the opposite end to the loading point. In addition, we observed a migration of foreshock hypocenters toward the hypocenter of each stick-slip event. This suggests that the foreshock activity observed in our current experiments was similar to that for the interplate earthquakes in terms of the

  15. Utilizing the International GeoSample Number Concept during ICDP Expedition COSC

    Science.gov (United States)

    Conze, Ronald; Lorenz, Henning; Ulbricht, Damian; Gorgas, Thomas; Elger, Kirsten

    2016-04-01

    The concept of the International GeoSample Number (IGSN) was introduced to uniquely identify and register geo-related sample material, and make it retrievable via electronic media (e.g., SESAR - http://www.geosamples.org/igsnabout). The general aim of the IGSN concept is to improve accessing stored sample material worldwide, enable the exact identification, its origin and provenance, and also the exact and complete citation of acquired samples throughout the literature. The ICDP expedition COSC (Collisional Orogeny in the Scandinavian Caledonides, http://cosc.icdp-online.org) prompted for the first time in ICDP's history to assign and register IGSNs during an ongoing drilling campaign. ICDP drilling expeditions are using commonly the Drilling Information System DIS (http://doi.org/10.2204/iodp.sd.4.07.2007) for the inventory of recovered sample material. During COSC IGSNs were assigned to every drill hole, core run, core section, and sample taken from core material. The original IGSN specification has been extended to achieve the required uniqueness of IGSNs with our offline-procedure. The ICDP name space indicator and the Expedition ID (5054) are forming an extended prefix (ICDP5054). For every type of sample material, an encoded sequence of characters follows. This sequence is derived from the DIS naming convention which is unique from the beginning. Thereby every ICDP expedition has an unlimited name space for IGSN assignments. This direct derivation of IGSNs from the DIS database context ensures the distinct parent-child hierarchy of the IGSNs among each other. In the case of COSC this method of inventory-keeping of all drill cores was done routinely using the ExpeditionDIS during field work and subsequent sampling party. After completing the field campaign, all sample material was transferred to the "Nationales Bohrkernlager" in Berlin-Spandau, Germany. Corresponding data was subsequently imported into the CurationDIS used at the aforementioned core storage

  16. Feasibility studies on large sample neutron activation analysis using a low power research reactor

    International Nuclear Information System (INIS)

    Gyampo, O.

    2008-06-01

    Instrumental neutron activation analysis (INAA) using the Ghana Research Reactor-1 (GHARR-1) can be directly applied to samples with masses in grams. Sample weights were in the range of 0.5 g to 5 g. Therefore, the representativity of the sample is improved as well as the sensitivity. Irradiation of samples was done using a low power research reactor. The correction for the neutron self-shielding within the sample is determined from measurement of the neutron flux depression just outside the sample. Correction for gamma-ray self-attenuation in the sample was performed via linear attenuation coefficients derived from transmission measurements. Quantitative and qualitative analysis of data was done using gamma-ray spectrometry (HPGe detector). The results of this study on the possibilities of large sample NAA using a miniature neutron source reactor (MNSR) show clearly that the Ghana Research Reactor-1 (GHARR-1) at the National Nuclear Research Institute (NNRI) can be used for sample analyses up to 5 grams (5 g) using the pneumatic transfer systems.
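
    A minimal sketch of the transmission-based correction mentioned above: derive a linear attenuation coefficient from a transmission measurement and apply a simple slab self-attenuation factor. The slab geometry and all numbers below are assumptions, not the laboratory's actual procedure.

```python
import numpy as np

def linear_attenuation_coefficient(I0, I, thickness_cm):
    """mu (1/cm) from a narrow-beam transmission measurement: I = I0 * exp(-mu * t)."""
    return -np.log(I / I0) / thickness_cm

def slab_self_attenuation_factor(mu, thickness_cm):
    """Average attenuation for gamma rays emitted uniformly within a slab,
    f = (1 - exp(-mu*t)) / (mu*t); divide the measured count rate by f to correct."""
    x = mu * thickness_cm
    return (1.0 - np.exp(-x)) / x

mu = linear_attenuation_coefficient(I0=10000, I=7200, thickness_cm=2.0)  # counts, assumed
f = slab_self_attenuation_factor(mu, thickness_cm=2.0)
print(f"mu = {mu:.3f} 1/cm, self-attenuation factor = {f:.3f}")
```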

  17. Software engineering the mixed model for genome-wide association studies on large samples

    Science.gov (United States)

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample siz...

  18. Fluid sample collection and distribution system. [qualitative analysis of aqueous samples from several points

    Science.gov (United States)

    Brooks, R. L. (Inventor)

    1979-01-01

    A multipoint fluid sample collection and distribution system is provided wherein the sample inputs are made through one or more of a number of sampling valves to a progressive cavity pump which is not susceptible to damage by large unfiltered particles. The pump output is through a filter unit that can provide a filtered multipoint sample. An unfiltered multipoint sample is also provided. An effluent sample can be taken and applied to a second progressive cavity pump for pumping to a filter unit that can provide one or more filtered effluent samples. The second pump can also provide an unfiltered effluent sample. Means are provided to periodically back flush each filter unit without shutting off the whole system.

  19. Determination of 129I in large soil samples after alkaline wet disintegration

    International Nuclear Information System (INIS)

    Bunzl, K.; Kracke, W.

    1992-01-01

    Large soil samples (up to 500 g) can conveniently be disintegrated by hydrogen peroxide in an utility tank under alkaline conditions to determine subsequently 129 I by neutron activation analysis. Interfering elements such as Br are removed already before neutron irradiation to reduce the radiation exposure of the personnel. The precision of the method is 129 I also by the combustion method. (orig.)

  20. SECRET SHARING SCHEMES WITH STRONG MULTIPLICATION AND A LARGE NUMBER OF PLAYERS FROM TORIC VARIETIES

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    2017-01-01

    This article considers Massey's construction of linear secret sharing schemes from toric varieties over a finite field $\mathbb{F}_q$ with $q$ elements. The number of players can be as large as $(q-1)^r-1$ for $r\geq 1$. The schemes have strong multiplication; such schemes can be utilized in ...

  1. Estimating the numbers of malaria infections in blood samples using high-resolution genotyping data.

    Directory of Open Access Journals (Sweden)

    Amanda Ross

    People living in endemic areas often harbour several malaria infections at once. High-resolution genotyping can distinguish between infections by detecting the presence of different alleles at a polymorphic locus. However, the number of infections may not be accurately counted since parasites from multiple infections may carry the same allele. We use simulation to determine the circumstances under which the number of observed genotypes is likely to be substantially less than the number of infections present, and investigate the performance of two methods for estimating the numbers of infections from high-resolution genotyping data. The simulations suggest that the problem is not substantial in most datasets: the disparity between the mean numbers of infections and of observed genotypes was small when there were 20 or more alleles, 20 or more blood samples, a mean number of infections of 6 or fewer, and where the frequency of the most common allele was no greater than 20%. The issue of multiple infections carrying the same allele is unlikely to be a major component of the errors in PCR-based genotyping. Simulations also showed that, with heterogeneity in allele frequencies, the observed frequencies are not a good approximation of the true allele frequencies. The first method that we proposed to estimate the numbers of infections assumes that they are a good approximation and hence did poorly in the presence of heterogeneity. In contrast, the second method by Li et al. estimates both the numbers of infections and the true allele frequencies simultaneously and produced accurate estimates of the mean number of infections.
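
    A small simulation in the spirit described above: each blood sample carries several infections whose alleles are drawn from a population frequency distribution, and the number of distinct observed alleles is compared with the true number of infections. The allele frequencies and infection numbers are invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate(n_samples=50, mean_infections=4, n_alleles=20, skew=1.0):
    """Return (true number of infections, distinct observed alleles) per sample."""
    freqs = rng.dirichlet(np.full(n_alleles, skew))   # population allele frequencies
    true, observed = [], []
    for _ in range(n_samples):
        k = max(1, rng.poisson(mean_infections))      # infections in this person
        alleles = rng.choice(n_alleles, size=k, p=freqs)
        true.append(k)
        observed.append(len(set(alleles.tolist())))
    return np.array(true), np.array(observed)

true_k, obs_k = simulate()
print("mean infections:", true_k.mean(), " mean observed genotypes:", obs_k.mean())
```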

  2. Determinants of salivary evening alpha-amylase in a large sample free of psychopathology

    NARCIS (Netherlands)

    Veen, Gerthe; Giltay, Erik J.; Vreeburg, Sophie A.; Licht, Carmilla M. M.; Cobbaert, Christa M.; Zitman, Frans G.; Penninx, Brenda W. J. H.

    Objective: Recently, salivary alpha-amylase (sAA) has been proposed as a suitable index for sympathetic activity and dysregulation of the autonomic nervous system (ANS). Although determinants of sAA have been described, they have not been studied within the same study with a large sample size

  3. Water pollution screening by large-volume injection of aqueous samples and application to GC/MS analysis of a river Elbe sample

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, S.; Efer, J.; Engewald, W. [Leipzig Univ. (Germany). Inst. fuer Analytische Chemie

    1997-03-01

    The large-volume sampling of aqueous samples in a programmed temperature vaporizer (PTV) injector was used successfully for the target and non-target analysis of real samples. In this still rarely applied method, e.g., 1 mL of the water sample to be analyzed is slowly injected directly into the PTV. The vaporized water is eliminated through the split vent. The analytes are concentrated onto an adsorbent inside the insert and subsequently thermally desorbed. The capability of the method is demonstrated using a sample from the river Elbe. By means of coupling this method with a mass selective detector in SIM mode (target analysis) the method allows the determination of pollutants in the concentration range up to 0.01 µg/L. Furthermore, PTV enrichment is an effective and time-saving method for non-target analysis in SCAN mode. In a sample from the river Elbe over 20 compounds were identified. (orig.) With 3 figs., 2 tabs.

  4. A hard-to-read font reduces the framing effect in a large sample.

    Science.gov (United States)

    Korn, Christoph W; Ries, Juliane; Schalk, Lennart; Oganian, Yulia; Saalbach, Henrik

    2018-04-01

    How can apparent decision biases, such as the framing effect, be reduced? Intriguing findings within recent years indicate that foreign language settings reduce framing effects, which has been explained in terms of deeper cognitive processing. Because hard-to-read fonts have been argued to trigger deeper cognitive processing, so-called cognitive disfluency, we tested whether hard-to-read fonts reduce framing effects. We found no reliable evidence for an effect of hard-to-read fonts on four framing scenarios in a laboratory (final N = 158) and an online study (N = 271). However, in a preregistered online study with a rather large sample (N = 732), a hard-to-read font reduced the framing effect in the classic "Asian disease" scenario (in a one-sided test). This suggests that hard-to-read fonts can modulate decision biases, albeit with rather small effect sizes. Overall, our findings stress the importance of large samples for the reliability and replicability of modulations of decision biases.

  5. A non-destructive technique for assigning effective atomic number to scientific samples by scattering of 59.54 keV gamma photons

    International Nuclear Information System (INIS)

    Singh, M.P.; Sharma, Amandeep; Singh, Bhajan; Sandhu, B.S.

    2010-01-01

    The objective of the present experiment, employing scattering of 59.54 keV gamma photons, is to assign an effective atomic number (Z_eff) to scientific samples (rare earths) of known composition. An HPGe semiconductor detector, placed at 90° to the incident beam, detects gamma photons scattered from the sample under investigation. The experiment is performed on various elements with atomic numbers satisfying 6 ≤ Z ≤ 82, for 59.54 keV incident photons. The intensity ratio of the Rayleigh to Compton scattered peaks, corrected for the photo-peak efficiency of the gamma detector and the absorption of photons in the sample and air, is plotted as a function of atomic number to constitute a best-fit curve. From this fit curve, the respective effective atomic numbers of the rare earth samples are determined. The agreement of the measured values of Z_eff with theoretical calculations is quite satisfactory.
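
    A small sketch of the calibration-curve idea: measure the Rayleigh-to-Compton intensity ratio for elements of known Z, interpolate the ratio versus Z, and read off an effective atomic number for an unknown sample. The ratio values below are invented placeholders, not measured data.

```python
import numpy as np

# Calibration elements of known atomic number and their (corrected)
# Rayleigh-to-Compton intensity ratios -- placeholder numbers, not measurements.
z_cal = np.array([6, 13, 26, 29, 42, 50, 74, 82], dtype=float)
ratio_cal = np.array([0.002, 0.01, 0.05, 0.07, 0.15, 0.22, 0.55, 0.70])

def effective_z(ratio_sample):
    """Assign Z_eff by inverse interpolation on the monotonic fit curve."""
    return float(np.interp(ratio_sample, ratio_cal, z_cal))

print("Z_eff for ratio 0.30 ->", round(effective_z(0.30), 1))
```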

  6. A novel SNP analysis method to detect copy number alterations with an unbiased reference signal directly from tumor samples

    Directory of Open Access Journals (Sweden)

    LaFramboise William A

    2011-01-01

    Background: Genomic instability in cancer leads to abnormal genome copy number alterations (CNA) as a mechanism underlying tumorigenesis. Using microarrays and other technologies, tumor CNA are detected by comparing tumor sample CN to normal reference sample CN. While advances in microarray technology have improved detection of copy number alterations, the increase in the number of measured signals, noise from array probes, variations in signal-to-noise ratio across batches and disparity across laboratories lead to significant limitations for the accurate identification of CNA regions when comparing tumor and normal samples. Methods: To address these limitations, we designed a novel "Virtual Normal" algorithm (VN), which allowed for construction of an unbiased reference signal directly from test samples within an experiment using any publicly available normal reference set as a baseline, thus eliminating the need for an in-lab normal reference set. Results: The algorithm was tested using an optimal, paired tumor/normal data set as well as previously uncharacterized pediatric malignant gliomas for which a normal reference set was not available. Using Affymetrix 250K Sty microarrays, we demonstrated improved signal-to-noise ratio and detected significant copy number alterations using the VN algorithm that were validated by independent PCR analysis of the target CNA regions. Conclusions: We developed and validated an algorithm to provide a virtual normal reference signal directly from tumor samples and minimize noise in the derivation of the raw CN signal. The algorithm reduces the variability of assays performed across different reagent and array batches, methods of sample preservation, multiple personnel, and among different laboratories. This approach may be valuable when matched normal samples are unavailable or the paired normal specimens have been subjected to variations in methods of preservation.

  7. A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids

    Energy Technology Data Exchange (ETDEWEB)

    Berres, Anne Sabine [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Adhinarayanan, Vignesh [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Turton, Terece [Univ. of Texas, Austin, TX (United States); Feng, Wu [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Rogers, David Honegger [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-05-12

    Large simulation data requires a lot of time and computational resources to compute, store, analyze, visualize, and run user studies. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance energy consumption and cognitive value of visualizations of resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline at the example of regular sampling. The reasons for this choice are two-fold: using a simple example reduces unnecessary complexity as we know what to expect from the results. Furthermore, it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test we did, and we conducted user studies in Amazon Mechanical Turk (AMT) for a range of different results we produced through sampling.

  8. Determining the number of samples required for decisions concerning remedial actions at hazardous waste sites

    International Nuclear Information System (INIS)

    Skiles, J.L.; Redfearn, A.; White, R.K.

    1991-01-01

    An important consideration for every risk analyst is how many field samples should be taken so that scientifically defensible decisions concerning the need for remediation of a hazardous waste site can be made. Since any plausible remedial action alternative must, at a minimum, satisfy the condition of protectiveness of human and environmental health, we propose a risk-based approach for determining the number of samples to take during remedial investigations rather than using more traditional approaches such as considering background levels of contamination or federal or state cleanup standards

  9. Association between genetic variation in a region on chromosome 11 and schizophrenia in large samples from Europe

    DEFF Research Database (Denmark)

    Rietschel, M; Mattheisen, M; Degenhardt, F

    2012-01-01

    the recruitment of very large samples of patients and controls (that is, tens of thousands), or large, potentially more homogeneous samples that have been recruited from confined geographical areas using identical diagnostic criteria. Applying the latter strategy, we performed a genome-wide association study (GWAS ... between emotion regulation and cognition that is structurally and functionally abnormal in SCZ and bipolar disorder. Molecular Psychiatry advance online publication, 12 July 2011; doi:10.1038/mp.2011.80.

  10. Thinking about dying and trying and intending to die: results on suicidal behavior from a large Web-based sample.

    Science.gov (United States)

    de Araújo, Rafael M F; Mazzochi, Leonardo; Lara, Diogo R; Ottoni, Gustavo L

    2015-03-01

    Suicide is an important worldwide public health problem. The aim of this study was to characterize risk factors of suicidal behavior using a large Web-based sample. The data were collected by the Brazilian Internet Study on Temperament and Psychopathology (BRAINSTEP) from November 2010 to July 2011. Suicidal behavior was assessed by an instrument based on the Suicidal Behaviors Questionnaire. The final sample consisted of 48,569 volunteers (25.9% men) with a mean ± SD age of 30.7 ± 10.1 years. More than 60% of the sample reported having had at least a passing thought of killing themselves, and 6.8% of subjects had previously attempted suicide (64% unplanned). The demographic features with the highest risk of attempting suicide were female gender (OR = 1.82, 95% CI = 1.65 to 2.00); elementary school as highest education level completed (OR = 2.84, 95% CI = 2.48 to 3.25); being unable to work (OR = 5.32, 95% CI = 4.15 to 6.81); having no religion (OR = 2.08, 95% CI = 1.90 to 2.29); and, only for female participants, being married (OR = 1.19, 95% CI = 1.08 to 1.32) or divorced (OR = 1.66, 95% CI = 1.41 to 1.96). A family history of a suicide attempt and of a completed suicide showed the same increment in the risk of suicidal behavior. The higher the number of suicide attempts, the higher was the real intention to die (P < .05). Those who really wanted to die reported more emptiness/loneliness (OR = 1.58, 95% CI = 1.35 to 1.85), disconnection (OR = 1.54, 95% CI = 1.30 to 1.81), and hopelessness (OR = 1.74, 95% CI = 1.49 to 2.03), but their methods were not different from the methods of those who did not mean to die. This large Web survey confirmed results from previous studies on suicidal behavior and pointed out the relevance of the number of previous suicide attempts and of a positive family history, even for a noncompleted suicide, as important risk factors. © Copyright 2015 Physicians Postgraduate Press, Inc.

  11. Development and validation of InnoQuant™, a sensitive human DNA quantitation and degradation assessment method for forensic samples using high copy number mobile elements Alu and SVA.

    Science.gov (United States)

    Pineda, Gina M; Montgomery, Anne H; Thompson, Robyn; Indest, Brooke; Carroll, Marion; Sinha, Sudhir K

    2014-11-01

    There is a constant need in forensic casework laboratories for an improved way to increase the first-pass success rate of forensic samples. The recent advances in mini STR analysis, SNP, and Alu marker systems have now made it possible to analyze highly compromised samples, yet few tools are available that can simultaneously provide an assessment of quantity, inhibition, and degradation in a sample prior to genotyping. Currently there are several different approaches used for fluorescence-based quantification assays which provide a measure of quantity and inhibition. However, a system which can also assess the extent of degradation in a forensic sample will be a useful tool for DNA analysts. Possessing this information prior to genotyping will allow an analyst to more informatively make downstream decisions for the successful typing of a forensic sample without unnecessarily consuming DNA extract. Real-time PCR provides a reliable method for determining the amount and quality of amplifiable DNA in a biological sample. Alu are Short Interspersed Elements (SINE), approximately 300bp insertions which are distributed throughout the human genome in large copy number. The use of an internal primer to amplify a segment of an Alu element allows for human specificity as well as high sensitivity when compared to a single copy target. The advantage of an Alu system is the presence of a large number (>1000) of fixed insertions in every human genome, which minimizes the individual specific variation possible when using a multi-copy target quantification system. This study utilizes two independent retrotransposon genomic targets to obtain quantification of an 80bp "short" DNA fragment and a 207bp "long" DNA fragment in a degraded DNA sample in the multiplex system InnoQuant™. The ratio of the two quantitation values provides a "Degradation Index", or a qualitative measure of a sample's extent of degradation. The Degradation Index was found to be predictive of the observed loss
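
    The "Degradation Index" described above is simply the ratio of the two quantitation values, taken here as the short-target quantity over the long-target quantity. The direction of the ratio, the numeric values, and any interpretation thresholds in the minimal sketch below are illustrative assumptions, not figures from the InnoQuant validation.

      def degradation_index(short_ng_ul, long_ng_ul):
          """Ratio of the 80 bp 'short' target to the 207 bp 'long' target.

          Values near 1 suggest largely intact DNA; larger values suggest
          degradation, because long fragments are lost preferentially."""
          if long_ng_ul <= 0:
              raise ValueError("long-target quantity must be positive")
          return short_ng_ul / long_ng_ul

      # Hypothetical sample quantified at 0.90 ng/uL (short) and 0.15 ng/uL (long)
      print(degradation_index(0.90, 0.15))  # -> 6.0, suggesting substantial degradation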

  12. teaching multiplication of large positive whole numbers using ...

    African Journals Online (AJOL)

    KEY WORDS: Grating Method, History of Mathematics, Long Multiplication. ... The Wolfram mathworld (n.d.) opined that the ... A further simple random sampling was carried out to select an intact class of 40 students from each of the sampled ...

  13. Analysis of submicrogram samples by INAA

    Energy Technology Data Exchange (ETDEWEB)

    Lindstrom, D J [National Aeronautics and Space Administration, Houston, TX (USA). Lyndon B. Johnson Space Center

    1990-12-20

    Procedures have been developed to increase the sensitivity of instrumental neutron activation analysis (INAA) so that cosmic-dust samples weighing only 10^-9-10^-7 g are routinely analyzed for a sizable number of elements. The primary differences from standard techniques are: (1) irradiation of the samples is much more intense, (2) gamma ray assay of the samples is done using long counting times and large Ge detectors that are operated in an excellent low-background facility, (3) specially prepared glass standards are used, (4) samples are too small to be weighed routinely and concentrations must be obtained indirectly, (5) sample handling is much more difficult, and contamination of small samples with normally insignificant amounts of contaminants is difficult to prevent. In spite of the difficulties, INAA analyses have been done on 15 cosmic-dust particles and a large number of other stratospheric particles. Two-sigma detection limits for some elements are in the range of femtograms (10^-15 g), e.g. Co=11, Sc=0.9, Sm=0.2. A particle weighing just 0.2 ng was analyzed, obtaining abundances with relative analytical uncertainties of less than 10% for four elements (Fe, Co, Ni and Sc), which were sufficient to allow identification of the particle as chondritic interplanetary dust. Larger samples allow abundances of twenty or more elements to be obtained. (orig.).

  14. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System

    KAUST Repository

    Makki, Behrooz; Svensson, Tommy; Eriksson, Thomas; Alouini, Mohamed-Slim

    2015-01-01

    In this paper, we investigate the performance of the point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of the transmit/receive antennas which are required to satisfy different outage probability constraints. We study the effect of the spatial correlation between the antennas on the system performance. Also, the required number of antennas is obtained for different fading conditions. Our results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 2015 IEEE.

  15. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System

    KAUST Repository

    Makki, Behrooz

    2015-11-12

    In this paper, we investigate the performance of the point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of the transmit/receive antennas which are required to satisfy different outage probability constraints. We study the effect of the spatial correlation between the antennas on the system performance. Also, the required number of antennas is obtained for different fading conditions. Our results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 2015 IEEE.

  16. Decomposition and (importance) sampling techniques for multi-stage stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G.

    1993-11-01

    The difficulty of solving large-scale multi-stage stochastic linear programs arises from the sheer number of scenarios associated with numerous stochastic parameters. The number of scenarios grows exponentially with the number of stages and problems get easily out of hand even for very moderate numbers of stochastic parameters per stage. Our method combines dual (Benders) decomposition with Monte Carlo sampling techniques. We employ importance sampling to efficiently obtain accurate estimates of both expected future costs and gradients and right-hand sides of cuts. The method enables us to solve practical large-scale problems with many stages and numerous stochastic parameters per stage. We discuss the theory of sharing and adjusting cuts between different scenarios in a stage. We derive probabilistic lower and upper bounds, where we use importance path sampling for the upper bound estimation. Initial numerical results turned out to be promising.
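
    The sampling step described above estimates expected future costs over scenarios by importance sampling rather than exhaustive enumeration. The sketch below illustrates only the likelihood-ratio reweighting idea; the toy scenario distribution and cost function are assumptions, not the decomposition method of the paper.

      import random

      def expected_cost_importance_sampling(cost, p_prob, q_sample, q_prob, n=10_000):
          """Estimate E_p[cost(s)] by drawing scenarios from a proposal q
          and reweighting each draw by the likelihood ratio p(s)/q(s)."""
          total = 0.0
          for _ in range(n):
              s = q_sample()                 # draw a scenario from the proposal
              total += (p_prob(s) / q_prob(s)) * cost(s)
          return total / n

      # Toy example: demand scenarios 0..9, uniform under p; the proposal q
      # oversamples the expensive high-demand scenarios.
      p_prob = lambda s: 0.1
      q_prob = lambda s: (s + 1) / 55.0
      q_sample = lambda: random.choices(range(10), weights=[q_prob(s) for s in range(10)])[0]
      cost = lambda s: s ** 2                # expensive scenarios dominate the expectation
      print(expected_cost_importance_sampling(cost, p_prob, q_sample, q_prob))  # close to 28.5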

  17. Obstructions to the realization of distance graphs with large chromatic numbers on spheres of small radii

    Energy Technology Data Exchange (ETDEWEB)

    Kupavskii, A B; Raigorodskii, A M [M. V. Lomonosov Moscow State University, Faculty of Mechanics and Mathematics, Moscow (Russian Federation)

    2013-10-31

    We investigate in detail some properties of distance graphs constructed on the integer lattice. Such graphs find wide applications in problems of combinatorial geometry, in particular, such graphs were employed to answer Borsuk's question in the negative and to obtain exponential estimates for the chromatic number of the space. This work is devoted to the study of the number of cliques and the chromatic number of such graphs under certain conditions. Constructions of sequences of distance graphs are given, in which the graphs have unit length edges and contain a large number of triangles that lie on a sphere of radius 1/√3 (which is the minimum possible). At the same time, the chromatic numbers of the graphs depend exponentially on their dimension. The results of this work strengthen and generalize some of the results obtained in a series of papers devoted to related issues. Bibliography: 29 titles.

  18. A novel storage system for cryoEM samples.

    Science.gov (United States)

    Scapin, Giovanna; Prosise, Winifred W; Wismer, Michael K; Strickland, Corey

    2017-07-01

    We present here a new CryoEM grid boxes storage system designed to simplify sample labeling, tracking and retrieval. The system is based on the crystal pucks widely used by the X-ray crystallographic community for storage and shipping of crystals. This system is suitable for any cryoEM laboratory, but especially for large facilities that will need accurate tracking of large numbers of samples coming from different sources. Copyright © 2017. Published by Elsevier Inc.

  19. Recreating Raven's: software for systematically generating large numbers of Raven-like matrix problems with normed properties.

    Science.gov (United States)

    Matzen, Laura E; Benz, Zachary O; Dixon, Kevin R; Posey, Jamie; Kroger, James K; Speed, Ann E

    2010-05-01

    Raven's Progressive Matrices is a widely used test for assessing intelligence and reasoning ability (Raven, Court, & Raven, 1998). Since the test is nonverbal, it can be applied to many different populations and has been used all over the world (Court & Raven, 1995). However, relatively few matrices are in the sets developed by Raven, which limits their use in experiments requiring large numbers of stimuli. For the present study, we analyzed the types of relations that appear in Raven's original Standard Progressive Matrices (SPMs) and created a software tool that can combine the same types of relations according to parameters chosen by the experimenter, to produce very large numbers of matrix problems with specific properties. We then conducted a norming study in which the matrices we generated were compared with the actual SPMs. This study showed that the generated matrices both covered and expanded on the range of problem difficulties provided by the SPMs.

  20. Psychometric Evaluation of the Thought–Action Fusion Scale in a Large Clinical Sample

    Science.gov (United States)

    Meyer, Joseph F.; Brown, Timothy A.

    2015-01-01

    This study examined the psychometric properties of the 19-item Thought–Action Fusion (TAF) Scale, a measure of maladaptive cognitive intrusions, in a large clinical sample (N = 700). An exploratory factor analysis (n = 300) yielded two interpretable factors: TAF Moral (TAF-M) and TAF Likelihood (TAF-L). A confirmatory bifactor analysis was conducted on the second portion of the sample (n = 400) to account for possible sources of item covariance using a general TAF factor (subsuming TAF-M) alongside the TAF-L domain-specific factor. The bifactor model provided an acceptable fit to the sample data. Results indicated that global TAF was more strongly associated with a measure of obsessive-compulsiveness than measures of general worry and depression, and the TAF-L dimension was more strongly related to obsessive-compulsiveness than depression. Overall, results support the bifactor structure of the TAF in a clinical sample and its close relationship to its neighboring obsessive-compulsiveness construct. PMID:22315482

  1. Psychometric evaluation of the thought-action fusion scale in a large clinical sample.

    Science.gov (United States)

    Meyer, Joseph F; Brown, Timothy A

    2013-12-01

    This study examined the psychometric properties of the 19-item Thought-Action Fusion (TAF) Scale, a measure of maladaptive cognitive intrusions, in a large clinical sample (N = 700). An exploratory factor analysis (n = 300) yielded two interpretable factors: TAF Moral (TAF-M) and TAF Likelihood (TAF-L). A confirmatory bifactor analysis was conducted on the second portion of the sample (n = 400) to account for possible sources of item covariance using a general TAF factor (subsuming TAF-M) alongside the TAF-L domain-specific factor. The bifactor model provided an acceptable fit to the sample data. Results indicated that global TAF was more strongly associated with a measure of obsessive-compulsiveness than measures of general worry and depression, and the TAF-L dimension was more strongly related to obsessive-compulsiveness than depression. Overall, results support the bifactor structure of the TAF in a clinical sample and its close relationship to its neighboring obsessive-compulsiveness construct.

  2. Post-traumatic stress syndrome in a large sample of older adults: determinants and quality of life.

    Science.gov (United States)

    Lamoureux-Lamarche, Catherine; Vasiliadis, Helen-Maria; Préville, Michel; Berbiche, Djamal

    2016-01-01

    The aims of this study are to assess in a sample of older adults consulting in primary care practices the determinants and quality of life associated with post-traumatic stress syndrome (PTSS). Data used came from a large sample of 1765 community-dwelling older adults who were waiting to receive health services in primary care clinics in the province of Quebec. PTSS was measured with the PTSS scale. Socio-demographic and clinical characteristics were used as potential determinants of PTSS. Quality of life was measured with the EuroQol-5D-3L (EQ-5D-3L) EQ-Visual Analog Scale and the Satisfaction With Your Life Scale. Multivariate logistic and linear regression models were used to study the presence of PTSS and different measures of health-related quality of life and quality of life as a function of study variables. The six-month prevalence of PTSS was 11.0%. PTSS was associated with age, marital status, number of chronic disorders and the presence of an anxiety disorder. PTSS was also associated with the EQ-5D-3L and the Satisfaction with Your Life Scale. PTSS is prevalent in patients consulting in primary care practices. Primary care physicians should be aware that PTSS is also associated with a decrease in quality of life, which can further negatively impact health status.

  3. Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering

    Science.gov (United States)

    Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki

    2018-03-01

    We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
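
    Multiple importance sampling combines draws from several strategies (here, a stratified set and an importance-driven set) using per-sample weights such as the balance heuristic. The sketch below shows only that weighting; it is not the authors' SSAO estimator, and the sample counts and densities are made-up values.

      def balance_heuristic(pdfs, i, n_samples):
          """Balance-heuristic weight for a sample drawn from strategy i.

          pdfs[k] is the density strategy k assigns to this sample;
          n_samples[k] is how many samples strategy k contributes."""
          den = sum(n * p for n, p in zip(n_samples, pdfs))
          return (n_samples[i] * pdfs[i]) / den if den > 0 else 0.0

      # A sample generated by the importance strategy (index 1) that the
      # stratified strategy (index 0) would rarely produce gets most of the weight.
      print(balance_heuristic(pdfs=[0.05, 0.60], i=1, n_samples=[16, 16]))  # ~0.92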

  4. New feature for an old large number

    International Nuclear Information System (INIS)

    Novello, M.; Oliveira, L.R.A.

    1986-01-01

    A new context for the appearance of the Eddington number (10^39), which is due to the examination of elastic scattering of scalar particles (ΠK → ΠK) non-minimally coupled to gravity, is presented. (author) [pt

  5. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    Science.gov (United States)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous
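
    For readers unfamiliar with the method-of-moments (Matheron) estimator mentioned above, the sketch below bins half the squared differences of paired observations by separation distance. The synthetic lognormal field stands in for skewed throughfall data; it is not the study's data, and the robust and residual-maximum-likelihood estimators are not reproduced here.

      import numpy as np

      def empirical_semivariogram(coords, values, bin_edges):
          """Method-of-moments estimate: gamma(h) = mean of 0.5*(z_i - z_j)^2
          over all point pairs whose separation falls in each distance bin."""
          coords = np.asarray(coords, dtype=float)
          values = np.asarray(values, dtype=float)
          d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
          sq = 0.5 * (values[:, None] - values[None, :]) ** 2
          iu = np.triu_indices(len(values), k=1)      # count each pair once
          dists, semis = d[iu], sq[iu]
          gamma = []
          for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
              mask = (dists >= lo) & (dists < hi)
              gamma.append(semis[mask].mean() if mask.any() else np.nan)
          return np.asarray(gamma)

      # Hypothetical use: 150 collectors scattered over a 50 m x 50 m plot
      rng = np.random.default_rng(0)
      xy = rng.uniform(0, 50, size=(150, 2))
      z = rng.lognormal(mean=0.0, sigma=0.5, size=150)  # skewed, non-Gaussian values
      print(empirical_semivariogram(xy, z, bin_edges=np.linspace(0, 25, 6)))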

  6. Proteomic Biomarker Discovery in 1000 Human Plasma Samples with Mass Spectrometry.

    Science.gov (United States)

    Cominetti, Ornella; Núñez Galindo, Antonio; Corthésy, John; Oller Moreno, Sergio; Irincheeva, Irina; Valsesia, Armand; Astrup, Arne; Saris, Wim H M; Hager, Jörg; Kussmann, Martin; Dayon, Loïc

    2016-02-05

    The overall impact of proteomics on clinical research and its translation has lagged behind expectations. One recognized caveat is the limited size (subject numbers) of (pre)clinical studies performed at the discovery stage, the findings of which fail to be replicated in larger verification/validation trials. Compromised study designs and insufficient statistical power are consequences of the to-date still limited capacity of mass spectrometry (MS)-based workflows to handle large numbers of samples in a realistic time frame, while delivering comprehensive proteome coverages. We developed a highly automated proteomic biomarker discovery workflow. Herein, we have applied this approach to analyze 1000 plasma samples from the multicentered human dietary intervention study "DiOGenes". Study design, sample randomization, tracking, and logistics were the foundations of our large-scale study. We checked the quality of the MS data and provided descriptive statistics. The data set was interrogated for proteins with most stable expression levels in that set of plasma samples. We evaluated standard clinical variables that typically impact forthcoming results and assessed body mass index-associated and gender-specific proteins at two time points. We demonstrate that analyzing a large number of human plasma samples for biomarker discovery with MS using isobaric tagging is feasible, providing robust and consistent biological results.

  7. Reliability and statistical power analysis of cortical and subcortical FreeSurfer metrics in a large sample of healthy elderly.

    Science.gov (United States)

    Liem, Franziskus; Mérillat, Susan; Bezzola, Ladina; Hirsiger, Sarah; Philipp, Michel; Madhyastha, Tara; Jäncke, Lutz

    2015-03-01

    FreeSurfer is a tool to quantify cortical and subcortical brain anatomy automatically and noninvasively. Previous studies have reported reliability and statistical power analyses in relatively small samples or only selected one aspect of brain anatomy. Here, we investigated reliability and statistical power of cortical thickness, surface area, volume, and the volume of subcortical structures in a large sample (N=189) of healthy elderly subjects (64+ years). Reliability (intraclass correlation coefficient) of cortical and subcortical parameters is generally high (cortical: ICCs>0.87, subcortical: ICCs>0.95). Surface-based smoothing increases reliability of cortical thickness maps, while it decreases reliability of cortical surface area and volume. Nevertheless, statistical power of all measures benefits from smoothing. When aiming to detect a 10% difference between groups, the number of subjects required to test effects with sufficient power over the entire cortex varies between cortical measures (cortical thickness: N=39, surface area: N=21, volume: N=81; 10mm smoothing, power=0.8, α=0.05). For subcortical regions this number is between 16 and 76 subjects, depending on the region. We also demonstrate the advantage of within-subject designs over between-subject designs. Furthermore, we publicly provide a tool that allows researchers to perform a priori power analysis and sensitivity analysis to help evaluate previously published studies and to design future studies with sufficient statistical power. Copyright © 2014 Elsevier Inc. All rights reserved.
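
    An a priori calculation of the kind the abstract refers to can be approximated with standard two-sample power formulas. The sketch below uses statsmodels; the Cohen's d value is an illustrative assumption, since the standardized effect size corresponding to a "10% difference" depends on each measure's between-subject variability.

      from statsmodels.stats.power import TTestIndPower

      # Assume a 10% group difference corresponds to Cohen's d = 0.65 for some
      # cortical measure (a made-up value for illustration).
      n_per_group = TTestIndPower().solve_power(effect_size=0.65, alpha=0.05,
                                                power=0.8, alternative="two-sided")
      print(round(n_per_group))  # roughly 38 subjects per group under these assumptions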

  8. Does developmental timing of exposure to child maltreatment predict memory performance in adulthood? Results from a large, population-based sample.

    Science.gov (United States)

    Dunn, Erin C; Busso, Daniel S; Raffeld, Miriam R; Smoller, Jordan W; Nelson, Charles A; Doyle, Alysa E; Luk, Gigi

    2016-01-01

    Although maltreatment is a known risk factor for multiple adverse outcomes across the lifespan, its effects on cognitive development, especially memory, are poorly understood. Using data from a large, nationally representative sample of young adults (Add Health), we examined the effects of physical and sexual abuse on working and short-term memory in adulthood. We examined the association between exposure to maltreatment as well as its timing of first onset after adjusting for covariates. Of our sample, 16.50% of respondents were exposed to physical abuse and 4.36% to sexual abuse by age 17. An analysis comparing unexposed respondents to those exposed to physical or sexual abuse did not yield any significant differences in adult memory performance. However, two developmental time periods emerged as important for shaping memory following exposure to sexual abuse, but in opposite ways. Relative to non-exposed respondents, those exposed to sexual abuse during early childhood (ages 3-5), had better number recall and those first exposed during adolescence (ages 14-17) had worse number recall. However, other variables, including socioeconomic status, played a larger role (than maltreatment) on working and short-term memory. We conclude that a simple examination of "exposed" versus "unexposed" respondents may obscure potentially important within-group differences that are revealed by examining the effects of age at onset to maltreatment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    Science.gov (United States)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.

  10. Wall modeled large eddy simulations of complex high Reynolds number flows with synthetic inlet turbulence

    International Nuclear Information System (INIS)

    Patil, Sunil; Tafti, Danesh

    2012-01-01

    Highlights: ► Large eddy simulation. ► Wall layer modeling. ► Synthetic inlet turbulence. ► Swirl flows. - Abstract: Large eddy simulations of complex high Reynolds number flows are carried out with the near wall region being modeled with a zonal two layer model. A novel formulation for solving the turbulent boundary layer equation for the effective tangential velocity in a generalized co-ordinate system is presented and applied in the near wall zonal treatment. This formulation reduces the computational time in the inner layer significantly compared to the conventional two layer formulations present in the literature and is most suitable for complex geometries involving body fitted structured and unstructured meshes. The cost effectiveness and accuracy of the proposed wall model, used with the synthetic eddy method (SEM) to generate inlet turbulence, is investigated in turbulent channel flow, flow over a backward facing step, and confined swirling flows at moderately high Reynolds numbers. Predictions are compared with available DNS, experimental LDV data, as well as wall resolved LES. In all cases, there is at least an order of magnitude reduction in computational cost with no significant loss in prediction accuracy.

  11. Nuclear refugees after large radioactive releases

    International Nuclear Information System (INIS)

    Pascucci-Cahen, Ludivine; Groell, Jérôme

    2016-01-01

    However improbable, large radioactive releases from a nuclear power plant would entail major consequences for the surrounding population. In Fukushima, 80,000 people had to evacuate the most contaminated areas around the NPP for a prolonged period of time. These people have been called “nuclear refugees”. The paper first argues that the number of nuclear refugees is a better measure of the severity of radiological consequences than the number of fatalities, although the latter is widely used to assess other catastrophic events such as earthquakes or tsunami. It is a valuable partial indicator in the context of comprehensive studies of overall consequences. Section 2 makes a clear distinction between long-term relocation and emergency evacuation and proposes a method to estimate the number of refugees. Section 3 examines the distribution of nuclear refugees with respect to weather and release site. The distribution is asymmetric and fat-tailed: unfavorable weather can lead to the contamination of large areas of land; large cities have in turn a higher probability of being contaminated. - Highlights: • Number of refugees is a good indicator of the severity of radiological consequences. • It is a better measure of the long-term consequences than the number of fatalities. • A representative meteorological sample should be sufficiently large. • The number of refugees highly depends on the release site in a country like France.

  12. Investigating sex differences in psychological predictors of snack intake among a large representative sample

    NARCIS (Netherlands)

    Adriaanse, M.A.; Evers, C.; Verhoeven, A.A.C.; de Ridder, D.T.D.

    It is often assumed that there are substantial sex differences in eating behaviour (e.g. women are more likely to be dieters or emotional eaters than men). The present study investigates this assumption in a large representative community sample while incorporating a comprehensive set of

  13. Enhanced, targeted sampling of high-dimensional free-energy landscapes using variationally enhanced sampling, with an application to chignolin.

    Science.gov (United States)

    Shaffer, Patrick; Valsson, Omar; Parrinello, Michele

    2016-02-02

    The capabilities of molecular simulations have been greatly extended by a number of widely used enhanced sampling methods that facilitate escaping from metastable states and crossing large barriers. Despite these developments there are still many problems which remain out of reach for these methods which has led to a vigorous effort in this area. One of the most important problems that remains unsolved is sampling high-dimensional free-energy landscapes and systems that are not easily described by a small number of collective variables. In this work we demonstrate a new way to compute free-energy landscapes of high dimensionality based on the previously introduced variationally enhanced sampling, and we apply it to the miniprotein chignolin.

  14. Enhanced, targeted sampling of high-dimensional free-energy landscapes using variationally enhanced sampling, with an application to chignolin

    Science.gov (United States)

    Shaffer, Patrick; Valsson, Omar; Parrinello, Michele

    2016-01-01

    The capabilities of molecular simulations have been greatly extended by a number of widely used enhanced sampling methods that facilitate escaping from metastable states and crossing large barriers. Despite these developments there are still many problems which remain out of reach for these methods which has led to a vigorous effort in this area. One of the most important problems that remains unsolved is sampling high-dimensional free-energy landscapes and systems that are not easily described by a small number of collective variables. In this work we demonstrate a new way to compute free-energy landscapes of high dimensionality based on the previously introduced variationally enhanced sampling, and we apply it to the miniprotein chignolin. PMID:26787868

  15. Random number generators for large-scale parallel Monte Carlo simulations on FPGA

    Science.gov (United States)

    Lin, Y.; Wang, F.; Liu, B.

    2018-05-01

    Through parallelization, field programmable gate array (FPGA) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGA presents both new constraints and new opportunities for the implementations of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application based tests, this study evaluates all of the four RNGs used in previous FPGA based MC studies and newly proposed FPGA implementations for two well-known high-quality RNGs that are suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations: a parallel version of additive lagged Fibonacci generator (Parallel ALFG) is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
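
    An additive lagged Fibonacci generator of the kind evaluated above follows the recurrence x_n = (x_{n-s} + x_{n-r}) mod 2^m with lags s < r. The software sketch below uses the classic lag pair (24, 55) and a 32-bit word purely for illustration; the FPGA parallelization itself is not shown.

      import random
      from collections import deque

      class AdditiveLaggedFibonacci:
          """x_n = (x_{n-24} + x_{n-55}) mod 2**32, seeded from Python's RNG."""
          S, R, MOD = 24, 55, 2 ** 32

          def __init__(self, seed=12345):
              rnd = random.Random(seed)
              seed_vals = [rnd.randrange(self.MOD) for _ in range(self.R)]
              seed_vals[0] |= 1                             # ensure at least one odd seed word
              self.state = deque(seed_vals, maxlen=self.R)  # holds the last R outputs

          def next(self):
              new = (self.state[0] + self.state[self.R - self.S]) % self.MOD
              self.state.append(new)                        # maxlen drops x_{n-R}
              return new

      gen = AdditiveLaggedFibonacci()
      print([gen.next() for _ in range(3)])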

  16. Genetic Characterization of Echinococcus granulosus from a Large Number of Formalin-Fixed, Paraffin-Embedded Tissue Samples of Human Isolates in Iran

    Science.gov (United States)

    Rostami, Sima; Torbaghan, Shams Shariat; Dabiri, Shahriar; Babaei, Zahra; Mohammadi, Mohammad Ali; Sharbatkhori, Mitra; Harandi, Majid Fasihi

    2015-01-01

    Cystic echinococcosis (CE), caused by the larval stage of Echinococcus granulosus, presents an important medical and veterinary problem globally, including that in Iran. Different genotypes of E. granulosus have been reported from human isolates worldwide. This study identifies the genotype of the parasite responsible for human hydatidosis in three provinces of Iran using formalin-fixed paraffin-embedded tissue samples. In this study, 200 formalin-fixed paraffin-embedded tissue samples from human CE cases were collected from Alborz, Tehran, and Kerman provinces. Polymerase chain reaction amplification and sequencing of the partial mitochondrial cytochrome c oxidase subunit 1 gene were performed for genetic characterization of the samples. Phylogenetic analysis of the isolates from this study and reference sequences of different genotypes was done using a maximum likelihood method. In total, 54.4%, 0.8%, 1%, and 40.8% of the samples were identified as the G1, G2, G3, and G6 genotypes, respectively. The findings of the current study confirm the G1 genotype (sheep strain) to be the most prevalent genotype involved in human CE cases in Iran and indicates the high prevalence of the G6 genotype with a high infectivity for humans. Furthermore, this study illustrates the first documented human CE case in Iran infected with the G2 genotype. PMID:25535316

  17. The use of mass spectrometry for analysing metabolite biomarkers in epidemiology: methodological and statistical considerations for application to large numbers of biological samples.

    Science.gov (United States)

    Lind, Mads V; Savolainen, Otto I; Ross, Alastair B

    2016-08-01

    Data quality is critical for epidemiology, and as scientific understanding expands, the range of data available for epidemiological studies and the types of tools used for measurement have also expanded. It is essential for the epidemiologist to have a grasp of the issues involved with different measurement tools. One tool that is increasingly being used for measuring biomarkers in epidemiological cohorts is mass spectrometry (MS), because of the high specificity and sensitivity of MS-based methods and the expanding range of biomarkers that can be measured. Further, the ability of MS to quantify many biomarkers simultaneously is advantageous compared to single-biomarker methods. However, as with all methods used to measure biomarkers, there are a number of pitfalls to consider which may have an impact on results when used in epidemiology. In this review we discuss the use of MS for biomarker analyses, focusing on metabolites and their application and potential issues related to large-scale epidemiology studies, the use of MS "omics" approaches for biomarker discovery and how MS-based results can be used for increasing biological knowledge gained from epidemiological studies. Better understanding of the possibilities and possible problems related to MS-based measurements will help the epidemiologist in their discussions with analytical chemists and lead to the use of the most appropriate statistical tools for these data.

  18. A comparison of three approaches to compute the effective Reynolds number of the implicit large-eddy simulations

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Ye [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Thornber, Ben [The Univ. of Sydney, Sydney, NSW (Australia)

    2016-04-12

    Here, the implicit large-eddy simulation (ILES) has been utilized as an effective approach for calculating many complex flows at high Reynolds number flows. Richtmyer–Meshkov instability (RMI) induced flow can be viewed as a homogeneous decaying turbulence (HDT) after the passage of the shock. In this article, a critical evaluation of three methods for estimating the effective Reynolds number and the effective kinematic viscosity is undertaken utilizing high-resolution ILES data. Effective Reynolds numbers based on the vorticity and dissipation rate, or the integral and inner-viscous length scales, are found to be the most self-consistent when compared to the expected phenomenology and wind tunnel experiments.

  19. Large-volume constant-concentration sampling technique coupling with surface-enhanced Raman spectroscopy for rapid on-site gas analysis.

    Science.gov (United States)

    Zhang, Zhuomin; Zhan, Yisen; Huang, Yichun; Li, Gongke

    2017-08-05

    In this work, a portable large-volume constant-concentration (LVCC) sampling technique coupling with surface-enhanced Raman spectroscopy (SERS) was developed for the rapid on-site gas analysis based on suitable derivatization methods. LVCC sampling technique mainly consisted of a specially designed sampling cell including the rigid sample container and flexible sampling bag, and an absorption-derivatization module with a portable pump and a gas flowmeter. LVCC sampling technique allowed large, alterable and well-controlled sampling volume, which kept the concentration of gas target in headspace phase constant during the entire sampling process and made the sampling result more representative. Moreover, absorption and derivatization of gas target during LVCC sampling process were efficiently merged in one step using bromine-thiourea and OPA-NH4+ strategy for ethylene and SO2 respectively, which made LVCC sampling technique conveniently adapted to consequent SERS analysis. Finally, a new LVCC sampling-SERS method was developed and successfully applied for rapid analysis of trace ethylene and SO2 from fruits. It was satisfied that trace ethylene and SO2 from real fruit samples could be actually and accurately quantified by this method. The minor concentration fluctuations of ethylene and SO2 during the entire LVCC sampling process were proved to be gas targets from real samples by SERS. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. SVA retrotransposon insertion-associated deletion represents a novel mutational mechanism underlying large genomic copy number changes with non-recurrent breakpoints

    Science.gov (United States)

    2014-01-01

    Background Genomic disorders are caused by copy number changes that may exhibit recurrent breakpoints processed by nonallelic homologous recombination. However, region-specific disease-associated copy number changes have also been observed which exhibit non-recurrent breakpoints. The mechanisms underlying these non-recurrent copy number changes have not yet been fully elucidated. Results We analyze large NF1 deletions with non-recurrent breakpoints as a model to investigate the full spectrum of causative mechanisms, and observe that they are mediated by various DNA double strand break repair mechanisms, as well as aberrant replication. Further, two of the 17 NF1 deletions with non-recurrent breakpoints, identified in unrelated patients, occur in association with the concomitant insertion of SINE/variable number of tandem repeats/Alu (SVA) retrotransposons at the deletion breakpoints. The respective breakpoints are refractory to analysis by standard breakpoint-spanning PCRs and are only identified by means of optimized PCR protocols designed to amplify across GC-rich sequences. The SVA elements are integrated within SUZ12P intron 8 in both patients, and were mediated by target-primed reverse transcription of SVA mRNA intermediates derived from retrotranspositionally active source elements. Both SVA insertions occurred during early postzygotic development and are uniquely associated with large deletions of 1 Mb and 867 kb, respectively, at the insertion sites. Conclusions Since active SVA elements are abundant in the human genome and the retrotranspositional activity of many SVA source elements is high, SVA insertion-associated large genomic deletions encompassing many hundreds of kilobases could constitute a novel and as yet under-appreciated mechanism underlying large-scale copy number changes in the human genome. PMID:24958239

  1. Velocity dependent passive sampling for monitoring of micropollutants in dynamic stormwater discharges

    DEFF Research Database (Denmark)

    Birch, Heidi; Sharma, Anitha Kumari; Vezzaro, Luca

    2013-01-01

    Micropollutant monitoring in stormwater discharges is challenging because of the diversity of sources and thus large number of pollutants found in stormwater. This is further complicated by the dynamics in runoff flows and the large number of discharge points. Most passive samplers are non-ideal for sampling such systems because they sample in a time-integrative manner. This paper reports a test of a flow-through passive sampler, deployed in stormwater runoff at the outlet of a residential-industrial catchment. Momentum from the water velocity during runoff events created flow through the sampler, resulting in velocity dependent sampling. This approach enables the integrative sampling of stormwater runoff during periods of weeks to months while weighting actual runoff events higher than no flow periods. Results were comparable to results from volume-proportional samples and results obtained from

  2. Sample preparation and analysis of large 238PuO2 and ThO2 spheres

    International Nuclear Information System (INIS)

    Wise, R.L.; Selle, J.E.

    1975-01-01

    A program was initiated to determine the density gradient across a large spherical 238PuO2 sample produced by vacuum hot pressing. Due to the high thermal output of the ceramic a thin section was necessary to prevent overheating of the plastic mount. Techniques were developed for cross sectioning, mounting, grinding, and polishing of the sample. The polished samples were then analyzed on a quantitative image analyzer to determine the density as a function of location across the sphere. The techniques for indexing, analyzing, and reducing the data are described. Typical results obtained on a ThO2 simulant sphere are given

  3. Relativistic rise measurements with very fine sampling intervals

    International Nuclear Information System (INIS)

    Ludlam, T.; Platner, E.D.; Polychronakos, V.A.; Lindenbaum, S.J.; Kramer, M.A.; Teramoto, Y.

    1980-01-01

    The motivation of this work was to determine whether the technique of charged particle identification via the relativistic rise in the ionization loss can be significantly improved by virtue of very small sampling intervals. A fast-sampling ADC and a longitudinal drift geometry were used to provide a large number of samples from a single drift chamber gap, achieving sampling intervals roughly 10 times smaller than any previous study. A single layer drift chamber was used, and tracks of 1 meter length were simulated by combining together samples from many identified particles in this detector. These data were used to study the resolving power for particle identification as a function of sample size, averaging technique, and the number of discrimination levels (ADC bits) used for pulse height measurements

  4. The challenge for genetic epidemiologists: how to analyze large numbers of SNPs in relation to complex diseases

    NARCIS (Netherlands)

    Heidema, A.G.; Boer, J.M.A.; Nagelkerke, N.; Mariman, E.C.M.; A, van der D.L.; Feskens, E.J.M.

    2006-01-01

    Genetic epidemiologists have taken the challenge to identify genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods

  5. Application of Conventional and K0-Based Internal Monostandard NAA Using Reactor Neutrons for Compositional Analysis of Large Samples

    International Nuclear Information System (INIS)

    Reddy, A.V.R.; Acharya, R.; Swain, K. K.; Pujari, P.K.

    2018-01-01

    Large sample neutron activation analysis (LSNAA) work was carried out for samples of coal, uranium ore, stainless steel, ancient and new clay potteries, dross and clay pottery replica from Peru using low flux high thermalized irradiation sites. Large as well as non-standard geometry samples (1 g - 0.5 kg) were irradiated using thermal column (TC) facility of Apsara reactor as well as graphite reflector position of critical facility (CF) at Bhabha Atomic Research Centre, Mumbai. Small size (10 - 500 mg) samples were also irradiated at core position of Apsara reactor, pneumatic carrier facility (PCF) of Dhruva reactor and pneumatic fast transfer facility (PFTS) of KAMINI reactor. Irradiation positions were characterized using indium flux monitor for TC and CF whereas multi monitors were used at other positions. Radioactive assay was carried out using high resolution gamma ray spectrometry. The k0-based internal monostandard NAA (IM-NAA) method was used to determine elemental concentration ratios with respect to Na in coal and uranium ore samples, Sc in pottery samples and Fe in stainless steel. In-situ relative detection efficiency for each irradiated sample was obtained using γ rays of activation products in the required energy range. Representative sample sizes were arrived at for coal and uranium ore from the plots of La/Na ratios as a function of the mass of the sample. For stainless steel sample of SS 304L, the absolute concentrations were calculated from concentration ratios by mass balance approach since all the major elements (Fe, Cr, Ni and Mn) were amenable to NAA. Concentration ratios obtained by IM-NAA were used for provenance study of 30 clay potteries, obtained from excavated Buddhist sites of AP, India. The La to Ce concentration ratios were used for preliminary grouping and concentration ratios of 15 elements with respect to Sc were used by statistical cluster analysis for confirmation of grouping. Concentrations of Au and Ag were determined in not so

  6. Direct analysis of radionuclides-96 samples simultaneously

    International Nuclear Information System (INIS)

    Kessler, M.J.

    1991-01-01

    Recently, there has been a tremendous interest in two areas of concern in nuclear counting and radioactivity waste disposal. The first is the reduction of radioactive waste, in particular the reduction in the amount of or the development of environmentally safe scintillation cocktails. The second is the development of a simple method of quantitating large numbers of samples (thousands/day) in a short period of time (minutes). These two areas of concern have been addressed with the development of the Matrix 96 direct beta counter. This new instrumental technique is capable of quantitating 96 samples simultaneously in the microplate format (8 x 12, 96 samples) on a solid support WITHOUT the use of any cocktails, vials, and is non-destructive to the sample. The use of this technique for the following biomedical applications, DNA dot blots, cell proliferation (3H thymidine), receptor binding, chromium cytotoxicity assays, and protein assays will be discussed in detail. The data from both the conventional beta and gamma counter will be correlated and compared to the new Matrix 96 direct beta counter. This new technique provides a convenient method of addressing the concerns of reducing radioactive waste and provides a method of quantitating a large number of samples, accurately in a short period of time (96 at a time)

  7. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels

    Science.gov (United States)

    Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.

    2013-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.

  8. The application of the central limit theorem and the law of large numbers to facial soft tissue depths: T-Table robustness and trends since 2008.

    Science.gov (United States)

    Stephan, Carl N

    2014-03-01

    By pooling independent study means (x¯), the T-Tables use the central limit theorem and law of large numbers to average out study-specific sampling bias and instrument errors and, in turn, triangulate upon human population means (μ). Since their first publication in 2008, new data from >2660 adults have been collected (c.30% of the original sample) making a review of the T-Table's robustness timely. Updated grand means show that the new data have negligible impact on the previously published statistics: maximum change = 1.7 mm at gonion; and ≤1 mm at 93% of all landmarks measured. This confirms the utility of the 2008 T-Table as a proxy to soft tissue depth population means and, together with updated sample sizes (8851 individuals at pogonion), earmarks the 2013 T-Table as the premier mean facial soft tissue depth standard for craniofacial identification casework. The utility of the T-Table, in comparison with shorths and 75-shormaxes, is also discussed. © 2013 American Academy of Forensic Sciences.
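
    The pooling step behind the T-Tables is a sample-size-weighted grand mean across independent studies, so that study-specific sampling error tends to average out as more studies are added. The sketch below uses made-up study values, not the published soft tissue depth data.

      def pooled_mean(study_means, study_ns):
          """Sample-size-weighted grand mean across independent studies."""
          total_n = sum(study_ns)
          return sum(m * n for m, n in zip(study_means, study_ns)) / total_n

      # Hypothetical: three studies reporting mean tissue depth (mm) at one landmark
      print(pooled_mean([11.2, 12.0, 10.8], [120, 300, 85]))  # ~11.6 mm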

  9. Predicting the required number of training samples. [for remotely sensed image data based on covariance matrix estimate quality criterion of normal distribution

    Science.gov (United States)

    Kalayeh, H. M.; Landgrebe, D. A.

    1983-01-01

    A criterion which measures the quality of the estimate of the covariance matrix of a multivariate normal distribution is developed. Based on this criterion, the necessary number of training samples is predicted. Experimental results which are used as a guide for determining the number of training samples are included. Previously announced in STAR as N82-28109

  10. Phases of a stack of membranes in a large number of dimensions of configuration space

    Science.gov (United States)

    Borelli, M. E.; Kleinert, H.

    2001-05-01

    The phase diagram of a stack of tensionless membranes with nonlinear curvature energy and vertical harmonic interaction is calculated exactly in a large number of dimensions of configuration space. At low temperatures, the system forms a lamellar phase with spontaneously broken translational symmetry in the vertical direction. At a critical temperature, the stack disorders vertically in a meltinglike transition. The critical temperature is determined as a function of the interlayer separation l.

  11. It's a Girl! Random Numbers, Simulations, and the Law of Large Numbers

    Science.gov (United States)

    Goodwin, Chris; Ortiz, Enrique

    2015-01-01

    Modeling using mathematics and making inferences about mathematical situations are becoming more prevalent in most fields of study. Descriptive statistics cannot be used to generalize about a population or make predictions of what can occur. Instead, inference must be used. Simulation and sampling are essential in building a foundation for…
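
    A classroom simulation in the spirit of the article, repeatedly sampling births and watching the observed proportion of girls settle toward the underlying probability, can be written in a few lines. The 0.5 birth probability is the usual simplifying assumption rather than a figure from the article.

      import random

      def proportion_of_girls(n_births, p_girl=0.5, seed=42):
          """Simulate independent births; by the law of large numbers the
          observed proportion converges to p_girl as n_births grows."""
          rnd = random.Random(seed)
          girls = sum(rnd.random() < p_girl for _ in range(n_births))
          return girls / n_births

      for n in (10, 100, 1_000, 100_000):
          print(n, proportion_of_girls(n))   # drifts toward 0.5 as n increases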

  12. Statistical process control charts for attribute data involving very large sample sizes: a review of problems and solutions.

    Science.gov (United States)

    Mohammed, Mohammed A; Panesar, Jagdeep S; Laney, David B; Wilson, Richard

    2013-04-01

    The use of statistical process control (SPC) charts in healthcare is increasing. The primary purpose of SPC is to distinguish between common-cause variation which is attributable to the underlying process, and special-cause variation which is extrinsic to the underlying process. This is important because improvement under common-cause variation requires action on the process, whereas special-cause variation merits an investigation to first find the cause. Nonetheless, when dealing with attribute or count data (eg, number of emergency admissions) involving very large sample sizes, traditional SPC charts often produce tight control limits with most of the data points appearing outside the control limits. This can give a false impression of common and special-cause variation, and potentially misguide the user into taking the wrong actions. Given the growing availability of large datasets from routinely collected databases in healthcare, there is a need to present a review of this problem (which arises because traditional attribute charts only consider within-subgroup variation) and its solutions (which consider within and between-subgroup variation), which involve the use of the well-established measurements chart and the more recently developed attribute charts based on Laney's innovative approach. We close by making some suggestions for practice.
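
    The problem described above, conventional p-chart limits collapsing when subgroup sizes are very large, is addressed by Laney's approach, which widens the limits by the between-subgroup variation estimated from a moving range of z-scores. The sketch below is a simplified illustration of that adjustment with invented counts, not a full implementation of the cited charts.

      import math

      def laney_p_prime_limits(counts, sizes):
          """Approximate Laney p'-chart centre line and control limits.

          A conventional p-chart uses only within-subgroup (binomial) variation;
          Laney's correction multiplies the limit width by sigma_z, the
          between-subgroup variation estimated from a moving range of z-scores."""
          props = [c / n for c, n in zip(counts, sizes)]
          pbar = sum(counts) / sum(sizes)
          sds = [math.sqrt(pbar * (1 - pbar) / n) for n in sizes]
          z = [(p - pbar) / sd for p, sd in zip(props, sds)]
          mr = [abs(z[i] - z[i - 1]) for i in range(1, len(z))]
          sigma_z = (sum(mr) / len(mr)) / 1.128        # mean moving range / d2
          return pbar, [(pbar - 3 * sd * sigma_z, pbar + 3 * sd * sigma_z) for sd in sds]

      # Hypothetical monthly emergency admissions out of very large attendance counts
      counts = [510, 492, 530, 505, 548, 500]
      sizes = [10_000, 9_800, 10_400, 10_100, 10_900, 9_900]
      pbar, limits = laney_p_prime_limits(counts, sizes)
      print(round(pbar, 4), [(round(lo, 4), round(hi, 4)) for lo, hi in limits])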

  13. Large-volume constant-concentration sampling technique coupling with surface-enhanced Raman spectroscopy for rapid on-site gas analysis

    Science.gov (United States)

    Zhang, Zhuomin; Zhan, Yisen; Huang, Yichun; Li, Gongke

    2017-08-01

    In this work, a portable large-volume constant-concentration (LVCC) sampling technique coupling with surface-enhanced Raman spectroscopy (SERS) was developed for the rapid on-site gas analysis based on suitable derivatization methods. LVCC sampling technique mainly consisted of a specially designed sampling cell including the rigid sample container and flexible sampling bag, and an absorption-derivatization module with a portable pump and a gas flowmeter. LVCC sampling technique allowed large, alterable and well-controlled sampling volume, which kept the concentration of gas target in headspace phase constant during the entire sampling process and made the sampling result more representative. Moreover, absorption and derivatization of gas target during LVCC sampling process were efficiently merged in one step using bromine-thiourea and OPA-NH4+ strategy for ethylene and SO2 respectively, which made LVCC sampling technique conveniently adapted to consequent SERS analysis. Finally, a new LVCC sampling-SERS method was developed and successfully applied for rapid analysis of trace ethylene and SO2 from fruits. It was satisfied that trace ethylene and SO2 from real fruit samples could be actually and accurately quantified by this method. The minor concentration fluctuations of ethylene and SO2 during the entire LVCC sampling process were proved to be negligible, and recoveries from real samples were achieved in the range of 95.0-101% and 97.0-104%, respectively. It is expected that portable LVCC sampling technique would pave the way for rapid on-site analysis of accurate concentrations of trace gas targets from real samples by SERS.

  14. Cosmological implications of a large complete quasar sample.

    Science.gov (United States)

    Segal, I E; Nicoll, J F

    1998-04-28

    Objective and reproducible determinations of the probabilistic significance levels of the deviations between theoretical cosmological prediction and direct model-independent observation are made for the Large Bright Quasar Sample [Foltz, C., Chaffee, F. H., Hewett, P. C., MacAlpine, G. M., Turnshek, D. A., et al. (1987) Astron. J. 94, 1423-1460]. The Expanding Universe model as represented by the Friedman-Lemaitre cosmology with parameters q0 = 0, Λ = 0 denoted as C1 and chronometric cosmology (no relevant adjustable parameters) denoted as C2 are the cosmologies considered. The mean and the dispersion of the apparent magnitudes and the slope of the apparent magnitude-redshift relation are the directly observed statistics predicted. The C1 predictions of these cosmology-independent quantities are deviant by as much as 11σ from direct observation; none of the C2 predictions deviate by >2σ. The C1 deviations may be reconciled with theory by the hypothesis of quasar "evolution," which, however, appears incapable of being substantiated through direct observation. The excellent quantitative agreement of the C1 deviations with those predicted by C2 without adjustable parameters for the results of analysis predicated on C1 indicates that the evolution hypothesis may well be a theoretical artifact.

  15. A study of diabetes mellitus within a large sample of Australian twins

    DEFF Research Database (Denmark)

    Condon, Julianne; Shaw, Joanne E; Luciano, Michelle

    2008-01-01

    Twin studies of diabetes mellitus can help elucidate genetic and environmental factors in etiology and can provide valuable biological samples for testing functional hypotheses, for example using expression and methylation studies of discordant pairs. We searched the volunteer Australian Twin Registry (19,387 pairs) for twins with diabetes using disease checklists from nine different surveys conducted from 1980-2000. After follow-up questionnaires to the twins and their doctors to confirm diagnoses, we eventually identified 46 pairs where one or both had type 1 diabetes (T1D), 113 pairs with type 2 diabetes (T2D), 41 female pairs with gestational diabetes (GD), 5 pairs with impaired glucose tolerance (IGT) and one pair with MODY. Heritabilities of T1D, T2D and GD were all high, but our samples did not have the power to detect effects of shared environment unless they were very large...

  16. Is Business Failure Due to Lack of Effort? Empirical Evidence from a Large Administrative Sample

    NARCIS (Netherlands)

    Ejrnaes, M.; Hochguertel, S.

    2013-01-01

    Does insurance provision reduce entrepreneurs' effort to avoid business failure? We exploit unique features of the voluntary Danish unemployment insurance (UI) scheme, that is available to the self-employed. Using a large sample of self-employed individuals, we estimate the causal effect of

  17. In-situ high resolution particle sampling by large time sequence inertial spectrometry

    International Nuclear Information System (INIS)

    Prodi, V.; Belosi, F.

    1990-09-01

    In situ sampling is always preferred, when possible, because of the artifacts that can arise when the aerosol has to flow through long sampling lines. On the other hand, the amount of possible losses can be calculated with some confidence only when the size distribution can be measured with sufficient precision and the losses are not too large. This makes it desirable to sample directly in the vicinity of the aerosol source or containment. High temperature sampling devices with a detailed aerodynamic separation are extremely useful for this purpose. Several measurements are possible with the inertial spectrometer (INSPEC), but not with cascade impactors or cyclones. INSPEC - INertial SPECtrometer - has been conceived to measure the size distribution of aerosols by separating the particles while airborne according to their size and collecting them on a filter. It consists of a channel of rectangular cross-section with a 90 degree bend. Clean air is drawn through the channel, with a thin aerosol sheath injected close to the inner wall. Due to the bend, the particles are separated according to their size, leaving the original streamline by a distance which is a function of particle inertia and resistance, i.e. of aerodynamic diameter. The filter collects all the particles of the same aerodynamic size at the same distance from the inlet, in a continuous distribution. INSPEC particle separation at high temperature (up to 800 C) has been tested with zirconia particles as calibration aerosols. The feasibility study was concerned with resolution and time-sequence sampling capabilities at high temperature (700 C).

  18. Early stage animal hoarders: are these owners of large numbers of adequately cared for cats?

    OpenAIRE

    Ramos, D.; da Cruz, N. O.; Ellis, Sarah; Hernandez, J. A. E.; Reche-Junior, A.

    2013-01-01

    Animal hoarding is a spectrum-based condition in which hoarders are often reported to have had normal and appropriate pet-keeping habits in childhood and early adulthood. Historically, research has focused largely on well established clinical animal hoarders with little work targeted towards the onset and development of animal hoarding. This study investigated whether a Brazilian population of owners of what might typically be considered an excessive number (20 or more) of cats were more like...

  19. Direct and large eddy simulation of turbulent heat transfer at very low Prandtl number: Application to lead–bismuth flows

    International Nuclear Information System (INIS)

    Bricteux, L.; Duponcheel, M.; Winckelmans, G.; Tiselj, I.; Bartosiewicz, Y.

    2012-01-01

    Highlights: ► We perform direct and hybrid-large eddy simulations of high Reynolds and low Prandtl turbulent wall-bounded flows with heat transfer. ► We use state-of-the-art numerical methods with low energy dissipation and low dispersion. ► We use recent multiscale subgrid-scale models. ► Important results concerning the establishment of a near-wall modeling strategy in RANS are provided. ► The turbulent Prandtl number predicted by our simulations differs from that proposed by some correlations in the literature. - Abstract: This paper deals with the issue of modeling convective turbulent heat transfer of a liquid metal with a Prandtl number down to 0.01, which is the order of magnitude of lead–bismuth eutectic in a liquid metal reactor. This work presents a DNS (direct numerical simulation) and an LES (large eddy simulation) of a channel flow at two different Reynolds numbers, and the results are analyzed within the framework of best-practice guidelines for RANS (Reynolds averaged Navier–Stokes) computations used in industrial applications. They primarily show that the turbulent Prandtl number concept should be used with care and that even recently proposed correlations may not be sufficient.
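
    The caution above about turbulent-Prandtl-number correlations can be illustrated with a small sketch; the Kays-type correlation used below is one commonly quoted form, given purely as an assumed example rather than the correlation examined in the paper:

        # Hedged illustration of a Kays-type correlation for the turbulent Prandtl number,
        # Pr_t ~ 0.85 + 0.7 / Pe_t with Pe_t = Pr * (nu_t / nu). Assumed form for illustration;
        # not necessarily the correlation assessed in the cited work.
        def turbulent_prandtl(pr_molecular, nu_t_over_nu):
            pe_t = pr_molecular * nu_t_over_nu       # turbulent Peclet number
            return 0.85 + 0.7 / pe_t

        # For a liquid metal (Pr ~ 0.01) the correction term dominates unless nu_t/nu is large,
        # which is why assuming a constant Pr_t ~ 0.9 can be a poor choice near walls.
        for ratio in (1.0, 10.0, 100.0):
            print(ratio, turbulent_prandtl(0.01, ratio))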

  20. Field sampling, preparation procedure and plutonium analyses of large freshwater samples

    International Nuclear Information System (INIS)

    Straelberg, E.; Bjerk, T.O.; Oestmo, K.; Brittain, J.E.

    2002-01-01

    This work is part of an investigation of the mobility of plutonium in freshwater systems containing humic substances. A well-defined bog-stream system located in the catchment area of a subalpine lake, Oevre Heimdalsvatn, Norway, is being studied. During the summer of 1999, six water samples were collected from the tributary stream Lektorbekken and the lake itself. However, the analyses showed that the plutonium concentration was below the detection limit in all the samples. Therefore renewed sampling at the same sites was carried out in August 2000. The results so far are in agreement with previous analyses from the Heimdalen area. However, 100 times higher concentrations are found in the lowlands in the eastern part of Norway. The reason for this is not understood, but may be caused by differences in the concentrations of humic substances and/or the fact that the mountain areas are covered with snow for a longer period of time every year. (LN)

  1. Decision process in MCDM with large number of criteria and heterogeneous risk preferences

    Directory of Open Access Journals (Sweden)

    Jian Liu

    A new decision process is proposed to address the challenge posed by a large number of criteria in multi-criteria decision making (MCDM) problems and by decision makers with heterogeneous risk preferences. First, from the perspective of objective data, the effective criteria are extracted based on the similarity relations between criterion values, and the criteria are weighted accordingly. Second, theoretic models of risk-preference expectations are built, based on the possibility and similarity between criterion values, to resolve the problem of different interval numbers having the same expectation. The risk preferences (risk-seeking, risk-neutral and risk-averse) are then embedded in the decision process, and the optimal decision object is selected according to the decision makers' risk preferences using the corresponding theoretic model. Finally, a new information aggregation model is proposed based on maximizing the fairness of the decision results for the group decision, considering the coexistence of decision makers with heterogeneous risk preferences. The scientific rationality of the new method is verified through the analysis of a real case. Keywords: Heterogeneous, Risk preferences, Fairness, Decision process, Group decision

  2. Statistical searches for microlensing events in large, non-uniformly sampled time-domain surveys: A test using palomar transient factory data

    Energy Technology Data Exchange (ETDEWEB)

    Price-Whelan, Adrian M.; Agüeros, Marcel A. [Department of Astronomy, Columbia University, 550 W 120th Street, New York, NY 10027 (United States); Fournier, Amanda P. [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93106 (United States); Street, Rachel [Las Cumbres Observatory Global Telescope Network, Inc., 6740 Cortona Drive, Suite 102, Santa Barbara, CA 93117 (United States); Ofek, Eran O. [Benoziyo Center for Astrophysics, Weizmann Institute of Science, 76100 Rehovot (Israel); Covey, Kevin R. [Lowell Observatory, 1400 West Mars Hill Road, Flagstaff, AZ 86001 (United States); Levitan, David; Sesar, Branimir [Division of Physics, Mathematics, and Astronomy, California Institute of Technology, Pasadena, CA 91125 (United States); Laher, Russ R.; Surace, Jason, E-mail: adrn@astro.columbia.edu [Spitzer Science Center, California Institute of Technology, Mail Stop 314-6, Pasadena, CA 91125 (United States)

    2014-01-20

    Many photometric time-domain surveys are driven by specific goals, such as searches for supernovae or transiting exoplanets, which set the cadence with which fields are re-imaged. In the case of the Palomar Transient Factory (PTF), several sub-surveys are conducted in parallel, leading to non-uniform sampling over its ∼20,000 deg^2 footprint. While the median 7.26 deg^2 PTF field has been imaged ∼40 times in the R band, ∼2300 deg^2 have been observed >100 times. We use PTF data to study the trade-off between searching for microlensing events in a survey whose footprint is much larger than that of typical microlensing searches, but with far-from-optimal time sampling. To examine the probability that microlensing events can be recovered in these data, we test statistics used on uniformly sampled data to identify variables and transients. We find that the von Neumann ratio performs best for identifying simulated microlensing events in our data. We develop a selection method using this statistic and apply it to data from fields with >10 R-band observations, 1.1 × 10^9 light curves, uncovering three candidate microlensing events. We lack simultaneous, multi-color photometry to confirm these as microlensing events. However, their number is consistent with predictions for the event rate in the PTF footprint over the survey's three years of operations, as estimated from near-field microlensing models. This work can help constrain all-sky event rate predictions and tests microlensing signal recovery in large data sets, which will be useful to future time-domain surveys, such as that planned with the Large Synoptic Survey Telescope.
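
    The von Neumann ratio referred to above is straightforward to compute; a minimal sketch for a single light curve (magnitudes only, ignoring measurement errors, with purely illustrative numbers) is:

        # Minimal sketch of the von Neumann ratio: mean successive squared difference / variance.
        # Smooth, correlated excursions such as a microlensing bump push the ratio well below
        # the value of ~2 expected for uncorrelated noise.
        import numpy as np

        def von_neumann_ratio(mag):
            mag = np.asarray(mag, dtype=float)
            return np.mean(np.diff(mag) ** 2) / np.var(mag, ddof=1)

        t = np.linspace(0.0, 100.0, 60)
        noise = 0.05 * np.random.default_rng(1).standard_normal(t.size)
        bump = -0.4 * np.exp(-0.5 * ((t - 50.0) / 5.0) ** 2)   # toy brightening event
        print(von_neumann_ratio(noise))          # close to 2
        print(von_neumann_ratio(noise + bump))   # substantially smaller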

  3. A topological analysis of large-scale structure, studied using the CMASS sample of SDSS-III

    International Nuclear Information System (INIS)

    Parihar, Prachi; Gott, J. Richard III; Vogeley, Michael S.; Choi, Yun-Young; Kim, Juhan; Kim, Sungsoo S.; Speare, Robert; Brownstein, Joel R.; Brinkmann, J.

    2014-01-01

    We study the three-dimensional genus topology of large-scale structure using the northern region of the CMASS Data Release 10 (DR10) sample of the SDSS-III Baryon Oscillation Spectroscopic Survey. We select galaxies with redshift 0.452 < z < 0.625 and with a stellar mass M_stellar > 10^11.56 M_☉. We study the topology at two smoothing lengths: R_G = 21 h^-1 Mpc and R_G = 34 h^-1 Mpc. The genus topology studied at the R_G = 21 h^-1 Mpc scale results in the highest genus amplitude observed to date. The CMASS sample yields a genus curve that is characteristic of one produced by Gaussian random phase initial conditions. The data thus support the standard model of inflation where random quantum fluctuations in the early universe produced Gaussian random phase initial conditions. Modest deviations in the observed genus from random phase are as expected from shot noise effects and the nonlinear evolution of structure. We suggest the use of a fitting formula motivated by perturbation theory to characterize the shift and asymmetries in the observed genus curve with a single parameter. We construct 54 mock SDSS CMASS surveys along the past light cone from the Horizon Run 3 (HR3) N-body simulations, where gravitationally bound dark matter subhalos are identified as the sites of galaxy formation. We study the genus topology of the HR3 mock surveys with the same geometry and sampling density as the observational sample and find the observed genus topology to be consistent with ΛCDM as simulated by the HR3 mock samples. We conclude that the topology of the large-scale structure in the SDSS CMASS sample is consistent with cosmological models having primordial Gaussian density fluctuations growing in accordance with general relativity to form galaxies in massive dark matter halos.
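
    For reference, the genus curve against which such data are compared is the analytic result for a Gaussian random field (a standard large-scale-structure formula quoted here for orientation, not a result specific to this record):

        g(\nu) = A \, (1 - \nu^{2}) \, e^{-\nu^{2}/2},

    where \nu is the density threshold in units of the standard deviation of the smoothed field and the amplitude A depends on the power spectrum and the smoothing length R_G. The shift and asymmetry parameters mentioned above quantify departures of the observed genus curve from this Gaussian form.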

  4. Gene coexpression measures in large heterogeneous samples using count statistics.

    Science.gov (United States)

    Wang, Y X Rachel; Waterman, Michael S; Huang, Haiyan

    2014-11-18

    With the advent of high-throughput technologies making large-scale gene expression data readily available, developing appropriate computational tools to process these data and distill insights into systems biology has been an important part of the "big data" challenge. Gene coexpression is one of the earliest techniques developed that is still widely in use for functional annotation, pathway analysis, and, most importantly, the reconstruction of gene regulatory networks, based on gene expression data. However, most coexpression measures do not specifically account for local features in expression profiles. For example, it is very likely that the patterns of gene association may change or only exist in a subset of the samples, especially when the samples are pooled from a range of experiments. We propose two new gene coexpression statistics based on counting local patterns of gene expression ranks to take into account the potentially diverse nature of gene interactions. In particular, one of our statistics is designed for time-course data with local dependence structures, such as time series coupled over a subregion of the time domain. We provide asymptotic analysis of their distributions and power, and evaluate their performance against a wide range of existing coexpression measures on simulated and real data. Our new statistics are fast to compute, robust against outliers, and show comparable and often better general performance.

  5. Gaussian vs. Bessel light-sheets: performance analysis in live large sample imaging

    Science.gov (United States)

    Reidt, Sascha L.; Correia, Ricardo B. C.; Donnachie, Mark; Weijer, Cornelis J.; MacDonald, Michael P.

    2017-08-01

    Lightsheet fluorescence microscopy (LSFM) has rapidly progressed in the past decade from an emerging technology into an established methodology. This progress has largely been driven by its suitability to developmental biology, where it is able to give excellent spatial-temporal resolution over relatively large fields of view with good contrast and low phototoxicity. In many respects it is superseding confocal microscopy. However, it is no magic bullet and still struggles to image deeply in more highly scattering samples. Many solutions to this challenge have been presented, including, Airy and Bessel illumination, 2-photon operation and deconvolution techniques. In this work, we show a comparison between a simple but effective Gaussian beam illumination and Bessel illumination for imaging in chicken embryos. Whilst Bessel illumination is shown to be of benefit when a greater depth of field is required, it is not possible to see any benefits for imaging into the highly scattering tissue of the chick embryo.

  6. On the Convergence and Law of Large Numbers for the Non-Euclidean Lp -Means

    Directory of Open Access Journals (Sweden)

    George Livadiotis

    2017-05-01

    This paper describes and proves two important theorems that compose the Law of Large Numbers for the non-Euclidean L_p-means, known to be true for the Euclidean L_2-means. Consider the L_p-mean estimator, the specific functional that estimates the L_p-mean of N independent and identically distributed random variables; then (i) the expectation value of the L_p-mean estimator equals the mean of the distributions of the random variables, and (ii) in the limit N → ∞ the L_p-mean estimator also equals the mean of the distributions.
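
    A small numerical sketch can illustrate the convergence claimed in (ii). It assumes the usual definition of the L_p-mean estimator as the minimizer of the sum of |x_i - m|^p, which may differ in detail from the functional used in the paper, and uses a symmetric (normal) distribution so that the minimizer coincides with the distribution mean for every p:

        # Sketch: L_p-mean estimator as the value m minimizing sum_i |x_i - m|**p.
        # For a symmetric distribution the estimate approaches the distribution mean as N grows,
        # illustrating the law-of-large-numbers behaviour (p = 2 recovers the sample mean).
        import numpy as np
        from scipy.optimize import minimize_scalar

        def lp_mean(x, p):
            x = np.asarray(x, dtype=float)
            objective = lambda m: np.sum(np.abs(x - m) ** p)
            return minimize_scalar(objective, bounds=(x.min(), x.max()), method="bounded").x

        rng = np.random.default_rng(2)
        for n in (10, 100, 10_000):
            sample = rng.normal(loc=3.0, scale=2.0, size=n)   # true mean is 3.0
            print(n, lp_mean(sample, p=1.5))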

  7. A new fractionator principle with varying sampling fractions: exemplified by estimation of synapse number using electron microscopy

    DEFF Research Database (Denmark)

    Witgen, Brent Marvin; Grady, M. Sean; Nyengaard, Jens Randel

    2006-01-01

    The quantification of ultrastructure has been permanently improved by the application of new stereological principles. Both precision and efficiency have been enhanced. Here we report for the first time a fractionator method that can be applied at the electron microscopy level. This new design...... the total object number using section sampling fractions based on the average thickness of sections of variable thicknesses. As an alternative, this approach estimates the correct particle section sampling probability based on an estimator of the Horvitz-Thompson type, resulting in a theoretically more...
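
    The Horvitz-Thompson idea referred to above can be sketched in a few lines: if each counted particle was included in the sample with a known (possibly unequal) probability, the total is estimated by summing reciprocal inclusion probabilities. This is a generic sketch with illustrative numbers, not the authors' exact estimator:

        # Generic Horvitz-Thompson sketch: estimate a total from a sample in which item i was
        # included with known probability pi_i (here, a section sampling fraction that varies
        # with local section thickness). Numbers are illustrative only.
        import numpy as np

        def horvitz_thompson_total(sampled_counts, inclusion_probs):
            sampled_counts = np.asarray(sampled_counts, dtype=float)
            inclusion_probs = np.asarray(inclusion_probs, dtype=float)
            return float(np.sum(sampled_counts / inclusion_probs))

        counts = [12, 9, 15, 7]               # particles counted in four sampled sections
        probs = [0.05, 0.04, 0.06, 0.05]      # per-section sampling fractions
        print(horvitz_thompson_total(counts, probs))   # estimated total particle number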

  8. Tracing the trajectory of skill learning with a very large sample of online game players.

    Science.gov (United States)

    Stafford, Tom; Dewar, Michael

    2014-02-01

    In the present study, we analyzed data from a very large sample (N = 854,064) of players of an online game involving rapid perception, decision making, and motor responding. Use of game data allowed us to connect, for the first time, rich details of training history with measures of performance from participants engaged for a sustained amount of time in effortful practice. We showed that lawful relations exist between practice amount and subsequent performance, and between practice spacing and subsequent performance. Our methodology allowed an in situ confirmation of results long established in the experimental literature on skill acquisition. Additionally, we showed that greater initial variation in performance is linked to higher subsequent performance, a result we link to the exploration/exploitation trade-off from the computational framework of reinforcement learning. We discuss the benefits and opportunities of behavioral data sets with very large sample sizes and suggest that this approach could be particularly fecund for studies of skill acquisition.
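
    The lawful practice-performance relations mentioned above are often summarized by a power law of practice; the sketch below fits such a curve to simulated data and is an assumption about functional form for illustration, not the analysis carried out in the study:

        # Hedged sketch: fit a power law of practice, score ~ a * n_practice**b, by linear
        # regression in log-log space. Data and functional form are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(3)
        n_practice = np.arange(1, 201)
        score = 120.0 * n_practice ** 0.15 * rng.lognormal(sigma=0.05, size=n_practice.size)

        slope, intercept = np.polyfit(np.log(n_practice), np.log(score), deg=1)
        print("exponent b ~", slope, "prefactor a ~", np.exp(intercept))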

  9. Imaging a Large Sample with Selective Plane Illumination Microscopy Based on Multiple Fluorescent Microsphere Tracking

    Science.gov (United States)

    Ryu, Inkeon; Kim, Daekeun

    2018-04-01

    A typical selective plane illumination microscopy (SPIM) image size is fundamentally limited by the field of view, which is a characteristic of the objective lens. If an image larger than the imaging area of the sample is to be obtained, image stitching, which combines step-scanned images into a single panoramic image, is required. However, accurately registering the step-scanned images is very difficult because the SPIM system uses a customized sample mount in which uncertainties in the translational and rotational motions exist. In this paper, an image registration technique based on multiple fluorescent microsphere tracking is proposed, which quantifies the constellations of, and measures the distances between, at least two fluorescent microspheres embedded in the sample. Image stitching results are demonstrated for optically cleared large tissue with various staining methods. Compensation for the effect of the sample rotation that occurs during the translational motion in the sample mount is also discussed.
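
    Registering step-scanned tiles from matched bead positions amounts to estimating a rigid transform; a standard least-squares (Kabsch/Procrustes-style) sketch is shown below under the assumption that at least two microspheres have been matched between tiles. It is a generic illustration, not the authors' algorithm:

        # Generic rigid-registration sketch: given matched bead coordinates in two tiles,
        # estimate the rotation R and translation t mapping tile A onto tile B.
        import numpy as np

        def rigid_transform(a, b):
            a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
            ca, cb = a.mean(axis=0), b.mean(axis=0)          # centroids
            h = (a - ca).T @ (b - cb)                        # cross-covariance matrix
            u, _, vt = np.linalg.svd(h)
            d = np.sign(np.linalg.det(vt.T @ u.T))           # guard against reflections
            r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
            t = cb - r @ ca
            return r, t

        beads_a = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 5.0, 1.0], [3.0, 7.0, 2.0]])
        beads_b = beads_a + np.array([2.0, -1.0, 0.5])        # pure translation in this toy case
        r, t = rigid_transform(beads_a, beads_b)
        print(np.round(r, 3), np.round(t, 3))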

  10. The Brief Negative Symptom Scale (BNSS): Independent validation in a large sample of Italian patients with schizophrenia.

    Science.gov (United States)

    Mucci, A; Galderisi, S; Merlotti, E; Rossi, A; Rocca, P; Bucci, P; Piegari, G; Chieffi, M; Vignapiano, A; Maj, M

    2015-07-01

    The Brief Negative Symptom Scale (BNSS) was developed to address the main limitations of the existing scales for the assessment of negative symptoms of schizophrenia. The initial validation of the scale by the group involved in its development demonstrated good convergent and discriminant validity, and a factor structure confirming the two domains of negative symptoms (reduced emotional/verbal expression and anhedonia/asociality/avolition). However, only relatively small samples of patients with schizophrenia were investigated. Further independent validation in large clinical samples might be instrumental to the broad diffusion of the scale in clinical research. The present study aimed to examine the BNSS inter-rater reliability, convergent/discriminant validity and factor structure in a large Italian sample of outpatients with schizophrenia. Our results confirmed the excellent inter-rater reliability of the BNSS (the intraclass correlation coefficient ranged from 0.81 to 0.98 for individual items and was 0.98 for the total score). The convergent validity measures had r values from 0.62 to 0.77, while the divergent validity measures had r values from 0.20 to 0.28 in the main sample (n=912) and in a subsample without clinically significant levels of depression and extrapyramidal symptoms (n=496). The BNSS factor structure was supported in both groups. The study confirms that the BNSS is a promising measure for quantifying negative symptoms of schizophrenia in large multicenter clinical studies. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  11. Algorithm for computing significance levels using the Kolmogorov-Smirnov statistic and valid for both large and small samples

    Energy Technology Data Exchange (ETDEWEB)

    Kurtz, S.E.; Fields, D.E.

    1983-10-01

    The KSTEST code presented here is designed to perform the Kolmogorov-Smirnov one-sample test. The code may be used as a stand-alone program or the principal subroutines may be excerpted and used to service other programs. The Kolmogorov-Smirnov one-sample test is a nonparametric goodness-of-fit test. A number of codes to perform this test are in existence, but they suffer from the inability to provide meaningful results in the case of small sample sizes (number of values less than or equal to 80). The KSTEST code overcomes this inadequacy by using two distinct algorithms. If the sample size is greater than 80, an asymptotic series developed by Smirnov is evaluated. If the sample size is 80 or less, a table of values generated by Birnbaum is referenced. Valid results can be obtained from KSTEST when the sample contains from 3 to 300 data points. The program was developed on a Digital Equipment Corporation PDP-10 computer using the FORTRAN-10 language. The code size is approximately 450 card images and the typical CPU execution time is 0.19 s.
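
    The two-regime approach described above is easy to sketch: for larger samples the Smirnov asymptotic series is evaluated, while for 80 or fewer values the code instead consults Birnbaum's exact table (not reproduced here). The series below is the standard Kolmogorov asymptotic form, given as an illustration rather than a transcription of KSTEST:

        # Large-sample branch of a one-sample KS test: significance from the asymptotic series
        # Q(lam) = 2 * sum_{k>=1} (-1)**(k-1) * exp(-2 * k**2 * lam**2), with lam = sqrt(n) * D.
        import numpy as np

        def ks_asymptotic_pvalue(d_stat, n, terms=100):
            lam = np.sqrt(n) * d_stat
            k = np.arange(1, terms + 1)
            return float(2.0 * np.sum((-1.0) ** (k - 1) * np.exp(-2.0 * (k * lam) ** 2)))

        print(ks_asymptotic_pvalue(0.08, 500))   # p-value for D = 0.08 with n = 500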

  12. The Ramsey numbers of large cycles versus small wheels

    NARCIS (Netherlands)

    Surahmat,; Baskoro, E.T.; Broersma, H.J.

    2004-01-01

    For two given graphs G and H, the Ramsey number R(G;H) is the smallest positive integer N such that for every graph F of order N the following holds: either F contains G as a subgraph or the complement of F contains H as a subgraph. In this paper, we determine the Ramsey number R(Cn;Wm) for m = 4

  13. Summary of experience from a large number of construction inspections; Wind power plant projects; Erfarenhetsaaterfoering fraan entreprenadbesiktningar

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Bertil; Holmberg, Rikard

    2010-08-15

    This report presents a summary of experience from a large number of construction inspections of wind power projects. The working method is based on the collection of construction experience in the form of questionnaires. The questionnaires were supplemented by a number of in-depth interviews to understand in more detail what is perceived to be a problem and whether there were suggestions for improvements. The results in this report are based on inspection protocols from 174 wind turbines, which corresponds to about one-third of the power plants built in the time period. In total the questionnaires included 4683 inspection remarks as well as about one hundred free-text comments. 52 of the 174 inspected power stations were rejected, corresponding to 30%. It has not been possible to identify any overrepresented type of remark as a main cause of rejection; rejection is usually based on a total number of remarks that is too large. The average number of remarks for a power plant is 27. Most power stations have between 20 and 35 remarks. The most common remarks concern shortcomings in marking and documentation. These are easily adjusted, and may be regarded as less serious. There are, however, a number of remarks which are recurrent and quite serious, mainly regarding gearbox, education and lightning protection. Usually these are also easily adjusted, but the consequences if not corrected can be very large. The consequences may be either shortened life of expensive components, e.g. oil problems in gear boxes, or increased probability of serious accidents, e.g. maladjusted lightning protection. In the report, comparisons between power stations of various construction periods, sizes, suppliers, geographies and topographies are also presented. The general conclusion is that the differences are small. The results of the evaluation of questionnaires correspond well with the results of the in-depth interviews with clients. The problem that clients agreed upon as the greatest is the lack

  14. A Genome-Wide Association Study in Large White and Landrace Pig Populations for Number Piglets Born Alive

    Science.gov (United States)

    Bergfelder-Drüing, Sarah; Grosse-Brinkhaus, Christine; Lind, Bianca; Erbe, Malena; Schellander, Karl; Simianer, Henner; Tholen, Ernst

    2015-01-01

    The number of piglets born alive (NBA) per litter is one of the most important traits in pig breeding due to its influence on production efficiency. It is difficult to improve NBA because the heritability of the trait is low and it is governed by a high number of loci with low to moderate effects. To clarify the biological and genetic background of NBA, genome-wide association studies (GWAS) were performed using 4,012 Large White and Landrace pigs from herdbook and commercial breeding companies in Germany (3), Austria (1) and Switzerland (1). The animals were genotyped with the Illumina PorcineSNP60 BeadChip. Because of population stratifications within and between breeds, clusters were formed using the genetic distances between the populations. Five clusters for each breed were formed and analysed by GWAS approaches. In total, 17 different significant markers affecting NBA were found in regions with known effects on female reproduction. No overlapping significant chromosome areas or QTL between Large White and Landrace breed were detected. PMID:25781935

  15. A genome-wide association study in large white and landrace pig populations for number piglets born alive.

    Directory of Open Access Journals (Sweden)

    Sarah Bergfelder-Drüing

    The number of piglets born alive (NBA) per litter is one of the most important traits in pig breeding due to its influence on production efficiency. It is difficult to improve NBA because the heritability of the trait is low and it is governed by a high number of loci with low to moderate effects. To clarify the biological and genetic background of NBA, genome-wide association studies (GWAS) were performed using 4,012 Large White and Landrace pigs from herdbook and commercial breeding companies in Germany (3), Austria (1) and Switzerland (1). The animals were genotyped with the Illumina PorcineSNP60 BeadChip. Because of population stratifications within and between breeds, clusters were formed using the genetic distances between the populations. Five clusters for each breed were formed and analysed by GWAS approaches. In total, 17 different significant markers affecting NBA were found in regions with known effects on female reproduction. No overlapping significant chromosome areas or QTL between Large White and Landrace breed were detected.

  16. High quality copy number and genotype data from FFPE samples using Molecular Inversion Probe (MIP) microarrays

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yuker; Carlton, Victoria E.H.; Karlin-Neumann, George; Sapolsky, Ronald; Zhang, Li; Moorhead, Martin; Wang, Zhigang C.; Richardson, Andrea L.; Warren, Robert; Walther, Axel; Bondy, Melissa; Sahin, Aysegul; Krahe, Ralf; Tuna, Musaffe; Thompson, Patricia A.; Spellman, Paul T.; Gray, Joe W.; Mills, Gordon B.; Faham, Malek

    2009-02-24

    A major challenge facing DNA copy number (CN) studies of tumors is that most banked samples with extensive clinical follow-up information are Formalin-Fixed Paraffin Embedded (FFPE). DNA from FFPE samples generally underperforms or suffers high failure rates compared to fresh frozen samples because of DNA degradation and cross-linking during FFPE fixation and processing. As FFPE protocols may vary widely between labs and samples may be stored for decades at room temperature, an ideal FFPE CN technology should work on diverse sample sets. Molecular Inversion Probe (MIP) technology has been applied successfully to obtain high quality CN and genotype data from cell line and frozen tumor DNA. Since the MIP probes require only a small (~40 bp) target binding site, we reasoned they may be well suited to assess degraded FFPE DNA. We assessed CN with a MIP panel of 50,000 markers in 93 FFPE tumor samples from 7 diverse collections. For 38 FFPE samples from three collections we were also able to assess CN in matched fresh frozen tumor tissue. Using an input of 37 ng genomic DNA, we generated high quality CN data with MIP technology in 88% of FFPE samples from seven diverse collections. When matched fresh frozen tissue was available, the performance of FFPE DNA was comparable to that of DNA obtained from matched frozen tumor (genotype concordance averaged 99.9%), with only a modest loss in performance in FFPE. MIP technology can be used to generate high quality CN and genotype data in FFPE as well as fresh frozen samples.

  17. SAChES: Scalable Adaptive Chain-Ensemble Sampling.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ebeida, Mohamed Salah [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Huang, Maoyi [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hou, Zhangshuan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bao, Jie [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ren, Huiying [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-08-01

    We present the development of a parallel Markov Chain Monte Carlo (MCMC) method called SAChES, Scalable Adaptive Chain-Ensemble Sampling. This capability is targeted at Bayesian calibration of computationally expensive simulation models. SAChES involves a hybrid of two methods: Differential Evolution Monte Carlo followed by Adaptive Metropolis. Both methods involve parallel chains. Differential evolution allows one to explore high-dimensional parameter spaces using loosely coupled (i.e., largely asynchronous) chains. Loose coupling allows the use of large chain ensembles, with far more chains than the number of parameters to explore. This reduces the per-chain sampling burden, enables high-dimensional inversions and permits the use of computationally expensive forward models. The large number of chains can also ameliorate the impact of silent errors, which may affect only a few chains. The chain ensemble can also be sampled to provide an initial condition when an aberrant chain is re-spawned. Adaptive Metropolis takes the best points from the differential evolution and efficiently homes in on the posterior density. The multitude of chains in SAChES is leveraged to (1) enable efficient exploration of the parameter space; and (2) ensure robustness to silent errors which may be unavoidable in extreme-scale computational platforms of the future. This report outlines SAChES, describes four papers that are the result of the project, and discusses some additional results.
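
    The differential-evolution stage referred to above builds proposals from the difference of two other chains' states; a minimal sketch of such a proposal, following the standard DE-MC construction (which may differ in detail from the SAChES implementation), is:

        # Minimal differential-evolution MCMC proposal: chain i proposes
        # x* = x_i + gamma * (x_r1 - x_r2) + eps, with gamma ~ 2.38 / sqrt(2 * dim).
        # Chains only need each other's current states, so the coupling stays loose.
        import numpy as np

        def de_proposal(chains, i, rng, jitter=1e-6):
            n_chains, dim = chains.shape
            gamma = 2.38 / np.sqrt(2.0 * dim)
            r1, r2 = rng.choice([k for k in range(n_chains) if k != i], size=2, replace=False)
            eps = jitter * rng.standard_normal(dim)
            return chains[i] + gamma * (chains[r1] - chains[r2]) + eps

        rng = np.random.default_rng(4)
        chains = rng.standard_normal((16, 3))     # 16 chains exploring a 3-parameter space
        print(de_proposal(chains, i=0, rng=rng))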

  18. Efficient inference of population size histories and locus-specific mutation rates from large-sample genomic variation data.

    Science.gov (United States)

    Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S

    2015-02-01

    With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.
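
    For orientation, the baseline that such frequency-spectrum methods generalize is the expected site frequency spectrum under the standard constant-size coalescent (a textbook result, not the model fitted in the paper): the expected number of sites at which the derived allele appears i times in a sample of n sequences is

        E[\xi_i] = \theta / i,   for i = 1, ..., n - 1,

    where \theta is the population-scaled mutation rate. Piecewise-exponential demographic histories distort this 1/i shape, and it is those distortions, together with exact gradients obtained by automatic differentiation, that the inference method exploits.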

  19. Large-volume injection of sample diluents not miscible with the mobile phase as an alternative approach in sample preparation for bioanalysis: an application for fenspiride bioequivalence.

    Science.gov (United States)

    Medvedovici, Andrei; Udrescu, Stefan; Albu, Florin; Tache, Florentin; David, Victor

    2011-09-01

    Liquid-liquid extraction of target compounds from biological matrices followed by the injection of a large volume from the organic layer into the chromatographic column operated under reversed-phase (RP) conditions would successfully combine the selectivity and the straightforward character of the procedure in order to enhance sensitivity, compared with the usual approach of involving solvent evaporation and residue re-dissolution. Large-volume injection of samples in diluents that are not miscible with the mobile phase was recently introduced in chromatographic practice. The risk of random errors produced during the manipulation of samples is also substantially reduced. A bioanalytical method designed for the bioequivalence of fenspiride containing pharmaceutical formulations was based on a sample preparation procedure involving extraction of the target analyte and the internal standard (trimetazidine) from alkalinized plasma samples in 1-octanol. A volume of 75 µl from the octanol layer was directly injected on a Zorbax SB C18 Rapid Resolution, 50 mm length × 4.6 mm internal diameter × 1.8 µm particle size column, with the RP separation being carried out under gradient elution conditions. Detection was made through positive ESI and MS/MS. Aspects related to method development and validation are discussed. The bioanalytical method was successfully applied to assess bioequivalence of a modified release pharmaceutical formulation containing 80 mg fenspiride hydrochloride during two different studies carried out as single-dose administration under fasting and fed conditions (four arms), and multiple doses administration, respectively. The quality attributes assigned to the bioanalytical method, as resulting from its application to the bioequivalence studies, are highlighted and fully demonstrate that sample preparation based on large-volume injection of immiscible diluents has an increased potential for application in bioanalysis.

  20. Psychometric Properties of the Penn State Worry Questionnaire for Children in a Large Clinical Sample

    Science.gov (United States)

    Pestle, Sarah L.; Chorpita, Bruce F.; Schiffman, Jason

    2008-01-01

    The Penn State Worry Questionnaire for Children (PSWQ-C; Chorpita, Tracey, Brown, Collica, & Barlow, 1997) is a 14-item self-report measure of worry in children and adolescents. Although the PSWQ-C has demonstrated favorable psychometric properties in small clinical and large community samples, this study represents the first psychometric…

  1. An evaluation of sampling and full enumeration strategies for Fisher Jenks classification in big data settings

    Science.gov (United States)

    Rey, Sergio J.; Stephens, Philip A.; Laura, Jason R.

    2017-01-01

    Large data contexts present a number of challenges to optimal choropleth map classifiers. Application of optimal classifiers to a sample of the attribute space is one proposed solution. The properties of alternative sampling-based classification methods are examined through a series of Monte Carlo simulations. The impacts of spatial autocorrelation, number of desired classes, and form of sampling are shown to have significant impacts on the accuracy of map classifications. Tradeoffs between improved speed of the sampling approaches and loss of accuracy are also considered. The results suggest the possibility of guiding the choice of classification scheme as a function of the properties of large data sets.

  2. Analysis of reflection-peak wavelengths of sampled fiber Bragg gratings with large chirp.

    Science.gov (United States)

    Zou, Xihua; Pan, Wei; Luo, Bin

    2008-09-10

    The reflection-peak wavelengths (RPWs) in the spectra of sampled fiber Bragg gratings with large chirp (SFBGs-LC) are theoretically investigated. Such RPWs are divided into two parts, the RPWs of equivalent uniform SFBGs (U-SFBGs) and the wavelength shift caused by the large chirp in the grating period (CGP). We propose a quasi-equivalent transform to deal with the CGP. That is, the CGP is transferred into quasi-equivalent phase shifts to directly derive the Fourier transform of the refractive index modulation. Then, in the case of both the direct and the inverse Talbot effect, the wavelength shift is obtained from the Fourier transform. Finally, the RPWs of SFBGs-LC can be achieved by combining the wavelength shift and the RPWs of equivalent U-SFBGs. Several simulations are shown to numerically confirm these predicted RPWs of SFBGs-LC.
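
    For the equivalent uniform SFBG mentioned above, the reflection peaks form a comb around the Bragg wavelength; a commonly used expression for the channel spacing set by the sampling period P is quoted here for orientation (a standard sampled-grating result, not a formula taken from this paper):

        \Delta\lambda \approx \frac{\lambda_{B}^{2}}{2 n_{eff} P},

    so the m-th reflection peak of the uniform SFBG sits near \lambda_m \approx \lambda_B + m \Delta\lambda. The chirp-induced wavelength shift analysed in the paper is then added to these uniform-SFBG peak positions.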

  3. Superwind Outflows in Seyfert Galaxies? : Large-Scale Radio Maps of an Edge-On Sample

    Science.gov (United States)

    Colbert, E.; Gallimore, J.; Baum, S.; O'Dea, C.

    1995-03-01

    Large-scale galactic winds (superwinds) are commonly found flowing out of the nuclear region of ultraluminous infrared and powerful starburst galaxies. Stellar winds and supernovae from the nuclear starburst provide the energy to drive these superwinds. The outflowing gas escapes along the rotation axis, sweeping up and shock-heating clouds in the halo, which produces optical line emission, radio synchrotron emission, and X-rays. These features can most easily be studied in edge-on systems, so that the wind emission is not confused by that from the disk. We have begun a systematic search for superwind outflows in Seyfert galaxies. In an earlier optical emission-line survey, we found extended minor axis emission and/or double-peaked emission line profiles in >~30% of the sample objects. We present here large-scale (6cm VLA C-config) radio maps of 11 edge-on Seyfert galaxies, selected (without bias) from a distance-limited sample of 23 edge-on Seyferts. These data have been used to estimate the frequency of occurrence of superwinds. Preliminary results indicate that four (36%) of the 11 objects observed and six (26%) of the 23 objects in the distance-limited sample have extended radio emission oriented perpendicular to the galaxy disk. This emission may be produced by a galactic wind blowing out of the disk. Two (NGC 2992 and NGC 5506) of the nine objects for which we have both radio and optical data show good evidence for a galactic wind in both datasets. We suggest that galactic winds occur in >~30% of all Seyferts. A goal of this work is to find a diagnostic that can be used to distinguish between large-scale outflows that are driven by starbursts and those that are driven by an AGN. The presence of starburst-driven superwinds in Seyferts, if established, would have important implications for the connection between starburst galaxies and AGN.

  4. Boll weevil: experimental sterilization of large numbers by fractionated irradiation

    International Nuclear Information System (INIS)

    Haynes, J.W.; Wright, J.E.; Davich, T.B.; Roberson, J.; Griffin, J.G.; Darden, E.

    1978-01-01

    Boll weevils, Anthonomus grandis grandis Boheman, 9 days after egg implantation in the larval diet were transported from the Boll Weevil Research Laboratory, Mississippi State, MS, to the Comparative Animal Research Laboratory, Oak Ridge, TN, and irradiated with 6.9 krad (test 1) or 7.2 krad (test 2) of 60Co gamma rays delivered in 25 equal doses over 100 h. In test 1, from 600 individual pairs of T (treated) males x N (normal) females, only 114 eggs hatched from a sample of 950 eggs, and 47 adults emerged from a sample of 1042 eggs. Also, from 600 pairs of T females x N males, 6 eggs hatched of a sample of 6 eggs and 12 adults emerged from a sample of 20 eggs. In test 2, from 700 individual pairs of T males x N females, 54 eggs hatched from a sample of 1510, and 10 adults emerged from a sample of 1703 eggs. Also, in T females x N males matings, 1 egg hatched of a sample of 3, and no adults emerged from a sample of 4. Transportation and handling in the 2nd test reduced adult emergence an avg of 49%. Thus the 2 replicates in test 2 resulted in 3.4 x 10^5 and 4.3 x 10^5 irradiated weevils emerging/day for 7 days. Bacterial contamination of weevils was low.

  5. Impact factors for Reggeon-gluon transition in N=4 SYM with large number of colours

    Energy Technology Data Exchange (ETDEWEB)

    Fadin, V.S., E-mail: fadin@inp.nsk.su [Budker Institute of Nuclear Physics of SD RAS, 630090 Novosibirsk (Russian Federation); Novosibirsk State University, 630090 Novosibirsk (Russian Federation); Fiore, R., E-mail: roberto.fiore@cs.infn.it [Dipartimento di Fisica, Università della Calabria, and Istituto Nazionale di Fisica Nucleare, Gruppo collegato di Cosenza, Arcavacata di Rende, I-87036 Cosenza (Italy)

    2014-06-27

    We calculate impact factors for the Reggeon-gluon transition in supersymmetric Yang–Mills theory with four supercharges at a large number of colours N_c. In the next-to-leading order, impact factors are not uniquely defined and must accord with BFKL kernels and energy scales. We obtain the impact factor corresponding to the kernel and the energy evolution parameter, which is invariant under Möbius transformation in momentum space, and show that it is also Möbius invariant up to terms taken into account in the BDS ansatz.

  6. Application of k0-based internal monostandard NAA for large sample analysis of clay pottery. As a part of inter comparison exercise

    International Nuclear Information System (INIS)

    Acharya, R.; Dasari, K.B.; Pujari, P.K.; Swain, K.K.; Shinde, A.D.; Reddy, A.V.R.

    2014-01-01

    As part of an inter-comparison exercise of an IAEA Coordinated Research Project on large sample neutron activation analysis, a large, non-standard-geometry pottery replica (obtained from Peru) was analyzed by k_0-based internal monostandard neutron activation analysis (IM-NAA). Two large sub-samples (0.40 and 0.25 kg) were irradiated at the graphite reflector position of the AHWR Critical Facility in BARC, Trombay, Mumbai, India. Small samples (100-200 mg) were also analyzed by IM-NAA for comparison purposes. Radioactive assay was carried out using a 40 % relative efficiency HPGe detector. To examine the homogeneity of the sample, counting was also carried out using an X-Z rotary scanning unit. In situ relative detection efficiency was evaluated using gamma rays of the activation products in the irradiated sample in the energy range of 122-2,754 keV. Elemental concentration ratios with respect to Na of small (100 mg mass) as well as large (15 and 400 g) samples were used to check the homogeneity of the samples. Concentration ratios of 18 elements, namely K, Sc, Cr, Mn, Fe, Co, Zn, As, Rb, Cs, La, Ce, Sm, Eu, Yb, Lu, Hf and Th, with respect to Na (internal monostandard) were calculated using IM-NAA. Absolute concentrations were arrived at for both large and small samples using the Na concentration obtained from the relative method of NAA. The percentage combined uncertainties at ±1 s confidence limit on the determined values were in the range of 3-9 %. Two IAEA reference materials, SL-1 and SL-3, were analyzed by IM-NAA to evaluate the accuracy of the method. (author)

  7. Slepian simulation of distributions of plastic displacements of earthquake excited shear frames with a large number of stories

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Ditlevsen, Ove

    2005-01-01

    The object of study is a stationary Gaussian white noise excited plane multistory shear frame with a large number of rigid traverses. All the traverse-connecting columns have finite symmetrical yield limits except the columns in one or more of the bottom floors. The columns behave linearly elasti...

  8. Space Situational Awareness of Large Numbers of Payloads From a Single Deployment

    Science.gov (United States)

    Segerman, A.; Byers, J.; Emmert, J.; Nicholas, A.

    2014-09-01

    The nearly simultaneous deployment of a large number of payloads from a single vehicle presents a new challenge for space object catalog maintenance and space situational awareness (SSA). Following two cubesat deployments last November, it took five weeks to catalog the resulting 64 orbits. The upcoming Kicksat mission will present an even greater SSA challenge, with its deployment of 128 chip-sized picosats. Although all of these deployments are in short-lived orbits, future deployments will inevitably occur at higher altitudes, with a longer term threat of collision with active spacecraft. With such deployments, individual scientific payload operators require rapid precise knowledge of their satellites' locations. Following the first November launch, the cataloguing did not initially associate a payload with each orbit, leaving this to the satellite operators. For short duration missions, the time required to identify an experiment's specific orbit may easily be a large fraction of the spacecraft's lifetime. For a Kicksat-type deployment, present tracking cannot collect enough observations to catalog each small object. The current approach is to treat the chip cloud as a single catalog object. However, the cloud dissipates into multiple subclouds and, ultimately, tiny groups of untrackable chips. One response to this challenge may be to mandate installation of a transponder on each spacecraft. Directional transponder transmission detections could be used as angle observations for orbit cataloguing. Of course, such an approach would only be employable with cooperative spacecraft. In other cases, a probabilistic association approach may be useful, with the goal being to establish the probability of an element being at a given point in space. This would permit more reliable assessment of the probability of collision of active spacecraft with any cloud element. This paper surveys the cataloguing challenges presented by large scale deployments of small spacecraft

  9. Precise large deviations of aggregate claims in a size-dependent renewal risk model with stopping time claim-number process

    Directory of Open Access Journals (Sweden)

    Shuo Zhang

    2017-04-01

    In this paper, we consider a size-dependent renewal risk model with stopping time claim-number process. In this model, we do not make any assumption on the dependence structure of claim sizes and inter-arrival times. We study large deviations of the aggregate amount of claims. For the subexponential heavy-tailed case, we obtain a precise large-deviation formula; our method substantially relies on a martingale for the structure of our models.
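
    For orientation, the classical precise large-deviation result that work of this kind refines can be stated as follows (a standard formulation, often given for consistently varying heavy tails; notation assumed here: common claim-size distribution F with finite mean \mu, and claim-number process N(t) with mean function \lambda(t)):

        P\left( \sum_{i=1}^{N(t)} X_i - \mu \lambda(t) > x \right) \sim \lambda(t) \, \overline{F}(x),

    holding uniformly for x \ge \gamma \lambda(t), for any fixed \gamma > 0, as t \to \infty. Results such as the one in this paper establish formulas of this type under weaker assumptions on the dependence between claim sizes and inter-arrival times.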

  10. Neurocognitive impairment in a large sample of homeless adults with mental illness.

    Science.gov (United States)

    Stergiopoulos, V; Cusi, A; Bekele, T; Skosireva, A; Latimer, E; Schütz, C; Fernando, I; Rourke, S B

    2015-04-01

    This study examines neurocognitive functioning in a large, well-characterized sample of homeless adults with mental illness and assesses demographic and clinical factors associated with neurocognitive performance. A total of 1500 homeless adults with mental illness enrolled in the At Home Chez Soi study completed neuropsychological measures assessing speed of information processing, memory, and executive functioning. Sociodemographic and clinical data were also collected. Linear regression analyses were conducted to examine factors associated with neurocognitive performance. Approximately half of our sample met criteria for psychosis, major depressive disorder, and alcohol or substance use disorder, and nearly half had experienced severe traumatic brain injury. Overall, 72% of participants demonstrated cognitive impairment, including deficits in processing speed (48%), verbal learning (71%) and recall (67%), and executive functioning (38%). The overall statistical model explained 19.8% of the variance in the neurocognitive summary score, with reduced neurocognitive performance associated with older age, lower education, first language other than English or French, Black or Other ethnicity, and the presence of psychosis. Homeless adults with mental illness experience impairment in multiple neuropsychological domains. Much of the variance in our sample's cognitive performance remains unexplained, highlighting the need for further research in the mechanisms underlying cognitive impairment in this population. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  11. Characteristic Polynomials of Sample Covariance Matrices: The Non-Square Case

    OpenAIRE

    Kösters, Holger

    2009-01-01

    We consider the sample covariance matrices of large data matrices which have i.i.d. complex matrix entries and which are non-square in the sense that the difference between the number of rows and the number of columns tends to infinity. We show that the second-order correlation function of the characteristic polynomial of the sample covariance matrix is asymptotically given by the sine kernel in the bulk of the spectrum and by the Airy kernel at the edge of the spectrum. Similar results are g...

  12. Sample size and number of outcome measures of veterinary randomised controlled trials of pharmaceutical interventions funded by different sources, a cross-sectional study.

    Science.gov (United States)

    Wareham, K J; Hyde, R M; Grindlay, D; Brennan, M L; Dean, R S

    2017-10-04

    Randomised controlled trials (RCTs) are a key component of the veterinary evidence base. Sample sizes and defined outcome measures are crucial components of RCTs. The aim was to describe the sample size and number of outcome measures of veterinary RCTs, either funded by the pharmaceutical industry or not, published in 2011. A structured search of PubMed identified RCTs examining the efficacy of pharmaceutical interventions. The number of outcome measures, the number of animals enrolled per trial, whether a primary outcome was identified, and the presence of a sample size calculation were extracted from the RCTs. The source of funding was identified for each trial and the groups were compared on the above parameters. Literature searches returned 972 papers; 86 papers comprising 126 individual trials were analysed. The median number of outcomes per trial was 5.0; there were no significant differences across funding groups (p = 0.133). The median number of animals enrolled per trial was 30.0; this was similar across funding groups (p = 0.302). A primary outcome was identified in 40.5% of trials and was significantly more likely to be stated in trials funded by a pharmaceutical company. A very low percentage of trials reported a sample size calculation (14.3%). Failure to report primary outcomes, failure to justify sample sizes, and the reporting of multiple outcome measures were common features in the clinical trials examined in this study. It is possible some of these factors may be affected by the source of funding of the studies, but the influence of funding needs to be explored with a larger number of trials. Some veterinary RCTs provide a weak evidence base, and targeted strategies are required to improve the quality of veterinary RCTs to ensure there is reliable evidence on which to base clinical decisions.
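
    Since so few of the surveyed trials reported a sample size calculation, it may help to show what such a calculation involves; the sketch below uses the standard two-group comparison-of-means formula with purely illustrative numbers, not values from any of the reviewed trials:

        # Generic per-group sample size for comparing two means:
        # n = 2 * (z_{1-alpha/2} + z_{1-beta})**2 * sigma**2 / delta**2.
        # sigma (common SD) and delta (detectable difference) would come from prior data.
        from math import ceil
        from scipy.stats import norm

        def n_per_group(sigma, delta, alpha=0.05, power=0.80):
            z_alpha = norm.ppf(1 - alpha / 2)
            z_beta = norm.ppf(power)
            return ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

        print(n_per_group(sigma=1.5, delta=1.0))   # about 36 animals per treatment group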

  13. On the accuracy of protein determination in large biological samples by prompt gamma neutron activation analysis

    International Nuclear Information System (INIS)

    Kasviki, K.; Stamatelatos, I.E.; Yannakopoulou, E.; Papadopoulou, P.; Kalef-Ezra, J.

    2007-01-01

    A prompt gamma neutron activation analysis (PGNAA) facility has been developed for the determination of nitrogen and thus total protein in large volume biological samples or the whole body of small animals. In the present work, the accuracy of nitrogen determination by PGNAA in phantoms of known composition as well as in four raw ground meat samples of about 1 kg mass was examined. Dumas combustion and Kjeldahl techniques were also used for the assessment of nitrogen concentration in the meat samples. No statistically significant differences were found between the concentrations assessed by the three techniques. The results of this work demonstrate the applicability of PGNAA for the assessment of total protein in biological samples of 0.25-1.5 kg mass, such as a meat sample or the body of small animal even in vivo with an equivalent radiation dose of about 40 mSv

  14. On the accuracy of protein determination in large biological samples by prompt gamma neutron activation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kasviki, K. [Institute of Nuclear Technology and Radiation Protection, NCSR ' Demokritos' , Aghia Paraskevi, Attikis 15310 (Greece); Medical Physics Laboratory, Medical School, University of Ioannina, Ioannina 45110 (Greece); Stamatelatos, I.E. [Institute of Nuclear Technology and Radiation Protection, NCSR ' Demokritos' , Aghia Paraskevi, Attikis 15310 (Greece)], E-mail: ion@ipta.demokritos.gr; Yannakopoulou, E. [Institute of Physical Chemistry, NCSR ' Demokritos' , Aghia Paraskevi, Attikis 15310 (Greece); Papadopoulou, P. [Institute of Technology of Agricultural Products, NAGREF, Lycovrissi, Attikis 14123 (Greece); Kalef-Ezra, J. [Medical Physics Laboratory, Medical School, University of Ioannina, Ioannina 45110 (Greece)

    2007-10-15

    A prompt gamma neutron activation analysis (PGNAA) facility has been developed for the determination of nitrogen and thus total protein in large volume biological samples or the whole body of small animals. In the present work, the accuracy of nitrogen determination by PGNAA in phantoms of known composition as well as in four raw ground meat samples of about 1 kg mass was examined. Dumas combustion and Kjeldahl techniques were also used for the assessment of nitrogen concentration in the meat samples. No statistically significant differences were found between the concentrations assessed by the three techniques. The results of this work demonstrate the applicability of PGNAA for the assessment of total protein in biological samples of 0.25-1.5 kg mass, such as a meat sample or the body of small animal even in vivo with an equivalent radiation dose of about 40 mSv.

  15. Large sample NAA of a pottery replica utilizing thermal neutron flux at AHWR critical facility and X-Z rotary scanning unit

    International Nuclear Information System (INIS)

    Acharya, R.; Dasari, K.B.; Pujari, P.K.; Swain, K.K.; Shinde, A.D.; Reddy, A.V.R.

    2013-01-01

    Large sample neutron activation analysis (LSNAA) of a clay pottery replica from Peru was carried out using the low neutron flux graphite reflector position of the Advanced Heavy Water Reactor (AHWR) critical facility. This work was taken up as a part of an inter-comparison exercise under the IAEA CRP on LSNAA of archaeological objects. The irradiated large size sample, placed on an X-Z rotary scanning unit, was assayed using a 40% relative efficiency HPGe detector. The k₀-based internal monostandard NAA (IM-NAA) in conjunction with in situ relative detection efficiency was used to calculate concentration ratios of 12 elements with respect to Na. Analyses of both small and large size samples were carried out to check homogeneity and to arrive at absolute concentrations. (author)

  16. A protocol for large scale genomic DNA isolation for cacao genetics ...

    African Journals Online (AJOL)

    Advances in DNA technology, such as marker assisted selection, detection of quantitative trait loci and genomic selection also require the isolation of DNA from a large number of samples and the preservation of tissue samples for future use in cacao genome studies. The present study proposes a method for the ...

  17. Malaria diagnosis from pooled blood samples: comparative analysis of real-time PCR, nested PCR and immunoassay as a platform for the molecular and serological diagnosis of malaria on a large-scale

    Directory of Open Access Journals (Sweden)

    Giselle FMC Lima

    2011-09-01

    Full Text Available Malaria diagnosis has traditionally been made using thick blood smears, but more sensitive and faster techniques are required to process large numbers of samples in clinical and epidemiological studies and in blood donor screening. Here, we evaluated molecular and serological tools to build a screening platform for pooled samples aimed at reducing both the time and the cost of these diagnoses. Positive and negative samples were analysed in individual and pooled experiments using real-time polymerase chain reaction (PCR), nested PCR and an immunochromatographic test. For the individual tests, 46/49 samples were positive by real-time PCR, 46/49 were positive by nested PCR and 32/46 were positive by immunochromatographic test. For the assays performed using pooled samples, 13/15 samples were positive by real-time PCR and nested PCR and 11/15 were positive by immunochromatographic test. These molecular methods demonstrated sensitivity and specificity for both the individual and pooled samples. Due to the advantages of the real-time PCR, such as the fast processing and the closed system, this method should be indicated as the first choice for use in large-scale diagnosis and the nested PCR should be used for species differentiation. However, additional field isolates should be tested to confirm the results achieved using cultured parasites and the serological test should only be adopted as a complementary method for malaria diagnosis.

  18. Calibration samples for accelerator mass spectrometry

    International Nuclear Information System (INIS)

    Hershberger, R.L.; Flynn, D.S.; Gabbard, F.

    1981-01-01

    Radioactive samples with precisely known numbers of atoms are useful as calibration sources for lifetime measurements using accelerator mass spectrometry. Such samples can be obtained in two ways: either by measuring the production rate as the sample is created or by measuring the decay rate after the sample has been obtained. The latter method requires that a large sample be produced and that the decay constant be accurately known. The former method is a useful and independent alternative, especially when the decay constant is not well known. The facilities at the University of Kentucky for precision measurements of total neutron production cross sections offer a source of such calibration samples. The possibilities, while quite extensive, would be limited to the proton rich side of the line of stability because of the use of (p,n) and (α,n) reactions for sample production
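
    The two routes to a known number of atoms described above can be summarised by the standard activation relations below. This is a generic restatement rather than a formula taken from the paper, and it neglects decay during the irradiation itself:

    \[
    N = \int_0^{t_{\mathrm{irr}}} R(t)\,\mathrm{d}t \quad\text{(production-rate route)},
    \qquad
    N = \frac{A}{\lambda} \quad\text{(decay-rate route)},
    \]

    where R(t) is the measured production rate of the nuclide (e.g. from a (p,n) or (α,n) reaction with known cross section and beam current), A the measured activity of the finished sample, and λ its decay constant. The second route is only as good as the knowledge of λ, which is why the first is attractive when the decay constant is poorly known.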

  19. Sample sizes to control error estimates in determining soil bulk density in California forest soils

    Science.gov (United States)

    Youzhi Han; Jianwei Zhang; Kim G. Mattson; Weidong Zhang; Thomas A. Weber

    2016-01-01

    Characterizing forest soil properties with high variability is challenging, sometimes requiring large numbers of soil samples. Soil bulk density is a standard variable needed along with element concentrations to calculate nutrient pools. This study aimed to determine the optimal sample size, the number of observations (n), for predicting the soil bulk density with a...
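
    A generic way to frame this kind of calculation (not necessarily the formula used in the truncated abstract above) is to choose n so that the sample mean falls within a chosen relative error E of the true mean, given the coefficient of variation CV of bulk density:

    \[
    n \;\approx\; \left(\frac{z_{1-\alpha/2}\,\mathrm{CV}}{E}\right)^{2}.
    \]

    For example, with an illustrative CV = 0.3, a target relative error E = 0.1, and z = 1.96, about 35 samples would be needed; the CV and E values here are purely hypothetical.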

  20. Evaluation of Inflammatory Markers in a Large Sample of Obstructive Sleep Apnea Patients without Comorbidities

    Directory of Open Access Journals (Sweden)

    Izolde Bouloukaki

    2017-01-01

    Full Text Available Systemic inflammation is important in obstructive sleep apnea (OSA) pathophysiology and its comorbidity. We aimed to assess the levels of inflammatory biomarkers in a large sample of OSA patients and to investigate any correlation between these biomarkers and clinical and polysomnographic (PSG) parameters. This was a cross-sectional study in which 2983 patients who had undergone a polysomnography for OSA diagnosis were recruited. Patients with known comorbidities were excluded. Included patients (n=1053) were grouped according to apnea-hypopnea index (AHI) as mild, moderate, and severe. Patients with AHI < 5 served as controls. Demographics, PSG data, and levels of high-sensitivity C-reactive protein (hs-CRP), fibrinogen, erythrocyte sedimentation rate (ESR), and uric acid (UA) were measured and compared between groups. A significant difference was found between groups in hs-CRP, fibrinogen, and UA. All biomarkers were independently associated with OSA severity and gender (p<0.05). Females had increased levels of hs-CRP, fibrinogen, and ESR (p<0.001) compared to men. In contrast, UA levels were higher in men (p<0.001). Our results suggest that inflammatory markers significantly increase in patients with OSA without known comorbidities and correlate with OSA severity. These findings may have important implications regarding OSA diagnosis, monitoring, treatment, and prognosis. This trial is registered with ClinicalTrials.gov number NCT03070769.

  1. [DOE method for evaluating environmental and waste management samples: Revision 1, Addendum 1

    Energy Technology Data Exchange (ETDEWEB)

    Goheen, S.C.

    1995-04-01

    The US Department of Energy's (DOE's) environmental and waste management (EM) sampling and analysis activities require that large numbers of samples be analyzed for materials characterization, environmental surveillance, and site-remediation programs. The present document, DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods), is a supplemental resource for analyzing many of these samples.

  2. [DOE method for evaluating environmental and waste management samples: Revision 1, Addendum 1

    International Nuclear Information System (INIS)

    Goheen, S.C.

    1995-04-01

    The US Department of Energy's (DOE's) environmental and waste management (EM) sampling and analysis activities require that large numbers of samples be analyzed for materials characterization, environmental surveillance, and site-remediation programs. The present document, DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods), is a supplemental resource for analyzing many of these samples

  3. Exploration of large, rare copy number variants associated with psychiatric and neurodevelopmental disorders in individuals with anorexia nervosa

    NARCIS (Netherlands)

    Yilmaz, Zeynep; Szatkiewicz, Jin P; Crowley, James J; Ancalade, NaEshia; Brandys, Marek K; van Elburg, Annemarie; de Kovel, Carolien G F; Adan, Roger A H; Hinney, Anke; Hebebrand, Johannes; Gratacos, Monica; Fernandez-Aranda, Fernando; Escaramis, Georgia; Gonzalez, Juan R; Estivill, Xavier; Zeggini, Eleftheria; Sullivan, Patrick F; Bulik, Cynthia M; Genetic Consortium for Anorexia Nervosa, Wellcome Trust Case Control Consortium 3

    Anorexia nervosa (AN) is a serious and heritable psychiatric disorder. To date, studies of copy number variants (CNVs) have been limited and inconclusive because of small sample sizes. We conducted a case-only genome-wide CNV survey in 1983 female AN cases included in the Genetic Consortium for

  4. Modern methods of sample preparation for GC analysis

    NARCIS (Netherlands)

    de Koning, S.; Janssen, H.-G.; Brinkman, U.A.Th.

    2009-01-01

    Today, a wide variety of techniques is available for the preparation of (semi-) solid, liquid and gaseous samples, prior to their instrumental analysis by means of capillary gas chromatography (GC) or, increasingly, comprehensive two-dimensional GC (GC × GC). In the past two decades, a large number

  5. "Best Practices in Using Large, Complex Samples: The Importance of Using Appropriate Weights and Design Effect Compensation"

    Directory of Open Access Journals (Sweden)

    Jason W. Osborne

    2011-09-01

    Full Text Available Large surveys often use probability sampling in order to obtain representative samples, and these data sets are valuable tools for researchers in all areas of science. Yet many researchers are not formally prepared to appropriately utilize these resources. Indeed, users of one popular dataset were generally found not to have modeled the analyses to take account of the complex sample (Johnson & Elliott, 1998) even when publishing in highly-regarded journals. It is well known that failure to appropriately model the complex sample can substantially bias the results of the analysis. Examples presented in this paper highlight the risk of errors of inference and mis-estimation of parameters that arises from failing to analyze these data sets appropriately.
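
    As a concrete illustration of why ignoring the sampling design matters, the sketch below computes a weighted mean together with a Kish-style design-effect correction to the effective sample size. The outcome values, weights, cluster size and intraclass correlation are all hypothetical, and a real analysis would use purpose-built survey routines rather than this hand-rolled version.

    ```python
    import numpy as np

    # Hypothetical complex-survey data: outcome y with probability weights w
    y = np.array([3.2, 4.1, 2.8, 5.0, 3.7, 4.4])
    w = np.array([1.0, 2.5, 1.5, 0.8, 2.0, 1.2])
    m, rho = 25, 0.05          # assumed mean cluster size and intraclass correlation

    mean_weighted = np.average(y, weights=w)

    # Design effect: clustering component (Kish) times unequal-weighting component
    deff_cluster = 1 + (m - 1) * rho
    deff_weights = len(w) * np.sum(w**2) / np.sum(w)**2
    n_effective = len(y) / (deff_cluster * deff_weights)

    # Naive standard error assuming simple random sampling vs design-corrected
    se_srs = np.std(y, ddof=1) / np.sqrt(len(y))
    se_design = np.std(y, ddof=1) / np.sqrt(n_effective)
    print(mean_weighted, se_srs, se_design)
    ```

    The design-corrected standard error is larger than the naive one, which is exactly the error of inference the paper warns about when a complex sample is analysed as if it were a simple random sample.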

  6. Examining gray matter structure associated with academic performance in a large sample of Chinese high school students

    OpenAIRE

    Song Wang; Ming Zhou; Taolin Chen; Xun Yang; Guangxiang Chen; Meiyun Wang; Qiyong Gong

    2017-01-01

    Achievement in school is crucial for students to be able to pursue successful careers and lead happy lives in the future. Although many psychological attributes have been found to be associated with academic performance, the neural substrates of academic performance remain largely unknown. Here, we investigated the relationship between brain structure and academic performance in a large sample of high school students via structural magnetic resonance imaging (S-MRI) using voxel-based morphome...

  7. TRAN-STAT: statistics for environmental studies, Number 22. Comparison of soil-sampling techniques for plutonium at Rocky Flats

    International Nuclear Information System (INIS)

    Gilbert, R.O.; Bernhardt, D.E.; Hahn, P.B.

    1983-01-01

    A summary of a field soil sampling study conducted around the Rocky Flats, Colorado, plant in May 1977 is presented. Several different soil sampling techniques that had been used in the area were applied at four different sites. One objective was to compare the average 239-240Pu concentration values obtained by the various soil sampling techniques used. There was also interest in determining whether there are differences in the reproducibility of the various techniques and how the techniques compared with the proposed EPA technique of sampling to 1 cm depth. Statistically significant differences in average concentrations between the techniques were found. The differences could be largely related to the differences in sampling depth, the primary physical variable between the techniques. The reproducibility of the techniques was evaluated by comparing coefficients of variation. Differences between coefficients of variation were not statistically significant. Average (median) coefficients ranged from 21 to 42 percent for the five sampling techniques. A laboratory study indicated that various sample treatment and particle sizing techniques could increase the concentration of plutonium in the less than 10 micrometer size fraction by up to a factor of about 4 compared to the 2 mm size fraction

  8. TO BE OR NOT TO BE: AN INFORMATIVE NON-SYMBOLIC NUMERICAL MAGNITUDE PROCESSING STUDY ABOUT SMALL VERSUS LARGE NUMBERS IN INFANTS

    Directory of Open Access Journals (Sweden)

    Annelies CEULEMANS

    2014-03-01

    Full Text Available Many studies tested the association between numerical magnitude processing and mathematical achievement with conflicting findings reported for individuals with mathematical learning disorders. Some of the inconsistencies might be explained by the number of non-symbolic stimuli or dot collections used in studies. It has been hypothesized that there is an object-file system for ‘small’ and an analogue magnitude system for ‘large’ numbers. This two-system account has been supported by the set size limit of the object-file system (three items). A boundary was defined, accordingly, categorizing numbers below four as ‘small’ and from four and above as ‘large’. However, data on ‘small’ number processing and on the ‘boundary’ between small and large numbers are missing. In this contribution we provide data from infants discriminating between the number sets 4 vs. 8 and 1 vs. 4, both containing the number four combined with a small and a large number respectively. Participants were 25 and 26 full term 9-month-olds for 4 vs. 8 and 1 vs. 4 respectively. The stimuli (dots) were controlled for continuous variables. Eye-tracking was combined with the habituation paradigm. The results showed that the infants were successful in discriminating 1 from 4, but failed to discriminate 4 from 8 dots. This finding supports the assumption of the number four as a ‘small’ number and enlarges the object-file system’s limit. This study might help to explain inconsistencies in studies. Moreover, the information may be useful in answering parents’ questions about challenges that vulnerable children with number processing problems, such as children with mathematical learning disorders, might encounter. In addition, the study might give some information on the stimuli that can be used to effectively foster children’s magnitude processing skills.

  9. Efficient Sample Tracking With OpenLabFramework

    DEFF Research Database (Denmark)

    List, Markus; Schmidt, Steffen; Trojnar, Jakub

    2014-01-01

    of samples created and need to be replaced with state-of-the-art laboratory information management systems. Such systems have been developed in large numbers, but they are often limited to specific research domains and types of data. One domain so far neglected is the management of libraries of vector clones...... and genetically engineered cell lines. OpenLabFramework is a newly developed web-application for sample tracking, particularly laid out to fill this gap, but with an open architecture allowing it to be extended for other biological materials and functional data. Its sample tracking mechanism is fully customizable...

  10. Crowdsourcing for large-scale mosquito (Diptera: Culicidae) sampling

    Science.gov (United States)

    Sampling a cosmopolitan mosquito (Diptera: Culicidae) species throughout its range is logistically challenging and extremely resource intensive. Mosquito control programmes and regional networks operate at the local level and often conduct sampling activities across much of North America. A method f...

  11. Hybrid algorithm of ensemble transform and importance sampling for assimilation of non-Gaussian observations

    Directory of Open Access Journals (Sweden)

    Shin'ya Nakano

    2014-05-01

    Full Text Available A hybrid algorithm that combines the ensemble transform Kalman filter (ETKF) and the importance sampling approach is proposed. Since the ETKF assumes a linear Gaussian observation model, the estimate obtained by the ETKF can be biased in cases with nonlinear or non-Gaussian observations. The particle filter (PF) is based on the importance sampling technique, and is applicable to problems with nonlinear or non-Gaussian observations. However, the PF usually requires an unrealistically large sample size in order to achieve a good estimation, and thus it is computationally prohibitive. In the proposed hybrid algorithm, we obtain a proposal distribution similar to the posterior distribution by using the ETKF. A large number of samples are then drawn from the proposal distribution, and these samples are weighted to approximate the posterior distribution according to the importance sampling principle. Since the importance sampling provides an estimate of the probability density function (PDF) without assuming linearity or Gaussianity, we can resolve the bias due to the nonlinear or non-Gaussian observations. Finally, in the next forecast step, we reduce the sample size to achieve computational efficiency based on the Gaussian assumption, while we use a relatively large number of samples in the importance sampling in order to consider the non-Gaussian features of the posterior PDF. The use of the ETKF is also beneficial in terms of the computational simplicity of generating a number of random samples from the proposal distribution and in weighting each of the samples. The proposed algorithm is not necessarily effective in cases where the ensemble is located far from the true state. However, monitoring the effective sample size and tuning the factor for covariance inflation could resolve this problem. In this paper, the proposed hybrid algorithm is introduced and its performance is evaluated through experiments with non-Gaussian observations.
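
    A minimal sketch of the reweighting step described above: a Gaussian proposal (standing in for the ETKF analysis) is sampled heavily, and each draw is weighted by prior times likelihood over proposal density, which is the importance-sampling correction for a non-Gaussian observation. The state dimension, observation operator and all numerical values are hypothetical; monitoring the effective sample size, as the paper recommends, is included.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(0)

    # Hypothetical 2-D state: Gaussian forecast (prior) and a Gaussian proposal that
    # stands in for the ETKF analysis (shifted toward the observation, tighter spread).
    prior = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, 0.2], [0.2, 1.0]])
    proposal = multivariate_normal(mean=[0.6, 0.1], cov=[[0.4, 0.1], [0.1, 0.4]])

    # Non-Gaussian observation operator: y = |x_0| + noise with variance r
    y_obs, r = 0.8, 0.1
    def log_lik(x):
        return -0.5 * (y_obs - np.abs(x[:, 0])) ** 2 / r

    # Draw a large sample from the proposal and weight it toward the true posterior
    n = 20_000
    x = proposal.rvs(size=n, random_state=rng)
    log_w = prior.logpdf(x) + log_lik(x) - proposal.logpdf(x)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()

    posterior_mean = (w[:, None] * x).sum(axis=0)
    ess = 1.0 / np.sum(w ** 2)   # effective sample size, worth monitoring as the paper notes
    print(posterior_mean, ess)
    ```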

  12. Accurate, high-throughput typing of copy number variation using paralogue ratios from dispersed repeats.

    Science.gov (United States)

    Armour, John A L; Palla, Raquel; Zeeuwen, Patrick L J M; den Heijer, Martin; Schalkwijk, Joost; Hollox, Edward J

    2007-01-01

    Recent work has demonstrated an unexpected prevalence of copy number variation in the human genome, and has highlighted the part this variation may play in predisposition to common phenotypes. Some important genes vary in number over a high range (e.g. DEFB4, which commonly varies between two and seven copies), and have posed formidable technical challenges for accurate copy number typing, so that there are no simple, cheap, high-throughput approaches suitable for large-scale screening. We have developed a simple comparative PCR method based on dispersed repeat sequences, using a single pair of precisely designed primers to amplify products simultaneously from both test and reference loci, which are subsequently distinguished and quantified via internal sequence differences. We have validated the method for the measurement of copy number at DEFB4 by comparison of results from >800 DNA samples with copy number measurements by MAPH/REDVR, MLPA and array-CGH. The new Paralogue Ratio Test (PRT) method can require as little as 10 ng genomic DNA, appears to be comparable in accuracy to the other methods, and for the first time provides a rapid, simple and inexpensive method for copy number analysis, suitable for application to typing thousands of samples in large case-control association studies.
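
    The quantification step reduces to a ratio of co-amplified products. A minimal sketch of that arithmetic is given below, with hypothetical peak areas; it assumes equal amplification efficiency at the two loci, which in practice is established by calibrating against samples of known copy number.

    ```python
    def paralogue_ratio_copy_number(test_peak: float, ref_peak: float,
                                    ref_copies: int = 2) -> float:
        """Estimate copy number at a test locus from a paralogue ratio.

        test_peak / ref_peak: quantified product intensities for the test and
        reference loci, amplified with the same primer pair and distinguished
        by internal sequence differences. Assumes equal amplification
        efficiency; real assays calibrate against known-copy-number controls.
        """
        return ref_copies * test_peak / ref_peak

    # Hypothetical electropherogram peak areas
    print(paralogue_ratio_copy_number(test_peak=1520.0, ref_peak=760.0))  # ~4 copies
    ```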

  13. Prevalence and predictors of Axis I disorders in a large sample of treatment-seeking victims of sexual abuse and incest

    Directory of Open Access Journals (Sweden)

    Eoin McElroy

    2016-04-01

    Full Text Available Background: Childhood sexual abuse (CSA) is a common occurrence and a robust, yet non-specific, predictor of adult psychopathology. While many demographic and abuse factors have been shown to impact this relationship, their common and specific effects remain poorly understood. Objective: This study sought to assess the prevalence of Axis I disorders in a large sample of help-seeking victims of sexual trauma, and to examine the common and specific effects of demographic and abuse characteristics across these different diagnoses. Method: The participants were attendees at four treatment centres in Denmark that provide psychological therapy for victims of CSA (N=434). Axis I disorders were assessed using the Millon Clinical Multiaxial Inventory-III (MCMI-III). Multivariate logistic regression analysis was used to examine the associations between CSA characteristics (age of onset, duration, number of abusers, number of abusive acts) and 10 adult clinical syndromes. Results: There was significant variation in the prevalence of disorders and the abuse characteristics were differentially associated with the outcome variables. Having experienced sexual abuse from more than one perpetrator was the strongest predictor of psychopathology. Conclusions: The relationship between CSA and adult psychopathology is complex. Abuse characteristics have both unique and shared effects across different diagnoses.

  14. Source of vacuum electromagnetic zero-point energy and Dirac's large numbers hypothesis

    International Nuclear Information System (INIS)

    Simaciu, I.; Dumitrescu, G.

    1993-01-01

    Stochastic electrodynamics states that the zero-point fluctuation of the vacuum (ZPF) is an electromagnetic zero-point radiation with spectral density ρ(ω) = ħω³/2π²c³. Protons, free electrons and atoms are sources for this radiation; each of them absorbs and emits energy by interacting with the ZPF. At equilibrium the ZPF radiation is scattered by dipoles, the scattered radiation having spectral density ρ(ω,r) = ρ(ω)·c·σ(ω)/4πr². The dipole-scattered spectral density of the Universe is ρ = ∫₀^R n·ρ(ω,r)·4πr² dr. If σ_atom ≅ σ_e = σ_T, then ρ ≅ ρ(ω)·σ_T·R·n. Moreover, if ρ = ρ(ω), then σ_T·R·n = 1. With R = GM/c² and σ_T ≅ (e²/m_e c²)² ∝ r_e², the relation σ_T·R·n = 1 is equivalent to R/r_e = e²/(G·m_p·m_e), i.e. the cosmological coincidence discussed in the context of Dirac's large-numbers hypothesis. (Author)

  15. SyPRID sampler: A large-volume, high-resolution, autonomous, deep-ocean precision plankton sampling system

    Science.gov (United States)

    Billings, Andrew; Kaiser, Carl; Young, Craig M.; Hiebert, Laurel S.; Cole, Eli; Wagner, Jamie K. S.; Van Dover, Cindy Lee

    2017-03-01

    The current standard for large-volume (thousands of cubic meters) zooplankton sampling in the deep sea is the MOCNESS, a system of multiple opening-closing nets, typically lowered to within 50 m of the seabed and towed obliquely to the surface to obtain low-spatial-resolution samples that integrate across tens of meters of water depth. The SyPRID (Sentry Precision Robotic Impeller Driven) sampler is an innovative, deep-rated (6000 m) plankton sampler that partners with the Sentry Autonomous Underwater Vehicle (AUV) to obtain paired, large-volume plankton samples at specified depths and survey lines to within 1.5 m of the seabed and with simultaneous collection of sensor data. SyPRID uses a perforated Ultra-High-Molecular-Weight (UHMW) plastic tube to support a fine mesh net within an outer carbon composite tube (tube-within-a-tube design), with an axial flow pump located aft of the capture filter. The pump facilitates flow through the system and reduces or possibly eliminates the bow wave at the mouth opening. The cod end, a hollow truncated cone, is also made of UHMW plastic and includes a collection volume designed to provide an area where zooplankton can collect, out of the high flow region. SyPRID attaches as a saddle-pack to the Sentry vehicle. Sentry itself is configured with a flight control system that enables autonomous survey paths to low altitudes. In its verification deployment at the Blake Ridge Seep (2160 m) on the US Atlantic Margin, SyPRID was operated for 6 h at an altitude of 5 m. It recovered plankton samples, including delicate living larvae, from the near-bottom stratum that is seldom sampled by a typical MOCNESS tow. The prototype SyPRID and its next generations will enable studies of plankton or other particulate distributions associated with localized physico-chemical strata in the water column or above patchy habitats on the seafloor.

  16. Asymptotic distributions of coalescence times and ancestral lineage numbers for populations with temporally varying size.

    Science.gov (United States)

    Chen, Hua; Chen, Kun

    2013-07-01

    The distributions of coalescence times and ancestral lineage numbers play an essential role in coalescent modeling and ancestral inference. Both exact distributions of coalescence times and ancestral lineage numbers are expressed as the sum of alternating series, and the terms in the series become numerically intractable for large samples. More computationally attractive are their asymptotic distributions, which were derived in Griffiths (1984) for populations with constant size. In this article, we derive the asymptotic distributions of coalescence times and ancestral lineage numbers for populations with temporally varying size. For a sample of size n, denote by T_m the m-th coalescent time, when m + 1 lineages coalesce into m lineages, and A_n(t) the number of ancestral lineages at time t back from the current generation. Similar to the results in Griffiths (1984), the number of ancestral lineages, A_n(t), and the coalescence times, T_m, are asymptotically normal, with the mean and variance of these distributions depending on the population size function, N(t). At the very early stage of the coalescent, when t → 0, the number of coalesced lineages n - A_n(t) follows a Poisson distribution, and as m → n, $n(n-1)T_m/2N(0)$ follows a gamma distribution. We demonstrate the accuracy of the asymptotic approximations by comparing to both exact distributions and coalescent simulations. Several applications of the theoretical results are also shown: deriving statistics related to the properties of gene genealogies, such as the time to the most recent common ancestor (TMRCA) and the total branch length (TBL) of the genealogy, and deriving the allele frequency spectrum for large genealogies. With the advent of genomic-level sequencing data for large samples, the asymptotic distributions are expected to have wide applications in theoretical and methodological development for population genetic inference.
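
    A simple way to check asymptotic results of this kind is direct simulation of the coalescent under a time-varying N(t). The sketch below uses the per-generation approximation that k lineages coalesce with probability k(k-1)/2N(t); the exponentially declining (backward in time) population size is hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def coalescence_times(n, N_of_t, max_gen=5_000_000):
        """Coalescent times T_m (in generations) for a sample of size n.

        N_of_t(t) is the population size t generations before the present.
        Assumes at most one coalescence per generation, adequate when k << N(t).
        """
        times = {}
        k, t = n, 0
        while k > 1 and t < max_gen:
            t += 1
            p = min(1.0, k * (k - 1) / (2.0 * N_of_t(t)))
            if rng.random() < p:
                k -= 1
                times[k] = t       # T_k: time at which k+1 lineages become k
        return times

    # Population growing forward in time, i.e. shrinking backward in time
    N0, r = 20_000, 1e-4
    T = coalescence_times(n=20, N_of_t=lambda t: N0 * np.exp(-r * t))
    print(T[1])  # time to the most recent common ancestor (TMRCA)
    ```

    Repeating the simulation many times gives empirical distributions of T_m and A_n(t) that can be compared with the normal and gamma approximations described in the abstract.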

  17. Direct detection of rpoB and katG gene mutations of Mycobacterium tuberculosis in clinical samples

    Directory of Open Access Journals (Sweden)

    Sunil Pandey

    2017-08-01

    Conclusions: We can conclude that genetic mutations in Mycobacterium tuberculosis can be identified directly from clinical samples. However, this study was carried out with a small sample size, and validation on a larger number of samples is recommended.

  18. Successful application of FTA Classic Card technology and use of bacteriophage phi29 DNA polymerase for large-scale field sampling and cloning of complete maize streak virus genomes.

    Science.gov (United States)

    Owor, Betty E; Shepherd, Dionne N; Taylor, Nigel J; Edema, Richard; Monjane, Adérito L; Thomson, Jennifer A; Martin, Darren P; Varsani, Arvind

    2007-03-01

    Leaf samples from 155 maize streak virus (MSV)-infected maize plants were collected from 155 farmers' fields in 23 districts in Uganda in May/June 2005 by leaf-pressing infected samples onto FTA Classic Cards. Viral DNA was successfully extracted from cards stored at room temperature for 9 months. The diversity of 127 MSV isolates was analysed by PCR-generated RFLPs. Six representative isolates having different RFLP patterns and causing either severe, moderate or mild disease symptoms, were chosen for amplification from FTA cards by bacteriophage phi29 DNA polymerase using the TempliPhi system. Full-length genomes were inserted into a cloning vector using a unique restriction enzyme site, and sequenced. The 1.3-kb PCR product amplified directly from FTA-eluted DNA and used for RFLP analysis was also cloned and sequenced. Comparison of cloned whole genome sequences with those of the original PCR products indicated that the correct virus genome had been cloned and that no errors were introduced by the phi29 polymerase. This is the first successful large-scale application of FTA card technology to the field, and illustrates the ease with which large numbers of infected samples can be collected and stored for downstream molecular applications such as diversity analysis and cloning of potentially new virus genomes.

  19. Set size and culture influence children's attention to number.

    Science.gov (United States)

    Cantrell, Lisa; Kuwabara, Megumi; Smith, Linda B

    2015-03-01

    Much research evidences a system in adults and young children for approximately representing quantity. Here we provide evidence that the bias to attend to discrete quantity versus other dimensions may be mediated by set size and culture. Preschool-age English-speaking children in the United States and Japanese-speaking children in Japan were tested in a match-to-sample task where number was pitted against cumulative surface area in both large and small numerical set comparisons. Results showed that children from both cultures were biased to attend to the number of items for small sets. Large set responses also showed a general attention to number when ratio difficulty was easy. However, relative to the responses for small sets, attention to number decreased for both groups; moreover, both U.S. and Japanese children showed a significant bias to attend to total amount for difficult numerical ratio distances, although Japanese children shifted attention to total area at relatively smaller set sizes than U.S. children. These results add to our growing understanding of how quantity is represented and how such representation is influenced by context--both cultural and perceptual. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Factors associated with self-reported number of teeth in a large national cohort of Thai adults

    Directory of Open Access Journals (Sweden)

    Yiengprugsawan Vasoontara

    2011-11-01

    Full Text Available Abstract Background Oral health in later life results from an individual's lifelong accumulation of experiences at the personal, community and societal levels. There is little information relating the oral health outcomes to risk factors in Asian middle-income settings such as Thailand today. Methods Data derived from a cohort of 87,134 adults enrolled in Sukhothai Thammathirat Open University who completed self-administered questionnaires in 2005. Cohort members were aged between 15 and 87 years and resided throughout Thailand. This is a large study of self-reported number of teeth among Thai adults. Bivariate and multivariate logistic regressions were used to analyse factors associated with self-reported number of teeth. Results After adjusting for covariates, being female (OR = 1.28), older age (OR = 10.6), having low income (OR = 1.45), having lower education (OR = 1.33), and being a lifetime urban resident (OR = 1.37) were statistically significantly associated with self-reported number of teeth. Conclusions This study addresses the gap in knowledge on factors associated with self-reported number of teeth. The promotion of healthy childhoods and adult lifestyles is an important public health intervention to increase tooth retention in middle and older age.

  1. A NICE approach to managing large numbers of desktop PC's

    International Nuclear Information System (INIS)

    Foster, David

    1996-01-01

    The problems of managing desktop systems are far from resolved as we deploy increasing numbers of systems: PCs, Macintoshes and UN*X workstations. This paper will concentrate on the solution adopted at CERN for the management of the rapidly increasing numbers of desktop PCs in use in all parts of the laboratory. (author)

  2. A Multilayer Secure Biomedical Data Management System for Remotely Managing a Very Large Number of Diverse Personal Healthcare Devices

    Directory of Open Access Journals (Sweden)

    KeeHyun Park

    2015-01-01

    Full Text Available In this paper, a multilayer secure biomedical data management system for managing a very large number of diverse personal health devices is proposed. The system has the following characteristics: the system supports international standard communication protocols to achieve interoperability. The system is integrated in the sense that both a PHD communication system and a remote PHD management system work together as a single system. Finally, the system proposed in this paper provides user/message authentication processes to securely transmit biomedical data measured by PHDs based on the concept of a biomedical signature. Some experiments, including the stress test, have been conducted to show that the system proposed/constructed in this study performs very well even when a very large number of PHDs are used. For a stress test, up to 1,200 threads are made to represent the same number of PHD agents. The loss ratio of the ISO/IEEE 11073 messages in the normal system is as high as 14% when 1,200 PHD agents are connected. On the other hand, no message loss occurs in the multilayered system proposed in this study, which demonstrates the superiority of the multilayered system to the normal system with regard to heavy traffic.

  3. Chaotic advection at large Péclet number: Electromagnetically driven experiments, numerical simulations, and theoretical predictions

    International Nuclear Information System (INIS)

    Figueroa, Aldo; Meunier, Patrice; Villermaux, Emmanuel; Cuevas, Sergio; Ramos, Eduardo

    2014-01-01

    We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM) which was recently introduced (P. Meunier and E. Villermaux, “The diffusive strip method for scalar mixing in two-dimensions,” J. Fluid Mech. 662, 134–172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows the PDFs of scalar concentration to be predicted in agreement with numerical and experimental results. This model also indicates that the PDFs of scalar concentration are asymptotically close to log-normal at late stages, except for the large concentration levels which correspond to low stretching factors

  4. Multiple-relaxation-time lattice Boltzmann model for incompressible miscible flow with large viscosity ratio and high Péclet number

    Science.gov (United States)

    Meng, Xuhui; Guo, Zhaoli

    2015-10-01

    A lattice Boltzmann model with a multiple-relaxation-time (MRT) collision operator is proposed for incompressible miscible flow with a large viscosity ratio as well as a high Péclet number in this paper. The equilibria in the present model are motivated by the lattice kinetic scheme previously developed by Inamuro et al. [Philos. Trans. R. Soc. London, Ser. A 360, 477 (2002), 10.1098/rsta.2001.0942]. The fluid viscosity and diffusion coefficient depend on both the corresponding relaxation times and additional adjustable parameters in this model. As a result, the corresponding relaxation times can be adjusted in proper ranges to enhance the performance of the model. Numerical validations of the Poiseuille flow and a diffusion-reaction problem demonstrate that the proposed model has second-order accuracy in space. Thereafter, the model is used to simulate flow through a porous medium, and the results show that the proposed model has the advantage to obtain a viscosity-independent permeability, which makes it a robust method for simulating flow in porous media. Finally, a set of simulations are conducted on the viscous miscible displacement between two parallel plates. The results reveal that the present model can be used to simulate, to a high level of accuracy, flows with large viscosity ratios and/or high Péclet numbers. Moreover, the present model is shown to provide superior stability in the limit of high kinematic viscosity. In summary, the numerical results indicate that the present lattice Boltzmann model is an ideal numerical tool for simulating flow with a large viscosity ratio and/or a high Péclet number.

  5. Introduction to the spectral distribution method. Application example to the subspaces with a large number of quasi particles

    International Nuclear Information System (INIS)

    Arvieu, R.

    The assumptions and principles of the spectral distribution method are reviewed. The object of the method is to deduce information on nuclear spectra by constructing a frequency function which has the same first few moments as the exact frequency function, these moments being exactly calculated. The method is applied to subspaces containing a large number of quasi-particles [fr]

  6. Ultrasensitive multiplex optical quantification of bacteria in large samples of biofluids

    Science.gov (United States)

    Pazos-Perez, Nicolas; Pazos, Elena; Catala, Carme; Mir-Simon, Bernat; Gómez-de Pedro, Sara; Sagales, Juan; Villanueva, Carlos; Vila, Jordi; Soriano, Alex; García de Abajo, F. Javier; Alvarez-Puebla, Ramon A.

    2016-01-01

    Efficient treatments in bacterial infections require the fast and accurate recognition of pathogens, with concentrations as low as one per milliliter in the case of septicemia. Detecting and quantifying bacteria in such low concentrations is challenging and typically demands cultures of large samples of blood (~1 milliliter) extending over 24–72 hours. This delay seriously compromises the health of patients. Here we demonstrate a fast microorganism optical detection system for the exhaustive identification and quantification of pathogens in volumes of biofluids with clinical relevance (~1 milliliter) in minutes. We drive each type of bacteria to accumulate antibody functionalized SERS-labelled silver nanoparticles. Particle aggregation on the bacteria membranes renders dense arrays of inter-particle gaps in which the Raman signal is exponentially amplified by several orders of magnitude relative to the dispersed particles. This enables a multiplex identification of the microorganisms through the molecule-specific spectral fingerprints. PMID:27364357

  7. Similar brain activation during false belief tasks in a large sample of adults with and without autism.

    Science.gov (United States)

    Dufour, Nicholas; Redcay, Elizabeth; Young, Liane; Mavros, Penelope L; Moran, Joseph M; Triantafyllou, Christina; Gabrieli, John D E; Saxe, Rebecca

    2013-01-01

    Reading about another person's beliefs engages 'Theory of Mind' processes and elicits highly reliable brain activation across individuals and experimental paradigms. Using functional magnetic resonance imaging, we examined activation during a story task designed to elicit Theory of Mind processing in a very large sample of neurotypical (N = 462) individuals, and a group of high-functioning individuals with autism spectrum disorders (N = 31), using both region-of-interest and whole-brain analyses. This large sample allowed us to investigate group differences in brain activation to Theory of Mind tasks with unusually high sensitivity. There were no differences between neurotypical participants and those diagnosed with autism spectrum disorder. These results imply that the social cognitive impairments typical of autism spectrum disorder can occur without measurable changes in the size, location or response magnitude of activity during explicit Theory of Mind tasks administered to adults.

  8. Similar brain activation during false belief tasks in a large sample of adults with and without autism.

    Directory of Open Access Journals (Sweden)

    Nicholas Dufour

    Full Text Available Reading about another person's beliefs engages 'Theory of Mind' processes and elicits highly reliable brain activation across individuals and experimental paradigms. Using functional magnetic resonance imaging, we examined activation during a story task designed to elicit Theory of Mind processing in a very large sample of neurotypical (N = 462) individuals, and a group of high-functioning individuals with autism spectrum disorders (N = 31), using both region-of-interest and whole-brain analyses. This large sample allowed us to investigate group differences in brain activation to Theory of Mind tasks with unusually high sensitivity. There were no differences between neurotypical participants and those diagnosed with autism spectrum disorder. These results imply that the social cognitive impairments typical of autism spectrum disorder can occur without measurable changes in the size, location or response magnitude of activity during explicit Theory of Mind tasks administered to adults.

  9. Presence and significant determinants of cognitive impairment in a large sample of patients with multiple sclerosis.

    Directory of Open Access Journals (Sweden)

    Martina Borghi

    Full Text Available OBJECTIVES: To investigate the presence and the nature of cognitive impairment in a large sample of patients with Multiple Sclerosis (MS), and to identify clinical and demographic determinants of cognitive impairment in MS. METHODS: 303 patients with MS and 279 healthy controls were administered the Brief Repeatable Battery of Neuropsychological tests (BRB-N); measures of pre-morbid verbal competence and neuropsychiatric measures were also administered. RESULTS: Patients and healthy controls were matched for age, gender, education and pre-morbid verbal Intelligence Quotient. Patients presenting with cognitive impairment were 108/303 (35.6%). In the overall group of participants, the significant predictors of the most sensitive BRB-N scores were: presence of MS, age, education, and Vocabulary. The significant predictors when considering MS patients only were: course of MS, age, education, vocabulary, and depression. Using logistic regression analyses, significant determinants of the presence of cognitive impairment in relapsing-remitting MS patients were: duration of illness (OR = 1.053, 95% CI = 1.010-1.097, p = 0.015), Expanded Disability Status Scale score (OR = 1.247, 95% CI = 1.024-1.517, p = 0.028), and vocabulary (OR = 0.960, 95% CI = 0.936-0.984, p = 0.001), while in the smaller group of progressive MS patients these predictors did not play a significant role in determining the cognitive outcome. CONCLUSIONS: Our results corroborate the evidence about the presence and the nature of cognitive impairment in a large sample of patients with MS. Furthermore, our findings identify significant clinical and demographic determinants of cognitive impairment in a large sample of MS patients for the first time. Implications for further research and clinical practice were discussed.

  10. Small genomes and large seeds: chromosome numbers, genome size and seed mass in diploid Aesculus species (Sapindaceae).

    Science.gov (United States)

    Krahulcová, Anna; Trávnícek, Pavel; Krahulec, František; Rejmánek, Marcel

    2017-04-01

    Aesculus L. (horse chestnut, buckeye) is a genus of 12-19 extant woody species native to the temperate Northern Hemisphere. This genus is known for unusually large seeds among angiosperms. While chromosome counts are available for many Aesculus species, only one has had its genome size measured. The aim of this study is to provide more genome size data and analyse the relationship between genome size and seed mass in this genus. Chromosome numbers in root tip cuttings were confirmed for four species and reported for the first time for three additional species. Flow cytometric measurements of 2C nuclear DNA values were conducted on eight species, and mean seed mass values were estimated for the same taxa. The same chromosome number, 2n = 40, was determined in all investigated taxa. Original measurements of 2C values for seven Aesculus species (eight taxa), added to just one reliable datum for A. hippocastanum, confirmed the notion that the genome size in this genus with relatively large seeds is surprisingly low, ranging from 0·955 pg 2C⁻¹ in A. parviflora to 1·275 pg 2C⁻¹ in A. glabra var. glabra. The chromosome number of 2n = 40 seems to be conclusively the universal 2n number for non-hybrid species in this genus. Aesculus genome sizes are relatively small, not only within its own family, Sapindaceae, but also within woody angiosperms. The genome sizes seem to be distinct and non-overlapping among the four major Aesculus clades. These results provide an extra support for the most recent reconstruction of Aesculus phylogeny. The correlation between the 2C values and seed masses in examined Aesculus species is slightly negative and not significant. However, when the four major clades are treated separately, there is consistent positive association between larger genome size and larger seed mass within individual lineages. © The Author 2017. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For

  11. The ESO Diffuse Interstellar Bands Large Exploration Survey (EDIBLES) . I. Project description, survey sample, and quality assessment

    Science.gov (United States)

    Cox, Nick L. J.; Cami, Jan; Farhang, Amin; Smoker, Jonathan; Monreal-Ibero, Ana; Lallement, Rosine; Sarre, Peter J.; Marshall, Charlotte C. M.; Smith, Keith T.; Evans, Christopher J.; Royer, Pierre; Linnartz, Harold; Cordiner, Martin A.; Joblin, Christine; van Loon, Jacco Th.; Foing, Bernard H.; Bhatt, Neil H.; Bron, Emeric; Elyajouri, Meriem; de Koter, Alex; Ehrenfreund, Pascale; Javadi, Atefeh; Kaper, Lex; Khosroshadi, Habib G.; Laverick, Mike; Le Petit, Franck; Mulas, Giacomo; Roueff, Evelyne; Salama, Farid; Spaans, Marco

    2017-10-01

    The carriers of the diffuse interstellar bands (DIBs) are largely unidentified molecules ubiquitously present in the interstellar medium (ISM). After decades of study, two strong and possibly three weak near-infrared DIBs have recently been attributed to the C60+ fullerene based on observational and laboratory measurements. There is great promise for the identification of the over 400 other known DIBs, as this result could provide chemical hints towards other possible carriers. In an effort to systematically study the properties of the DIB carriers, we have initiated a new large-scale observational survey: the ESO Diffuse Interstellar Bands Large Exploration Survey (EDIBLES). The main objective is to build on and extend existing DIB surveys to make a major step forward in characterising the physical and chemical conditions for a statistically significant sample of interstellar lines-of-sight, with the goal to reverse-engineer key molecular properties of the DIB carriers. EDIBLES is a filler Large Programme using the Ultraviolet and Visual Echelle Spectrograph at the Very Large Telescope at Paranal, Chile. It is designed to provide an observationally unbiased view of the presence and behaviour of the DIBs towards early-spectral-type stars whose lines-of-sight probe the diffuse-to-translucent ISM. Such a complete dataset will provide a deep census of the atomic and molecular content, physical conditions, chemical abundances and elemental depletion levels for each sightline. Achieving these goals requires a homogeneous set of high-quality data in terms of resolution (R 70 000-100 000), sensitivity (S/N up to 1000 per resolution element), and spectral coverage (305-1042 nm), as well as a large sample size (100+ sightlines). In this first paper the goals, objectives and methodology of the EDIBLES programme are described and an initial assessment of the data is provided.

  12. Those fascinating numbers

    CERN Document Server

    Koninck, Jean-Marie De

    2009-01-01

    Who would have thought that listing the positive integers along with their most remarkable properties could end up being such an engaging and stimulating adventure? The author uses this approach to explore elementary and advanced topics in classical number theory. A large variety of numbers are contemplated: Fermat numbers, Mersenne primes, powerful numbers, sublime numbers, Wieferich primes, insolite numbers, Sastry numbers, voracious numbers, to name only a few. The author also presents short proofs of miscellaneous results and constantly challenges the reader with a variety of old and new n

  13. Monte carlo sampling of fission multiplicity.

    Energy Technology Data Exchange (ETDEWEB)

    Hendricks, J. S. (John S.)

    2004-01-01

    Two new methods have been developed for fission multiplicity modeling in Monte Carlo calculations. The traditional method of sampling neutron multiplicity from fission is to sample the number of neutrons above or below the average. For example, if there are 2.7 neutrons per fission, three would be chosen 70% of the time and two would be chosen 30% of the time. For many applications, particularly 3He coincidence counting, a better estimate of the true number of neutrons per fission is required. Generally, this number is estimated by sampling a Gaussian distribution about the average. However, because the tail of the Gaussian distribution is negative and negative neutrons cannot be produced, a slight positive bias can be found in the average value. For criticality calculations, the result of rejecting the negative neutrons is an increase in k_eff of 0.1% in some cases. For spontaneous fission, where the average number of neutrons emitted from fission is low, the error also can be unacceptably large. If the Gaussian width approaches the average number of fissions, 10% too many fission neutrons are produced by not treating the negative Gaussian tail adequately. The first method to treat the Gaussian tail is to determine a correction offset, which then is subtracted from all sampled values of the number of neutrons produced. This offset depends on the average value for any given fission at any energy and must be computed efficiently at each fission from the non-integrable error function. The second method is to determine a corrected zero point so that all neutrons sampled between zero and the corrected zero point are killed to compensate for the negative Gaussian tail bias. Again, the zero point must be computed efficiently at each fission. Both methods give excellent results with a negligible computing time penalty. It is now possible to include the full effects of fission multiplicity without the negative Gaussian tail bias.
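
    The two corrections can be illustrated with a short numerical sketch. This is a generic illustration of the ideas described above, not the production implementation: the mean multiplicity and Gaussian width are hypothetical, and a root-finder stands in for the efficient error-function evaluation the abstract mentions.

    ```python
    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import norm

    nu_bar, sigma = 2.7, 1.1   # hypothetical mean multiplicity and Gaussian width
    rng = np.random.default_rng(2)

    def mean_with_offset(c, nu=nu_bar, s=sigma):
        """Analytic mean of max(round(X), 0) with X ~ Normal(nu - c, s),
        via E[N] = sum_{k>=1} P(N >= k) = sum_{k>=1} P(X > k - 0.5)."""
        k = np.arange(1, 20)
        return np.sum(norm.sf(k - 0.5, loc=nu - c, scale=s))

    # Method (a): offset subtracted from the Gaussian mean so the clipped,
    # rounded sample reproduces nu_bar on average.
    offset = brentq(lambda c: mean_with_offset(c) - nu_bar, -2.0, 2.0)

    def sample_multiplicity_offset():
        return max(int(round(rng.normal(nu_bar - offset, sigma))), 0)

    # Method (b): corrected zero point z0 -- samples below z0 yield zero
    # neutrons, compensating for the bias from the clipped negative tail.
    def mean_with_zero_point(z0, nu=nu_bar, s=sigma):
        k = np.arange(1, 20)
        return np.sum(norm.sf(np.maximum(k - 0.5, z0), loc=nu, scale=s))

    z0 = brentq(lambda z: mean_with_zero_point(z) - nu_bar, 0.0, 2.0)

    def sample_multiplicity_zero_point():
        x = rng.normal(nu_bar, sigma)
        return 0 if x < z0 else int(round(x))

    draws = [sample_multiplicity_offset() for _ in range(200_000)]
    print(np.mean(draws))   # close to 2.7, without the positive clipping bias
    ```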

  14. A large-scale cryoelectronic system for biological sample banking

    Science.gov (United States)

    Shirley, Stephen G.; Durst, Christopher H. P.; Fuchs, Christian C.; Zimmermann, Heiko; Ihmig, Frank R.

    2009-11-01

    We describe a polymorphic electronic infrastructure for managing biological samples stored over liquid nitrogen. As part of this system we have developed new cryocontainers and carrier plates attached to Flash memory chips to have a redundant and portable set of data at each sample. Our experimental investigations show that basic Flash operation and endurance is adequate for the application down to liquid nitrogen temperatures. This identification technology can provide the best sample identification, documentation and tracking that brings added value to each sample. The first application of the system is in a worldwide collaborative research towards the production of an AIDS vaccine. The functionality and versatility of the system can lead to an essential optimization of sample and data exchange for global clinical studies.

  15. Methodological Considerations in Estimation of Phenotype Heritability Using Genome-Wide SNP Data, Illustrated by an Analysis of the Heritability of Height in a Large Sample of African Ancestry Adults.

    Directory of Open Access Journals (Sweden)

    Fang Chen

    Full Text Available Height has an extremely polygenic pattern of inheritance. Genome-wide association studies (GWAS) have revealed hundreds of common variants that are associated with human height at genome-wide levels of significance. However, only a small fraction of phenotypic variation can be explained by the aggregate of these common variants. In a large study of African-American men and women (n = 14,419), we genotyped and analyzed 966,578 autosomal SNPs across the entire genome using a linear mixed model variance components approach implemented in the program GCTA (Yang et al., Nat Genet 2010), and estimated an additive heritability of 44.7% (se: 3.7%) for this phenotype in a sample of evidently unrelated individuals. While this estimated value is similar to that given by Yang et al in their analyses, we remain concerned about two related issues: (1) whether in the complete absence of hidden relatedness, variance components methods have adequate power to estimate heritability when a very large number of SNPs are used in the analysis; and (2) whether estimation of heritability may be biased, in real studies, by low levels of residual hidden relatedness. We addressed the first question in a semi-analytic fashion by directly simulating the distribution of the score statistic for a test of zero heritability with and without low levels of relatedness. The second question was addressed by a very careful comparison of the behavior of estimated heritability for both observed (self-reported) height and simulated phenotypes compared to imputation R2 as a function of the number of SNPs used in the analysis. These simulations help to address the important question about whether today's GWAS SNPs will remain useful for imputing causal variants that are discovered using very large sample sizes in future studies of height, or whether the causal variants themselves will need to be genotyped de novo in order to build a prediction model that ultimately captures a large fraction of the

  16. Variational Approach to Enhanced Sampling and Free Energy Calculations

    Science.gov (United States)

    Valsson, Omar; Parrinello, Michele

    2014-08-01

    The ability of widely used sampling methods, such as molecular dynamics or Monte Carlo simulations, to explore complex free energy landscapes is severely hampered by the presence of kinetic bottlenecks. A large number of solutions have been proposed to alleviate this problem. Many are based on the introduction of a bias potential which is a function of a small number of collective variables. However, constructing such a bias is not simple. Here we introduce a functional of the bias potential and an associated variational principle. The bias that minimizes the functional relates in a simple way to the free energy surface. This variational principle can be turned into a practical, efficient, and flexible sampling method. A number of numerical examples are presented which include the determination of a three-dimensional free energy surface. We argue that, besides being numerically advantageous, our variational approach provides a convenient and novel standpoint for looking at the sampling problem.
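
    For reference, the variational functional at the heart of this approach can be written, up to sign and normalisation conventions, in the following form; this is a sketch of the published expression, with s the collective variables, F(s) the free energy surface, β the inverse temperature and p(s) a chosen target distribution:

    \[
    \Omega[V] \;=\; \frac{1}{\beta}\,
    \ln\frac{\int \mathrm{d}\mathbf{s}\; e^{-\beta\left[F(\mathbf{s})+V(\mathbf{s})\right]}}
            {\int \mathrm{d}\mathbf{s}\; e^{-\beta F(\mathbf{s})}}
    \;+\; \int \mathrm{d}\mathbf{s}\; p(\mathbf{s})\, V(\mathbf{s}).
    \]

    The functional is convex, and at its minimum the bias satisfies V(s) = -F(s) - (1/β) ln p(s) up to a constant, so minimising Ω[V] yields the free energy surface while driving the biased simulation to sample the chosen target distribution p(s).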

  17. Detecting superior face recognition skills in a large sample of young British adults

    Directory of Open Access Journals (Sweden)

    Anna Katarzyna Bobak

    2016-09-01

    Full Text Available The Cambridge Face Memory Test Long Form (CFMT+ and Cambridge Face Perception Test (CFPT are typically used to assess the face processing ability of individuals who believe they have superior face recognition skills. Previous large-scale studies have presented norms for the CFPT but not the CFMT+. However, previous research has also highlighted the necessity for establishing country-specific norms for these tests, indicating that norming data is required for both tests using young British adults. The current study addressed this issue in 254 British participants. In addition to providing the first norm for performance on the CFMT+ in any large sample, we also report the first UK specific cut-off for superior face recognition on the CFPT. Further analyses identified a small advantage for females on both tests, and only small associations between objective face recognition skills and self-report measures. A secondary aim of the study was to examine the relationship between trait or social anxiety and face processing ability, and no associations were noted. The implications of these findings for the classification of super-recognisers are discussed.

  18. Sampling and analysis plan for the consolidated sludge samples from the canisters and floor of the 105-K East basin

    International Nuclear Information System (INIS)

    BAKER, R.B.

    1999-01-01

    This Sampling and Analysis Plan (SAP) provides direction for sampling of fuel canister and floor sludge from the K East Basin to complete the inventory of samples needed for sludge treatment process testing. Sample volumes and sources consider recent reviews made by the sludge treatment subproject. The representative samples will be characterized to the extent needed for the material to be used effectively for testing. The sampling equipment used allows drawing of large-volume sludge samples and consolidation of sample material from a number of basin locations into one container. Once filled, the containers will be placed in a cask and transported to Hanford laboratories for recovery and evaluation. Included in the present SAP are the logic for sample location selection, the laboratory analysis procedures required, and the reporting needed to meet the Data Quality Objectives (DQOs) for this initiative.

  19. Detecting differential protein expression in large-scale population proteomics

    Energy Technology Data Exchange (ETDEWEB)

    Ryu, Soyoung; Qian, Weijun; Camp, David G.; Smith, Richard D.; Tompkins, Ronald G.; Davis, Ronald W.; Xiao, Wenzhong

    2014-06-17

    Mass spectrometry-based high-throughput quantitative proteomics shows great potential in clinical biomarker studies, identifying and quantifying thousands of proteins in biological samples. However, methods are needed to appropriately handle issues/challenges unique to mass spectrometry data in order to detect as many biomarker proteins as possible. One issue is that different mass spectrometry experiments generate quite different total numbers of quantified peptides, which can result in more missing peptide abundances in an experiment with a smaller total number of quantified peptides. Another issue is that the quantification of peptides is sometimes absent, especially for less abundant peptides, and such missing values carry information about the peptide abundance. Here, we propose a Significance Analysis for Large-scale Proteomics Studies (SALPS) that handles missing peptide intensity values caused by the two mechanisms mentioned above. Our model has a robust performance in both simulated data and proteomics data from a large clinical study. Because variations in patient sample quality and instrument performance are unavoidable in clinical studies performed over the course of several years, we believe that our approach will be useful for analyzing large-scale clinical proteomics data.

  20. Modification of the large-scale features of high Reynolds number wall turbulence by passive surface obtrusions

    Energy Technology Data Exchange (ETDEWEB)

    Monty, J.P.; Lien, K.; Chong, M.S. [University of Melbourne, Department of Mechanical Engineering, Parkville, VIC (Australia); Allen, J.J. [New Mexico State University, Department of Mechanical Engineering, Las Cruces, NM (United States)

    2011-12-15

    A high Reynolds number boundary-layer wind-tunnel facility at New Mexico State University was fitted with a regularly distributed braille surface. The surface was such that braille dots were closely packed in the streamwise direction and sparsely spaced in the spanwise direction. This novel surface had an unexpected influence on the flow: the energy of the very large-scale features of wall turbulence (approximately six times the boundary-layer thickness in length) became significantly attenuated, even into the logarithmic region. To the authors' knowledge, this is the first experimental study to report a modification of 'superstructures' in a rough-wall turbulent boundary layer. The result gives rise to the possibility that flow control through very small, passive surface roughness may be possible at high Reynolds numbers, without the prohibitive drag penalty anticipated heretofore. Evidence was also found for the uninhibited existence of the near-wall cycle, well known to smooth-wall-turbulence researchers, in the spanwise space between roughness elements. (orig.)

  1. Prevalence of learned grapheme-color pairings in a large online sample of synesthetes.

    Directory of Open Access Journals (Sweden)

    Nathan Witthoft

    Full Text Available In this paper we estimate the minimum prevalence of grapheme-color synesthetes with letter-color matches learned from an external stimulus, by analyzing a large sample of English-speaking grapheme-color synesthetes. We find that at least 6% (400/6588 participants) of the total sample learned many of their matches from a widely available colored letter toy. Among those born in the decade after the toy began to be manufactured, the proportion of synesthetes with learned letter-color pairings approaches 15% for some 5-year periods. Among those born 5 years or more before it was manufactured, none have colors learned from the toy. Analysis of the letter-color matching data suggests the only difference between synesthetes with matches to the toy and those without is exposure to the stimulus. These data indicate learning of letter-color pairings from external contingencies can occur in a substantial fraction of synesthetes, and are consistent with the hypothesis that grapheme-color synesthesia is a kind of conditioned mental imagery.

  2. Large magnitude gridded ionization chamber for impurity identification in alpha emitting radioactive samples

    International Nuclear Information System (INIS)

    Santos, R.N. dos.

    1992-01-01

    This paper describes a large, high-resolution gridded ionization chamber used in the identification of α-emitting radioactive samples. The chamber and the electrodes are described in terms of their geometry and dimensions, and the best results obtained are listed. Several α-emitting radioactive samples were measured with a gas mixture of 90% argon plus 10% methane. The α energy spectra had a resolution of about 22.14 keV, in agreement with the best results available in the literature. The α energy spectrum of ²³³U was obtained with this ionization chamber, and the values found agreed well with the calibration curve of the chamber. Many additional measurements using different kinds of detectors were carried out to confirm the experimental results, leading to the identification of several members of the ²³³U radioactive series. These results show that the chamber can be used for measurements of low-activity α contamination. (author)

  3. Strong Law of Large Numbers for Countable Markov Chains Indexed by an Infinite Tree with Uniformly Bounded Degree

    Directory of Open Access Journals (Sweden)

    Bao Wang

    2014-01-01

    Full Text Available We study the strong law of large numbers for the frequencies of occurrence of states and ordered couples of states for countable Markov chains indexed by an infinite tree with uniformly bounded degree, which extends the corresponding results of countable Markov chains indexed by a Cayley tree and generalizes the relative results of finite Markov chains indexed by a uniformly bounded tree.

  4. Analysis of plant hormones by microemulsion electrokinetic capillary chromatography coupled with on-line large volume sample stacking.

    Science.gov (United States)

    Chen, Zongbao; Lin, Zian; Zhang, Lin; Cai, Yan; Zhang, Lan

    2012-04-07

    A novel method of microemulsion electrokinetic capillary chromatography (MEEKC) coupled with on-line large volume sample stacking was developed for the analysis of six plant hormones including indole-3-acetic acid, indole-3-butyric acid, indole-3-propionic acid, 1-naphthaleneacetic acid, abscisic acid and salicylic acid. Baseline separation of six plant hormones was achieved within 10 min by using the microemulsion background electrolyte containing a 97.2% (w/w) 10 mM borate buffer at pH 9.2, 1.0% (w/w) ethyl acetate as oil droplets, 0.6% (w/w) sodium dodecyl sulphate as surfactant and 1.2% (w/w) 1-butanol as cosurfactant. In addition, an on-line concentration method based on a large volume sample stacking technique and multiple wavelength detection was adopted for improving the detection sensitivity in order to determine trace level hormones in a real sample. The optimal method provided about a 50-100 fold increase in detection sensitivity compared with a single MEEKC method, and the detection limits (S/N = 3) were between 0.005 and 0.02 μg mL⁻¹. The proposed method was simple, rapid and sensitive and could be applied to the determination of six plant hormones in spiked water samples, tobacco leaves and 1-naphthylacetic acid in leaf fertilizer. The recoveries ranged from 76.0% to 119.1%, and good reproducibilities were obtained with relative standard deviations (RSDs) less than 6.6%.

  5. $b$-Tagging and Large Radius Jet Modelling in a $g\\rightarrow b\\bar{b}$ rich sample at ATLAS

    CERN Document Server

    Jiang, Zihao; The ATLAS collaboration

    2016-01-01

    Studies of b-tagging performance and jet properties in double b-tagged, large radius jets from sqrt(s)=8 TeV pp collisions recorded by the ATLAS detector at the LHC are presented. The double b-tag requirement yields a sample rich in high pT jets originating from the g->bb process. Using this sample, the performance of b-tagging and modelling of jet substructure variables at small b-quark angular separation is probed.

  6. A large-capacity sample-changer for automated gamma-ray spectroscopy

    International Nuclear Information System (INIS)

    Andeweg, A.H.

    1980-01-01

    An automatic sample-changer has been developed at the National Institute for Metallurgy for use in gamma-ray spectroscopy with a lithium-drifted germanium detector. The sample-changer features remote storage, which prevents cross-talk and reduces background. It has a capacity for 200 samples and a sample container that takes liquid or solid samples. The rotation and vibration of samples during counting ensure that powdered samples are compacted, and improve the precision and reproducibility of the counting geometry [af

  7. Psychological Predictors of Seeking Help from Mental Health Practitioners among a Large Sample of Polish Young Adults

    Directory of Open Access Journals (Sweden)

    Lidia Perenc

    2016-10-01

    Full Text Available Although the corresponding literature contains a substantial number of studies on the relationship between psychological factors and attitudes towards seeking professional psychological help, the role of some determinants remains unexplored, especially among Polish young adults. The present study investigated diversity among a large cohort of Polish university students related to attitudes towards help-seeking and the regulative roles of gender, level of university education, health locus of control and sense of coherence. The total sample comprised 1706 participants who completed the following measures: Attitude Toward Seeking Professional Psychological Help Scale-SF, Multidimensional Health Locus of Control Scale, and Orientation to Life Questionnaire (SOC-29). They were recruited from various university faculties and courses by means of random selection. The findings revealed that, among socio-demographic variables, female gender is a moderate predictor, and graduate-level university study a strong predictor, of attitudes towards seeking help. Internal locus of control and all domains of sense of coherence are significantly correlated with the scores related to the help-seeking attitude. Attitudes toward psychological help-seeking are significantly related to female gender, graduate university education, internal health locus of control and sense of coherence. Further research must be performed in Poland in order to validate these results in different age and social groups.

  8. On the Use of Educational Numbers: Comparative Constructions of Hierarchies by Means of Large-Scale Assessments

    Directory of Open Access Journals (Sweden)

    Daniel Pettersson

    2016-01-01

    later the growing importance of transnational agencies and international, regional and national assessments. How to reference this article: Pettersson, D., Popkewitz, T. S., & Lindblad, S. (2016). On the Use of Educational Numbers: Comparative Constructions of Hierarchies by Means of Large-Scale Assessments. Espacio, Tiempo y Educación, 3(1), 177-202. doi: http://dx.doi.org/10.14516/ete.2016.003.001.10

  9. Properties of sound attenuation around a two-dimensional underwater vehicle with a large cavitation number

    International Nuclear Information System (INIS)

    Ye Peng-Cheng; Pan Guang

    2015-01-01

    Due to the high speed of underwater vehicles, cavitation is inevitably generated, and the sound signal is attenuated when it traverses the cavity region around the underwater vehicle. Linear wave propagation is studied to obtain the influence of the bubbly liquid on acoustic wave propagation in the cavity region. The sound attenuation coefficient and the sound speed formula of the bubbly liquid are presented. Based on the sound attenuation coefficients for various vapor volume fractions, the attenuation of sound intensity is calculated under large cavitation number conditions. The result shows that the sound intensity attenuation is fairly small under certain conditions. Consequently, the intensity attenuation can be neglected in engineering. (paper)
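
    For orientation, the sound speed entering such a calculation is often estimated from Wood's relation for a bubbly mixture; the following is the standard textbook form, not necessarily the exact formula derived in the paper (symbols are assumptions):

        \[
        \frac{1}{\rho_m c_m^{2}} \;=\; \frac{\alpha}{\rho_g c_g^{2}} + \frac{1-\alpha}{\rho_l c_l^{2}},
        \qquad \rho_m = \alpha\rho_g + (1-\alpha)\rho_l,
        \]

    where α is the vapor volume fraction and the subscripts m, g and l denote the mixture, gas and liquid phases; even a small vapor fraction lowers c_m sharply, which is why attenuation in the cavity region has to be assessed explicitly.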

  10. Economic and Humanistic Burden of Osteoarthritis: A Systematic Review of Large Sample Studies.

    Science.gov (United States)

    Xie, Feng; Kovic, Bruno; Jin, Xuejing; He, Xiaoning; Wang, Mengxiao; Silvestre, Camila

    2016-11-01

    Osteoarthritis (OA) consumes a significant amount of healthcare resources, and impairs the health-related quality of life (HRQoL) of patients. Previous reviews have consistently found substantial variations in the costs of OA across studies and countries. The comparability between studies was poor and limited the detection of the true differences between these studies. To review large sample studies on measuring the economic and/or humanistic burden of OA published since May 2006. We searched MEDLINE and EMBASE databases using comprehensive search strategies to identify studies reporting economic burden and HRQoL of OA. We included large sample studies if they had a sample size ≥1000 and measured the cost and/or HRQoL of OA. Reviewers worked independently and in duplicate, performing a cross-check between groups to verify agreement. Within- and between-group consolidation was performed to resolve discrepancies, with outstanding discrepancies being resolved by an arbitrator. The Kappa statistic was reported to assess the agreement between the reviewers. All costs were adjusted in their original currency to year 2015 using published inflation rates for the country where the study was conducted, and then converted to 2015 US dollars. A total of 651 articles were screened by title and abstract, 94 were reviewed in full text, and 28 were included in the final review. The Kappa value was 0.794. Twenty studies reported direct costs and nine reported indirect costs. The total annual average direct costs varied from US$1442 to US$21,335, both in USA. The annual average indirect costs ranged from US$238 to US$29,935. Twelve studies measured HRQoL using various instruments. The Short Form 12 version 2 scores ranged from 35.0 to 51.3 for the physical component, and from 43.5 to 55.0 for the mental component. Health utilities varied from 0.30 for severe OA to 0.77 for mild OA. Per-patient OA costs are considerable and a patient's quality of life remains poor. Variations in

  11. The numbers game in wildlife conservation: changeability and framing of large mammal numbers in Zimbabwe

    NARCIS (Netherlands)

    Gandiwa, E.

    2013-01-01

    Wildlife conservation in terrestrial ecosystems requires an understanding of processes influencing population sizes. Top-down and bottom-up processes are important in large herbivore population dynamics, with strength of these processes varying spatially and temporally. However, up until

  12. LOGISTICS OF ECOLOGICAL SAMPLING ON LARGE RIVERS

    Science.gov (United States)

    The objectives of this document are to provide an overview of the logistical problems associated with the ecological sampling of boatable rivers and to suggest solutions to those problems. It is intended to be used as a resource for individuals preparing to collect biological dat...

  13. Process variations in surface nano geometries manufacture on large area substrates

    DEFF Research Database (Denmark)

    Calaon, Matteo; Hansen, Hans Nørgaard; Tosello, Guido

    2014-01-01

    The need of transporting, treating and measuring increasingly smaller biomedical samples has pushed the integration of a far reaching number of nanofeatures over large substrates size in respect to the conventional processes working area windows. Dimensional stability of nano fabrication processe...

  14. On the chromatic number of triangle-free graphs of large minimum degree

    DEFF Research Database (Denmark)

    Thomassen, Carsten

    2002-01-01

    We prove that, for each fixed real number c > 1/3, the triangle-free graphs of minimum degree at least cn (where n is the number of vertices) have bounded chromatic number. This problem was raised by Erdős and Simonovits in 1973, who pointed out that there is no such result for c < 1/3.

  15. Quadruplex MAPH: improvement of throughput in high-resolution copy number screening.

    Science.gov (United States)

    Tyson, Jess; Majerus, Tamsin Mo; Walker, Susan; Armour, John Al

    2009-09-28

    Copy number variation (CNV) in the human genome is recognised as a widespread and important source of human genetic variation. Now the challenge is to screen for these CNVs at high resolution in a reliable, accurate and cost-effective way. Multiplex Amplifiable Probe Hybridisation (MAPH) is a sensitive, high-resolution technology appropriate for screening for CNVs in a defined region, for a targeted population. We have developed MAPH to a highly multiplexed format ("QuadMAPH") that allows the user a four-fold increase in the number of loci tested simultaneously. We have used this method to analyse a genomic region of 210 kb, including the MSH2 gene and 120 kb of flanking DNA. We show that the QuadMAPH probes report copy number with equivalent accuracy to simplex MAPH, reliably demonstrating diploid copy number in control samples and accurately detecting deletions in Hereditary Non-Polyposis Colorectal Cancer (HNPCC) samples. QuadMAPH is an accurate, high-resolution method that allows targeted screening of large numbers of subjects without the expense of genome-wide approaches. Whilst we have applied this technique to a region of the human genome, it is equally applicable to the genomes of other organisms.

  16. Remote sensing data with the conditional latin hypercube sampling and geostatistical approach to delineate landscape changes induced by large chronological physical disturbances.

    Science.gov (United States)

    Lin, Yu-Pin; Chu, Hone-Jay; Wang, Cheng-Long; Yu, Hsiao-Hsuan; Wang, Yung-Chieh

    2009-01-01

    This study applies variogram analyses of normalized difference vegetation index (NDVI) images derived from SPOT HRV images obtained before and after the Chi-Chi earthquake in the Chenyulan watershed, Taiwan, as well as images after four large typhoons, to delineate the spatial patterns, spatial structures and spatial variability of landscapes caused by these large disturbances. The conditional Latin hypercube sampling approach was applied to select samples from multiple NDVI images. Kriging and sequential Gaussian simulation with sufficient samples were then used to generate maps of NDVI images. The variography of the NDVI image results demonstrates that spatial patterns of disturbed landscapes were successfully delineated by variogram analysis in the study areas. The high-magnitude Chi-Chi earthquake created spatial landscape variations in the study area. After the earthquake, the cumulative impacts of typhoons on landscape patterns depended on the magnitudes and paths of typhoons, but were not always evident in the spatiotemporal variability of landscapes in the study area. The statistics and spatial structures of multiple NDVI images were captured by 3,000 samples from 62,500 grids in the NDVI images. Kriging and sequential Gaussian simulation with the 3,000 samples effectively reproduced spatial patterns of NDVI images. Overall, the proposed approach, which integrates the conditional Latin hypercube sampling approach, variogram, kriging and sequential Gaussian simulation in remotely sensed images, efficiently monitors, samples and maps the effects of large chronological disturbances on spatial characteristics of landscape changes including spatial variability and heterogeneity.

  17. Stochastic coupled cluster theory: Efficient sampling of the coupled cluster expansion

    Science.gov (United States)

    Scott, Charles J. C.; Thom, Alex J. W.

    2017-09-01

    We consider the sampling of the coupled cluster expansion within stochastic coupled cluster theory. Observing the limitations of previous approaches due to the inherently non-linear behavior of a coupled cluster wavefunction representation, we propose new approaches based on an intuitive, well-defined condition for sampling weights and on sampling the expansion in cluster operators of different excitation levels. We term these modifications even and truncated selections, respectively. Utilising both approaches demonstrates dramatically improved calculation stability as well as reduced computational and memory costs. These modifications are particularly effective at higher truncation levels owing to the large number of terms within the cluster expansion that can be neglected, as demonstrated by the reduction of the number of terms to be sampled when truncating at triple excitations by 77% and hextuple excitations by 98%.

  18. Atomic Number Dependence of Hadron Production at Large Transverse Momentum in 300 GeV Proton--Nucleus Collisions

    Science.gov (United States)

    Cronin, J. W.; Frisch, H. J.; Shochet, M. J.; Boymond, J. P.; Mermod, R.; Piroue, P. A.; Sumner, R. L.

    1974-07-15

    In an experiment at the Fermi National Accelerator Laboratory we have compared the production of large transverse momentum hadrons from targets of W, Ti, and Be bombarded by 300 GeV protons. The hadron yields were measured at 90 degrees in the proton-nucleon c.m. system with a magnetic spectrometer equipped with 2 Cerenkov counters and a hadron calorimeter. The production cross-sections have a dependence on the atomic number A that grows with P⊥, eventually leveling off proportional to A^1.1.
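
    The A-dependence quoted above is conventionally expressed through a power-law fit; the parametrization below is the standard one for this type of measurement, shown here as a sketch rather than a transcription of the paper's equations:

        \[
        E\,\frac{\mathrm{d}^{3}\sigma}{\mathrm{d}p^{3}}\,(P_{\perp},A)
        \;=\; E\,\frac{\mathrm{d}^{3}\sigma}{\mathrm{d}p^{3}}\,(P_{\perp},1)\; A^{\alpha(P_{\perp})},
        \]

    so that the exponent α(P⊥) rising with transverse momentum and leveling off near 1.1 corresponds to the "proportional to A^1.1" behaviour described in the abstract.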

  19. CORRELATION ANALYSIS OF A LARGE SAMPLE OF NARROW-LINE SEYFERT 1 GALAXIES: LINKING CENTRAL ENGINE AND HOST PROPERTIES

    International Nuclear Information System (INIS)

    Xu Dawei; Komossa, S.; Wang Jing; Yuan Weimin; Zhou Hongyan; Lu Honglin; Li Cheng; Grupe, Dirk

    2012-01-01

    We present a statistical study of a large, homogeneously analyzed sample of narrow-line Seyfert 1 (NLS1) galaxies, accompanied by a comparison sample of broad-line Seyfert 1 (BLS1) galaxies. Optical emission-line and continuum properties are subjected to correlation analyses, in order to identify the main drivers of the correlation space of active galactic nuclei (AGNs), and of NLS1 galaxies in particular. For the first time, we have established the density of the narrow-line region as a key parameter in Eigenvector 1 space, as important as the Eddington ratio L/L_Edd. This is important because it links the properties of the central engine with the properties of the host galaxy, i.e., the interstellar medium (ISM). We also confirm previously found correlations involving the line width of Hβ and the strength of the Fe II and [O III] λ5007 emission lines, and we confirm the important role played by L/L_Edd in driving the properties of NLS1 galaxies. A spatial correlation analysis shows that large-scale environments of the BLS1 and NLS1 galaxies of our sample are similar. If mergers are rare in our sample, accretion-driven winds, on the one hand, or bar-driven inflows, on the other hand, may account for the strong dependence of Eigenvector 1 on ISM density.

  20. Gentile statistics with a large maximum occupation number

    International Nuclear Information System (INIS)

    Dai Wusheng; Xie Mi

    2004-01-01

    In Gentile statistics the maximum occupation number can take on unrestricted integers: 1 1 the Bose-Einstein case is not recovered from Gentile statistics as n goes to N. Attention is also concentrated on the contribution of the ground state which was ignored in related literature. The thermodynamic behavior of a ν-dimensional Gentile ideal gas of particle of dispersion E=p s /2m, where ν and s are arbitrary, is analyzed in detail. Moreover, we provide an alternative derivation of the partition function for Gentile statistics

  1. Sampling data summary for the ninth run of the Large Slurry Fed Melter

    International Nuclear Information System (INIS)

    Sabatino, D.M.

    1983-01-01

    The ninth experimental run of the Large Slurry Fed Melter (LSFM) was completed June 27, 1983, after 63 days of continuous operation. During the run, the various melter and off-gas streams were sampled and analyzed to determine melter material balances and to characterize off-gas emissions. Sampling methods and preliminary results were reported earlier. The emphasis was on the chemical analyses of the off-gas entrainment, deposits, and scrubber liquid. The significant sampling results from the run are summarized below: Flushing the Frit 165 with Frit 131 without bubbler agitation required 3 to 4.5 melter volumes. The off-gas cesium concentration during feeding was on the order of 36 to 56 μgCs/scf. The cesium concentration in the melter plenum (based on air in-leakage only) was on the order of 110 to 210 μgCs/scf. Using <1 micron as the cut point for semivolatile material, 60% of the chloride, 35% of the sodium and less than 5% of the manganese and iron in the entrainment are present as semivolatiles. A material balance on the scrubber tank solids shows good agreement with entrainment data. An overall cesium balance using LSFM-9 data and the DWPF production rate indicates an emission of 0.11 mCi/yr of cesium from the DWPF off-gas. This is a factor of 27 less than the maximum allowable 3 mCi/yr.

  2. Search for large extra dimensions in final states containing one photon or jet and large missing transverse energy produced in pp̄ collisions at √s = 1.96 TeV.

    Science.gov (United States)

    Aaltonen, T; Adelman, J; Akimoto, T; Albrow, M G; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzurri, P; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Bednar, P; Beecher, D; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Beringer, J; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Calancha, C; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Copic, K; Cordelli, M; Cortiana, G; Cox, D J; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lorenzo, G; Dell'orso, M; Deluca, C; Demortier, L; Deng, J; Deninno, M; Derwent, P F; di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Elagin, A; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Genser, K; Gerberich, H; Gerdes, D; Gessler, A; Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jayatilaka, B; Jeon, E J; Jha, M K; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Knuteson, B; Ko, B R; Koay, S A; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhr, T; Kulkarni, N P; Kurata, M; Kusakabe, Y; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, 
G; Lazzizzera, I; Lecompte, T; Lee, E; Lee, H S; Lee, S W; Leone, S; Lewis, J D; Lin, C S; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lovas, L; Lu, R-S; Lucchesi, D; Lueck, J; Luci, C; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; Macqueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis-Katsikakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzione, A; Merkel, P; Mesropian, C; Miao, T; Miladinovic, N; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moggi, N; Moon, C S; Moore, R; Morello, M J; Morlok, J; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Osterberg, K; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Pianori, E; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Pueschel, E; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savard, P; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, A; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyrla, A; Shalhout, S Z; Shears, T; Shepard, P F; Sherman, D; Shimojima, M; Shiraishi, S; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spreitzer, T; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thompson, G A; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Tourneur, S; Tu, Y; Turini, N; Ukegawa, F; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner-Kuhr, J; Wagner, W; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Xie, S; Yagil, A; Yamamoto, K; 
Yamaoka, J; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zheng, Y; Zucchelli, S

    2008-10-31

    We present the results of searches for large extra dimensions in samples of events with large missing transverse energy E_T and either a photon or a jet produced in pp̄ collisions at √s = 1.96 TeV collected with the Collider Detector at Fermilab II. For γ+E_T and jet+E_T candidate samples corresponding to 2.0 and 1.1 fb⁻¹ of integrated luminosity, respectively, we observe good agreement with standard model expectations and obtain a combined lower limit on the fundamental parameter of the large extra dimensions model M_D as a function of the number of extra dimensions in the model.

  3. A Survey for Spectroscopic Binaries in a Large Sample of G Dwarfs

    Science.gov (United States)

    Udry, S.; Mayor, M.; Latham, D. W.; Stefanik, R. P.; Torres, G.; Mazeh, T.; Goldberg, D.; Andersen, J.; Nordstrom, B.

    For more than 5 years now, the radial velocities for a large sample of G dwarfs (3,347 stars) have been monitored in order to obtain an unequaled set of orbital parameters for solar-type stars (~400 orbits, up to now). This survey provides a considerable improvement on the classical systematic study by Duquennoy and Mayor (1991; DM91). The observational part of the survey has been carried out as a collaboration between the Geneva Observatory, using the two CORAVEL spectrometers for the southern sky, and CfA at the Oak Ridge and Whipple Observatories for the northern sky. As a first glance at these new results, we address in this contribution a special aspect of the orbital eccentricity distribution, namely the disappearance of the void observed in DM91 for quasi-circular orbits with periods larger than 10 days.

  4. Analysis of a large number of clinical studies for breast cancer radiotherapy: estimation of radiobiological parameters for treatment planning

    International Nuclear Information System (INIS)

    Guerrero, M; Li, X Allen

    2003-01-01

    Numerous studies of early-stage breast cancer treated with breast conserving surgery (BCS) and radiotherapy (RT) have been published in recent years. Both external beam radiotherapy (EBRT) and/or brachytherapy (BT) with different fractionation schemes are currently used. The present RT practice is largely based on empirical experience and it lacks a reliable modelling tool to compare different RT modalities or to design new treatment strategies. The purpose of this work is to derive a plausible set of radiobiological parameters that can be used for RT treatment planning. The derivation is based on existing clinical data and is consistent with the analysis of a large number of published clinical studies on early-stage breast cancer. A large number of published clinical studies on the treatment of early breast cancer with BCS plus RT (including whole breast EBRT with or without a boost to the tumour bed, whole breast EBRT alone, brachytherapy alone) and RT alone are compiled and analysed. The linear quadratic (LQ) model is used in the analysis. Three of these clinical studies are selected to derive a plausible set of LQ parameters. The potential doubling time is set a priori in the derivation according to in vitro measurements from the literature. The impact of considering lower or higher T_pot is investigated. The effects of inhomogeneous dose distributions are considered using clinically representative dose volume histograms. The derived LQ parameters are used to compare a large number of clinical studies using different regimes (e.g., RT modality and/or different fractionation schemes with different prescribed dose) in order to validate their applicability. The values of the equivalent uniform dose (EUD) and biologically effective dose (BED) are used as a common metric to compare the biological effectiveness of each treatment regime. We have obtained a plausible set of radiobiological parameters for breast cancer. This set of parameters is consistent with in vitro
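
    For readers unfamiliar with the quantities named above, the standard linear quadratic (LQ) expressions are sketched below; the specific parameter values derived in the study are not reproduced, and the EUD form shown is the commonly used generalized definition (an assumption, since the abstract does not spell it out):

        \[
        \mathrm{SF}(d) = e^{-(\alpha d + \beta d^{2})},
        \qquad
        \mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right),
        \qquad
        \mathrm{EUD} = \Bigl(\sum_i v_i D_i^{a}\Bigr)^{1/a},
        \]

    where n fractions of dose d per fraction are delivered, repopulation is often included by subtracting a term proportional to the overall treatment time and to ln 2/(α T_pot), and v_i is the fractional volume receiving dose D_i in the dose-volume histogram.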

  5. Determining the number of samples required for decisions concerning remedial actions at hazardous waste sites

    International Nuclear Information System (INIS)

    Skiles, J.L.; Redfearn, A.; White, R.K.

    1991-01-01

    The process of collecting, analyzing, and assessing the data needed to make decisions concerning the cleanup of hazardous waste sites is quite complex and often very expensive. This is due to the many elements that must be considered during remedial investigations. The decision maker must have sufficient data to determine the potential risks to human health and the environment and to verify compliance with regulatory requirements, given the availability of resources allocated for a site and the time constraints specified for the completion of the decision-making process. It is desirable to simplify the remedial investigation procedure as much as possible to conserve both time and resources while, simultaneously, minimizing the probability of error associated with each decision to be made. With this in mind, it is necessary to have a practical and statistically valid technique for estimating the number of on-site samples required to 'guarantee' that the correct decisions are made with a specified precision and confidence level. Here, we examine existing methodologies and then develop our own approach for determining a statistically defensible sample size based on specific guidelines that have been established for the risk assessment process.
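
    As a minimal illustration of this kind of calculation (one standard textbook approach to sizing a sample for estimating a mean with a prescribed margin of error, not the specific methodology developed in the paper; the numbers are placeholders), a hedged sketch in Python:

        from math import ceil
        from scipy.stats import norm

        def samples_for_mean(sigma, margin, confidence=0.95):
            """Number of samples n so that the estimated mean of a quantity with
            standard deviation `sigma` lies within +/- `margin` of the true mean
            at the given confidence, assuming roughly normal, independent data."""
            z = norm.ppf(0.5 + confidence / 2.0)    # two-sided critical value
            return ceil((z * sigma / margin) ** 2)

        # Hypothetical example: contaminant concentration with sigma = 12 mg/kg,
        # required precision of +/- 5 mg/kg at 95% confidence.
        print(samples_for_mean(sigma=12.0, margin=5.0))   # -> 23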

  6. Sampling in schools and large institutional buildings: Implications for regulations, exposure and management of lead and copper.

    Science.gov (United States)

    Doré, Evelyne; Deshommes, Elise; Andrews, Robert C; Nour, Shokoufeh; Prévost, Michèle

    2018-04-21

    Legacy lead and copper components are ubiquitous in plumbing of large buildings including schools that serve children most vulnerable to lead exposure. Lead and copper samples must be collected after varying stagnation times and interpreted in reference to different thresholds. A total of 130 outlets (fountains, bathroom and kitchen taps) were sampled for dissolved and particulate lead as well as copper. Sampling was conducted at 8 schools and 3 institutional (non-residential) buildings served by municipal water of varying corrosivity, with and without corrosion control (CC), and without a lead service line. Samples included first draw following overnight stagnation (>8h), partial (30 s) and fully (5 min) flushed, and first draw after 30 min of stagnation. Total lead concentrations in first draw samples after overnight stagnation varied widely from 0.07 to 19.9 μg Pb/L (median: 1.7 μg Pb/L) for large buildings served with non-corrosive water. Higher concentrations were observed in schools with corrosive water without CC (0.9-201 μg Pb/L, median: 14.3 μg Pb/L), while levels in schools with CC ranged from 0.2 to 45.1 μg Pb/L (median: 2.1 μg Pb/L). Partial flushing (30 s) and full flushing (5 min) reduced concentrations by 88% and 92% respectively for corrosive waters without CC. Lead concentrations were 45% than values in 1st draw samples collected after overnight stagnation. Concentrations of particulate Pb varied widely (≥0.02-846 μg Pb/L) and was found to be the cause of very high total Pb concentrations in the 2% of samples exceeding 50 μg Pb/L. Pb levels across outlets within the same building varied widely (up to 1000X) especially in corrosive water (0.85-851 μg Pb/L after 30MS) confirming the need to sample at each outlet to identify high risk taps. Based on the much higher concentrations observed in first draw samples, even after a short stagnation, the first 250mL should be discarded unless no sources

  7. Large area optical mapping of surface contact angle.

    Science.gov (United States)

    Dutra, Guilherme; Canning, John; Padden, Whayne; Martelli, Cicero; Dligatch, Svetlana

    2017-09-04

    Top-down contact angle measurements have been validated and confirmed to be as good if not more reliable than side-based measurements. A range of samples, including industrially relevant materials for roofing and printing, has been compared. Using the top-down approach, mapping in both 1-D and 2-D has been demonstrated. The method was applied to study the change in contact angle as a function of change in silver (Ag) nanoparticle size controlled by thermal evaporation. Large area mapping reveals good uniformity for commercial Aspen paper coated with black laser printer ink. A demonstration of the forensic and chemical analysis potential in 2-D is shown by uncovering the hidden CsF initials made with mineral oil on the coated Aspen paper. The method promises to revolutionize nanoscale characterization and industrial monitoring as well as chemical analyses by allowing rapid contact angle measurements over large areas or large numbers of samples in ways and times that have not been possible before.

  8. Droplet Breakup in Asymmetric T-Junctions at Intermediate to Large Capillary Numbers

    Science.gov (United States)

    Sadr, Reza; Cheng, Way Lee

    2017-11-01

    Splitting a parent droplet into multiple daughter droplets of desired sizes is often required to enhance production and investigational efficiency in microfluidic devices. This can be done in an active or passive mode, depending on whether an external power source is used or not. In this study, three-dimensional simulations were performed using the Volume-of-Fluid (VOF) method to analyze droplet splitting in asymmetric T-junctions with different outlet lengths. The parent droplet is divided into two uneven portions; the volumetric ratio of the daughter droplets, in theory, depends on the length ratio of the outlet branches. The study identified various breakup modes, such as primary, transition, bubble and non-breakup, under various flow conditions and T-junction configurations. In addition, an analysis of the primary breakup regime was conducted to study the breakup mechanisms. The results show that the way a droplet splits in an asymmetric T-junction differs from the process in a symmetric T-junction. A model for the asymmetric breakup criteria at intermediate or large capillary number is presented. The proposed model is an expanded version of a theoretically derived model for symmetric droplet breakup under similar flow conditions.
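
    For reference, the capillary number that delimits these regimes is the usual ratio of viscous to interfacial forces (a standard definition, not specific to this study):

        \[
        \mathrm{Ca} = \frac{\mu\, u}{\sigma},
        \]

    where μ is the dynamic viscosity of the continuous phase, u a characteristic velocity (for example the mean velocity in the feed channel) and σ the interfacial tension; "intermediate to large" refers to the regime in which viscous stresses are no longer small compared with capillary stresses.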

  9. Random Sampling of Correlated Parameters – a Consistent Solution for Unfavourable Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Žerovnik, G., E-mail: gasper.zerovnik@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Trkov, A. [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); International Atomic Energy Agency, PO Box 100, A-1400 Vienna (Austria); Kodeli, I.A. [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Capote, R. [International Atomic Energy Agency, PO Box 100, A-1400 Vienna (Austria); Smith, D.L. [Argonne National Laboratory, 1710 Avenida del Mundo, Coronado, CA 92118-3073 (United States)

    2015-01-15

    Two methods for random sampling according to a multivariate lognormal distribution – the correlated sampling method and the method of transformation of correlation coefficients – are briefly presented. The methods are mathematically exact and enable consistent sampling of correlated inherently positive parameters with given information on the first two distribution moments. Furthermore, a weighted sampling method to accelerate the convergence of parameters with extremely large relative uncertainties is described. However, the method is efficient only for a limited number of correlated parameters.
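
    A minimal sketch of the underlying idea, under the assumption that the parameters are specified by their means, relative uncertainties and a correlation matrix; the exact correlated-sampling and transformation procedures of the paper may differ in detail:

        import numpy as np

        def sample_correlated_lognormal(mean, rel_unc, corr, size, seed=0):
            """Draw `size` vectors of inherently positive, correlated parameters from a
            multivariate lognormal defined by means, relative uncertainties (std/mean)
            and a correlation matrix."""
            rng = np.random.default_rng(seed)
            mean = np.asarray(mean, float)
            cv = np.asarray(rel_unc, float)
            corr = np.asarray(corr, float)
            var_ln = np.log(1.0 + cv**2)             # Var[ln X_i]
            mu_ln = np.log(mean) - 0.5 * var_ln      # E[ln X_i]
            # Transformation of correlation coefficients to the underlying normal space:
            # Cov[ln X_i, ln X_j] = ln(1 + rho_ij * cv_i * cv_j)
            cov_ln = np.log(1.0 + corr * np.outer(cv, cv))
            # Note: cov_ln must stay positive semi-definite; strong negative correlations
            # combined with large uncertainties are among the "unfavourable conditions".
            z = rng.multivariate_normal(mu_ln, cov_ln, size=size)
            return np.exp(z)

        # Hypothetical example: two positively correlated parameters with 10% and 50% uncertainty.
        x = sample_correlated_lognormal([1.0, 2.0], [0.1, 0.5],
                                        [[1.0, 0.6], [0.6, 1.0]], size=10000)
        print(x.mean(axis=0))   # close to [1.0, 2.0]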

  10. Analysis of the research sample collections of Uppsala biobank.

    Science.gov (United States)

    Engelmark, Malin T; Beskow, Anna H

    2014-10-01

    Uppsala Biobank is the joint and only biobank organization of the two principals, Uppsala University and Uppsala University Hospital. Biobanks are required to have updated registries on sample collection composition and management in order to fulfill legal regulations. We report here the results from the first comprehensive and overall analysis of the 131 research sample collections organized in the biobank. The results show that the median of the number of samples in the collections was 700 and that the number of samples varied from less than 500 to over one million. Blood samples, such as whole blood, serum, and plasma, were included in the vast majority, 84.0%, of the research sample collections. Also, as much as 95.5% of the newly collected samples within healthcare included blood samples, which further supports the concept that blood samples have fundamental importance for medical research. Tissue samples were also commonly used and occurred in 39.7% of the research sample collections, often combined with other types of samples. In total, 96.9% of the 131 sample collections included samples collected for healthcare, showing the importance of healthcare as a research infrastructure. Of the collections that had accessed existing samples from healthcare, as much as 96.3% included tissue samples from the Department of Pathology, which shows the importance of pathology samples as a resource for medical research. Analysis of different research areas shows that the most common of known public health diseases are covered. Collections that had generated the most publications, up to over 300, contained a large number of samples collected systematically and repeatedly over many years. More knowledge about existing biobank materials, together with public registries on sample collections, will support research collaborations, improve transparency, and bring us closer to the goals of biobanks, which is to save and prolong human lives and improve health and quality of life.

  11. Extreme Quantum Memory Advantage for Rare-Event Sampling

    Science.gov (United States)

    Aghamohammadi, Cina; Loomis, Samuel P.; Mahoney, John R.; Crutchfield, James P.

    2018-02-01

    We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r, for any large real number r. Then, for a sequence of processes each labeled by an integer size N, we compare how the classical and quantum required memories scale with N. In this setting, since both memories can diverge as N → ∞, the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N → ∞, but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit memory advantage for sampling of almost all of their rare-event classes.

  12. Extreme Quantum Memory Advantage for Rare-Event Sampling

    Directory of Open Access Journals (Sweden)

    Cina Aghamohammadi

    2018-02-01

    Full Text Available We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r, for any large real number r. Then, for a sequence of processes each labeled by an integer size N, we compare how the classical and quantum required memories scale with N. In this setting, since both memories can diverge as N→∞, the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N→∞, but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit memory advantage for sampling of almost all of their rare-event classes.

  13. A large sample of Kohonen selected E+A (post-starburst) galaxies from the Sloan Digital Sky Survey

    Science.gov (United States)

    Meusinger, H.; Brünecke, J.; Schalldach, P.; in der Au, A.

    2017-01-01

    Context. The galaxy population in the contemporary Universe is characterised by a clear bimodality, blue galaxies with significant ongoing star formation and red galaxies with only a little. The migration between the blue and the red cloud of galaxies is an issue of active research. Post starburst (PSB) galaxies are thought to be observed in the short-lived transition phase. Aims: We aim to create a large sample of local PSB galaxies from the Sloan Digital Sky Survey (SDSS) to study their characteristic properties, particularly morphological features indicative of gravitational distortions and indications for active galactic nuclei (AGNs). Another aim is to present a tool set for an efficient search in a large database of SDSS spectra based on Kohonen self-organising maps (SOMs). Methods: We computed a huge Kohonen SOM for ∼10⁶ spectra from SDSS data release 7. The SOM is made fully available, in combination with an interactive user interface, for the astronomical community. We selected a large sample of PSB galaxies taking advantage of the clustering behaviour of the SOM. The morphologies of both PSB galaxies and randomly selected galaxies from a comparison sample in SDSS Stripe 82 (S82) were inspected on deep co-added SDSS images to search for indications of gravitational distortions. We used the Portsmouth galaxy property computations to study the evolutionary stage of the PSB galaxies and archival multi-wavelength data to search for hidden AGNs. Results: We compiled a catalogue of 2665 PSB galaxies with redshifts z 3 Å and z cloud, in agreement with the idea that PSB galaxies represent the transitioning phase between actively and passively evolving galaxies. The relative frequency of distorted PSB galaxies is at least 57% for EW(Hδ) > 5 Å, significantly higher than in the comparison sample. The search for AGNs based on conventional selection criteria in the radio and MIR results in a low AGN fraction of ∼2-3%. We confirm an MIR excess in the mean SED of
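
    The record does not describe the SOM implementation itself; the following is a generic, minimal Kohonen training loop (not the authors' code, with grid size, learning rate and neighbourhood schedule as illustrative assumptions):

        import numpy as np

        def train_som(data, grid=(20, 20), epochs=10, lr0=0.5, sigma0=5.0, seed=0):
            """Train a rectangular Kohonen self-organising map on the row vectors
            of `data` (shape: n_samples x n_features)."""
            rng = np.random.default_rng(seed)
            rows, cols = grid
            weights = rng.normal(size=(rows * cols, data.shape[1]))
            # Grid coordinates of every map unit, used by the neighbourhood function.
            coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
            n_steps = epochs * len(data)
            for step in range(n_steps):
                x = data[rng.integers(len(data))]
                # Best-matching unit: the unit whose weight vector is closest to x.
                bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
                frac = step / n_steps
                lr = lr0 * (1.0 - frac)                  # decaying learning rate
                sigma = sigma0 * (1.0 - frac) + 1e-3     # decaying neighbourhood radius
                d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
                h = np.exp(-d2 / (2.0 * sigma ** 2))     # Gaussian neighbourhood weights
                weights += lr * h[:, None] * (x - weights)
            return weights.reshape(rows, cols, -1)

        # Hypothetical example: map 1000 five-dimensional vectors onto a 20 x 20 grid.
        som = train_som(np.random.default_rng(1).normal(size=(1000, 5)))
        print(som.shape)   # (20, 20, 5)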

  14. Design development of robotic system for on line sampling in fuel reprocessing

    International Nuclear Information System (INIS)

    Balasubramanian, G.R.; Venugopal, P.R.; Padmashali, G.K.

    1990-01-01

    This presentation describes the design and developmental work that is being carried out for the design of an automated sampling system for fast reactor fuel reprocessing plants. The plant proposes to use integrated sampling system. The sample is taken across regular process streams from any intermediate hold up pot. A robot system is planned to take the sample from the sample pot, transfer it to the sample bottle, cap the bottle and transfer the bottle to a pneumatic conveying station. The system covers a large number of sample pots. Alternate automated systems are also examined (1). (author). 4 refs., 2 figs

  15. The electron transport problem sampling by Monte Carlo individual collision technique

    International Nuclear Information System (INIS)

    Androsenko, P.A.; Belousov, V.I.

    2005-01-01

    The problem of electron transport is of great interest in many fields of modern science, and Monte Carlo sampling is commonly used to solve it. Electron transport is characterized by a very large number of individual interactions. To simulate it, the 'condensed history' technique may be used, in which a large number of collisions are grouped into a single step that is sampled randomly. Another kind of Monte Carlo sampling is the individual collision technique, which has clear advantages over the condensed history approach. For example, one does not need to specify the parameters required by the condensed history technique, such as the upper limit for electron energy, the resolution, the number of sub-steps, etc. The condensed history technique may also lose some important electron tracks because of the limitations imposed by its step parameters and the weaknesses of some of its algorithms, for example the energy indexing algorithm. The individual collision technique does not have these disadvantages. This report presents some sampling algorithms of the new version of the BRAND code, in which the individual collision technique is used. All information on electrons was taken from ENDF-6 files, which form an important part of BRAND; these files were not pre-processed but taken directly from the electron data source. Four kinds of interaction were considered: elastic scattering, bremsstrahlung, atomic excitation and atomic electro-ionization. Some sampling results are presented in comparison with analogous approaches; for example, the endovascular radiotherapy problem (P2) of QUADOS 2002 is compared with other commonly used techniques. (authors)
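
    As a toy illustration of the individual-collision approach (a generic free-flight/collision loop; it is not the BRAND algorithm and does not read ENDF-6 data):

        import numpy as np

        rng = np.random.default_rng(1)

        def track_particle(energy, sigma_total, cutoff=1e-3):
            """Follow one particle collision by collision in an infinite homogeneous medium.
            sigma_total(E) is an assumed macroscopic total cross section (1/cm) at energy E;
            a real code would sample elastic scattering, bremsstrahlung, excitation and
            ionization from tabulated data instead of the placeholder energy loss below."""
            path_length, collisions = 0.0, 0
            while energy > cutoff:
                # Distance to the next interaction: exponential free-flight sampling.
                path_length += -np.log(rng.random()) / sigma_total(energy)
                energy *= rng.random()      # placeholder: random fractional energy loss
                collisions += 1
            return path_length, collisions

        print(track_particle(1.0, sigma_total=lambda e: 5.0 + 1.0 / e))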

  16. A prototype splitter apparatus for dividing large catches of small fish

    Science.gov (United States)

    Stapanian, Martin A.; Edwards, William H.

    2012-01-01

    Due to financial and time constraints, it is often necessary in fisheries studies to divide large samples of fish and estimate total catch from the subsample. The subsampling procedure may involve potential human biases or may be difficult to perform in rough conditions. We present a prototype gravity-fed splitter apparatus for dividing large samples of small fish (30–100 mm TL). The apparatus features a tapered hopper with a sliding and removable shutter. The apparatus provides a comparatively stable platform for objectively obtaining subsamples, and it can be modified to accommodate different sizes of fish and different sample volumes. The apparatus is easy to build, inexpensive, and convenient to use in the field. To illustrate the performance of the apparatus, we divided three samples (total N = 2,000 fish) composed of four fish species. Our results indicated no significant bias in estimating either the number or proportion of each species from the subsample. Use of this apparatus or a similar apparatus can help to standardize subsampling procedures in large surveys of fish. The apparatus could be used for other applications that require dividing a large amount of material into one or more smaller subsamples.

  17. Brachytherapy dose-volume histogram computations using optimized stratified sampling methods

    International Nuclear Information System (INIS)

    Karouzakis, K.; Lahanas, M.; Milickovic, N.; Giannouli, S.; Baltas, D.; Zamboglou, N.

    2002-01-01

    A stratified sampling method for the efficient repeated computation of dose-volume histograms (DVHs) in brachytherapy is presented, as used in anatomy-based brachytherapy optimization methods. The aim of the method is to reduce the number of sampling points required for the calculation of DVHs for the body and the PTV; from the DVHs, quantities such as the conformity index (COIN) and COIN integrals are derived. This is achieved by using piecewise-uniform distributions of sampling points, with the density in each region obtained from a survey of the gradients or the variance of the dose distribution in that region. The shape of the sampling regions is adapted to the patient anatomy and to the shape and size of the implant. The application of this method requires a single preprocessing step that takes only a few seconds. Ten clinical implants were used to study the appropriate number of sampling points, given a required accuracy for quantities such as cumulative DVHs, COIN indices and COIN integrals. We found that DVHs of very large tissue volumes surrounding the PTV, and also COIN distributions, can be obtained using 5-10 times fewer sampling points than with uniformly distributed points
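
    A minimal sketch of variance-driven stratified sampling for a cumulative DVH, under the assumption that sampling points are allocated to regions in proportion to the within-region dose variance; this is only in the spirit of the method described above, not the authors' implementation, and all names and numbers are illustrative.

    ```python
    import numpy as np

    def stratified_dvh(dose, region_labels, n_total, rng):
        """Allocate sampling points to regions in proportion to within-region dose
        variance, sample voxels uniformly inside each region, and weight each sample
        by the inverse sampling fraction of its region so the pooled cumulative DVH
        remains unbiased (illustrative scheme only)."""
        regions = np.unique(region_labels)
        variances = np.array([dose[region_labels == r].var() + 1e-12 for r in regions])
        alloc = np.maximum(1, np.round(n_total * variances / variances.sum())).astype(int)
        values, weights = [], []
        for r, n_r in zip(regions, alloc):
            idx = np.flatnonzero(region_labels == r)
            n_r = min(int(n_r), idx.size)
            values.append(dose[rng.choice(idx, size=n_r, replace=False)])
            weights.append(np.full(n_r, idx.size / n_r))   # inverse sampling fraction
        values, weights = np.concatenate(values), np.concatenate(weights)
        bins = np.linspace(0.0, dose.max(), 50)
        return bins, np.array([weights[values >= b].sum() / weights.sum() for b in bins])

    rng = np.random.default_rng(1)
    dose = rng.gamma(2.0, 2.0, size=10_000)                # toy dose values
    labels = (dose > np.median(dose)).astype(int)          # two toy "regions"
    bins, dvh = stratified_dvh(dose, labels, n_total=500, rng=rng)
    ```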

  18. Scalability on LHS (Latin Hypercube Sampling) samples for use in uncertainty analysis of large numerical models

    International Nuclear Information System (INIS)

    Baron, Jorge H.; Nunez Mac Leod, J.E.

    2000-01-01

    The present paper deals with the utilization of advanced statistical sampling methods to perform uncertainty and sensitivity analysis on numerical models. Such models may represent physical phenomena, logical structures (such as boolean expressions) or other systems, and several of their intrinsic parameters and/or input variables are usually treated simultaneously as random variables. In the present paper a simple method to scale up Latin Hypercube Sampling (LHS) samples is presented, starting with a small sample and duplicating its size at each step, which makes it possible to reuse the numerical model results already obtained with the smaller sample. The method does not distort the statistical properties of the random variables and does not add any bias to the samples. The result is that a significant reduction in the running time of the numerical models can be achieved (by re-using the previously run samples), while keeping all the advantages of LHS, until an acceptable representation level is achieved in the output variables. (author)
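
    A minimal sketch of the doubling idea, under the assumption that the existing points form a valid LHS on the unit hypercube: each dimension is re-divided into 2n equal strata, the strata already occupied by old points are kept as they are, one coordinate is drawn uniformly inside each empty stratum, and the new coordinates are randomly paired across dimensions. Function and variable names are illustrative, not from the paper.

    ```python
    import numpy as np

    def double_lhs(existing, rng):
        """Scale an n-point LHS on [0, 1)^d up to 2n points while reusing the old points."""
        n, d = existing.shape
        m = 2 * n
        new_pts = np.empty((n, d))
        for j in range(d):
            occupied = np.floor(existing[:, j] * m).astype(int)   # strata taken by old points
            empty = np.setdiff1d(np.arange(m), occupied)          # exactly n strata remain free
            coords = (empty + rng.random(empty.size)) / m         # one draw per free stratum
            new_pts[:, j] = rng.permutation(coords)               # random pairing across dims
        return np.vstack([existing, new_pts])

    rng = np.random.default_rng(0)
    base = np.column_stack([(rng.permutation(4) + rng.random(4)) / 4 for _ in range(2)])
    sample8 = double_lhs(base, rng)       # 8-point LHS containing the original 4 points
    sample16 = double_lhs(sample8, rng)   # duplicate the size again
    ```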

  19. Templates, Numbers & Watercolors.

    Science.gov (United States)

    Clemesha, David J.

    1990-01-01

    Describes how a second-grade class used large templates to draw and paint five-digit numbers. The lesson integrated artistic knowledge and vocabulary with their mathematics lesson in place value. Students learned how draftspeople use templates, and they studied number paintings by Charles Demuth and Jasper Johns. (KM)

  20. On the aspiration characteristics of large-diameter, thin-walled aerosol sampling probes at yaw orientations with respect to the wind

    International Nuclear Information System (INIS)

    Vincent, J.H.; Mark, D.; Smith, T.A.; Stevens, D.C.; Marshall, M.

    1986-01-01

    Experiments were carried out in a large wind tunnel to investigate the aspiration efficiencies of thin-walled aerosol sampling probes of large diameter (up to 50 mm) at orientations with respect to the wind direction ranging from 0 to 180 degrees. Sampling conditions ranged from sub- to super-isokinetic. The experiments employed test dusts of close-graded fused alumina and were conducted under conditions of controlled freestream turbulence. For orientations up to and including 90 degrees, the results were qualitatively and quantitatively consistent with a new physical model which takes account of the fact that the sampled air not only diverges or converges (depending on the relationship between wind speed and sampling velocity) but also turns to pass through the plane of the sampling orifice. The previously published results of Durham and Lundgren (1980) and Davies and Subari (1982) for smaller probes were also in good agreement with the new model. The model breaks down, however, for orientations greater than 90 degrees due to the increasing effect of particle impaction onto the blunt leading edge of the probe body. For the probe facing directly away from the wind (180 degree orientation), aspiration efficiency is dominated almost entirely by this effect. (author)

  1. Random numbers spring from alpha decay

    International Nuclear Information System (INIS)

    Frigerio, N.A.; Sanathanan, L.P.; Morley, M.; Clark, N.A.; Tyler, S.A.

    1980-05-01

    Congruential random number generators, which are widely used in Monte Carlo simulations, are deficient in that the numbers they generate are concentrated in a relatively small number of hyperplanes. While this deficiency may not be a limitation in small Monte Carlo studies involving a few variables, it introduces a significant bias in large simulations requiring high resolution. This bias was recognized and assessed during preparations for an accident analysis study of nuclear power plants. This report describes a random number device based on the radioactive alpha decay of a 235U source in a high-resolution gas proportional counter. The signals were fed to a 4096-channel analyzer and for each channel the frequency of signals registered in a 20,000-microsecond interval was recorded. The parity bits of these frequency counts (0 for an even count and 1 for an odd count) were then assembled in sequence to form 31-bit binary random numbers and transcribed to a magnetic tape. This cycle was repeated as many times as necessary to create 3 million random numbers. The frequency distribution of counts from the present device conforms to the Brockwell-Moyal distribution, which takes into account the dead time of the counter (both the dead time and the decay constant of the underlying Poisson process were estimated). Analysis of the count data and tests of randomness on a sample set of the 31-bit binary numbers indicate that this random number device is a highly reliable source of truly random numbers. Its use is, therefore, recommended in Monte Carlo simulations for which congruential pseudorandom number generators are found to be inadequate. 6 figures, 5 tables
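
    A small sketch of the parity-bit assembly step described above, assuming the channel frequency counts are already available as a list of integers; the function name and the toy counts are illustrative only.

    ```python
    def parity_bits_to_integers(counts, width=31):
        """Take the parity (count mod 2) of successive frequency counts and pack each
        run of `width` parity bits into one unsigned integer, as described above."""
        bits = [c & 1 for c in counts]
        numbers = []
        for i in range(0, len(bits) - width + 1, width):
            value = 0
            for b in bits[i:i + width]:
                value = (value << 1) | b
            numbers.append(value)
        return numbers

    # toy example with made-up counts (66 counts -> two 31-bit numbers)
    print(parity_bits_to_integers([5, 2, 9, 4, 7, 7] * 11, width=31))
    ```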

  2. Characterisation of large zooplankton sampled with two different gears during midwinter in Rijpfjorden, Svalbard

    Directory of Open Access Journals (Sweden)

    Błachowiak-Samołyk Katarzyna

    2017-12-01

    Full Text Available During a midwinter cruise north of 80°N to Rijpfjorden, Svalbard, the composition and vertical distribution of the zooplankton community were studied using two different samplers: (1) a vertically hauled multiple plankton sampler (MPS; mouth area 0.25 m², mesh size 200 μm) and (2) a horizontally towed Methot Isaacs Kidd trawl (MIK; mouth area 3.14 m², mesh size 1500 μm). Our results revealed substantially higher species diversity (49 taxa) than if a single sampler (MPS: 38 taxa, MIK: 28) had been used. The youngest stage present (CIII) of Calanus spp. (including C. finmarchicus and C. glacialis) was sampled exclusively by the MPS, and the frequency of CIV copepodites in MPS samples was double that in MIK samples. In contrast, catches of the CV-CVI copepodites of Calanus spp. were substantially higher in the MIK samples (3-fold and 5-fold higher for adult males and females, respectively). The MIK sampling clearly showed that the highest abundances of all three Thysanoessa spp. were in the upper layers, although there was a tendency for the larger-sized euphausiids to occur deeper. Consistent patterns for the vertical distributions of the large zooplankters (e.g. ctenophores, euphausiids) collected by the MPS and MIK samplers provided more complete data on their abundances and sizes than obtained by a single net. Possible mechanisms contributing to the observed patterns of distribution, e.g. high abundances of both Calanus spp. and their predators (ctenophores and chaetognaths) in the upper water layers during midwinter, are discussed.

  3. Galaxy redshift surveys with sparse sampling

    International Nuclear Information System (INIS)

    Chiang, Chi-Ting; Wullstein, Philipp; Komatsu, Eiichiro; Jee, Inh; Jeong, Donghui; Blanc, Guillermo A.; Ciardullo, Robin; Gronwall, Caryl; Hagen, Alex; Schneider, Donald P.; Drory, Niv; Fabricius, Maximilian; Landriau, Martin; Finkelstein, Steven; Jogee, Shardha; Cooper, Erin Mentuch; Tuttle, Sarah; Gebhardt, Karl; Hill, Gary J.

    2013-01-01

    Survey observations of the three-dimensional locations of galaxies are a powerful approach to measure the distribution of matter in the universe, which can be used to learn about the nature of dark energy, physics of inflation, neutrino masses, etc. A competitive survey, however, requires a large volume (e.g., V_survey ∼ 10 Gpc³) to be covered, and thus tends to be expensive. A "sparse sampling" method offers a more affordable solution to this problem: within a survey footprint covering a given survey volume, V_survey, we observe only a fraction of the volume. The distribution of observed regions should be chosen such that their separation is smaller than the length scale corresponding to the wavenumber of interest. Then one can recover the power spectrum of galaxies with precision expected for a survey covering a volume of V_survey (rather than the volume of the sum of observed regions) with the number density of galaxies given by the total number of observed galaxies divided by V_survey (rather than the number density of galaxies within an observed region). We find that regularly-spaced sampling yields an unbiased power spectrum with no window function effect, and deviations from regularly-spaced sampling, which are unavoidable in realistic surveys, introduce calculable window function effects and increase the uncertainties of the recovered power spectrum. On the other hand, we show that the two-point correlation function (pair counting) is not affected by sparse sampling. While we discuss the sparse sampling method within the context of the forthcoming Hobby-Eberly Telescope Dark Energy Experiment, the method is general and can be applied to other galaxy surveys

  4. Measurements in large JP-4 pool fires

    International Nuclear Information System (INIS)

    Keltner, N.R.; Kent, L.A.; Schneider, M.E.

    1987-01-01

    Over the past four years, Sandia National Laboratories has conducted a number of large pool fire tests to evaluate the design of radioactive material (RAM) shipping containers. Some of these tests have been designed to define the thermal environment and some have been used for certification testing. In each test there have been a number of fire diagnostic measurements. The simplest sets of diagnostics have involved measurements of temperature at several elevations on arrays of towers, measurements of hot wall heat flux with small calorimeters suspended from the towers, the average fuel recession rate, and the wind speed and direction. The most complex sets of diagnostics have included the above and in various tests added radiometers in the lower flame zone, centerline velocity measurements at a number of elevations, radiometers and calorimeters at the fuel surface, large cylindrical and flat plate calorimeters, infrared imaging, time resolved fuel recession rates, and a variety of soot particle concentration and size measurements made in the plume with a tethered balloon and an instrumented airplane. All of the large fires have been conducted in a 9.1 m by 18.3 m pool using JP-4 as the fuel. Typical duration is one-half hour. Covering all of the results is beyond the scope of a single paper. Conditionally sampled temperature and velocity measurements from one fire will be presented; for this fire, a 20 cm layer of fuel was floated on 61 cm of water. Pool surface heat flux, fuel recession rate data, and smoke emission data from a second fire are given. Because the wind has a strong effect on the temperature and velocity measurements, conditional sampling has been used to try to obtain data during periods of low winds. 10 refs., 3 figs

  5. Measurement uncertainty of ester number, acid number and patchouli alcohol of patchouli oil produced in Yogyakarta

    Science.gov (United States)

    Istiningrum, Reni Banowati; Saepuloh, Azis; Jannah, Wirdatul; Aji, Didit Waskito

    2017-03-01

    Yogyakarta is one of the patchouli oil distillation centers in Indonesia. The quality of patchouli oil greatly affects its market price. Therefore, testing the quality parameters of patchouli oil is an important concern, one aspect of which is determination of the measurement uncertainty. This study determines the measurement uncertainty of the ester number, the acid number and the patchouli alcohol content through a bottom-up approach. The contributors to the measurement uncertainty of the ester number are the sample mass, the blank and sample titration volumes, the molar mass of KOH, the HCl normality, and replication. The contributors to the measurement uncertainty of the acid number are the sample mass, the sample titration volume, the relative mass and normality of KOH, and repetition. For the determination of patchouli alcohol by gas chromatography, only repeatability is considered as a source of measurement uncertainty, because reference materials are not available.
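
    As a hedged illustration of the bottom-up (GUM-style) combination of uncertainty contributors mentioned above, the sketch below assumes an ester-number model of the form E = 56.1·N·(Vb − Vs)/m and propagates the standard uncertainties of the inputs together with a repeatability term; the model form, values and uncertainties are assumptions for illustration, not the study's data.

    ```python
    import math

    M_KOH = 56.1  # g/mol

    def ester_number(N, Vb, Vs, m):
        """Assumed model for illustration: E = M_KOH * N * (Vb - Vs) / m, with N the
        titrant normality (mol/L), Vb/Vs the blank/sample titration volumes (mL) and
        m the sample mass (g)."""
        return M_KOH * N * (Vb - Vs) / m

    def combined_uncertainty(N, Vb, Vs, m, u_N, u_V, u_m, u_rep):
        """GUM-style bottom-up combination: sensitivity coefficients (partial
        derivatives of the assumed model) times standard uncertainties, summed in
        quadrature together with a repeatability term."""
        E = ester_number(N, Vb, Vs, m)
        terms = [
            (E / N) * u_N,              # normality contribution
            (M_KOH * N / m) * u_V,      # blank titration volume
            (M_KOH * N / m) * u_V,      # sample titration volume
            (E / m) * u_m,              # sample mass
            u_rep,                      # repeatability
        ]
        return math.sqrt(sum(t * t for t in terms))

    # hypothetical titration of a 2 g sample
    print(ester_number(0.5, 20.0, 12.0, 2.0))                                # ≈ 112 mg KOH/g
    print(combined_uncertainty(0.5, 20.0, 12.0, 2.0, 0.002, 0.03, 0.005, 0.4))
    ```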

  6. Large-Eddy Simulation of a High Reynolds Number Flow Around a Cylinder Including Aeroacoustic Predictions

    Science.gov (United States)

    Spyropoulos, Evangelos T.; Holmes, Bayard S.

    1997-01-01

    The dynamic subgrid-scale model is employed in large-eddy simulations of flow over a cylinder at a Reynolds number, based on the diameter of the cylinder, of 90,000. The Centric SPECTRUM(trademark) finite element solver is used for the analysis. The far field sound pressure is calculated from Lighthill-Curle's equation using the computed fluctuating pressure at the surface of the cylinder. The sound pressure level at a location 35 diameters away from the cylinder and at an angle of 90 deg with respect to the wake's downstream axis was found to have a peak value of approximately 110 dB. Slightly smaller peak values were predicted at the 60 deg and 120 deg locations. A grid refinement study suggests that the dynamic model demands mesh refinement beyond that used here.

  7. Multilevel systematic sampling to estimate total fruit number for yield forecasts

    DEFF Research Database (Denmark)

    Wulfsohn, Dvora-Laio; Zamora, Felipe Aravena; Tellez, Camilla Potin

    2012-01-01

    procedure for unbiased estimation of fruit number for yield forecasts. In the Spring of 2009 we estimated the total number of fruit in several rows of each of 14 commercial fruit orchards growing apple (11 groves), kiwifruit (two groves), and table grapes (one grove) in central Chile. Survey times were 10...

  8. Aerodynamic Effects of High Turbulence Intensity on a Variable-Speed Power-Turbine Blade With Large Incidence and Reynolds Number Variations

    Science.gov (United States)

    Flegel, Ashlie B.; Giel, Paul W.; Welch, Gerard E.

    2014-01-01

    The effects of high inlet turbulence intensity on the aerodynamic performance of a variable speed power turbine blade are examined over large incidence and Reynolds number ranges. These results are compared to previous measurements made in a low turbulence environment. Both high and low turbulence studies were conducted in the NASA Glenn Research Center Transonic Turbine Blade Cascade Facility. The purpose of the low inlet turbulence study was to examine the transitional flow effects that are anticipated at cruise Reynolds numbers. The current study extends this to LPT-relevant turbulence levels while perhaps sacrificing transitional flow effects. Assessing the effects of turbulence at these large incidence and Reynolds number variations complements the existing database. Downstream total pressure and exit angle data were acquired for 10 incidence angles ranging from +15.8 deg to -51.0 deg. For each incidence angle, data were obtained at five flow conditions with the exit Reynolds number ranging from 2.12×10^5 to 2.12×10^6 and at a design exit Mach number of 0.72. In order to achieve the lowest Reynolds number, the exit Mach number was reduced to 0.35 due to facility constraints. The inlet turbulence intensity, Tu, was measured using a single-wire hotwire located 0.415 axial-chord upstream of the blade row. The inlet turbulence levels ranged from 8 to 15 percent for the current study. Tu measurements were also made farther upstream so that turbulence decay rates could be calculated as needed for computational inlet boundary conditions. Downstream flow field measurements were obtained using a pneumatic five-hole pitch/yaw probe located in a survey plane 7 percent axial chord aft of the blade trailing edge and covering three blade passages. Blade and endwall static pressures were acquired for each flow condition as well. The blade loading data show that the suction surface separation that was evident at many of the low Tu conditions has been eliminated. At

  9. Pattern transfer on large samples using a sub-aperture reactive ion beam

    Energy Technology Data Exchange (ETDEWEB)

    Miessler, Andre; Mill, Agnes; Gerlach, Juergen W.; Arnold, Thomas [Leibniz-Institut fuer Oberflaechenmodifizierung (IOM), Permoserstrasse 15, D-04318 Leipzig (Germany)

    2011-07-01

    In comparison to sole Ar ion beam sputtering, Reactive Ion Beam Etching (RIBE) has the main advantage of increased selectivity for different kinds of materials, owing to chemical contributions during material removal. RIBE is therefore an excellent candidate for pattern transfer applications. The goal of the present study is to apply a sub-aperture reactive ion beam for pattern transfer on large fused silica samples. In this respect, the etching behavior in the ion beam periphery plays a decisive role. Using CF4 as the reactive gas, XPS measurements of the modified surface expose impurities such as Ni, Fe and Cr, which belong to chemically eroded material of the plasma pot, as well as an accumulation of carbon (up to 40 atomic percent) in the beam periphery. The substitution of CF4 by NF3 as the reactive gas brings several benefits: more stable ion beam conditions, a reduction of the beam size down to a diameter of 5 mm, and a reduced amount of Ni, Fe and Cr contamination. However, the formation of a silicon nitride layer handicaps the chemical contribution to the etching process. These side effects influence the transfer of trench structures onto quartz by changing the selectivity due to the altered chemical reaction of the modified resist layer. With this in mind, we investigate the pattern transfer on large fused silica plates using NF3 sub-aperture RIBE.

  10. Determination of the plutonium contamination level in biological samples by liquid scintillation

    International Nuclear Information System (INIS)

    Willemot, J.M.; Verry, M.; Lataillade, G.

    1989-01-01

    Conventional radiochemical processes cannot carry out without delay the very large number of analyses required in plutonium toxicology studies. Liquid scintillation is the best method for quickly determining plutonium contamination levels in a wide variety of samples (bone, organs, ...) [fr

  11. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining

    Science.gov (United States)

    Hero, Alfred O.; Rajaratnam, Bala

    2015-01-01

    When can reliable inference be drawn in the “Big Data” context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large scale inference. In large scale data applications like genomics, connectomics, and eco-informatics the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for “Big Data”. Sample complexity however has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exascale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that is of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks. PMID:27087700

  12. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining.

    Science.gov (United States)

    Hero, Alfred O; Rajaratnam, Bala

    2016-01-01

    When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large scale inference. In large scale data applications like genomics, connectomics, and eco-informatics the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for "Big Data". Sample complexity however has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exascale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that is of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
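
    A tiny simulation, purely illustrative, of why the fixed-n, growing-p regime is delicate for correlation mining: even for completely independent variables, the largest off-diagonal sample correlation grows with p when the number of samples n is held fixed.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 20                                   # fixed number of samples (replicates)
    for p in (10, 100, 1000):                # growing number of variables
        X = rng.standard_normal((n, p))      # truly independent variables
        R = np.corrcoef(X, rowvar=False)
        max_offdiag = np.abs(R[np.triu_indices(p, k=1)]).max()
        print(p, round(max_offdiag, 3))      # spurious correlations grow with p at fixed n
    ```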

  13. Sampling of finite elements for sparse recovery in large scale 3D electrical impedance tomography

    International Nuclear Information System (INIS)

    Javaherian, Ashkan; Moeller, Knut; Soleimani, Manuchehr

    2015-01-01

    This study proposes a method to improve performance of sparse recovery inverse solvers in 3D electrical impedance tomography (3D EIT), especially when the volume under study contains small-sized inclusions, e.g. 3D imaging of breast tumours. Initially, a quadratic regularized inverse solver is applied in a fast manner with a stopping threshold much greater than the optimum. Based on assuming a fixed level of sparsity for the conductivity field, finite elements are then sampled via applying a compressive sensing (CS) algorithm to the rough blurred estimation previously made by the quadratic solver. Finally, a sparse inverse solver is applied solely to the sampled finite elements, with the solution to the CS as its initial guess. The results show the great potential of the proposed CS-based sparse recovery in improving accuracy of sparse solution to the large-size 3D EIT. (paper)

  14. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System: Outage-Limited Scenario

    KAUST Repository

    Makki, Behrooz

    2016-03-22

    This paper investigates the performance of point-to-point multiple-input multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas required to satisfy different outage probability constraints. Our results are obtained for different fading conditions, and the effect of the power amplifiers' efficiency and the feedback error probability on the performance of the MIMO-HARQ systems is analyzed. Then, we use some recent results on the achievable rates of finite block-length codes to analyze the effect of the codeword lengths on the system performance. Moreover, we derive closed-form expressions for the asymptotic performance of the MIMO-HARQ systems when the number of antennas increases. Our analytical and numerical results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 1972-2012 IEEE.

  15. Compressed sampling for boundary measurements in three-dimensional electrical impedance tomography

    International Nuclear Information System (INIS)

    Javaherian, Ashkan; Soleimani, Manuchehr

    2013-01-01

    Electrical impedance tomography (EIT) utilizes electrodes on a medium's surface to produce measured data from which the conductivity distribution inside the medium is estimated. For cases in which relocation of electrodes is impractical or no a priori assumptions can be made to optimize electrode placement, a large number of electrodes may be needed to cover the entire possible imaging volume. This may occur for dynamically varying conductivity distributions in 3D EIT. Three-dimensional EIT then requires inverting very large linear systems to calculate the conductivity field, which causes significant problems regarding storage space and reconstruction time; in addition, data acquisition with a large number of electrodes reduces the achievable frame rate, which is otherwise considered a major advantage of EIT imaging. This study proposes an idea to reduce the reconstruction complexity based on the well-known compressed sampling theory. By applying the so-called model-based CoSaMP algorithm to large-size data collected by a 256-channel system, the size of the forward operator and the data acquisition time are reduced to those of a 32-channel system, while the accuracy of reconstruction is significantly improved. The results demonstrate the great capability of compressed sampling for overcoming the challenges arising in 3D EIT. (paper)

  16. Rapid Surface Sampling and Archival Record (RSSAR) system. Final report, October 1995 - May 1997

    International Nuclear Information System (INIS)

    1998-01-01

    This report describes the results of Phase 2 efforts to develop a Rapid Surface Sampling and Archival Record (RSSAR) System for the detection of semivolatile organic contaminants on concrete, transite, and metal surfaces. The characterization of equipment and building surfaces for the presence of contaminants as part of building decontamination and decommissioning activities is an immensely large task of concern to both government and industry. Because of the high cost of hazardous waste disposal, old, contaminated buildings cannot simply be demolished and scrapped. Contaminated and clean materials must be clearly identified and segregated so that the clean material can be recycled or reused, if possible, or disposed of more cheaply as nonhazardous waste. DOE has a number of sites requiring surface characterization. These sites are large, contain very heterogeneous patterns of contamination (requiring high sampling density), and will thus necessitate an enormous number of samples to be taken and analyzed. Characterization of building and equipment surfaces will be needed during initial investigations, during cleanup operations, and during the final confirmation process, increasing the total number of samples well beyond that needed for initial characterization. This multiplicity of information places a premium on the ability to handle and track data as efficiently as possible

  17. Rapid Surface Sampling and Archival Record (RSSAR) system. Final report, October 1995--May 1997

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-12-31

    This report describes the results of Phase 2 efforts to develop a Rapid Surface Sampling and Archival Record (RSSAR) System for the detection of semivolatile organic contaminants on concrete, transite, and metal surfaces. The characterization of equipment and building surfaces for the presence of contaminants as part of building decontamination and decommissioning activities is an immensely large task of concern to both government and industry. Because of the high cost of hazardous waste disposal, old, contaminated buildings cannot simply be demolished and scrapped. Contaminated and clean materials must be clearly identified and segregated so that the clean material can be recycled or reused, if possible, or disposed of more cheaply as nonhazardous waste. DOE has a number of sites requiring surface characterization. These sites are large, contain very heterogeneous patterns of contamination (requiring high sampling density), and will thus necessitate an enormous number of samples to be taken and analyzed. Characterization of building and equipment surfaces will be needed during initial investigations, during cleanup operations, and during the final confirmation process, increasing the total number of samples well beyond that needed for initial characterization. This multiplicity of information places a premium on the ability to handle and track data as efficiently as possible.

  18. On the chromatic number of pentagon-free graphs of large minimum degree

    DEFF Research Database (Denmark)

    Thomassen, Carsten

    2007-01-01

    We prove that, for each fixed real number c > 0, the pentagon-free graphs of minimum degree at least cn (where n is the number of vertices) have bounded chromatic number. This problem was raised by Erdős and Simonovits in 1973. A similar result holds for any other fixed odd cycle, except the triangle.

  19. Quadruplex MAPH: improvement of throughput in high-resolution copy number screening

    Directory of Open Access Journals (Sweden)

    Walker Susan

    2009-09-01

    Full Text Available Abstract Background Copy number variation (CNV) in the human genome is recognised as a widespread and important source of human genetic variation. Now the challenge is to screen for these CNVs at high resolution in a reliable, accurate and cost-effective way. Results Multiplex Amplifiable Probe Hybridisation (MAPH) is a sensitive, high-resolution technology appropriate for screening for CNVs in a defined region, for a targeted population. We have developed MAPH to a highly multiplexed format ("QuadMAPH") that allows the user a four-fold increase in the number of loci tested simultaneously. We have used this method to analyse a genomic region of 210 kb, including the MSH2 gene and 120 kb of flanking DNA. We show that the QuadMAPH probes report copy number with equivalent accuracy to simplex MAPH, reliably demonstrating diploid copy number in control samples and accurately detecting deletions in Hereditary Non-Polyposis Colorectal Cancer (HNPCC) samples. Conclusion QuadMAPH is an accurate, high-resolution method that allows targeted screening of large numbers of subjects without the expense of genome-wide approaches. Whilst we have applied this technique to a region of the human genome, it is equally applicable to the genomes of other organisms.

  20. Chaotic scattering: the supersymmetry method for large number of channels

    International Nuclear Information System (INIS)

    Lehmann, N.; Saher, D.; Sokolov, V.V.; Sommers, H.J.

    1995-01-01

    We investigate a model of chaotic resonance scattering based on the random matrix approach. The hermitian part of the effective hamiltonian of resonance states is taken from the GOE, whereas the amplitudes of coupling to decay channels are considered to be either random or fixed. A new version of the supersymmetry method is worked out to determine analytically the distribution of poles of the S-matrix in the complex energy plane as well as the mean value and two-point correlation function of its elements when the number of channels scales with the number of resonance states. Analytical formulae are compared with numerical simulations. All results obtained coincide in both models provided that the ratio m of the numbers of channels and resonances is small enough, and they remain qualitatively similar for larger values of m. The relation between the pole distribution and the fluctuations in scattering is discussed. It is shown in particular that the clouds of poles of the S-matrix in the complex energy plane are separated from the real axis by a finite gap Γ_g which determines the correlation length in the scattering fluctuations and leads to the exponential asymptotics of the decay law of a complicated intermediate state. ((orig.))

  1. Chaotic scattering: the supersymmetry method for large number of channels

    Energy Technology Data Exchange (ETDEWEB)

    Lehmann, N. (Essen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik); Saher, D. (Essen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik); Sokolov, V.V. (Essen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik); Sommers, H.J. (Essen Univ. (Gesamthochschule) (Germany). Fachbereich 7 - Physik)

    1995-01-23

    We investigate a model of chaotic resonance scattering based on the random matrix approach. The hermitian part of the effective hamiltonian of resonance states is taken from the GOE, whereas the amplitudes of coupling to decay channels are considered to be either random or fixed. A new version of the supersymmetry method is worked out to determine analytically the distribution of poles of the S-matrix in the complex energy plane as well as the mean value and two-point correlation function of its elements when the number of channels scales with the number of resonance states. Analytical formulae are compared with numerical simulations. All results obtained coincide in both models provided that the ratio m of the numbers of channels and resonances is small enough, and they remain qualitatively similar for larger values of m. The relation between the pole distribution and the fluctuations in scattering is discussed. It is shown in particular that the clouds of poles of the S-matrix in the complex energy plane are separated from the real axis by a finite gap Γ_g which determines the correlation length in the scattering fluctuations and leads to the exponential asymptotics of the decay law of a complicated intermediate state. ((orig.))

  2. Noise, sampling, and the number of projections in cone-beam CT with a flat-panel detector

    International Nuclear Information System (INIS)

    Zhao, Z.; Gang, G. J.; Siewerdsen, J. H.

    2014-01-01

    Purpose: To investigate the effect of the number of projection views on image noise in cone-beam CT (CBCT) with a flat-panel detector. Methods: This fairly fundamental consideration in CBCT system design and operation was addressed experimentally (using a phantom presenting a uniform medium as well as statistically motivated “clutter”) and theoretically (using a cascaded systems model describing CBCT noise) to elucidate the contributing factors of quantum noise (σ_Q), electronic noise (σ_E), and view aliasing (σ_view). Analysis included investigation of the noise, noise-power spectrum, and modulation transfer function as a function of the number of projections (N_proj), dose (D_tot), and voxel size (b_vox). Results: The results reveal a nonmonotonic relationship between image noise and N_proj at fixed total dose: for the CBCT system considered, noise decreased with increasing N_proj in the low-N_proj regime, due to reduction of view sampling effects, and increased with N_proj in the high-N_proj regime, due to increased electronic noise. View sampling effects were shown to depend on the heterogeneity of the object in a direct analytical relationship to power-law anatomical clutter of the form κ/f^β, and a general model of individual noise components (σ_Q, σ_E, and σ_view) demonstrated agreement with measurements over a broad range in N_proj, D_tot, and b_vox. Conclusions: The work elucidates fairly basic elements of CBCT noise in a manner that demonstrates the role of distinct noise components (viz., quantum, electronic, and view sampling noise). For configurations fairly typical of CBCT with a flat-panel detector (FPD), the analysis reveals a “sweet spot” (i.e., minimum noise) in the range N_proj ∼ 250–350, nearly an order of magnitude lower in N_proj than typical of multidetector CT, owing to the relatively high electronic noise in FPDs. The analysis explicitly relates view aliasing and quantum noise in a manner that includes aspects of the object (“clutter”) and imaging chain
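
    The nonmonotonic behaviour can be illustrated with a deliberately simplified noise budget; the functional forms below (constant quantum term, electronic term growing linearly with the number of views at fixed total dose, view-aliasing term falling as 1/N_proj) and all coefficients are assumptions chosen for illustration only, not the paper's cascaded-systems model.

    ```python
    import numpy as np

    # Illustrative-only noise model: at fixed total dose, the quantum term is taken
    # constant, the electronic term grows with the number of views, and the
    # view-aliasing term falls as 1/N_proj, so a minimum ("sweet spot") appears.
    def total_noise(n_proj, sigma_q2=1.0, e_per_view=0.004, view_coeff=300.0):
        return np.sqrt(sigma_q2 + e_per_view * n_proj + view_coeff / n_proj)

    n_proj = np.arange(50, 2000, 10)
    noise = total_noise(n_proj)
    print("minimum-noise N_proj ≈", n_proj[np.argmin(noise)])
    ```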

  3. Neutrino number of the universe

    International Nuclear Information System (INIS)

    Kolb, E.W.

    1981-01-01

    The influence of grand unified theories on the lepton number of the universe is reviewed. A scenario is presented for the generation of a large (>> 1) lepton number and a small (<< 1) baryon number. 15 references

  4. Concepts in sample size determination

    Directory of Open Access Journals (Sweden)

    Umadevi K Rao

    2012-01-01

    Full Text Available Investigators involved in clinical, epidemiological or translational research have the drive to publish their results so that they can extrapolate their findings to the population. This begins with the preliminary step of deciding the topic to be studied, the subjects and the type of study design. In this context, the researcher must determine how many subjects would be required for the proposed study. Thus, the number of individuals to be included in the study, i.e., the sample size, is an important consideration in the design of many clinical studies. The sample size determination should be based on the difference in the outcome between the two groups studied, as in an analytical study, as well as on the accepted p value for statistical significance and the required statistical power to test a hypothesis. The accepted risk of type I error or alpha value, which by convention is set at the 0.05 level in biomedical research, defines the cutoff point at which the p value obtained in the study is judged as significant or not. The power in clinical research is the likelihood of finding a statistically significant result when it exists and is typically set to >80%. This is necessary since the most rigorously executed studies may fail to answer the research question if the sample size is too small. Alternatively, a study with too large a sample size will be difficult and will result in waste of time and resources. Thus, the goal of sample size planning is to estimate an appropriate number of subjects for a given study design. This article describes the concepts in estimating the sample size.
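
    A short sketch of the classical two-sided, two-sample calculation implied above, using the conventional alpha = 0.05 and 80% power; the numerical inputs are made up for illustration.

    ```python
    from math import ceil
    from statistics import NormalDist

    def n_per_group(delta, sigma, alpha=0.05, power=0.80):
        """Classical formula for comparing two means:
        n = 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta)^2 per group,
        where delta is the smallest difference worth detecting and sigma the SD."""
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
        z_beta = NormalDist().inv_cdf(power)
        return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

    # e.g. detect a 5-unit difference with SD 12 at alpha = 0.05 and 80% power
    print(n_per_group(delta=5, sigma=12))   # about 91 subjects per group
    ```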

  5. Association between time perspective and organic food consumption in a large sample of adults.

    Science.gov (United States)

    Bénard, Marc; Baudry, Julia; Méjean, Caroline; Lairon, Denis; Giudici, Kelly Virecoulon; Etilé, Fabrice; Reach, Gérard; Hercberg, Serge; Kesse-Guyot, Emmanuelle; Péneau, Sandrine

    2018-01-05

    Organic food intake has risen in many countries during the past decades. Even though motivations associated with such choice have been studied, psychological traits preceding these motivations have rarely been explored. Consideration of future consequences (CFC) represents the extent to which individuals consider future versus immediate consequences of their current behaviors. Consequently, a future oriented personality may be an important characteristic of organic food consumers. The objective was to analyze the association between CFC and organic food consumption in a large sample of the adult general population. In 2014, a sample of 27,634 participants from the NutriNet-Santé cohort study completed the CFC questionnaire and an Organic-Food Frequency questionnaire. For each food group (17 groups), non-organic food consumers were compared to organic food consumers across quartiles of the CFC using multiple logistic regressions. Moreover, adjusted means of proportions of organic food intakes out of total food intakes were compared between quartiles of the CFC. Analyses were adjusted for socio-demographic, lifestyle and dietary characteristics. Participants with higher CFC were more likely to consume organic food (OR quartile 4 (Q4) vs. Q1 = 1.88, 95% CI: 1.62, 2.20). Overall, future oriented participants were more likely to consume 14 food groups. The strongest associations were observed for starchy refined foods (OR = 1.78, 95% CI: 1.63, 1.94), and fruits and vegetables (OR = 1.74, 95% CI: 1.58, 1.92). The contribution of organic food intake out of total food intake was 33% higher in the Q4 compared to Q1. More precisely, the contribution of organic food consumed was higher in the Q4 for 16 food groups. The highest relative differences between Q4 and Q1 were observed for starchy refined foods (22%) and non-alcoholic beverages (21%). Seafood was the only food group without a significant difference. This study provides information on the personality of

  6. CRISPR transcript processing: a mechanism for generating a large number of small interfering RNAs

    Directory of Open Access Journals (Sweden)

    Djordjevic Marko

    2012-07-01

    Full Text Available Abstract Background CRISPR/Cas (Clustered Regularly Interspaced Short Palindromic Repeats/CRISPR-associated sequences) is a recently discovered prokaryotic defense system against foreign DNA, including viruses and plasmids. The CRISPR cassette is transcribed as a continuous transcript (pre-crRNA), which is processed by Cas proteins into small RNA molecules (crRNAs) that are responsible for defense against invading viruses. Experiments in E. coli report that overexpression of cas genes generates a large number of crRNAs from only a few pre-crRNAs. Results We here develop a minimal model of CRISPR processing, which we parameterize based on available experimental data. From the model, we show that the system can generate a large amount of crRNAs based on only a small decrease in the amount of pre-crRNAs. The relationship between the decrease of pre-crRNAs and the increase of crRNAs corresponds to strong linear amplification. Interestingly, this strong amplification crucially depends on fast non-specific degradation of pre-crRNA by an unidentified nuclease. We show that overexpression of cas genes above a certain level does not result in further increase of crRNA, but that this saturation can be relieved if the rate of CRISPR transcription is increased. We furthermore show that a small increase of the CRISPR transcription rate can substantially decrease the extent of cas gene activation necessary to achieve a desired amount of crRNA. Conclusions The simple mathematical model developed here is able to explain existing experimental observations on CRISPR transcript processing in Escherichia coli. The model shows that a competition between specific pre-crRNA processing and non-specific degradation determines the steady-state levels of crRNA and is responsible for strong linear amplification of crRNAs when cas genes are overexpressed. The model further shows how disappearance of only a few pre-crRNA molecules normally present in the cell can lead to a large (two
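
    A minimal deterministic sketch in the spirit of the model described above: pre-crRNA is produced by transcription and removed by specific processing plus non-specific degradation, while crRNA is produced by processing and slowly degraded. Parameter names and values are illustrative assumptions, not the paper's fitted rates.

    ```python
    # d[pre]/dt = T - (k_proc + k_deg) * pre        (transcription vs processing/degradation)
    # d[cr]/dt  = k_proc * n_spacers * pre - d_cr * cr
    def simulate(T=1.0, k_proc=0.5, k_deg=5.0, n_spacers=12, d_cr=0.05,
                 dt=0.01, t_end=200.0):
        pre, cr = 0.0, 0.0
        for _ in range(int(t_end / dt)):
            d_pre = T - (k_proc + k_deg) * pre
            d_crr = k_proc * n_spacers * pre - d_cr * cr
            pre, cr = pre + dt * d_pre, cr + dt * d_crr
        return pre, cr

    print(simulate())                       # low steady-state pre-crRNA, much higher crRNA
    print(simulate(k_proc=5.0, k_deg=5.0))  # faster processing (cas overexpression) raises crRNA
    ```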

  7. Large sample hydrology in NZ: Spatial organisation in process diagnostics

    Science.gov (United States)

    McMillan, H. K.; Woods, R. A.; Clark, M. P.

    2013-12-01

    A key question in hydrology is how to predict the dominant runoff generation processes in any given catchment. This knowledge is vital for a range of applications in forecasting hydrological response and related processes such as nutrient and sediment transport. A step towards this goal is to map dominant processes in locations where data is available. In this presentation, we use data from 900 flow gauging stations and 680 rain gauges in New Zealand, to assess hydrological processes. These catchments range in character from rolling pasture, to alluvial plains, to temperate rainforest, to volcanic areas. By taking advantage of so many flow regimes, we harness the benefits of large-sample and comparative hydrology to study patterns and spatial organisation in runoff processes, and their relationship to physical catchment characteristics. The approach we use to assess hydrological processes is based on the concept of diagnostic signatures. Diagnostic signatures in hydrology are targeted analyses of measured data which allow us to investigate specific aspects of catchment response. We apply signatures which target the water balance, the flood response and the recession behaviour. We explore the organisation, similarity and diversity in hydrological processes across the New Zealand landscape, and how these patterns change with scale. We discuss our findings in the context of the strong hydro-climatic gradients in New Zealand, and consider the implications for hydrological model building on a national scale.

  8. Growth of equilibrium structures built from a large number of distinct component types.

    Science.gov (United States)

    Hedges, Lester O; Mannige, Ranjan V; Whitelam, Stephen

    2014-09-14

    We use simple analytic arguments and lattice-based computer simulations to study the growth of structures made from a large number of distinct component types. Components possess 'designed' interactions, chosen to stabilize an equilibrium target structure in which each component type has a defined spatial position, as well as 'undesigned' interactions that allow components to bind in a compositionally-disordered way. We find that high-fidelity growth of the equilibrium target structure can happen in the presence of substantial attractive undesigned interactions, as long as the energy scale of the set of designed interactions is chosen appropriately. This observation may help explain why equilibrium DNA 'brick' structures self-assemble even if undesigned interactions are not suppressed [Ke et al. Science, 338, 1177, (2012)]. We also find that high-fidelity growth of the target structure is most probable when designed interactions are drawn from a distribution that is as narrow as possible. We use this result to suggest how to choose complementary DNA sequences in order to maximize the fidelity of multicomponent self-assembly mediated by DNA. We also comment on the prospect of growing macroscopic structures in this manner.

  9. Analysis of large databases in vascular surgery.

    Science.gov (United States)

    Nguyen, Louis L; Barshes, Neal R

    2010-09-01

    Large databases can be a rich source of clinical and administrative information on broad populations. These datasets are characterized by demographic and clinical data for over 1000 patients from multiple institutions. Since they are often collected and funded for other purposes, their use for secondary analysis increases their utility at relatively low costs. Advantages of large databases as a source include the very large numbers of available patients and their related medical information. Disadvantages include lack of detailed clinical information and absence of causal descriptions. Researchers working with large databases should also be mindful of data structure design and inherent limitations to large databases, such as treatment bias and systemic sampling errors. Withstanding these limitations, several important studies have been published in vascular care using large databases. They represent timely, "real-world" analyses of questions that may be too difficult or costly to address using prospective randomized methods. Large databases will be an increasingly important analytical resource as we focus on improving national health care efficacy in the setting of limited resources.

  10. Solid-Phase Extraction and Large-Volume Sample Stacking-Capillary Electrophoresis for Determination of Tetracycline Residues in Milk

    Directory of Open Access Journals (Sweden)

    Gabriela Islas

    2018-01-01

    Full Text Available Solid-phase extraction in combination with large-volume sample stacking-capillary electrophoresis (SPE-LVSS-CE) was applied to measure chlortetracycline, doxycycline, oxytetracycline, and tetracycline in milk samples. Under optimal conditions, the proposed method had a linear range of 29 to 200 µg·L−1, with limits of detection ranging from 18.6 to 23.8 µg·L−1 and with inter- and intraday repeatabilities < 10% (as a relative standard deviation) in all cases. The enrichment factors obtained were from 50.33 to 70.85 for all the TCs compared with conventional capillary zone electrophoresis (CZE). This method is adequate to analyze tetracyclines below the most restrictive established maximum residue limits. The proposed method was employed in the analysis of 15 milk samples from different brands. Two of the tested samples were positive for the presence of oxytetracycline, with concentrations of 95 and 126 µg·L−1. SPE-LVSS-CE is a robust, easy, and efficient strategy for online preconcentration of tetracycline residues in complex matrices.

  11. The electron transport problem sampling by Monte Carlo individual collision technique

    Energy Technology Data Exchange (ETDEWEB)

    Androsenko, P.A.; Belousov, V.I. [Obninsk State Technical Univ. of Nuclear Power Engineering, Kaluga region (Russian Federation)

    2005-07-01

    The problem of electron transport is of great interest in all fields of modern science, and Monte Carlo sampling is used to solve it. Electron transport is characterized by a very large number of individual interactions. To simulate it, the 'condensed history' technique may be used, in which a large number of collisions are grouped into a single step that is sampled randomly. Another kind of Monte Carlo sampling is the individual collision technique, which has incontestable advantages over the condensed history technique. For example, one does not need to specify the parameters required by the condensed history technique, such as the upper limit for electron energy, the resolution, or the number of sub-steps. The condensed history technique may also lose some very important electron tracks because of the limitations imposed by its step parameters for particle movement and because of weaknesses in its algorithms, for example the energy indexing algorithm. The individual collision technique has none of these disadvantages. This report presents some sampling algorithms of the new version of the BRAND code, in which the above-mentioned technique is used. All information on electrons was taken from ENDF-6 files, which form an important part of BRAND; these files have not been processed but are taken directly from the electron data source. Four kinds of interaction were considered: elastic scattering, bremsstrahlung, atomic excitation, and atomic electro-ionization. Some sampling results are presented in comparison with analogues; for example, the endovascular radiotherapy problem (P2) of QUADOS 2002 is presented in comparison with other commonly used techniques. (authors)

  12. Does size really matter? A sensitivity analysis of number of seeds in a respondent-driven sampling study of gay, bisexual and other men who have sex with men in Vancouver, Canada.

    Science.gov (United States)

    Lachowsky, Nathan John; Sorge, Justin Tyler; Raymond, Henry Fisher; Cui, Zishan; Sereda, Paul; Rich, Ashleigh; Roth, Eric A; Hogg, Robert S; Moore, David M

    2016-11-16

    fell within 95% confidence intervals of all RDS-weighted estimates. Significant differences between the three RDS estimators were not observed. Within a sample of GBMSM in Vancouver, Canada, this RDS study suggests that, when equilibrium and homophily criteria are met, analysis is not negatively affected by large numbers of unproductive or minimally productive seeds, even though recruiting such seeds is potentially costly and time consuming.

  13. Clustering of samples and elements based on multi-variable chemical data

    International Nuclear Information System (INIS)

    Op de Beeck, J.

    1984-01-01

    Clustering and classification are defined in the context of multivariable chemical analysis data. Classical multi-variate techniques, commonly used to interpret such data, are shown to be based on probabilistic and geometrical principles which are not justified for analytical data, since in that case one assumes or expects a system of more or less systematically related objects (samples) as defined by measurements on more or less systematically interdependent variables (elements). For the specific analytical problem of a data set concerning a large number of trace elements determined in a large number of samples, a deterministic cluster analysis can be used to develop the underlying classification structure. Three main steps can be distinguished: diagnostic evaluation and preprocessing of the raw input data; computation of a symmetric matrix with pairwise standardized dissimilarity values between all possible pairs of samples and/or elements; and an ultrametric clustering strategy to produce the final classification as a dendrogram. The software packages designed to perform these tasks are discussed and final results are given. Conclusions are formulated concerning the dangers of using multivariate, clustering and classification software packages as black boxes
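
    A compact sketch of the three-step workflow described above (standardization, pairwise dissimilarity matrix, agglomerative clustering into a dendrogram), using common scientific-Python tools rather than the software packages discussed in the record; the toy data and all names are illustrative.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, dendrogram
    from scipy.spatial.distance import pdist

    # Toy data standing in for trace-element concentrations: rows = samples, columns = elements.
    rng = np.random.default_rng(0)
    data = np.vstack([rng.normal(loc, 0.3, size=(5, 6)) for loc in (0.0, 2.0, 5.0)])

    # Steps 1-2: standardize the variables and build the pairwise dissimilarity matrix.
    z = (data - data.mean(axis=0)) / data.std(axis=0)
    dissimilarity = pdist(z, metric="euclidean")

    # Step 3: agglomerative clustering and the dendrogram describing the classification.
    tree = linkage(dissimilarity, method="average")
    dendrogram(tree, no_plot=True)   # set no_plot=False with matplotlib installed to draw it
    ```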

  14. Specific Antibodies Reacting with SV40 Large T Antigen Mimotopes in Serum Samples of Healthy Subjects.

    Directory of Open Access Journals (Sweden)

    Mauro Tognon

    Full Text Available Simian Virus 40, experimentally assayed in vitro in different animal and human cells and in vivo in rodents, was classified as a small DNA tumor virus. In previous studies, many groups identified Simian Virus 40 sequences in healthy individuals and cancer patients using PCR techniques, whereas others failed to detect the viral sequences in human specimens. These conflicting results prompted us to develop a novel indirect ELISA with synthetic peptides, mimicking Simian Virus 40 capsid viral protein antigens, named mimotopes. This immunologic assay allowed us to investigate the presence of serum antibodies against Simian Virus 40 and to verify whether Simian Virus 40 is circulating in humans. In this investigation two mimotopes from Simian Virus 40 large T antigen, the viral replication protein and oncoprotein, were employed to assay human serum samples for specific antibody reactions. This indirect ELISA with synthetic peptides from Simian Virus 40 large T antigen was used to assay a new collection of serum samples from healthy subjects. This novel assay revealed that serum antibodies against Simian Virus 40 large T antigen mimotopes are detectable, at low titer, in healthy subjects aged 18-65 years. The overall prevalence of reactivity with the two Simian Virus 40 large T antigen peptides was 20%. This new ELISA with two mimotopes of the early viral regions is able to detect Simian Virus 40 large T antigen antibody responses in a specific manner.

  15. Random Walks on Directed Networks: Inference and Respondent-Driven Sampling

    Directory of Open Access Journals (Sweden)

    Malmros Jens

    2016-06-01

    Full Text Available Respondent-driven sampling (RDS) is often used to estimate population properties (e.g., sexual risk behavior) in hard-to-reach populations. In RDS, already sampled individuals recruit population members to the sample from their social contacts in an efficient snowball-like sampling procedure. By assuming a Markov model for the recruitment of individuals, asymptotically unbiased estimates of population characteristics can be obtained. Current RDS estimation methodology assumes that the social network is undirected, that is, all edges are reciprocal. However, empirical social networks in general also include a substantial number of nonreciprocal edges. In this article, we develop an estimation method for RDS in populations connected by social networks that include reciprocal and nonreciprocal edges. We derive estimators of the selection probabilities of individuals as a function of the number of outgoing edges of sampled individuals. The proposed estimators are evaluated on artificial and empirical networks and are shown to generally perform better than existing estimators. This is the case in particular when the fraction of directed edges in the network is large.
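
    As an illustration of the kind of estimator discussed above, the sketch below implements the standard Hajek-type RDS-II estimator with selection probabilities taken proportional to each respondent's reported number of outgoing edges; the estimators actually derived in the article differ in their details, and the data here are invented.

      import numpy as np

      def rds_ii_estimate(trait, out_degree):
          """RDS-II (Hajek-type) estimate of a population proportion.

          trait      : 0/1 indicator of the characteristic for each sampled individual
          out_degree : reported number of outgoing edges (potential recruits)

          Selection probability is taken proportional to out-degree, so each
          observation is weighted by 1/out_degree.
          """
          y = np.asarray(trait, dtype=float)
          w = 1.0 / np.asarray(out_degree, dtype=float)
          return float(np.sum(w * y) / np.sum(w))

      # toy usage with made-up degrees and trait indicators
      print(rds_ii_estimate([1, 0, 1, 1, 0], [10, 3, 8, 2, 5]))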

  16. Sampling procedures and tables

    International Nuclear Information System (INIS)

    Franzkowski, R.

    1980-01-01

    Characteristics, defects, defectives - Sampling by attributes and by variables - Sample versus population - Frequency distributions for the number of defectives or the number of defects in the sample - Operating characteristic curve, producer's risk, consumer's risk - Acceptable quality level AQL - Average outgoing quality AOQ - Standard ISO 2859 - Fundamentals of sampling by variables for fraction defective. (RW)
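
    For single sampling by attributes, the operating characteristic curve mentioned above follows directly from the binomial distribution; a small sketch, with the plan (sample size n, acceptance number c) chosen purely for illustration:

      from scipy.stats import binom

      def oc_curve(n, c, p_values):
          """P(accept lot) when the lot is accepted if the number of
          defectives in a sample of n items is at most c."""
          return {p: binom.cdf(c, n, p) for p in p_values}

      # illustrative plan: n = 80, acceptance number c = 2
      for p, pa in oc_curve(80, 2, [0.01, 0.025, 0.05, 0.10]).items():
          # for rectifying inspection of large lots, AOQ is roughly p * P(accept)
          print(f"fraction defective {p:5.3f}  P(accept) = {pa:5.3f}  AOQ ~ {p * pa:5.3f}")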

  17. Assessing the validity of single-item life satisfaction measures: results from three large samples.

    Science.gov (United States)

    Cheung, Felix; Lucas, Richard E

    2014-12-01

    The present paper assessed the validity of single-item life satisfaction measures by comparing single-item measures to the Satisfaction with Life Scale (SWLS), a more psychometrically established measure. Two large samples from Washington (N = 13,064) and Oregon (N = 2,277) recruited by the Behavioral Risk Factor Surveillance System and a representative German sample (N = 1,312) recruited by the Germany Socio-Economic Panel were included in the present analyses. Single-item life satisfaction measures and the SWLS were correlated with theoretically relevant variables, such as demographics, subjective health, domain satisfaction, and affect. The correlations between the two life satisfaction measures and these variables were examined to assess the construct validity of single-item life satisfaction measures. Consistent across three samples, single-item life satisfaction measures demonstrated a substantial degree of criterion validity with the SWLS (zero-order r = 0.62-0.64; disattenuated r = 0.78-0.80). Patterns of statistical significance for correlations with theoretically relevant variables were the same across single-item measures and the SWLS. Single-item measures did not produce systematically different correlations compared to the SWLS (average difference = 0.001-0.005). The average absolute difference in the magnitudes of the correlations produced by single-item measures and the SWLS was very small (average absolute difference = 0.015-0.042). Single-item life satisfaction measures performed very similarly compared to the multiple-item SWLS. Social scientists would get virtually identical answers to substantive questions regardless of which measure they use.
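
    The zero-order and disattenuated correlations quoted above are linked by the classical correction for attenuation; a small sketch, where the reliability values are assumptions for illustration rather than figures from the paper:

      import math

      def disattenuated_r(r_xy, rel_x, rel_y):
          """Correct an observed correlation for measurement unreliability:
          r_true ~ r_xy / sqrt(rel_x * rel_y)."""
          return r_xy / math.sqrt(rel_x * rel_y)

      # e.g. observed r = 0.63 between a single item and the SWLS, with assumed
      # reliabilities of 0.70 (single item) and 0.90 (SWLS)
      print(disattenuated_r(0.63, 0.70, 0.90))  # ~0.79, in line with the reported 0.78-0.80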

  18. Results of the Characterization and Dissolution Tests of Samples from Tank 16H

    International Nuclear Information System (INIS)

    Hay, M.S.

    1999-01-01

    Samples from the Tank 16H annulus and one sample from the tank interior were characterized to provide a source term for use in fate and transport modeling. Four of the annulus samples appeared to be similar based on visual examination and were combined to form a composite. One of the annulus samples appeared to be different from the other four based on visual examination and was analyzed separately. The analytical results of the tank interior sample indicate the sample is composed predominantly of iron-containing compounds. Both of the annulus samples are composed mainly of sodium salts; however, the composite sample contained significantly more sludge/sand material of low solubility. The characterization of the Tank 16H annulus and tank interior samples was hampered by the high dose rate and the nature of the samples. The difficulties resulted in large uncertainties in the analytical data. The large uncertainties, coupled with the number of important species below detection limits, indicate the need for reanalysis of the Tank 16H samples as funding becomes available. Recommendations on potential remedies for these difficulties are provided. In general, none of the reagents appeared to be effective in dissolving the composite sample even after two contacts at elevated temperature. In contrast to the composite sample, all of the reagents dissolved a large percentage of the HTF-087 solids after two contacts at ambient temperature.

  19. The redshift number density evolution of Mg II absorption systems

    International Nuclear Information System (INIS)

    Chen Zhi-Fu

    2013-01-01

    We make use of the recent large sample of 17 042 Mg II absorption systems from Quider et al. to analyze the evolution of the redshift number density. Regardless of the strength of the absorption line, we find that the evolution of the redshift number density can be clearly distinguished into three different phases. In the intermediate redshift epoch (0.6 ≲ z ≲ 1.6), the evolution of the redshift number density is consistent with the non-evolution curve, however, the non-evolution curve over-predicts the values of the redshift number density in the early (z ≲ 0.6) and late (z ≳ 1.6) epochs. Based on the invariant cross-section of the absorber, the lack of evolution in the redshift number density compared to the non-evolution curve implies the galaxy number density does not evolve during the middle epoch. The flat evolution of the redshift number density tends to correspond to a shallow evolution in the galaxy merger rate during the late epoch, and the steep decrease of the redshift number density might be ascribed to the small mass of halos during the early epoch.
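
    For absorbers with constant comoving number density and constant cross-section, the "non-evolution curve" referred to above is usually written as dN/dz proportional to (1 + z)^2 / E(z), with E(z) = sqrt(Omega_m (1 + z)^3 + Omega_Lambda) for a flat universe. The sketch below evaluates it; the normalization and cosmological parameters are illustrative, not values taken from the paper.

      import numpy as np

      def dN_dz_no_evolution(z, n0_sigma_c_over_H0=1.0, omega_m=0.3, omega_l=0.7):
          """Redshift number density expected with no evolution in comoving
          number density or absorber cross-section:
              dN/dz = n0 * sigma * (c/H0) * (1 + z)**2 / E(z),
              E(z)  = sqrt(omega_m * (1 + z)**3 + omega_l)."""
          z = np.asarray(z, dtype=float)
          E = np.sqrt(omega_m * (1.0 + z) ** 3 + omega_l)
          return n0_sigma_c_over_H0 * (1.0 + z) ** 2 / E

      print(dN_dz_no_evolution([0.3, 1.0, 1.8]))  # arbitrary normalization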

  20. ExSample. A library for sampling Sudakov-type distributions

    Energy Technology Data Exchange (ETDEWEB)

    Plaetzer, Simon

    2011-08-15

    Sudakov-type distributions are at the heart of generating radiation in parton showers as well as contemporary NLO matching algorithms along the lines of the POWHEG algorithm. In this paper, the C++ library ExSample is introduced, which implements adaptive sampling of Sudakov-type distributions for splitting kernels which are in general only known numerically. Besides the evolution variable, the splitting kernels can depend on an arbitrary number of other degrees of freedom to be sampled, and any number of further parameters which are fixed on an event-by-event basis. (orig.)

  1. ExSample. A library for sampling Sudakov-type distributions

    International Nuclear Information System (INIS)

    Plaetzer, Simon

    2011-08-01

    Sudakov-type distributions are at the heart of generating radiation in parton showers as well as contemporary NLO matching algorithms along the lines of the POWHEG algorithm. In this paper, the C++ library ExSample is introduced, which implements adaptive sampling of Sudakov-type distributions for splitting kernels which are in general only known numerically. Besides the evolution variable, the splitting kernels can depend on an arbitrary number of other degrees of freedom to be sampled, and any number of further parameters which are fixed on an event-by-event basis. (orig.)
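
    The generic way to draw from a Sudakov-type density whose kernel is only known numerically is the veto algorithm sketched below; ExSample adds adaptive overestimates and further degrees of freedom on top of this idea, so the sketch illustrates the underlying sampling problem rather than the library's actual implementation.

      import random

      def sudakov_veto(t_start, t_min, f, g, g_inverse_step):
          """Sample the next emission scale t < t_start from the density
          f(t) * exp(-Integral_t^{t_start} f(t') dt') using the veto algorithm
          with an overestimate g(t) >= f(t).  g_inverse_step(t_prev, r) must
          return the t solving exp(-Integral_t^{t_prev} g(t') dt') = r."""
          t = t_start
          while True:
              t = g_inverse_step(t, random.random())
              if t < t_min:
                  return None                   # no emission above the cutoff
              if random.random() < f(t) / g(t):
                  return t                      # accepted emission scale

      # toy kernel f(t) = 1/t, overestimated by g(t) = 2/t
      f = lambda t: 1.0 / t
      g = lambda t: 2.0 / t
      # for g(t) = c/t:  exp(-c*log(t_prev/t)) = r  =>  t = t_prev * r**(1/c)
      g_inv = lambda t_prev, r: t_prev * r ** (1.0 / 2.0)
      print(sudakov_veto(100.0, 1.0, f, g, g_inv))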

  2. Improved grand canonical sampling of vapour-liquid transitions.

    Science.gov (United States)

    Wilding, Nigel B

    2016-10-19

    Simulation within the grand canonical ensemble is the method of choice for accurate studies of first order vapour-liquid phase transitions in model fluids. Such simulations typically employ sampling that is biased with respect to the overall number density in order to overcome the free energy barrier associated with mixed phase states. However, at low temperature and for large system size, this approach suffers a drastic slowing down in sampling efficiency. The culprits are geometrically induced transitions (stemming from the periodic boundary conditions) which involve changes in droplet shape from sphere to cylinder and cylinder to slab. Since the overall number density does not discriminate sufficiently between these shapes, it fails as an order parameter for biasing through the transitions. Here we report two approaches to ameliorating these difficulties. The first introduces a droplet shape based order parameter that generates a transition path from vapour to slab states for which spherical and cylindrical droplets are suppressed. The second simply biases with respect to the number density in a tetragonal subvolume of the system. Compared to the standard approach, both methods offer improved sampling, allowing estimates of coexistence parameters and vapor-liquid surface tension for larger system sizes and lower temperatures.

  3. Should warfarin or aspirin be stopped prior to prostate biopsy? An analysis of bleeding complications related to increasing sample number regimes

    International Nuclear Information System (INIS)

    Chowdhury, R.; Abbas, A.; Idriz, S.; Hoy, A.; Rutherford, E.E.; Smart, J.M.

    2012-01-01

    Aim: To determine whether patients undergoing transrectal ultrasound (TRUS)-guided prostate biopsy with increased sampling numbers are more likely to experience bleeding complications and whether warfarin or low-dose aspirin are independent risk factors. Materials and methods: 930 consecutive patients with suspected prostatic cancer were followed up after biopsy. Warfarin/low-dose aspirin was not stopped prior to the procedure. An eight to 10 sample regime TRUS-guided prostate biopsy was performed and patients were offered a questionnaire to complete 10 days after the procedure, to determine any immediate or delayed bleeding complications. Results: 902 patients returned completed questionnaires. 579 (64.2%) underwent eight core biopsies, 47 (5.2%) underwent nine, and 276 (30.6%) underwent 10. 68 were taking warfarin [mean international normalized ratio (INR) = 2.5], 216 were taking low-dose aspirin, one was taking both, and 617 were taking neither. 27.9% of those on warfarin and 33.8% of those on aspirin experienced haematuria. 37% of those on no blood-thinning medication experienced haematuria. 13.2% of those on warfarin and 14.4% of those on aspirin experienced rectal bleeding. 11.5% of those on no blood-thinning medication experienced rectal bleeding. 7.4% of those on warfarin and 12% of those on aspirin experienced haematospermia. 13.8% of those on neither experienced haematospermia. Regression analysis showed a significant association between increasing sampling number and occurrence of all bleeding complication types. There was no significant association between minor bleeding complications and warfarin use; however, there was a significant association between minor bleeding complications and low-dose aspirin use. There was no severe bleeding complication. Conclusion: There is an increased risk of bleeding complications following TRUS-guided prostate biopsy with increased sampling numbers, but these are minor. There is also an increased risk with low-dose aspirin use.

  4. Accurate measurement of gene copy number for human alpha-defensin DEFA1A3.

    Science.gov (United States)

    Khan, Fayeza F; Carpenter, Danielle; Mitchell, Laura; Mansouri, Omniah; Black, Holly A; Tyson, Jess; Armour, John A L

    2013-10-20

    Multi-allelic copy number variants include examples of extensive variation between individuals in the copy number of important genes, most notably genes involved in immune function. The definition of this variation, and analysis of its impact on function, has been hampered by the technical difficulty of large-scale but accurate typing of genomic copy number. The copy-variable alpha-defensin locus DEFA1A3 on human chromosome 8 commonly varies between 4 and 10 copies per diploid genome, and presents considerable challenges for accurate high-throughput typing. In this study, we developed two paralogue ratio tests and three allelic ratio measurements that, in combination, provide an accurate and scalable method for measurement of DEFA1A3 gene number. We combined information from different measurements in a maximum-likelihood framework which suggests that most samples can be assigned to an integer copy number with high confidence, and applied it to typing 589 unrelated European DNA samples. Typing the members of three-generation pedigrees provided further reassurance that correct integer copy numbers had been assigned. Our results have allowed us to discover that the SNP rs4300027 is strongly associated with DEFA1A3 gene copy number in European samples. We have developed an accurate and robust method for measurement of DEFA1A3 copy number. Interrogation of rs4300027 and associated SNPs in Genome-Wide Association Study SNP data provides no evidence that alpha-defensin copy number is a strong risk factor for phenotypes such as Crohn's disease, type I diabetes, HIV progression and multiple sclerosis.
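
    A much-simplified sketch of the final step described above, assigning integer copy numbers by maximum likelihood from continuous measurements: each measurement is assumed Gaussian around an integer state with an assumed spread. The real analysis combines several paralogue-ratio and allelic-ratio measurements, which this sketch does not attempt to reproduce.

      import numpy as np
      from scipy.stats import norm

      def assign_copy_number(measurements, copy_states=range(3, 12), sigma=0.25):
          """Return the most likely integer copy number for each continuous
          measurement, plus its posterior probability under a flat prior over
          the candidate states (sigma is an assumed measurement spread)."""
          x = np.asarray(measurements, dtype=float)[:, None]
          states = np.asarray(list(copy_states), dtype=float)[None, :]
          like = norm.pdf(x, loc=states, scale=sigma)
          post = like / like.sum(axis=1, keepdims=True)
          best = post.argmax(axis=1)
          return states[0, best].astype(int), post[np.arange(len(measurements)), best]

      cn, conf = assign_copy_number([4.9, 6.2, 7.05, 8.6])
      print(list(zip(cn.tolist(), np.round(conf, 3).tolist())))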

  5. Arbitrarily large numbers of kink internal modes in inhomogeneous sine-Gordon equations

    Energy Technology Data Exchange (ETDEWEB)

    González, J.A., E-mail: jalbertgonz@yahoo.es [Department of Physics, Florida International University, Miami, FL 33199 (United States); Department of Natural Sciences, Miami Dade College, 627 SW 27th Ave., Miami, FL 33135 (United States); Bellorín, A., E-mail: alberto.bellorin@ucv.ve [Escuela de Física, Facultad de Ciencias, Universidad Central de Venezuela, Apartado Postal 47586, Caracas 1041-A (Venezuela, Bolivarian Republic of); García-Ñustes, M.A., E-mail: monica.garcia@pucv.cl [Instituto de Física, Pontificia Universidad Católica de Valparaíso, Casilla 4059 (Chile); Guerrero, L.E., E-mail: lguerre@usb.ve [Departamento de Física, Universidad Simón Bolívar, Apartado Postal 89000, Caracas 1080-A (Venezuela, Bolivarian Republic of); Jiménez, S., E-mail: s.jimenez@upm.es [Departamento de Matemática Aplicada a las TT.II., E.T.S.I. Telecomunicación, Universidad Politécnica de Madrid, 28040-Madrid (Spain); Vázquez, L., E-mail: lvazquez@fdi.ucm.es [Departamento de Matemática Aplicada, Facultad de Informática, Universidad Complutense de Madrid, 28040-Madrid (Spain)

    2017-06-28

    We prove analytically the existence of an infinite number of internal (shape) modes of sine-Gordon solitons in the presence of some inhomogeneous long-range forces, provided some conditions are satisfied. - Highlights: • We have found exact kink solutions to the perturbed sine-Gordon equation. • We have been able to study analytically the kink stability problem. • A kink equilibrated by an exponentially-localized perturbation has a finite number of oscillation modes. • A sufficiently broad equilibrating perturbation supports an infinite number of soliton internal modes.

  6. A simulative comparison of respondent driven sampling with incentivized snowball sampling--the "strudel effect".

    Science.gov (United States)

    Gyarmathy, V Anna; Johnston, Lisa G; Caplinskiene, Irma; Caplinskas, Saulius; Latkin, Carl A

    2014-02-01

    Respondent driven sampling (RDS) and incentivized snowball sampling (ISS) are two sampling methods that are commonly used to reach people who inject drugs (PWID). We generated a set of simulated RDS samples on an actual sociometric ISS sample of PWID in Vilnius, Lithuania ("original sample") to assess if the simulated RDS estimates were statistically significantly different from the original ISS sample prevalences for HIV (9.8%), Hepatitis A (43.6%), Hepatitis B (Anti-HBc 43.9% and HBsAg 3.4%), Hepatitis C (87.5%), syphilis (6.8%) and Chlamydia (8.8%) infections and for selected behavioral risk characteristics. The original sample consisted of a large component of 249 people (83% of the sample) and 13 smaller components with 1-12 individuals. Generally, as long as all seeds were recruited from the large component of the original sample, the simulation samples simply recreated the large component. There were no significant differences between the large component and the entire original sample for the characteristics of interest. Altogether 99.2% of 360 simulation sample point estimates were within the confidence interval of the original prevalence values for the characteristics of interest. When population characteristics are reflected in large network components that dominate the population, RDS and ISS may produce samples that have statistically non-different prevalence values, even though some isolated network components may be under-sampled and/or statistically significantly different from the main groups. This so-called "strudel effect" is discussed in the paper. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  7. Development and application of an optogenetic platform for controlling and imaging a large number of individual neurons

    Science.gov (United States)

    Mohammed, Ali Ibrahim Ali

    The understanding and treatment of brain disorders, as well as the development of intelligent machines, is hampered by the lack of knowledge of how the brain fundamentally functions. Over the past century, we have learned much about how individual neurons and neural networks behave; however, new tools are critically needed to interrogate how neural networks give rise to complex brain processes and disease conditions. Recent innovations in molecular techniques, such as optogenetics, have given neuroscientists unprecedented precision to excite, inhibit and record defined neurons. The impressive sensitivity of currently available optogenetic sensors and actuators has now made it possible to analyze a large number of individual neurons in the brains of behaving animals. To promote the use of these optogenetic tools, this thesis integrates cutting-edge optogenetic molecular sensors, which are ultrasensitive for imaging neuronal activity, with a custom wide-field optical microscope to analyze a large number of individual neurons in living brains. Wide-field microscopy provides a large field of view and a spatial resolution approaching the Abbe diffraction limit of the fluorescence microscope. To demonstrate the advantages of this optical platform, we imaged a deep brain structure, the hippocampus, and tracked hundreds of neurons over time while the mouse performed a memory task, to investigate how those individual neurons related to behavior. In addition, we tested our optical platform by investigating transient neural network changes upon mechanical perturbation related to blast injuries. In this experiment, all blast-exposed mice showed a consistent change in neural network activity: a small portion of neurons showed a sustained calcium increase for an extended period of time, whereas the majority lost their activity. Finally, using an optogenetic silencer to control selected motor cortex neurons, we examined their contributions to the network pathology of the basal ganglia related to

  8. Validated sampling strategy for assessing contaminants in soil stockpiles

    International Nuclear Information System (INIS)

    Lame, Frank; Honders, Ton; Derksen, Giljam; Gadella, Michiel

    2005-01-01

    Dutch legislation on the reuse of soil requires a sampling strategy to determine the degree of contamination. This sampling strategy was developed in three stages. Its main aim is to obtain a single analytical result, representative of the true mean concentration of the soil stockpile. The development process started with an investigation into how sample pre-treatment could be used to obtain representative results from composite samples of heterogeneous soil stockpiles. Combining a large number of random increments allows stockpile heterogeneity to be fully represented in the sample. The resulting pre-treatment method was then combined with a theoretical approach to determine the necessary number of increments per composite sample. At the second stage, the sampling strategy was evaluated using computerised models of contaminant heterogeneity in soil stockpiles. The now theoretically based sampling strategy was implemented by the Netherlands Centre for Soil Treatment in 1995. It was applied to all types of soil stockpiles, ranging from clean to heavily contaminated, over a period of four years. This resulted in a database containing the analytical results of 2570 soil stockpiles. At the final stage these results were used for a thorough validation of the sampling strategy. It was concluded that the model approach has indeed resulted in a sampling strategy that achieves analytical results representative of the mean concentration of soil stockpiles. - A sampling strategy that ensures analytical results representative of the mean concentration in soil stockpiles is presented and validated
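
    The theoretical step of fixing the number of increments per composite sample can be illustrated with the usual normal-theory calculation below; this is a generic textbook sketch with assumed variability figures, not the specific derivation used in the validated strategy.

      import math

      def increments_needed(cv_increment, rel_halfwidth, z=1.96):
          """Rough number of random increments per composite sample so that the
          composite mean lies within +/- rel_halfwidth of the true stockpile
          mean at ~95% confidence, given the relative standard deviation
          (coefficient of variation) between individual increments."""
          return math.ceil((z * cv_increment / rel_halfwidth) ** 2)

      # e.g. increment-to-increment CV of 100%, target of +/-30% on the mean
      print(increments_needed(1.0, 0.3))  # -> 43 increments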

  9. Evaluation of PCR procedures for detecting and quantifying Leishmania donovani DNA in large numbers of dried human blood samples from a visceral leishmaniasis focus in Northern Ethiopia.

    Science.gov (United States)

    Abbasi, Ibrahim; Aramin, Samar; Hailu, Asrat; Shiferaw, Welelta; Kassahun, Aysheshm; Belay, Shewaye; Jaffe, Charles; Warburg, Alon

    2013-03-27

    Visceral Leishmaniasis (VL) is a disseminated protozoan infection caused by Leishmania donovani parasites which affects almost half a million persons annually. Most of these are from the Indian sub-continent, East Africa and Brazil. Our study was designed to elucidate the role of symptomatic and asymptomatic Leishmania donovani infected persons in the epidemiology of VL in Northern Ethiopia. The efficacy of quantitative real-time kinetoplast DNA/PCR (qRT-kDNA PCR) for detecting Leishmania donovani in dried-blood samples was assessed in volunteers living in an endemic focus. Of 4,757 samples, 680 (14.3%) were found positive for Leishmania k-DNA but most of those (69%) had less than 10 parasites/ml of blood. Samples were re-tested using identical protocols and only 59.3% of the samples with 10 parasite/ml or less were qRT-kDNA PCR positive the second time. Furthermore, 10.8% of the PCR negative samples were positive in the second test. Most samples with higher parasitemias remained positive upon re-examination (55/59 =93%). We also compared three different methods for DNA preparation. Phenol-chloroform was more efficient than sodium hydroxide or potassium acetate. DNA sequencing of ITS1 PCR products showed that 20/22 samples were Leishmania donovani while two had ITS1 sequences homologous to Leishmania major. Although qRT-kDNA PCR is a highly sensitive test, the dependability of low positives remains questionable. It is crucial to correlate between PCR parasitemia and infectivity to sand flies. While optimal sensitivity is achieved by targeting k-DNA, it is important to validate the causative species of VL by DNA sequencing.

  10. The Effect of Unequal Samples, Heterogeneity of Covariance Matrices, and Number of Variables on Discriminant Analysis Classification Tables and Related Statistics.

    Science.gov (United States)

    Spearing, Debra; Woehlke, Paula

    To assess the effect on discriminant analysis in terms of correct classification into two groups, the following parameters were systematically altered using Monte Carlo techniques: sample sizes; proportions of one group to the other; number of independent variables; and covariance matrices. The pairing of the off diagonals (or covariances) with…

  11. Q-factorial Gorenstein toric Fano varieties with large Picard number

    DEFF Research Database (Denmark)

    Nill, Benjamin; Øbro, Mikkel

    2010-01-01

    In dimension $d$, ${\boldsymbol Q}$-factorial Gorenstein toric Fano varieties with Picard number $\rho_X$ correspond to simplicial reflexive polytopes with $\rho_X + d$ vertices. Casagrande showed that any $d$-dimensional simplicial reflexive polytope has at most $3d$ and $3d-1$ vertices if $d$ is even and odd, respectively. Moreover, for $d$ even there is up to unimodular equivalence only one such polytope with $3d$ vertices, corresponding to the product of $d/2$ copies of a del Pezzo surface of degree six. In this paper we completely classify all $d$-dimensional simplicial reflexive polytopes having $3d-1$ vertices, corresponding to $d$-dimensional ${\boldsymbol Q}$-factorial Gorenstein toric Fano varieties with Picard number $2d-1$. For $d$ even, there exist three such varieties, with two being singular, while for $d > 1$ odd there exist precisely two, both being nonsingular toric fiber...

  12. Characterizing the zenithal night sky brightness in large territories: how many samples per square kilometre are needed?

    Science.gov (United States)

    Bará, Salvador

    2018-01-01

    A recurring question arises when trying to characterize, by means of measurements or theoretical calculations, the zenithal night sky brightness throughout a large territory: how many samples per square kilometre are needed? The optimum sampling distance should allow reconstructing, with sufficient accuracy, the continuous zenithal brightness map across the whole region, whilst at the same time avoiding unnecessary and redundant oversampling. This paper attempts to provide some tentative answers to this issue, using two complementary tools: the luminance structure function and the Nyquist-Shannon spatial sampling theorem. The analysis of several regions of the world, based on the data from the New world atlas of artificial night sky brightness, suggests that, as a rule of thumb, about one measurement per square kilometre could be sufficient for determining the zenithal night sky brightness of artificial origin at any point in a region to within ±0.1 mag_V arcsec^-2 (in the root-mean-square sense) of its true value in the Johnson-Cousins V band. The exact reconstruction of the zenithal night sky brightness maps from samples taken at the Nyquist rate seems to be considerably more demanding.
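
    A minimal sketch of the first tool mentioned above, an empirical luminance structure function estimated from scattered zenithal-brightness samples; the site coordinates, brightness model and bin edges are invented for illustration (the paper works from the atlas maps themselves).

      import numpy as np

      def structure_function(values, coords, bin_edges):
          """Empirical second-order structure function of a scattered scalar field:
          D(r) = mean of [b(x) - b(x')]**2 over point pairs whose separation
          falls in each distance bin."""
          v = np.asarray(values, dtype=float)
          xy = np.asarray(coords, dtype=float)
          iu = np.triu_indices(len(v), k=1)
          dv2 = ((v[:, None] - v[None, :]) ** 2)[iu]
          dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)[iu]
          which = np.digitize(dist, bin_edges)
          return np.array([dv2[which == k].mean() if np.any(which == k) else np.nan
                           for k in range(1, len(bin_edges))])

      # invented example: 200 sites in a 20 km x 20 km region with a smooth gradient
      rng = np.random.default_rng(1)
      xy = rng.uniform(0.0, 20.0, size=(200, 2))
      b = 21.0 - 0.05 * xy[:, 0] + rng.normal(0.0, 0.05, size=200)
      print(structure_function(b, xy, bin_edges=np.arange(0.0, 11.0, 2.0)))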

  13. Detecting the Land-Cover Changes Induced by Large-Physical Disturbances Using Landscape Metrics, Spatial Sampling, Simulation and Spatial Analysis

    Directory of Open Access Journals (Sweden)

    Hone-Jay Chu

    2009-08-01

    Full Text Available The objectives of the study are to integrate the conditional Latin Hypercube Sampling (cLHS), sequential Gaussian simulation (SGS) and spatial analysis in remotely sensed images, to monitor the effects of large chronological disturbances on spatial characteristics of landscape changes including spatial heterogeneity and variability. The multiple NDVI images demonstrate that spatial patterns of disturbed landscapes were successfully delineated by spatial analysis such as the variogram, Moran's I and landscape metrics in the study area. The hybrid method delineates the spatial patterns and spatial variability of landscapes caused by these large disturbances. The cLHS approach is applied to select samples from Normalized Difference Vegetation Index (NDVI) images from SPOT HRV images in the Chenyulan watershed of Taiwan, and then SGS with sufficient samples is used to generate maps of NDVI images. Finally, the simulated NDVI maps are verified using indexes such as the correlation coefficient and mean absolute error (MAE). Therefore, the statistics and spatial structures of multiple NDVI images present a very robust behavior, which advocates the use of the index for the quantification of landscape spatial patterns and land cover change. In addition, the results transferred by Open Geospatial techniques can be accessed from web-based and end-user applications for watershed management.

  14. New approaches to phylogenetic tree search and their application to large numbers of protein alignments.

    Science.gov (United States)

    Whelan, Simon

    2007-10-01

    Phylogenetic tree estimation plays a critical role in a wide variety of molecular studies, including molecular systematics, phylogenetics, and comparative genomics. Finding the optimal tree relating a set of sequences using score-based (optimality criterion) methods, such as maximum likelihood and maximum parsimony, may require all possible trees to be considered, which is not feasible even for modest numbers of sequences. In practice, trees are estimated using heuristics that represent a trade-off between topological accuracy and speed. I present a series of novel algorithms suitable for score-based phylogenetic tree reconstruction that demonstrably improve the accuracy of tree estimates while maintaining high computational speeds. The heuristics function by allowing the efficient exploration of large numbers of trees through novel hill-climbing and resampling strategies. These heuristics, and other computational approximations, are implemented for maximum likelihood estimation of trees in the program Leaphy, and its performance is compared to other popular phylogenetic programs. Trees are estimated from 4059 different protein alignments using a selection of phylogenetic programs and the likelihoods of the tree estimates are compared. Trees estimated using Leaphy are found to have equal to or better likelihoods than trees estimated using other phylogenetic programs in 4004 (98.6%) families and provide a unique best tree that no other program found in 1102 (27.1%) families. The improvement is particularly marked for larger families (80 to 100 sequences), where Leaphy finds a unique best tree in 81.7% of families.

  15. Methodology for Quantitative Analysis of Large Liquid Samples with Prompt Gamma Neutron Activation Analysis using Am-Be Source

    International Nuclear Information System (INIS)

    Idiri, Z.; Mazrou, H.; Beddek, S.; Amokrane, A.

    2009-01-01

    An optimized set-up for prompt gamma neutron activation analysis (PGNAA) with an Am-Be source is described and used for the analysis of large liquid samples. A methodology for quantitative analysis is proposed: it consists of normalizing the prompt gamma count rates with thermal neutron flux measurements carried out with a He-3 detector and with gamma attenuation factors calculated using MCNP-5. The relative and absolute methods are considered. This methodology is then applied to the determination of cadmium in industrial phosphoric acid. The same sample is then analyzed by the inductively coupled plasma (ICP) method. Our results are in good agreement with those obtained with the ICP method.
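
    A sketch of the arithmetic behind the relative method described above: the prompt-gamma count rate is normalized by the measured thermal neutron flux and by the calculated gamma attenuation factor before comparison with a reference of known concentration. All variable names and numbers below are hypothetical.

      def pgnaa_relative_concentration(rate_sample, rate_ref, flux_sample, flux_ref,
                                       att_sample, att_ref, conc_ref):
          """Relative-method concentration estimate for PGNAA: count rates are
          normalized by thermal neutron flux and gamma attenuation factor, then
          compared with a reference sample of known concentration."""
          norm_sample = rate_sample / (flux_sample * att_sample)
          norm_ref = rate_ref / (flux_ref * att_ref)
          return conc_ref * norm_sample / norm_ref

      # toy numbers only (counts/s, n/cm2/s, attenuation factors, mg/kg)
      print(pgnaa_relative_concentration(120.0, 100.0, 1.1e4, 1.0e4, 0.92, 0.95, 50.0))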

  16. Subrandom methods for multidimensional nonuniform sampling.

    Science.gov (United States)

    Worley, Bradley

    2016-08-01

    Methods of nonuniform sampling that utilize pseudorandom number sequences to select points from a weighted Nyquist grid are commonplace in biomolecular NMR studies, due to the beneficial incoherence introduced by pseudorandom sampling. However, these methods require the specification of a non-arbitrary seed number in order to initialize a pseudorandom number generator. Because the performance of pseudorandom sampling schedules can substantially vary based on seed number, this can complicate the task of routine data collection. Approaches such as jittered sampling and stochastic gap sampling are effective at reducing random seed dependence of nonuniform sampling schedules, but still require the specification of a seed number. This work formalizes the use of subrandom number sequences in nonuniform sampling as a means of seed-independent sampling, and compares the performance of three subrandom methods to their pseudorandom counterparts using commonly applied schedule performance metrics. Reconstruction results using experimental datasets are also provided to validate claims made using these performance metrics. Copyright © 2016 Elsevier Inc. All rights reserved.
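
    A minimal sketch of a seed-free ("subrandom") nonuniform sampling schedule of the kind discussed above: indices are drawn from an exponentially weighted one-dimensional Nyquist grid by inverting the cumulative weight distribution at van der Corput sequence points instead of pseudorandom numbers. The decay constant and grid size are arbitrary, and this is not the paper's exact construction.

      import numpy as np

      def van_der_corput(n, base=2):
          """First n points of the base-b van der Corput low-discrepancy sequence."""
          seq = np.zeros(n)
          for i in range(n):
              f, x, k = 1.0, 0.0, i + 1
              while k > 0:
                  f /= base
                  x += f * (k % base)
                  k //= base
              seq[i] = x
          return seq

      def subrandom_schedule(grid_size, n_points, decay=0.02):
          """Select n_points grid indices from an exponentially weighted Nyquist
          grid by inverse-transform sampling at subrandom (seed-free) points."""
          w = np.exp(-decay * np.arange(grid_size))
          cdf = np.cumsum(w) / w.sum()
          picks, seen = [], set()
          for u in van_der_corput(16 * n_points):  # oversample; duplicates are skipped
              idx = int(np.searchsorted(cdf, u))
              if idx not in seen:
                  seen.add(idx)
                  picks.append(idx)
              if len(picks) == n_points:
                  break
          return np.array(sorted(picks))

      print(subrandom_schedule(256, 64))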

  17. Testing AGN unification via inference from large catalogs

    Science.gov (United States)

    Nikutta, Robert; Ivezic, Zeljko; Elitzur, Moshe; Nenkova, Maia

    2018-01-01

    Source orientation and clumpiness of the central dust are the main factors in AGN classification. Type-1 QSOs are easy to observe and large samples are available (e.g. in SDSS), but obscured type-2 AGN are dimmer and redder as our line of sight is more obscured, making it difficult to obtain a complete sample. WISE has found up to a million QSOs. With only 4 bands and a relatively small aperture the analysis of individual sources is challenging, but the large sample allows inference of bulk properties at a very significant level.CLUMPY (www.clumpy.org) is arguably the most popular database of AGN torus SEDs. We model the ensemble properties of the entire WISE AGN content using regularized linear regression, with orientation-dependent CLUMPY color-color-magnitude (CCM) tracks as basis functions. We can reproduce the observed number counts per CCM bin with percent-level accuracy, and simultaneously infer the probability distributions of all torus parameters, redshifts, additional SED components, and identify type-1/2 AGN populations through their IR properties alone. We increase the statistical power of our AGN unification tests even further, by adding other datasets as axes in the regression problem. To this end, we make use of the NOAO Data Lab (datalab.noao.edu), which hosts several high-level large datasets and provides very powerful tools for handling large data, e.g. cross-matched catalogs, fast remote queries, etc.

  18. The large lungs of elite swimmers: an increased alveolar number?

    Science.gov (United States)

    Armour, J; Donnelly, P M; Bye, P T

    1993-02-01

    In order to obtain further insight into the mechanisms relating to the large lung volumes of swimmers, tests of mechanical lung function, including lung distensibility (K) and elastic recoil, pulmonary diffusion capacity, and respiratory mouth pressures, together with anthropometric data (height, weight, body surface area, chest width, depth and surface area), were compared in eight elite male swimmers, eight elite male long distance athletes and eight control subjects. The differences in training profiles of each group were also examined. There was no significant difference in height between the subjects, but the swimmers were younger than both the runners and controls, and both the swimmers and controls were heavier than the runners. Of all the training variables, only the mean total distance in kilometers covered per week was significantly greater in the runners. Whether based on: (a) adolescent predicted values; or (b) adult male predicted values, swimmers had significantly increased total lung capacity ((a) 145 +/- 22%, (mean +/- SD) (b) 128 +/- 15%); vital capacity ((a) 146 +/- 24%, (b) 124 +/- 15%); and inspiratory capacity ((a) 155 +/- 33%, (b) 138 +/- 29%), but this was not found in the other two groups. Swimmers also had the largest chest surface area and chest width. Forced expiratory volume in one second (FEV1) was largest in the swimmers ((b) 122 +/- 17%) and FEV1 as a percentage of forced vital capacity (FEV1/FVC)% was similar for the three groups. Pulmonary diffusing capacity (DLCO) was also highest in the swimmers (117 +/- 18%). All of the other indices of lung function, including pulmonary distensibility (K), elastic recoil and diffusion coefficient (KCO), were similar. These findings suggest that swimmers may have achieved greater lung volumes than either runners or control subjects, not because of greater inspiratory muscle strength, or differences in height, fat free mass, alveolar distensibility, age at start of training or sternal length or

  19. Summary of micrographic analysis of selected core samples from Well ER-20-6n number 1 in support of matrix diffusion testing

    International Nuclear Information System (INIS)

    1998-01-01

    Well ER-20-6 #1 was cored to determine fracture and lithologic properties proximal to the BULLION test cavity. Selected samples from ER-20-6 #1 were subjected to matrix and/or fracture diffusion experiments to assess solute movement in this environment. Micrographic analysis of these samples suggests that the similarity in bulk chemical composition results in very similar mineral assemblages forming along natural fractures. These samples are all part of the mafic-poor Calico Hills Formation and exhibit fracture-coating mineral assemblages dominated by mixed illite/smectite clay and illite, with local opaline silica (2,236 and 2,812 feet), and zeolite (at 2,236 feet). Based on this small sample population, the magnitude to which secondary phases have formed on fracture surfaces bears an apparently inverse relationship to the competency of the host lithology, reflected by variations in the degree of fracturing and the development of secondary phases on fracture surfaces. In the flow breccia at 2,851 feet, thinly developed, localized coatings are developed along persistent open fracture apertures in this competent rock type. Fractures in the devitrified lava from 2,812 feet are irregular, and locally blocked by secondary mineral phases. Natural fractures in the zeolitized tuff from 2,236 feet are discontinuous and irregular and typically obstructed with secondary mineral phases. There is also a second set of clean fractures in the 2,236 foot sample which lack secondary mineral phases and are interpreted to have been induced by the BULLION test. Based on these results, it is expected that matrix diffusion will be enhanced in samples where potentially transmissive fractures exhibit the greatest degree of obstruction (2,236>2,812=2,835>2,851). It is unclear what influence the induced fractures observed at 2,236 feet may have on diffusion given the lack of knowledge of their extent. It is assumed that the bulk matrix diffusion characteristics of the

  20. Experimental and Sampling Design for the INL-2 Sample Collection Operational Test

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.; Amidan, Brett G.; Matzke, Brett D.

    2009-02-16

    This report describes the experimental and sampling design developed to assess sampling approaches and methods for detecting contamination in a building and clearing the building for use after decontamination. An Idaho National Laboratory (INL) building will be contaminated with BG (Bacillus globigii, renamed Bacillus atrophaeus), a simulant for Bacillus anthracis (BA). The contamination, sampling, decontamination, and re-sampling will occur per the experimental and sampling design. This INL-2 Sample Collection Operational Test is being planned by the Validated Sampling Plan Working Group (VSPWG). The primary objectives are: 1) Evaluate judgmental and probabilistic sampling for characterization as well as probabilistic and combined (judgment and probabilistic) sampling approaches for clearance, 2) Conduct these evaluations for gradient contamination (from low or moderate down to absent or undetectable) for different initial concentrations of the contaminant, 3) Explore judgment composite sampling approaches to reduce sample numbers, 4) Collect baseline data to serve as an indication of the actual levels of contamination in the tests. A combined judgmental and random (CJR) approach uses Bayesian methodology to combine judgmental and probabilistic samples to make clearance statements of the form "X% confidence that at least Y% of an area does not contain detectable contamination” (X%/Y% clearance statements). The INL-2 experimental design has five test events, which 1) vary the floor of the INL building on which the contaminant will be released, 2) provide for varying the amount of contaminant released to obtain desired concentration gradients, and 3) investigate overt as well as covert release of contaminants. Desirable contaminant gradients would have moderate to low concentrations of contaminant in rooms near the release point, with concentrations down to zero in other rooms. Such gradients would provide a range of contamination levels to challenge the sampling

  1. Assessing the Validity of Single-item Life Satisfaction Measures: Results from Three Large Samples

    Science.gov (United States)

    Cheung, Felix; Lucas, Richard E.

    2014-01-01

    Purpose The present paper assessed the validity of single-item life satisfaction measures by comparing single-item measures to the Satisfaction with Life Scale (SWLS), a more psychometrically established measure. Methods Two large samples from Washington (N=13,064) and Oregon (N=2,277) recruited by the Behavioral Risk Factor Surveillance System (BRFSS) and a representative German sample (N=1,312) recruited by the Germany Socio-Economic Panel (GSOEP) were included in the present analyses. Single-item life satisfaction measures and the SWLS were correlated with theoretically relevant variables, such as demographics, subjective health, domain satisfaction, and affect. The correlations between the two life satisfaction measures and these variables were examined to assess the construct validity of single-item life satisfaction measures. Results Consistent across three samples, single-item life satisfaction measures demonstrated a substantial degree of criterion validity with the SWLS (zero-order r = 0.62 – 0.64; disattenuated r = 0.78 – 0.80). Patterns of statistical significance for correlations with theoretically relevant variables were the same across single-item measures and the SWLS. Single-item measures did not produce systematically different correlations compared to the SWLS (average difference = 0.001 – 0.005). The average absolute difference in the magnitudes of the correlations produced by single-item measures and the SWLS was very small (average absolute difference = 0.015 – 0.042). Conclusions Single-item life satisfaction measures performed very similarly compared to the multiple-item SWLS. Social scientists would get virtually identical answers to substantive questions regardless of which measure they use. PMID:24890827

  2. Turbulent flows at very large Reynolds numbers: new lessons learned

    International Nuclear Information System (INIS)

    Barenblatt, G I; Prostokishin, V M; Chorin, A J

    2014-01-01

    The universal (Reynolds-number-independent) von Kármán–Prandtl logarithmic law for the velocity distribution in the basic intermediate region of a turbulent shear flow is generally considered to be one of the fundamental laws of engineering science and is taught universally in fluid mechanics and hydraulics courses. We show here that this law is based on an assumption that cannot be considered to be correct and which does not correspond to experiment. Nor is Landau's derivation of this law quite correct. In this paper, an alternative scaling law explicitly incorporating the influence of the Reynolds number is discussed, as is the corresponding drag law. The study uses the concept of intermediate asymptotics and that of incomplete similarity in the similarity parameter. Yakov Borisovich Zeldovich played an outstanding role in the development of these ideas. This work is a tribute to his glowing memory. (100th anniversary of the birth of Ya. B. Zeldovich)

  3. Gasoline prices, gasoline consumption, and new-vehicle fuel economy: Evidence for a large sample of countries

    International Nuclear Information System (INIS)

    Burke, Paul J.; Nishitateno, Shuhei

    2013-01-01

    Countries differ considerably in terms of the price drivers pay for gasoline. This paper uses data for 132 countries for the period 1995–2008 to investigate the implications of these differences for the consumption of gasoline for road transport. To address the potential for simultaneity bias, we use both a country's oil reserves and the international crude oil price as instruments for a country's average gasoline pump price. We obtain estimates of the long-run price elasticity of gasoline demand of between − 0.2 and − 0.5. Using newly available data for a sub-sample of 43 countries, we also find that higher gasoline prices induce consumers to substitute to vehicles that are more fuel-efficient, with an estimated elasticity of + 0.2. Despite the small size of our elasticity estimates, there is considerable scope for low-price countries to achieve gasoline savings and vehicle fuel economy improvements via reducing gasoline subsidies and/or increasing gasoline taxes. - Highlights: ► We estimate the determinants of gasoline demand and new-vehicle fuel economy. ► Estimates are for a large sample of countries for the period 1995–2008. ► We instrument for gasoline prices using oil reserves and the world crude oil price. ► Gasoline demand and fuel economy are inelastic with respect to the gasoline price. ► Large energy efficiency gains are possible via higher gasoline prices
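
    The instrumental-variables step described above can be illustrated with a bare-bones two-stage least squares sketch on synthetic data; the data-generating process and the true elasticity of -0.4 are invented for illustration and are not the paper's estimates.

      import numpy as np

      def two_stage_least_squares(y, x_endog, instruments, controls):
          """Minimal 2SLS: regress the endogenous regressor (log pump price) on
          the instruments and controls, then regress the outcome (log gasoline
          use) on the fitted values and controls."""
          n = len(y)
          ones = np.ones((n, 1))
          Z = np.hstack([ones, np.asarray(instruments).reshape(n, -1),
                         np.asarray(controls).reshape(n, -1)])
          b1, *_ = np.linalg.lstsq(Z, np.asarray(x_endog), rcond=None)   # first stage
          x_hat = Z @ b1
          X2 = np.hstack([ones, x_hat.reshape(n, -1),
                          np.asarray(controls).reshape(n, -1)])
          b2, *_ = np.linalg.lstsq(X2, np.asarray(y), rcond=None)        # second stage
          return b2[1]   # coefficient on the instrumented price

      rng = np.random.default_rng(0)
      n = 500
      reserves, crude, income = rng.normal(size=(3, n))
      demand_shock = rng.normal(size=n)
      log_price = 0.5 * reserves + 0.5 * crude + 0.3 * demand_shock + rng.normal(size=n)
      log_gasoline = -0.4 * log_price + 0.6 * income + demand_shock
      print(two_stage_least_squares(log_gasoline, log_price,
                                    np.column_stack([reserves, crude]), income))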

  4. A very large number of GABAergic neurons are activated in the tuberal hypothalamus during paradoxical (REM) sleep hypersomnia.

    Directory of Open Access Journals (Sweden)

    Emilie Sapin

    Full Text Available We recently discovered, using Fos immunostaining, that the tuberal and mammillary hypothalamus contain a massive population of neurons specifically activated during paradoxical sleep (PS) hypersomnia. We further showed that some of the activated neurons of the tuberal hypothalamus express the melanin concentrating hormone (MCH) neuropeptide and that icv injection of MCH induces a strong increase in PS quantity. However, the chemical nature of the majority of the neurons activated during PS had not been characterized. To determine whether these neurons are GABAergic, we combined in situ hybridization of GAD(67) mRNA with immunohistochemical detection of Fos in control, PS deprived and PS hypersomniac rats. We found that 74% of the very large population of Fos-labeled neurons located in the tuberal hypothalamus after PS hypersomnia were GAD-positive. We further demonstrated combining MCH immunohistochemistry and GAD(67) in situ hybridization that 85% of the MCH neurons were also GAD-positive. Finally, based on the number of Fos-ir/GAD(+), Fos-ir/MCH(+), and GAD(+)/MCH(+) double-labeled neurons counted from three sets of double-staining, we uncovered that around 80% of the large number of the Fos-ir/GAD(+) neurons located in the tuberal hypothalamus after PS hypersomnia do not contain MCH. Based on these and previous results, we propose that the non-MCH Fos/GABAergic neuronal population could be involved in PS induction and maintenance while the Fos/MCH/GABAergic neurons could be involved in the homeostatic regulation of PS. Further investigations will be needed to corroborate this original hypothesis.

  5. The development of neutron activation, sample transportation and γ-ray counting routine system for numbers of geological samples

    International Nuclear Information System (INIS)

    Shibata, Shin-nosuke; Tanaka, Tsuyoshi; Minami, Masayo

    2001-01-01

    A new gamma-ray counting and data processing system for non-destructive neutron activation analysis has been set up at the Radioisotope Center of Nagoya University. The system carries out gamma-ray counting, sample changing and data processing automatically, relieving the operator of some of the more laborious steps in INAA. In this study, we have arranged a simple analytical procedure that makes the practical work easier than before. The workflow is described from the preparation of powdered rock samples to gamma-ray counting and data processing by the new INAA system. Analyses of two Geological Survey of Japan rock reference samples, JB-1a and JG-1a, were then carried out to evaluate the speed and accuracy of the new analytical procedure for geological materials. Two United States Geological Survey reference samples, BCR-1 and G-2, were used as the standards, respectively. Twenty-two elements for JB-1a and 25 elements for JG-1a were analyzed; the uncertainties are <5% for Na, Sc, Fe, Co, La, Ce, Sm, Eu, Yb, Lu, Hf, Ta and Th, and <10% for Cr, Zn, Cs, Ba, Nd, Tb and U. This system will enable us to analyze more than 1500 geologic samples per year. (author)

  6. Heritability of psoriasis in a large twin sample

    DEFF Research Database (Denmark)

    Lønnberg, Ann Sophie; Skov, Liselotte; Skytthe, A

    2013-01-01

    AIM: To study the concordance of psoriasis in a population-based twin sample. METHODS: Data on psoriasis in 10,725 twin pairs, 20-71 years of age, from the Danish Twin Registry was collected via a questionnaire survey. The concordance and heritability of psoriasis were estimated. RESULTS: In total...

  7. Misspecified poisson regression models for large-scale registry data: inference for 'large n and small p'.

    Science.gov (United States)

    Grøn, Randi; Gerds, Thomas A; Andersen, Per K

    2016-03-30

    Poisson regression is an important tool in register-based epidemiology where it is used to study the association between exposure variables and event rates. In this paper, we will discuss the situation with 'large n and small p', where n is the sample size and p is the number of available covariates. Specifically, we are concerned with modeling options when there are time-varying covariates that can have time-varying effects. One problem is that tests of the proportional hazards assumption, of no interactions between exposure and other observed variables, or of other modeling assumptions have large power due to the large sample size and will often indicate statistical significance even for numerically small deviations that are unimportant for the subject matter. Another problem is that information on important confounders may be unavailable. In practice, this situation may lead to simple working models that are then likely misspecified. To support and improve conclusions drawn from such models, we discuss methods for sensitivity analysis, for estimation of average exposure effects using aggregated data, and a semi-parametric bootstrap method to obtain robust standard errors. The methods are illustrated using data from the Danish national registries investigating the diabetes incidence for individuals treated with antipsychotics compared with the general unexposed population. Copyright © 2015 John Wiley & Sons, Ltd.
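
    A small sketch of the kind of working model described above, fit to hypothetical aggregated registry counts with log person-years as the offset; the strata, counts and resulting rate ratio are invented, and the paper's sensitivity analyses and bootstrap standard errors are not reproduced here.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      # hypothetical aggregated data: events and person-years per exposure/age stratum
      df = pd.DataFrame({
          "exposed":      [0, 0, 1, 1],
          "age_mid":      [45, 65, 45, 65],
          "person_years": [120000.0, 90000.0, 8000.0, 6000.0],
          "events":       [240, 450, 28, 48],
      })

      X = sm.add_constant(df[["exposed", "age_mid"]])
      model = sm.GLM(df["events"], X, family=sm.families.Poisson(),
                     offset=np.log(df["person_years"]))
      res = model.fit()
      print(np.exp(res.params["exposed"]))  # incidence rate ratio for the exposure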

  8. Analysis of biological samples by x-ray attenuation measurements

    International Nuclear Information System (INIS)

    Cesareo, R.

    1988-01-01

    Over the last few years there has been an increasing interest in X-ray attenuation measurements, mainly due to the enormous development of computer assisted tomography (CAT). With CAT, analytical information concerning the density and the mean atomic number distributions in a sample is deduced from a large number of attenuation measurements. Particular transmission methods that were developed, based on the differential attenuation method, are discussed. The theoretical background for attenuation of radiation and for differential attenuation of radiation is given. Details about the generation of monoenergetic X-rays are discussed. Applications of attenuation measurements in the field of medicine are presented.

  9. Slepian simulation of distributions of plastic displacements of earthquake excited shear frames with a large number of stories

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Ditlevsen, Ove

    2005-01-01

    The object of study is a stationary Gaussian white noise excited plane multistory shear frame with a large number of rigid traverses. All the traverse-connecting columns have finite symmetrical yield limits except the columns in one or more of the bottom floors. The columns behave linearly elastic...... within the yield limits and ideally plastic outside these without accumulating eigenstresses. Within the elastic domain the frame is modeled as a linearly damped oscillator. The white noise excitation acts on the mass of the first floor making the movement of the elastic bottom floors simulate a ground...

  10. Oxalic acid as a liquid dosimeter for absorbed dose measurement in large-scale of sample solution

    International Nuclear Information System (INIS)

    Biramontri, S.; Dechburam, S.; Vitittheeranon, A.; Wanitsuksombut, W.; Thongmitr, W.

    1999-01-01

    This study shows the feasibility of applying a 2.5 mM aqueous oxalic acid solution, using a spectrophotometric analysis method, for absorbed dose measurement from 1 to 10 kGy in large-scale sample solutions. The optimum wavelength of 220 nm was selected. The stability of the response of the dosimeter over 25 days was better than 1% for unirradiated and ±2% for irradiated solution. The reproducibility in the same batch was within 1%. The variation of the dosimeter response between batches was also studied. (author)

  11. Large-D gravity and low-D strings.

    Science.gov (United States)

    Emparan, Roberto; Grumiller, Daniel; Tanabe, Kentaro

    2013-06-21

    We show that in the limit of a large number of dimensions a wide class of nonextremal neutral black holes has a universal near-horizon limit. The limiting geometry is the two-dimensional black hole of string theory with a two-dimensional target space. Its conformal symmetry explains the properties of massless scalars found recently in the large-D limit. For black branes with string charges, the near-horizon geometry is that of the three-dimensional black strings of Horne and Horowitz. The analogies between the α' expansion in string theory and the large-D expansion in gravity suggest a possible effective string description of the large-D limit of black holes. We comment on applications to several subjects, in particular to the problem of critical collapse.

  12. Feasibility of self-sampled dried blood spot and saliva samples sent by mail in a population-based study.

    Science.gov (United States)

    Sakhi, Amrit Kaur; Bastani, Nasser Ezzatkhah; Ellingjord-Dale, Merete; Gundersen, Thomas Erik; Blomhoff, Rune; Ursin, Giske

    2015-04-11

    In large epidemiological studies it is often challenging to obtain biological samples. Self-sampling by study participants using dried blood spots (DBS) technique has been suggested to overcome this challenge. DBS is a type of biosampling where blood samples are obtained by a finger-prick lancet, blotted and dried on filter paper. However, the feasibility and efficacy of collecting DBS samples from study participants in large-scale epidemiological studies is not known. The aim of the present study was to test the feasibility and response rate of collecting self-sampled DBS and saliva samples in a population-based study of women above 50 years of age. We determined response proportions, number of phone calls to the study center with questions about sampling, and quality of the DBS. We recruited women through a study conducted within the Norwegian Breast Cancer Screening Program. Invitations, instructions and materials were sent to 4,597 women. The data collection took place over a 3 month period in the spring of 2009. Response proportions for the collection of DBS and saliva samples were 71.0% (3,263) and 70.9% (3,258), respectively. We received 312 phone calls (7% of the 4,597 women) with questions regarding sampling. Of the 3,263 individuals that returned DBS cards, 3,038 (93.1%) had been packaged and shipped according to instructions. A total of 3,032 DBS samples were sufficient for at least one biomarker analysis (i.e. 92.9% of DBS samples received by the laboratory). 2,418 (74.1%) of the DBS cards received by the laboratory were filled with blood according to the instructions (i.e. 10 completely filled spots with up to 7 punches per spot for up to 70 separate analyses). To assess the quality of the samples, we selected and measured two biomarkers (carotenoids and vitamin D). The biomarker levels were consistent with previous reports. Collecting self-sampled DBS and saliva samples through the postal services provides a low cost, effective and feasible

  13. A Note on the Large Sample Properties of Estimators Based on Generalized Linear Models for Correlated Pseudo-observations

    DEFF Research Database (Denmark)

    Jacobsen, Martin; Martinussen, Torben

    2016-01-01

    Pseudo-values have proven very useful in censored data analysis in complex settings such as multi-state models. It was originally suggested by Andersen et al., Biometrika, 90, 2003, 335, who also suggested to estimate standard errors using classical generalized estimating equation results. These results were studied more formally in Graw et al., Lifetime Data Anal., 15, 2009, 241, which derived some key results based on a second-order von Mises expansion. However, results concerning large sample properties of estimates based on regression models for pseudo-values still seem unclear. In this paper, we study these large sample properties in the simple setting of survival probabilities and show that the estimating function can be written as a U-statistic of second order, giving rise to an additional term that does not vanish asymptotically. We further show that previously advocated standard error...
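
    In the simple setting the note studies (survival probabilities), the pseudo-values themselves are straightforward to compute: theta_i(t) = n * S_hat(t) - (n - 1) * S_hat_minus_i(t), with S_hat the Kaplan-Meier estimator. The sketch below evaluates them on made-up right-censored data; the asymptotic analysis of the paper is not reproduced.

      import numpy as np

      def km_survival(time, event, t):
          """Kaplan-Meier estimate of S(t) from right-censored data
          (event = 1 for an observed event, 0 for censoring)."""
          time = np.asarray(time, dtype=float)
          event = np.asarray(event, dtype=int)
          s = 1.0
          for u in np.unique(time[(event == 1) & (time <= t)]):
              at_risk = np.sum(time >= u)
              deaths = np.sum((time == u) & (event == 1))
              s *= 1.0 - deaths / at_risk
          return s

      def pseudo_observations(time, event, t):
          """Jackknife pseudo-values for the survival probability at time t."""
          time = np.asarray(time, dtype=float)
          event = np.asarray(event, dtype=int)
          n = len(time)
          s_full = km_survival(time, event, t)
          idx = np.arange(n)
          return np.array([n * s_full
                           - (n - 1) * km_survival(time[idx != i], event[idx != i], t)
                           for i in range(n)])

      t_obs = [2.0, 3.5, 4.0, 5.1, 6.0, 7.2, 8.0]
      d = [1, 0, 1, 1, 0, 1, 0]
      print(pseudo_observations(t_obs, d, t=5.0))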

  14. Rare behavior of growth processes via umbrella sampling of trajectories

    Science.gov (United States)

    Klymko, Katherine; Geissler, Phillip L.; Garrahan, Juan P.; Whitelam, Stephen

    2018-03-01

    We compute probability distributions of trajectory observables for reversible and irreversible growth processes. These results reveal a correspondence between reversible and irreversible processes, at particular points in parameter space, in terms of their typical and atypical trajectories. Thus key features of growth processes can be insensitive to the precise form of the rate constants used to generate them, recalling the insensitivity to microscopic details of certain equilibrium behavior. We obtained these results using a sampling method, inspired by the "s-ensemble" large-deviation formalism, that amounts to umbrella sampling in trajectory space. The method is a simple variant of existing approaches, and applies to ensembles of trajectories controlled by the total number of events. It can be used to determine large-deviation rate functions for trajectory observables in or out of equilibrium.
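
    The following is a drastically simplified, hypothetical illustration of Monte Carlo sampling in a biased ("s-ensemble"-style) trajectory ensemble, using independent Bernoulli events as a stand-in for a growth process; it is not the umbrella-sampling scheme of the paper, whose observables and dynamics are far richer.

        import numpy as np

        # Trajectories are N independent Bernoulli(p) events, with activity
        # K = number of events.  Trajectories are sampled with weight
        # P(x) * exp(-s * K(x)) using single-site Metropolis flips, and the
        # biased mean activity is compared with the exact tilted expectation.

        def sample_biased_activity(N=100, p=0.3, s=1.0, sweeps=2000, burn=500, seed=0):
            rng = np.random.default_rng(seed)
            x = rng.random(N) < p                      # initial trajectory (boolean events)
            K_samples = []
            log_ratio_up = np.log(p / (1 - p)) - s     # log weight change for a 0 -> 1 flip
            for sweep in range(sweeps):
                for _ in range(N):                     # one sweep = N single-site proposals
                    i = rng.integers(N)
                    delta = -1 if x[i] else 1          # change in K if site i is flipped
                    if np.log(rng.random()) < delta * log_ratio_up:
                        x[i] = not x[i]
                if sweep >= burn:
                    K_samples.append(x.sum())
            return np.mean(K_samples)

        N, p, s = 100, 0.3, 1.0
        mc = sample_biased_activity(N, p, s)
        exact = N * p * np.exp(-s) / (p * np.exp(-s) + 1 - p)
        print(f"biased mean activity <K>_s: MC = {mc:.2f}, exact = {exact:.2f}")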

  15. Classical boson sampling algorithms with superior performance to near-term experiments

    Science.gov (United States)

    Neville, Alex; Sparrow, Chris; Clifford, Raphaël; Johnston, Eric; Birchall, Patrick M.; Montanaro, Ashley; Laing, Anthony

    2017-12-01

    It is predicted that quantum computers will dramatically outperform their conventional counterparts. However, large-scale universal quantum computers are yet to be built. Boson sampling is a rudimentary quantum algorithm tailored to the platform of linear optics, which has sparked interest as a rapid way to demonstrate such quantum supremacy. Photon statistics are governed by intractable matrix functions, which suggests that sampling from the distribution obtained by injecting photons into a linear optical network could be solved more quickly by a photonic experiment than by a classical computer. The apparently low resource requirements for large boson sampling experiments have raised expectations of a near-term demonstration of quantum supremacy by boson sampling. Here we present classical boson sampling algorithms and theoretical analyses of prospects for scaling boson sampling experiments, showing that near-term quantum supremacy via boson sampling is unlikely. Our classical algorithm, based on Metropolised independence sampling, allowed the boson sampling problem to be solved for 30 photons with standard computing hardware. Compared to current experiments, a demonstration of quantum supremacy over a successful implementation of these classical methods on a supercomputer would require the number of photons and experimental components to increase by orders of magnitude, while tackling exponentially scaling photon loss.
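
    Metropolised independence sampling, the classical algorithm named above, can be sketched generically as follows. In this hypothetical example the target is a Gamma(3, 1) density and the proposal an independent Exponential(1) draw; in the boson sampling setting the target would be the permanent-based photon distribution and the proposal a cheap classical surrogate.

        import numpy as np
        from scipy import stats

        def mis(log_target, proposal_rvs, proposal_logpdf, n_steps, rng):
            """Metropolised independence sampling: proposals do not depend on the current state."""
            x = proposal_rvs(rng)
            logw = log_target(x) - proposal_logpdf(x)       # importance log-weight of current state
            samples = np.empty(n_steps)
            for k in range(n_steps):
                y = proposal_rvs(rng)                       # independent proposal
                logw_y = log_target(y) - proposal_logpdf(y)
                if np.log(rng.random()) < logw_y - logw:    # Metropolis-Hastings acceptance
                    x, logw = y, logw_y
                samples[k] = x
            return samples

        rng = np.random.default_rng(0)
        target = stats.gamma(a=3.0)
        prop = stats.expon(scale=1.0)
        draws = mis(log_target=target.logpdf,
                    proposal_rvs=lambda r: r.exponential(1.0),
                    proposal_logpdf=prop.logpdf,
                    n_steps=50_000, rng=rng)
        print("sample mean after burn-in (exact value 3.0):", draws[5000:].mean())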

  16. Study of the relationship between the Rayleigh to Compton scattering peak ratio and effective atomic number in biological samples

    International Nuclear Information System (INIS)

    Pereira, Marcelo O.; Conti, Claudio de Carvalho; Anjos, Marcelino J.; Lopes, Ricardo T.

    2011-01-01

    The aim of this work was to develop a new method to correct for radiation absorption (the mass attenuation coefficient curve) at low energies, using chemical compounds (H3BO3, Na2CO3, CaCO3, Al2O3, K2SO4 and MgO) and radiation produced by a gamma-ray source of Am-241 (59.54 keV); the method was also applied to certified biological samples of milk powder, hay powder and bovine liver (NIST 1577b). In addition, six methods of effective atomic number determination described in the literature were used together with the Rayleigh to Compton scattering ratio (R/C) in order to calculate the mass attenuation coefficient. The results obtained by the proposed method were compared with those obtained using the transmission method. The experimental results were in good agreement with the transmission values, suggesting that the method to correct radiation absorption presented in this paper is adequate for biological samples. (author)

  17. Rules of attraction: The role of bait in small mammal sampling at ...

    African Journals Online (AJOL)

    Baits or lures are commonly used for surveying small mammal communities, not only because they attract large numbers of these animals, but also because they provide sustenance for trapped individuals. In this study we used Sherman live traps with five bait treatments to sample small mammal populations at three ...

  18. Evaluation of bacterial motility from non-Gaussianity of finite-sample trajectories using the large deviation principle

    International Nuclear Information System (INIS)

    Hanasaki, Itsuo; Kawano, Satoyuki

    2013-01-01

    Motility of bacteria is usually recognized in the trajectory data and compared with Brownian motion, but the diffusion coefficient is insufficient to evaluate it. In this paper, we propose a method based on the large deviation principle. We show that it can be used to evaluate the non-Gaussian characteristics of model Escherichia coli motions and to distinguish combinations of the mean running duration and running speed that lead to the same diffusion coefficient. Our proposed method does not require chemical stimuli to induce the chemotaxis in a specific direction, and it is applicable to various types of self-propelling motions for which no a priori information of, for example, threshold parameters for run and tumble or head/tail direction is available. We also address the issue of the finite-sample effect on the large deviation quantities, but we propose to make use of it to characterize the nature of motility. (paper)

  19. Interaction between numbers and size during visual search

    OpenAIRE

    Krause, Florian; Bekkering, Harold; Pratt, Jay; Lindemann, Oliver

    2016-01-01

    The current study investigates an interaction between numbers and physical size (i.e. size congruity) in visual search. In three experiments, participants had to detect a physically large (or small) target item among physically small (or large) distractors in a search task comprising single-digit numbers. The relative numerical size of the digits was varied, such that the target item was either among the numerically large or small numbers in the search display and the relation between numeric...

  20. Implicit and explicit anti-fat bias among a large sample of medical doctors by BMI, race/ethnicity and gender.

    Directory of Open Access Journals (Sweden)

    Janice A Sabin

    Overweight patients report weight discrimination in health care settings and subsequent avoidance of routine preventive health care. The purpose of this study was to examine implicit and explicit attitudes about weight among a large group of medical doctors (MDs) to determine the pervasiveness of negative attitudes about weight among MDs. Test-takers voluntarily accessed a public Web site, known as Project Implicit®, and opted to complete the Weight Implicit Association Test (IAT; N = 359,261). A sub-sample identified their highest level of education as MD (N = 2,284). Among the MDs, 55% were female, 78% reported their race as white, and 62% had a normal range BMI. This large sample of test-takers showed strong implicit anti-fat bias (Cohen's d = 1.0). MDs, on average, also showed strong implicit anti-fat bias (Cohen's d = 0.93). All test-takers and the MD sub-sample reported a strong preference for thin people rather than fat people, that is, a strong explicit anti-fat bias. We conclude that strong implicit and explicit anti-fat bias is as pervasive among MDs as it is among the general public. An important area for future research is to investigate the association between providers' implicit and explicit attitudes about weight, patient reports of weight discrimination in health care, and quality of care delivered to overweight patients.

  1. Sample tracking in an automated cytogenetic biodosimetry laboratory for radiation mass casualties

    International Nuclear Information System (INIS)

    Martin, P.R.; Berdychevski, R.E.; Subramanian, U.; Blakely, W.F.; Prasanna, P.G.S.

    2007-01-01

    Chromosome-aberration-based dicentric assay is expected to be used after mass-casualty life-threatening radiation exposures to assess radiation dose to individuals. This will require processing of a large number of samples for individual dose assessment and clinical triage to aid treatment decisions. We have established an automated, high-throughput, cytogenetic biodosimetry laboratory to process a large number of samples for conducting the dicentric assay using peripheral blood from exposed individuals according to internationally accepted laboratory protocols (i.e., within days following radiation exposures). The components of an automated cytogenetic biodosimetry laboratory include blood collection kits for sample shipment, a cell viability analyzer, a robotic liquid handler, an automated metaphase harvester, a metaphase spreader, high-throughput slide stainer and coverslipper, a high-throughput metaphase finder, multiple satellite chromosome-aberration analysis systems, and a computerized sample-tracking system. Laboratory automation using commercially available, off-the-shelf technologies, customized technology integration, and implementation of a laboratory information management system (LIMS) for cytogenetic analysis will significantly increase throughput. This paper focuses on our efforts to eliminate data-transcription errors, increase efficiency, and maintain samples' positive chain-of-custody by sample tracking during sample processing and data analysis. This sample-tracking system represents a 'beta' version, which can be modeled elsewhere in a cytogenetic biodosimetry laboratory, and includes a customized LIMS with a central server, personal computer workstations, barcode printers, fixed station and wireless hand-held devices to scan barcodes at various critical steps, and data transmission over a private intra-laboratory computer network. Our studies will improve diagnostic biodosimetry response, aid confirmation of clinical triage, and medical

  2. Sample tracking in an automated cytogenetic biodosimetry laboratory for radiation mass casualties

    Energy Technology Data Exchange (ETDEWEB)

    Martin, P.R.; Berdychevski, R.E.; Subramanian, U.; Blakely, W.F. [Armed Forces Radiobiology Research Institute, Uniformed Services University of Health Sciences, 8901 Wisconsin Avenue, Bethesda, MD 20889-5603 (United States); Prasanna, P.G.S. [Armed Forces Radiobiology Research Institute, Uniformed Services University of Health Sciences, 8901 Wisconsin Avenue, Bethesda, MD 20889-5603 (United States)], E-mail: prasanna@afrri.usuhs.mil

    2007-07-15

    Chromosome-aberration-based dicentric assay is expected to be used after mass-casualty life-threatening radiation exposures to assess radiation dose to individuals. This will require processing of a large number of samples for individual dose assessment and clinical triage to aid treatment decisions. We have established an automated, high-throughput, cytogenetic biodosimetry laboratory to process a large number of samples for conducting the dicentric assay using peripheral blood from exposed individuals according to internationally accepted laboratory protocols (i.e., within days following radiation exposures). The components of an automated cytogenetic biodosimetry laboratory include blood collection kits for sample shipment, a cell viability analyzer, a robotic liquid handler, an automated metaphase harvester, a metaphase spreader, high-throughput slide stainer and coverslipper, a high-throughput metaphase finder, multiple satellite chromosome-aberration analysis systems, and a computerized sample-tracking system. Laboratory automation using commercially available, off-the-shelf technologies, customized technology integration, and implementation of a laboratory information management system (LIMS) for cytogenetic analysis will significantly increase throughput. This paper focuses on our efforts to eliminate data-transcription errors, increase efficiency, and maintain samples' positive chain-of-custody by sample tracking during sample processing and data analysis. This sample-tracking system represents a 'beta' version, which can be modeled elsewhere in a cytogenetic biodosimetry laboratory, and includes a customized LIMS with a central server, personal computer workstations, barcode printers, fixed station and wireless hand-held devices to scan barcodes at various critical steps, and data transmission over a private intra-laboratory computer network. Our studies will improve diagnostic biodosimetry response, aid confirmation of clinical triage, and

  3. Differences in microbial community composition between injection and production water samples of water flooding petroleum reservoirs

    Directory of Open Access Journals (Sweden)

    P. K. Gao

    2015-06-01

    Microbial communities in injected water are expected to have significant influence on those of reservoir strata in long-term water flooding petroleum reservoirs. To investigate the similarities and differences in microbial communities in injected water and reservoir strata, high-throughput sequencing of partial microbial 16S rRNA from water samples collected at the wellhead and downhole of injection wells, and from production wells, in a homogeneous sandstone reservoir and a heterogeneous conglomerate reservoir was performed. The results indicate that a small number of microbial populations are shared between the water samples from the injection and production wells in the sandstone reservoir, whereas a large number of microbial populations are shared in the conglomerate reservoir. The bacterial and archaeal communities in the reservoir strata have high concentrations, which are similar to those in the injected water. However, microbial population abundance exhibited large differences between the water samples from the injection and production wells. The number of shared populations reflects the influence of microbial communities in injected water on those in reservoir strata to some extent, and shows strong association with the unique variation of reservoir environments.

  4. Lack of association between digit ratio (2D:4D) and assertiveness: replication in a large sample.

    Science.gov (United States)

    Voracek, Martin

    2009-12-01

    Findings regarding within-sex associations of digit ratio (2D:4D), a putative pointer to long-lasting effects of prenatal androgen action, and sexually differentiated personality traits have generally been inconsistent or unreplicable, suggesting that effects in this domain, if any, are likely small. In contrast to evidence from Wilson's important 1983 study, a forerunner of modern 2D:4D research, two recent studies in 2005 and 2008 by Freeman, et al. and Hampson, et al. showed assertiveness, a presumably male-typed personality trait, was not associated with 2D:4D; however, these studies were clearly statistically underpowered. Hence this study examined this question anew, based on a large sample of 491 men and 627 women. Assertiveness was only modestly sexually differentiated, favoring men, and a positive correlate of age and education and a negative correlate of weight and Body Mass Index among women, but not men. Replicating the two prior studies, 2D:4D was throughout unrelated to assertiveness scores. This null finding was preserved with controls for correlates of assertiveness, also in nonparametric analysis and with tests for curvilinear relations. Discussed are implications of this specific null finding, now replicated in a large sample, for studies of 2D:4D and personality in general and novel research approaches to proceed in this field.

  5. Large scale mapping of groundwater resources using a highly integrated set of tools

    DEFF Research Database (Denmark)

    Søndergaard, Verner; Auken, Esben; Christiansen, Anders Vest

    … large areas with information from an optimum number of new investigation boreholes, existing boreholes, logs and water samples to get an integrated and detailed description of the groundwater resources and their vulnerability. Development of more time-efficient and airborne geophysical data acquisition platforms (e.g. SkyTEM) has made large-scale mapping attractive and affordable in the planning and administration of groundwater resources. The handling and optimized use of huge amounts of geophysical data covering large areas has also required a comprehensive database, where data can easily be stored...

  6. Strategies and equipment for sampling suspended sediment and associated toxic chemicals in large rivers - with emphasis on the Mississippi River

    Science.gov (United States)

    Meade, R.H.; Stevens, H.H.

    1990-01-01

    A Lagrangian strategy for sampling large rivers, which was developed and tested in the Orinoco and Amazon Rivers of South America during the early 1980s, is now being applied to the study of toxic chemicals in the Mississippi River. A series of 15-20 cross-sections of the Mississippi mainstem and its principal tributaries is sampled by boat in downstream sequence, beginning upriver of St. Louis and concluding downriver of New Orleans 3 weeks later. The timing of the downstream sampling sequence approximates the travel time of the river water. Samples at each cross-section are discharge-weighted to provide concentrations of dissolved and suspended constituents that are converted to fluxes. Water-sediment mixtures are collected from 10-40 equally spaced points across the river width by sequential depth integration at a uniform vertical transit rate. Essential equipment includes (i) a hydraulic winch, for sensitive control of vertical transit rates, and (ii) a collapsible-bag sampler, which allows integrated samples to be collected at all depths in the river. A section is usually sampled in 4-8 h, for a total sample recovery of 100-120 l. Sampled concentrations of suspended silt and clay are reproducible within 3%.

  7. Comprehensive metabolic characterization of serum osteocalcin action in a large non-diabetic sample.

    Directory of Open Access Journals (Sweden)

    Lukas Entenmann

    Recent research suggested a metabolic implication of osteocalcin (OCN) in e.g. insulin sensitivity or steroid production. We used an untargeted metabolomics approach by analyzing plasma and urine samples of 931 participants using mass spectrometry to reveal further metabolic actions of OCN. Several detected relations between OCN and metabolites were strongly linked to renal function; however, a number of associations remained significant after adjustment for renal function. Intermediates of proline catabolism were associated with OCN, reflecting the implication in bone metabolism. The association with kynurenine points towards a pro-inflammatory state with increasing OCN. Inverse relations with intermediates of branched-chain amino acid metabolism suggest a link to energy metabolism. Finally, urinary surrogate markers of smoking highlight its adverse effect on OCN metabolism. In conclusion, the present study provides a read-out of metabolic actions of OCN. However, most of the associations were weak, arguing for a limited role of OCN in whole-body metabolism.

  8. Comprehensive metabolic characterization of serum osteocalcin action in a large non-diabetic sample.

    Science.gov (United States)

    Entenmann, Lukas; Pietzner, Maik; Artati, Anna; Hannemann, Anke; Henning, Ann-Kristin; Kastenmüller, Gabi; Völzke, Henry; Nauck, Matthias; Adamski, Jerzy; Wallaschofski, Henri; Friedrich, Nele

    2017-01-01

    Recent research suggested a metabolic implication of osteocalcin (OCN) in e.g. insulin sensitivity or steroid production. We used an untargeted metabolomics approach by analyzing plasma and urine samples of 931 participants using mass spectrometry to reveal further metabolic actions of OCN. Several detected relations between OCN and metabolites were strongly linked to renal function; however, a number of associations remained significant after adjustment for renal function. Intermediates of proline catabolism were associated with OCN, reflecting the implication in bone metabolism. The association with kynurenine points towards a pro-inflammatory state with increasing OCN. Inverse relations with intermediates of branched-chain amino acid metabolism suggest a link to energy metabolism. Finally, urinary surrogate markers of smoking highlight its adverse effect on OCN metabolism. In conclusion, the present study provides a read-out of metabolic actions of OCN. However, most of the associations were weak, arguing for a limited role of OCN in whole-body metabolism.

  9. A large-scale investigation of the quality of groundwater in six major districts of Central India during the 2010-2011 sampling campaign.

    Science.gov (United States)

    Khare, Peeyush

    2017-09-01

    This paper investigates the groundwater quality in six major districts of Madhya Pradesh in central India, namely, Balaghat, Chhindwara, Dhar, Jhabua, Mandla, and Seoni during the 2010-2011 sampling campaign, and discusses improvements made in the supplied water quality between the years 2011 and 2017. Groundwater is the main source of water for a combined rural population of over 7 million in these districts. Its contamination could have a huge impact on public health. We analyzed the data collected from a large-scale water sampling campaign carried out by the Public Health Engineering Department (PHED), Government of Madhya Pradesh between 2010 and 2011 during which all rural tube wells and dug wells were sampled in these six districts. Eight hundred thirty-one dug wells and 47,606 tube wells were sampled in total and were analyzed for turbidity, hardness, iron, nitrate, fluoride, chloride, and sulfate ion concentrations. Our study found water in 21 out of the 228 dug wells in Chhindwara district unfit for drinking due to fluoride contamination while all dug wells in Balaghat had fluoride within the permissible limit. Twenty-six of the 56 dug wells and 4825 of the 9390 tube wells in Dhar district exceeded the permissible limit for nitrate while 100% dug wells in Balaghat, Seoni, and Chhindwara had low levels of nitrate. Twenty-four of the 228 dug wells and 1669 of 6790 tube wells in Chhindwara had high iron concentration. The median pH value in both dug wells and tube wells varied between 6 and 8 in all six districts. Still, a significant number of tube wells exceeded a pH of 8.5 especially in Mandla and Seoni districts. In conclusion, this study shows that parts of inhabited rural Madhya Pradesh were potentially exposed to contaminated subsurface water during 2010-2011. The analysis has been correlated with rural health survey results wherever available to estimate the visible impact. We next highlight that the quality of drinking water has enormously improved

  10. A high-throughput sample preparation method for cellular proteomics using 96-well filter plates.

    Science.gov (United States)

    Switzar, Linda; van Angeren, Jordy; Pinkse, Martijn; Kool, Jeroen; Niessen, Wilfried M A

    2013-10-01

    A high-throughput sample preparation protocol based on the use of 96-well molecular weight cutoff (MWCO) filter plates was developed for shotgun proteomics of cell lysates. All sample preparation steps, including cell lysis, buffer exchange, protein denaturation, reduction, alkylation and proteolytic digestion are performed in a 96-well plate format, making the platform extremely well suited for processing large numbers of samples and directly compatible with functional assays for cellular proteomics. In addition, the usage of a single plate for all sample preparation steps following cell lysis reduces potential samples losses and allows for automation. The MWCO filter also enables sample concentration, thereby increasing the overall sensitivity, and implementation of washing steps involving organic solvents, for example, to remove cell membranes constituents. The optimized protocol allowed for higher throughput with improved sensitivity in terms of the number of identified cellular proteins when compared to an established protocol employing gel-filtration columns. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Sample Length Affects the Reliability of Language Sample Measures in 3-Year-Olds: Evidence from Parent-Elicited Conversational Samples

    Science.gov (United States)

    Guo, Ling-Yu; Eisenberg, Sarita

    2015-01-01

    Purpose: The goal of this study was to investigate the extent to which sample length affected the reliability of total number of words (TNW), number of different words (NDW), and mean length of C-units in morphemes (MLCUm) in parent-elicited conversational samples for 3-year-olds. Method: Participants were sixty 3-year-olds. A 22-min language…

  12. Evaluation of list-mode ordered subset expectation maximization image reconstruction for pixelated solid-state Compton gamma camera with large number of channels

    Science.gov (United States)

    Kolstein, M.; De Lorenzo, G.; Chmeissani, M.

    2014-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For the Compton camera, especially with a large number of readout channels, image reconstruction presents a big challenge. In this work, results are presented for the List-Mode Ordered Subset Expectation Maximization (LM-OSEM) image reconstruction algorithm on simulated data with the VIP Compton camera design. For the simulation, all realistic contributions to the spatial resolution are taken into account, including the Doppler broadening effect. The results show that even with a straightforward implementation of LM-OSEM, good images can be obtained for the proposed Compton camera design. Results are shown for various phantoms, including extended sources and with a distance between the field of view and the first detector plane equal to 100 mm, which corresponds to a realistic nuclear medicine environment.
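
    The multiplicative expectation-maximization update at the heart of (LM-)OSEM can be illustrated on a toy problem. In the sketch below a random non-negative matrix stands in for the Compton-camera system response, and a binned MLEM (one subset containing all data) is run; real list-mode OSEM applies the same update event-by-event over ordered subsets. All sizes and names are invented.

        import numpy as np

        rng = np.random.default_rng(42)
        n_pix, n_bins = 64, 256
        A = rng.random((n_bins, n_pix))            # system matrix: P(detect in bin i | emit in pixel j)
        A /= A.sum(axis=0, keepdims=True)

        truth = np.zeros(n_pix)
        truth[20:28] = 50.0                        # a simple "hot" region
        y = rng.poisson(A @ truth)                 # noisy measured projections

        lam = np.ones(n_pix)                       # flat initial image
        sens = A.sum(axis=0)                       # sensitivity image s_j = sum_i A_ij
        for it in range(200):                      # MLEM iterations (one subset = all data)
            expected = A @ lam                     # forward projection
            ratio = y / np.maximum(expected, 1e-12)
            lam *= (A.T @ ratio) / sens            # multiplicative EM update

        print("reconstructed activity in hot region:", lam[20:28].round(1))
        print("true activity in hot region:        ", truth[20:28])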

  13. One Sample, One Shot - Evaluation of sample preparation protocols for the mass spectrometric proteome analysis of human bile fluid without extensive fractionation.

    Science.gov (United States)

    Megger, Dominik A; Padden, Juliet; Rosowski, Kristin; Uszkoreit, Julian; Bracht, Thilo; Eisenacher, Martin; Gerges, Christian; Neuhaus, Horst; Schumacher, Brigitte; Schlaak, Jörg F; Sitek, Barbara

    2017-02-10

    The proteome analysis of bile fluid represents a promising strategy to identify biomarker candidates for various diseases of the hepatobiliary system. However, to obtain substantive results in biomarker discovery studies, large patient cohorts necessarily need to be analyzed. Consequently, this would lead to an unmanageable number of samples to be analyzed if sample preparation protocols with extensive fractionation methods were applied. Hence, the performance of simple workflows allowing for "one sample, one shot" experiments has been evaluated in this study. In detail, sixteen different protocols involving modifications at the stages of desalting, delipidation, deglycosylation and tryptic digestion have been examined. Each method has been individually evaluated regarding various performance criteria, and comparative analyses have been conducted to uncover possible complementarities. Here, the best performance in terms of proteome coverage has been assessed for a combination of acetone precipitation with in-gel digestion. Finally, a mapping of all obtained protein identifications with putative biomarkers for hepatocellular carcinoma (HCC) and cholangiocellular carcinoma (CCC) revealed several proteins easily detectable in bile fluid. These results can build the basis for future studies with large and well-defined patient cohorts in a more disease-related context. Human bile fluid is a proximal body fluid and supposed to be a potential source of disease markers. However, due to its biochemical composition, the proteome analysis of bile fluid still represents a challenging task and is therefore mostly conducted using extensive fractionation procedures. This in turn leads to a high number of mass spectrometric measurements for one biological sample. Considering that a high number of biological samples needs to be analyzed in biomarker discovery studies in order to overcome the biological variability, this leads to the dilemma of an unmanageable number of...

  14. Optimizing liquid effluent monitoring at a large nuclear complex.

    Science.gov (United States)

    Chou, Charissa J; Barnett, D Brent; Johnson, Vernon G; Olson, Phil M

    2003-12-01

    Effluent monitoring typically requires a large number of analytes and samples during the initial or startup phase of a facility. Once a baseline is established, the analyte list and sampling frequency may be reduced. Although there is a large body of literature relevant to the initial design, few, if any, published papers exist on updating established effluent monitoring programs. This paper statistically evaluates four years of baseline data to optimize the liquid effluent monitoring efficiency of a centralized waste treatment and disposal facility at a large defense nuclear complex. Specific objectives were to: (1) assess temporal variability in analyte concentrations, (2) determine operational factors contributing to waste stream variability, (3) assess the probability of exceeding permit limits, and (4) streamline the sampling and analysis regime. Results indicated that the probability of exceeding permit limits was one in a million under normal facility operating conditions, sampling frequency could be reduced, and several analytes could be eliminated. Furthermore, indicators such as gross alpha and gross beta measurements could be used in lieu of more expensive specific isotopic analyses (radium, cesium-137, and strontium-90) for routine monitoring. Study results were used by the state regulatory agency to modify monitoring requirements for a new discharge permit, resulting in an annual cost savings of US $223,000. This case study demonstrates that statistical evaluation of effluent contaminant variability coupled with process knowledge can help plant managers and regulators streamline analyte lists and sampling frequencies based on detection history and environmental risk.
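
    The kind of calculation behind a statement such as "probability of exceeding permit limits" can be sketched as follows: fit a distribution to baseline effluent concentrations and evaluate its upper tail at the limit. The data, distributional choice and limit below are invented for illustration and do not reproduce the study's analysis.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(11)
        baseline = rng.lognormal(mean=np.log(2.0), sigma=0.4, size=200)   # synthetic baseline data, e.g. mg/L
        permit_limit = 15.0                                               # hypothetical permit limit

        # fit a lognormal with location fixed at zero, then evaluate the upper-tail probability
        shape, loc, scale = stats.lognorm.fit(baseline, floc=0.0)
        p_exceed = stats.lognorm.sf(permit_limit, shape, loc=loc, scale=scale)
        print(f"estimated probability of exceeding the permit limit: {p_exceed:.2e}")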

  15. Prevalence and correlates of problematic smartphone use in a large random sample of Chinese undergraduates.

    Science.gov (United States)

    Long, Jiang; Liu, Tie-Qiao; Liao, Yan-Hui; Qi, Chang; He, Hao-Yu; Chen, Shu-Bao; Billieux, Joël

    2016-11-17

    Smartphones are becoming a daily necessity for most undergraduates in Mainland China. Because the present scenario of problematic smartphone use (PSU) is largely unexplored, in the current study we aimed to estimate the prevalence of PSU and to screen suitable predictors for PSU among Chinese undergraduates in the framework of the stress-coping theory. A sample of 1062 undergraduate smartphone users was recruited by means of the stratified cluster random sampling strategy between April and May 2015. The Problematic Cellular Phone Use Questionnaire was used to identify PSU. We evaluated five candidate risk factors for PSU by using logistic regression analysis while controlling for demographic characteristics and specific features of smartphone use. The prevalence of PSU among Chinese undergraduates was estimated to be 21.3%. The risk factors for PSU were majoring in the humanities, high monthly income from the family (≥1500 RMB), serious emotional symptoms, high perceived stress, and perfectionism-related factors (high doubts about actions, high parental expectations). PSU among undergraduates appears to be ubiquitous and thus constitutes a public health issue in Mainland China. Although further longitudinal studies are required to test whether PSU is a transient phenomenon or a chronic and progressive condition, our study successfully identified socio-demographic and psychological risk factors for PSU. These results, obtained from a random and thus representative sample of undergraduates, opens up new avenues in terms of prevention and regulation policies.

  16. Geosamples.org: Shared Cyberinfrastructure for Geoscience Samples

    Science.gov (United States)

    Lehnert, Kerstin; Allison, Lee; Arctur, David; Klump, Jens; Lenhardt, Christopher

    2014-05-01

    Many scientific domains, specifically in the geosciences, rely on physical samples as basic elements for study and experimentation. Samples are collected to analyze properties of natural materials and features that are key to our knowledge of Earth's dynamical systems and evolution, and to preserve a record of our environment over time. Huge volumes of samples have been acquired over decades or even centuries and stored in a large number and variety of institutions including museums, universities and colleges, state geological surveys, federal agencies, and industry. All of these collections represent highly valuable, often irreplaceable records of nature that need to be accessible so that they can be re-used in future research and for educational purposes. Many sample repositories are keen to use cyberinfrastructure capabilities to enhance access to their collections on the internet and to support and streamline collection management (accessioning of new samples, labeling, handling sample requests, etc.), but encounter substantial challenges and barriers to integrate digital sample management into their daily routine. They lack the resources (staff, funding) and infrastructure (hardware, software, IT support) to develop and operate web-enabled databases, to migrate analog sample records into digital data management systems, and to transfer paper- or spreadsheet-based workflows to electronic systems. Use of commercial software is often not an option as it incurs high costs for licenses, requires IT expertise for installation and maintenance, and often does not match the needs of the smaller repositories, being designed for large museums or different types of collections (art, archeological, biological). Geosamples.org is an alliance of sample repositories (academic, US federal and state surveys, industry) and data facilities that aims to develop a cyberinfrastructure that will dramatically advance access to physical samples for the research community, government

  17. Evaluation of two sweeping methods for estimating the number of immature Aedes aegypti (Diptera: Culicidae) in large containers

    Directory of Open Access Journals (Sweden)

    Margareth Regina Dibo

    2013-07-01

    Introduction: Here, we evaluated sweeping methods used to estimate the number of immature Aedes aegypti in large containers. Methods: III/IV instars and pupae at a 9:1 ratio were placed in three types of containers, each one with three different water levels. Two sweeping methods were tested: water-surface sweeping and five-sweep netting. The data were analyzed using linear regression. Results: The five-sweep netting technique was more suitable for drums and water tanks, while the water-surface sweeping method provided the best results for swimming pools. Conclusions: Both sweeping methods are useful tools in epidemiological surveillance programs for the control of Aedes aegypti.

  18. Tissue Sampling Guides for Porcine Biomedical Models.

    Science.gov (United States)

    Albl, Barbara; Haesner, Serena; Braun-Reichhart, Christina; Streckel, Elisabeth; Renner, Simone; Seeliger, Frank; Wolf, Eckhard; Wanke, Rüdiger; Blutke, Andreas

    2016-04-01

    This article provides guidelines for organ and tissue sampling adapted to porcine animal models in translational medical research. Detailed protocols for the determination of sampling locations and numbers as well as recommendations on the orientation, size, and trimming direction of samples from ∼50 different porcine organs and tissues are provided in the Supplementary Material. The proposed sampling protocols include the generation of samples suitable for subsequent qualitative and quantitative analyses, including cryohistology, paraffin, and plastic histology; immunohistochemistry; in situ hybridization; electron microscopy; and quantitative stereology, as well as molecular analyses of DNA, RNA, proteins, metabolites, and electrolytes. With regard to the planned extent of sampling efforts, time, and personnel expenses, and dependent upon the scheduled analyses, different protocols are provided. These protocols are adjusted for (I) routine screenings, as used in general toxicity studies or in analyses of gene expression patterns or histopathological organ alterations, (II) advanced analyses of single organs/tissues, and (III) large-scale sampling procedures to be applied in biobank projects. Providing a robust reference for studies of porcine models, the described protocols will ensure the efficiency of sampling, the systematic recovery of high-quality samples representing the entire organ or tissue as well as the intra-/interstudy comparability and reproducibility of results. © The Author(s) 2016.

  19. An examination of smoking behavior and opinions about smoke-free environments in a large sample of sexual and gender minority community members.

    Science.gov (United States)

    McElroy, Jane A; Everett, Kevin D; Zaniletti, Isabella

    2011-06-01

    The purpose of this study is to more completely quantify smoking rate and support for smoke-free policies in private and public environments from a large sample of self-identified sexual and gender minority (SGM) populations. A targeted sampling strategy recruited participants from 4 Missouri Pride Festivals and online surveys targeted to SGM populations during the summer of 2008. A 24-item survey gathered information on gender and sexual orientation, smoking status, and questions assessing behaviors and preferences related to smoke-free policies. The project recruited participants through Pride Festivals (n = 2,676) and Web-based surveys (n = 231) representing numerous sexual and gender orientations and the racial composition of the state of Missouri. Differences were found between the Pride Festivals sample and the Web-based sample, including smoking rates: current smoking in the Web-based sample (22%) was significantly lower than in the Pride Festivals sample (37%). SGM participants were significantly more likely to be current smokers than the study's heterosexual comparison group (n = 436; p = .005). Statistically fewer SGM racial minorities (33%) are current smokers compared with SGM Whites (37%; p = .04). Support and preferences for public and private smoke-free environments were generally low in the SGM population. The strategic targeting method achieved a large and diverse sample. The findings of high rates of smoking coupled with generally low levels of support for smoke-free public policies in the SGM community highlight the need for additional research to inform programmatic attempts to reduce tobacco use and increase support for smoke-free environments.

  20. Characteristic Performance Evaluation of a new SAGe Well Detector for Small and Large Sample Geometries

    International Nuclear Information System (INIS)

    Adekola, A.S.; Colaresi, J.; Douwen, J.; Jaederstroem, H.; Mueller, W.F.; Yocum, K.M.; Carmichael, K.

    2015-01-01

    concentrations compared to Traditional Well detectors. The SAGe Well detectors are compatible with Marinelli beakers and compete very well with semi-planar and coaxial detectors for large samples in many applications. (authors)

  1. Characteristic Performance Evaluation of a new SAGe Well Detector for Small and Large Sample Geometries

    Energy Technology Data Exchange (ETDEWEB)

    Adekola, A.S.; Colaresi, J.; Douwen, J.; Jaederstroem, H.; Mueller, W.F.; Yocum, K.M.; Carmichael, K. [Canberra Industries Inc., 800 Research Parkway, Meriden, CT 06450 (United States)

    2015-07-01

    concentrations compared to Traditional Well detectors. The SAGe Well detectors are compatible with Marinelli beakers and compete very well with semi-planar and coaxial detectors for large samples in many applications. (authors)

  2. Large Scale Self-Organizing Information Distribution System

    National Research Council Canada - National Science Library

    Low, Steven

    2005-01-01

    This project investigates issues in "large-scale" networks. Here "large-scale" refers to networks with a large number of high-capacity nodes and transmission links, shared by a large number of users...

  3. CASP10-BCL::Fold efficiently samples topologies of large proteins.

    Science.gov (United States)

    Heinze, Sten; Putnam, Daniel K; Fischer, Axel W; Kohlmann, Tim; Weiner, Brian E; Meiler, Jens

    2015-03-01

    During CASP10 in summer 2012, we tested BCL::Fold for prediction of free modeling (FM) and template-based modeling (TBM) targets. BCL::Fold assembles the tertiary structure of a protein from predicted secondary structure elements (SSEs), omitting more flexible loop regions early on. This approach enables the sampling of conformational space for larger proteins with more complex topologies. In preparation for CASP11, we analyzed the quality of CASP10 models throughout the prediction pipeline to understand BCL::Fold's ability to sample the native topology, identify native-like models by scoring and/or clustering approaches, and our ability to add loop regions and side chains to initial SSE-only models. The standout observation is that BCL::Fold sampled topologies with a GDT_TS score > 33% for 12 of 18 and with a topology score > 0.8 for 11 of 18 test cases de novo. Despite the sampling success of BCL::Fold, significant challenges still exist in the clustering and loop generation stages of the pipeline. The clustering approach employed for model selection often failed to identify the most native-like assembly of SSEs for further refinement and submission. It was also observed that for some β-strand proteins model refinement failed, as β-strands were not properly aligned to form hydrogen bonds, removing otherwise accurate models from the pool. Further, BCL::Fold frequently samples non-natural topologies that require loop regions to pass through the center of the protein. © 2015 Wiley Periodicals, Inc.

  4. Detection of large numbers of novel sequences in the metatranscriptomes of complex marine microbial communities.

    Science.gov (United States)

    Gilbert, Jack A; Field, Dawn; Huang, Ying; Edwards, Rob; Li, Weizhong; Gilna, Paul; Joint, Ian

    2008-08-22

    Sequencing the expressed genetic information of an ecosystem (metatranscriptome) can provide information about the response of organisms to varying environmental conditions. Until recently, metatranscriptomics has been limited to microarray technology and random cloning methodologies. The application of high-throughput sequencing technology is now enabling access to both known and previously unknown transcripts in natural communities. We present a study of a complex marine metatranscriptome obtained from random whole-community mRNA using the GS-FLX Pyrosequencing technology. Eight samples, four DNA and four mRNA, were processed from two time points in a controlled coastal ocean mesocosm study (Bergen, Norway) involving an induced phytoplankton bloom producing a total of 323,161,989 base pairs. Our study confirms the finding of the first published metatranscriptomic studies of marine and soil environments that metatranscriptomics targets highly expressed sequences which are frequently novel. Our alternative methodology increases the range of experimental options available for conducting such studies and is characterized by an exceptional enrichment of mRNA (99.92%) versus ribosomal RNA. Analysis of corresponding metagenomes confirms much higher levels of assembly in the metatranscriptomic samples and a far higher yield of large gene families with >100 members, approximately 91% of which were novel. This study provides further evidence that metatranscriptomic studies of natural microbial communities are not only feasible, but when paired with metagenomic data sets, offer an unprecedented opportunity to explore both structure and function of microbial communities--if we can overcome the challenges of elucidating the functions of so many never-seen-before gene families.

  5. Theory of sampling. A mini seminar under the NKS project SAMPSTRAT

    International Nuclear Information System (INIS)

    Holm, E.; Oestergaard, L.F.; Sidhu, R.

    2006-04-01

    In an emergency situation a large number of matrices can be contaminated, and samples of these different matrices will be collected. These sample matrices might be, and often certainly are, heterogeneous and in general more unevenly distributed than fallout from nuclear tests or even from the Chernobyl accident. On the basis of the reported data, conclusions are drawn and remedial actions are taken that entail social and economic costs for society. Therefore the number of samples from each site, their size and further homogenization are of great importance. In an emergency situation the activities are generally high and the errors due to counting statistics are small. We could also imagine a situation where a certain nuclear enterprise/activity has to close down or is prosecuted, based on sampling and analysis, for not following directives on discharging radioactivity into the environment. We therefore organized a seminar focusing on the above-mentioned problems. The seminar covered several important topics such as an introduction to Theory of Sampling (TOS), Lot heterogeneity and sampling in practice, Statistics for sampling in analytical chemistry, and Representative mass reduction in sampling. Case studies were presented, such as Sampling of heterogeneous bottom ash from municipal waste-incineration plants and Sampling and inventories at Thule, Greenland, which also illustrated the difficulties with plutonium inventory calculations in sediments when hot particles are present. (au)

  6. Sampling pig farms at the abattoir in a cross-sectional study - Evaluation of a sampling method.

    Science.gov (United States)

    Birkegård, Anna Camilla; Halasa, Tariq; Toft, Nils

    2017-09-15

    A cross-sectional study design is relatively inexpensive, fast and easy to conduct when compared to other study designs. Careful planning is essential to obtaining a representative sample of the population, and the recommended approach is to use simple random sampling from an exhaustive list of units in the target population. This approach is rarely feasible in practice, and other sampling procedures must often be adopted. For example, when slaughter pigs are the target population, sampling the pigs on the slaughter line may be an alternative to on-site sampling at a list of farms. However, it is difficult to sample a large number of farms from an exact predefined list, due to the logistics and workflow of an abattoir. Therefore, it is necessary to have a systematic sampling procedure and to evaluate the obtained sample with respect to the study objective. We propose a method for 1) planning, 2) conducting, and 3) evaluating the representativeness and reproducibility of a cross-sectional study when simple random sampling is not possible. We used an example of a cross-sectional study with the aim of quantifying the association of antimicrobial resistance and antimicrobial consumption in Danish slaughter pigs. It was not possible to visit farms within the designated timeframe. Therefore, it was decided to use convenience sampling at the abattoir. Our approach was carried out in three steps: 1) planning: using data from meat inspection to plan at which abattoirs and how many farms to sample; 2) conducting: sampling was carried out at five abattoirs; 3) evaluation: representativeness was evaluated by comparing sampled and non-sampled farms, and the reproducibility of the study was assessed through simulated sampling based on meat inspection data from the period where the actual data collection was carried out. In the cross-sectional study samples were taken from 681 Danish pig farms, during five weeks from February to March 2015. The evaluation showed that the sampling

  7. Evaluation of a Class of Simple and Effective Uncertainty Methods for Sparse Samples of Random Variables and Functions

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Vicente [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bonney, Matthew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schroeder, Benjamin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Weirs, V. Gregory [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-11-01

    When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: central 95% of response; and 10^-4 probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depends on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large data base and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.
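
    One textbook way to conservatively bound the central 95% of a response from a handful of samples is a normal-theory two-sided tolerance interval; the sketch below uses Howe's approximation for the tolerance factor. It is offered only as an example in the spirit of the sparse-sample methods discussed above, not as a reproduction of the report's methods; the data and sample size are invented.

        import numpy as np
        from scipy import stats

        def tolerance_factor(n, coverage=0.95, confidence=0.95):
            """Approximate two-sided normal tolerance factor (Howe's approximation)."""
            z = stats.norm.ppf(0.5 + coverage / 2.0)
            nu = n - 1
            chi2_low = stats.chi2.ppf(1.0 - confidence, nu)   # lower chi-square quantile
            return z * np.sqrt(nu * (1.0 + 1.0 / n) / chi2_low)

        rng = np.random.default_rng(7)
        data = rng.normal(loc=10.0, scale=2.0, size=8)        # a sparse sample (n = 8)
        k = tolerance_factor(len(data))
        lo = data.mean() - k * data.std(ddof=1)
        hi = data.mean() + k * data.std(ddof=1)
        print(f"k = {k:.2f}; claimed bound on the central 95% of response: [{lo:.2f}, {hi:.2f}]")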

  8. Toward Rapid Unattended X-ray Tomography of Large Planar Samples at 50-nm Resolution

    International Nuclear Information System (INIS)

    Rudati, J.; Tkachuk, A.; Gelb, J.; Hsu, G.; Feng, Y.; Pastrick, R.; Lyon, A.; Trapp, D.; Beetz, T.; Chen, S.; Hornberger, B.; Seshadri, S.; Kamath, S.; Zeng, X.; Feser, M.; Yun, W.; Pianetta, P.; Andrews, J.; Brennan, S.; Chu, Y. S.

    2009-01-01

    X-ray tomography at sub-50 nm resolution of small areas (∼15 μm × 15 μm) is routinely performed with both laboratory and synchrotron sources. Optics and detectors for laboratory systems have been optimized to approach the theoretical efficiency limit. Limited by the availability of relatively low-brightness laboratory X-ray sources, exposure times for 3-D data sets at 50 nm resolution are still many hours up to a full day. However, for bright synchrotron sources, the use of these optimized imaging systems results in extremely short exposure times, approaching live-camera speeds at the Advanced Photon Source at Argonne National Laboratory near Chicago in the US. These speeds make it possible to acquire a full tomographic dataset at 50 nm resolution in less than a minute of true X-ray exposure time. However, limits in the control and positioning system lead to large overhead that results in typical exposure times of ∼15 min currently. We present our work on the reduction and elimination of system overhead and toward complete automation of the data acquisition process. The enhancements underway are primarily to boost the scanning rate, sample positioning speed, and illumination homogeneity to performance levels necessary for unattended tomography of large areas (many mm^2 in size). We present first results on this ongoing project.

  9. Fluctuations of nuclear cross sections in the region of strong overlapping resonances and at large number of open channels

    International Nuclear Information System (INIS)

    Kun, S.Yu.

    1985-01-01

    On the basis of the symmetrized Simonius representation of the S matrix, statistical properties of its fluctuating component in the presence of direct reactions are investigated. The case is considered where the resonance levels are strongly overlapping and there are many open channels, assuming that compound-nucleus cross sections which couple different channels are equal. It is shown that, using the averaged unitarity condition on the real energy axis, one can eliminate both resonance-resonance and channel-channel correlations from partial transition amplitudes. As a result, we derive the basic points of the Ericson fluctuation theory of nuclear cross sections, independently of the relation between the resonance overlapping and the number of open channels, and the validity of the Hauser-Feshbach model is established. If the number of open channels is large, the time of uniform population of compound-nucleus configurations, for an open excited nuclear system, is much smaller than the Poincare time. The lifetime of the compound nucleus is discussed.

  10. RNA-seq: technical variability and sampling

    Science.gov (United States)

    2011-01-01

    Background: RNA-seq is revolutionizing the way we study transcriptomes. mRNA can be surveyed without prior knowledge of gene transcripts. Alternative splicing of transcript isoforms and the identification of previously unknown exons are being reported. Initial reports of differences in exon usage and splicing between samples, as well as quantitative differences among samples, are beginning to surface. Biological variation has been reported to be larger than technical variation. In addition, technical variation has been reported to be in line with expectations due to random sampling. However, strategies for dealing with technical variation will differ depending on the magnitude. The size of technical variance and the role of sampling are examined in this manuscript. Results: In this study three independent Solexa/Illumina experiments containing technical replicates are analyzed. When coverage is low, large disagreements between technical replicates are apparent. Exon detection between technical replicates is highly variable when the coverage is less than 5 reads per nucleotide, and estimates of gene expression are more likely to disagree when coverage is low, although large disagreements in the estimates of expression are observed at all levels of coverage. Conclusions: Technical variability is too high to ignore. Technical variability results in inconsistent detection of exons at low levels of coverage. Further, the estimate of the relative abundance of a transcript can substantially disagree, even when coverage levels are high. This may be due to the low sampling fraction and, if so, it will persist as an issue needing to be addressed in experimental design even as the next wave of technology produces larger numbers of reads. We provide practical recommendations for dealing with the technical variability, without dramatic cost increases. PMID:21645359
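
    The effect of random read sampling on technical replicates can be illustrated with a small simulation: two replicates draw reads from identical transcript abundances, and the disagreement in their estimated fold changes shrinks as sequencing depth grows. All numbers below are invented.

        import numpy as np

        rng = np.random.default_rng(3)
        n_genes = 5000
        abundance = rng.lognormal(mean=1.0, sigma=1.5, size=n_genes)
        p = abundance / abundance.sum()                 # true relative abundances

        for depth in (1e5, 1e6, 1e7):                   # total reads per "technical replicate"
            rep1 = rng.multinomial(int(depth), p)
            rep2 = rng.multinomial(int(depth), p)
            keep = (rep1 > 0) & (rep2 > 0)              # genes detected in both replicates
            log2fc = np.log2(rep1[keep] / rep2[keep])   # replicate-vs-replicate fold change (truth = 0)
            frac_discordant = np.mean(np.abs(log2fc) > 0.5)
            print(f"depth {depth:>10.0f}: detected {keep.sum():5d}/{n_genes} genes in both, "
                  f"|log2 FC| > 0.5 for {100 * frac_discordant:4.1f}% of them")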

  11. Low-Reynolds Number Effects in Ventilated Rooms

    DEFF Research Database (Denmark)

    Davidson, Lars; Nielsen, Peter V.; Topp, Claus

    In the present study, we use Large Eddy Simulation (LES), which is a suitable method for simulating the flow in ventilated rooms at low Reynolds numbers.

  12. Forensic Tools to Track and Connect Physical Samples to Related Data

    Science.gov (United States)

    Molineux, A.; Thompson, A. C.; Baumgardner, R. W.

    2016-12-01

    Identifiers, such as local sample numbers, are critical to successfully connecting physical samples and related data. However, identifiers must be globally unique. The International Geo Sample Number (IGSN) generated when registering the sample in the System for Earth Sample Registration (SESAR) provides a globally unique alphanumeric code associated with basic metadata, related samples and their current physical storage location. When registered samples are published, users can link the figured samples to the basic metadata held at SESAR. The use cases we discuss include plant specimens from a Permian core, Holocene corals and derived powders, and thin sections with SEM stubs. Much of this material is now published. The plant taxonomic study from the core is a digital pdf, and samples can be directly linked from the captions to the SESAR record. The study of stable isotopes from the corals is not yet digitally available, but individual samples are accessible. Full data and media records for both studies are located in our database, where higher quality images, field notes, and section diagrams may exist. Georeferences permit mapping in current and deep time plate configurations. Several aspects emerged during this study. First, ensure that adequate and consistent details are registered with SESAR. Second, educate and encourage the researcher to obtain IGSNs. Third, publish the archive numbers, assigned prior to publication, alongside the IGSN. This provides access to further data through an Integrated Publishing Toolkit (IPT), aggregators, or online repository databases, thus placing the initial sample in a much richer context for future studies. Fourth, encourage software developers to customize community software to extract data from a database and use it to register samples in bulk. This would improve workflow and provide a path for registration of large legacy collections.

  13. Sample preparation for phosphoproteomic analysis of circadian time series in Arabidopsis thaliana.

    Science.gov (United States)

    Krahmer, Johanna; Hindle, Matthew M; Martin, Sarah F; Le Bihan, Thierry; Millar, Andrew J

    2015-01-01

    Systems biological approaches to study the Arabidopsis thaliana circadian clock have mainly focused on transcriptomics while little is known about the proteome, and even less about posttranslational modifications. Evidence has emerged that posttranslational protein modifications, in particular phosphorylation, play an important role for the clock and its output. Phosphoproteomics is the method of choice for a large-scale approach to gain more knowledge about rhythmic protein phosphorylation. Recent plant phosphoproteomics publications have identified several thousand phosphopeptides. However, the methods used in these studies are very labor-intensive and therefore not suitable to apply to a well-replicated circadian time series. To address this issue, we present and compare different strategies for sample preparation for phosphoproteomics that are compatible with large numbers of samples. Methods are compared regarding number of identifications, variability of quantitation, and functional categorization. We focus on the type of detergent used for protein extraction as well as methods for its removal. We also test a simple two-fraction separation of the protein extract. © 2015 Elsevier Inc. All rights reserved.

  14. A simulative comparison of respondent driven sampling with incentivized snowball sampling – the “strudel effect”

    Science.gov (United States)

    Gyarmathy, V. Anna; Johnston, Lisa G.; Caplinskiene, Irma; Caplinskas, Saulius; Latkin, Carl A.

    2014-01-01

    Background Respondent driven sampling (RDS) and Incentivized Snowball Sampling (ISS) are two sampling methods that are commonly used to reach people who inject drugs (PWID). Methods We generated a set of simulated RDS samples on an actual sociometric ISS sample of PWID in Vilnius, Lithuania (“original sample”) to assess if the simulated RDS estimates were statistically significantly different from the original ISS sample prevalences for HIV (9.8%), Hepatitis A (43.6%), Hepatitis B (Anti-HBc 43.9% and HBsAg 3.4%), Hepatitis C (87.5%), syphilis (6.8%) and Chlamydia (8.8%) infections and for selected behavioral risk characteristics. Results The original sample consisted of a large component of 249 people (83% of the sample) and 13 smaller components with 1 to 12 individuals. Generally, as long as all seeds were recruited from the large component of the original sample, the simulation samples simply recreated the large component. There were no significant differences between the large component and the entire original sample for the characteristics of interest. Altogether 99.2% of 360 simulation sample point estimates were within the confidence interval of the original prevalence values for the characteristics of interest. Conclusions When population characteristics are reflected in large network components that dominate the population, RDS and ISS may produce samples that have statistically non-different prevalence values, even though some isolated network components may be under-sampled and/or statistically significantly different from the main groups. This so-called “strudel effect” is discussed in the paper. PMID:24360650

  15. On Matrix Sampling and Imputation of Context Questionnaires with Implications for the Generation of Plausible Values in Large-Scale Assessments

    Science.gov (United States)

    Kaplan, David; Su, Dan

    2016-01-01

    This article presents findings on the consequences of matrix sampling of context questionnaires for the generation of plausible values in large-scale assessments. Three studies are conducted. Study 1 uses data from PISA 2012 to examine several different forms of missing data imputation within the chained equations framework: predictive mean…

  16. 40 CFR 761.283 - Determination of the number of samples to collect and sample collection locations.

    Science.gov (United States)

    2010-07-01

    ...) MANUFACTURING, PROCESSING, DISTRIBUTION IN COMMERCE, AND USE PROHIBITIONS Sampling To Verify Completion of Self... cleanup verification conducted in accordance with § 761.61(a)(6), follow the procedures in paragraph (b... verification conducted in accordance with § 761.61(a)(6), follow the procedures in this section for locating...

  17. An examination of the RCMAS-2 scores across gender, ethnic background, and age in a large Asian school sample.

    Science.gov (United States)

    Ang, Rebecca P; Lowe, Patricia A; Yusof, Noradlin

    2011-12-01

    The present study investigated the factor structure, reliability, convergent and discriminant validity, and U.S. norms of the Revised Children's Manifest Anxiety Scale, Second Edition (RCMAS-2; C. R. Reynolds & B. O. Richmond, 2008a) scores in a Singapore sample of 1,618 school-age children and adolescents. Although there were small statistically significant differences in the average RCMAS-2 T scores found across various demographic groupings, on the whole, the U.S. norms appear adequate for use in the Asian Singapore sample. Results from item bias analyses suggested that biased items detected had small effects and were counterbalanced across gender and ethnicity, and hence, their relative impact on test score variation appears to be minimal. Results of factor analyses on the RCMAS-2 scores supported the presence of a large general anxiety factor, the Total Anxiety factor, and the 5-factor structure found in U.S. samples was replicated. Both the large general anxiety factor and the 5-factor solution were invariant across gender and ethnic background. Internal consistency estimates ranged from adequate to good, and 2-week test-retest reliability estimates were comparable to previous studies. Evidence providing support for convergent and discriminant validity of the RCMAS-2 scores was also found. Taken together, findings provide additional cross-cultural evidence of the appropriateness and usefulness of the RCMAS-2 as a measure of anxiety in Asian Singaporean school-age children and adolescents.

  18. Clauser-Horne-Shimony-Holt versus three-party pseudo-telepathy: on the optimal number of samples in device-independent quantum private query

    Science.gov (United States)

    Basak, Jyotirmoy; Maitra, Subhamoy

    2018-04-01

    In the device-independent (DI) paradigm, the trust assumptions on the devices are removed and a CHSH test is performed to check the functionality of the devices toward certifying the security of the protocol. The existing DI protocols consider an infinite number of samples from a theoretical point of view, though this is not practically implementable. For a finite-sample analysis of the existing DI protocols, we may also consider strategies for checking device independence other than the CHSH test. In this direction, here we present a comparative analysis between the CHSH and three-party pseudo-telepathy games for the quantum private query protocol in the DI paradigm that appeared in Maitra et al. (Phys Rev A 95:042344, 2017) very recently.

  19. Exploitation of immunofluorescence for the quantification and characterization of small numbers of Pasteuria endospores.

    Science.gov (United States)

    Costa, Sofia R; Kerry, Brian R; Bardgett, Richard D; Davies, Keith G

    2006-12-01

    The Pasteuria group of endospore-forming bacteria has been studied as a biocontrol agent of plant-parasitic nematodes. Techniques have been developed for its detection and quantification in soil samples, and these mainly focus on observations of endospore attachment to nematodes. Characterization of Pasteuria populations has recently been performed with DNA-based techniques, which usually require the extraction of large numbers of spores. We describe a simple immunological method for the quantification and characterization of Pasteuria populations. Bayesian statistics were used to determine an extraction efficiency of 43% and a threshold of detection of 210 endospores g⁻¹ sand. This provided a robust means of estimating numbers of endospores in small-volume samples from a natural system. Based on visual assessment of endospore fluorescence, a quantitative method was developed to characterize endospore populations, which were shown to vary according to their host.
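
    A minimal sketch of how an extraction efficiency and a detection threshold of this kind could be obtained from spike-recovery counts under a simple beta-binomial model; this is not the authors' Bayesian model, and every number below (spiked counts, recovered counts, sample mass) is invented for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical spike-recovery experiment: a known number of endospores is
# added to sand and counted back after extraction (all numbers invented).
spiked = np.array([500, 500, 1000, 1000])
recovered = np.array([220, 205, 440, 425])

# Beta posterior for the extraction efficiency under a binomial model
# with a flat Beta(1, 1) prior.
post = stats.beta(1 + recovered.sum(), 1 + (spiked - recovered).sum())
print(f"efficiency: mean {post.mean():.2f}, "
      f"95% CI {post.ppf(0.025):.2f}-{post.ppf(0.975):.2f}")

# Toy detection threshold: smallest true density (spores per gram) at which
# at least one spore is recovered with 95% probability from a 1 g sample,
# given the posterior-mean efficiency.
eff = post.mean()
densities = np.arange(1, 500)
p_detect = 1 - stats.binom.pmf(0, densities, eff)
threshold = densities[np.argmax(p_detect >= 0.95)]
print(f"toy detection threshold ~ {threshold} endospores per gram")
```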

  20. Investigating sex differences in psychological predictors of snack intake among a large representative sample.

    Science.gov (United States)

    Adriaanse, Marieke A; Evers, Catharine; Verhoeven, Aukje A C; de Ridder, Denise T D

    2016-03-01

    It is often assumed that there are substantial sex differences in eating behaviour (e.g. women are more likely to be dieters or emotional eaters than men). The present study investigates this assumption in a large representative community sample while incorporating a comprehensive set of psychological eating-related variables. A community sample was employed to: (i) determine sex differences in (un)healthy snack consumption and psychological eating-related variables (e.g. emotional eating, intention to eat healthily); (ii) examine whether sex predicts energy intake from (un)healthy snacks over and above psychological variables; and (iii) investigate the relationship between psychological variables and snack intake for men and women separately. Snack consumption was assessed with a 7-day snack diary; the psychological eating-related variables, with questionnaires. Participants were members of an Internet survey panel that is based on a true probability sample of households in the Netherlands: 1292 men and women (45% male), with a mean age of 51.23 (SD 16.78) years and a mean BMI of 25.62 (SD 4.75) kg/m². Results revealed that women consumed more healthy and less unhealthy snacks than men and they scored higher than men on emotional and restrained eating. Women also more often reported appearance- and health-related concerns about their eating behaviour, but men and women did not differ with regard to external eating or their intentions to eat more healthily. The relationships between psychological eating-related variables and snack intake were similar for men and women, indicating that snack intake is predicted by the same variables for men and women. It is concluded that some small sex differences in psychological eating-related variables exist, but based on the present data there is no need for interventions aimed at promoting healthy eating to target different predictors according to sex.

  1. The impact of sample size and marker selection on the study of haplotype structures

    Directory of Open Access Journals (Sweden)

    Sun Xiao

    2004-03-01

    Full Text Available Abstract Several studies of haplotype structures in the human genome in various populations have found that the human chromosomes are structured such that each chromosome can be divided into many blocks, within which there is limited haplotype diversity. In addition, only a few genetic markers in a putative block are needed to capture most of the diversity within a block. There has been no systematic empirical study of the effects of sample size and marker set on the identified block structures and representative marker sets, however. The purpose of this study was to conduct a detailed empirical study to examine such impacts. Towards this goal, we have analysed three representative autosomal regions from a large genome-wide study of haplotypes with samples consisting of African-Americans and samples consisting of Japanese and Chinese individuals. For both populations, we have found that the sample size and marker set have significant impact on the number of blocks and the total number of representative markers identified. The marker set in particular has very strong impacts, and our results indicate that the marker density in the original datasets may not be adequate to allow a meaningful characterisation of haplotype structures. In general, we conclude that we need a relatively large sample size and a very dense marker panel in the study of haplotype structures in human populations.

  2. Large contribution of human papillomavirus in vaginal neoplastic lesions: a worldwide study in 597 samples.

    Science.gov (United States)

    Alemany, L; Saunier, M; Tinoco, L; Quirós, B; Alvarado-Cabrero, I; Alejo, M; Joura, E A; Maldonado, P; Klaustermeier, J; Salmerón, J; Bergeron, C; Petry, K U; Guimerà, N; Clavero, O; Murillo, R; Clavel, C; Wain, V; Geraets, D T; Jach, R; Cross, P; Carrilho, C; Molina, C; Shin, H R; Mandys, V; Nowakowski, A M; Vidal, A; Lombardi, L; Kitchener, H; Sica, A R; Magaña-León, C; Pawlita, M; Quint, W; Bravo, I G; Muñoz, N; de Sanjosé, S; Bosch, F X

    2014-11-01

    This work describes the human papillomavirus (HPV) prevalence and the HPV type distribution in a large series of vaginal intraepithelial neoplasia (VAIN) grades 2/3 and vaginal cancer worldwide. We analysed 189 VAIN 2/3 and 408 invasive vaginal cancer cases collected from 31 countries from 1986 to 2011. After histopathological evaluation of sectioned formalin-fixed paraffin-embedded samples, HPV DNA detection and typing was performed using the SPF-10/DNA enzyme immunoassay (DEIA)/LiPA25 system (version 1). A subset of 146 vaginal cancers was tested for p16(INK4a) expression, a cellular surrogate marker for HPV transformation. Prevalence ratios were estimated using multivariate Poisson regression with robust variance. HPV DNA was detected in 74% (95% confidence interval (CI): 70-78%) of invasive cancers and in 96% (95% CI: 92-98%) of VAIN 2/3. Among cancers, the highest detection rates were observed in the warty-basaloid subtype of squamous cell carcinomas, and at younger ages. Concerning the type-specific distribution, HPV16 was the most frequently detected type in both precancerous and cancerous lesions (59%). p16(INK4a) overexpression was found in 87% of HPV DNA positive vaginal cancer cases. HPV was identified in a large proportion of invasive vaginal cancers and in almost all VAIN 2/3. HPV16 was the most common type detected. A large reduction in the burden of vaginal neoplastic lesions is expected among vaccinated cohorts. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Fast integration using quasi-random numbers

    International Nuclear Information System (INIS)

    Bossert, J.; Feindt, M.; Kerzel, U.

    2006-01-01

    Quasi-random numbers are specially constructed series of numbers optimised to evenly sample a given s-dimensional volume. Using quasi-random numbers in numerical integration converges faster with a higher accuracy compared to the case of pseudo-random numbers. The basic properties of quasi-random numbers are introduced, various generators are discussed and the achieved gain is illustrated by examples

  4. Fast integration using quasi-random numbers

    Science.gov (United States)

    Bossert, J.; Feindt, M.; Kerzel, U.

    2006-04-01

    Quasi-random numbers are specially constructed series of numbers optimised to evenly sample a given s-dimensional volume. Using quasi-random numbers in numerical integration converges faster with a higher accuracy compared to the case of pseudo-random numbers. The basic properties of quasi-random numbers are introduced, various generators are discussed and the achieved gain is illustrated by examples.
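
    As a concrete illustration of the gain (not part of the paper), the sketch below integrates a smooth test function over the unit square using plain pseudo-random points and a scrambled Sobol quasi-random sequence from scipy.stats.qmc, and compares the errors against the known integral.

```python
import numpy as np
from scipy.stats import qmc

def f(x):
    # Smooth test integrand on [0, 1]^2 with known integral 0.25.
    return x[:, 0] * x[:, 1]

exact = 0.25
rng = np.random.default_rng(1)

# Pseudo-random Monte Carlo estimate with 4096 points.
pseudo = f(rng.random((2**12, 2))).mean()

# Quasi-random (scrambled Sobol) estimate with the same number of points.
sobol = qmc.Sobol(d=2, scramble=True, seed=1)
quasi = f(sobol.random_base2(m=12)).mean()

print(f"pseudo-random error: {abs(pseudo - exact):.2e}")
print(f"quasi-random  error: {abs(quasi - exact):.2e}")
```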

  5. 99Mo Yield Using Large Sample Mass of MoO3 for Sustainable Production of 99Mo

    Science.gov (United States)

    Tsukada, Kazuaki; Nagai, Yasuki; Hashimoto, Kazuyuki; Kawabata, Masako; Minato, Futoshi; Saeki, Hideya; Motoishi, Shoji; Itoh, Masatoshi

    2018-04-01

    A neutron source from the C(d,n) reaction has the unique capability of producing medical radioisotopes such as 99Mo with a minimum level of radioactive waste. Precise data on the neutron flux are crucial to determine the best conditions for obtaining the maximum yield of 99Mo. The measured yield of 99Mo produced by the 100Mo(n,2n)99Mo reaction from a large sample mass of MoO3 agrees well with the numerical result estimated with the latest neutron data, which are a factor of two larger than the other existing data. This result establishes an important finding for the domestic production of 99Mo: approximately 50% of the demand for 99Mo in Japan could be met using a 100 g 100MoO3 sample mass with a single accelerator of 40 MeV, 2 mA deuteron beams.

  6. How much can the number of jabiru stork (Ciconiidae nests vary due to change of flood extension in a large Neotropical floodplain?

    Directory of Open Access Journals (Sweden)

    Guilherme Mourão

    2010-10-01

    Full Text Available The jabiru stork, Jabiru mycteria (Lichtenstein, 1819), a large, long-legged wading bird occurring in lowland wetlands from southern Mexico to northern Argentina, is considered endangered in a large portion of its distribution range. We conducted aerial surveys to estimate the number of jabiru active nests in the Brazilian Pantanal (140,000 km²) in September of 1991-1993, 1998, 2000-2002, and 2004. Corrected densities of active nests were regressed against the annual hydrologic index (AHI), an index of flood extension in the Pantanal based on the water level of the Paraguay River. Annual nest density was a non-linear function of the AHI, modeled by the equation 6.5 · 10⁻⁸ · AHI^1.99 (corrected r² = 0.72, n = 7). We applied this model to the AHI between 1900 and 2004. The results indicate that the number of jabiru nests may have varied from about 220 in 1971 to more than 23,000 in the nesting season of 1921, and the estimates for our study period (1991 to 2004) averaged about 12,400 nests. Our model indicates that the inter-annual variations in flooding extent can determine dramatic changes in the number of active jabiru nests. Since the jabiru stork responds negatively to drier conditions in the Pantanal, direct human-induced changes in the hydrological patterns, as well as the effects of global climate change, may strongly jeopardize the population in the region.
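
    The fitted power law can be applied directly; the short sketch below evaluates it for a few AHI values. Only the coefficients come from the abstract, while the AHI values themselves are invented for illustration.

```python
def jabiru_nests(ahi):
    """Expected number of active jabiru nests as a function of the annual
    hydrologic index (AHI), using the power law quoted in the abstract."""
    return 6.5e-8 * ahi ** 1.99

# Hypothetical AHI values spanning a very dry year and a very wet year.
for ahi in (60_000, 200_000, 600_000):
    print(f"AHI = {ahi:>7,d}: ~{jabiru_nests(ahi):,.0f} nests")
```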

  7. Coloured Letters and Numbers (CLaN): A reliable factor-analysis based synaesthesia questionnaire

    OpenAIRE

    Rothen Nicolas; Tsakanikos Elias; Meier Beat; Ward Jamie

    2013-01-01

    Synaesthesia is a heterogeneous phenomenon, even when considering one particular subtype. The purpose of this study was to design a reliable and valid questionnaire for grapheme-colour synaesthesia that captures this heterogeneity. By means of a large sample of 628 synaesthetes and a factor analysis, we created the Coloured Letters and Numbers (CLaN) questionnaire with 16 items loading on 4 different factors (i.e., localisation, automaticity/attention, deliberate use, and longitudinal changes)...

  8. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

    Directory of Open Access Journals (Sweden)

    Simon Boitard

    2016-03-01

    Full Text Available Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
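
    To make the ABC idea concrete, here is a deliberately simplified rejection-ABC sketch that estimates a single constant population size from one summary statistic (a Watterson-style count of segregating sites). PopSizeABC itself uses a much richer set of statistics (the folded allele frequency spectrum and linkage disequilibrium) and a piecewise-constant size history; all parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

n_hap = 30           # haploid sample size (hypothetical)
mu_per_seq = 1e-4    # mutation rate per sequence per generation (hypothetical)
a_n = np.sum(1 / np.arange(1, n_hap))  # Watterson's correction factor

def simulate_segregating_sites(pop_size):
    # Expected number of segregating sites under the standard neutral model,
    # with Poisson noise as a crude stand-in for coalescent variance.
    theta = 4 * pop_size * mu_per_seq
    return rng.poisson(theta * a_n)

# "Observed" data generated under a true size of 10,000 (for the demo only).
s_obs = simulate_segregating_sites(10_000)

# Rejection ABC: draw candidate sizes from a wide prior, keep those whose
# simulated summary statistic lands close to the observed one.
prior_draws = rng.uniform(1_000, 50_000, size=100_000)
s_sim = simulate_segregating_sites(prior_draws)
accepted = prior_draws[np.abs(s_sim - s_obs) <= 2]

print(f"observed S = {s_obs}, accepted draws = {accepted.size}")
print(f"posterior mean N ~ {accepted.mean():,.0f}, "
      f"90% interval {np.percentile(accepted, 5):,.0f}-{np.percentile(accepted, 95):,.0f}")
```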

  9. Number to finger mapping is topological.

    NARCIS (Netherlands)

    Plaisier, M.A.; Smeets, J.B.J.

    2011-01-01

    It has been shown that humans associate fingers with numbers because finger counting strategies interact with numerical judgements. At the same time, there is evidence that there is a relation between number magnitude and space as small to large numbers seem to be represented from left to right. In

  10. Feasibility of self-sampled dried blood spot and saliva samples sent by mail in a population-based study

    International Nuclear Information System (INIS)

    Sakhi, Amrit Kaur; Bastani, Nasser Ezzatkhah; Ellingjord-Dale, Merete; Gundersen, Thomas Erik; Blomhoff, Rune; Ursin, Giske

    2015-01-01

    In large epidemiological studies it is often challenging to obtain biological samples. Self-sampling by study participants using dried blood spots (DBS) technique has been suggested to overcome this challenge. DBS is a type of biosampling where blood samples are obtained by a finger-prick lancet, blotted and dried on filter paper. However, the feasibility and efficacy of collecting DBS samples from study participants in large-scale epidemiological studies is not known. The aim of the present study was to test the feasibility and response rate of collecting self-sampled DBS and saliva samples in a population–based study of women above 50 years of age. We determined response proportions, number of phone calls to the study center with questions about sampling, and quality of the DBS. We recruited women through a study conducted within the Norwegian Breast Cancer Screening Program. Invitations, instructions and materials were sent to 4,597 women. The data collection took place over a 3 month period in the spring of 2009. Response proportions for the collection of DBS and saliva samples were 71.0% (3,263) and 70.9% (3,258), respectively. We received 312 phone calls (7% of the 4,597 women) with questions regarding sampling. Of the 3,263 individuals that returned DBS cards, 3,038 (93.1%) had been packaged and shipped according to instructions. A total of 3,032 DBS samples were sufficient for at least one biomarker analysis (i.e. 92.9% of DBS samples received by the laboratory). 2,418 (74.1%) of the DBS cards received by the laboratory were filled with blood according to the instructions (i.e. 10 completely filled spots with up to 7 punches per spot for up to 70 separate analyses). To assess the quality of the samples, we selected and measured two biomarkers (carotenoids and vitamin D). The biomarker levels were consistent with previous reports. Collecting self-sampled DBS and saliva samples through the postal services provides a low cost, effective and feasible

  11. Asymptotic numbers: Pt.1

    International Nuclear Information System (INIS)

    Todorov, T.D.

    1980-01-01

    The set of asymptotic numbers A as a system of generalized numbers including the system of real numbers R, as well as infinitely small (infinitesimals) and infinitely large numbers, is introduced. The detailed algebraic properties of A, which are unusual as compared with the known algebraic structures, are studied. It is proved that the set of asymptotic numbers A cannot be isomorphically embedded as a subspace in any group, ring or field, but some particular subsets of asymptotic numbers are shown to be groups, rings, and fields. The algebraic operation, additive and multiplicative forms, and the algebraic properties are constructed in an appropriate way. It is shown that the asymptotic numbers give rise to a new type of generalized functions quite analogous to the distributions of Schwartz allowing, however, the operation multiplication. A possible application of these functions to quantum theory is discussed

  12. Computer-based data acquisition system in the Large Coil Test Facility

    International Nuclear Information System (INIS)

    Gould, S.S.; Layman, L.R.; Million, D.L.

    1983-01-01

    The utilization of computers for data acquisition and control is of paramount importance on large-scale fusion experiments because they feature the ability to acquire data from a large number of sensors at various sample rates and provide for flexible data interpretation, presentation, reduction, and analysis. In the Large Coil Test Facility (LCTF) a Digital Equipment Corporation (DEC) PDP-11/60 host computer with the DEC RSX-11M operating system coordinates the activities of five DEC LSI-11/23 front-end processors (FEPs) via direct memory access (DMA) communication links. This provides host control of scheduled data acquisition and FEP event-triggered data collection tasks. Four of the five FEPs have no operating system

  13. Using Co-Occurrence to Evaluate Belief Coherence in a Large Non Clinical Sample

    Science.gov (United States)

    Pechey, Rachel; Halligan, Peter

    2012-01-01

    Much of the recent neuropsychological literature on false beliefs (delusions) has tended to focus on individual or single beliefs, with few studies actually investigating the relationship or co-occurrence between different types of co-existing beliefs. Quine and Ullian proposed the hypothesis that our beliefs form an interconnected web in which the beliefs that make up that system must somehow “cohere” with one another and avoid cognitive dissonance. As such beliefs are unlikely to be encapsulated (i.e., exist in isolation from other beliefs). The aim of this preliminary study was to empirically evaluate the probability of belief co-occurrence as one indicator of coherence in a large sample of subjects involving three different thematic sets of beliefs (delusion-like, paranormal & religious, and societal/cultural). Results showed that the degree of belief co-endorsement between beliefs within thematic groupings was greater than random occurrence, lending support to Quine and Ullian’s coherentist account. Some associations, however, were relatively weak, providing for well-established examples of cognitive dissonance. PMID:23155383

  14. Using co-occurrence to evaluate belief coherence in a large non clinical sample.

    Directory of Open Access Journals (Sweden)

    Rachel Pechey

    Full Text Available Much of the recent neuropsychological literature on false beliefs (delusions) has tended to focus on individual or single beliefs, with few studies actually investigating the relationship or co-occurrence between different types of co-existing beliefs. Quine and Ullian proposed the hypothesis that our beliefs form an interconnected web in which the beliefs that make up that system must somehow "cohere" with one another and avoid cognitive dissonance. As such beliefs are unlikely to be encapsulated (i.e., exist in isolation from other beliefs). The aim of this preliminary study was to empirically evaluate the probability of belief co-occurrence as one indicator of coherence in a large sample of subjects involving three different thematic sets of beliefs (delusion-like, paranormal & religious, and societal/cultural). Results showed that the degree of belief co-endorsement between beliefs within thematic groupings was greater than random occurrence, lending support to Quine and Ullian's coherentist account. Some associations, however, were relatively weak, providing for well-established examples of cognitive dissonance.

  15. Catering for large numbers of tourists: the McDonaldization of casual dining in Kruger National Park

    Directory of Open Access Journals (Sweden)

    Ferreira Sanette L.A.

    2016-09-01

    Full Text Available Since 2002, Kruger National Park (KNP) has been subject to a commercialisation strategy. Regarding income generation, SANParks sees KNP as the goose that lays the golden eggs. As part of SANParks’ commercialisation strategy, and in response to the need to provide services that are efficient, predictable and calculable for large numbers of tourists, SANParks has allowed well-known branded restaurants to be established in certain rest camps in KNP. This innovation has raised a range of different concerns and opinions among the public. This paper investigates the what and the where of casual dining experiences in KNP; describes how the catering services have evolved over the last 70 years; and evaluates current visitor perceptions of the introduction of franchised restaurants in the park. The main research instrument was a questionnaire survey. Survey findings confirmed that restaurant managers, park managers and visitors recognise franchised restaurants as positive contributors to the unique KNP experience. Park managers appraised the franchised restaurants as mechanisms for funding conservation.

  16. Improved Sampling Algorithms in the Risk-Informed Safety Margin Characterization Toolkit

    International Nuclear Information System (INIS)

    Mandelli, Diego; Smith, Curtis Lee; Alfonsi, Andrea; Rabiti, Cristian; Cogliati, Joshua Joseph

    2015-01-01

    The RISMC approach is developing an advanced set of methodologies and algorithms in order to perform Probabilistic Risk Analyses (PRAs). In contrast to classical PRA methods, which are based on Event-Tree and Fault-Tree methods, the RISMC approach largely employs system simulator codes coupled with stochastic analysis tools. The basic idea is to randomly perturb (by employing sampling algorithms) the timing and sequencing of events and the internal parameters of the system codes (i.e., uncertain parameters) in order to estimate stochastic parameters such as the core damage probability. Applied to complex systems such as nuclear power plants, this approach requires performing a series of computationally expensive simulation runs over a large set of uncertain parameters. These types of analysis are affected by two issues. Firstly, the space of possible solutions (a.k.a. the issue space or the response surface) can be sampled only very sparsely, and this precludes the ability to fully analyze the impact of uncertainties on the system dynamics. Secondly, large amounts of data are generated, and tools to generate knowledge from such data sets are not yet available. This report focuses on the first issue and in particular employs novel methods that optimize the information generated by the sampling process by sampling unexplored and risk-significant regions of the issue space: adaptive (smart) sampling algorithms. They infer the system response from surrogate models constructed from existing samples and predict the most relevant location of the next sample. It is therefore possible to understand features of the issue space with a small number of carefully selected samples. In this report, we present how it is possible to perform adaptive sampling using the RISMC toolkit and highlight the advantages compared to more classical sampling approaches such as Monte-Carlo. We employ RAVEN to perform such statistical analyses using both analytical cases and another RISMC code: RELAP-7.

  17. Improved Sampling Algorithms in the Risk-Informed Safety Margin Characterization Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis Lee [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Cogliati, Joshua Joseph [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    The RISMC approach is developing an advanced set of methodologies and algorithms in order to perform Probabilistic Risk Analyses (PRAs). In contrast to classical PRA methods, which are based on Event-Tree and Fault-Tree methods, the RISMC approach largely employs system simulator codes coupled with stochastic analysis tools. The basic idea is to randomly perturb (by employing sampling algorithms) the timing and sequencing of events and the internal parameters of the system codes (i.e., uncertain parameters) in order to estimate stochastic parameters such as the core damage probability. Applied to complex systems such as nuclear power plants, this approach requires performing a series of computationally expensive simulation runs over a large set of uncertain parameters. These types of analysis are affected by two issues. Firstly, the space of possible solutions (a.k.a. the issue space or the response surface) can be sampled only very sparsely, and this precludes the ability to fully analyze the impact of uncertainties on the system dynamics. Secondly, large amounts of data are generated, and tools to generate knowledge from such data sets are not yet available. This report focuses on the first issue and in particular employs novel methods that optimize the information generated by the sampling process by sampling unexplored and risk-significant regions of the issue space: adaptive (smart) sampling algorithms. They infer the system response from surrogate models constructed from existing samples and predict the most relevant location of the next sample. It is therefore possible to understand features of the issue space with a small number of carefully selected samples. In this report, we present how it is possible to perform adaptive sampling using the RISMC toolkit and highlight the advantages compared to more classical sampling approaches such as Monte-Carlo. We employ RAVEN to perform such statistical analyses using both analytical cases and another RISMC code: RELAP-7.
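
    The contrast between plain Monte-Carlo and adaptive (smart) sampling can be sketched on a toy limit-surface problem. This is only an illustration of the idea, not the RAVEN/RISMC implementation: the "expensive simulation" is a stand-in function and the nearest-neighbour surrogate is chosen purely for brevity.

```python
import numpy as np

rng = np.random.default_rng(3)

def expensive_simulation(x):
    # Stand-in for a system code run: returns a scalar "margin"; the failure
    # region is where the margin drops below zero (a circle of radius 0.6).
    return 0.6 - np.sqrt(x[:, 0] ** 2 + x[:, 1] ** 2)

def surrogate_predict(x_train, y_train, x_query, k=5):
    # Crude k-nearest-neighbour surrogate: inverse-distance-weighted average.
    d = np.linalg.norm(x_query[:, None, :] - x_train[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    w = 1.0 / (np.take_along_axis(d, idx, axis=1) + 1e-9)
    return np.sum(w * y_train[idx], axis=1) / np.sum(w, axis=1)

# A few initial runs, then adaptively add runs where the surrogate predicts
# the response to lie closest to the failure threshold (margin = 0).
x = rng.random((8, 2))
y = expensive_simulation(x)
candidates = rng.random((2_000, 2))

for _ in range(20):
    pred = surrogate_predict(x, y, candidates)
    j = np.argmin(np.abs(pred))          # most "interesting" candidate
    nxt = candidates[j][None, :]
    candidates = np.delete(candidates, j, axis=0)
    x = np.vstack([x, nxt])
    y = np.append(y, expensive_simulation(nxt))

print(f"{len(x)} runs total; "
      f"{np.sum(np.abs(y) < 0.05)} of them landed near the limit surface")
```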

  18. Rapid surface sampling and archival record system

    Energy Technology Data Exchange (ETDEWEB)

    Barren, E.; Penney, C.M.; Sheldon, R.B. [GE Corporate Research and Development Center, Schenectady, NY (United States)] [and others]

    1995-10-01

    A number of contamination sites exist in this country where the area and volume of material to be remediated are very large, approaching or exceeding 10⁶ m² and 10⁶ m³. Typically, only a small fraction of this material is actually contaminated. In such cases there is a strong economic motivation to test the material with a sufficient density of measurements to identify which portions are uncontaminated, so that they can be left in place or be disposed of as uncontaminated waste. Unfortunately, since contamination often varies rapidly from position to position, this procedure can involve upwards of one million measurements per site. The situation is complicated further in many cases by the difficulties of sampling porous surfaces, such as concrete. This report describes a method for sampling concretes in which an immediate distinction can be made between contaminated and uncontaminated surfaces. Sample acquisition and analysis will be automated.

  19. Coordination of Conditional Poisson Samples

    Directory of Open Access Journals (Sweden)

    Grafström Anton

    2015-12-01

    Full Text Available Sample coordination seeks to maximize or to minimize the overlap of two or more samples. The former is known as positive coordination, and the latter as negative coordination. Positive coordination is mainly used for estimation purposes and to reduce data collection costs. Negative coordination is mainly performed to diminish the response burden of the sampled units. Poisson sampling design with permanent random numbers provides an optimum coordination degree of two or more samples. The size of a Poisson sample is, however, random. Conditional Poisson (CP) sampling is a modification of the classical Poisson sampling that produces a fixed-size πps sample. We introduce two methods to coordinate Conditional Poisson samples over time or simultaneously. The first one uses permanent random numbers and the list-sequential implementation of CP sampling. The second method uses a CP sample in the first selection and provides an approximate one in the second selection because the prescribed inclusion probabilities are not respected exactly. The methods are evaluated using the size of the expected sample overlap, and are compared with their competitors using Monte Carlo simulation. The new methods provide a good coordination degree of two samples, close to the performance of Poisson sampling with permanent random numbers.
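
    The permanent-random-number mechanism is easy to illustrate for ordinary (non-conditional) Poisson sampling; the fixed-size CP variant discussed in the article requires the list-sequential scheme described there. The inclusion probabilities below are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
n_units = 10_000

# One permanent random number (PRN) per unit, stored with the frame and
# reused at every selection occasion.
prn = rng.random(n_units)

# Invented inclusion probabilities for two occasions (e.g. two survey waves).
pi_1 = rng.uniform(0.05, 0.30, n_units)
pi_2 = rng.uniform(0.05, 0.30, n_units)

# Poisson sampling with PRNs: unit i is selected when prn[i] < pi[i].
s1 = prn < pi_1

# Positive coordination: reuse the same PRNs, so the overlap is maximised.
s2_pos = prn < pi_2
# Negative coordination: shift the PRNs before the second selection.
s2_neg = ((prn + 0.5) % 1.0) < pi_2
# No coordination: draw fresh random numbers.
s2_ind = rng.random(n_units) < pi_2

for name, s2 in [("positive", s2_pos), ("negative", s2_neg), ("independent", s2_ind)]:
    print(f"{name:>11}: overlap with first sample = {np.sum(s1 & s2)}")
```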

  20. DISCOVERY OF A LARGE NUMBER OF CANDIDATE PROTOCLUSTERS TRACED BY ∼15 Mpc-SCALE GALAXY OVERDENSITIES IN COSMOS

    International Nuclear Information System (INIS)

    Chiang, Yi-Kuan; Gebhardt, Karl; Overzier, Roderik

    2014-01-01

    To demonstrate the feasibility of studying the epoch of massive galaxy cluster formation in a more systematic manner using current and future galaxy surveys, we report the discovery of a large sample of protocluster candidates in the 1.62 deg² COSMOS/UltraVISTA field traced by optical/infrared selected galaxies using photometric redshifts. By comparing properly smoothed three-dimensional galaxy density maps of the observations and a set of matched simulations incorporating the dominant observational effects (galaxy selection and photometric redshift uncertainties), we first confirm that the observed ∼15 comoving Mpc-scale galaxy clustering is consistent with ΛCDM models. Using further the relation between high-z overdensity and the present-day cluster mass calibrated in these matched simulations, we found 36 candidate structures at 1.6 < z < 3.1, showing overdensities consistent with the progenitors of M_z=0 ∼ 10¹⁵ M☉ clusters. Taking into account the significant upward scattering of lower mass structures, the probabilities for the candidates to have at least M_z=0 ∼ 10¹⁴ M☉ are ∼70%. For each structure, about 15%-40% of photometric galaxy candidates are expected to be true protocluster members that will merge into a cluster-scale halo by z = 0. With solely photometric redshifts, we successfully rediscover two spectroscopically confirmed structures in this field, suggesting that our algorithm is robust. This work generates a large sample of uniformly selected protocluster candidates, providing rich targets for spectroscopic follow-up and subsequent studies of cluster formation. Meanwhile, it demonstrates the potential for probing early cluster formation with upcoming redshift surveys such as the Hobby-Eberly Telescope Dark Energy Experiment and the Subaru Prime Focus Spectrograph survey
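
    The core measurement, a smoothed galaxy overdensity field compared against the mean density, can be sketched in a few lines. This is a two-dimensional toy version of the three-dimensional analysis, with invented galaxy positions, grid, and smoothing scale.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(5)

# Toy catalogue: a uniform background plus one clustered group of galaxies
# (positions in comoving Mpc; all numbers are invented).
background = rng.uniform(0, 100, size=(5_000, 2))
clump = rng.normal(loc=(60, 40), scale=4, size=(300, 2))
positions = np.vstack([background, clump])

# Grid the galaxies and smooth the counts on a roughly 15 Mpc scale
# (Gaussian sigma of 7.5 cells on a 1 Mpc grid).
counts, _, _ = np.histogram2d(positions[:, 0], positions[:, 1],
                              bins=100, range=[[0, 100], [0, 100]])
smoothed = gaussian_filter(counts, sigma=7.5)

# Overdensity delta = (rho - mean) / mean; flag cells above a threshold.
delta = smoothed / smoothed.mean() - 1.0
peaks = np.argwhere(delta > 3 * delta.std())
print(f"max overdensity {delta.max():.2f}, "
      f"{len(peaks)} grid cells above 3 sigma")
```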