WorldWideScience

Sample records for large sample studies

  1. Large Sample Neutron Activation Analysis: A Challenge in Cultural Heritage Studies

    International Nuclear Information System (INIS)

    Stamatelatos, I.E.; Tzika, F.

    2007-01-01

    Large sample neutron activation analysis complements and significantly extends the analytical tools available for cultural heritage and authentication studies, providing unique capabilities: non-destructive, multi-element analysis of materials that are too precious to damage for sampling purposes, representative sampling of heterogeneous materials, and even analysis of whole objects. In this work, correction factors for neutron self-shielding, gamma-ray attenuation and the volume distribution of the activity in large volume samples composed of iron and ceramic material were derived. Moreover, the effect of inhomogeneity on the accuracy of the technique was examined.
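
    The gamma-ray attenuation correction mentioned above has a simple closed form in the easiest geometry. As an illustrative sketch only (the record's authors derive their factors for specific iron/ceramic geometries, not this one), the average transmission for activity distributed uniformly through a slab viewed face-on is:

```python
import math

def self_attenuation_factor(mu: float, thickness: float) -> float:
    """Average gamma-ray transmission for activity uniformly distributed
    through a slab of the given thickness (cm), viewed face-on, with
    linear attenuation coefficient mu (1/cm).

    Analytic result of averaging exp(-mu*x) over 0 <= x <= thickness.
    """
    if mu * thickness == 0:
        return 1.0
    return (1.0 - math.exp(-mu * thickness)) / (mu * thickness)

# Hypothetical example: a 5 cm thick ceramic-like slab with an assumed
# mu = 0.15 /cm near 1 MeV. The measured activity is multiplied by the
# reciprocal of the average transmission to correct for self-attenuation.
f = self_attenuation_factor(0.15, 5.0)
correction = 1.0 / f
```

    For realistic sample shapes and gamma energies the factor is obtained by Monte Carlo simulation, as the record describes; the slab formula just shows what is being corrected.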

  2. Sample preparation for large-scale bioanalytical studies based on liquid chromatographic techniques.

    Science.gov (United States)

    Medvedovici, Andrei; Bacalum, Elena; David, Victor

    2018-01-01

    Quality of the analytical data obtained for large-scale and long-term bioanalytical studies based on liquid chromatography depends on a number of experimental factors, including the choice of sample preparation method. This review discusses this tedious part of bioanalytical studies, applied to large-scale samples and using liquid chromatography coupled with different detector types as the core analytical technique. The main sample preparation methods included in this paper are protein precipitation, liquid-liquid extraction, solid-phase extraction, derivatization and their variants. They are discussed in terms of analytical performance, fields of application, advantages and disadvantages. The cited literature covers mainly the analytical achievements of the last decade, although several earlier papers that have grown more valuable with time are also included in this review. Copyright © 2017 John Wiley & Sons, Ltd.

  3. Large sample neutron activation analysis of a reference inhomogeneous sample

    International Nuclear Information System (INIS)

    Vasilopoulou, T.; Athens National Technical University, Athens; Tzika, F.; Stamatelatos, I.E.; Koster-Ammerlaan, M.J.J.

    2011-01-01

    A benchmark experiment was performed for Neutron Activation Analysis (NAA) of a large inhomogeneous sample. The reference sample was developed in-house and consisted of a SiO2 matrix and an Al-Zn alloy 'inhomogeneity' body. Monte Carlo simulations were employed to derive appropriate correction factors for neutron self-shielding during irradiation, as well as for self-attenuation of gamma rays and sample geometry during counting. The large sample neutron activation analysis (LSNAA) results were compared against reference values and the trueness of the technique was evaluated. An agreement within ±10% was observed between LSNAA and reference elemental mass values for all matrix and inhomogeneity elements except samarium, provided that the inhomogeneity body was fully simulated. However, when the inhomogeneity was treated as unknown, the results showed reasonable agreement for most matrix elements, while large discrepancies were observed for the inhomogeneity elements. This study quantified the uncertainties associated with inhomogeneity in large sample analysis and helped identify the needs for future development of LSNAA facilities for the analysis of inhomogeneous samples. (author)

  4. Large Sample Neutron Activation Analysis of Heterogeneous Samples

    International Nuclear Information System (INIS)

    Stamatelatos, I.E.; Vasilopoulou, T.; Tzika, F.

    2018-01-01

    A Large Sample Neutron Activation Analysis (LSNAA) technique was developed for non-destructive analysis of heterogeneous bulk samples. The technique combined collimated scanning with Monte Carlo simulations to identify inhomogeneities in large volume samples and to correct for their effect on the interpretation of gamma-spectrometry data. Corrections were applied for neutron self-shielding, gamma-ray attenuation, the geometrical factor and the heterogeneous activity distribution within the sample. A benchmark experiment was performed to investigate the effect of heterogeneity on the accuracy of LSNAA. Moreover, a ceramic vase was analyzed as a whole, demonstrating the feasibility of the technique. The LSNAA results were compared against results obtained by INAA and a satisfactory agreement between the two methods was observed. This study showed that LSNAA is capable of accurate, non-destructive, multi-elemental compositional analysis of heterogeneous objects. It also revealed the great potential of the technique for the analysis of precious objects and artefacts that need to be preserved intact and cannot be damaged for sampling purposes. (author)

  5. The large sample size fallacy.

    Science.gov (United States)

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
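
    The fallacy described above can be made concrete with a toy calculation. The sketch below (a simple two-sample z-test, not any analysis from the article) shows the same trivial standardized effect turning from "nonsignificant" into "extremely significant" purely because n grows:

```python
import math

def two_sample_p_and_d(mean1, mean2, sd, n):
    """Approximate two-sided p-value (z-test, equal group sizes and
    equal sd) and Cohen's d for a difference between two group means."""
    se = sd * math.sqrt(2.0 / n)            # standard error of the difference
    z = abs(mean1 - mean2) / se
    p = math.erfc(z / math.sqrt(2.0))       # two-sided normal p-value
    d = abs(mean1 - mean2) / sd             # standardized effect size
    return p, d

# Same trivial effect (d = 0.02) at two very different sample sizes:
p_small, d = two_sample_p_and_d(100.0, 100.2, 10.0, n=50)
p_large, _ = two_sample_p_and_d(100.0, 100.2, 10.0, n=500_000)
# p_small is far from significant; p_large is "highly significant",
# yet the effect size d is identical and practically negligible.
```

    This is why the article argues that effect sizes, not p-values, should carry the interpretation in large-sample studies.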

  6. The problem of large samples. An activation analysis study of electronic waste material

    International Nuclear Information System (INIS)

    Segebade, C.; Goerner, W.; Bode, P.

    2007-01-01

    Large-volume instrumental photon activation analysis (IPAA) was used for the investigation of shredded electronic waste material. Sample masses from 1 to 150 grams were analyzed to estimate the minimum sample size required for results representative enough to satisfy a defined investigation task. Furthermore, the influence of irradiation and measurement parameters upon the quality of the analytical results was studied. Finally, the analytical data obtained from IPAA and instrumental neutron activation analysis (INAA), both carried out in a large-volume mode, were compared. Only some of the values were found in satisfactory agreement. (author)

  8. Software engineering the mixed model for genome-wide association studies on large samples.

    Science.gov (United States)

    Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J

    2009-11-01

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.
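
    One of the key elements the review names is kinship estimation. As a hedged sketch of that single step (a VanRaden-style genomic relationship matrix in plain Python; the reviewed packages use far more elaborate and optimized implementations), with markers coded 0/1/2:

```python
def kinship_matrix(genotypes):
    """VanRaden-style genomic relationship matrix from a list of
    genotype rows (one row per individual, markers coded 0/1/2).
    Illustrative sketch of the kinship step in mixed-model GWAS."""
    n = len(genotypes)
    m = len(genotypes[0])
    # allele frequencies per marker
    p = [sum(row[j] for row in genotypes) / (2.0 * n) for j in range(m)]
    denom = 2.0 * sum(pj * (1.0 - pj) for pj in p)
    # center genotypes by twice the allele frequency, then form Z Z' / denom
    z = [[row[j] - 2.0 * p[j] for j in range(m)] for row in genotypes]
    return [[sum(z[i][k] * z[j][k] for k in range(m)) / denom
             for j in range(n)] for i in range(n)]

# Hypothetical mini-dataset: three individuals, four markers.
geno = [[0, 1, 2, 1], [0, 1, 2, 0], [2, 1, 0, 1]]
K = kinship_matrix(geno)
# K is symmetric; more genetically similar pairs get larger entries.
```

    In the mixed model itself, K parameterizes the covariance of the random polygenic effect, which is what absorbs population stratification and relatedness.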

  9. A study of diabetes mellitus within a large sample of Australian twins

    DEFF Research Database (Denmark)

    Condon, Julianne; Shaw, Joanne E; Luciano, Michelle

    2008-01-01

    Twin studies of diabetes mellitus can help elucidate genetic and environmental factors in etiology and can provide valuable biological samples for testing functional hypotheses, for example using expression and methylation studies of discordant pairs. We searched the volunteer Australian Twin Registry (19,387 pairs) for twins with diabetes using disease checklists from nine different surveys conducted from 1980-2000. After follow-up questionnaires to the twins and their doctors to confirm diagnoses, we eventually identified 46 pairs where one or both had type 1 diabetes (T1D), 113 pairs with type 2 diabetes (T2D), 41 female pairs with gestational diabetes (GD), 5 pairs with impaired glucose tolerance (IGT) and one pair with MODY. Heritabilities of T1D, T2D and GD were all high, but our samples did not have the power to detect effects of shared environment unless they were very large.

  10. Feasibility studies on large sample neutron activation analysis using a low power research reactor

    International Nuclear Information System (INIS)

    Gyampo, O.

    2008-06-01

    Instrumental neutron activation analysis (INAA) using the Ghana Research Reactor-1 (GHARR-1) can be applied directly to samples with masses on the gram scale. Sample masses ranged from 0.5 g to 5 g, improving both the representativeness of the sample and the sensitivity. Irradiation of samples was done using a low power research reactor. The correction for neutron self-shielding within the sample was determined from measurement of the neutron flux depression just outside the sample. Correction for gamma-ray self-attenuation in the sample was performed via linear attenuation coefficients derived from transmission measurements. Quantitative and qualitative analysis of the data was done using gamma-ray spectrometry (HPGe detector). The results of this study on the possibilities of large sample NAA using a miniature neutron source reactor (MNSR) show clearly that the Ghana Research Reactor-1 (GHARR-1) at the National Nuclear Research Institute (NNRI) can be used for analyses of samples up to 5 g using the pneumatic transfer systems.

  11. Optimal sampling designs for large-scale fishery sample surveys in Greece

    Directory of Open Access Journals (Sweden)

    G. BAZIGOS

    2007-12-01

    The paper deals with the optimization of the following three large scale sample surveys: biological sample survey of commercial landings (BSCL), experimental fishing sample survey (EFSS), and commercial landings and effort sample survey (CLES).

  12. A topological analysis of large-scale structure, studied using the CMASS sample of SDSS-III

    International Nuclear Information System (INIS)

    Parihar, Prachi; Gott, J. Richard III; Vogeley, Michael S.; Choi, Yun-Young; Kim, Juhan; Kim, Sungsoo S.; Speare, Robert; Brownstein, Joel R.; Brinkmann, J.

    2014-01-01

    We study the three-dimensional genus topology of large-scale structure using the northern region of the CMASS Data Release 10 (DR10) sample of the SDSS-III Baryon Oscillation Spectroscopic Survey. We select galaxies with redshift 0.452 < z < 0.625 and with a stellar mass M_stellar > 10^11.56 M_☉. We study the topology at two smoothing lengths: R_G = 21 h^-1 Mpc and R_G = 34 h^-1 Mpc. The genus topology studied at the R_G = 21 h^-1 Mpc scale results in the highest genus amplitude observed to date. The CMASS sample yields a genus curve that is characteristic of one produced by Gaussian random phase initial conditions. The data thus support the standard model of inflation where random quantum fluctuations in the early universe produced Gaussian random phase initial conditions. Modest deviations in the observed genus from random phase are as expected from shot noise effects and the nonlinear evolution of structure. We suggest the use of a fitting formula motivated by perturbation theory to characterize the shift and asymmetries in the observed genus curve with a single parameter. We construct 54 mock SDSS CMASS surveys along the past light cone from the Horizon Run 3 (HR3) N-body simulations, where gravitationally bound dark matter subhalos are identified as the sites of galaxy formation. We study the genus topology of the HR3 mock surveys with the same geometry and sampling density as the observational sample and find the observed genus topology to be consistent with ΛCDM as simulated by the HR3 mock samples. We conclude that the topology of the large-scale structure in the SDSS CMASS sample is consistent with cosmological models having primordial Gaussian density fluctuations growing in accordance with general relativity to form galaxies in massive dark matter halos.
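
    The "genus curve characteristic of Gaussian random phase initial conditions" mentioned above has a standard analytic shape. As a sketch (the amplitude is set to 1 here; in practice it depends on the power spectrum and smoothing length), the genus density at threshold nu standard deviations is proportional to (1 - nu^2) exp(-nu^2/2):

```python
import math

def gaussian_genus(nu, amplitude=1.0):
    """Genus density of the isodensity contour at threshold nu (in units
    of the standard deviation) for a Gaussian random field:
    g(nu) = A * (1 - nu**2) * exp(-nu**2 / 2).
    Positive near nu = 0 (sponge-like topology); negative for |nu| > 1,
    where isolated clusters or voids dominate."""
    return amplitude * (1.0 - nu**2) * math.exp(-nu**2 / 2.0)

# Evaluate the theoretical curve over -3 <= nu <= 3; observed genus
# curves are compared against (shifted/asymmetric versions of) this.
curve = [(nu / 10.0, gaussian_genus(nu / 10.0)) for nu in range(-30, 31)]
```

    Deviations of the measured curve from this symmetric form are exactly the shift and asymmetry parameters the record's fitting formula is designed to capture.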

  13. Sampling Large Graphs for Anticipatory Analytics

    Science.gov (United States)

    2015-05-15

    Sampling Large Graphs for Anticipatory Analytics. Lauren Edwards, Luke Johnson, Maja Milosavljevic, Vijay Gadepally, Benjamin A. Miller (Lincoln Laboratory). [Extracted fragments:] ... greater human-in-the-loop involvement, or through complex algorithms. We are investigating the use of sampling to mitigate these challenges. ... Random area sampling [8] is a "snowball" sampling method in which a set of random seed vertices are selected and areas...

  14. Economic and Humanistic Burden of Osteoarthritis: A Systematic Review of Large Sample Studies.

    Science.gov (United States)

    Xie, Feng; Kovic, Bruno; Jin, Xuejing; He, Xiaoning; Wang, Mengxiao; Silvestre, Camila

    2016-11-01

    Osteoarthritis (OA) consumes a significant amount of healthcare resources and impairs the health-related quality of life (HRQoL) of patients. Previous reviews have consistently found substantial variations in the costs of OA across studies and countries; poor comparability between studies limited the detection of true differences. The objective was to review large sample studies measuring the economic and/or humanistic burden of OA published since May 2006. We searched the MEDLINE and EMBASE databases using comprehensive search strategies to identify studies reporting the economic burden and HRQoL of OA. We included large sample studies if they had a sample size ≥1000 and measured the cost and/or HRQoL of OA. Reviewers worked independently and in duplicate, performing a cross-check between groups to verify agreement. Within- and between-group consolidation was performed to resolve discrepancies, with outstanding discrepancies resolved by an arbitrator. The Kappa statistic was reported to assess agreement between the reviewers. All costs were adjusted in their original currency to year 2015 using published inflation rates for the country where the study was conducted, and then converted to 2015 US dollars. A total of 651 articles were screened by title and abstract, 94 were reviewed in full text, and 28 were included in the final review. The Kappa value was 0.794. Twenty studies reported direct costs and nine reported indirect costs. The total annual average direct costs varied from US$1442 to US$21,335, both in the USA. The annual average indirect costs ranged from US$238 to US$29,935. Twelve studies measured HRQoL using various instruments. The Short Form 12 version 2 scores ranged from 35.0 to 51.3 for the physical component, and from 43.5 to 55.0 for the mental component. Health utilities varied from 0.30 for severe OA to 0.77 for mild OA. Per-patient OA costs are considerable and patients' quality of life remains poor. Variations in …
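
    The cost-standardization step described in the abstract (inflate to 2015 in the original currency, then convert to US dollars) is simple arithmetic. A minimal sketch, with entirely hypothetical rates (the review used published country-specific inflation rates and 2015 exchange rates, which are not reproduced here):

```python
def to_2015_usd(cost, inflation_rates, usd_per_unit_2015):
    """Adjust a cost in its original currency to 2015 price levels by
    compounding annual inflation rates (e.g. 0.02 for 2%) covering the
    years from the study year up to 2015, then convert to 2015 US
    dollars at the given exchange rate (USD per unit of currency)."""
    for rate in inflation_rates:
        cost *= (1.0 + rate)
    return cost * usd_per_unit_2015

# Hypothetical example: a EUR 1,000 cost reported for 2012, inflated
# through 2013-2015 at assumed rates of 1.5%, 1.0% and 0.5%, then
# converted at an assumed 1.11 USD/EUR.
adjusted = to_2015_usd(1000.0, [0.015, 0.010, 0.005], 1.11)
```

    Compounding in the original currency first, and converting only once at the end, avoids mixing each year's inflation with exchange-rate movements.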

  15. Gibbs sampling on large lattice with GMRF

    Science.gov (United States)

    Marcotte, Denis; Allard, Denis

    2018-02-01

    Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions naturally occur when associating latent Gaussian fields to category fields obtained by discrete simulation methods like multipoint, sequential indicator simulation and object-based simulation. The latent Gaussians are often used in data assimilation and history matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfactory, as it can diverge and it does not reproduce the desired covariance exactly. A better approach is to use Gaussian Markov Random Fields (GMRF), which make it possible to compute the conditional distributions at any point without computing and inverting the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be efficiently computed by convolution and applied with an acceptance/rejection method in the truncated case. We study empirically the speed of convergence and the effects of the choice of boundary conditions, of the correlation range and of GMRF smoothness. We show that convergence is slower in the Gaussian case on the torus than for the finite case studied in the literature. However, in the truncated Gaussian case, we show that short scale correlation is quickly restored and the conditioning categories at each lattice point imprint the long scale correlation. Hence our approach makes it realistic to apply Gibbs sampling on large 2D or 3D lattices with the desired GMRF covariance.
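
    The basic move the abstract builds on, Gibbs updates of a truncated Gaussian via its full conditionals with acceptance/rejection, can be shown in miniature. This toy samples a bivariate standard Gaussian with correlation rho truncated to the positive quadrant; it is an illustration of the principle, not the paper's lattice/coding-set algorithm:

```python
import math
import random

def gibbs_truncated_bivariate(rho, n_samples, seed=0):
    """Gibbs sampler for a bivariate standard Gaussian with correlation
    rho, truncated to x > 0, y > 0. Each coordinate is redrawn from its
    full conditional N(rho * other, 1 - rho^2), with simple rejection
    enforcing the truncation."""
    rng = random.Random(seed)
    x, y = 1.0, 1.0                      # start inside the support
    cond_sd = math.sqrt(1.0 - rho * rho)
    out = []
    for _ in range(n_samples):
        # x | y ~ N(rho*y, 1-rho^2), truncated to x > 0
        while True:
            x = rng.gauss(rho * y, cond_sd)
            if x > 0:
                break
        # y | x ~ N(rho*x, 1-rho^2), truncated to y > 0
        while True:
            y = rng.gauss(rho * x, cond_sd)
            if y > 0:
                break
        out.append((x, y))
    return out

samples = gibbs_truncated_bivariate(0.6, 2000)
# every sample respects the truncation, and x, y stay positively correlated
```

    On a lattice, the paper's contribution is to do such updates simultaneously over coding sets (points sharing no neighbors) using the GMRF's local conditionals, which is what makes the scheme affordable at large sizes.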

  16. Associations between sociodemographic, sampling and health factors and various salivary cortisol indicators in a large sample without psychopathology

    NARCIS (Netherlands)

    Vreeburg, Sophie A.; Kruijtzer, Boudewijn P.; van Pelt, Johannes; van Dyck, Richard; DeRijk, Roel H.; Hoogendijk, Witte J. G.; Smit, Johannes H.; Zitman, Frans G.; Penninx, Brenda

    Background: Cortisol levels are increasingly often assessed in large-scale psychosomatic research. Although determinants of different salivary cortisol indicators have been described, they have not yet been systematically studied within the same study with a large sample size. Sociodemographic, …

  17. A review of methods for sampling large airborne particles and associated radioactivity

    International Nuclear Information System (INIS)

    Garland, J.A.; Nicholson, K.W.

    1990-01-01

    Radioactive particles, tens of μm or more in diameter, are unlikely to be emitted directly from nuclear facilities with exhaust gas cleansing systems, but may arise in the case of an accident or where resuspension from contaminated surfaces is significant. Such particles may dominate deposition and, according to some workers, may contribute to inhalation doses. Quantitative sampling of large airborne particles is difficult because of their inertia and large sedimentation velocities. The literature describes conditions for unbiased sampling and the magnitude of sampling errors for idealised sampling inlets in steady winds. However, few air samplers for outdoor use have been assessed for adequacy of sampling. Many size selective sampling methods are found in the literature but few are suitable at the low concentrations that are often encountered in the environment. A number of approaches for unbiased sampling of large particles have been found in the literature. Some are identified as meriting further study, for application in the measurement of airborne radioactivity. (author)

  18. Utilization of AHWR critical facility for research and development work on large sample NAA

    International Nuclear Information System (INIS)

    Acharya, R.; Dasari, K.B.; Pujari, P.K.; Swain, K.K.; Reddy, A.V.R.; Verma, S.K.; De, S.K.

    2014-01-01

    The graphite reflector position of the AHWR critical facility (CF) was utilized for analysis of large size (g-kg scale) samples using internal mono standard neutron activation analysis (IM-NAA). The reactor position was characterized by the cadmium ratio method using an In monitor for the total flux and the sub-cadmium to epithermal flux ratio (f). Large sample neutron activation analysis (LSNAA) work was carried out for samples of stainless steel, ancient and new clay potteries and dross. Large as well as non-standard geometry samples (1 g - 0.5 kg) were irradiated. Radioactive assay was carried out using high resolution gamma ray spectrometry. Concentration ratios obtained by IM-NAA were used for a provenance study of 30 clay potteries obtained from excavated Buddhist sites of AP, India. Concentrations of Au and Ag were determined in three large, inhomogeneous samples of dross. An X-Z rotary scanning unit has been installed for counting large and inhomogeneous samples. (author)

  19. Study of a large rapid ashing apparatus and a rapid dry ashing method for biological samples and its application

    International Nuclear Information System (INIS)

    Jin Meisun; Wang Benli; Liu Wencang

    1988-04-01

    A large rapid dry-ashing apparatus and a rapid ashing method for biological samples are described. The apparatus consists of a specially made ashing furnace, a gas supply system and a temperature-programming control cabinet. Ashing experiments with the apparatus showed the following advantages: (1) high ashing speed and low consumption of electric energy; (2) the apparatus can ash a large number of samples at a time; (3) the ashed sample is pure white (spotless), loose and easily dissolved, with little residual char; (4) fresh samples can also be ashed directly. The apparatus is suitable for ashing large numbers of environmental samples containing trace elements at low levels of radioactivity, as well as medical, food and agricultural research samples.

  20. Sampling large random knots in a confined space

    International Nuclear Information System (INIS)

    Arsuaga, J; Blackstone, T; Diao, Y; Hinson, K; Karadayi, E; Saito, M

    2007-01-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^(n^2)). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is at the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.
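
    The O(n^2) growth of crossing numbers claimed above is easy to probe empirically. The sketch below generates a uniform random polygon (vertices i.i.d. uniform, joined in order and closed), projects it to the plane, and counts crossings between non-adjacent edges; it illustrates the model, not the paper's full determinant/coloring analysis:

```python
import random

def crossings_of_random_polygon(n, seed=1):
    """Project a uniform random polygon with n vertices onto the
    xy-plane and count crossings between non-adjacent edges."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]  # z-coordinate ignored
    edges = [(pts[i], pts[(i + 1) % n]) for i in range(n)]

    def ccw(a, b, c):
        # signed area test: >0 if a->b->c turns counter-clockwise
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    def segments_cross(p1, p2, p3, p4):
        d1, d2 = ccw(p3, p4, p1), ccw(p3, p4, p2)
        d3, d4 = ccw(p1, p2, p3), ccw(p1, p2, p4)
        return (d1 * d2 < 0) and (d3 * d4 < 0)

    count = 0
    for i in range(n):
        for j in range(i + 2, n):
            if i == 0 and j == n - 1:    # these edges are adjacent around the closure
                continue
            if segments_cross(*edges[i], *edges[j]):
                count += 1
    return count

# crossings grow roughly quadratically with the number of vertices
c20 = crossings_of_random_polygon(20)
c40 = crossings_of_random_polygon(40)
```

    Averaging such counts over many seeds reproduces the quadratic trend; identifying the knot type itself requires the invariants discussed in the paper.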

  1. Sampling large random knots in a confined space

    Science.gov (United States)

    Arsuaga, J.; Blackstone, T.; Diao, Y.; Hinson, K.; Karadayi, E.; Saito, M.

    2007-09-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is at the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.

  2. Sampling large random knots in a confined space

    Energy Technology Data Exchange (ETDEWEB)

    Arsuaga, J [Department of Mathematics, San Francisco State University, 1600 Holloway Ave, San Francisco, CA 94132 (United States); Blackstone, T [Department of Computer Science, San Francisco State University, 1600 Holloway Ave., San Francisco, CA 94132 (United States); Diao, Y [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Hinson, K [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Karadayi, E [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States); Saito, M [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States)

    2007-09-28

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^(n^2)). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is at the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.

  3. Importance sampling large deviations in nonequilibrium steady states. I

    Science.gov (United States)

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T.

    2018-03-01

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
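
    The quantity being estimated, a scaled cumulant generating function, can be shown with the naive direct estimator that the paper's importance-sampling methods improve on. A toy sketch for a biased Gaussian random walker (assumed parameters throughout; for this model the exact answer is lambda(s) = s*b + s^2/2, so the estimate can be checked):

```python
import math
import random

def scgf_direct(s, n_traj=10000, t_steps=50, bias=0.1, seed=0):
    """Direct (brute-force) estimate of the scaled cumulant generating
    function lambda(s) = (1/t) * ln E[exp(s * X_t)] for a random walker
    with Gaussian increments of mean `bias` and unit variance, where
    X_t is the total displacement. For large |s| the expectation is
    dominated by exponentially rare trajectories and this estimator's
    variance blows up, which is the problem importance sampling and
    cloning methods address."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_traj):
        x = 0.0
        for _ in range(t_steps):
            x += bias + rng.gauss(0.0, 1.0)
        total += math.exp(s * x)
    return math.log(total / n_traj) / t_steps

lam0 = scgf_direct(0.0)   # exactly 0 by normalization
```

    For small s the direct estimate agrees with the analytic s*b + s^2/2; pushing s further is where trajectory reweighting and guiding functions become necessary.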

  4. Importance sampling large deviations in nonequilibrium steady states. I.

    Science.gov (United States)

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T

    2018-03-28

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.

  5. A self-sampling method to obtain large volumes of undiluted cervicovaginal secretions.

    Science.gov (United States)

    Boskey, Elizabeth R; Moench, Thomas R; Hees, Paul S; Cone, Richard A

    2003-02-01

    Studies of vaginal physiology and pathophysiology sometimes require larger volumes of undiluted cervicovaginal secretions than can be obtained by current methods. A convenient method for self-sampling these secretions outside a clinical setting can facilitate such studies of reproductive health. The goal was to develop a vaginal self-sampling method for collecting large volumes of undiluted cervicovaginal secretions. A menstrual collection device (the Instead cup) was inserted briefly into the vagina to collect secretions that were then retrieved from the cup by centrifugation in a 50-ml conical tube. All 16 women asked to perform this procedure found it feasible and acceptable. Among 27 samples, an average of 0.5 g of secretions (range, 0.1-1.5 g) was collected. This is a rapid and convenient self-sampling method for obtaining relatively large volumes of undiluted cervicovaginal secretions. It should prove suitable for a wide range of assays, including those involving sexually transmitted diseases, microbicides, vaginal physiology, immunology, and pathophysiology.

  6. Analysis of large soil samples for actinides

    Science.gov (United States)

    Maxwell III, Sherrod L. [Aiken, SC]

    2009-03-24

    A method of analyzing relatively large soil samples for actinides employs a separation process in which cerium fluoride precipitation removes the soil matrix and co-precipitates plutonium, americium, and curium with cerium and hydrofluoric acid; these actinides are then separated using chromatography cartridges.

  7. A spinner magnetometer for large Apollo lunar samples

    Science.gov (United States)

    Uehara, M.; Gattacceca, J.; Quesnel, Y.; Lepaulard, C.; Lima, E. A.; Manfredi, M.; Rochette, P.

    2017-10-01

    We developed a spinner magnetometer to measure the natural remanent magnetization of large Apollo lunar rocks in the storage vault of the Lunar Sample Laboratory Facility (LSLF) of NASA. The magnetometer mainly consists of a commercially available three-axial fluxgate sensor and a hand-rotating sample table with an optical encoder recording the rotation angles. The distance between the sample and the sensor is adjustable according to the sample size and magnetization intensity. The sensor and the sample are placed in a two-layer mu-metal shield to measure the sample natural remanent magnetization. The magnetic signals are acquired together with the rotation angle to obtain stacking of the measured signals over multiple revolutions. The developed magnetometer has a sensitivity of 5 × 10⁻⁷ Am² at the standard sensor-to-sample distance of 15 cm. This sensitivity is sufficient to measure the natural remanent magnetization of almost all the lunar basalt and breccia samples with mass above 10 g in the LSLF vault.
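The stacking step described above (accumulating signal versus rotation angle over multiple revolutions) can be sketched as an angle-binned average. The binning scheme and names below are illustrative assumptions, not the instrument's actual acquisition software.

```python
def stack_by_angle(samples, n_bins=360):
    """Average magnetometer readings into rotation-angle bins across many
    revolutions.  `samples` is a list of (angle_deg, value) pairs; readings
    from different revolutions that fall into the same angular bin are
    averaged, suppressing uncorrelated noise."""
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    bin_width = 360.0 / n_bins
    for angle_deg, value in samples:
        b = int((angle_deg % 360.0) / bin_width) % n_bins
        sums[b] += value
        counts[b] += 1
    # Empty bins (angles never visited) are reported as 0.0.
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]
```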

  9. Genetic Influences on Pulmonary Function: A Large Sample Twin Study

    DEFF Research Database (Denmark)

    Ingebrigtsen, Truls S; Thomsen, Simon F; van der Sluis, Sophie

    2011-01-01

    Heritability of forced expiratory volume in one second (FEV(1)), forced vital capacity (FVC), and peak expiratory flow (PEF) has not been previously addressed in large twin studies. We evaluated the genetic contribution to individual differences observed in FEV(1), FVC, and PEF using data from...... the largest population-based twin study on spirometry. Specially trained lay interviewers with previous experience in spirometric measurements tested 4,314 Danish twins (individuals), 46-68 years of age, in their homes using a hand-held spirometer, and their flow-volume curves were evaluated. Modern variance...

  10. Double sampling with multiple imputation to answer large sample meta-research questions: Introduction and illustration by evaluating adherence to two simple CONSORT guidelines

    Directory of Open Access Journals (Sweden)

    Patrice L. Capers

    2015-03-01

    Full Text Available BACKGROUND: Meta-research can involve manual retrieval and evaluation of research, which is resource intensive. Creation of high throughput methods (e.g., search heuristics, crowdsourcing) has improved feasibility of large meta-research questions, but possibly at the cost of accuracy. OBJECTIVE: To evaluate the use of double sampling combined with multiple imputation (DS+MI) to address meta-research questions, using as an example adherence of PubMed entries to two simple Consolidated Standards of Reporting Trials (CONSORT) guidelines for titles and abstracts. METHODS: For the DS large sample, we retrieved all PubMed entries satisfying the filters: RCT; human; abstract available; and English language (n=322,107). For the DS subsample, we randomly sampled 500 entries from the large sample. The large sample was evaluated with a lower rigor, higher throughput (RLOTHI) method using search heuristics, while the subsample was evaluated using a higher rigor, lower throughput (RHITLO) human rating method. Multiple imputation of the missing-completely-at-random RHITLO data for the large sample was informed by: RHITLO data from the subsample; RLOTHI data from the large sample; whether a study was an RCT; and country and year of publication. RESULTS: The RHITLO and RLOTHI methods in the subsample largely agreed (phi coefficients: title=1.00, abstract=0.92). Compliance with abstract and title criteria has increased over time, with non-US countries improving more rapidly. DS+MI logistic regression estimates were more precise than subsample estimates (e.g., 95% CI for change in title and abstract compliance by Year: subsample RHITLO 1.050-1.174 vs. DS+MI 1.082-1.151). As evidence of improved accuracy, DS+MI coefficient estimates were closer to RHITLO than the large sample RLOTHI. CONCLUSIONS: Our results support our hypothesis that DS+MI would result in improved precision and accuracy. 
This method is flexible and may provide a practical way to examine large corpora of
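A minimal sketch of the DS+MI idea follows (hypothetical variable names; the study's actual imputation model also conditioned on covariates such as RCT status, country, and year). Gold-standard labels exist only for a double-sampled subset; missing labels are imputed from P(gold | cheap) estimated on that subset, and the estimate is averaged over several imputations.

```python
import random

def ds_mi_prevalence(cheap, gold, m=20, seed=0):
    """Double sampling + multiple imputation, reduced to its core.
    cheap : list of 0/1 low-rigor labels for every record (the large sample).
    gold  : dict {index: 0/1} of high-rigor labels for the subsample.
    Returns the estimated prevalence of gold=1, averaged over m imputations."""
    rng = random.Random(seed)
    # Estimate P(gold=1 | cheap=c) on the double-sampled subset.
    p = {}
    for c in (0, 1):
        hits = [g for i, g in gold.items() if cheap[i] == c]
        p[c] = sum(hits) / len(hits) if hits else float(c)
    n = len(cheap)
    estimates = []
    for _ in range(m):
        total = 0
        for i, c in enumerate(cheap):
            if i in gold:
                total += gold[i]          # observed high-rigor label
            else:
                total += 1 if rng.random() < p[c] else 0  # imputed draw
        estimates.append(total / n)
    return sum(estimates) / m
```

Rubin's rules would additionally combine the within- and between-imputation variances; this sketch keeps only the point estimate.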

  11. Validation Of Intermediate Large Sample Analysis (With Sizes Up to 100 G) and Associated Facility Improvement

    International Nuclear Information System (INIS)

    Bode, P.; Koster-Ammerlaan, M.J.J.

    2018-01-01

    Pragmatic rather than physical correction factors for neutron and gamma-ray shielding were studied for samples of intermediate size, i.e. up to the 10-100 gram range. It was found that for most biological and geological materials, the neutron self-shielding is less than 5 % and the gamma-ray self-attenuation can easily be estimated. A trueness control material of 1 kg size was made based on use of leftovers of materials used in laboratory intercomparisons. A design study for a large sample pool-side facility, handling plate-type volumes, had to be stopped because of a reduction in human resources available for this CRP. The large sample NAA facilities were made available to guest scientists from Greece and Brazil. The laboratory for neutron activation analysis participated in the world’s first laboratory intercomparison utilizing large samples. (author)

  12. Thermal neutron self-shielding correction factors for large sample instrumental neutron activation analysis using the MCNP code

    International Nuclear Information System (INIS)

    Tzika, F.; Stamatelatos, I.E.

    2004-01-01

    Thermal neutron self-shielding within large samples was studied using the Monte Carlo neutron transport code MCNP. The code enabled a three-dimensional modeling of the actual source and geometry configuration including reactor core, graphite pile and sample. Neutron flux self-shielding correction factors derived for a set of materials of interest for large sample neutron activation analysis are presented and evaluated. Simulations were experimentally verified by measurements performed using activation foils. The results of this study can be applied in order to determine neutron self-shielding factors of unknown samples from the thermal neutron fluxes measured at the surface of the sample.
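For a rough feel of the correction being computed, a textbook first-order estimate for a slab (average-to-incident flux ratio under pure exponential attenuation) is sketched below. This is the kind of analytic approximation that the paper's full MCNP transport calculations replace; the parameter names are illustrative.

```python
import math

def slab_self_shielding(sigma_total_cm1, thickness_cm):
    """First-order thermal-neutron self-shielding factor for a slab:
    the ratio of volume-averaged to incident flux when the flux decays as
    exp(-Sigma * x) through the sample.  sigma_total_cm1 is the macroscopic
    cross section in 1/cm; the result lies in (0, 1]."""
    tau = sigma_total_cm1 * thickness_cm  # optical thickness
    if tau == 0.0:
        return 1.0  # transparent sample: no self-shielding
    return (1.0 - math.exp(-tau)) / tau
```

A factor near 1 means negligible self-shielding; strongly absorbing or thick samples push it toward 0, requiring the measured activity to be divided by it.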

  13. 105-DR Large Sodium Fire Facility decontamination, sampling, and analysis plan

    International Nuclear Information System (INIS)

    Knaus, Z.C.

    1995-01-01

    This is the decontamination, sampling, and analysis plan for the closure activities at the 105-DR Large Sodium Fire Facility at Hanford Reservation. This document supports the 105-DR Large Sodium Fire Facility Closure Plan, DOE-RL-90-25. The 105-DR LSFF, which operated from about 1972 to 1986, was a research laboratory that occupied the former ventilation supply room on the southwest side of the 105-DR Reactor facility in the 100-D Area of the Hanford Site. The LSFF was established to investigate fire fighting and safety associated with alkali metal fires in the liquid metal fast breeder reactor facilities. The decontamination, sampling, and analysis plan identifies the decontamination procedures, sampling locations, any special handling requirements, quality control samples, required chemical analysis, and data validation needed to meet the requirements of the 105-DR Large Sodium Fire Facility Closure Plan in compliance with the Resource Conservation and Recovery Act

  14. Multivariate statistics high-dimensional and large-sample approximations

    CERN Document Server

    Fujikoshi, Yasunori; Shimizu, Ryoichi

    2010-01-01

    A comprehensive examination of high-dimensional analysis of multivariate methods and their real-world applications Multivariate Statistics: High-Dimensional and Large-Sample Approximations is the first book of its kind to explore how classical multivariate methods can be revised and used in place of conventional statistical tools. Written by prominent researchers in the field, the book focuses on high-dimensional and large-scale approximations and details the many basic multivariate methods used to achieve high levels of accuracy. The authors begin with a fundamental presentation of the basic

  15. Large sample NAA facility and methodology development

    International Nuclear Information System (INIS)

    Roth, C.; Gugiu, D.; Barbos, D.; Datcu, A.; Aioanei, L.; Dobrea, D.; Taroiu, I. E.; Bucsa, A.; Ghinescu, A.

    2013-01-01

    A Large Sample Neutron Activation Analysis (LSNAA) facility has been developed at the TRIGA Annular Core Pulsed Reactor (ACPR) operated by the Institute for Nuclear Research in Pitesti, Romania. The central irradiation cavity of the ACPR core can accommodate a large irradiation device. The ACPR neutron flux characteristics are well known and spectrum adjustment techniques have been successfully applied to enhance the thermal component of the neutron flux in the central irradiation cavity. An analysis methodology was developed by using the MCNP code in order to estimate counting efficiency and correction factors for the major perturbing phenomena. Test experiments, comparisons with classical instrumental neutron activation analysis (INAA) methods, and an international inter-comparison exercise have been performed to validate the new methodology. (authors)

  16. The Sex, Age, and Me study: recruitment and sampling for a large mixed-methods study of sexual health and relationships in an older Australian population.

    Science.gov (United States)

    Lyons, Anthony; Heywood, Wendy; Fileborn, Bianca; Minichiello, Victor; Barrett, Catherine; Brown, Graham; Hinchliff, Sharron; Malta, Sue; Crameri, Pauline

    2017-09-01

    Older people are often excluded from large studies of sexual health, as it is assumed that they are not having sex or are reluctant to talk about sensitive topics and are therefore difficult to recruit. We outline the sampling and recruitment strategies from a recent study on sexual health and relationships among older people. Sex, Age and Me was a nationwide Australian study that examined sexual health, relationship patterns, safer-sex practices and STI knowledge of Australians aged 60 years and over. The study used a mixed-methods approach to establish baseline levels of knowledge and to develop deeper insights into older adults' understandings and practices relating to sexual health. Data collection took place in 2015, with 2137 participants completing a quantitative survey and 53 participating in one-on-one semi-structured interviews. As the feasibility of this type of study has been largely untested until now, we provide detailed information on the study's recruitment strategies and methods. We also compare key characteristics of our sample with national estimates to assess its degree of representativeness. This study provides evidence to challenge the assumptions that older people will not take part in sexual health-related research and details a novel and successful way to recruit participants in this area.

  17. Sample preparation method for ICP-MS measurement of 99Tc in a large amount of environmental samples

    International Nuclear Information System (INIS)

    Kondo, M.; Seki, R.

    2002-01-01

    Sample preparation for measurement of 99Tc in a large amount of soil and water samples by ICP-MS has been developed using 95mTc as a yield tracer. This method is based on the conventional method for a small amount of soil samples using incineration, acid digestion, extraction chromatography (TEVA resin) and ICP-MS measurement. Preliminary concentration of Tc has been introduced by co-precipitation with ferric oxide. The matrix materials in a large amount of samples were removed more thoroughly than in the previous method, while a high recovery of Tc was maintained. The recovery of Tc was 70-80% for 100 g soil samples and 60-70% for 500 g of soil and 500 L of water samples. The detection limit of this method was evaluated as 0.054 mBq/kg in 500 g soil and 0.032 μBq/L in 500 L water. The determined value of 99Tc in the IAEA-375 (soil sample collected near the Chernobyl Nuclear Reactor) was 0.25 ± 0.02 Bq/kg. (author)
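The tracer-based correction amounts to dividing the measured quantity by the chemical recovery determined from the 95mTc tracer and normalizing to sample mass. A minimal sketch, with illustrative argument names and units:

```python
def yield_corrected_concentration(measured_mBq, recovery, sample_kg):
    """Correct a measured 99Tc activity for chemical yield (recovery fraction
    determined via a 95mTc tracer) and normalize to sample mass.
    measured_mBq : activity recovered after separation, in mBq
    recovery     : fractional tracer recovery, in (0, 1]
    sample_kg    : original sample mass, in kg
    Returns the activity concentration in mBq/kg."""
    if not 0.0 < recovery <= 1.0:
        raise ValueError("recovery must lie in (0, 1]")
    return measured_mBq / (recovery * sample_kg)
```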

  18. Determinants of salivary evening alpha-amylase in a large sample free of psychopathology

    NARCIS (Netherlands)

    Veen, Gerthe; Giltay, Erik J.; Vreeburg, Sophie A.; Licht, Carmilla M. M.; Cobbaert, Christa M.; Zitman, Frans G.; Penninx, Brenda W. J. H.

    Objective: Recently, salivary alpha-amylase (sAA) has been proposed as a suitable index for sympathetic activity and dysregulation of the autonomic nervous system (ANS). Although determinants of sAA have been described, they have not been studied within the same study with a large sample size

  19. Exploring Technostress: Results of a Large Sample Factor Analysis

    OpenAIRE

    Jonušauskas, Steponas; Raišienė, Agota Giedrė

    2016-01-01

    With reference to the results of a large sample factor analysis, the article aims to propose the frame examining technostress in a population. The survey and principal component analysis of the sample consisting of 1013 individuals who use ICT in their everyday work was implemented in the research. 13 factors combine 68 questions and explain 59.13 per cent of the answers dispersion. Based on the factor analysis, questionnaire was reframed and prepared to reasonably analyze the respondents’ an...

  20. A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids

    Energy Technology Data Exchange (ETDEWEB)

    Berres, Anne Sabine [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Adhinarayanan, Vignesh [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Turton, Terece [Univ. of Texas, Austin, TX (United States); Feng, Wu [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Rogers, David Honegger [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-05-12

    Large simulation data requires a lot of time and computational resources to compute, store, analyze, visualize, and run user studies on. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance energy consumption and cognitive value of visualizations of resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline using regular sampling as an example. The reasons for this choice are twofold: a simple example reduces unnecessary complexity, as we know what to expect from the results, and it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test we did, and we conducted user studies on Amazon Mechanical Turk (AMT) for a range of different results we produced through sampling.

  1. Sampling of charged liquid radwaste stored in large tanks

    International Nuclear Information System (INIS)

    Tchemitcheff, E.; Domage, M.; Bernard-Bruls, X.

    1995-01-01

    For liquid effluents, final safe disposal of radwaste, in France and elsewhere, entails conversion to a stable solid form, and hence conditioning. Producing conditioned waste of the requisite quality, ensuring traceability of the characteristics of the packages produced, and operating the conditioning processes safely all require accurate knowledge of the chemical and radiochemical properties of the effluents concerned. The problem of sampling these normally charged effluents is aggravated for effluents that have been stored for several years in very large tanks without stirring and retrieval systems. In 1992, SGN was asked by Cogema to study the retrieval and conditioning of LL/ML chemical sludge and spent ion-exchange resins produced in the operation of the UP2 400 plant at La Hague, and stored temporarily in rectangular silos and tanks. The sampling aspect was crucial for validating the inventories, identifying problems liable to arise from aging of the effluents, dimensioning the retrieval systems, and checking transferability and compatibility with the downstream conditioning process. Two innovative self-contained systems were developed and built for sampling operations, positioned above the tanks concerned. Both systems have been operated in active conditions and have proved totally satisfactory for taking representative samples. Today SGN can propose industrially proven overall solutions, adaptable to the various constraints of many spent fuel cycle operators

  2. 105-DR Large sodium fire facility soil sampling data evaluation report

    International Nuclear Information System (INIS)

    Adler, J.G.

    1996-01-01

    This report evaluates the soil sampling activities, soil sample analysis, and soil sample data associated with the closure activities at the 105-DR Large Sodium Fire Facility. The evaluation compares these activities to the regulatory requirements for meeting clean closure. The report concludes that there is no soil contamination from the waste treatment activities

  3. Large area synchrotron X-ray fluorescence mapping of biological samples

    International Nuclear Information System (INIS)

    Kempson, I.; Thierry, B.; Smith, E.; Gao, M.; De Jonge, M.

    2014-01-01

    Large area mapping of inorganic material in biological samples has suffered severely from prohibitively long acquisition times. With the advent of new detector technology we can now generate statistically relevant information for studying cell populations, inter-variability and bioinorganic chemistry in large specimens. We have been implementing ultrafast synchrotron-based XRF mapping afforded by the MAIA detector for large area mapping of biological material. For example, a 2.5 million pixel map can be acquired in 3 hours, compared to a typical synchrotron XRF set-up needing over 1 month of uninterrupted beamtime. Of particular focus to us is the fate of metals and nanoparticles in cells, 3D tissue models and animal tissues. Large area scanning has for the first time provided statistically significant information on sufficiently large numbers of cells to quantify intercellular variability in uptake of nanoparticles. Techniques such as flow cytometry generally require analysis of thousands of cells for statistically meaningful comparison, due to the large degree of variability. Large area XRF now gives comparable information in a quantifiable manner. Furthermore, we can now image localised deposition of nanoparticles in tissues that would be highly improbable to 'find' by typical XRF imaging. In addition, the ultrafast nature also makes it viable to conduct 3D XRF tomography over large dimensions. This technology opens new opportunities in biomonitoring and understanding metal and nanoparticle fate ex vivo. Following from this is extension to molecular imaging through antibody-targeted nanoparticles that label specific tissues and allow monitoring of cellular processes or biological consequences.

  4. A hard-to-read font reduces the framing effect in a large sample.

    Science.gov (United States)

    Korn, Christoph W; Ries, Juliane; Schalk, Lennart; Oganian, Yulia; Saalbach, Henrik

    2018-04-01

    How can apparent decision biases, such as the framing effect, be reduced? Intriguing findings within recent years indicate that foreign language settings reduce framing effects, which has been explained in terms of deeper cognitive processing. Because hard-to-read fonts have been argued to trigger deeper cognitive processing, so-called cognitive disfluency, we tested whether hard-to-read fonts reduce framing effects. We found no reliable evidence for an effect of hard-to-read fonts on four framing scenarios in a laboratory (final N = 158) and an online study (N = 271). However, in a preregistered online study with a rather large sample (N = 732), a hard-to-read font reduced the framing effect in the classic "Asian disease" scenario (in a one-sided test). This suggests that hard-to-read fonts can modulate decision biases, albeit with rather small effect sizes. Overall, our findings stress the importance of large samples for the reliability and replicability of modulations of decision biases.

  5. A course in mathematical statistics and large sample theory

    CERN Document Server

    Bhattacharya, Rabi; Patrangenaru, Victor

    2016-01-01

    This graduate-level textbook is primarily aimed at graduate students of statistics, mathematics, science, and engineering who have had an undergraduate course in statistics, an upper division course in analysis, and some acquaintance with measure theoretic probability. It provides a rigorous presentation of the core of mathematical statistics. Part I of this book constitutes a one-semester course on basic parametric mathematical statistics. Part II deals with the large sample theory of statistics — parametric and nonparametric, and its contents may be covered in one semester as well. Part III provides brief accounts of a number of topics of current interest for practitioners and other disciplines whose work involves statistical methods. Large Sample theory with many worked examples, numerical calculations, and simulations to illustrate theory Appendices provide ready access to a number of standard results, with many proofs Solutions given to a number of selected exercises from Part I Part II exercises with ...

  6. Association between genetic variation in a region on chromosome 11 and schizophrenia in large samples from Europe

    DEFF Research Database (Denmark)

    Rietschel, M; Mattheisen, M; Degenhardt, F

    2012-01-01

    the recruitment of very large samples of patients and controls (that is tens of thousands), or large, potentially more homogeneous samples that have been recruited from confined geographical areas using identical diagnostic criteria. Applying the latter strategy, we performed a genome-wide association study (GWAS...... between emotion regulation and cognition that is structurally and functionally abnormal in SCZ and bipolar disorder.Molecular Psychiatry advance online publication, 12 July 2011; doi:10.1038/mp.2011.80....

  7. Investigating sex differences in psychological predictors of snack intake among a large representative sample

    NARCIS (Netherlands)

    Adriaanse, M.A.; Evers, C.; Verhoeven, A.A.C.; de Ridder, D.T.D.

    It is often assumed that there are substantial sex differences in eating behaviour (e.g. women are more likely to be dieters or emotional eaters than men). The present study investigates this assumption in a large representative community sample while incorporating a comprehensive set of

  8. Statistical characterization of a large geochemical database and effect of sample size

    Science.gov (United States)

    Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.

    2005-01-01

    The authors investigated statistical distributions for concentrations of chemical elements from the National Geochemical Survey (NGS) database of the U.S. Geological Survey. At the time of this study, the NGS data set encompasses 48,544 stream sediment and soil samples from the conterminous United States analyzed by ICP-AES following a 4-acid near-total digestion. This report includes 27 elements: Al, Ca, Fe, K, Mg, Na, P, Ti, Ba, Ce, Co, Cr, Cu, Ga, La, Li, Mn, Nb, Nd, Ni, Pb, Sc, Sr, Th, V, Y and Zn. The goal and challenge for the statistical overview was to delineate chemical distributions in a complex, heterogeneous data set spanning a large geographic range (the conterminous United States), and many different geological provinces and rock types. After declustering to create a uniform spatial sample distribution with 16,511 samples, histograms and quantile-quantile (Q-Q) plots were employed to delineate subpopulations that have coherent chemical and mineral affinities. Probability groupings are discerned by changes in slope (kinks) on the plots. Major rock-forming elements, e.g., Al, Ca, K and Na, tend to display linear segments on normal Q-Q plots. These segments can commonly be linked to petrologic or mineralogical associations. For example, linear segments on K and Na plots reflect dilution of clay minerals by quartz sand (low in K and Na). Minor and trace element relationships are best displayed on lognormal Q-Q plots. These sensitively reflect discrete relationships in subpopulations within the wide range of the data. For example, small but distinctly log-linear subpopulations for Pb, Cu, Zn and Ag are interpreted to represent ore-grade enrichment of naturally occurring minerals such as sulfides. None of the 27 chemical elements could pass the test for either normal or lognormal distribution on the declustered data set. Part of the reasons relate to the presence of mixtures of subpopulations and outliers. Random samples of the data set with successively
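The Q-Q construction used throughout this survey can be sketched as follows. The Blom plotting position is an assumption for illustration, since the abstract does not state the plotting convention; kinks (slope changes) in the resulting pairs are what flag subpopulations.

```python
from statistics import NormalDist

def normal_qq(values):
    """Return (theoretical, observed) quantile pairs for a normal Q-Q plot.
    Theoretical quantiles use the Blom plotting position (i - 3/8)/(n + 1/4);
    plotting observed against theoretical gives a straight line for normally
    distributed data, and slope changes reveal mixed subpopulations."""
    observed = sorted(values)
    n = len(observed)
    nd = NormalDist()  # standard normal
    theoretical = [nd.inv_cdf((i - 0.375) / (n + 0.25)) for i in range(1, n + 1)]
    return theoretical, observed
```

For lognormal Q-Q plots, as used for the trace elements, the same function applies to the logarithms of the concentrations.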

  9. A large replication study and meta-analysis in European samples provides further support for association of AHI1 markers with schizophrenia

    DEFF Research Database (Denmark)

    Ingason, Andrés; Giegling, Ina; Cichon, Sven

    2010-01-01

    The Abelson helper integration site 1 (AHI1) gene locus on chromosome 6q23 is among a group of candidate loci for schizophrenia susceptibility that were initially identified by linkage followed by linkage disequilibrium mapping, and subsequent replication of the association in an independent sample....... Here, we present results of a replication study of AHI1 locus markers, previously implicated in schizophrenia, in a large European sample (in total 3907 affected and 7429 controls). Furthermore, we perform a meta-analysis of the implicated markers in 4496 affected and 18,920 controls. Both...... as the neighbouring phosphodiesterase 7B (PDE7B)-may be considered candidates for involvement in the genetic aetiology of schizophrenia....

  10. Sampling strategy for a large scale indoor radiation survey - a pilot project

    International Nuclear Information System (INIS)

    Strand, T.; Stranden, E.

    1986-01-01

    Optimisation of a stratified random sampling strategy for large scale indoor radiation surveys is discussed. It is based on the results from a small scale pilot project where variances in dose rates within different categories of houses were assessed. By selecting a predetermined precision level for the mean dose rate in a given region, the number of measurements needed can be optimised. The results of a pilot project in Norway are presented together with the development of the final sampling strategy for a planned large scale survey. (author)
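One standard way to perform this optimisation is Neyman allocation, shown here as a generic sketch rather than the authors' exact criterion: measurements are allocated to each house category in proportion to stratum size times within-stratum standard deviation (the dose-rate variances assessed in the pilot project).

```python
def neyman_allocation(strata, total_n):
    """Neyman allocation of total_n measurements across strata.
    strata : list of (N_h, sigma_h) pairs, where N_h is the number of houses
             in category h and sigma_h the pilot-estimated dose-rate standard
             deviation within that category.
    Returns the (rounded) number of measurements per stratum, n_h ~ N_h*sigma_h."""
    weights = [N_h * sigma_h for N_h, sigma_h in strata]
    total_weight = sum(weights)
    return [round(total_n * w / total_weight) for w in weights]
```

Categories that are both common and variable receive most of the measurements, which is what lets a fixed budget hit a predetermined precision for the regional mean.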

  11. An open-flow pulse ionization chamber for alpha spectrometry of large-area samples

    International Nuclear Information System (INIS)

    Johansson, L.; Roos, B.; Samuelsson, C.

    1992-01-01

    The presented open-flow pulse ionization chamber was developed to make alpha spectrometry on large-area surfaces easy. One side of the chamber is left open, where the sample is to be placed. The sample acts as a chamber wall and thereby defines the detector volume. The sample area can be as large as 400 cm². To prevent air from entering the volume, there is a constant gas flow through the detector, entering at the bottom of the chamber and leaking out at the sides of the sample. The method results in good energy resolution and has considerable applicability in retrospective radon research. Alpha spectra obtained in the retrospective measurements originate from 210Po, built up in the sample from the radon daughters recoiled into a glass surface. (au)

  12. Fast concentration of dissolved forms of cesium radioisotopes from large seawater samples

    International Nuclear Information System (INIS)

    Jan Kamenik; Henrieta Dulaiova; Ferdinand Sebesta; Kamila St'astna; Czech Technical University, Prague

    2013-01-01

    The method developed for cesium concentration from large freshwater samples was tested and adapted for analysis of cesium radionuclides in seawater. Concentration of dissolved forms of cesium in large seawater samples (about 100 L) was performed using composite absorbers AMP-PAN and KNiFC-PAN with ammonium molybdophosphate and potassium–nickel hexacyanoferrate(II) as active components, respectively, and polyacrylonitrile as a binding polymer. A specially designed chromatography column with bed volume (BV) 25 mL allowed fast flow rates of seawater (up to 1,200 BV h⁻¹). The recovery yields were determined by ICP-MS analysis of stable cesium added to the seawater sample. Both absorbers proved usable for cesium concentration from large seawater samples. KNiFC-PAN material was slightly more effective in cesium concentration from acidified seawater (recovery yield around 93 % for 700 BV h⁻¹). This material showed similar efficiency in cesium concentration also from natural seawater. The activity concentrations of 137Cs determined in seawater from the central Pacific Ocean were 1.5 ± 0.1 and 1.4 ± 0.1 Bq m⁻³ for an offshore (January 2012) and a coastal (February 2012) locality, respectively; 134Cs activities were below the detection limit. (author)

  13. Uncertainty budget in internal monostandard NAA for small and large size samples analysis

    International Nuclear Information System (INIS)

    Dasari, K.B.; Acharya, R.

    2014-01-01

    Evaluation of the total uncertainty budget for a determined concentration value is important under a quality assurance programme. Concentration calculations in NAA are carried out either by relative NAA or by the k0-based internal monostandard NAA (IM-NAA) method. The IM-NAA method has been used for small and large sample analysis of clay potteries. An attempt was made to identify the uncertainty components in IM-NAA, and the uncertainty budgets for La in both small and large size samples were evaluated and compared. (author)
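A budget of this kind typically combines independent relative uncertainty components in quadrature. A minimal GUM-style sketch follows; the list of components (counting statistics, efficiency ratio, attenuation correction, ...) is illustrative, not the paper's actual budget.

```python
import math

def combined_relative_uncertainty(components):
    """Combine independent relative (fractional) uncertainty components in
    quadrature: u_c = sqrt(sum(u_i**2)).  Components are fractional, so
    0.02 means 2%; the result is the combined fractional uncertainty of the
    reported concentration."""
    return math.sqrt(sum(u * u for u in components))
```

Multiplying the result by a coverage factor (commonly k = 2) gives the expanded uncertainty quoted with the concentration.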

  14. Exploring Technostress: Results of a Large Sample Factor Analysis

    Directory of Open Access Journals (Sweden)

    Steponas Jonušauskas

    2016-06-01

    Full Text Available With reference to the results of a large sample factor analysis, the article aims to propose a frame for examining technostress in a population. The survey and principal component analysis of a sample of 1013 individuals who use ICT in their everyday work were implemented in the research. Thirteen factors combine 68 questions and explain 59.13 per cent of the variance in responses. Based on the factor analysis, the questionnaire was reframed and prepared to analyze the respondents’ answers in a statistically validated pattern, revealing the causes, consequences and prevalence of technostress in the population. The key elements of technostress identified by the factor analysis can serve for the construction of technostress measurement scales in further research.
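The dimensionality-reduction step behind statements like "13 components explain 59.13 per cent of the variance" can be sketched with a plain PCA via the SVD. This is a generic illustration on synthetic random data, not the survey data; the respondent and item counts are borrowed from the abstract only to set array sizes.

```python
import numpy as np

# Synthetic illustration of "k components explain X% of variance" via PCA.
# Sizes mirror the abstract (1013 respondents, 68 items), but the data are
# random; real survey responses would give a very different spectrum.
rng = np.random.default_rng(0)
n_respondents, n_items = 1013, 68
X = rng.normal(size=(n_respondents, n_items))

Xc = X - X.mean(axis=0)                   # centre each item
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)           # variance ratio per component

cum = np.cumsum(explained)
k = int(np.searchsorted(cum, 0.59)) + 1   # components needed for ~59% of variance
print(f"{k} components explain {cum[k-1]:.1%} of the variance")
```

On uncorrelated random data many components are needed to reach 59%; that real questionnaire data reach it with 13 reflects the strong inter-item correlations the study exploits.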

  15. Relationship of fish indices with sampling effort and land use change in a large Mediterranean river.

    Science.gov (United States)

    Almeida, David; Alcaraz-Hernández, Juan Diego; Merciai, Roberto; Benejam, Lluís; García-Berthou, Emili

    2017-12-15

    Fish are invaluable ecological indicators in freshwater ecosystems but have been less used for ecological assessments in large Mediterranean rivers. We evaluated the effects of sampling effort (transect length) on fish metrics, such as species richness and two fish indices (the new European Fish Index EFI+ and a regional index, IBICAT2b), in the mainstem of a large Mediterranean river. For this purpose, we sampled by boat electrofishing five sites, each with 10 consecutive transects corresponding to a total length of 20 times the river width (the European standard required by the Water Framework Directive), and we also analysed the effect of sampling area on previous surveys. Species accumulation curves and richness extrapolation estimates in general suggested that species richness was reasonably estimated with transect lengths of 10 times the river width or less. The EFI+ index was significantly affected by sampling area, both for our samplings and previous data. Surprisingly, EFI+ values in general decreased with increasing sampling area, despite the higher observed richness, likely because the expected values of metrics were higher. By contrast, the regional fish index was not dependent on sampling area, likely because it does not use a predictive model. Both fish indices, but particularly the EFI+, decreased with decreasing forest cover percentage, even within the smaller disturbance gradient in the river type studied (mainstem of a large Mediterranean river, where environmental pressures are more general). Although the two fish-based indices are very different in terms of their development, methodology, and metrics used, they were significantly correlated and provided a similar assessment of ecological status. Our results reinforce the importance of standardization of sampling methods for bioassessment and suggest that predictive models that use sampling area as a predictor might be more affected by differences in sampling effort than simpler biotic indices. Copyright

  16. Moving into a new era of periodontal genetic studies: relevance of large case-control samples using severe phenotypes for genome-wide association studies.

    Science.gov (United States)

    Vaithilingam, R D; Safii, S H; Baharuddin, N A; Ng, C C; Cheong, S C; Bartold, P M; Schaefer, A S; Loos, B G

    2014-12-01

    Studies to elucidate the role of genetics as a risk factor for periodontal disease have gone through various phases. In the majority of cases, the initial 'hypothesis-dependent' candidate-gene polymorphism studies did not report valid genetic risk loci. Following large-scale replication studies, these initially positive results are believed to have been caused by type I errors. However, susceptibility genes, such as CDKN2BAS (Cyclin Dependent KiNase 2B AntiSense RNA; alias ANRIL [ANtisense Rna In the INK4 Locus]), glycosyltransferase 6 domain containing 1 (GLT6D1) and cyclooxygenase 2 (COX2), have been reported as conclusive risk loci of periodontitis. The search for genetic risk factors accelerated with the advent of 'hypothesis-free' genome-wide association studies (GWAS). However, despite many different GWAS being performed for almost all human diseases, only three GWAS on periodontitis have been published - one reported genome-wide association of GLT6D1 with aggressive periodontitis (a severe phenotype of periodontitis), whereas the remaining two, which were performed on patients with chronic periodontitis, were not able to find significant associations. This review discusses the problems faced and the lessons learned from the search for genetic risk variants of periodontitis. Current and future strategies for identifying genetic variance in periodontitis, and the importance of planning a well-designed genetic study with large and sufficiently powered case-control samples of severe phenotypes, are also discussed. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  17. Sampling pig farms at the abattoir in a cross-sectional study - Evaluation of a sampling method.

    Science.gov (United States)

    Birkegård, Anna Camilla; Halasa, Tariq; Toft, Nils

    2017-09-15

    A cross-sectional study design is relatively inexpensive, fast and easy to conduct when compared to other study designs. Careful planning is essential to obtaining a representative sample of the population, and the recommended approach is to use simple random sampling from an exhaustive list of units in the target population. This approach is rarely feasible in practice, and other sampling procedures must often be adopted. For example, when slaughter pigs are the target population, sampling the pigs on the slaughter line may be an alternative to on-site sampling at a list of farms. However, it is difficult to sample a large number of farms from an exact predefined list, due to the logistics and workflow of an abattoir. Therefore, it is necessary to have a systematic sampling procedure and to evaluate the obtained sample with respect to the study objective. We propose a method for 1) planning, 2) conducting, and 3) evaluating the representativeness and reproducibility of a cross-sectional study when simple random sampling is not possible. We used an example of a cross-sectional study with the aim of quantifying the association of antimicrobial resistance and antimicrobial consumption in Danish slaughter pigs. It was not possible to visit farms within the designated timeframe. Therefore, it was decided to use convenience sampling at the abattoir. Our approach was carried out in three steps: 1) planning: using data from meat inspection to plan at which abattoirs and how many farms to sample; 2) conducting: sampling was carried out at five abattoirs; 3) evaluation: representativeness was evaluated by comparing sampled and non-sampled farms, and the reproducibility of the study was assessed through simulated sampling based on meat inspection data from the period when the actual data collection was carried out. In the cross-sectional study, samples were taken from 681 Danish pig farms during five weeks from February to March 2015. The evaluation showed that the sampling

  18. Psychometric Evaluation of the Thought–Action Fusion Scale in a Large Clinical Sample

    Science.gov (United States)

    Meyer, Joseph F.; Brown, Timothy A.

    2015-01-01

    This study examined the psychometric properties of the 19-item Thought–Action Fusion (TAF) Scale, a measure of maladaptive cognitive intrusions, in a large clinical sample (N = 700). An exploratory factor analysis (n = 300) yielded two interpretable factors: TAF Moral (TAF-M) and TAF Likelihood (TAF-L). A confirmatory bifactor analysis was conducted on the second portion of the sample (n = 400) to account for possible sources of item covariance using a general TAF factor (subsuming TAF-M) alongside the TAF-L domain-specific factor. The bifactor model provided an acceptable fit to the sample data. Results indicated that global TAF was more strongly associated with a measure of obsessive-compulsiveness than measures of general worry and depression, and the TAF-L dimension was more strongly related to obsessive-compulsiveness than depression. Overall, results support the bifactor structure of the TAF in a clinical sample and its close relationship to its neighboring obsessive-compulsiveness construct. PMID:22315482

  19. Psychometric evaluation of the thought-action fusion scale in a large clinical sample.

    Science.gov (United States)

    Meyer, Joseph F; Brown, Timothy A

    2013-12-01

    This study examined the psychometric properties of the 19-item Thought-Action Fusion (TAF) Scale, a measure of maladaptive cognitive intrusions, in a large clinical sample (N = 700). An exploratory factor analysis (n = 300) yielded two interpretable factors: TAF Moral (TAF-M) and TAF Likelihood (TAF-L). A confirmatory bifactor analysis was conducted on the second portion of the sample (n = 400) to account for possible sources of item covariance using a general TAF factor (subsuming TAF-M) alongside the TAF-L domain-specific factor. The bifactor model provided an acceptable fit to the sample data. Results indicated that global TAF was more strongly associated with a measure of obsessive-compulsiveness than measures of general worry and depression, and the TAF-L dimension was more strongly related to obsessive-compulsiveness than depression. Overall, results support the bifactor structure of the TAF in a clinical sample and its close relationship to its neighboring obsessive-compulsiveness construct.

  20. The Brief Negative Symptom Scale (BNSS): Independent validation in a large sample of Italian patients with schizophrenia.

    Science.gov (United States)

    Mucci, A; Galderisi, S; Merlotti, E; Rossi, A; Rocca, P; Bucci, P; Piegari, G; Chieffi, M; Vignapiano, A; Maj, M

    2015-07-01

    The Brief Negative Symptom Scale (BNSS) was developed to address the main limitations of the existing scales for the assessment of negative symptoms of schizophrenia. The initial validation of the scale by the group involved in its development demonstrated good convergent and discriminant validity, and a factor structure confirming the two domains of negative symptoms (reduced emotional/verbal expression and anhedonia/asociality/avolition). However, only relatively small samples of patients with schizophrenia were investigated. Further independent validation in large clinical samples might be instrumental to the broad diffusion of the scale in clinical research. The present study aimed to examine the BNSS inter-rater reliability, convergent/discriminant validity and factor structure in a large Italian sample of outpatients with schizophrenia. Our results confirmed the excellent inter-rater reliability of the BNSS (the intraclass correlation coefficient ranged from 0.81 to 0.98 for individual items and was 0.98 for the total score). The convergent validity measures had r values from 0.62 to 0.77, while the divergent validity measures had r values from 0.20 to 0.28 in the main sample (n=912) and in a subsample without clinically significant levels of depression and extrapyramidal symptoms (n=496). The BNSS factor structure was supported in both groups. The study confirms that the BNSS is a promising measure for quantifying negative symptoms of schizophrenia in large multicenter clinical studies. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  1. Automated, feature-based image alignment for high-resolution imaging mass spectrometry of large biological samples

    NARCIS (Netherlands)

    Broersen, A.; Liere, van R.; Altelaar, A.F.M.; Heeren, R.M.A.; McDonnell, L.A.

    2008-01-01

    High-resolution imaging mass spectrometry of large biological samples is the goal of several research groups. In mosaic imaging, the most common method, the large sample is divided into a mosaic of small areas that are then analyzed with high resolution. Here we present an automated alignment

  2. Tracing the trajectory of skill learning with a very large sample of online game players.

    Science.gov (United States)

    Stafford, Tom; Dewar, Michael

    2014-02-01

    In the present study, we analyzed data from a very large sample (N = 854,064) of players of an online game involving rapid perception, decision making, and motor responding. Use of game data allowed us to connect, for the first time, rich details of training history with measures of performance from participants engaged for a sustained amount of time in effortful practice. We showed that lawful relations exist between practice amount and subsequent performance, and between practice spacing and subsequent performance. Our methodology allowed an in situ confirmation of results long established in the experimental literature on skill acquisition. Additionally, we showed that greater initial variation in performance is linked to higher subsequent performance, a result we link to the exploration/exploitation trade-off from the computational framework of reinforcement learning. We discuss the benefits and opportunities of behavioral data sets with very large sample sizes and suggest that this approach could be particularly fecund for studies of skill acquisition.
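The "lawful relations" between practice amount and subsequent performance mentioned above are classically modelled as a power law of practice, score = a · attempts^b, which becomes a straight line in log-log space. A minimal stdlib-only sketch on synthetic scores (the coefficients a = 50 and b = 0.15 are invented, and nothing here comes from the game data):

```python
import math
import random

# Power law of practice, score = a * attempts**b: fit by linear regression
# in log-log space. Scores are synthetic (a=50, b=0.15 are invented).
random.seed(1)
attempts = list(range(1, 201))
scores = [50.0 * t**0.15 * math.exp(random.gauss(0, 0.02)) for t in attempts]

xs = [math.log(t) for t in attempts]
ys = [math.log(s) for s in scores]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = math.exp(my - b * mx)
print(f"fitted: score ~ {a:.1f} * attempts^{b:.3f}")
```

With hundreds of thousands of players, fits of this kind can be estimated per player, which is what makes the practice-spacing and initial-variability comparisons in the study possible.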

  3. Sampling large landscapes with small-scale stratification-User's Manual

    Science.gov (United States)

    Bart, Jonathan

    2011-01-01

    This manual explains procedures for partitioning a large landscape into plots, assigning the plots to strata, and selecting plots in each stratum to be surveyed. These steps are referred to as the "sampling large landscapes (SLL) process." We assume that users of the manual have a moderate knowledge of ArcGIS and Microsoft® Excel. The manual is written for a single user but in many cases, some steps will be carried out by a biologist designing the survey and some steps will be carried out by a quantitative assistant. Thus, the manual essentially may be passed back and forth between these users. The SLL process primarily has been used to survey birds, and we refer to birds as subjects of the counts. The process, however, could be used to count any objects.
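The partition–stratify–select sequence of the SLL process can be sketched in a few lines. Plot identifiers and stratum labels below are invented for illustration; the real workflow derives strata from GIS layers in ArcGIS, not at random.

```python
import random
from collections import defaultdict

# Partition -> stratify -> per-stratum draw, the core of the SLL process.
# Plot IDs and stratum labels are invented; a real survey would assign
# strata from habitat layers rather than random labels.
random.seed(42)
plots = [f"plot_{i:04d}" for i in range(1, 1001)]          # partitioned landscape
strata = {p: random.choice(["forest", "shrub", "wetland"]) for p in plots}

by_stratum = defaultdict(list)
for plot, stratum in strata.items():
    by_stratum[stratum].append(plot)

# simple random sample of 20 plots within each stratum
sample = {s: random.sample(ps, k=20) for s, ps in by_stratum.items()}
for s in sorted(sample):
    print(s, len(by_stratum[s]), "plots ->", len(sample[s]), "surveyed")
```

Keeping the stratum assignment separate from the within-stratum draw mirrors the manual's division of labour between the biologist (defining strata) and the quantitative assistant (drawing the sample).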

  4. Large contribution of human papillomavirus in vaginal neoplastic lesions: a worldwide study in 597 samples.

    Science.gov (United States)

    Alemany, L; Saunier, M; Tinoco, L; Quirós, B; Alvarado-Cabrero, I; Alejo, M; Joura, E A; Maldonado, P; Klaustermeier, J; Salmerón, J; Bergeron, C; Petry, K U; Guimerà, N; Clavero, O; Murillo, R; Clavel, C; Wain, V; Geraets, D T; Jach, R; Cross, P; Carrilho, C; Molina, C; Shin, H R; Mandys, V; Nowakowski, A M; Vidal, A; Lombardi, L; Kitchener, H; Sica, A R; Magaña-León, C; Pawlita, M; Quint, W; Bravo, I G; Muñoz, N; de Sanjosé, S; Bosch, F X

    2014-11-01

    This work describes the human papillomavirus (HPV) prevalence and the HPV type distribution in a large series of vaginal intraepithelial neoplasia (VAIN) grades 2/3 and vaginal cancer worldwide. We analysed 189 VAIN 2/3 and 408 invasive vaginal cancer cases collected from 31 countries from 1986 to 2011. After histopathological evaluation of sectioned formalin-fixed paraffin-embedded samples, HPV DNA detection and typing was performed using the SPF-10/DNA enzyme immunoassay (DEIA)/LiPA25 system (version 1). A subset of 146 vaginal cancers was tested for p16(INK4a) expression, a cellular surrogate marker for HPV transformation. Prevalence ratios were estimated using multivariate Poisson regression with robust variance. HPV DNA was detected in 74% (95% confidence interval (CI): 70-78%) of invasive cancers and in 96% (95% CI: 92-98%) of VAIN 2/3. Among cancers, the highest detection rates were observed in the warty-basaloid subtype of squamous cell carcinomas, and at younger ages. Concerning the type-specific distribution, HPV16 was the most frequently detected type in both precancerous and cancerous lesions (59%). p16(INK4a) overexpression was found in 87% of HPV DNA positive vaginal cancer cases. HPV was identified in a large proportion of invasive vaginal cancers and in almost all VAIN 2/3. HPV16 was the most common type detected. A large impact on reducing the burden of vaginal neoplastic lesions is expected among vaccinated cohorts. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. A Simple Sampling Method for Estimating the Accuracy of Large Scale Record Linkage Projects.

    Science.gov (United States)

    Boyd, James H; Guiver, Tenniel; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Anderson, Phil; Dickinson, Teresa

    2016-05-17

    Record linkage techniques allow different data collections to be brought together to provide a wider picture of the health status of individuals. Ensuring high linkage quality is important to guarantee the quality and integrity of research. Current methods for measuring linkage quality typically focus on precision (the proportion of accepted links that are correct matches), given the difficulty of measuring the proportion of false negatives. The aim of this work is to introduce and evaluate a sampling-based method to estimate both precision and recall following record linkage. In the sampling-based method, record-pairs from each threshold band (including those below the identified cut-off for acceptance) are sampled and clerically reviewed. These results are then applied to the entire set of record-pairs, providing estimates of false positives and false negatives. This method was evaluated on a synthetically generated dataset, where the true match status (which records belonged to the same person) was known. The sampled estimates of linkage quality were relatively close to the actual linkage quality metrics calculated for the whole synthetic dataset. The precision and recall measures for seven reviewers were very consistent, with little variation in the clerical assessment results (overall agreement using the Fleiss kappa statistic was 0.601). This method offers a possible means of accurately estimating matching quality and refining linkages in population-level linkage studies. The sampling approach is especially important for large project linkages, where the number of record pairs produced may be very large, often running into millions.
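A rough sketch of the sampling-based estimator described above: draw a clerical-review sample from each score band, including bands below the acceptance cut-off, then scale the labelled counts back up to estimate true and false positives and negatives. All band counts, the cut-off, and the simulated "clerical review" below are invented for illustration.

```python
import random

# Score bands of candidate record-pairs; pairs scoring >= the cut-off are
# accepted as links. All counts and match probabilities are invented.
random.seed(7)
bands = {0.9: 5000, 0.8: 3000, 0.7: 2000, 0.6: 4000}
cutoff = 0.8
sample_size = 200  # pairs clerically reviewed per band

def clerical_review(band, k):
    # Stand-in for human review; here the chance that a sampled pair is a
    # true match is simply assumed to equal its band score.
    return sum(random.random() < band for _ in range(k))

tp = fp = fn = 0.0
for band, n_pairs in bands.items():
    true_in_sample = clerical_review(band, sample_size)
    est_true = n_pairs * true_in_sample / sample_size  # scale up to the band
    if band >= cutoff:
        tp += est_true                 # accepted and (estimated) correct
        fp += n_pairs - est_true       # accepted but incorrect
    else:
        fn += est_true                 # true matches lost below the cut-off

precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(f"precision ~ {precision:.3f}, recall ~ {recall:.3f}")
```

Sampling below the cut-off is the key move: it is what makes recall estimable at all, since false negatives never appear among the accepted links.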

  6. Feasibility of self-sampled dried blood spot and saliva samples sent by mail in a population-based study.

    Science.gov (United States)

    Sakhi, Amrit Kaur; Bastani, Nasser Ezzatkhah; Ellingjord-Dale, Merete; Gundersen, Thomas Erik; Blomhoff, Rune; Ursin, Giske

    2015-04-11

    In large epidemiological studies it is often challenging to obtain biological samples. Self-sampling by study participants using dried blood spots (DBS) technique has been suggested to overcome this challenge. DBS is a type of biosampling where blood samples are obtained by a finger-prick lancet, blotted and dried on filter paper. However, the feasibility and efficacy of collecting DBS samples from study participants in large-scale epidemiological studies is not known. The aim of the present study was to test the feasibility and response rate of collecting self-sampled DBS and saliva samples in a population-based study of women above 50 years of age. We determined response proportions, number of phone calls to the study center with questions about sampling, and quality of the DBS. We recruited women through a study conducted within the Norwegian Breast Cancer Screening Program. Invitations, instructions and materials were sent to 4,597 women. The data collection took place over a 3 month period in the spring of 2009. Response proportions for the collection of DBS and saliva samples were 71.0% (3,263) and 70.9% (3,258), respectively. We received 312 phone calls (7% of the 4,597 women) with questions regarding sampling. Of the 3,263 individuals that returned DBS cards, 3,038 (93.1%) had been packaged and shipped according to instructions. A total of 3,032 DBS samples were sufficient for at least one biomarker analysis (i.e. 92.9% of DBS samples received by the laboratory). 2,418 (74.1%) of the DBS cards received by the laboratory were filled with blood according to the instructions (i.e. 10 completely filled spots with up to 7 punches per spot for up to 70 separate analyses). To assess the quality of the samples, we selected and measured two biomarkers (carotenoids and vitamin D). The biomarker levels were consistent with previous reports. Collecting self-sampled DBS and saliva samples through the postal services provides a low cost, effective and feasible

  7. Absolute activity determinations on large volume geological samples independent of self-absorption effects

    International Nuclear Information System (INIS)

    Wilson, W.E.

    1980-01-01

    This paper describes a method for measuring the absolute activity of large volume samples by γ-spectroscopy independent of self-absorption effects using Ge detectors. The method yields accurate matrix independent results at the expense of replicative counting of the unknown sample. (orig./HP)

  8. Psychometric Properties of the Penn State Worry Questionnaire for Children in a Large Clinical Sample

    Science.gov (United States)

    Pestle, Sarah L.; Chorpita, Bruce F.; Schiffman, Jason

    2008-01-01

    The Penn State Worry Questionnaire for Children (PSWQ-C; Chorpita, Tracey, Brown, Collica, & Barlow, 1997) is a 14-item self-report measure of worry in children and adolescents. Although the PSWQ-C has demonstrated favorable psychometric properties in small clinical and large community samples, this study represents the first psychometric…

  9. Procedure for plutonium analysis of large (100g) soil and sediment samples

    International Nuclear Information System (INIS)

    Meadows, J.W.T.; Schweiger, J.S.; Mendoza, B.; Stone, R.

    1975-01-01

    A method for the complete dissolution of large soil or sediment samples is described. This method is in routine usage at Lawrence Livermore Laboratory for the analysis of fall-out levels of Pu in soils and sediments. Intercomparison with partial dissolution (leach) techniques shows the complete dissolution method to be superior for the determination of plutonium in a wide variety of environmental samples. (author)

  10. Efficient inference of population size histories and locus-specific mutation rates from large-sample genomic variation data.

    Science.gov (United States)

    Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S

    2015-02-01

    With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.
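As a toy version of frequency-spectrum inference: under the standard coalescent with constant population size, the expected count of variants at sample frequency i is E[ξᵢ] = θ/i, so θ can be recovered from a simulated spectrum with a Watterson-style estimator. This is far simpler than the paper's piecewise-exponential histories fitted with exact gradients; everything below is a stdlib-only illustration with invented parameters.

```python
import math
import random

# Toy frequency-spectrum inference under a constant-size coalescent, where
# the expected count of variants at sample frequency i is E[xi_i] = theta/i.
random.seed(3)
n = 50          # haploid sample size
theta = 10.0    # population-scaled mutation rate to recover

def poisson(lam):
    # Knuth's Poisson sampler (stdlib-only)
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= threshold:
            return k - 1

# simulate an independent-Poisson site frequency spectrum
sfs = [poisson(theta / i) for i in range(1, n)]

# Watterson-style estimate: segregating sites / harmonic number
S = sum(sfs)
a_n = sum(1.0 / i for i in range(1, n))
theta_hat = S / a_n
print(f"true theta = {theta}, estimate = {theta_hat:.2f}")
```

Richer demographies reshape E[ξᵢ] away from θ/i, and fitting those shapes to observed spectra, with gradients computed by automatic differentiation, is the substance of the paper's method.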

  11. Fast sampling from a Hidden Markov Model posterior for large data

    DEFF Research Database (Denmark)

    Bonnevie, Rasmus; Hansen, Lars Kai

    2014-01-01

    Hidden Markov Models are of interest in a broad set of applications including modern data driven systems involving very large data sets. However, approximate inference methods based on Bayesian averaging are precluded in such applications as each sampling step requires a full sweep over the data...

  12. 17 CFR Appendix B to Part 420 - Sample Large Position Report

    Science.gov (United States)

    2010-04-01

    Appendix B to Part 420 (17 CFR, Commodity and Securities Exchanges; Department of the Treasury regulations, 2010-04-01) provides a sample Large Position Report, including entries for securities held as collateral for financial derivatives and other securities transactions and a total memorandum line.

  13. Feasibility of self-sampled dried blood spot and saliva samples sent by mail in a population-based study

    International Nuclear Information System (INIS)

    Sakhi, Amrit Kaur; Bastani, Nasser Ezzatkhah; Ellingjord-Dale, Merete; Gundersen, Thomas Erik; Blomhoff, Rune; Ursin, Giske

    2015-01-01

    In large epidemiological studies it is often challenging to obtain biological samples. Self-sampling by study participants using dried blood spots (DBS) technique has been suggested to overcome this challenge. DBS is a type of biosampling where blood samples are obtained by a finger-prick lancet, blotted and dried on filter paper. However, the feasibility and efficacy of collecting DBS samples from study participants in large-scale epidemiological studies is not known. The aim of the present study was to test the feasibility and response rate of collecting self-sampled DBS and saliva samples in a population–based study of women above 50 years of age. We determined response proportions, number of phone calls to the study center with questions about sampling, and quality of the DBS. We recruited women through a study conducted within the Norwegian Breast Cancer Screening Program. Invitations, instructions and materials were sent to 4,597 women. The data collection took place over a 3 month period in the spring of 2009. Response proportions for the collection of DBS and saliva samples were 71.0% (3,263) and 70.9% (3,258), respectively. We received 312 phone calls (7% of the 4,597 women) with questions regarding sampling. Of the 3,263 individuals that returned DBS cards, 3,038 (93.1%) had been packaged and shipped according to instructions. A total of 3,032 DBS samples were sufficient for at least one biomarker analysis (i.e. 92.9% of DBS samples received by the laboratory). 2,418 (74.1%) of the DBS cards received by the laboratory were filled with blood according to the instructions (i.e. 10 completely filled spots with up to 7 punches per spot for up to 70 separate analyses). To assess the quality of the samples, we selected and measured two biomarkers (carotenoids and vitamin D). The biomarker levels were consistent with previous reports. Collecting self-sampled DBS and saliva samples through the postal services provides a low cost, effective and feasible

  14. Development of Large Sample Neutron Activation Technique for New Applications in Thailand

    International Nuclear Information System (INIS)

    Laoharojanaphand, S.; Tippayakul, C.; Wonglee, S.; Channuie, J.

    2018-01-01

    The development of Large Sample Neutron Activation Analysis (LSNAA) in Thailand is presented in this paper. The technique was first developed with rice as the test sample, using the Thai Research Reactor-1/Modification 1 (TRR-1/M1) as the neutron source. The first step was to select and characterize an appropriate irradiation facility. An out-of-core irradiation facility (the A4 position) was attempted first, and the results obtained there were then used to guide the subsequent experiments with the thermal column facility. The thermal column was characterized with Cu wire to determine the spatial flux distribution with and without a rice sample: the flux depression was less than 30% without the sample and increased to about 60% with it. Flux monitors internal to the rice sample were used to determine the average flux over the sample. The gamma self-shielding effect during gamma measurement was corrected using Monte Carlo simulation; the ratio between the efficiencies of the volume source and a point source at each energy was calculated with the MCNPX code. The research team adopted the k0-NAA methodology to calculate element concentrations. The k0-NAA program developed by the IAEA was set up to simulate the conditions of the irradiation and measurement facilities used in this research, and the element concentrations in the bulk rice sample were then calculated taking into account the flux depression and gamma efficiency corrections. At the moment, the results still show large discrepancies with the reference values; further validation work will be performed to identify sources of error. Moreover, this LSNAA technique was introduced for the activation analysis of the IAEA archaeological mock-up, and the results are provided in this report. (author)

  15. CO2 isotope analyses using large air samples collected on intercontinental flights by the CARIBIC Boeing 767

    NARCIS (Netherlands)

    Assonov, S.S.; Brenninkmeijer, C.A.M.; Koeppel, C.; Röckmann, T.

    2009-01-01

    Analytical details for 13C and 18O isotope analyses of atmospheric CO2 in large air samples are given. The large air samples of nominally 300 L were collected during the passenger aircraft-based atmospheric chemistry research project CARIBIC and analyzed for a large number of trace gases and

  16. Field sampling, preparation procedure and plutonium analyses of large freshwater samples

    International Nuclear Information System (INIS)

    Straelberg, E.; Bjerk, T.O.; Oestmo, K.; Brittain, J.E.

    2002-01-01

    This work is part of an investigation of the mobility of plutonium in freshwater systems containing humic substances. A well-defined bog-stream system located in the catchment area of a subalpine lake, Oevre Heimdalsvatn, Norway, is being studied. During the summer of 1999, six water samples were collected from the tributary stream Lektorbekken and the lake itself. However, the analyses showed that the plutonium concentration was below the detection limit in all the samples. Therefore renewed sampling at the same sites was carried out in August 2000. The results so far are in agreement with previous analyses from the Heimdalen area. However, 100 times higher concentrations are found in the lowlands in the eastern part of Norway. The reason for this is not understood, but may be caused by differences in the concentrations of humic substances and/or the fact that the mountain areas are covered with snow for a longer period of time every year. (LN)

  17. Neurocognitive impairment in a large sample of homeless adults with mental illness.

    Science.gov (United States)

    Stergiopoulos, V; Cusi, A; Bekele, T; Skosireva, A; Latimer, E; Schütz, C; Fernando, I; Rourke, S B

    2015-04-01

    This study examines neurocognitive functioning in a large, well-characterized sample of homeless adults with mental illness and assesses demographic and clinical factors associated with neurocognitive performance. A total of 1500 homeless adults with mental illness enrolled in the At Home Chez Soi study completed neuropsychological measures assessing speed of information processing, memory, and executive functioning. Sociodemographic and clinical data were also collected. Linear regression analyses were conducted to examine factors associated with neurocognitive performance. Approximately half of our sample met criteria for psychosis, major depressive disorder, and alcohol or substance use disorder, and nearly half had experienced severe traumatic brain injury. Overall, 72% of participants demonstrated cognitive impairment, including deficits in processing speed (48%), verbal learning (71%) and recall (67%), and executive functioning (38%). The overall statistical model explained 19.8% of the variance in the neurocognitive summary score, with reduced neurocognitive performance associated with older age, lower education, first language other than English or French, Black or Other ethnicity, and the presence of psychosis. Homeless adults with mental illness experience impairment in multiple neuropsychological domains. Much of the variance in our sample's cognitive performance remains unexplained, highlighting the need for further research in the mechanisms underlying cognitive impairment in this population. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  18. Detecting superior face recognition skills in a large sample of young British adults

    Directory of Open Access Journals (Sweden)

    Anna Katarzyna Bobak

    2016-09-01

The Cambridge Face Memory Test Long Form (CFMT+) and Cambridge Face Perception Test (CFPT) are typically used to assess the face processing ability of individuals who believe they have superior face recognition skills. Previous large-scale studies have presented norms for the CFPT but not the CFMT+. However, previous research has also highlighted the necessity for establishing country-specific norms for these tests, indicating that norming data are required for both tests using young British adults. The current study addressed this issue in 254 British participants. In addition to providing the first norm for performance on the CFMT+ in any large sample, we also report the first UK-specific cut-off for superior face recognition on the CFPT. Further analyses identified a small advantage for females on both tests, and only small associations between objective face recognition skills and self-report measures. A secondary aim of the study was to examine the relationship between trait or social anxiety and face processing ability, and no associations were noted. The implications of these findings for the classification of super-recognisers are discussed.

  19. Large sample neutron activation analysis: establishment at CDTN/CNEN, Brazil

    Energy Technology Data Exchange (ETDEWEB)

    Menezes, Maria Angela de B.C., E-mail: menezes@cdtn.b [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil); Jacimovic, Radojko, E-mail: radojko.jacimovic@ijs.s [Jozef Stefan Institute, Ljubljana (Slovenia). Dept. of Environmental Sciences. Group for Radiochemistry and Radioecology

    2011-07-01

In order to improve the application of the neutron activation technique at CDTN/CNEN, large sample instrumental neutron activation analysis (LS-INAA) is being established under the IAEA BRA 14798 and FAPEMIG APQ-01259-09 projects. This procedure usually requires special facilities for activation as well as for detection. However, the TRIGA Mark I IPR-R1 reactor at CDTN/CNEN has not been adapted for such irradiation, and the usual gamma-ray spectrometry has been carried out. To start the establishment of LS-INAA, a 5 g sample of the IAEA/Soil-7 reference material was analyzed by the k{sub 0}-standardized method. This paper addresses the detector efficiency over the volume source, using KayWin v2.23 and ANGLE V3.0 software. (author)

  20. Large-volume injection of sample diluents not miscible with the mobile phase as an alternative approach in sample preparation for bioanalysis: an application for fenspiride bioequivalence.

    Science.gov (United States)

    Medvedovici, Andrei; Udrescu, Stefan; Albu, Florin; Tache, Florentin; David, Victor

    2011-09-01

Liquid-liquid extraction of target compounds from biological matrices, followed by the injection of a large volume from the organic layer into a chromatographic column operated under reversed-phase (RP) conditions, successfully combines selectivity with the straightforward character of the procedure in order to enhance sensitivity, compared with the usual approach involving solvent evaporation and residue re-dissolution. Large-volume injection of samples in diluents that are not miscible with the mobile phase was recently introduced in chromatographic practice; it also substantially reduces the risk of random errors produced during the manipulation of samples. A bioanalytical method designed for the bioequivalence of fenspiride-containing pharmaceutical formulations was based on a sample preparation procedure involving extraction of the target analyte and the internal standard (trimetazidine) from alkalinized plasma samples into 1-octanol. A volume of 75 µl from the octanol layer was directly injected onto a Zorbax SB C18 Rapid Resolution column (50 mm length × 4.6 mm internal diameter × 1.8 µm particle size), with the RP separation carried out under gradient elution conditions. Detection was performed by positive ESI and MS/MS. Aspects related to method development and validation are discussed. The bioanalytical method was successfully applied to assess the bioequivalence of a modified-release pharmaceutical formulation containing 80 mg fenspiride hydrochloride in two different studies, carried out as single-dose administration under fasting and fed conditions (four arms) and as multiple-dose administration, respectively. The quality attributes assigned to the bioanalytical method, as resulting from its application to the bioequivalence studies, are highlighted and fully demonstrate that sample preparation based on large-volume injection of immiscible diluents has an increased potential for application in bioanalysis.

  1. Large scale study of tooth enamel

    International Nuclear Information System (INIS)

    Bodart, F.; Deconninck, G.; Martin, M.T.

Human tooth enamel contains traces of foreign elements. The presence of these elements is related to the history and the environment of the human body and can be considered as the signature of perturbations which occur during the growth of a tooth. A map of the distribution of these traces in a large-scale sample of the population will constitute a reference for further investigations of environmental effects. One hundred eighty samples of teeth were first analyzed using PIXE, backscattering and nuclear reaction techniques. The results were analyzed using statistical methods. Correlations between O, F, Na, P, Ca, Mn, Fe, Cu, Zn, Pb and Sr were observed, and cluster analysis is in progress. The techniques described in the present work have been developed in order to establish a method for the exploration of very large samples of the Belgian population. (author)

  2. Evaluation of environmental sampling methods for detection of Salmonella enterica in a large animal veterinary hospital.

    Science.gov (United States)

    Goeman, Valerie R; Tinkler, Stacy H; Hammac, G Kenitra; Ruple, Audrey

    2018-04-01

    Environmental surveillance for Salmonella enterica can be used for early detection of contamination; thus routine sampling is an integral component of infection control programs in hospital environments. At the Purdue University Veterinary Teaching Hospital (PUVTH), the technique regularly employed in the large animal hospital for sample collection uses sterile gauze sponges for environmental sampling, which has proven labor-intensive and time-consuming. Alternative sampling methods use Swiffer brand electrostatic wipes for environmental sample collection, which are reportedly effective and efficient. It was hypothesized that use of Swiffer wipes for sample collection would be more efficient and less costly than the use of gauze sponges. A head-to-head comparison between the 2 sampling methods was conducted in the PUVTH large animal hospital and relative agreement, cost-effectiveness, and sampling efficiency were compared. There was fair agreement in culture results between the 2 sampling methods, but Swiffer wipes required less time and less physical effort to collect samples and were more cost-effective.

  3. Remote sensing data with the conditional latin hypercube sampling and geostatistical approach to delineate landscape changes induced by large chronological physical disturbances.

    Science.gov (United States)

    Lin, Yu-Pin; Chu, Hone-Jay; Wang, Cheng-Long; Yu, Hsiao-Hsuan; Wang, Yung-Chieh

    2009-01-01

This study applies variogram analyses of normalized difference vegetation index (NDVI) images derived from SPOT HRV images obtained before and after the Chi-Chi earthquake in the Chenyulan watershed, Taiwan, as well as images after four large typhoons, to delineate the spatial patterns, spatial structures and spatial variability of landscapes caused by these large disturbances. The conditional Latin hypercube sampling approach was applied to select samples from multiple NDVI images. Kriging and sequential Gaussian simulation with sufficient samples were then used to generate maps of NDVI images. The variography of the NDVI image results demonstrates that spatial patterns of disturbed landscapes were successfully delineated by variogram analysis in the study areas. The high-magnitude Chi-Chi earthquake created spatial landscape variations in the study area. After the earthquake, the cumulative impacts of typhoons on landscape patterns depended on the magnitudes and paths of the typhoons, but were not always evident in the spatiotemporal variability of landscapes in the study area. The statistics and spatial structures of multiple NDVI images were captured by 3,000 samples from 62,500 grids in the NDVI images. Kriging and sequential Gaussian simulation with the 3,000 samples effectively reproduced spatial patterns of the NDVI images. Overall, the proposed approach, which integrates conditional Latin hypercube sampling, variogram analysis, kriging and sequential Gaussian simulation of remotely sensed images, efficiently monitors, samples and maps the effects of large chronological disturbances on spatial characteristics of landscape changes, including spatial variability and heterogeneity.
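The variogram analysis at the core of this approach can be illustrated with a minimal empirical semivariogram in pure NumPy. This is a hedged sketch: the coordinates and NDVI-like values below are synthetic stand-ins, not data from the study.

```python
import numpy as np

def semivariogram(coords, values, lags, tol):
    """Empirical semivariogram: gamma(h) is the mean of
    0.5 * (z_i - z_j)^2 over point pairs separated by ~h."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    gam = []
    for h in lags:
        mask = (np.abs(d - h) < tol) & (d > 0)  # pairs within tol of lag h
        gam.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(gam)

# Synthetic, spatially structured field (sinusoidal trend plus noise).
rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, (100, 2))
values = np.sin(coords[:, 0]) + 0.1 * rng.normal(size=100)
gamma = semivariogram(coords, values, lags=[0.5, 2.0, 4.0], tol=0.5)
assert gamma[0] < gamma[-1]  # semivariance grows with lag for structured data
```

For spatially structured data such as post-disturbance NDVI, the semivariance rises with lag distance until it reaches a sill; that shape is what the study's variography exploits to characterize landscape patterns.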

  4. In-situ high resolution particle sampling by large time sequence inertial spectrometry

    International Nuclear Information System (INIS)

    Prodi, V.; Belosi, F.

    1990-09-01

In situ sampling is always preferred, when possible, because of the artifacts that can arise when the aerosol has to flow through long sampling lines. On the other hand, the amount of possible losses can be calculated with some confidence only when the size distribution can be measured with sufficient precision and the losses are not too large. This makes it desirable to sample directly in the vicinity of the aerosol source or containment. High-temperature sampling devices with a detailed aerodynamic separation are extremely useful for this purpose. Several measurements are possible with the inertial spectrometer (INSPEC), but not with cascade impactors or cyclones. INSPEC - INertial SPECtrometer - has been conceived to measure the size distribution of aerosols by separating the particles while airborne according to their size and collecting them on a filter. It consists of a channel of rectangular cross-section with a 90 degree bend. Clean air is drawn through the channel, with a thin aerosol sheath injected close to the inner wall. Due to the bend, the particles are separated according to their size, leaving the original streamline by a distance which is a function of particle inertia and resistance, i.e. of aerodynamic diameter. The filter collects all the particles of the same aerodynamic size at the same distance from the inlet, in a continuous distribution. INSPEC particle separation at high temperature (up to 800 °C) has been tested with zirconia particles as calibration aerosols. The feasibility study has been concerned with resolution and time-sequence sampling capabilities at high temperature (700 °C).

  5. Radioimmunoassay of h-TSH - methodological suggestions for dealing with medium to large numbers of samples

    International Nuclear Information System (INIS)

    Mahlstedt, J.

    1977-01-01

The article deals with practical aspects of establishing a TSH-RIA for patients, with particular regard to predetermined quality criteria. Methodological suggestions are made for medium to large numbers of samples, with the aim of reducing monotonous, precision-critical working steps by means of simple aids. The required quality criteria are well met, while the test procedure is well adapted to the rhythm of work and may be carried out without loss of precision even with large numbers of samples. (orig.) [de]

  6. Elemental mapping of large samples by external ion beam analysis with sub-millimeter resolution and its applications

    Science.gov (United States)

    Silva, T. F.; Rodrigues, C. L.; Added, N.; Rizzutto, M. A.; Tabacniks, M. H.; Mangiarotti, A.; Curado, J. F.; Aguirre, F. R.; Aguero, N. F.; Allegro, P. R. P.; Campos, P. H. O. V.; Restrepo, J. M.; Trindade, G. F.; Antonio, M. R.; Assis, R. F.; Leite, A. R.

    2018-05-01

The elemental mapping of large areas using ion beam techniques is a desired capability for several scientific communities, involved in topics ranging from geoscience to cultural heritage. Usually, the constraints for large-area mapping are not met by the micro- and nano-probe setups implemented all over the world. A novel setup for mapping large-sized samples in an external beam was recently built at the University of São Paulo, employing a broad MeV-proton probe of sub-millimeter dimension coupled to a high-precision, large-range XYZ robotic stage (60 cm range on all axes, with 5 μm precision ensured by optical sensors). An important issue in large-area mapping is how to deal with irregularities of the sample's surface, which may introduce artifacts in the images due to variation of the measuring conditions. In our setup, we implemented an automatic system based on machine vision that corrects the position of the sample to compensate for its surface irregularities. As an additional benefit, a 3D digital reconstruction of the scanned surface can also be obtained. Using this new and unique setup, we have produced large-area elemental maps of ceramics, stones, fossils, and other sorts of samples.

  7. Adolescent Internet Abuse: A Study on the Role of Attachment to Parents and Peers in a Large Community Sample.

    Science.gov (United States)

    Ballarotto, Giulia; Volpi, Barbara; Marzilli, Eleonora; Tambelli, Renata

    2018-01-01

Adolescents are the main users of new technologies and their main purpose of use is social interaction. Although new technologies are useful to teenagers, in addressing their developmental tasks, recent studies have shown that they may be an obstacle in their growth. Research shows that teenagers with Internet addiction experience lower quality in their relationships with parents and more individual difficulties. However, limited research is available on the role played by adolescents' attachment to parents and peers, considering their psychological profiles. We evaluated in a large community sample of adolescents (N = 1105) the Internet use/abuse, the adolescents' attachment to parents and peers, and their psychological profiles. Hierarchical regression analyses were conducted to verify the influence of parental and peer attachment on Internet use/abuse, considering the moderating effect of adolescents' psychopathological risk. Results showed that adolescents' attachment to parents had a significant effect on Internet use. Adolescents' psychopathological risk had a moderating effect on the relationship between attachment to mothers and Internet use. Our study shows that further research is needed, taking into account both individual and family variables.

  8. Adolescent Internet Abuse: A Study on the Role of Attachment to Parents and Peers in a Large Community Sample

    Directory of Open Access Journals (Sweden)

    Giulia Ballarotto

    2018-01-01

Adolescents are the main users of new technologies and their main purpose of use is social interaction. Although new technologies are useful to teenagers, in addressing their developmental tasks, recent studies have shown that they may be an obstacle in their growth. Research shows that teenagers with Internet addiction experience lower quality in their relationships with parents and more individual difficulties. However, limited research is available on the role played by adolescents' attachment to parents and peers, considering their psychological profiles. We evaluated in a large community sample of adolescents (N = 1105) the Internet use/abuse, the adolescents' attachment to parents and peers, and their psychological profiles. Hierarchical regression analyses were conducted to verify the influence of parental and peer attachment on Internet use/abuse, considering the moderating effect of adolescents' psychopathological risk. Results showed that adolescents' attachment to parents had a significant effect on Internet use. Adolescents' psychopathological risk had a moderating effect on the relationship between attachment to mothers and Internet use. Our study shows that further research is needed, taking into account both individual and family variables.

  9. Rapid separation method for {sup 237}Np and Pu isotopes in large soil samples

    Energy Technology Data Exchange (ETDEWEB)

    Maxwell, Sherrod L., E-mail: sherrod.maxwell@srs.go [Savannah River Nuclear Solutions, LLC, Building 735-B, Aiken, SC 29808 (United States); Culligan, Brian K.; Noyes, Gary W. [Savannah River Nuclear Solutions, LLC, Building 735-B, Aiken, SC 29808 (United States)

    2011-07-15

    A new rapid method for the determination of {sup 237}Np and Pu isotopes in soil and sediment samples has been developed at the Savannah River Site Environmental Lab (Aiken, SC, USA) that can be used for large soil samples. The new soil method utilizes an acid leaching method, iron/titanium hydroxide precipitation, a lanthanum fluoride soil matrix removal step, and a rapid column separation process with TEVA Resin. The large soil matrix is removed easily and rapidly using these two simple precipitations with high chemical recoveries and effective removal of interferences. Vacuum box technology and rapid flow rates are used to reduce analytical time.

  10. Large-sample neutron activation analysis in mass balance and nutritional studies

    NARCIS (Netherlands)

    van de Wiel, A.; Blaauw, Menno

    2018-01-01

    Low concentrations of elements in food can be measured with various techniques, mostly in small samples (mg). These techniques provide only reliable data when the element is distributed homogeneously in the material to be analysed either naturally or after a homogenisation procedure. When this is

  11. Application of Conventional and K0-Based Internal Monostandard NAA Using Reactor Neutrons for Compositional Analysis of Large Samples

    International Nuclear Information System (INIS)

    Reddy, A.V.R.; Acharya, R.; Swain, K. K.; Pujari, P.K.

    2018-01-01

Large sample neutron activation analysis (LSNAA) work was carried out for samples of coal, uranium ore, stainless steel, ancient and new clay potteries, dross and a clay pottery replica from Peru using low-flux, highly thermalized irradiation sites. Large as well as non-standard-geometry samples (1 g - 0.5 kg) were irradiated using the thermal column (TC) facility of the Apsara reactor as well as the graphite reflector position of the critical facility (CF) at Bhabha Atomic Research Centre, Mumbai. Small (10 - 500 mg) samples were also irradiated at the core position of the Apsara reactor, the pneumatic carrier facility (PCF) of the Dhruva reactor and the pneumatic fast transfer facility (PFTS) of the KAMINI reactor. Irradiation positions were characterized using an indium flux monitor for TC and CF, whereas multiple monitors were used at the other positions. Radioactive assay was carried out using high-resolution gamma-ray spectrometry. The k0-based internal monostandard NAA (IM-NAA) method was used to determine elemental concentration ratios with respect to Na in coal and uranium ore samples, Sc in pottery samples and Fe in stainless steel. In-situ relative detection efficiency for each irradiated sample was obtained using γ rays of the activation products in the required energy range. Representative sample sizes were arrived at for coal and uranium ore from plots of La/Na ratios as a function of sample mass. For the stainless steel sample of SS 304L, absolute concentrations were calculated from concentration ratios by a mass-balance approach, since all the major elements (Fe, Cr, Ni and Mn) were amenable to NAA. Concentration ratios obtained by IM-NAA were used for a provenance study of 30 clay potteries obtained from excavated Buddhist sites of AP, India. The La to Ce concentration ratios were used for preliminary grouping, and concentration ratios of 15 elements with respect to Sc were used in statistical cluster analysis for confirmation of the grouping.
Concentrations of Au and Ag were determined in not so
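The mass-balance step used for the stainless steel sample can be sketched numerically. The ratio values below are illustrative placeholders, not results from the paper; the assumption is that the four major elements account for essentially the whole matrix:

```python
# IM-NAA yields concentration ratios relative to a reference element
# (here Fe); absolute concentrations follow by normalising the ratios
# so the major elements sum to 100 wt.% (hypothetical ratio values).
ratios = {"Fe": 1.00, "Cr": 0.27, "Ni": 0.13, "Mn": 0.02}  # c_x / c_Fe
total = sum(ratios.values())
conc = {el: 100.0 * r / total for el, r in ratios.items()}  # wt.%
assert abs(sum(conc.values()) - 100.0) < 1e-9
assert 70.0 < conc["Fe"] < 71.0  # Fe fraction implied by these ratios
```

The same closure constraint is why the approach works only when every major element is amenable to NAA, as the abstract notes for SS 304L.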

  12. Matrix Sampling of Items in Large-Scale Assessments

    Directory of Open Access Journals (Sweden)

    Ruth A. Childs

    2003-07-01

Matrix sampling of items, that is, division of a set of items into different versions of a test form, is used by several large-scale testing programs. Like other test designs, matrixed designs have both advantages and disadvantages. For example, testing time per student is less than if each student received all the items, but the comparability of student scores may decrease. Also, curriculum coverage is maintained, but reporting of scores becomes more complex. In this paper, matrixed designs are compared with more traditional designs in nine categories of costs: development costs, materials costs, administration costs, educational costs, scoring costs, reliability costs, comparability costs, validity costs, and reporting costs. In choosing among test designs, a testing program should examine the costs in light of its mandate(s), the content of the tests, and the financial resources available, among other considerations.
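As a toy illustration of a matrixed design, the item pool can be dealt round-robin into forms. The item IDs are hypothetical; real testing programs also balance content coverage and difficulty across forms:

```python
def matrix_forms(items, n_forms):
    """Deal a pool of items round-robin into n_forms test booklets,
    so each student sees only len(items) / n_forms items."""
    return [items[i::n_forms] for i in range(n_forms)]

pool = [f"item{i:02d}" for i in range(12)]
forms = matrix_forms(pool, 3)
assert all(len(f) == 4 for f in forms)         # shorter testing time per student
assert sorted(sum(forms, [])) == sorted(pool)  # full pool still covered
```

The two assertions capture the trade-off the abstract describes: each form is shorter, while the pool as a whole (and thus curriculum coverage) is preserved across forms.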

  13. Random sampling of elementary flux modes in large-scale metabolic networks.

    Science.gov (United States)

    Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel

    2012-09-15

The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well-distributed sample that is representative of the complete set of EMs should be suitable for most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. Contact: dmachado@deb.uminho.pt. Supplementary data are available at Bioinformatics online.
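The unbiased filtering step described above, where every new candidate mode has the same probability of being retained, can be sketched as follows. This is a simplified stand-in, not the emsampler implementation; the modes are toy reaction sets:

```python
import random

def filter_candidates(candidates, k, rng=random):
    """Keep at most k candidate modes, each equally likely to be
    retained, so the surviving subset is an unbiased sample."""
    if len(candidates) <= k:
        return list(candidates)
    return rng.sample(candidates, k)

# Toy iteration: pairwise combinations of modes, then subsampling
# to cap the combinatorial growth in the number of modes.
modes = [{"r1"}, {"r2"}, {"r3"}, {"r4"}]
new = [a | b for i, a in enumerate(modes) for b in modes[i + 1:]]
kept = filter_candidates(new, k=3)
assert len(kept) == 3 and all(m in new for m in kept)
```

Capping each iteration at k survivors keeps memory bounded while, because every candidate is equally likely to survive, the final collection remains a random sample of the full EM set.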

  14. $b$-Tagging and Large Radius Jet Modelling in a $g\\rightarrow b\\bar{b}$ rich sample at ATLAS

    CERN Document Server

    Jiang, Zihao; The ATLAS collaboration

    2016-01-01

    Studies of b-tagging performance and jet properties in double b-tagged, large radius jets from sqrt(s)=8 TeV pp collisions recorded by the ATLAS detector at the LHC are presented. The double b-tag requirement yields a sample rich in high pT jets originating from the g->bb process. Using this sample, the performance of b-tagging and modelling of jet substructure variables at small b-quark angular separation is probed.

  15. A fast learning method for large scale and multi-class samples of SVM

    Science.gov (United States)

    Fan, Yu; Guo, Huiming

    2017-06-01

A fast learning method for multi-class support vector machine (SVM) classification, based on a binary tree, is presented to address the low training efficiency of SVMs on large-scale multi-class sample sets. A bottom-up method builds the binary-tree hierarchy, and, following this hierarchy, a sub-classifier learns from the samples at each node. During learning, several class clusters are generated by a first clustering of the training samples. Central points are extracted from clusters that contain only one class of samples. For clusters that contain two classes of samples, cluster counts for the positive and negative samples are set according to their degree of mixture, a secondary clustering is performed, and central points are extracted from the resulting sub-clusters. Sub-classifiers are then trained on the reduced sample sets formed by the extracted central points. Simulation experiments show that this fast learning method, based on multi-level clustering, maintains high classification accuracy while greatly reducing the number of training samples and effectively improving learning efficiency.
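The core idea, replacing each class by a handful of cluster centres before SVM training, can be sketched with scikit-learn. This is a minimal two-class sketch under stated assumptions: the paper's binary-tree hierarchy and mixture-degree heuristics are not reproduced, and the data are synthetic:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

# Two well-separated synthetic classes, 200 points each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

# Reduce each class to 5 cluster centres (the "reduced samples").
centres, labels = [], []
for cls in (0, 1):
    km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X[y == cls])
    centres.append(km.cluster_centers_)
    labels += [cls] * 5

# Train the SVM on 10 centres instead of 400 points.
clf = SVC(kernel="rbf").fit(np.vstack(centres), labels)
print(clf.score(X, y))  # accuracy on the full 400-point set
```

With well-separated classes the classifier trained on 10 centres scores essentially as well as one trained on all 400 points, which is the efficiency gain the abstract claims; heavily overlapping classes are what the paper's mixture-degree secondary clustering is meant to handle.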

  16. The ESO Diffuse Interstellar Bands Large Exploration Survey (EDIBLES) . I. Project description, survey sample, and quality assessment

    Science.gov (United States)

    Cox, Nick L. J.; Cami, Jan; Farhang, Amin; Smoker, Jonathan; Monreal-Ibero, Ana; Lallement, Rosine; Sarre, Peter J.; Marshall, Charlotte C. M.; Smith, Keith T.; Evans, Christopher J.; Royer, Pierre; Linnartz, Harold; Cordiner, Martin A.; Joblin, Christine; van Loon, Jacco Th.; Foing, Bernard H.; Bhatt, Neil H.; Bron, Emeric; Elyajouri, Meriem; de Koter, Alex; Ehrenfreund, Pascale; Javadi, Atefeh; Kaper, Lex; Khosroshadi, Habib G.; Laverick, Mike; Le Petit, Franck; Mulas, Giacomo; Roueff, Evelyne; Salama, Farid; Spaans, Marco

    2017-10-01

The carriers of the diffuse interstellar bands (DIBs) are largely unidentified molecules ubiquitously present in the interstellar medium (ISM). After decades of study, two strong and possibly three weak near-infrared DIBs have recently been attributed to the C60+ fullerene based on observational and laboratory measurements. There is great promise for the identification of the over 400 other known DIBs, as this result could provide chemical hints towards other possible carriers. In an effort to systematically study the properties of the DIB carriers, we have initiated a new large-scale observational survey: the ESO Diffuse Interstellar Bands Large Exploration Survey (EDIBLES). The main objective is to build on and extend existing DIB surveys to make a major step forward in characterising the physical and chemical conditions for a statistically significant sample of interstellar lines-of-sight, with the goal to reverse-engineer key molecular properties of the DIB carriers. EDIBLES is a filler Large Programme using the Ultraviolet and Visual Echelle Spectrograph at the Very Large Telescope at Paranal, Chile. It is designed to provide an observationally unbiased view of the presence and behaviour of the DIBs towards early-spectral-type stars whose lines-of-sight probe the diffuse-to-translucent ISM. Such a complete dataset will provide a deep census of the atomic and molecular content, physical conditions, chemical abundances and elemental depletion levels for each sightline. Achieving these goals requires a homogeneous set of high-quality data in terms of resolution (R ≈ 70 000-100 000), sensitivity (S/N up to 1000 per resolution element), and spectral coverage (305-1042 nm), as well as a large sample size (100+ sightlines). In this first paper the goals, objectives and methodology of the EDIBLES programme are described and an initial assessment of the data is provided.

  17. Development of digital gamma-activation autoradiography for analysis of samples of large area

    International Nuclear Information System (INIS)

    Kolotov, V.P.; Grozdov, D.S.; Dogadkin, N.N.; Korobkov, V.I.

    2011-01-01

Gamma-activation autoradiography is a prospective method for screening detection of inclusions of precious metals in geochemical samples. Its characteristics allow analysis of thin sections of large size (tens of cm²), which favourably distinguishes it among the other methods for local analysis. At the same time, the activating field of the accelerator bremsstrahlung displays a sharp intensity decrease with distance along the axis. A method for activation dose "equalization" during irradiation of large-size thin sections has been developed. The method is based on the usage of a hardware-software system. This includes a device for moving the sample during the irradiation, a program for computer modelling of the acquired activating dose for the chosen kinematics of the sample movement, and a program for pixel-by-pixel correction of the autoradiographic images. For detection of inclusions of precious metals, a method for analysis of the acquired dose dynamics during sample decay has been developed. The method is based on software processing, pixel by pixel, of a time series of coaxial autoradiographic images and generation of secondary meta-images allowing interpretation, based on half-lives, regarding the presence of inclusions of interest. The method is tested for analysis of copper-nickel polymetallic ores. The developed solutions considerably expand the possible applications of digital gamma-activation autoradiography. (orig.)

  18. Development of digital gamma-activation autoradiography for analysis of samples of large area

    Energy Technology Data Exchange (ETDEWEB)

    Kolotov, V.P.; Grozdov, D.S.; Dogadkin, N.N.; Korobkov, V.I. [Russian Academy of Sciences, Moscow (Russian Federation). Vernadsky Inst. of Geochemistry and Analytical Chemistry

    2011-07-01

Gamma-activation autoradiography is a prospective method for screening detection of inclusions of precious metals in geochemical samples. Its characteristics allow analysis of thin sections of large size (tens of cm²), which favourably distinguishes it among the other methods for local analysis. At the same time, the activating field of the accelerator bremsstrahlung displays a sharp intensity decrease with distance along the axis. A method for activation dose "equalization" during irradiation of large-size thin sections has been developed. The method is based on the usage of a hardware-software system. This includes a device for moving the sample during the irradiation, a program for computer modelling of the acquired activating dose for the chosen kinematics of the sample movement, and a program for pixel-by-pixel correction of the autoradiographic images. For detection of inclusions of precious metals, a method for analysis of the acquired dose dynamics during sample decay has been developed. The method is based on software processing, pixel by pixel, of a time series of coaxial autoradiographic images and generation of secondary meta-images allowing interpretation, based on half-lives, regarding the presence of inclusions of interest. The method is tested for analysis of copper-nickel polymetallic ores. The developed solutions considerably expand the possible applications of digital gamma-activation autoradiography. (orig.)

  19. Sample-path large deviations in credit risk

    NARCIS (Netherlands)

    Leijdekker, V.J.G.; Mandjes, M.R.H.; Spreij, P.J.C.

    2011-01-01

    The event of large losses plays an important role in credit risk. As these large losses are typically rare, and portfolios usually consist of a large number of positions, large deviation theory is the natural tool to analyze the tail asymptotics of the probabilities involved. We first derive a

  20. A large-scale cryoelectronic system for biological sample banking

    Science.gov (United States)

    Shirley, Stephen G.; Durst, Christopher H. P.; Fuchs, Christian C.; Zimmermann, Heiko; Ihmig, Frank R.

    2009-11-01

We describe a polymorphic electronic infrastructure for managing biological samples stored over liquid nitrogen. As part of this system we have developed new cryocontainers and carrier plates with attached Flash memory chips, so that each sample carries a redundant and portable set of data. Our experimental investigations show that basic Flash operation and endurance are adequate for the application down to liquid nitrogen temperatures. This identification technology can provide the best sample identification, documentation and tracking, adding value to each sample. The first application of the system is in a worldwide collaborative research effort towards the production of an AIDS vaccine. The functionality and versatility of the system can lead to a substantial optimization of sample and data exchange for global clinical studies.

  1. Superwind Outflows in Seyfert Galaxies? : Large-Scale Radio Maps of an Edge-On Sample

    Science.gov (United States)

    Colbert, E.; Gallimore, J.; Baum, S.; O'Dea, C.

    1995-03-01

    Large-scale galactic winds (superwinds) are commonly found flowing out of the nuclear region of ultraluminous infrared and powerful starburst galaxies. Stellar winds and supernovae from the nuclear starburst provide the energy to drive these superwinds. The outflowing gas escapes along the rotation axis, sweeping up and shock-heating clouds in the halo, which produces optical line emission, radio synchrotron emission, and X-rays. These features can most easily be studied in edge-on systems, so that the wind emission is not confused by that from the disk. We have begun a systematic search for superwind outflows in Seyfert galaxies. In an earlier optical emission-line survey, we found extended minor axis emission and/or double-peaked emission line profiles in >~30% of the sample objects. We present here large-scale (6cm VLA C-config) radio maps of 11 edge-on Seyfert galaxies, selected (without bias) from a distance-limited sample of 23 edge-on Seyferts. These data have been used to estimate the frequency of occurrence of superwinds. Preliminary results indicate that four (36%) of the 11 objects observed and six (26%) of the 23 objects in the distance-limited sample have extended radio emission oriented perpendicular to the galaxy disk. This emission may be produced by a galactic wind blowing out of the disk. Two (NGC 2992 and NGC 5506) of the nine objects for which we have both radio and optical data show good evidence for a galactic wind in both datasets. We suggest that galactic winds occur in >~30% of all Seyferts. A goal of this work is to find a diagnostic that can be used to distinguish between large-scale outflows that are driven by starbursts and those that are driven by an AGN. The presence of starburst-driven superwinds in Seyferts, if established, would have important implications for the connection between starburst galaxies and AGN.

  2. Sampling of finite elements for sparse recovery in large scale 3D electrical impedance tomography

    International Nuclear Information System (INIS)

    Javaherian, Ashkan; Moeller, Knut; Soleimani, Manuchehr

    2015-01-01

    This study proposes a method to improve the performance of sparse recovery inverse solvers in 3D electrical impedance tomography (3D EIT), especially when the volume under study contains small-sized inclusions, e.g. in 3D imaging of breast tumours. Initially, a quadratic regularized inverse solver is applied in a fast manner with a stopping threshold much greater than the optimum. Assuming a fixed level of sparsity for the conductivity field, finite elements are then sampled by applying a compressive sensing (CS) algorithm to the rough, blurred estimate previously made by the quadratic solver. Finally, a sparse inverse solver is applied solely to the sampled finite elements, with the solution of the CS step as its initial guess. The results show the great potential of the proposed CS-based sampling for improving the accuracy of sparse solutions to large-size 3D EIT problems. (paper)
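    The three-stage pipeline (fast quadratic solve, element sampling at an assumed sparsity level, then a sparse solve restricted to the sampled elements) can be illustrated on a toy linear inverse problem. This is a hedged numpy sketch, not the authors' EIT code: the random matrix J stands in for the EIT Jacobian and simple hard thresholding stands in for the CS step:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy stand-in for the EIT problem: y = J x + noise, with sparse x
m, n, k = 60, 120, 5
J = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)
y = J @ x_true + 0.01 * rng.standard_normal(m)

# stage 1: fast quadratic (Tikhonov-regularized) solve -> blurred estimate
alpha = 1.0
x_rough = np.linalg.solve(J.T @ J + alpha * np.eye(n), J.T @ y)

# stage 2: sample elements, assuming a fixed sparsity level (2k kept)
support = np.argsort(np.abs(x_rough))[-2 * k:]

# stage 3: solve only on the sampled elements
x_hat = np.zeros(n)
x_hat[support] = np.linalg.lstsq(J[:, support], y, rcond=None)[0]
```

    With enough measurements relative to the assumed sparsity level, the restricted solve recovers the few "inclusion" elements much more sharply than the blurred Tikhonov estimate that seeded it.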

  3. Determination of 129I in large soil samples after alkaline wet disintegration

    International Nuclear Information System (INIS)

    Bunzl, K.; Kracke, W.

    1992-01-01

    Large soil samples (up to 500 g) can conveniently be disintegrated with hydrogen peroxide in a utility tank under alkaline conditions in order subsequently to determine 129 I by neutron activation analysis. Interfering elements such as Br are removed before neutron irradiation to reduce the radiation exposure of the personnel. The precision of the method was checked by determining 129 I also by the combustion method. (orig.)

  4. Lack of association between digit ratio (2D:4D) and assertiveness: replication in a large sample.

    Science.gov (United States)

    Voracek, Martin

    2009-12-01

    Findings regarding within-sex associations of digit ratio (2D:4D), a putative pointer to long-lasting effects of prenatal androgen action, and sexually differentiated personality traits have generally been inconsistent or unreplicable, suggesting that effects in this domain, if any, are likely small. In contrast to evidence from Wilson's important 1983 study, a forerunner of modern 2D:4D research, two recent studies in 2005 and 2008 by Freeman, et al. and Hampson, et al. showed assertiveness, a presumably male-typed personality trait, was not associated with 2D:4D; however, these studies were clearly statistically underpowered. Hence this study examined this question anew, based on a large sample of 491 men and 627 women. Assertiveness was only modestly sexually differentiated, favoring men, and a positive correlate of age and education and a negative correlate of weight and Body Mass Index among women, but not men. Replicating the two prior studies, 2D:4D was throughout unrelated to assertiveness scores. This null finding was preserved with controls for correlates of assertiveness, also in nonparametric analysis and with tests for curvilinear relations. Discussed are implications of this specific null finding, now replicated in a large sample, for studies of 2D:4D and personality in general and novel research approaches to proceed in this field.

  5. Water pollution screening by large-volume injection of aqueous samples and application to GC/MS analysis of a river Elbe sample

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, S.; Efer, J.; Engewald, W. [Leipzig Univ. (Germany). Inst. fuer Analytische Chemie

    1997-03-01

    The large-volume sampling of aqueous samples in a programmed temperature vaporizer (PTV) injector was used successfully for the target and non-target analysis of real samples. In this still rarely applied method, e.g. 1 mL of the water sample to be analyzed is slowly injected directly into the PTV. The vaporized water is eliminated through the split vent. The analytes are concentrated onto an adsorbent inside the insert and subsequently thermally desorbed. The capability of the method is demonstrated using a sample from the river Elbe. By coupling this method with a mass selective detector in SIM mode (target analysis), pollutants can be determined at concentrations down to 0.01 {mu}g/L. Furthermore, PTV enrichment is an effective and time-saving method for non-target analysis in SCAN mode. In a sample from the river Elbe over 20 compounds were identified. (orig.) With 3 figs., 2 tabs.

  6. Oxalic acid as a liquid dosimeter for absorbed dose measurement in large-scale of sample solution

    International Nuclear Information System (INIS)

    Biramontri, S.; Dechburam, S.; Vitittheeranon, A.; Wanitsuksombut, W.; Thongmitr, W.

    1999-01-01

    This study shows the feasibility of applying a 2.5 mM aqueous oxalic acid solution, read out spectrophotometrically, for absorbed dose measurement from 1 to 10 kGy in large-scale sample solutions. The optimum wavelength of 220 nm was selected. The stability of the dosimeter response over 25 days was better than 1% for unirradiated and ±2% for irradiated solutions. The reproducibility within a batch was within 1%. The variation of the dosimeter response between batches was also studied. (author)

  7. Association between time perspective and organic food consumption in a large sample of adults.

    Science.gov (United States)

    Bénard, Marc; Baudry, Julia; Méjean, Caroline; Lairon, Denis; Giudici, Kelly Virecoulon; Etilé, Fabrice; Reach, Gérard; Hercberg, Serge; Kesse-Guyot, Emmanuelle; Péneau, Sandrine

    2018-01-05

    Organic food intake has risen in many countries during the past decades. Even though motivations associated with such choice have been studied, psychological traits preceding these motivations have rarely been explored. Consideration of future consequences (CFC) represents the extent to which individuals consider future versus immediate consequences of their current behaviors. Consequently, a future oriented personality may be an important characteristic of organic food consumers. The objective was to analyze the association between CFC and organic food consumption in a large sample of the adult general population. In 2014, a sample of 27,634 participants from the NutriNet-Santé cohort study completed the CFC questionnaire and an Organic-Food Frequency questionnaire. For each food group (17 groups), non-organic food consumers were compared to organic food consumers across quartiles of the CFC using multiple logistic regressions. Moreover, adjusted means of proportions of organic food intakes out of total food intakes were compared between quartiles of the CFC. Analyses were adjusted for socio-demographic, lifestyle and dietary characteristics. Participants with higher CFC were more likely to consume organic food (OR quartile 4 (Q4) vs. Q1 = 1.88, 95% CI: 1.62, 2.20). Overall, future oriented participants were more likely to consume 14 food groups. The strongest associations were observed for starchy refined foods (OR = 1.78, 95% CI: 1.63, 1.94), and fruits and vegetables (OR = 1.74, 95% CI: 1.58, 1.92). The contribution of organic food intake out of total food intake was 33% higher in the Q4 compared to Q1. More precisely, the contribution of organic food consumed was higher in the Q4 for 16 food groups. The highest relative differences between Q4 and Q1 were observed for starchy refined foods (22%) and non-alcoholic beverages (21%). Seafood was the only food group without a significant difference. This study provides information on the personality of

  8. A Survey for Spectroscopic Binaries in a Large Sample of G Dwarfs

    Science.gov (United States)

    Udry, S.; Mayor, M.; Latham, D. W.; Stefanik, R. P.; Torres, G.; Mazeh, T.; Goldberg, D.; Andersen, J.; Nordstrom, B.

    For more than 5 years now, the radial velocities for a large sample of G dwarfs (3,347 stars) have been monitored in order to obtain an unequaled set of orbital parameters for solar-type stars (~400 orbits, up to now). This survey provides a considerable improvement on the classical systematic study by Duquennoy and Mayor (1991; DM91). The observational part of the survey has been carried out in the context of a collaboration between the Geneva Observatory on the two coravel spectrometers for the southern sky and CfA at Oakridge and Whipple Observatories for the northern sky. As a first glance at these new results, we will address in this contribution a special aspect of the orbital eccentricity distribution, namely the disappearance of the void observed in DM91 for quasi-circular orbits with periods larger than 10 days.

  9. On Matrix Sampling and Imputation of Context Questionnaires with Implications for the Generation of Plausible Values in Large-Scale Assessments

    Science.gov (United States)

    Kaplan, David; Su, Dan

    2016-01-01

    This article presents findings on the consequences of matrix sampling of context questionnaires for the generation of plausible values in large-scale assessments. Three studies are conducted. Study 1 uses data from PISA 2012 to examine several different forms of missing data imputation within the chained equations framework: predictive mean…
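    A minimal sketch of imputation within the chained-equations framework mentioned above (predictive-mean fills only; real implementations such as MICE add posterior draws and per-variable models). The two-column "questionnaire" data and all names are invented for illustration:

```python
import numpy as np

def chained_mean_imputation(X, n_iter=10):
    """Hedged sketch of chained-equations imputation with
    predictive-mean fills, assuming X is a 2-D float array with
    np.nan marking the matrix-sampled (planned-missing) entries."""
    X = np.asarray(X, dtype=float).copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])   # initial fill
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            rows = miss[:, j]
            if not rows.any():
                continue
            others = np.delete(np.arange(X.shape[1]), j)
            A = np.column_stack([np.ones(len(X)), X[:, others]])
            # regress column j on the others over its observed rows
            beta, *_ = np.linalg.lstsq(A[~rows], X[~rows, j], rcond=None)
            X[rows, j] = A[rows] @ beta               # predictive mean
    return X

# toy: two correlated "questionnaire scales", one matrix-sampled
rng = np.random.default_rng(1)
z = rng.standard_normal(200)
X = np.column_stack([z + 0.1 * rng.standard_normal(200),
                     2.0 * z + 0.1 * rng.standard_normal(200)])
X[:50, 1] = np.nan                                    # planned missingness
X_imp = chained_mean_imputation(X)
```

    Plausible-value generation would add a stochastic draw around each predictive mean; this sketch keeps only the deterministic core of the chained-equations loop.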

  10. SyPRID sampler: A large-volume, high-resolution, autonomous, deep-ocean precision plankton sampling system

    Science.gov (United States)

    Billings, Andrew; Kaiser, Carl; Young, Craig M.; Hiebert, Laurel S.; Cole, Eli; Wagner, Jamie K. S.; Van Dover, Cindy Lee

    2017-03-01

    The current standard for large-volume (thousands of cubic meters) zooplankton sampling in the deep sea is the MOCNESS, a system of multiple opening-closing nets, typically lowered to within 50 m of the seabed and towed obliquely to the surface to obtain low-spatial-resolution samples that integrate across tens of meters of water depth. The SyPRID (Sentry Precision Robotic Impeller Driven) sampler is an innovative, deep-rated (6000 m) plankton sampler that partners with the Sentry Autonomous Underwater Vehicle (AUV) to obtain paired, large-volume plankton samples at specified depths and survey lines to within 1.5 m of the seabed and with simultaneous collection of sensor data. SyPRID uses a perforated Ultra-High-Molecular-Weight (UHMW) plastic tube to support a fine mesh net within an outer carbon composite tube (tube-within-a-tube design), with an axial flow pump located aft of the capture filter. The pump facilitates flow through the system and reduces or possibly eliminates the bow wave at the mouth opening. The cod end, a hollow truncated cone, is also made of UHMW plastic and includes a collection volume designed to provide an area where zooplankton can collect, out of the high flow region. SyPRID attaches as a saddle-pack to the Sentry vehicle. Sentry itself is configured with a flight control system that enables autonomous survey paths to low altitudes. In its verification deployment at the Blake Ridge Seep (2160 m) on the US Atlantic Margin, SyPRID was operated for 6 h at an altitude of 5 m. It recovered plankton samples, including delicate living larvae, from the near-bottom stratum that is seldom sampled by a typical MOCNESS tow. The prototype SyPRID and its next generations will enable studies of plankton or other particulate distributions associated with localized physico-chemical strata in the water column or above patchy habitats on the seafloor.

  11. The UK Biobank sample handling and storage validation studies.

    Science.gov (United States)

    Peakman, Tim C; Elliott, Paul

    2008-04-01

    Background and aims: UK Biobank is a large prospective study in the United Kingdom to investigate the role of genetic factors, environmental exposures and lifestyle in the causes of major diseases of middle and late age. It involves the collection of blood and urine from 500 000 individuals aged between 40 and 69 years. How the samples are collected, processed and stored will have a major impact on the future scientific usefulness of the UK Biobank resource. A series of validation studies was recommended to test the robustness of the draft sample handling and storage protocol. Samples of blood and urine were collected from 40 healthy volunteers and either processed immediately according to the protocol or maintained at specified temperatures (4 degrees C for all tubes with the exception of vacutainers containing acid citrate dextrose, which were maintained at 18 degrees C) for 12, 24 or 36 h prior to processing. A further sample was maintained for 24 h at 4 degrees C, processed, and the aliquots frozen at -80 degrees C for 20 days and then thawed under controlled conditions. The stability of the samples was compared for the different times in a wide variety of assays. The samples maintained at 4 degrees C were stable for at least 24 h after collection for a wide range of assays. Small but significant changes were observed in metabonomic studies in samples maintained at 4 degrees C for 36 h. There was no degradation of the samples for a range of biochemical assays after short-term freezing and thawing under controlled conditions. Whole blood maintained at 18 degrees C for 24 h in vacutainers containing acid citrate dextrose is suitable for viral immortalization techniques. The validation studies reported in this supplement provide justification for the sample handling and storage procedures adopted in the UK Biobank project.

  12. Heritability of psoriasis in a large twin sample

    DEFF Research Database (Denmark)

    Lønnberg, Ann Sophie; Skov, Liselotte; Skytthe, A

    2013-01-01

    AIM: To study the concordance of psoriasis in a population-based twin sample. METHODS: Data on psoriasis in 10,725 twin pairs, 20-71 years of age, from the Danish Twin Registry was collected via a questionnaire survey. The concordance and heritability of psoriasis were estimated. RESULTS: In total...

  13. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    Science.gov (United States)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
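    For orientation, the standard multiple time step scheme that these resonance-free methods improve upon (r-RESPA) can be sketched for a toy one-dimensional system: the slow force is applied as outer half-kicks while the fast force is integrated with smaller inner velocity-Verlet steps. This is a hedged illustration, not the isokinetic Nosé-Hoover machinery of the paper:

```python
def respa_step(x, v, dt, n_inner, f_fast, f_slow, m=1.0):
    """One step of the standard r-RESPA multiple time step scheme:
    the slow force enters as outer half-kicks of size dt, while the
    fast force is integrated with n_inner velocity-Verlet sub-steps
    of size dt / n_inner."""
    v += 0.5 * dt * f_slow(x) / m          # outer half-kick (slow force)
    h = dt / n_inner
    for _ in range(n_inner):               # inner loop: fast force only
        v += 0.5 * h * f_fast(x) / m
        x += h * v
        v += 0.5 * h * f_fast(x) / m
    v += 0.5 * dt * f_slow(x) / m          # outer half-kick (slow force)
    return x, v

# toy system: stiff spring (fast) plus a weak constant pull (slow)
f_fast = lambda x: -100.0 * x
f_slow = lambda x: 0.5
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = respa_step(x, v, dt=0.05, n_inner=10, f_fast=f_fast, f_slow=f_slow)
```

    In this toy setting the outer step (0.05) stays well below half the fast period (about 0.31), the threshold at which the resonance artifacts discussed above would set in.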

  14. A large sample of Kohonen selected E+A (post-starburst) galaxies from the Sloan Digital Sky Survey

    Science.gov (United States)

    Meusinger, H.; Brünecke, J.; Schalldach, P.; in der Au, A.

    2017-01-01

    Context. The galaxy population in the contemporary Universe is characterised by a clear bimodality, blue galaxies with significant ongoing star formation and red galaxies with only a little. The migration between the blue and the red cloud of galaxies is an issue of active research. Post starburst (PSB) galaxies are thought to be observed in the short-lived transition phase. Aims: We aim to create a large sample of local PSB galaxies from the Sloan Digital Sky Survey (SDSS) to study their characteristic properties, particularly morphological features indicative of gravitational distortions and indications for active galactic nuclei (AGNs). Another aim is to present a tool set for an efficient search in a large database of SDSS spectra based on Kohonen self-organising maps (SOMs). Methods: We computed a huge Kohonen SOM for ∼10^6 spectra from SDSS data release 7. The SOM is made fully available, in combination with an interactive user interface, for the astronomical community. We selected a large sample of PSB galaxies taking advantage of the clustering behaviour of the SOM. The morphologies of both PSB galaxies and randomly selected galaxies from a comparison sample in SDSS Stripe 82 (S82) were inspected on deep co-added SDSS images to search for indications of gravitational distortions. We used the Portsmouth galaxy property computations to study the evolutionary stage of the PSB galaxies and archival multi-wavelength data to search for hidden AGNs. Results: We compiled a catalogue of 2665 PSB galaxies with redshifts z 3 Å and z cloud, in agreement with the idea that PSB galaxies represent the transitioning phase between actively and passively evolving galaxies. The relative frequency of distorted PSB galaxies is at least 57% for EW(Hδ) > 5 Å, significantly higher than in the comparison sample. The search for AGNs based on conventional selection criteria in the radio and MIR results in a low AGN fraction of ∼2-3%. We confirm an MIR excess in the mean SED of

  15. Detecting the Land-Cover Changes Induced by Large-Physical Disturbances Using Landscape Metrics, Spatial Sampling, Simulation and Spatial Analysis

    Directory of Open Access Journals (Sweden)

    Hone-Jay Chu

    2009-08-01

    The objectives of the study are to integrate conditional Latin Hypercube Sampling (cLHS), sequential Gaussian simulation (SGS) and spatial analysis of remotely sensed images to monitor the effects of large chronological disturbances on the spatial characteristics of landscape changes, including spatial heterogeneity and variability. The multiple NDVI images demonstrate that the spatial patterns of disturbed landscapes were successfully delineated by spatial analyses such as the variogram, Moran's I and landscape metrics in the study area. The hybrid method delineates the spatial patterns and spatial variability of landscapes caused by these large disturbances. The cLHS approach is applied to select samples from Normalized Difference Vegetation Index (NDVI) images derived from SPOT HRV images of the Chenyulan watershed of Taiwan, and SGS with sufficient samples is then used to generate maps of the NDVI images. Finally, the simulated NDVI maps are verified using indices such as the correlation coefficient and the mean absolute error (MAE). The statistics and spatial structures of the multiple NDVI images thus present a very robust behavior, which advocates the use of the index for the quantification of landscape spatial patterns and land cover change. In addition, the results, transferred by Open Geospatial techniques, can be accessed from web-based and end-user applications of the watershed management.

  16. CORRELATION ANALYSIS OF A LARGE SAMPLE OF NARROW-LINE SEYFERT 1 GALAXIES: LINKING CENTRAL ENGINE AND HOST PROPERTIES

    International Nuclear Information System (INIS)

    Xu Dawei; Komossa, S.; Wang Jing; Yuan Weimin; Zhou Hongyan; Lu Honglin; Li Cheng; Grupe, Dirk

    2012-01-01

    We present a statistical study of a large, homogeneously analyzed sample of narrow-line Seyfert 1 (NLS1) galaxies, accompanied by a comparison sample of broad-line Seyfert 1 (BLS1) galaxies. Optical emission-line and continuum properties are subjected to correlation analyses, in order to identify the main drivers of the correlation space of active galactic nuclei (AGNs), and of NLS1 galaxies in particular. For the first time, we have established the density of the narrow-line region as a key parameter in Eigenvector 1 space, as important as the Eddington ratio L/L Edd . This is important because it links the properties of the central engine with the properties of the host galaxy, i.e., the interstellar medium (ISM). We also confirm previously found correlations involving the line width of Hβ and the strength of the Fe II and [O III] λ5007 emission lines, and we confirm the important role played by L/L Edd in driving the properties of NLS1 galaxies. A spatial correlation analysis shows that large-scale environments of the BLS1 and NLS1 galaxies of our sample are similar. If mergers are rare in our sample, accretion-driven winds, on the one hand, or bar-driven inflows, on the other hand, may account for the strong dependence of Eigenvector 1 on ISM density.

  17. Large-volume constant-concentration sampling technique coupling with surface-enhanced Raman spectroscopy for rapid on-site gas analysis.

    Science.gov (United States)

    Zhang, Zhuomin; Zhan, Yisen; Huang, Yichun; Li, Gongke

    2017-08-05

    In this work, a portable large-volume constant-concentration (LVCC) sampling technique coupled with surface-enhanced Raman spectroscopy (SERS) was developed for rapid on-site gas analysis based on suitable derivatization methods. The LVCC sampling technique mainly consists of a specially designed sampling cell, including a rigid sample container and a flexible sampling bag, and an absorption-derivatization module with a portable pump and a gas flowmeter. The LVCC sampling technique allows a large, alterable and well-controlled sampling volume, which keeps the concentration of the gas target in the headspace phase constant during the entire sampling process and makes the sampling result more representative. Moreover, absorption and derivatization of the gas target during the LVCC sampling process are efficiently merged in one step, using a bromine-thiourea strategy for ethylene and an OPA-NH4+ strategy for SO2, which makes the LVCC sampling technique conveniently adaptable to subsequent SERS analysis. Finally, a new LVCC sampling-SERS method was developed and successfully applied to the rapid analysis of trace ethylene and SO2 from fruits. Trace ethylene and SO2 from real fruit samples could be accurately quantified by this method, and the concentration fluctuations of ethylene and SO2 during the entire LVCC sampling process were shown by SERS to be minor. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. A confirmatory holding time study for purgeable VOCs in water samples

    International Nuclear Information System (INIS)

    West, O.R.; Bayne, C.K.; Siegrist, R.L.; Holden, W.H.; Bottrell, D.W.

    1996-01-01

    Analyte stability during pre-analytical storage is essential to the accurate quantification of contaminants in environmental samples. This is particularly true for volatile organic compounds (VOCs), which can easily volatilize and/or degrade during sample storage. Recognizing this, regulatory agencies require that water samples be collected in vials without headspace and stored at 4 degrees C, and that analyses be conducted within 14 days, even if samples are acid-preserved. Since the selection of a 14-day holding time was largely arbitrary, the appropriateness of this requirement must be re-evaluated. The goal of the study described here was to provide regulatory agencies with the necessary data to extend the maximum holding time for properly preserved VOC water samples to 28 days

  19. Prevalence and correlates of problematic smartphone use in a large random sample of Chinese undergraduates.

    Science.gov (United States)

    Long, Jiang; Liu, Tie-Qiao; Liao, Yan-Hui; Qi, Chang; He, Hao-Yu; Chen, Shu-Bao; Billieux, Joël

    2016-11-17

    Smartphones are becoming a daily necessity for most undergraduates in Mainland China. Because the present scenario of problematic smartphone use (PSU) is largely unexplored, in the current study we aimed to estimate the prevalence of PSU and to screen suitable predictors for PSU among Chinese undergraduates in the framework of the stress-coping theory. A sample of 1062 undergraduate smartphone users was recruited by means of the stratified cluster random sampling strategy between April and May 2015. The Problematic Cellular Phone Use Questionnaire was used to identify PSU. We evaluated five candidate risk factors for PSU by using logistic regression analysis while controlling for demographic characteristics and specific features of smartphone use. The prevalence of PSU among Chinese undergraduates was estimated to be 21.3%. The risk factors for PSU were majoring in the humanities, high monthly income from the family (≥1500 RMB), serious emotional symptoms, high perceived stress, and perfectionism-related factors (high doubts about actions, high parental expectations). PSU among undergraduates appears to be ubiquitous and thus constitutes a public health issue in Mainland China. Although further longitudinal studies are required to test whether PSU is a transient phenomenon or a chronic and progressive condition, our study successfully identified socio-demographic and psychological risk factors for PSU. These results, obtained from a random and thus representative sample of undergraduates, open up new avenues in terms of prevention and regulation policies.

  20. A Note on the Large Sample Properties of Estimators Based on Generalized Linear Models for Correlated Pseudo-observations

    DEFF Research Database (Denmark)

    Jacobsen, Martin; Martinussen, Torben

    2016-01-01

    Pseudo-values have proven very useful in censored data analysis in complex settings such as multi-state models. They were originally suggested by Andersen et al., Biometrika, 90, 2003, 335, who also proposed estimating standard errors using classical generalized estimating equation results. These results were studied more formally in Graw et al., Lifetime Data Anal., 15, 2009, 241, which derived some key results based on a second-order von Mises expansion. However, results concerning the large sample properties of estimates based on regression models for pseudo-values still seem unclear. In this paper, we study these large sample properties in the simple setting of survival probabilities and show that the estimating function can be written as a U-statistic of second order, giving rise to an additional term that does not vanish asymptotically. We further show that previously advocated standard error

  1. Sample preparation and analysis of large 238PuO2 and ThO2 spheres

    International Nuclear Information System (INIS)

    Wise, R.L.; Selle, J.E.

    1975-01-01

    A program was initiated to determine the density gradient across a large spherical 238 PuO 2 sample produced by vacuum hot pressing. Due to the high thermal output of the ceramic, a thin section was necessary to prevent overheating of the plastic mount. Techniques were developed for cross-sectioning, mounting, grinding, and polishing of the sample. The polished samples were then analyzed on a quantitative image analyzer to determine the density as a function of location across the sphere. The techniques for indexing, analyzing, and reducing the data are described. Typical results obtained on a ThO 2 simulant sphere are given

  2. Self-Esteem Development across the Life Span: A Longitudinal Study with a Large Sample from Germany

    Science.gov (United States)

    Orth, Ulrich; Maes, Jürgen; Schmitt, Manfred

    2015-01-01

    The authors examined the development of self-esteem across the life span. Data came from a German longitudinal study with 3 assessments across 4 years of a sample of 2,509 individuals ages 14 to 89 years. The self-esteem measure used showed strong measurement invariance across assessments and birth cohorts. Latent growth curve analyses indicated…

  3. The impact of sample size and marker selection on the study of haplotype structures

    Directory of Open Access Journals (Sweden)

    Sun Xiao

    2004-03-01

    Several studies of haplotype structures in the human genome in various populations have found that the human chromosomes are structured such that each chromosome can be divided into many blocks, within which there is limited haplotype diversity. In addition, only a few genetic markers in a putative block are needed to capture most of the diversity within a block. There has been no systematic empirical study of the effects of sample size and marker set on the identified block structures and representative marker sets, however. The purpose of this study was to conduct a detailed empirical study to examine such impacts. Towards this goal, we have analysed three representative autosomal regions from a large genome-wide study of haplotypes with samples consisting of African-Americans and samples consisting of Japanese and Chinese individuals. For both populations, we have found that the sample size and marker set have significant impact on the number of blocks and the total number of representative markers identified. The marker set in particular has very strong impacts, and our results indicate that the marker density in the original datasets may not be adequate to allow a meaningful characterisation of haplotype structures. In general, we conclude that we need a relatively large sample size and a very dense marker panel in the study of haplotype structures in human populations.

  4. Sample size for post-marketing safety studies based on historical controls.

    Science.gov (United States)

    Wu, Yu-te; Makuch, Robert W

    2010-08-01

    As part of a drug's entire life cycle, post-marketing studies are an important part of the identification of rare, serious adverse events. Recently, the US Food and Drug Administration (FDA) has begun to implement new post-marketing safety mandates as a consequence of an increased emphasis on safety. The purpose of this research is to provide an exact sample size formula for the proposed hybrid design, based on a two-group cohort study with incorporation of historical external data. An exact sample size formula based on the Poisson distribution is developed, because the detection of rare events is the outcome of interest. The performance of the exact method is compared to its approximate large-sample counterpart. The proposed hybrid design requires a smaller sample size compared to the standard two-group prospective study design. In addition, the exact method reduces the number of subjects required in the treatment group by up to 30% compared to the approximate method for the study scenarios examined. The proposed hybrid design thus retains the advantages and rationale of the two-group design while generally requiring smaller sample sizes. 2010 John Wiley & Sons, Ltd.
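    The flavor of an exact Poisson power calculation for rare events can be sketched in a few lines. The code below is a minimal single-arm illustration (background rate treated as known from historical controls), not the authors' hybrid two-group formula; the rates, step size, and function names are assumptions for the example.

```python
import math

def poisson_sf(k, mu):
    """P(X >= k) for X ~ Poisson(mu), via the complement of the CDF up to k-1."""
    term = math.exp(-mu)  # P(X = 0)
    cdf = 0.0
    for i in range(k):
        cdf += term
        term *= mu / (i + 1)  # recurrence: P(X = i+1) = P(X = i) * mu / (i+1)
    return max(0.0, 1.0 - cdf)

def exact_poisson_sample_size(rate0, rate_ratio, alpha=0.05, power=0.80,
                              n_max=2_000_000, step=1000):
    """Smallest cohort size n (on a grid of width `step`) such that a one-sided
    exact Poisson test of H0: rate = rate0 detects a true rate of
    rate0 * rate_ratio with the requested power. Returns (n, critical_count)."""
    for n in range(step, n_max + 1, step):
        mu0 = n * rate0
        # critical value: smallest c with P(X >= c | mu0) <= alpha
        c = 0
        while poisson_sf(c, mu0) > alpha:
            c += 1
        if poisson_sf(c, mu0 * rate_ratio) >= power:
            return n, c
    raise ValueError("n_max too small for the requested power")
```

For example, detecting a three-fold increase over a background rate of 1 event per 1,000 subjects requires a few thousand subjects at conventional alpha and power; the exact critical count comes out of the same computation.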

  5. Large-volume constant-concentration sampling technique coupling with surface-enhanced Raman spectroscopy for rapid on-site gas analysis

    Science.gov (United States)

    Zhang, Zhuomin; Zhan, Yisen; Huang, Yichun; Li, Gongke

    2017-08-01

    In this work, a portable large-volume constant-concentration (LVCC) sampling technique coupled with surface-enhanced Raman spectroscopy (SERS) was developed for rapid on-site gas analysis based on suitable derivatization methods. The LVCC sampling technique consisted mainly of a specially designed sampling cell, comprising a rigid sample container and a flexible sampling bag, and an absorption-derivatization module with a portable pump and a gas flowmeter. The LVCC sampling technique allowed a large, alterable and well-controlled sampling volume, which kept the concentration of the gas target in the headspace phase constant during the entire sampling process and made the sampling result more representative. Moreover, absorption and derivatization of the gas target during the LVCC sampling process were efficiently merged into one step, using bromine-thiourea and OPA-NH4+ strategies for ethylene and SO2, respectively, which made the LVCC sampling technique convenient to adapt to subsequent SERS analysis. Finally, a new LVCC sampling-SERS method was developed and successfully applied to the rapid analysis of trace ethylene and SO2 from fruits; trace ethylene and SO2 from real fruit samples could be accurately quantified by this method. The concentration fluctuations of ethylene and SO2 during the entire LVCC sampling process were proved to be minor, and recoveries for real samples were achieved in the ranges of 95.0-101% and 97.0-104%, respectively. It is expected that the portable LVCC sampling technique will pave the way for rapid on-site analysis of accurate concentrations of trace gas targets from real samples by SERS.

  6. Effects of large volume injection of aliphatic alcohols as sample diluents on the retention of low hydrophobic solutes in reversed-phase liquid chromatography.

    Science.gov (United States)

    David, Victor; Galaon, Toma; Aboul-Enein, Hassan Y

    2014-01-03

    Recent studies showed that the injection of large volumes of hydrophobic solvents used as sample diluents can be applied in reversed-phase liquid chromatography (RP-LC). This study reports systematic research focused on the influence of a series of aliphatic alcohols (from methanol to 1-octanol) on the retention process in RP-LC when large sample volumes are injected on the column. Several model analytes with low hydrophobic character were studied in the RP-LC process, for mobile phases containing methanol or acetonitrile as organic modifiers in different proportions with the aqueous component. It was found that, starting with 1-butanol, the aliphatic alcohols can be used as sample solvents and injected in high volumes, although they may influence the retention factor and peak shape of the dissolved solutes. The dependence of the retention factor of the studied analytes on the injection volume of these alcohols is linear, with the value decreasing as the sample volume increases. The retention process when injecting up to 200 μL of the higher alcohols also depends on the content of the organic modifier (methanol or acetonitrile) in the mobile phase. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Characterisation of large zooplankton sampled with two different gears during midwinter in Rijpfjorden, Svalbard

    Directory of Open Access Journals (Sweden)

    Błachowiak-Samołyk Katarzyna

    2017-12-01

    Full Text Available During a midwinter cruise north of 80°N to Rijpfjorden, Svalbard, the composition and vertical distribution of the zooplankton community were studied using two different samplers: (1) a vertically hauled multiple plankton sampler (MPS; mouth area 0.25 m2, mesh size 200 μm) and (2) a horizontally towed Methot Isaacs Kidd trawl (MIK; mouth area 3.14 m2, mesh size 1500 μm). Our results revealed substantially higher species diversity (49 taxa) than if a single sampler (MPS: 38 taxa, MIK: 28 taxa) had been used. The youngest stage present (CIII) of Calanus spp. (including C. finmarchicus and C. glacialis) was sampled exclusively by the MPS, and the frequency of CIV copepodites in MPS samples was double that in MIK samples. In contrast, catches of the CV-CVI copepodites of Calanus spp. were substantially higher in the MIK samples (3-fold and 5-fold higher for adult males and females, respectively). The MIK sampling clearly showed that the highest abundances of all three Thysanoessa spp. were in the upper layers, although there was a tendency for the larger-sized euphausiids to occur deeper. Consistent patterns in the vertical distributions of the large zooplankters (e.g. ctenophores, euphausiids) collected by the MPS and MIK samplers provided more complete data on their abundances and sizes than would have been obtained by a single net. Possible mechanisms contributing to the observed patterns of distribution, e.g. high abundances of both Calanus spp. and their predators (ctenophores and chaetognaths) in the upper water layers during midwinter, are discussed.

  8. Pattern transfer on large samples using a sub-aperture reactive ion beam

    Energy Technology Data Exchange (ETDEWEB)

    Miessler, Andre; Mill, Agnes; Gerlach, Juergen W.; Arnold, Thomas [Leibniz-Institut fuer Oberflaechenmodifizierung (IOM), Permoserstrasse 15, D-04318 Leipzig (Germany)

    2011-07-01

    In comparison to sole Ar ion beam sputtering, Reactive Ion Beam Etching (RIBE) has the main advantage of increased selectivity between different kinds of materials, owing to chemical contributions during material removal. RIBE is therefore an excellent candidate for pattern transfer applications. The goal of the present study is to apply a sub-aperture reactive ion beam for pattern transfer on large fused silica samples. In this context, the etching behavior in the ion beam periphery plays a decisive role. Using CF₄ as the reactive gas, XPS measurements of the modified surface expose impurities such as Ni, Fe and Cr, which belong to chemically eroded material of the plasma pot, as well as an accumulation of carbon (up to 40 atomic percent) in the beam periphery. The substitution of CF₄ by NF₃ as the reactive gas brings several benefits: more stable ion beam conditions, a reduction of the beam size down to a diameter of 5 mm, and a reduced amount of Ni, Fe and Cr contamination. However, the formation of a silicon nitride layer hampers the chemical contribution to the etching process. These side effects influence the transfer of trench structures on quartz by changing the selectivity, due to the altered chemical reaction of the modified resist layer. With this in mind, we investigate the pattern transfer on large fused silica plates using NF₃ sub-aperture RIBE.

  9. Cosmological implications of a large complete quasar sample.

    Science.gov (United States)

    Segal, I E; Nicoll, J F

    1998-04-28

    Objective and reproducible determinations of the probabilistic significance levels of the deviations between theoretical cosmological prediction and direct model-independent observation are made for the Large Bright Quasar Sample [Foltz, C., Chaffee, F. H., Hewett, P. C., MacAlpine, G. M., Turnshek, D. A., et al. (1987) Astron. J. 94, 1423-1460]. The cosmologies considered are the Expanding Universe model, as represented by the Friedmann-Lemaître cosmology with parameters q0 = 0, Lambda = 0, denoted C1, and chronometric cosmology (no relevant adjustable parameters), denoted C2. The directly observed statistics predicted are the mean and dispersion of the apparent magnitudes and the slope of the apparent magnitude-redshift relation. The C1 predictions of these cosmology-independent quantities deviate by as much as 11σ from direct observation; none of the C2 predictions deviate by >2σ. The C1 deviations may be reconciled with theory by the hypothesis of quasar "evolution," which, however, appears incapable of being substantiated through direct observation. The excellent quantitative agreement of the C1 deviations with those predicted by C2 without adjustable parameters for the results of analysis predicated on C1 indicates that the evolution hypothesis may well be a theoretical artifact.

  10. "No thank you, not today": Supporting Ethical and Professional Relationships in Large Qualitative Studies

    Directory of Open Access Journals (Sweden)

    Lisa J. Blodgett

    2005-09-01

    Full Text Available Based on an ongoing research study of the development of self-regulation in early childhood (BOYER, 2005a, 2005b; BOYER, BLODGETT, & TURK, 2004), this work explores both the ethical and professional considerations of participant sampling in a large qualitative study. The study involved 146 families of preschool children and 15 educators across 7 preschools. Data collection included 30-45 minute audiotaped individual interviews, twenty-eight 90-120 minute audiotaped focus group sessions, and 30-minute videotaped footage of each child's natural play. The challenge of gaining informed consent and ongoing participation within a large study has been considered in the literature (GALL, GALL, & BORG, 2005). In qualitative studies the participants are selected purposefully because they will be particularly informative about the topic (CRESWELL, 2002). This is a challenge for qualitative researchers seeking maximal participation and large sample sizes, because volunteer participants "tend to be better educated, higher socioeconomically, more intelligent, more in need of social approval, more sociable, more unconventional, less authoritarian, and less conforming than nonvolunteers" (MCMILLAN, 2004, p.116). This paper provides a response to these sampling challenges and advocates for the building of community relationships based on ethical, interpersonal and professional foundations. URN: urn:nbn:de:0114-fqs0503353

  11. Is Business Failure Due to Lack of Effort? Empirical Evidence from a Large Administrative Sample

    NARCIS (Netherlands)

    Ejrnaes, M.; Hochguertel, S.

    2013-01-01

    Does insurance provision reduce entrepreneurs' effort to avoid business failure? We exploit unique features of the voluntary Danish unemployment insurance (UI) scheme that is available to the self-employed. Using a large sample of self-employed individuals, we estimate the causal effect of…

  12. Large sample hydrology in NZ: Spatial organisation in process diagnostics

    Science.gov (United States)

    McMillan, H. K.; Woods, R. A.; Clark, M. P.

    2013-12-01

    A key question in hydrology is how to predict the dominant runoff generation processes in any given catchment. This knowledge is vital for a range of applications in forecasting hydrological response and related processes such as nutrient and sediment transport. A step towards this goal is to map dominant processes in locations where data is available. In this presentation, we use data from 900 flow gauging stations and 680 rain gauges in New Zealand, to assess hydrological processes. These catchments range in character from rolling pasture, to alluvial plains, to temperate rainforest, to volcanic areas. By taking advantage of so many flow regimes, we harness the benefits of large-sample and comparative hydrology to study patterns and spatial organisation in runoff processes, and their relationship to physical catchment characteristics. The approach we use to assess hydrological processes is based on the concept of diagnostic signatures. Diagnostic signatures in hydrology are targeted analyses of measured data which allow us to investigate specific aspects of catchment response. We apply signatures which target the water balance, the flood response and the recession behaviour. We explore the organisation, similarity and diversity in hydrological processes across the New Zealand landscape, and how these patterns change with scale. We discuss our findings in the context of the strong hydro-climatic gradients in New Zealand, and consider the implications for hydrological model building on a national scale.
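    Diagnostic signatures of the kind described are targeted summary statistics of a flow (and rainfall) series. The sketch below computes three common examples (runoff ratio for the water balance, the Richards-Baker flashiness index for flood response, and a simple recession constant); the particular signatures and function names are illustrative choices, not necessarily the exact metrics used in the study.

```python
def runoff_ratio(precip_mm, flow_mm):
    """Water-balance signature: fraction of precipitation leaving as streamflow."""
    return sum(flow_mm) / sum(precip_mm)

def flashiness_index(flow):
    """Richards-Baker flashiness: sum of absolute day-to-day flow changes
    divided by total flow. Higher values indicate a flashier catchment."""
    return sum(abs(b - a) for a, b in zip(flow, flow[1:])) / sum(flow)

def recession_constant(flow):
    """Mean ratio Q_{t+1}/Q_t over strictly receding steps.
    Values near 1 indicate slow drainage (e.g. groundwater-dominated)."""
    pairs = [(a, b) for a, b in zip(flow, flow[1:]) if b < a and a > 0]
    return sum(b / a for a, b in pairs) / len(pairs)
```

Mapping such signatures for hundreds of gauges, as done here for 900 stations, is then a matter of applying these functions per catchment and examining the spatial patterns.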

  13. Gene coexpression measures in large heterogeneous samples using count statistics.

    Science.gov (United States)

    Wang, Y X Rachel; Waterman, Michael S; Huang, Haiyan

    2014-11-18

    With the advent of high-throughput technologies making large-scale gene expression data readily available, developing appropriate computational tools to process these data and distill insights into systems biology has been an important part of the "big data" challenge. Gene coexpression is one of the earliest techniques developed that is still widely in use for functional annotation, pathway analysis, and, most importantly, the reconstruction of gene regulatory networks, based on gene expression data. However, most coexpression measures do not specifically account for local features in expression profiles. For example, it is very likely that the patterns of gene association may change or only exist in a subset of the samples, especially when the samples are pooled from a range of experiments. We propose two new gene coexpression statistics based on counting local patterns of gene expression ranks to take into account the potentially diverse nature of gene interactions. In particular, one of our statistics is designed for time-course data with local dependence structures, such as time series coupled over a subregion of the time domain. We provide asymptotic analysis of their distributions and power, and evaluate their performance against a wide range of existing coexpression measures on simulated and real data. Our new statistics are fast to compute, robust against outliers, and show comparable and often better general performance.
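    The idea of counting local rank patterns can be illustrated with a toy statistic: slide a short window along two expression profiles and count windows whose within-window rank orderings agree (or are exactly reversed, capturing local negative association). This is only a sketch in the spirit of the approach, not the authors' statistics; the window length and function name are assumptions.

```python
def local_rank_pattern_count(x, y, w=3):
    """Count length-w windows in which x and y share the same (or exactly
    reversed) within-window rank ordering -- a crude local coexpression score
    that is invariant to monotone transformations and robust to outliers."""
    def order(v):
        # indices of the window sorted by value, i.e. the rank pattern
        return tuple(sorted(range(len(v)), key=v.__getitem__))
    return sum(
        1
        for i in range(len(x) - w + 1)
        if order(x[i:i + w]) in (order(y[i:i + w]), order(y[i:i + w])[::-1])
    )
```

Because only ranks within each window matter, genes that co-vary over just a subregion of a time course still contribute counts there, which is the heterogeneity the abstract highlights.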

  14. Gaussian vs. Bessel light-sheets: performance analysis in live large sample imaging

    Science.gov (United States)

    Reidt, Sascha L.; Correia, Ricardo B. C.; Donnachie, Mark; Weijer, Cornelis J.; MacDonald, Michael P.

    2017-08-01

    Lightsheet fluorescence microscopy (LSFM) has rapidly progressed in the past decade from an emerging technology into an established methodology. This progress has largely been driven by its suitability to developmental biology, where it is able to give excellent spatio-temporal resolution over relatively large fields of view with good contrast and low phototoxicity. In many respects it is superseding confocal microscopy. However, it is no magic bullet and still struggles to image deeply in more highly scattering samples. Many solutions to this challenge have been presented, including Airy and Bessel illumination, 2-photon operation and deconvolution techniques. In this work, we show a comparison between a simple but effective Gaussian beam illumination and Bessel illumination for imaging in chicken embryos. Whilst Bessel illumination is shown to be of benefit when a greater depth of field is required, it is not possible to see any benefits for imaging into the highly scattering tissue of the chick embryo.

  15. Using Co-Occurrence to Evaluate Belief Coherence in a Large Non Clinical Sample

    Science.gov (United States)

    Pechey, Rachel; Halligan, Peter

    2012-01-01

    Much of the recent neuropsychological literature on false beliefs (delusions) has tended to focus on individual or single beliefs, with few studies actually investigating the relationship or co-occurrence between different types of co-existing beliefs. Quine and Ullian proposed the hypothesis that our beliefs form an interconnected web in which the beliefs that make up that system must somehow “cohere” with one another and avoid cognitive dissonance. As such beliefs are unlikely to be encapsulated (i.e., exist in isolation from other beliefs). The aim of this preliminary study was to empirically evaluate the probability of belief co-occurrence as one indicator of coherence in a large sample of subjects involving three different thematic sets of beliefs (delusion-like, paranormal & religious, and societal/cultural). Results showed that the degree of belief co-endorsement between beliefs within thematic groupings was greater than random occurrence, lending support to Quine and Ullian’s coherentist account. Some associations, however, were relatively weak, providing for well-established examples of cognitive dissonance. PMID:23155383

  16. Using co-occurrence to evaluate belief coherence in a large non clinical sample.

    Directory of Open Access Journals (Sweden)

    Rachel Pechey

    Full Text Available Much of the recent neuropsychological literature on false beliefs (delusions) has tended to focus on individual or single beliefs, with few studies actually investigating the relationship or co-occurrence between different types of co-existing beliefs. Quine and Ullian proposed the hypothesis that our beliefs form an interconnected web in which the beliefs that make up that system must somehow "cohere" with one another and avoid cognitive dissonance. As such beliefs are unlikely to be encapsulated (i.e., exist in isolation from other beliefs). The aim of this preliminary study was to empirically evaluate the probability of belief co-occurrence as one indicator of coherence in a large sample of subjects involving three different thematic sets of beliefs (delusion-like, paranormal & religious, and societal/cultural). Results showed that the degree of belief co-endorsement between beliefs within thematic groupings was greater than random occurrence, lending support to Quine and Ullian's coherentist account. Some associations, however, were relatively weak, providing for well-established examples of cognitive dissonance.

  17. Waardenburg syndrome: Novel mutations in a large Brazilian sample.

    Science.gov (United States)

    Bocángel, Magnolia Astrid Pretell; Melo, Uirá Souto; Alves, Leandro Ucela; Pardono, Eliete; Lourenço, Naila Cristina Vilaça; Marcolino, Humberto Vicente Cezar; Otto, Paulo Alberto; Mingroni-Netto, Regina Célia

    2018-06-01

    This paper deals with the molecular investigation of Waardenburg syndrome (WS) in a sample of 49 clinically diagnosed probands (most from southeastern Brazil), 24 of them having the type 1 (WS1) variant (10 familial and 14 isolated cases) and 25 being affected by the type 2 (WS2) variant (five familial and 20 isolated cases). Sequential Sanger sequencing of all coding exons of PAX3, MITF, EDN3, EDNRB, SOX10 and SNAI2 genes, followed by CNV detection by MLPA of PAX3, MITF and SOX10 genes in selected cases revealed many novel pathogenic variants. Molecular screening, performed in all patients, revealed 19 causative variants (19/49 = 38.8%), six of them being large whole-exon deletions detected by MLPA, seven (four missense and three nonsense substitutions) resulting from single nucleotide substitutions (SNV), and six representing small indels. A pair of dizygotic affected female twins presented the c.430delC variant in SOX10, but the mutation, imputed to gonadal mosaicism, was not found in their unaffected parents. At least 10 novel causative mutations, described in this paper, were found in this Brazilian sample. Copy-number-variation detected by MLPA identified the causative mutation in 12.2% of our cases, corresponding to 31.6% of all causative mutations. In the majority of cases, the deletions were sporadic, since they were not present in the parents of isolated cases. Our results, as a whole, reinforce the fact that the screening of copy-number-variants by MLPA is a powerful tool to identify the molecular cause in WS patients. Copyright © 2018 Elsevier Masson SAS. All rights reserved.

  18. An examination of smoking behavior and opinions about smoke-free environments in a large sample of sexual and gender minority community members.

    Science.gov (United States)

    McElroy, Jane A; Everett, Kevin D; Zaniletti, Isabella

    2011-06-01

    The purpose of this study is to more completely quantify smoking rates and support for smoke-free policies in private and public environments in a large sample of self-identified sexual and gender minority (SGM) populations. A targeted sampling strategy recruited participants from 4 Missouri Pride Festivals and from online surveys targeted to SGM populations during the summer of 2008. A 24-item survey gathered information on gender and sexual orientation, smoking status, and behaviors and preferences related to smoke-free policies. The project recruited participants through Pride Festivals (n = 2,676) and Web-based surveys (n = 231), representing numerous sexual and gender orientations and the racial composite of the state of Missouri. Differences were found between the Pride Festivals sample and the Web-based sample, including smoking rates: current smoking in the Web-based sample (22%) was significantly less than in the Pride Festivals sample (37%; p …). SGM participants were … times more likely to be current smokers compared with the study's heterosexual group (n = 436; p = .005). Statistically fewer SGM racial minorities (33%) are current smokers compared with SGM Whites (37%; p = .04). Support and preferences for public and private smoke-free environments were generally low in the SGM population. The strategic targeting method achieved a large and diverse sample. The findings of high rates of smoking coupled with generally low levels of support for smoke-free public policies in the SGM community highlight the need for additional research to inform programmatic attempts to reduce tobacco use and increase support for smoke-free environments.

  19. Imaging a Large Sample with Selective Plane Illumination Microscopy Based on Multiple Fluorescent Microsphere Tracking

    Science.gov (United States)

    Ryu, Inkeon; Kim, Daekeun

    2018-04-01

    A typical selective plane illumination microscopy (SPIM) image is basically limited in size by the field of view, which is a characteristic of the objective lens. If an image larger than the imaging area of the sample is to be obtained, image stitching, which combines step-scanned images into a single panoramic image, is required. However, accurately registering the step-scanned images is very difficult, because the SPIM system uses a customized sample mount in which uncertainties in the translational and rotational motions exist. In this paper, an image registration technique based on multiple fluorescent microsphere tracking is proposed, which quantifies the constellations of, and measures the distances between, at least two fluorescent microspheres embedded in the sample. Image stitching results are demonstrated for optically cleared large tissue with various staining methods. Compensation for the effect of sample rotation occurring during translational motion in the sample mount is also discussed.
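    Registering step-scanned tiles from matched microsphere positions amounts to estimating a rigid (rotation + translation) transform between two point sets. Below is a minimal 2D least-squares (Kabsch-style) sketch of that general idea, not the paper's algorithm; the function name and interface are assumptions.

```python
import math

def rigid_transform_2d(src, dst):
    """Least-squares rotation angle and translation mapping src points onto dst.

    src, dst: equal-length lists of (x, y) bead coordinates in the two tiles.
    Returns (theta, tx, ty) such that dst ≈ R(theta) * src + t."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    # cross- and dot-products of centred coordinates give the optimal angle
    num = sum((sx - csx) * (dy - cdy) - (sy - csy) * (dx - cdx)
              for (sx, sy), (dx, dy) in zip(src, dst))
    den = sum((sx - csx) * (dx - cdx) + (sy - csy) * (dy - cdy)
              for (sx, sy), (dx, dy) in zip(src, dst))
    theta = math.atan2(num, den)
    # translation aligns the rotated source centroid with the target centroid
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, tx, ty
```

With two or more matched beads per tile pair, the recovered transform places each step-scanned image into a common panoramic frame before blending.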

  20. Acceptance of Driverless Vehicles : Results from a Large Cross-National Questionnaire Study

    NARCIS (Netherlands)

    Nordhoff, S.; de Winter, J.C.F.; Kyriakidis, M.; van Arem, B.; Happee, R.

    2018-01-01

    Shuttles that operate without an onboard driver are currently being developed and tested in various projects worldwide. However, there is a paucity of knowledge on the determinants of acceptance of driverless shuttles in large cross-national samples. In the present study, we surveyed 10,000…

  1. Specific Antibodies Reacting with SV40 Large T Antigen Mimotopes in Serum Samples of Healthy Subjects.

    Directory of Open Access Journals (Sweden)

    Mauro Tognon

    Full Text Available Simian Virus 40, experimentally assayed in vitro in different animal and human cells and in vivo in rodents, was classified as a small DNA tumor virus. In previous studies, many groups identified Simian Virus 40 sequences in healthy individuals and cancer patients using PCR techniques, whereas others failed to detect the viral sequences in human specimens. These conflicting results prompted us to develop a novel indirect ELISA with synthetic peptides mimicking Simian Virus 40 capsid viral protein antigens, named mimotopes. This immunologic assay allowed us to investigate the presence of serum antibodies against Simian Virus 40 and to verify whether Simian Virus 40 is circulating in humans. In this investigation, two mimotopes from Simian Virus 40 large T antigen, the viral replication protein and oncoprotein, were employed to assay human serum samples for specific antibody reactions. This indirect ELISA with synthetic peptides from Simian Virus 40 large T antigen was used to assay a new collection of serum samples from healthy subjects. The novel assay revealed that serum antibodies against Simian Virus 40 large T antigen mimotopes are detectable, at low titer, in healthy subjects aged 18-65 years. The overall prevalence of reactivity with the two Simian Virus 40 large T antigen peptides was 20%. This new ELISA with two mimotopes of the early viral region is able to specifically detect Simian Virus 40 large T antigen antibody responses.

  2. An examination of the RCMAS-2 scores across gender, ethnic background, and age in a large Asian school sample.

    Science.gov (United States)

    Ang, Rebecca P; Lowe, Patricia A; Yusof, Noradlin

    2011-12-01

    The present study investigated the factor structure, reliability, convergent and discriminant validity, and U.S. norms of the Revised Children's Manifest Anxiety Scale, Second Edition (RCMAS-2; C. R. Reynolds & B. O. Richmond, 2008a) scores in a Singapore sample of 1,618 school-age children and adolescents. Although there were small statistically significant differences in the average RCMAS-2 T scores found across various demographic groupings, on the whole, the U.S. norms appear adequate for use in the Asian Singapore sample. Results from item bias analyses suggested that biased items detected had small effects and were counterbalanced across gender and ethnicity, and hence, their relative impact on test score variation appears to be minimal. Results of factor analyses on the RCMAS-2 scores supported the presence of a large general anxiety factor, the Total Anxiety factor, and the 5-factor structure found in U.S. samples was replicated. Both the large general anxiety factor and the 5-factor solution were invariant across gender and ethnic background. Internal consistency estimates ranged from adequate to good, and 2-week test-retest reliability estimates were comparable to previous studies. Evidence providing support for convergent and discriminant validity of the RCMAS-2 scores was also found. Taken together, findings provide additional cross-cultural evidence of the appropriateness and usefulness of the RCMAS-2 as a measure of anxiety in Asian Singaporean school-age children and adolescents.

  3. Moving into a new era of periodontal genetic studies: relevance of large case-control samples using severe phenotypes for genome-wide association studies

    NARCIS (Netherlands)

    Vaithilingam, R.D.; Saffi, S.H.; Baharuddin, N.A.; Ng, C.C.; Cheong, S.C.; Bartold, P.M.; Schaefer, A.S.; Loos, B.G.

    2014-01-01

    Studies to elucidate the role of genetics as a risk factor for periodontal disease have gone through various phases. In the majority of cases, the initial ‘hypothesis-dependent’ candidate-gene polymorphism studies did not report valid genetic risk loci. Following a large-scale replication study,…

  4. Sample size in qualitative interview studies

    DEFF Research Database (Denmark)

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit Kristiane

    2016-01-01

    Sample sizes must be ascertained in qualitative studies, as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is "saturation." Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose… the concept "information power" to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power… and during data collection of a qualitative study is discussed…

  5. Analysis of reflection-peak wavelengths of sampled fiber Bragg gratings with large chirp.

    Science.gov (United States)

    Zou, Xihua; Pan, Wei; Luo, Bin

    2008-09-10

    The reflection-peak wavelengths (RPWs) in the spectra of sampled fiber Bragg gratings with large chirp (SFBGs-LC) are theoretically investigated. Such RPWs are divided into two parts, the RPWs of equivalent uniform SFBGs (U-SFBGs) and the wavelength shift caused by the large chirp in the grating period (CGP). We propose a quasi-equivalent transform to deal with the CGP. That is, the CGP is transferred into quasi-equivalent phase shifts to directly derive the Fourier transform of the refractive index modulation. Then, in the case of both the direct and the inverse Talbot effect, the wavelength shift is obtained from the Fourier transform. Finally, the RPWs of SFBGs-LC can be achieved by combining the wavelength shift and the RPWs of equivalent U-SFBGs. Several simulations are shown to numerically confirm these predicted RPWs of SFBGs-LC.

  6. Application of k0-based internal monostandard NAA for large sample analysis of clay pottery. As a part of inter comparison exercise

    International Nuclear Information System (INIS)

    Acharya, R.; Dasari, K.B.; Pujari, P.K.; Swain, K.K.; Shinde, A.D.; Reddy, A.V.R.

    2014-01-01

    As part of an intercomparison exercise of an IAEA Coordinated Research Project on large sample neutron activation analysis, a large, non-standard geometry pottery replica (obtained from Peru) was analyzed by k0-based internal monostandard neutron activation analysis (IM-NAA). Two large sub-samples (0.40 and 0.25 kg) were irradiated at the graphite reflector position of the AHWR Critical Facility at BARC, Trombay, Mumbai, India. Small samples (100-200 mg) were also analyzed by IM-NAA for comparison purposes. The radioactivity assay was carried out using a 40% relative efficiency HPGe detector. To examine the homogeneity of the sample, counting was also carried out using an X-Z rotary scanning unit. The in situ relative detection efficiency was evaluated using gamma rays of the activation products in the irradiated sample in the energy range of 122-2,754 keV. Elemental concentration ratios with respect to Na of small (100 mg) as well as large (15 and 400 g) samples were used to check the homogeneity of the samples. Concentration ratios of 18 elements, namely K, Sc, Cr, Mn, Fe, Co, Zn, As, Rb, Cs, La, Ce, Sm, Eu, Yb, Lu, Hf and Th, with respect to Na (the internal monostandard) were calculated using IM-NAA. Absolute concentrations were arrived at for both large and small samples using the Na concentration obtained from the relative method of NAA. The percentage combined uncertainties at the ±1σ confidence limit on the determined values were in the range of 3-9%. Two IAEA reference materials, SL-1 and SL-3, were analyzed by IM-NAA to evaluate the accuracy of the method. (author)

  7. On the accuracy of protein determination in large biological samples by prompt gamma neutron activation analysis

    International Nuclear Information System (INIS)

    Kasviki, K.; Stamatelatos, I.E.; Yannakopoulou, E.; Papadopoulou, P.; Kalef-Ezra, J.

    2007-01-01

    A prompt gamma neutron activation analysis (PGNAA) facility has been developed for the determination of nitrogen, and thus total protein, in large volume biological samples or the whole body of small animals. In the present work, the accuracy of nitrogen determination by PGNAA was examined in phantoms of known composition as well as in four raw ground meat samples of about 1 kg mass. Dumas combustion and Kjeldahl techniques were also used for the assessment of nitrogen concentration in the meat samples. No statistically significant differences were found between the concentrations assessed by the three techniques. The results of this work demonstrate the applicability of PGNAA for the assessment of total protein in biological samples of 0.25-1.5 kg mass, such as a meat sample or the body of a small animal, even in vivo, with an equivalent radiation dose of about 40 mSv.

  8. On the accuracy of protein determination in large biological samples by prompt gamma neutron activation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kasviki, K. [Institute of Nuclear Technology and Radiation Protection, NCSR 'Demokritos', Aghia Paraskevi, Attikis 15310 (Greece); Medical Physics Laboratory, Medical School, University of Ioannina, Ioannina 45110 (Greece); Stamatelatos, I.E. [Institute of Nuclear Technology and Radiation Protection, NCSR 'Demokritos', Aghia Paraskevi, Attikis 15310 (Greece)], E-mail: ion@ipta.demokritos.gr; Yannakopoulou, E. [Institute of Physical Chemistry, NCSR 'Demokritos', Aghia Paraskevi, Attikis 15310 (Greece); Papadopoulou, P. [Institute of Technology of Agricultural Products, NAGREF, Lycovrissi, Attikis 14123 (Greece); Kalef-Ezra, J. [Medical Physics Laboratory, Medical School, University of Ioannina, Ioannina 45110 (Greece)

    2007-10-15

    A prompt gamma neutron activation analysis (PGNAA) facility has been developed for the determination of nitrogen, and thus total protein, in large volume biological samples or the whole body of small animals. In the present work, the accuracy of nitrogen determination by PGNAA was examined in phantoms of known composition as well as in four raw ground meat samples of about 1 kg mass. Dumas combustion and Kjeldahl techniques were also used for the assessment of nitrogen concentration in the meat samples. No statistically significant differences were found between the concentrations assessed by the three techniques. The results of this work demonstrate the applicability of PGNAA for the assessment of total protein in biological samples of 0.25-1.5 kg mass, such as a meat sample or the body of a small animal, even in vivo, with an equivalent radiation dose of about 40 mSv.

  9. Strategies and equipment for sampling suspended sediment and associated toxic chemicals in large rivers - with emphasis on the Mississippi River

    Science.gov (United States)

    Meade, R.H.; Stevens, H.H.

    1990-01-01

    A Lagrangian strategy for sampling large rivers, which was developed and tested in the Orinoco and Amazon Rivers of South America during the early 1980s, is now being applied to the study of toxic chemicals in the Mississippi River. A series of 15-20 cross-sections of the Mississippi mainstem and its principal tributaries is sampled by boat in downstream sequence, beginning upriver of St. Louis and concluding downriver of New Orleans 3 weeks later. The timing of the downstream sampling sequence approximates the travel time of the river water. Samples at each cross-section are discharge-weighted to provide concentrations of dissolved and suspended constituents that are converted to fluxes. Water-sediment mixtures are collected from 10-40 equally spaced points across the river width by sequential depth integration at a uniform vertical transit rate. Essential equipment includes (i) a hydraulic winch, for sensitive control of vertical transit rates, and (ii) a collapsible-bag sampler, which allows integrated samples to be collected at all depths in the river. A section is usually sampled in 4-8 h, for a total sample recovery of 100-120 l. Sampled concentrations of suspended silt and clay are reproducible within 3%.
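The discharge-weighting step described above can be sketched numerically. This is a hypothetical illustration (invented function names and numbers), not the USGS computation code: each vertical in the cross-section contributes its concentration weighted by the local discharge.

```python
def discharge_weighted_concentration(concentrations, discharges):
    """Discharge-weighted mean concentration over the verticals of a
    cross-section: sum(c_i * q_i) / sum(q_i)."""
    total_q = sum(discharges)
    return sum(c * q for c, q in zip(concentrations, discharges)) / total_q

def constituent_flux(concentrations, discharges):
    """Cross-section flux of a constituent: sum(c_i * q_i), i.e. mass
    per unit time given consistent units for c and q."""
    return sum(c * q for c, q in zip(concentrations, discharges))
```

Note that the flux equals the discharge-weighted concentration multiplied by the total discharge, which is why discharge-weighted concentrations convert directly to fluxes.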

  10. Large sample NAA of a pottery replica utilizing thermal neutron flux at AHWR critical facility and X-Z rotary scanning unit

    International Nuclear Information System (INIS)

    Acharya, R.; Dasari, K.B.; Pujari, P.K.; Swain, K.K.; Shinde, A.D.; Reddy, A.V.R.

    2013-01-01

    Large sample neutron activation analysis (LSNAA) of a clay pottery replica from Peru was carried out using the low neutron flux graphite reflector position of the Advanced Heavy Water Reactor (AHWR) critical facility. This work was taken up as a part of an inter-comparison exercise under the IAEA CRP on LSNAA of archaeological objects. The irradiated large sample, placed on an X-Z rotary scanning unit, was assayed using a 40% relative efficiency HPGe detector. The k0-based internal monostandard NAA (IM-NAA), in conjunction with the in situ relative detection efficiency, was used to calculate concentration ratios of 12 elements with respect to Na. Analyses of both small and large samples were carried out to check homogeneity and to arrive at absolute concentrations. (author)

  11. Geochemical and mineralogical study of selected weathered samples from Olkiluoto site

    International Nuclear Information System (INIS)

    Lindberg, A.

    2009-02-01

    Optical microscopy, chemical analyses and X-ray diffraction were used to study the influence of weathering in 11 drill core samples from shallow depths (< 25 m). The samples, 4 to 22 cm in length, were drilled from the Olkiluoto study site, Eurajoki, and represent the common rock types of the local bedrock: mica gneiss and tonalitic and granodioritic gneiss. Two of the samples were macroscopically unweathered and 9 of them were markedly altered. The alteration was shown by porosity, the abundance of chlorite instead of biotite, and pink, cloudy feldspars. Many samples also contained red-brown hematite and fractures, some of them coated with secondary minerals, even clay. Microscopically, the most visible feature of weathering was the complete alteration of plagioclase and cordierite to sericite. In many samples biotite was also extensively altered to chlorite and opaque minerals. Microfractures were common and were filled by hematite, kaolinite and fine-grained muscovite (sericite). Hematite was, in some cases, also largely replacing the weathered minerals, feldspars and cordierite. Chemical alteration was not evident, because the alteration of the main minerals has produced secondary minerals of almost the same chemical composition, without any appreciable depletion or enrichment of particular elements. X-ray diffraction determinations showed that plagioclase was often replaced by mica and biotite by chlorite. In some cases the samples contained products of chemical weathering, kaolinite and smectite. (orig.)

  12. Evaluation of single and two-stage adaptive sampling designs for estimation of density and abundance of freshwater mussels in a large river

    Science.gov (United States)

    Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.

    2011-01-01

    Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.
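The kind of design evaluation described above can be sketched in a few lines. This is a simplified stand-in using simple random quadrat sampling (the study's actual candidate designs were adaptive and two-stage), with invented population numbers; it estimates the two performance measures the abstract names, the CV of the density estimate and the probability of species detection.

```python
import random
import statistics

def simulate_quadrat_sampling(density_grid, n_quadrats, n_reps=1000, seed=1):
    """Repeatedly draw simple random samples of quadrats from a population
    grid of per-quadrat counts; return (CV of the density estimate,
    probability of detecting the species, i.e. >= 1 individual in the
    sample). A simplified stand-in for the designs compared in the study."""
    rng = random.Random(seed)
    estimates = []
    detections = 0
    for _ in range(n_reps):
        sample = rng.sample(density_grid, n_quadrats)
        estimates.append(sum(sample) / n_quadrats)
        if any(count > 0 for count in sample):
            detections += 1
    mean_est = statistics.mean(estimates)
    cv = statistics.stdev(estimates) / mean_est if mean_est > 0 else float("nan")
    return cv, detections / n_reps
```

For a clustered population (say, 20 occupied quadrats of 5 mussels among 200 quadrats), increasing `n_quadrats` lowers the CV and raises the detection probability, mirroring the abstract's point that both measures are driven largely by density, spatial distribution and sample size.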

  13. Construct validity of the Groningen Frailty Indicator established in a large sample of home-dwelling elderly persons: Evidence of stability across age and gender

    NARCIS (Netherlands)

    Peters, L. L.; Boter, H.; Burgerhof, J. G. M.; Slaets, J. P. J.; Buskens, E.

    Background: The primary objective of the present study was to evaluate the validity of the Groningen Frailty Indicator (GFI) in a sample of Dutch elderly persons participating in LifeLines, a large population-based cohort study. Additional aims were to assess differences between frail and non-frail

  14. Sampling strategies for tropical forest nutrient cycling studies: a case study in São Paulo, Brazil

    Directory of Open Access Journals (Sweden)

    G. Sparovek

    1997-12-01

    The precise sampling of soil, biological or microclimatic attributes in tropical forests, which are characterized by a high diversity of species and complex spatial variability, is a difficult task. We found few basic studies to guide sampling procedures. The objective of this study was to define a sampling strategy and data analysis for some parameters frequently used in nutrient cycling studies, i.e., litter amount, total nutrient amounts in litter and its composition (Ca, Mg, K, N and P), and soil attributes at three depths (organic matter, P content, cation exchange capacity and base saturation). A natural remnant forest in the west of São Paulo State (Brazil) was selected as the study area and samples were collected in July 1989. The total amount of litter and its total nutrient amounts had a high spatially independent variance. Conversely, the variance of litter composition was lower and the spatial dependency was peculiar to each nutrient. The sampling strategy for the estimation of litter amounts and the amount of nutrients in litter should be different from the sampling strategy for nutrient composition. For the estimation of litter amounts and the amount of nutrients in litter (related to quantity), a large number of randomly distributed determinations are needed. Conversely, for the estimation of litter nutrient composition (related to quality), a smaller number of spatially located samples should be analyzed. The appropriate sampling for soil attributes differed according to depth. Overall, surface samples (0-5 cm) showed high short-distance spatially dependent variance, whereas subsurface samples exhibited spatial dependency over longer distances. Short transects with a sampling interval of 5-10 m are recommended for surface sampling. Subsurface samples must also be spatially located, but with transects or grids with longer distances between sampling points over the entire area. Composite soil samples would not provide a complete

  15. Implicit and explicit anti-fat bias among a large sample of medical doctors by BMI, race/ethnicity and gender.

    Directory of Open Access Journals (Sweden)

    Janice A Sabin

    Overweight patients report weight discrimination in health care settings and subsequent avoidance of routine preventive health care. The purpose of this study was to examine implicit and explicit attitudes about weight among a large group of medical doctors (MDs) to determine the pervasiveness of negative attitudes about weight among MDs. Test-takers voluntarily accessed a public Web site, known as Project Implicit®, and opted to complete the Weight Implicit Association Test (IAT; N = 359,261). A sub-sample identified their highest level of education as MD (N = 2,284). Among the MDs, 55% were female, 78% reported their race as white, and 62% had a normal-range BMI. The large sample of test-takers as a whole showed strong implicit anti-fat bias (Cohen's d = 1.0). MDs, on average, also showed strong implicit anti-fat bias (Cohen's d = 0.93). Both all test-takers and the MD sub-sample reported a strong preference for thin people over fat people, that is, a strong explicit anti-fat bias. We conclude that strong implicit and explicit anti-fat bias is as pervasive among MDs as it is among the general public. An important area for future research is to investigate the association between providers' implicit and explicit attitudes about weight, patient reports of weight discrimination in health care, and the quality of care delivered to overweight patients.
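Cohen's d, the effect-size measure quoted above, is the difference between two group means divided by their pooled standard deviation; values around 1.0 are conventionally considered large. A minimal sketch (not the Project Implicit scoring code):

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means divided by the pooled sample
    standard deviation of the two groups."""
    na, nb = len(group_a), len(group_b)
    var_a = statistics.variance(group_a)
    var_b = statistics.variance(group_b)
    pooled_sd = (((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd
```

In the IAT context the two "groups" are response-latency conditions; the same formula applies to any two-sample comparison.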

  16. Study population, questionnaire, data management and sample description

    Directory of Open Access Journals (Sweden)

    Chiara de Waure

    2015-06-01

    INTRODUCTION: This article describes methodological issues of the "Sportello Salute Giovani" project ("Youth Health Information Desk"), a multicenter study aimed at assessing the health status, attitudes and behaviours of university students in Italy. MATERIALS AND METHODS: The questionnaire used to carry out the study was adapted from the Italian Health Behaviour in School-aged Children (HBSC) project and consisted of 93 items addressing: demographics; nutritional habits and status; physical activity; lifestyles; reproductive and preconception health; health and satisfaction with life; and attitudes and behaviours toward academic study and new technologies. The questionnaire was administered to a pool of 12,000 students from 18 to 30 years of age who voluntarily decided to participate during classes held at different Italian faculties or at the three "Sportello Salute Giovani" centers established in the three sites of the Università Cattolica del Sacro Cuore (Catholic University of the Sacred Heart) of Rome. RESULTS: The final study sample was composed of 8516 university students. The mean age of responders was 22.2 years (Standard Deviation 2.0) and 5702 (67.0%) were females. According to the distribution in age classes, 3601 (43.3%) belonged to the 18-21 class, 3796 (44.5%) to the 22-24 class and 1019 (12.2%) to the 25-30 class. With respect to socio-economic status, data were available for 8410 responders and showed that 50.3% of students belonged to the middle class. DISCUSSION: The project took into consideration a large number of individuals from different regions of the country and therefore may be considered representative of the general population of Italian university students. Furthermore, it is the first to address, at the same time, several issues, in particular attitudes and behaviours toward health, in Italian university students. CONCLUSION: The analysis of data from such a large sample of university students sets the basis for

  17. "Best Practices in Using Large, Complex Samples: The Importance of Using Appropriate Weights and Design Effect Compensation"

    Directory of Open Access Journals (Sweden)

    Jason W. Osborne

    2011-09-01

    Large surveys often use probability sampling in order to obtain representative samples, and these data sets are valuable tools for researchers in all areas of science. Yet many researchers are not formally prepared to utilize these resources appropriately. Indeed, users of one popular dataset were generally found not to have modeled their analyses to take account of the complex sample (Johnson & Elliott, 1998), even when publishing in highly regarded journals. It is well known that failure to appropriately model the complex sample can substantially bias the results of the analysis. Examples presented in this paper highlight the risk of errors of inference and mis-estimation of parameters that results from failure to analyze these data sets appropriately.
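The two corrections the paper calls for can be illustrated with a minimal sketch: a probability-weighted point estimate, plus Kish's approximate design effect, which indicates how much unequal weights inflate sampling variance relative to simple random sampling. The function names are mine, and a real complex-sample analysis (with strata and clusters) needs dedicated software such as the R `survey` package:

```python
def weighted_mean(values, weights):
    """Probability-weighted point estimate: sum(w_i * y_i) / sum(w_i)."""
    return sum(w * y for w, y in zip(weights, values)) / sum(weights)

def kish_design_effect(weights):
    """Kish's approximate design effect due to unequal weighting:
    deff = n * sum(w^2) / (sum(w))^2. Variances from an unweighted
    analysis should be inflated by roughly this factor."""
    n = len(weights)
    sw = sum(weights)
    return n * sum(w * w for w in weights) / (sw * sw)
```

With equal weights the design effect is exactly 1; the more variable the weights, the larger the factor, which is one reason naive (unweighted, SRS-assuming) analyses of complex samples understate standard errors.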

  18. Large scale sample management and data analysis via MIRACLE

    DEFF Research Database (Denmark)

    Block, Ines; List, Markus; Pedersen, Marlene Lemvig

    Reverse-phase protein arrays (RPPAs) allow sensitive quantification of relative protein abundance in thousands of samples in parallel. In the past years the technology advanced based on improved methods and protocols concerning sample preparation and printing, antibody selection, optimization of staining conditions and mode of signal analysis. However, sample management and data analysis still pose challenges because of the high number of samples, sample dilutions, customized array patterns, and the various programs necessary for array construction and data processing. We developed a comprehensive and user-friendly web application called MIRACLE (MIcroarray R-based Analysis of Complex Lysate Experiments), which bridges the gap between sample management and array analysis by conveniently keeping track of sample information from lysate preparation, through array construction and signal...

  19. Examining gray matter structure associated with academic performance in a large sample of Chinese high school students

    OpenAIRE

    Song Wang; Ming Zhou; Taolin Chen; Xun Yang; Guangxiang Chen; Meiyun Wang; Qiyong Gong

    2017-01-01

    Achievement in school is crucial for students to be able to pursue successful careers and lead happy lives in the future. Although many psychological attributes have been found to be associated with academic performance, the neural substrates of academic performance remain largely unknown. Here, we investigated the relationship between brain structure and academic performance in a large sample of high school students via structural magnetic resonance imaging (S-MRI) using voxel-based morphometry...

  20. Crowdsourcing for large-scale mosquito (Diptera: Culicidae) sampling

    Science.gov (United States)

    Sampling a cosmopolitan mosquito (Diptera: Culicidae) species throughout its range is logistically challenging and extremely resource intensive. Mosquito control programmes and regional networks operate at the local level and often conduct sampling activities across much of North America. A method f...

  1. DNA methylation analysis from saliva samples for epidemiological studies.

    Science.gov (United States)

    Nishitani, Shota; Parets, Sasha E; Haas, Brian W; Smith, Alicia K

    2018-06-18

    Saliva is a non-invasive, easily accessible tissue, which is regularly collected in large epidemiological studies to examine genetic questions. Recently, it has become more common to use saliva to assess DNA methylation. However, DNA extracted from saliva is a mixture of both bacterial and human DNA derived from epithelial and immune cells in the mouth. Thus, there are unique challenges to using salivary DNA in methylation studies that can influence data quality. This study assesses: (1) quantification of human DNA after extraction; (2) delineation of human and bacterial DNA; (3) bisulfite conversion (BSC); (4) quantification of BSC DNA; (5) PCR amplification of BSC DNA from saliva; and (6) quantitation of DNA methylation with a targeted assay. The framework proposed will allow saliva samples to be more widely used in targeted epigenetic studies.

  2. Stormbow: A Cloud-Based Tool for Reads Mapping and Expression Quantification in Large-Scale RNA-Seq Studies.

    Science.gov (United States)

    Zhao, Shanrong; Prenger, Kurt; Smith, Lance

    2013-01-01

    RNA-Seq is becoming a promising replacement for microarrays in transcriptome profiling and differential gene expression studies. Technical improvements have decreased sequencing costs and, as a result, the size and number of RNA-Seq datasets have increased rapidly. However, the increasing volume of data from large-scale RNA-Seq studies poses a practical challenge for data analysis in a local environment. To meet this challenge, we developed Stormbow, a cloud-based software package, to process large volumes of RNA-Seq data in parallel. The performance of Stormbow has been tested by applying it to analyse 178 RNA-Seq samples in the cloud. In our test, it took 6 to 8 hours to process an RNA-Seq sample with 100 million reads, and the average cost was $3.50 per sample. Utilizing Amazon Web Services as the infrastructure for Stormbow allows us to easily scale up to handle large datasets with on-demand computational resources. Stormbow is a scalable, cost-effective, open-source tool for large-scale RNA-Seq data analysis. It can be freely downloaded and used out of the box to process Illumina RNA-Seq datasets.

  3. Post-traumatic stress syndrome in a large sample of older adults: determinants and quality of life.

    Science.gov (United States)

    Lamoureux-Lamarche, Catherine; Vasiliadis, Helen-Maria; Préville, Michel; Berbiche, Djamal

    2016-01-01

    The aims of this study were to assess, in a sample of older adults consulting in primary care practices, the determinants of post-traumatic stress syndrome (PTSS) and the quality of life associated with it. Data came from a large sample of 1765 community-dwelling older adults who were waiting to receive health services in primary care clinics in the province of Quebec. PTSS was measured with the PTSS scale. Socio-demographic and clinical characteristics were used as potential determinants of PTSS. Quality of life was measured with the EuroQol-5D-3L (EQ-5D-3L), the EQ Visual Analogue Scale and the Satisfaction With Your Life Scale. Multivariate logistic and linear regression models were used to study the presence of PTSS and the different measures of health-related quality of life as a function of the study variables. The six-month prevalence of PTSS was 11.0%. PTSS was associated with age, marital status, the number of chronic disorders and the presence of an anxiety disorder. PTSS was also associated with the EQ-5D-3L and the Satisfaction With Your Life Scale. PTSS is prevalent in patients consulting in primary care practices. Primary care physicians should be aware that PTSS is also associated with a decrease in quality of life, which can further negatively impact health status.

  4. A Study of Assimilation Bias in Name-Based Sampling of Migrants

    Directory of Open Access Journals (Sweden)

    Schnell Rainer

    2014-06-01

    The use of personal names for screening is an increasingly popular sampling technique for migrant populations. Although this is often an effective sampling procedure, very little is known about the properties of this method. Based on a large German survey, this article compares characteristics of respondents whose names have been correctly classified as belonging to a migrant population with respondents who are migrants and whose names have not been classified as belonging to a migrant population. Although significant differences were found for some variables, even with some large effect sizes, the overall bias introduced by name-based sampling (NBS) is small as long as procedures with small false-negative rates are employed.

  5. Method for the radioimmunoassay of large numbers of samples using quantitative autoradiography of multiple-well plates

    International Nuclear Information System (INIS)

    Luner, S.J.

    1978-01-01

    A double antibody assay for thyroxine using ¹²⁵I as the label was carried out on 10-μl samples in Microtiter V-plates. After an additional centrifugation to compact the precipitates, the plates were placed in contact with x-ray film overnight and the spots were scanned. In the 20 to 160 ng/ml range, the average coefficient of variation for thyroxine concentration determined on the basis of film spot optical density was 11 percent, compared to 4.8 percent obtained using a standard gamma counter. Eliminating the need for each sample to spend on the order of 1 min in a crystal well detector makes the method convenient for large-scale applications involving more than 3000 samples per day.
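The precision figures quoted above (11% versus 4.8%) are coefficients of variation, i.e. relative standard deviations of replicate determinations. A minimal sketch with made-up replicate values:

```python
import statistics

def coefficient_of_variation(replicates):
    """Percent coefficient of variation of replicate determinations:
    100 * sample standard deviation / mean."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)
```

A lower CV for the gamma-counter readings simply means the replicate counts scatter less around their mean than the film-densitometry readings do.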

  6. Spatio-temporal foreshock activity during stick-slip experiments of large rock samples

    Science.gov (United States)

    Tsujimura, Y.; Kawakata, H.; Fukuyama, E.; Yamashita, F.; Xu, S.; Mizoguchi, K.; Takizawa, S.; Hirano, S.

    2016-12-01

    Foreshock activity has sometimes been reported for large earthquakes, and has been roughly classified into two classes: for shallow intraplate earthquakes, foreshocks occurred in the vicinity of the mainshock hypocenter (e.g., Doi and Kawakata, 2012; 2013), while for interplate subduction earthquakes, foreshock hypocenters migrated toward the mainshock hypocenter (Kato et al., 2012; Yagi et al., 2014). To understand how foreshocks occur, it is useful to investigate the spatio-temporal activity of foreshocks in laboratory experiments under controlled conditions. We have conducted stick-slip experiments using a large-scale biaxial friction apparatus at NIED in Japan (e.g., Fukuyama et al., 2014). Our previous results showed that stick-slip events occurred repeatedly in a run, but only the later events were preceded by foreshocks. Kawakata et al. (2014) inferred that the gouge generated during the run was an important key to foreshock occurrence. In this study, we carried out stick-slip experiments on large rock samples whose interface (fault plane) is 1.5 m long and 0.5 m wide, after some initial runs to generate fault gouge on the interface. In the current experiments, we investigated the spatio-temporal activity of foreshocks, detecting them from the waveform records of a 3D array of piezo-electric sensors. Our new results showed that more than three foreshocks (typically about twenty) occurred during each stick-slip event, in contrast to the few foreshocks observed during previous experiments without pre-existing gouge. Next, we estimated the hypocenter locations of the stick-slip events, and found that they were located near the end opposite to the loading point. In addition, we observed a migration of foreshock hypocenters toward the hypocenter of each stick-slip event. This suggests that the foreshock activity observed in our current experiments was similar to that for the interplate earthquakes in terms of the

  7. Large scale anisotropy studies with the Auger Observatory

    International Nuclear Information System (INIS)

    Santos, E.M.; Letessier-Selvon, A.

    2006-01-01

    With the increasing Auger surface array data sample of the highest energy cosmic rays, large scale anisotropy studies in this part of the spectrum become a promising path towards understanding the origin of ultra-high energy cosmic particles. We describe the methods underlying the search for distortions in the cosmic ray arrival directions over large angular scales, that is, larger than those commonly employed in the search for correlations with point-like sources. The widely used tools, known as coverage maps, are described, and some of the issues involved in their calculation are presented through Monte Carlo based studies. Coverage computation requires deep knowledge of the local detection efficiency, including the influence of weather parameters like temperature and pressure. Particular attention is devoted to a newly proposed method to extract the coverage, based upon the assumption of time factorization of an extensive air shower detector's acceptance. We use Auger monitoring data to test the validity of this hypothesis. We finally show the necessity of using more than one coverage map to extract any possible anisotropic pattern in the sky, by pointing out some of the biases present in commonly used methods based, for example, on the scrambling of the UTC arrival times of each event. (author)

  8. Studies of $b$-tagging performance and jet substructure in a high $p_{\rm T}$ $g\rightarrow b\bar{b}$ rich sample of large-$R$ jets from $pp$ collisions at $\sqrt{s}=8$ TeV with the ATLAS detector

    CERN Document Server

    The ATLAS collaboration

    2016-01-01

    This note summarizes studies of $b$-tagging performance and the modelling of jet properties in high $p_{\rm T}$, double $b$-tagged, large-$R$ jets from $\sqrt{s} = 8$ TeV $pp$ collisions collected by the ATLAS detector at the Large Hadron Collider. The double $b$-tag requirement yields a sample rich in jets originating from the $g\rightarrow b\bar{b}$ process. Using this sample, the performance of $b$-tagging at small $b$-quark angular separations is probed, and the modelling of jet properties, including substructure variables, is examined. Good agreement between data and Monte Carlo simulation is found within the experimental uncertainties.

  9. Ultrasensitive multiplex optical quantification of bacteria in large samples of biofluids

    Science.gov (United States)

    Pazos-Perez, Nicolas; Pazos, Elena; Catala, Carme; Mir-Simon, Bernat; Gómez-de Pedro, Sara; Sagales, Juan; Villanueva, Carlos; Vila, Jordi; Soriano, Alex; García de Abajo, F. Javier; Alvarez-Puebla, Ramon A.

    2016-01-01

    Efficient treatment of bacterial infections requires the fast and accurate recognition of pathogens, at concentrations as low as one per milliliter in the case of septicemia. Detecting and quantifying bacteria at such low concentrations is challenging and typically demands cultures of large samples of blood (~1 milliliter) extending over 24–72 hours. This delay seriously compromises the health of patients. Here we demonstrate a fast microorganism optical detection system for the exhaustive identification and quantification of pathogens in volumes of biofluids with clinical relevance (~1 milliliter) in minutes. We drive each type of bacteria to accumulate antibody-functionalized SERS-labelled silver nanoparticles. Particle aggregation on the bacteria membranes renders dense arrays of inter-particle gaps in which the Raman signal is exponentially amplified by several orders of magnitude relative to the dispersed particles. This enables multiplex identification of the microorganisms through their molecule-specific spectral fingerprints. PMID:27364357

  10. Similar brain activation during false belief tasks in a large sample of adults with and without autism.

    Science.gov (United States)

    Dufour, Nicholas; Redcay, Elizabeth; Young, Liane; Mavros, Penelope L; Moran, Joseph M; Triantafyllou, Christina; Gabrieli, John D E; Saxe, Rebecca

    2013-01-01

    Reading about another person's beliefs engages 'Theory of Mind' processes and elicits highly reliable brain activation across individuals and experimental paradigms. Using functional magnetic resonance imaging, we examined activation during a story task designed to elicit Theory of Mind processing in a very large sample of neurotypical (N = 462) individuals, and a group of high-functioning individuals with autism spectrum disorders (N = 31), using both region-of-interest and whole-brain analyses. This large sample allowed us to investigate group differences in brain activation to Theory of Mind tasks with unusually high sensitivity. There were no differences between neurotypical participants and those diagnosed with autism spectrum disorder. These results imply that the social cognitive impairments typical of autism spectrum disorder can occur without measurable changes in the size, location or response magnitude of activity during explicit Theory of Mind tasks administered to adults.

  11. Similar brain activation during false belief tasks in a large sample of adults with and without autism.

    Directory of Open Access Journals (Sweden)

    Nicholas Dufour

    Full Text Available Reading about another person's beliefs engages 'Theory of Mind' processes and elicits highly reliable brain activation across individuals and experimental paradigms. Using functional magnetic resonance imaging, we examined activation during a story task designed to elicit Theory of Mind processing in a very large sample of neurotypical (N = 462) individuals, and a group of high-functioning individuals with autism spectrum disorders (N = 31), using both region-of-interest and whole-brain analyses. This large sample allowed us to investigate group differences in brain activation to Theory of Mind tasks with unusually high sensitivity. There were no differences between neurotypical participants and those diagnosed with autism spectrum disorder. These results imply that the social cognitive impairments typical of autism spectrum disorder can occur without measurable changes in the size, location or response magnitude of activity during explicit Theory of Mind tasks administered to adults.

  12. Presence and significant determinants of cognitive impairment in a large sample of patients with multiple sclerosis.

    Directory of Open Access Journals (Sweden)

    Martina Borghi

    Full Text Available OBJECTIVES: To investigate the presence and the nature of cognitive impairment in a large sample of patients with Multiple Sclerosis (MS), and to identify clinical and demographic determinants of cognitive impairment in MS. METHODS: 303 patients with MS and 279 healthy controls were administered the Brief Repeatable Battery of Neuropsychological tests (BRB-N); measures of pre-morbid verbal competence and neuropsychiatric measures were also administered. RESULTS: Patients and healthy controls were matched for age, gender, education and pre-morbid verbal Intelligence Quotient. Patients presenting with cognitive impairment were 108/303 (35.6%). In the overall group of participants, the significant predictors of the most sensitive BRB-N scores were: presence of MS, age, education, and vocabulary. The significant predictors when considering MS patients only were: course of MS, age, education, vocabulary, and depression. Using logistic regression analyses, significant determinants of the presence of cognitive impairment in relapsing-remitting MS patients were: duration of illness (OR = 1.053, 95% CI = 1.010-1.097, p = 0.015), Expanded Disability Status Scale score (OR = 1.247, 95% CI = 1.024-1.517, p = 0.028), and vocabulary (OR = 0.960, 95% CI = 0.936-0.984, p = 0.001), while in the smaller group of progressive MS patients these predictors did not play a significant role in determining the cognitive outcome. CONCLUSIONS: Our results corroborate the evidence about the presence and the nature of cognitive impairment in a large sample of patients with MS. Furthermore, our findings identify significant clinical and demographic determinants of cognitive impairment in a large sample of MS patients for the first time. Implications for further research and clinical practice are discussed.
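Odds ratios with Wald confidence intervals, as reported in the record above, come directly from logistic-regression coefficients: OR = exp(β) and the 95% CI is exp(β ± 1.96·SE). A minimal self-contained sketch with hypothetical data (the fitting routine is a plain Newton-Raphson, not the authors' software):

```python
import numpy as np

def logistic_or_ci(X, y, n_iter=50):
    """Fit logistic regression by Newton-Raphson; return odds ratios
    with Wald 95% confidence intervals for each predictor."""
    X1 = np.column_stack([np.ones(len(y)), X])   # add intercept column
    beta = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X1 @ beta))     # fitted probabilities
        W = p * (1 - p)                          # IRLS weights
        H = X1.T @ (X1 * W[:, None])             # observed information
        beta += np.linalg.solve(H, X1.T @ (y - p))
    se = np.sqrt(np.diag(np.linalg.inv(H)))      # Wald standard errors
    or_ = np.exp(beta)
    lo, hi = np.exp(beta - 1.96 * se), np.exp(beta + 1.96 * se)
    return or_[1:], lo[1:], hi[1:]               # drop the intercept

# Hypothetical example: outcome depends on the first predictor only
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
logit = 0.8 * X[:, 0]
y = (rng.random(500) < 1 / (1 + np.exp(-logit))).astype(float)
or_, lo, hi = logistic_or_ci(X, y)
```

An OR whose 95% CI excludes 1 (as for duration of illness above) is the usual criterion for a significant determinant.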

  13. Outpatient Tinnitus Clinic, Self-Help Web Platform, or Mobile Application to Recruit Tinnitus Study Samples?

    Directory of Open Access Journals (Sweden)

    Thomas Probst

    2017-04-01

    Full Text Available For understanding the heterogeneity of tinnitus, large samples are required. However, investigations on how samples recruited by different methods differ from each other are lacking. In the present study, three large samples, each recruited by different means, were compared: N = 5017 individuals registered at a self-help web platform for tinnitus (crowdsourcing platform Tinnitus Talk), N = 867 users of a smart mobile application for tinnitus (crowdsensing platform TrackYourTinnitus), and N = 3786 patients contacting an outpatient tinnitus clinic (Tinnitus Center of the University Hospital Regensburg). The three samples were compared regarding age, gender, and duration of tinnitus (months or years perceiving tinnitus; subjective report) using chi-squared tests. The three samples significantly differed from each other in age, gender and tinnitus duration (p < 0.05). Users of the TrackYourTinnitus crowdsensing platform were younger, users of the Tinnitus Talk crowdsourcing platform were more often female, and users of both newer technologies (crowdsourcing and crowdsensing) more frequently had acute/subacute tinnitus (<3 months and 4–6 months) as well as a very long tinnitus duration (>20 years). The implications of these findings for clinical research are that newer technologies such as crowdsourcing and crowdsensing platforms offer the possibility to reach individuals who are hard to contact through an outpatient tinnitus clinic. Depending on the aims and the inclusion/exclusion criteria of a given study, different recruiting strategies (clinic and/or newer technologies) offer different advantages and disadvantages. In general, the representativeness of study results might be increased when tinnitus study samples are recruited in the clinic as well as via crowdsourcing and crowdsensing.
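Comparisons of categorical characteristics across recruitment channels, as above, reduce to a Pearson chi-squared test on a contingency table. A sketch with hypothetical gender counts (group totals match the sample sizes above, but the female/male split is invented for illustration):

```python
import numpy as np

def chi_squared(table):
    """Pearson chi-squared statistic and degrees of freedom
    for an r x c contingency table of observed counts."""
    table = np.asarray(table, dtype=float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row @ col / table.sum()           # under independence
    stat = ((table - expected) ** 2 / expected).sum()
    df = (table.shape[0] - 1) * (table.shape[1] - 1)
    return stat, df

# Hypothetical female/male counts for the three recruitment channels
obs = [[2600, 2417],   # web platform (N = 5017)
       [ 340,  527],   # mobile app   (N = 867)
       [1700, 2086]]   # clinic       (N = 3786)
stat, df = chi_squared(obs)

# df = 2; the chi-squared critical value at alpha = 0.05 is 5.991
significant = stat > 5.991
```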

  14. Correction of population stratification in large multi-ethnic association studies.

    Directory of Open Access Journals (Sweden)

    David Serre

    2008-01-01

    Full Text Available The vast majority of genetic risk factors for complex diseases have, taken individually, a small effect on the end phenotype. Population-based association studies therefore need very large sample sizes to detect significant differences between affected and non-affected individuals. Including thousands of affected individuals in a study requires recruitment in numerous centers, possibly from different geographic regions. Unfortunately such a recruitment strategy is likely to complicate the study design and to generate concerns regarding population stratification. We analyzed 9,751 individuals representing three main ethnic groups - Europeans, Arabs and South Asians - that had been enrolled from 154 centers involving 52 countries for a global case/control study of acute myocardial infarction. All individuals were genotyped at 103 candidate genes using 1,536 SNPs selected with a tagging strategy that captures most of the genetic diversity in different populations. We show that relying solely on self-reported ethnicity is not sufficient to exclude population stratification and we present additional methods to identify and correct for stratification. Our results highlight the importance of carefully addressing population stratification and of carefully "cleaning" the sample prior to analyses to obtain stronger signals of association and to avoid spurious results.
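Stratification of the kind described above is commonly detected with principal-component analysis of the genotype matrix (the idea behind EIGENSTRAT-style corrections): ancestry shows up as the leading components. A toy sketch with simulated 0/1/2 genotypes from two populations with shifted allele frequencies:

```python
import numpy as np

rng = np.random.default_rng(1)
n_per_pop, n_snps = 100, 300

# Two populations whose allele frequencies differ SNP by SNP
f1 = rng.uniform(0.1, 0.9, n_snps)
f2 = np.clip(f1 + rng.normal(0, 0.2, n_snps), 0.05, 0.95)
G = np.vstack([rng.binomial(2, f1, (n_per_pop, n_snps)),
               rng.binomial(2, f2, (n_per_pop, n_snps))]).astype(float)

# Column-standardize, then get the top principal component via SVD
Gs = (G - G.mean(0)) / (G.std(0) + 1e-9)
U, S, Vt = np.linalg.svd(Gs, full_matrices=False)
pc1 = U[:, 0] * S[0]                 # per-individual PC1 score

# PC1 should separate the two ancestry groups cleanly
sep = abs(pc1[:n_per_pop].mean() - pc1[n_per_pop:].mean())
```

In practice the leading components are then included as covariates in the association model to absorb the stratification signal.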

  15. Comparison of Two Methods for Estimating the Sampling-Related Uncertainty of Satellite Rainfall Averages Based on a Large Radar Data Set

    Science.gov (United States)

    Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.

    2002-01-01

    The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.
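The non-parametric approach mentioned above can be illustrated by subsampling a complete record at the satellite's revisit interval and comparing against the full average. A toy Monte Carlo sketch with a synthetic intermittent hourly rain series (all rates, probabilities, and intervals invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
HOURS = 30 * 24  # one month at hourly resolution

def simulate_month():
    """Hypothetical intermittent hourly rain-rate series (mm/h):
    10% wet hours with exponentially distributed rates, mean 2 mm/h."""
    wet = rng.random(HOURS) < 0.1
    return np.where(wet, rng.exponential(2.0, HOURS), 0.0)

def rms_relative_error(dt, n_trials=300):
    """Monte Carlo RMS relative error of the monthly mean when the
    field is visited only every `dt` hours at a random phase."""
    errs = []
    for _ in range(n_trials):
        rate = simulate_month()
        off = rng.integers(dt)                     # random sampling phase
        errs.append(rate[off::dt].mean() - rate.mean())
    return np.sqrt(np.mean(np.square(errs))) / 0.2  # 0.2 = mean rate

err_3h, err_12h = rms_relative_error(3), rms_relative_error(12)
```

As the abstract notes, the uncertainty grows as the sampling interval widens; here the 12 h error is roughly twice the 3 h error.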

  16. Prevalence of learned grapheme-color pairings in a large online sample of synesthetes.

    Directory of Open Access Journals (Sweden)

    Nathan Witthoft

    Full Text Available In this paper we estimate the minimum prevalence of grapheme-color synesthetes with letter-color matches learned from an external stimulus, by analyzing a large sample of English-speaking grapheme-color synesthetes. We find that at least 6% (400/6588 participants) of the total sample learned many of their matches from a widely available colored letter toy. Among those born in the decade after the toy began to be manufactured, the proportion of synesthetes with learned letter-color pairings approaches 15% for some 5-year periods. Among those born 5 years or more before it was manufactured, none have colors learned from the toy. Analysis of the letter-color matching data suggests the only difference between synesthetes with matches to the toy and those without is exposure to the stimulus. These data indicate learning of letter-color pairings from external contingencies can occur in a substantial fraction of synesthetes, and are consistent with the hypothesis that grapheme-color synesthesia is a kind of conditioned mental imagery.

  17. Large magnitude gridded ionization chamber for impurity identification in alpha emitting radioactive samples

    International Nuclear Information System (INIS)

    Santos, R.N. dos.

    1992-01-01

    This paper describes a large, high-resolution gridded ionization chamber used in the identification of α-emitting radioactive samples. The chamber and its electrodes are described in terms of geometry and dimensions, and the best results obtained are listed. Several α-emitting radioactive samples were measured using a gas mixture of 90% argon and 10% methane. The α-energy spectra showed a resolution of about 22.14 keV, in agreement with the best results available in the literature. The α-energy spectrum of ²³³U was acquired with this ionization chamber; the measured energies agreed well with the calibration curve of the chamber. Many additional measurements with differently adjusted detectors were obtained to confirm the experimental results, leading to the identification of several members of the ²³³U decay series. These results show that the chamber can be used for measurements of low-activity α contamination. (author)

  18. Investigating sex differences in psychological predictors of snack intake among a large representative sample.

    Science.gov (United States)

    Adriaanse, Marieke A; Evers, Catharine; Verhoeven, Aukje A C; de Ridder, Denise T D

    2016-03-01

    It is often assumed that there are substantial sex differences in eating behaviour (e.g. women are more likely to be dieters or emotional eaters than men). The present study investigates this assumption in a large representative community sample while incorporating a comprehensive set of psychological eating-related variables. A community sample was employed to: (i) determine sex differences in (un)healthy snack consumption and psychological eating-related variables (e.g. emotional eating, intention to eat healthily); (ii) examine whether sex predicts energy intake from (un)healthy snacks over and above psychological variables; and (iii) investigate the relationship between psychological variables and snack intake for men and women separately. Snack consumption was assessed with a 7d snack diary; the psychological eating-related variables with questionnaires. Participants were members of an Internet survey panel that is based on a true probability sample of households in the Netherlands. Men and women (n 1292; 45 % male), with a mean age of 51·23 (sd 16·78) years and a mean BMI of 25·62 (sd 4·75) kg/m2. Results revealed that women consumed more healthy and less unhealthy snacks than men and they scored higher than men on emotional and restrained eating. Women also more often reported appearance and health-related concerns about their eating behaviour, but men and women did not differ with regard to external eating or their intentions to eat more healthily. The relationships between psychological eating-related variables and snack intake were similar for men and women, indicating that snack intake is predicted by the same variables for men and women. It is concluded that some small sex differences in psychological eating-related variables exist, but based on the present data there is no need for interventions aimed at promoting healthy eating to target different predictors according to sex.

  19. Sleep habits, insomnia, and daytime sleepiness in a large and healthy community-based sample of New Zealanders.

    Science.gov (United States)

    Wilsmore, Bradley R; Grunstein, Ronald R; Fransen, Marlene; Woodward, Mark; Norton, Robyn; Ameratunga, Shanthi

    2013-06-15

    To determine the relationship between sleep complaints, primary insomnia, excessive daytime sleepiness, and lifestyle factors in a large community-based sample. Cross-sectional study. Blood donor sites in New Zealand. 22,389 individuals aged 16-84 years volunteering to donate blood. N/A. A comprehensive self-administered questionnaire including personal demographics and validated questions assessing sleep disorders (snoring, apnea), sleep complaints (sleep quantity, sleep dissatisfaction), insomnia symptoms, excessive daytime sleepiness, mood, and lifestyle factors such as work patterns, smoking, alcohol, and illicit substance use. Additionally, direct measurements of height and weight were obtained. One in three participants reported sleep complaints. Excessive daytime sleepiness (even in this healthy sample) was associated with insomnia (odds ratio [OR] 1.75, 95% confidence interval [CI] 1.50 to 2.05), depression (OR 2.01, CI 1.74 to 2.32), and sleep disordered breathing (OR 1.92, CI 1.59 to 2.32). Long work hours, alcohol dependence, and rotating work shifts also increase the risk of daytime sleepiness. Even in this relatively young, healthy, non-clinical sample, sleep complaints and primary insomnia with subsequent excess daytime sleepiness were common. There were clear associations between many personal and lifestyle factors-such as depression, long work hours, alcohol dependence, and rotating shift work-and sleep problems or excessive daytime sleepiness.

  20. MicroRNA Expression in Laser Micro-dissected Breast Cancer Tissue Samples - a Pilot Study.

    Science.gov (United States)

    Seclaman, Edward; Narita, Diana; Anghel, Andrei; Cireap, Natalia; Ilina, Razvan; Sirbu, Ioan Ovidiu; Marian, Catalin

    2017-10-28

    Breast cancer continues to represent a significant public health burden despite outstanding research advances regarding the molecular mechanisms of cancer biology, biomarkers for diagnostics and prognostic and therapeutic management of this disease. Studies of microRNAs in breast cancer have underlined their potential as biomarkers and therapeutic targets; however, most of these studies are still done on largely heterogeneous whole breast tissue samples. In this pilot study we have investigated the expression of four microRNAs (miR-21, -145, -155, and -92) known to be involved in breast cancer, in homogenous cell populations collected by laser capture microdissection from breast tissue section slides. MicroRNA expression was assessed by real time PCR, and associations with clinical and pathological characteristics were also explored. Our results have confirmed previous associations of miR-21 expression with poor prognosis characteristics of breast cancers such as high stage, large and highly proliferative tumors. No statistically significant associations were found with the other microRNAs investigated, possibly due to the small sample size of our study. Our results also suggest that miR-484 could be a suitable endogenous control for data normalization in breast tissues, these results needing further confirmation by future studies. In summary, our pilot study showed the feasibility of detecting microRNA expression in homogenous laser captured microdissected invasive breast cancer samples, and confirmed some of the previously reported associations with poor prognostic characteristics of breast tumors.

  1. Analysis of plant hormones by microemulsion electrokinetic capillary chromatography coupled with on-line large volume sample stacking.

    Science.gov (United States)

    Chen, Zongbao; Lin, Zian; Zhang, Lin; Cai, Yan; Zhang, Lan

    2012-04-07

    A novel method of microemulsion electrokinetic capillary chromatography (MEEKC) coupled with on-line large volume sample stacking was developed for the analysis of six plant hormones including indole-3-acetic acid, indole-3-butyric acid, indole-3-propionic acid, 1-naphthaleneacetic acid, abscisic acid and salicylic acid. Baseline separation of six plant hormones was achieved within 10 min by using the microemulsion background electrolyte containing a 97.2% (w/w) 10 mM borate buffer at pH 9.2, 1.0% (w/w) ethyl acetate as oil droplets, 0.6% (w/w) sodium dodecyl sulphate as surfactant and 1.2% (w/w) 1-butanol as cosurfactant. In addition, an on-line concentration method based on a large volume sample stacking technique and multiple wavelength detection was adopted for improving the detection sensitivity in order to determine trace level hormones in a real sample. The optimal method provided about 50-100 fold increase in detection sensitivity compared with a single MEEKC method, and the detection limits (S/N = 3) were between 0.005 and 0.02 μg mL(-1). The proposed method was simple, rapid and sensitive and could be applied to the determination of six plant hormones in spiked water samples, tobacco leaves and 1-naphthylacetic acid in leaf fertilizer. The recoveries ranged from 76.0% to 119.1%, and good reproducibilities were obtained with relative standard deviations (RSDs) less than 6.6%.
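Detection limits quoted at S/N = 3, as above, follow from the calibration slope and the baseline noise: LOD = 3 × (noise SD) / slope. A minimal sketch with hypothetical calibration numbers (all values invented for illustration):

```python
import numpy as np

# Hypothetical calibration: peak height vs concentration (ug/mL)
conc   = np.array([0.05, 0.1, 0.2, 0.5, 1.0])
height = np.array([ 510, 1020, 2050, 5100, 10150])  # arbitrary units

# Least-squares line; np.polyfit returns highest-degree term first
slope, intercept = np.polyfit(conc, height, 1)

noise_sd = 17.0  # hypothetical baseline noise SD (same units as height)
lod = 3 * noise_sd / slope   # concentration giving S/N = 3
```

With these invented numbers the LOD works out to about 0.005 ug/mL, i.e. the low end of the range reported in the abstract.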

  2. Sample size of the reference sample in a case-augmented study.

    Science.gov (United States)

    Ghosh, Palash; Dewanji, Anup

    2017-05-01

    The case-augmented study, in which a case sample is augmented with a reference (random) sample from the source population with only covariates information known, is becoming popular in different areas of applied science such as pharmacovigilance, ecology, and econometrics. In general, the case sample is available from some source (for example, hospital database, case registry, etc.); however, the reference sample is required to be drawn from the corresponding source population. The required minimum size of the reference sample is an important issue in this regard. In this work, we address the minimum sample size calculation and discuss related issues. Copyright © 2017 John Wiley & Sons, Ltd.

  3. A large-capacity sample-changer for automated gamma-ray spectroscopy

    International Nuclear Information System (INIS)

    Andeweg, A.H.

    1980-01-01

    An automatic sample-changer has been developed at the National Institute for Metallurgy for use in gamma-ray spectroscopy with a lithium-drifted germanium detector. The sample-changer features remote storage, which prevents cross-talk and reduces background. It has a capacity for 200 samples and a sample container that takes liquid or solid samples. The rotation and vibration of samples during counting ensure that powdered samples are compacted, and improve the precision and reproducibility of the counting geometry.

  4. Effect of NaOH on large-volume sample stacking of haloacetic acids in capillary zone electrophoresis with a low-pH buffer.

    Science.gov (United States)

    Tu, Chuanhong; Zhu, Lingyan; Ang, Chay Hoon; Lee, Hian Kee

    2003-06-01

    Large-volume sample stacking (LVSS) is an effective on-capillary sample concentration method in capillary zone electrophoresis, which can be applied to the sample in a low-conductivity matrix. NaOH solution is commonly used to back-extract acidic compounds from organic solvent in sample pretreatment. The effect of NaOH as sample matrix on LVSS of haloacetic acids was investigated in this study. It was found that the presence of NaOH in sample did not compromise, but rather help the sample stacking performance if a low pH background electrolyte (BGE) was used. The sensitivity enhancement factor was higher than the case when sample was dissolved in pure water or diluted BGE. Compared with conventional injection (0.4% capillary volume), 97-120-fold sensitivity enhancement in terms of peak height was obtained without deterioration of separation with an injection amount equal to 20% of the capillary volume. This method was applied to determine haloacetic acids in tap water by combination with liquid-liquid extraction and back-extraction into NaOH solution. Limits of detection at sub-ppb levels were obtained for real samples with direct UV detection.

  5. Replicability of time-varying connectivity patterns in large resting state fMRI samples.

    Science.gov (United States)

    Abrol, Anees; Damaraju, Eswar; Miller, Robyn L; Stephen, Julia M; Claus, Eric D; Mayer, Andrew R; Calhoun, Vince D

    2017-12-01

    The past few years have seen an emergence of approaches that leverage temporal changes in whole-brain patterns of functional connectivity (the chronnectome). In this chronnectome study, we investigate the replicability of the human brain's inter-regional coupling dynamics during rest by evaluating two different dynamic functional network connectivity (dFNC) analysis frameworks using 7 500 functional magnetic resonance imaging (fMRI) datasets. To quantify the extent to which the emergent functional connectivity (FC) patterns are reproducible, we characterize the temporal dynamics by deriving several summary measures across multiple large, independent age-matched samples. Reproducibility was demonstrated through the existence of basic connectivity patterns (FC states) amidst an ensemble of inter-regional connections. Furthermore, application of the methods to conservatively configured (statistically stationary, linear and Gaussian) surrogate datasets revealed that some of the studied state summary measures were indeed statistically significant and also suggested that this class of null model did not explain the fMRI data fully. This extensive testing of reproducibility of similarity statistics also suggests that the estimated FC states are robust against variation in data quality, analysis, grouping, and decomposition methods. We conclude that future investigations probing the functional and neurophysiological relevance of time-varying connectivity assume critical importance. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
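A common dFNC pipeline of the kind evaluated above computes sliding-window correlation matrices between regional time series and clusters the windows into recurring FC states, typically with k-means. A toy sketch with synthetic data and a hand-rolled two-cluster k-means (illustrative only, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(3)
T, R, W = 400, 5, 60   # timepoints, regions, window length

# Toy data: first half strongly couples regions 0-1, second half 2-3
ts = rng.normal(size=(T, R))
ts[:200, 1] += 2.0 * ts[:200, 0]
ts[200:, 3] += 2.0 * ts[200:, 2]

def windowed_fc(ts, width, step=10):
    """Sliding-window correlation matrices, vectorized upper triangle."""
    iu = np.triu_indices(ts.shape[1], k=1)
    starts = range(0, ts.shape[0] - width + 1, step)
    return np.array([np.corrcoef(ts[s:s + width].T)[iu] for s in starts])

fc = windowed_fc(ts, W)          # 35 windows x 10 edge strengths

def kmeans2(X, n_iter=30):
    """Minimal two-cluster k-means, seeded from first/last window."""
    cent = X[[0, -1]].copy()
    for _ in range(n_iter):
        lab = np.argmin(((X[:, None] - cent) ** 2).sum(-1), axis=1)
        cent = np.array([X[lab == k].mean(0) for k in (0, 1)])
    return lab

labels = kmeans2(fc)             # recurring FC "states" per window
```

Windows lying entirely in one connectivity regime land in the same state, which is the kind of structure whose reproducibility the study assesses across samples.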

  6. A follow-up study of hygiene in catering premises at large-scale events in the United Kingdom.

    Science.gov (United States)

    Willis, C; Elviss, N; McLauchlin, J

    2015-01-01

    To investigate food hygiene practices at large events by assessing the microbiological quality of ready-to-eat food, drinking water, food preparation surfaces, cleaning cloths and wristbands worn by food handlers for event security purposes. Over a 7-month period, 1662 samples were collected at 153 events and examined for microbiological contamination. Eight per cent of food samples were of an unsatisfactory quality. A further one per cent contained potentially hazardous levels of human pathogenic bacteria. 27% of water samples, 32% of swabs and 56% of cloths were also unsatisfactory. These results represented an improvement in hygiene compared with a similar study carried out 12 months earlier. A fifth of food handler wristbands were contaminated with Enterobacteriaceae, Escherichia coli and/or coagulase-positive staphylococci, with those bands made from fabric being more frequently contaminated than those made from plastic or other materials. This study provides evidence that the food hygiene at large-scale events may have improved. However, there is still a need for continued efforts to maintain an ongoing improvement in cleaning regimes and food hygiene management. This study was part of an ongoing focus on large events in the lead-up to the London 2012 Olympics. Lessons learnt here will be important in the planning of future large events. © 2014 Crown copyright. © 2014 Society for Applied Microbiology. This article is published with the permission of the Controller of HMSO and Queen's Printer for Scotland.

  7. LOGISTICS OF ECOLOGICAL SAMPLING ON LARGE RIVERS

    Science.gov (United States)

    The objectives of this document are to provide an overview of the logistical problems associated with the ecological sampling of boatable rivers and to suggest solutions to those problems. It is intended to be used as a resource for individuals preparing to collect biological data...

  8. Sampling data summary for the ninth run of the Large Slurry Fed Melter

    International Nuclear Information System (INIS)

    Sabatino, D.M.

    1983-01-01

    The ninth experimental run of the Large Slurry Fed Melter (LSFM) was completed June 27, 1983, after 63 days of continuous operation. During the run, the various melter and off-gas streams were sampled and analyzed to determine melter material balances and to characterize off-gas emissions. Sampling methods and preliminary results were reported earlier. The emphasis was on the chemical analyses of the off-gas entrainment, deposits, and scrubber liquid. The significant sampling results from the run are summarized below. Flushing the Frit 165 with Frit 131 without bubbler agitation required 3 to 4.5 melter volumes. The off-gas cesium concentration during feeding was on the order of 36 to 56 μgCs/scf. The cesium concentration in the melter plenum (based on air in-leakage only) was on the order of 110 to 210 μgCs/scf. Using <1 micron as the cut point for semivolatile material, 60% of the chloride, 35% of the sodium and less than 5% of the manganese and iron in the entrainment are present as semivolatiles. A material balance on the scrubber tank solids shows good agreement with entrainment data. An overall cesium balance using LSFM-9 data and the DWPF production rate indicates an emission of 0.11 mCi/yr of cesium from the DWPF off-gas. This is a factor of 27 less than the maximum allowable 3 mCi/yr.

  9. Ion beam analysis techniques applied to large scale pollution studies

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, D D; Bailey, G; Martin, J; Garton, D; Noorman, H; Stelcer, E; Johnson, P [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)

    1994-12-31

    Ion Beam Analysis (IBA) techniques are ideally suited to analyse the thousands of filter papers a year that may originate from a large scale aerosol sampling network. They are fast, multi-elemental and, for the most part, non-destructive, so other analytical methods such as neutron activation and ion chromatography can be performed afterwards. ANSTO in collaboration with the NSW EPA, Pacific Power and the Universities of NSW and Macquarie has established a large area fine aerosol sampling network covering nearly 80,000 square kilometres of NSW with 25 fine particle samplers. This network known as ASP was funded by the Energy Research and Development Corporation (ERDC) and commenced sampling on 1 July 1991. The cyclone sampler at each site has a 2.5 {mu}m particle diameter cut off and runs for 24 hours every Sunday and Wednesday using one Gillman 25mm diameter stretched Teflon filter for each day. These filters are ideal targets for ion beam analysis work. Currently ANSTO receives 300 filters per month from this network for analysis using its accelerator based ion beam techniques on the 3 MV Van de Graaff accelerator. One week a month of accelerator time is dedicated to this analysis. Four simultaneous accelerator based IBA techniques are used at ANSTO, to analyse for the following 24 elements: H, C, N, O, F, Na, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Cu, Ni, Co, Zn, Br and Pb. The IBA techniques proved invaluable in identifying sources of fine particles and their spatial and seasonal variations across the large area sampled by the ASP network. 3 figs.

  10. Ion beam analysis techniques applied to large scale pollution studies

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, D.D.; Bailey, G.; Martin, J.; Garton, D.; Noorman, H.; Stelcer, E.; Johnson, P. [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)

    1993-12-31

    Ion Beam Analysis (IBA) techniques are ideally suited to analyse the thousands of filter papers a year that may originate from a large scale aerosol sampling network. They are fast, multi-elemental and, for the most part, non-destructive, so other analytical methods such as neutron activation and ion chromatography can be performed afterwards. ANSTO in collaboration with the NSW EPA, Pacific Power and the Universities of NSW and Macquarie has established a large area fine aerosol sampling network covering nearly 80,000 square kilometres of NSW with 25 fine particle samplers. This network known as ASP was funded by the Energy Research and Development Corporation (ERDC) and commenced sampling on 1 July 1991. The cyclone sampler at each site has a 2.5 {mu}m particle diameter cut off and runs for 24 hours every Sunday and Wednesday using one Gillman 25mm diameter stretched Teflon filter for each day. These filters are ideal targets for ion beam analysis work. Currently ANSTO receives 300 filters per month from this network for analysis using its accelerator based ion beam techniques on the 3 MV Van de Graaff accelerator. One week a month of accelerator time is dedicated to this analysis. Four simultaneous accelerator based IBA techniques are used at ANSTO, to analyse for the following 24 elements: H, C, N, O, F, Na, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Cu, Ni, Co, Zn, Br and Pb. The IBA techniques proved invaluable in identifying sources of fine particles and their spatial and seasonal variations across the large area sampled by the ASP network. 3 figs.

  11. Genus-Specific Primers for Study of Fusarium Communities in Field Samples

    Science.gov (United States)

    Edel-Hermann, Véronique; Gautheron, Nadine; Durling, Mikael Brandström; Kolseth, Anna-Karin; Steinberg, Christian; Persson, Paula; Friberg, Hanna

    2015-01-01

    Fusarium is a large and diverse genus of fungi of great agricultural and economic importance, containing many plant pathogens and mycotoxin producers. To date, high-throughput sequencing of Fusarium communities has been limited by the lack of genus-specific primers targeting regions with high discriminatory power at the species level. In the present study, we evaluated two Fusarium-specific primer pairs targeting translation elongation factor 1 (TEF1). We also present the new primer pair Fa+7/Ra+6. Mock Fusarium communities reflecting phylogenetic diversity were used to evaluate the accuracy of the primers in reflecting the relative abundance of the species. TEF1 amplicons were subjected to 454 high-throughput sequencing to characterize Fusarium communities. Field samples from soil and wheat kernels were included to test the method on more-complex material. For kernel samples, a single PCR was sufficient, while for soil samples, nested PCR was necessary. The newly developed primer pairs Fa+7/Ra+6 and Fa/Ra accurately reflected Fusarium species composition in mock DNA communities. In field samples, 47 Fusarium operational taxonomic units were identified, with the highest Fusarium diversity in soil. The Fusarium community in soil was dominated by members of the Fusarium incarnatum-Fusarium equiseti species complex, contradicting findings in previous studies. The method was successfully applied to analyze Fusarium communities in soil and plant material and can facilitate further studies of Fusarium ecology. PMID:26519387

  12. Analysis of intraosseous samples in endotoxemic shock--an experimental study in the anaesthetised pig.

    Science.gov (United States)

    Strandberg, G; Larsson, A; Lipcsey, M; Berglund, L; Eriksson, M

    2014-03-01

    Intraosseous (IO) access is used in emergency situations to allow rapid initiation of treatment. IO access is also sometimes used for blood sampling, although data on accuracy of such sampling in critical illness are limited. There is also a potential risk that bone marrow fragments in IO samples may damage laboratory equipment. It is ethically questionable to perform a simultaneous comparison between IO and arterial/venous sampling in critically ill humans. We have, thus, studied the analytical performance of IO sampling in a porcine septic shock model using a cartridge-based analyser. Eight pigs with endotoxin-induced septic shock were sampled hourly for 6 h, and analysed for blood gases, acid base status, haemoglobin, glucose and lactate using point of care instruments. Samples were taken from three IO cannulae (tibia bilaterally, one with infusion, and humerus), one arterial and one venous. An interaction test was used to assess changes in agreement between methods over time. Bland–Altman plots were constructed to study bias between methods. There were, to a varying extent, differences between IO and arterial/venous levels for all studied variables, but agreement did not change significantly during the experiment. A general finding was a large dispersion of differences between methods. IO sample values should be treated with caution in this setting but may add useful information to the clinical picture. The tibia or humerus may be used for sampling. IO infusion decreases agreement, thus sampling during infusion should be avoided.
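
    The Bland–Altman analysis used above has a simple arithmetic core; a minimal sketch (a generic re-implementation, not the authors' code) computes the bias and 95% limits of agreement between two paired measurement methods, e.g. IO versus arterial values:

    ```python
    import numpy as np

    def bland_altman(a, b):
        """Bias and 95% limits of agreement between two paired measurement methods."""
        diff = np.asarray(a, float) - np.asarray(b, float)
        bias = diff.mean()                 # mean difference between methods
        sd = diff.std(ddof=1)              # SD of the differences
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    # Hypothetical paired lactate values (mmol/L): intraosseous vs arterial
    io_vals  = [2.1, 3.4, 4.0, 5.2]
    art_vals = [2.0, 3.0, 3.8, 4.9]
    bias, (lo, hi) = bland_altman(io_vals, art_vals)
    ```

    A large dispersion of differences, as reported in the study, shows up as wide limits of agreement even when the bias itself is small.
    
    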

  13. Sampling in schools and large institutional buildings: Implications for regulations, exposure and management of lead and copper.

    Science.gov (United States)

    Doré, Evelyne; Deshommes, Elise; Andrews, Robert C; Nour, Shokoufeh; Prévost, Michèle

    2018-04-21

    Legacy lead and copper components are ubiquitous in the plumbing of large buildings, including schools that serve children most vulnerable to lead exposure. Lead and copper samples must be collected after varying stagnation times and interpreted in reference to different thresholds. A total of 130 outlets (fountains, bathroom and kitchen taps) were sampled for dissolved and particulate lead as well as copper. Sampling was conducted at 8 schools and 3 institutional (non-residential) buildings served by municipal water of varying corrosivity, with and without corrosion control (CC), and without a lead service line. Samples included first draw following overnight stagnation (>8 h), partial (30 s) and full (5 min) flushing, and first draw after 30 min of stagnation. Total lead concentrations in first draw samples after overnight stagnation varied widely from 0.07 to 19.9 μg Pb/L (median: 1.7 μg Pb/L) for large buildings served with non-corrosive water. Higher concentrations were observed in schools with corrosive water without CC (0.9-201 μg Pb/L, median: 14.3 μg Pb/L), while levels in schools with CC ranged from 0.2 to 45.1 μg Pb/L (median: 2.1 μg Pb/L). Partial flushing (30 s) and full flushing (5 min) reduced concentrations by 88% and 92%, respectively, for corrosive waters without CC. Lead concentrations after 30 min of stagnation were 45% lower than values in first draw samples collected after overnight stagnation. Concentrations of particulate Pb varied widely (≥0.02-846 μg Pb/L) and were found to be the cause of the very high total Pb concentrations in the 2% of samples exceeding 50 μg Pb/L. Pb levels across outlets within the same building varied widely (up to 1000X), especially in corrosive water (0.85-851 μg Pb/L after 30 min of stagnation), confirming the need to sample at each outlet to identify high-risk taps. Based on the much higher concentrations observed in first draw samples, even after a short stagnation, the first 250 mL should be discarded unless no sources

  14. Acceptance of Driverless Vehicles: Results from a Large Cross-National Questionnaire Study

    OpenAIRE

    Nordhoff, Sina; de Winter, Joost; Kyriakidis, Miltos; van Arem, Bart; Happee, Riender

    2018-01-01

    Shuttles that operate without an onboard driver are currently being developed and tested in various projects worldwide. However, there is a paucity of knowledge on the determinants of acceptance of driverless shuttles in large cross-national samples. In the present study, we surveyed 10,000 respondents on the acceptance of driverless vehicles and sociodemographic characteristics, using a 94-item online questionnaire. After data filtering, data of 7,755 respondents from 116 countries were reta...

  15. Sample-based Attribute Selective AnDE for Large Data

    DEFF Research Database (Denmark)

    Chen, Shenglei; Martinez, Ana; Webb, Geoffrey

    2017-01-01

    More and more applications have come with large data sets in the past decade. However, existing algorithms are not guaranteed to scale well on large data. Averaged n-Dependence Estimators (AnDE) allows for flexible learning from out-of-core data, by varying the value of n (number of super parents). Henc...

  16. The long-term association of OCD and depression and its moderators: A four-year follow up study in a large clinical sample.

    Science.gov (United States)

    Tibi, L; van Oppen, P; van Balkom, A J L M; Eikelenboom, M; Rickelt, J; Schruers, K R J; Anholt, G E

    2017-07-01

    Depression is the most common comorbidity in obsessive-compulsive disorder (OCD). However, the mechanisms of depressive comorbidity in OCD are poorly understood. We assessed the directionality and moderators of the OCD-depression association over time in a large, prospective clinical sample of OCD patients. Data were drawn from 382 OCD patients participating in the Netherlands Obsessive-Compulsive Disorder Association (NOCDA) study. Cross-lagged, structural equation modeling analyses were used to assess the temporal association between OCD and depressive symptoms. Assessments were conducted at baseline, two-year and four-year follow up. Cognitive and interpersonal moderators of the prospective association between OCD and depressive symptoms were tested. Cross-lagged analyses demonstrated that OCD predicts depressive symptoms at two-year follow up and not vice versa. This relationship disappeared at four-year follow up. Secure attachment style moderated the prospective association between OCD and depression. Depressive comorbidity in OCD might constitute a functional consequence of the incapacitating OCD symptoms. Both OCD and depression symptoms demonstrated strong stability effects between two-year and four-year follow up, which may explain the lack of association between them in that period. Among OCD patients, secure attachment represents a buffer against future depressive symptoms. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  17. Human blood RNA stabilization in samples collected and transported for a large biobank

    Science.gov (United States)

    2012-01-01

    Background The Norwegian Mother and Child Cohort Study (MoBa) is a nation-wide population-based pregnancy cohort initiated in 1999, comprising more than 108,000 pregnancies recruited between 1999 and 2008. In this study we evaluated the feasibility of integrating RNA analyses into existing MoBa protocols. We compared two different blood RNA collection tube systems – the PAXgene™ Blood RNA system and the Tempus™ Blood RNA system – and assessed the effects of suboptimal blood volumes in collection tubes and of transportation of blood samples by standard mail. Endpoints to characterize the samples were RNA quality and yield, and the RNA transcript stability of selected genes. Findings High-quality RNA could be extracted from blood samples stabilized with both PAXgene and Tempus tubes. The RNA yields obtained from the blood samples collected in Tempus tubes were consistently higher than from PAXgene tubes. Higher RNA yields were obtained from cord blood (3 – 4 times) compared to adult blood with both types of tubes. Transportation of samples by standard mail had moderate effects on RNA quality and RNA transcript stability; the overall RNA quality of the transported samples was high. Some unexplained changes in gene expression were noted, which seemed to correlate with suboptimal blood volumes collected in the tubes. Temperature variations during transportation may also be of some importance. Conclusions Our results strongly suggest that special collection tubes are necessary for RNA stabilization and they should be used for establishing new biobanks. We also show that the 50,000 samples collected in the MoBa biobank provide RNA of high quality and in sufficient amounts to allow gene expression analyses for studying the association of disease with altered patterns of gene expression. PMID:22988904

  18. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    Science.gov (United States)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. 
Given that most previous
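
    The method-of-moments estimators compared above follow a standard recipe; the sketch below (a generic illustration with numpy, not the study's code) implements Matheron's classical estimator and, under the usual formula, the Cressie–Hawkins robust variant that down-weights large outliers:

    ```python
    import numpy as np

    def empirical_variogram(coords, values, bin_edges, robust=False):
        """Method-of-moments variogram estimate from scattered samples.

        Classical (Matheron): gamma(h) = mean(diff^2) / 2 per lag bin.
        Robust (Cressie-Hawkins): gamma(h) = mean(|diff|^0.5)^4
                                             / (2 * (0.457 + 0.494/N)).
        """
        coords = np.asarray(coords, float)
        values = np.asarray(values, float)
        i, j = np.triu_indices(len(values), k=1)            # all sample pairs
        dists = np.linalg.norm(coords[i] - coords[j], axis=1)
        diffs = values[i] - values[j]
        gamma = np.full(len(bin_edges) - 1, np.nan)
        for b in range(len(bin_edges) - 1):
            sel = (dists >= bin_edges[b]) & (dists < bin_edges[b + 1])
            n = sel.sum()
            if n == 0:
                continue                                    # empty lag bin
            if robust:
                m = np.mean(np.sqrt(np.abs(diffs[sel])))
                gamma[b] = m ** 4 / (2 * (0.457 + 0.494 / n))
            else:
                gamma[b] = np.mean(diffs[sel] ** 2) / 2
        return gamma
    ```

    The sample-size recommendations above matter here: each lag bin needs enough pairs, and with the classical estimator a single throughfall outlier enters every bin it pairs into as a squared difference.
    
    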

  19. Generating samples for association studies based on HapMap data

    Directory of Open Access Journals (Sweden)

    Chen Yixuan

    2008-01-01

    Full Text Available Abstract Background With the completion of the HapMap project, a variety of computational algorithms and tools have been proposed for haplotype inference, tag SNP selection and genome-wide association studies. Simulated data are commonly used in evaluating these new developed approaches. In addition to simulations based on population models, empirical data generated by perturbing real data, has also been used because it may inherit specific properties from real data. However, there is no tool that is publicly available to generate large scale simulated variation data by taking into account knowledge from the HapMap project. Results A computer program (gs was developed to quickly generate a large number of samples based on real data that are useful for a variety of purposes, including evaluating methods for haplotype inference, tag SNP selection and association studies. Two approaches have been implemented to generate dense SNP haplotype/genotype data that share similar local linkage disequilibrium (LD patterns as those in human populations. The first approach takes haplotype pairs from samples as inputs, and the second approach takes patterns of haplotype block structures as inputs. Both quantitative and qualitative traits have been incorporated in the program. Phenotypes are generated based on a disease model, or based on the effect of a quantitative trait nucleotide, both of which can be specified by users. In addition to single-locus disease models, two-locus disease models have also been implemented that can incorporate any degree of epistasis. Users are allowed to specify all nine parameters in a 3 × 3 penetrance table. For several commonly used two-locus disease models, the program can automatically calculate penetrances based on the population prevalence and marginal effects of a disease that users can conveniently specify. Conclusion The program gs can effectively generate large scale genetic and phenotypic variation data that can be
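
    The 3 × 3 penetrance table idea can be illustrated with a short sketch (a generic re-implementation of the concept, not the gs program itself): given genotype codes at two loci and a user-specified penetrance table, affected status is drawn per individual, and any degree of epistasis is encoded in the nine entries.

    ```python
    import numpy as np

    def simulate_status(g1, g2, penetrance, rng=None):
        """Draw affected/unaffected status from a two-locus disease model.

        g1, g2     : genotype codes (0, 1, 2 risk-allele copies) per individual
        penetrance : 3x3 array, penetrance[i, j] = P(affected | g1 = i, g2 = j)
        """
        if rng is None:
            rng = np.random.default_rng()
        p = np.asarray(penetrance, float)[np.asarray(g1), np.asarray(g2)]
        return rng.random(p.shape) < p
    ```

    A single-locus model is the special case where the table's rows (or columns) are identical, so the status depends on one locus only.
    
    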

  20. Inconsistency in large pharmacogenomic studies

    DEFF Research Database (Denmark)

    Haibe-Kains, Benjamin; El-Hachem, Nehme; Birkbak, Nicolai Juul

    2013-01-01

    Two large-scale pharmacogenomic studies were published recently in this journal. Genomic data are well correlated between studies; however, the measured drug response data are highly discordant. Although the source of inconsistencies remains uncertain, it has potential implications for using...

  1. Scalability on LHS (Latin Hypercube Sampling) samples for use in uncertainty analysis of large numerical models

    International Nuclear Information System (INIS)

    Baron, Jorge H.; Nunez Mac Leod, J.E.

    2000-01-01

    The present paper deals with the utilization of advanced sampling statistical methods to perform uncertainty and sensitivity analysis on numerical models. Such models may represent physical phenomena, logical structures (such as boolean expressions) or other systems, and several of their intrinsic parameters and/or input variables are usually treated simultaneously as random variables. In the present paper a simple method to scale up Latin Hypercube Sampling (LHS) samples is presented, starting with a small sample and duplicating its size at each step, making it possible to re-use the numerical model results already obtained with the smaller sample. The method does not distort the statistical properties of the random variables and does not add any bias to the samples. The result is that a significant reduction in numerical model running time can be achieved (by re-using the previously run samples), while keeping all the advantages of LHS, until an acceptable representation level is achieved in the output variables. (author)
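
    The doubling step can be sketched as follows. This is a generic reconstruction from the abstract, not the authors' algorithm: after refining each dimension into 2n strata, every old point already occupies exactly one refined stratum, so the n new points are placed in the complementary strata and the combined set remains a valid LHS.

    ```python
    import numpy as np

    def lhs(n, d, rng):
        """Standard Latin Hypercube Sample: one point per stratum in each dimension."""
        x = np.empty((n, d))
        for j in range(d):
            x[:, j] = (rng.permutation(n) + rng.random(n)) / n
        return x

    def double_lhs(x, rng):
        """Double an existing LHS to size 2n, re-using all previously run points."""
        n, d = x.shape
        new = np.empty((n, d))
        for j in range(d):
            # Refined strata (width 1/(2n)) already taken by the old points
            occupied = np.floor(x[:, j] * 2 * n).astype(int)
            free = rng.permutation(np.setdiff1d(np.arange(2 * n), occupied))
            new[:, j] = (free + rng.random(n)) / (2 * n)
        return np.vstack([x, new])
    ```

    Each doubling re-uses every model run from the smaller sample, so only n new model evaluations are needed to reach size 2n, and no bias is introduced because the new points are uniform within previously empty strata.
    
    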

  2. Big Data, Small Sample.

    Science.gov (United States)

    Gerlovina, Inna; van der Laan, Mark J; Hubbard, Alan

    2017-05-20

    Multiple comparisons and small sample size, common characteristics of many types of "Big Data" including those that are produced by genomic studies, present specific challenges that affect reliability of inference. Use of multiple testing procedures necessitates calculation of very small tail probabilities of a test statistic distribution. Results based on large deviation theory provide a formal condition that is necessary to guarantee error rate control given practical sample sizes, linking the number of tests and the sample size; this condition, however, is rarely satisfied. Using methods that are based on Edgeworth expansions (relying especially on the work of Peter Hall), we explore the impact of departures of sampling distributions from typical assumptions on actual error rates. Our investigation illustrates how far the actual error rates can be from the declared nominal levels, suggesting potentially widespread problems with error rate control, specifically excessive false positives. This is an important factor that contributes to the "reproducibility crisis". We also review some other commonly used methods (such as permutation and methods based on finite sampling inequalities) in their application to multiple testing/small sample data. We point out that Edgeworth expansions, providing higher order approximations to the sampling distribution, offer a promising direction for data analysis that could improve reliability of studies relying on large numbers of comparisons with modest sample sizes.
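
    The gap between nominal and actual tail probabilities is easy to demonstrate by simulation (an illustration in the spirit of the abstract, not the paper's method): with skewed null data and n = 10, the two one-sided t-test error rates depart from the nominal 5% in opposite directions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, reps = 10, 200_000
    t_crit = 1.833  # one-sided 5% critical value of the t distribution, 9 df (table value)

    # Skewed null: exponential(1) shifted to have mean exactly 0
    x = rng.exponential(1.0, size=(reps, n)) - 1.0
    t = x.mean(axis=1) / (x.std(axis=1, ddof=1) / np.sqrt(n))

    upper = (t > t_crit).mean()    # nominally 0.05
    lower = (t < -t_crit).mean()   # nominally 0.05
    # Skewness makes the t statistic itself skewed, so one tail is inflated
    # and the other deflated, even though both are "5% tests" on paper.
    ```

    With the very small per-test levels used in multiple testing, the relative distortion in the far tails is worse still, which is the regime the Edgeworth-expansion analysis targets.
    
    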

  3. Nutritional status and dental caries in a large sample of 4- and 5 ...

    African Journals Online (AJOL)

    Background. Evidence from studies involving small samples of children in Africa, India and South America suggests a higher dental caries rate in malnourished children. A comparison was done to evaluate wasting and stunting and their association with dental caries in four samples of South African children. Design.

  4. On the aspiration characteristics of large-diameter, thin-walled aerosol sampling probes at yaw orientations with respect to the wind

    International Nuclear Information System (INIS)

    Vincent, J.H.; Mark, D.; Smith, T.A.; Stevens, D.C.; Marshall, M.

    1986-01-01

    Experiments were carried out in a large wind tunnel to investigate the aspiration efficiencies of thin-walled aerosol sampling probes of large diameter (up to 50 mm) at orientations with respect to the wind direction ranging from 0 to 180 degrees. Sampling conditions ranged from sub- to super-isokinetic. The experiments employed test dusts of close-graded fused alumina and were conducted under conditions of controlled freestream turbulence. For orientations up to and including 90 degrees, the results were qualitatively and quantitatively consistent with a new physical model which takes account of the fact that the sampled air not only diverges or converges (depending on the relationship between wind speed and sampling velocity) but also turns to pass through the plane of the sampling orifice. The previously published results of Durham and Lundgren (1980) and Davies and Subari (1982) for smaller probes were also in good agreement with the new model. The model breaks down, however, for orientations greater than 90 degrees due to the increasing effect of particle impaction onto the blunt leading edge of the probe body. For the probe facing directly away from the wind (180 degree orientation), aspiration efficiency is dominated almost entirely by this effect. (author)

  5. Influence of population versus convenience sampling on sample characteristics in studies of cognitive aging.

    Science.gov (United States)

    Brodaty, Henry; Mothakunnel, Annu; de Vel-Palumbo, Melissa; Ames, David; Ellis, Kathryn A; Reppermund, Simone; Kochan, Nicole A; Savage, Greg; Trollor, Julian N; Crawford, John; Sachdev, Perminder S

    2014-01-01

    We examined whether differences in findings of studies examining mild cognitive impairment (MCI) were associated with recruitment methods by comparing sample characteristics in two contemporaneous Australian studies, using population-based and convenience sampling. The Sydney Memory and Aging Study invited participants randomly from the electoral roll in defined geographic areas in Sydney. The Australian Imaging, Biomarkers and Lifestyle Study of Ageing recruited cognitively normal (CN) individuals via media appeals and MCI participants via referrals from clinicians in Melbourne and Perth. Demographic and cognitive variables were harmonized, and similar diagnostic criteria were applied to both samples retrospectively. CN participants recruited via convenience sampling were younger, better educated, more likely to be married and have a family history of dementia, and performed better cognitively than those recruited via population-based sampling. MCI participants recruited via population-based sampling had better memory performance and were less likely to carry the apolipoprotein E ε4 allele than clinically referred participants but did not differ on other demographic variables. A convenience sample of normal controls is likely to be younger and better functioning and that of an MCI group likely to perform worse than a purportedly random sample. Sampling bias should be considered when interpreting findings. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Beyond the Usability Lab Conducting Large-scale Online User Experience Studies

    CERN Document Server

    Albert, William; Tullis, Thomas

    2010-01-01

    Usability testing and user experience research typically take place in a controlled lab with small groups. While this type of testing is essential to user experience design, more companies are also looking to test large sample sizes to be able compare data according to specific user populations and see how their experiences differ across user groups. But few usability professionals have experience in setting up these studies, analyzing the data, and presenting it in effective ways.  Online usability testing offers the solution by allowing testers to elicit feedback simultaneously from 1,0

  7. Highlights of a recycling behaviour study in South Africa’s large urban areas

    CSIR Research Space (South Africa)

    Strydom, WF

    2012-10-01

    Full Text Available behaviour study in South Africa's large urban areas WF STRYDOM CSIR Natural Resources and the Environment, PO Box 395, Pretoria 0001 Email: wstrydom@csir.co.za | www.csir.co.za INTRODUCTION The recently promulgated National Environmental Management... representative sample of 2 004 respondents in 11 of the larger South African urban areas, including all the metropolitan municipalities. The survey was conducted in November 2010, before the Waste Act was widely implemented. The objective of the study was...

  8. Examining the interrater reliability of the Hare Psychopathy Checklist-Revised across a large sample of trained raters.

    Science.gov (United States)

    Blais, Julie; Forth, Adelle E; Hare, Robert D

    2017-06-01

    The goal of the current study was to assess the interrater reliability of the Psychopathy Checklist-Revised (PCL-R) among a large sample of trained raters (N = 280). All raters completed PCL-R training at some point between 1989 and 2012 and subsequently provided complete coding for the same 6 practice cases. Overall, 3 major conclusions can be drawn from the results: (a) reliability of individual PCL-R items largely fell below any appropriate standards while the estimates for Total PCL-R scores and factor scores were good (but not excellent); (b) the cases representing individuals with high psychopathy scores showed better reliability than did the cases of individuals in the moderate to low PCL-R score range; and (c) there was a high degree of variability among raters; however, rater specific differences had no consistent effect on scoring the PCL-R. Therefore, despite low reliability estimates for individual items, Total scores and factor scores can be reliably scored among trained raters. We temper these conclusions by noting that scoring standardized videotaped case studies does not allow the rater to interact directly with the offender. Real-world PCL-R assessments typically involve a face-to-face interview and much more extensive collateral information. We offer recommendations for new web-based training procedures. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
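
    Interrater reliability of the kind reported above is commonly summarized with an intraclass correlation; as a hedged illustration (a generic one-way random-effects ICC(1), not necessarily the coefficient this study used), the ANOVA-based formula can be computed directly:

    ```python
    import numpy as np

    def icc1(ratings):
        """One-way random-effects ICC(1) for an (n targets x k raters) matrix."""
        r = np.asarray(ratings, float)
        n, k = r.shape
        grand = r.mean()
        # Between-target and within-target mean squares from one-way ANOVA
        msb = k * ((r.mean(axis=1) - grand) ** 2).sum() / (n - 1)
        msw = ((r - r.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
        return (msb - msw) / (msb + (k - 1) * msw)
    ```

    Perfect agreement across raters gives ICC = 1; when raters disagree more within a target than targets differ from each other, the coefficient drops toward (or below) zero, matching the pattern of low item-level but good total-score reliability.
    
    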

  9. Treatability studies on different refinery wastewater samples using high-throughput microbial electrolysis cells (MECs)

    KAUST Repository

    Ren, Lijiao; Siegert, Michael; Ivanov, Ivan; Pisciotta, John M.; Logan, Bruce E.

    2013-01-01

    High-throughput microbial electrolysis cells (MECs) were used to perform treatability studies on many different refinery wastewater samples all having appreciably different characteristics, which resulted in large differences in current generation. A de-oiled refinery wastewater sample from one site (DOW1) produced the best results, with 2.1±0.2A/m2 (maximum current density), 79% chemical oxygen demand removal, and 82% headspace biological oxygen demand removal. These results were similar to those obtained using domestic wastewater. Two other de-oiled refinery wastewater samples also showed good performance, with a de-oiled oily sewer sample producing less current. A stabilization lagoon sample and a stripped sour wastewater sample failed to produce appreciable current. Electricity production, organics removal, and startup time were improved when the anode was first acclimated to domestic wastewater. These results show mini-MECs are an effective method for evaluating treatability of different wastewaters. © 2013 Elsevier Ltd.

  11. Sampling design in large-scale vegetation studies: Do not sacrifice ecological thinking to statistical purism!

    Czech Academy of Sciences Publication Activity Database

    Roleček, J.; Chytrý, M.; Hájek, Michal; Lvončík, S.; Tichý, L.

    2007-01-01

    Roč. 42, - (2007), s. 199-208 ISSN 1211-9520 R&D Projects: GA AV ČR IAA6163303; GA ČR(CZ) GA206/05/0020 Grant - others:GA AV ČR(CZ) KJB601630504 Institutional research plan: CEZ:AV0Z60050516 Keywords : Ecological methodology * Large-scale vegetation patterns * Macroecology Subject RIV: EF - Botanics Impact factor: 1.133, year: 2007

  12. Genesis Solar Wind Collector Cleaning Assessment: Update on 60336 Sample Case Study

    Science.gov (United States)

    Goreva, Y. S.; Allums, K. K.; Gonzalez, C. P.; Jurewicz, A. J.; Burnett, D. S.; Allton, J. H.; Kuhlman, K. R.; Woolum, D.

    2015-01-01

    To maximize the scientific return of the Genesis solar wind return mission it is necessary to characterize and remove crash-derived particle and thin-film surface contamination. A small subset of Genesis mission collector fragments are being subjected to extensive study via various techniques. Here we present an update on sample 60336, a Czochralski silicon (Si-CZ) based wafer from the bulk array (B/C). This sample has undergone multiple cleaning steps (see the table below): UPW spin wash, aggressive chemical cleanings (including aqua regia, hot xylene and RCA1), as well as optical and chemical (EDS, ToF-SIMS) imaging. Contamination appeared on the surface of 60336 after the initial 2007 UPW cleaning. Aqua regia and hot xylene treatment (8/13/2013) did little to remove contaminants. The sample was UPW cleaned for the third time and imaged (9/16/13). The UPW removed the dark stains that were visible on the sample. However, some features, like "the Flounder" (a large, 100 micron feature in Fig. 1b), appeared largely intact, resisting all previous cleaning efforts. These features were likely from mobilized adhesive, derived from the Post-It notes used to stabilize samples for transport from Utah after the hard landing. To remove this contamination, an RCA step 1 organic cleaning (RCA1) was employed. Although we are still uncertain about the nature of the Flounder and why it is resistant to UPW and aqua regia/hot xylene treatment, we have found RCA1 to be suitable for its removal. It is likely that the glue from sticky pads used during collector recovery may have been a source for resistant organic contamination [9]; however, [8] shows that UPW reaction with crash-derived organic contamination does not make particle removal more difficult.

  13. Thinking about dying and trying and intending to die: results on suicidal behavior from a large Web-based sample.

    Science.gov (United States)

    de Araújo, Rafael M F; Mazzochi, Leonardo; Lara, Diogo R; Ottoni, Gustavo L

    2015-03-01

    Suicide is an important worldwide public health problem. The aim of this study was to characterize risk factors of suicidal behavior using a large Web-based sample. The data were collected by the Brazilian Internet Study on Temperament and Psychopathology (BRAINSTEP) from November 2010 to July 2011. Suicidal behavior was assessed by an instrument based on the Suicidal Behaviors Questionnaire. The final sample consisted of 48,569 volunteers (25.9% men) with a mean ± SD age of 30.7 ± 10.1 years. More than 60% of the sample reported having had at least a passing thought of killing themselves, and 6.8% of subjects had previously attempted suicide (64% unplanned). The demographic features with the highest risk of attempting suicide were female gender (OR = 1.82, 95% CI = 1.65 to 2.00); elementary school as highest education level completed (OR = 2.84, 95% CI = 2.48 to 3.25); being unable to work (OR = 5.32, 95% CI = 4.15 to 6.81); having no religion (OR = 2.08, 95% CI = 1.90 to 2.29); and, only for female participants, being married (OR = 1.19, 95% CI = 1.08 to 1.32) or divorced (OR = 1.66, 95% CI = 1.41 to 1.96). A family history of a suicide attempt and of a completed suicide showed the same increment in the risk of suicidal behavior. The higher the number of suicide attempts, the higher was the real intention to die (P < .05). Those who really wanted to die reported more emptiness/loneliness (OR = 1.58, 95% CI = 1.35 to 1.85), disconnection (OR = 1.54, 95% CI = 1.30 to 1.81), and hopelessness (OR = 1.74, 95% CI = 1.49 to 2.03), but their methods were not different from the methods of those who did not mean to die. This large Web survey confirmed results from previous studies on suicidal behavior and pointed out the relevance of the number of previous suicide attempts and of a positive family history, even for a noncompleted suicide, as important risk factors. © Copyright 2015 Physicians Postgraduate Press, Inc.
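
    Odds ratios with Wald 95% confidence intervals like those reported above follow from a 2 × 2 table via the standard log-odds formula; a quick sketch (illustrative only, not the study's code, with hypothetical counts):

    ```python
    import numpy as np

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Odds ratio and Wald 95% CI from a 2x2 table.

        a = exposed cases,   b = exposed non-cases,
        c = unexposed cases, d = unexposed non-cases.
        """
        or_ = (a * d) / (b * c)
        se_log = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
        lo = np.exp(np.log(or_) - z * se_log)
        hi = np.exp(np.log(or_) + z * se_log)
        return or_, lo, hi

    # Hypothetical counts: attempters/non-attempters by some exposure
    or_, lo, hi = odds_ratio_ci(20, 10, 10, 20)
    ```

    The very narrow intervals in the abstract (e.g. OR = 1.82, 95% CI 1.65 to 2.00) reflect the large cell counts a 48,569-respondent sample provides, since the log-OR standard error shrinks with the reciprocals of the cell counts.
    
    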

  14. A large study reveals no Association between APOE and Parkinson’s disease

    Science.gov (United States)

    Federoff, Monica; Jimenez-Rolando, Belen; Nalls, Michael A; Singleton, Andrew B

    2012-01-01

    Background Research focusing on the role of APOE in Parkinson’s disease (PD) has been largely inconclusive, creating a broad discrepancy in association studies. Objective To elucidate the role of APOE alleles in PD risk by studying a large sample size and controlling for population substructure. Patients and Methods In total, 3465 case and control samples were genotyped, obtained from the NINDS Neurogenetics repository. Results No significant differences in ε4 dosages exist between PD cases and controls. The frequency of ε4 carriers differed slightly between cases and controls at 24% (580/2412) and 26% (270/1053), respectively. Likewise, mean dosages of APOE ε2 were not significantly different between cases and controls. APOE ε2 carriers were observed at a frequency of 13.6% (329/2412) among cases and 15% (158/1053) among controls. Logistic regression models evaluating PD as possibly associated with ε4 or ε2 carrier status and allele dosages yielded no significant results. The mean MMSE score among all PD cases was 28.35 (SD = 2.58) and memory loss was reported in only 11.9% (105/879) of cases. Linear regression models comparing MMSE scores as predicted by ε4 or ε2 carrier status and allele dosages were not significant. Conclusions There is no association between APOE epsilon alleles and Parkinson’s disease. PMID:22349451

  15. EFFECTS OF LONG-TERM ALENDRONATE TREATMENT ON A LARGE SAMPLE OF PEDIATRIC PATIENTS WITH OSTEOGENESIS IMPERFECTA.

    Science.gov (United States)

    Lv, Fang; Liu, Yi; Xu, Xiaojie; Wang, Jianyi; Ma, Doudou; Jiang, Yan; Wang, Ou; Xia, Weibo; Xing, Xiaoping; Yu, Wei; Li, Mei

    2016-12-01

    Osteogenesis imperfecta (OI) is a group of inherited diseases characterized by reduced bone mass, recurrent bone fractures, and progressive bone deformities. Here, we evaluate the efficacy and safety of long-term treatment with alendronate in a large sample of Chinese children and adolescents with OI. In this prospective study, a total of 91 children and adolescents with OI were included. The patients received 3 years' treatment with 70 mg alendronate weekly and 500 mg calcium daily. During the treatment, fracture incidence, bone mineral density (BMD), and serum levels of the bone turnover biomarkers (alkaline phosphatase [ALP] and cross-linked C-telopeptide of type I collagen [β-CTX]) were evaluated. Linear growth speed and parameters of safety were also measured. After 3 years of treatment, the mean annual fracture incidence decreased from 1.2 ± 0.8 to 0.2 ± 0.3 (P < .05). OI = osteogenesis imperfecta; PTH = parathyroid hormone.

  16. CHRONICITY OF DEPRESSION AND MOLECULAR MARKERS IN A LARGE SAMPLE OF HAN CHINESE WOMEN.

    Science.gov (United States)

    Edwards, Alexis C; Aggen, Steven H; Cai, Na; Bigdeli, Tim B; Peterson, Roseann E; Docherty, Anna R; Webb, Bradley T; Bacanu, Silviu-Alin; Flint, Jonathan; Kendler, Kenneth S

    2016-04-25

    Major depressive disorder (MDD) has been associated with changes in mean telomere length and mitochondrial DNA (mtDNA) copy number. This study investigates if clinical features of MDD differentially impact these molecular markers. Data from a large, clinically ascertained sample of Han Chinese women with recurrent MDD were used to examine whether symptom presentation, severity, and comorbidity were related to salivary telomere length and/or mtDNA copy number (maximum N = 5,284 for both molecular and phenotypic data). Structural equation modeling revealed that duration of longest episode was positively associated with mtDNA copy number, while earlier age of onset of most severe episode and a history of dysthymia were associated with shorter telomeres. Other factors, such as symptom presentation, family history of depression, and other comorbid internalizing disorders, were not associated with these molecular markers. Chronicity of depressive symptoms is related to more pronounced telomere shortening and increased mtDNA copy number among individuals with a history of recurrent MDD. As these molecular markers have previously been implicated in physiological aging and morbidity, individuals who experience prolonged depressive symptoms are potentially at greater risk of adverse medical outcomes. © 2016 Wiley Periodicals, Inc.

  17. Treatability study sample exemption: update

    International Nuclear Information System (INIS)

    1997-01-01

    This document is a RCRA Information Brief intended to update the information in the 1991 Small-Scale Treatability Study Information Brief, and to address questions about the waste and treatability study sample exemptions that have arisen since References 3 and 5 were published

  18. Solid-Phase Extraction and Large-Volume Sample Stacking-Capillary Electrophoresis for Determination of Tetracycline Residues in Milk

    Directory of Open Access Journals (Sweden)

    Gabriela Islas

    2018-01-01

    Solid-phase extraction in combination with large-volume sample stacking-capillary electrophoresis (SPE-LVSS-CE) was applied to measure chlortetracycline, doxycycline, oxytetracycline, and tetracycline in milk samples. Under optimal conditions, the proposed method had a linear range of 29 to 200 µg·L−1, with limits of detection ranging from 18.6 to 23.8 µg·L−1 and inter- and intraday repeatabilities < 10% (as a relative standard deviation) in all cases. The enrichment factors obtained were from 50.33 to 70.85 for all the TCs compared with a conventional capillary zone electrophoresis (CZE). This method is adequate to analyze tetracyclines below the most restrictive established maximum residue limits. The proposed method was employed in the analysis of 15 milk samples from different brands. Two of the tested samples were positive for the presence of oxytetracycline with concentrations of 95 and 126 µg·L−1. SPE-LVSS-CE is a robust, easy, and efficient strategy for online preconcentration of tetracycline residues in complex matrices.

  19. Transitions in pregnancy planning in women recruited for a large prospective cohort study.

    Science.gov (United States)

    Luderer, U; Li, T; Fine, J P; Hamman, R F; Stanford, J B; Baker, D

    2017-06-01

    Do the rates at which women transition among different intensities of pregnancy planning vary with age, marital status and race/ethnicity? Rates of transition from low or moderate pregnancy probability groups (PPGs) to higher PPGs vary by age, marital status and race/ethnicity. The design of prospective studies of the effects of pre- and peri-conception exposures on fecundity, pregnancy and children's health is challenging because at any specific time only a small percentage of reproductive age women is attempting to conceive. To our knowledge, there has been no population-based, prospective study that repeatedly assessed pregnancy planning, which included women who were not already planning pregnancy at enrollment and whose ages spanned the female reproductive age range. A longitudinal study was carried out that repeatedly assessed pregnancy probability in 12 916 women for up to 21 months from January 2009 to September 2010. We analyzed data from the National Children's Study Vanguard Study, a pilot study for a large-scale epidemiological birth cohort study of children and their parents. During the Vanguard Study, investigators followed population-based samples of reproductive age women in each of seven geographically dispersed and diverse study locations over time to identify when they sought to become pregnant, providing a unique opportunity to prospectively assess changes in pregnancy planning in a large sample of US women. At study entry and each follow-up contact, which occurred at 1, 3 or 6 month intervals depending on PPG, a questionnaire was used to assess behavior dimensions of pregnancy planning to assign women to low, moderate, high non-tryer and high tryer PPGs. Crude rates of pregnancy increased with higher assigned PPG, validating the utility of the instrument. The initial PPG and probabilities of transitioning from low or moderate PPG to higher PPG or pregnancy varied with age, marital status and race/ethnicity. 
Women aged 25 to planning behavior in

  20. [Saarland Growth Study: sampling design].

    Science.gov (United States)

    Danker-Hopfe, H; Zabransky, S

    2000-01-01

    The use of reference data to evaluate the physical development of children and adolescents is part of the daily routine in the paediatric outpatient clinic. The construction of such references is based on the collection of extensive reference data. There are different kinds of reference data: cross-sectional references, which are based on data collected from a large representative cross-sectional sample of the population; longitudinal references, which are based on follow-up surveys of usually smaller samples of individuals from birth to maturity; and mixed longitudinal references, which are a combination of longitudinal and cross-sectional reference data. The advantages and disadvantages of the different methods of data collection and the resulting reference data are discussed. The Saarland Growth Study was conducted for several reasons: growth processes are subject to secular changes, there are no specific reference data for children and adolescents from this part of the country, and the growth charts in use in paediatric practice are possibly no longer appropriate. The Saarland Growth Study therefore served two purposes: (a) to create current regional reference data, and (b) to create a database for future studies on secular trends in growth processes of children and adolescents from Saarland. The present contribution focusses on general remarks on the sampling design of (cross-sectional) growth surveys and its inferences for the design of the present study.

  1. Charged-particle track analysis, thermoluminescence and microcratering studies of lunar samples

    International Nuclear Information System (INIS)

    Durrani, S.A.

    1977-01-01

    Studies of lunar samples (from both Apollo and Luna missions) have been carried out, using track analysis and thermoluminescence (t.l.) techniques, with a view to shedding light on the radiation and temperature histories of the Moon. In addition, microcraters in lunar glasses have been studied in order to elucidate the cosmic-dust impact history of the lunar regolith. In track studies, the topics discussed include the stabilizing effect of the thermal annealing of fossil tracks due to the lunar temperature cycle; the 'radiation annealing' of fresh heavy-ion tracks by large doses of protons (to simulate the effect of lunar radiation damage on track registration); and correction factors for the anisotropic etching of crystals, which are required in reconstructing the exposure history of lunar grains. An abundance ratio of ca. (1.1 ± 0.3) × 10⁻³ has been obtained, by the differential annealing technique, for the nuclei beyond the iron group relative to those within that group in the cosmic rays incident on the Moon. The natural t.l. of lunar samples has been used to estimate their effective storage temperature and mean depth below the surface. Natural and artificially produced microcraters have also been studied. (author)

  2. High-throughput genotyping assay for the large-scale genetic characterization of Cryptosporidium parasites from human and bovine samples.

    Science.gov (United States)

    Abal-Fabeiro, J L; Maside, X; Llovo, J; Bello, X; Torres, M; Treviño, M; Moldes, L; Muñoz, A; Carracedo, A; Bartolomé, C

    2014-04-01

    The epidemiological study of human cryptosporidiosis requires the characterization of species and subtypes involved in human disease in large sample collections. Molecular genotyping is costly and time-consuming, making the implementation of low-cost, highly efficient technologies increasingly necessary. Here, we designed a protocol based on MALDI-TOF mass spectrometry for the high-throughput genotyping of a panel of 55 single nucleotide variants (SNVs) selected as markers for the identification of common gp60 subtypes of four Cryptosporidium species that infect humans. The method was applied to a panel of 608 human and 63 bovine isolates and the results were compared with control samples typed by Sanger sequencing. The method allowed the identification of species in 610 specimens (90·9%) and of gp60 subtype in 605 (90·2%). It displayed excellent performance, with sensitivity and specificity values of 87·3 and 98·0%, respectively. Up to nine genotypes from four different Cryptosporidium species (C. hominis, C. parvum, C. meleagridis and C. felis) were detected in humans; the most common ones were C. hominis subtype Ib and C. parvum IIa (61·3 and 28·3%, respectively). 96·5% of the bovine samples were typed as IIa. The method performs as well as the widely used Sanger sequencing and is more cost-effective and less time-consuming.
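    The reported sensitivity and specificity follow directly from the assay's confusion counts against the Sanger reference; a minimal sketch (the counts below are hypothetical, chosen only to reproduce values close to those reported):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion counts; the study's raw counts are not given here
sens, spec = sensitivity_specificity(tp=96, fn=14, tn=98, fp=2)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```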

  3. Assessing the validity of single-item life satisfaction measures: results from three large samples.

    Science.gov (United States)

    Cheung, Felix; Lucas, Richard E

    2014-12-01

    The present paper assessed the validity of single-item life satisfaction measures by comparing single-item measures to the Satisfaction with Life Scale (SWLS), a more psychometrically established measure. Two large samples from Washington (N = 13,064) and Oregon (N = 2,277) recruited by the Behavioral Risk Factor Surveillance System and a representative German sample (N = 1,312) recruited by the German Socio-Economic Panel were included in the present analyses. Single-item life satisfaction measures and the SWLS were correlated with theoretically relevant variables, such as demographics, subjective health, domain satisfaction, and affect. The correlations between the two life satisfaction measures and these variables were examined to assess the construct validity of single-item life satisfaction measures. Consistent across three samples, single-item life satisfaction measures demonstrated a substantial degree of criterion validity with the SWLS (zero-order r = 0.62-0.64; disattenuated r = 0.78-0.80). Patterns of statistical significance for correlations with theoretically relevant variables were the same across single-item measures and the SWLS. Single-item measures did not produce systematically different correlations compared to the SWLS (average difference = 0.001-0.005). The average absolute difference in the magnitudes of the correlations produced by single-item measures and the SWLS was very small (average absolute difference = 0.015-0.042). Single-item life satisfaction measures performed very similarly compared to the multiple-item SWLS. Social scientists would get virtually identical answers to substantive questions regardless of which measure they use.
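    The disattenuated correlations reported here are an instance of Spearman's correction for attenuation, which divides the observed correlation by the square root of the product of the two measures' reliabilities. A short sketch (the reliability values are assumptions for illustration, not figures from the paper):

```python
import math

def disattenuated_r(r_xy, rel_x, rel_y):
    """Spearman's correction: estimated true-score correlation."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Assumed reliabilities (e.g. single item ~0.72, SWLS ~0.87) for illustration
print(round(disattenuated_r(0.63, 0.72, 0.87), 2))
```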

  4. A large-scale study of the random variability of a coding sequence: a study on the CFTR gene.

    Science.gov (United States)

    Modiano, Guido; Bombieri, Cristina; Ciminelli, Bianca Maria; Belpinati, Francesca; Giorgi, Silvia; Georges, Marie des; Scotet, Virginie; Pompei, Fiorenza; Ciccacci, Cinzia; Guittard, Caroline; Audrézet, Marie Pierre; Begnini, Angela; Toepfer, Michael; Macek, Milan; Ferec, Claude; Claustres, Mireille; Pignatti, Pier Franco

    2005-02-01

    Coding single nucleotide substitutions (cSNSs) have been studied on hundreds of genes using small samples (n(g) approximately 100-150 genes). In the present investigation, a large random European population sample (average n(g) approximately 1500) was studied for a single gene, the CFTR (Cystic Fibrosis Transmembrane conductance Regulator). The nonsynonymous (NS) substitutions exhibited, in accordance with previous reports, a mean probability of being polymorphic (q > 0.005), much lower than that of the synonymous (S) substitutions, but they showed a similar rate of subpolymorphic (q < 0.005) variability. This indicates that, in autosomal genes that may have harmful recessive alleles (nonduplicated genes with important functions), genetic drift overwhelms selection in the subpolymorphic range of variability, making disadvantageous alleles behave as neutral. These results imply that the majority of the subpolymorphic nonsynonymous alleles of these genes are selectively negative or even pathogenic.

  5. Transport studies in p-type double quantum well samples

    International Nuclear Information System (INIS)

    Hyndman, R.J.

    2000-01-01

    The motivation for the study of double quantum well samples is that the extra spatial degree of freedom can modify the ground state energies of the system, leading to new and interesting many-body effects. Electron bi-layers have been widely studied, but the work presented here is the first systematic study of the transport properties of a p-type double quantum well system. The samples, grown on the (311) plane, consisted of two 100 Å GaAs wells separated by a 30 Å AlAs barrier. The thin barrier in our structures gives rise to very strong inter-layer Coulombic interactions, but in contrast to electron double quantum well samples, tunnelling between the two wells is very weak. This is due to the large effective mass of holes compared with electrons. It is possible to accurately control the total density of a sample and the relative occupancy of each well using front and back gates. A systematic study of the magnetoresistance properties of the p-type bi-layers was carried out at low temperatures and in high magnetic fields, for samples covering a range of densities. Considerable care was required to obtain reliable results, as the samples were extremely susceptible to electrical shock and were prone to drift slowly in density over time. With balanced wells, the very low tunnelling in the p-type bi-layer leads to a complete absence of all odd integers in both resistance and thermopower except for the ν = 1 state (ν = 1/2 in each layer), where ν is the total Landau level filling factor. Unlike other FQHE features, the ν = 1 state strengthens with increased density as inter-layer interactions increase in strength over intra-layer interactions. The state is also destroyed at a critical temperature, which is much lower than the measured activation temperature. This is taken as evidence for the finite-temperature phase transition predicted for the bi-layer ν = 1 state. From the experimental observations, we construct a phase diagram for the state, which agrees closely with theoretical predictions.

  6. Evaluation of Sampling Methods for Bacillus Spore ...

    Science.gov (United States)

    Following a wide-area release of biological materials, mapping the extent of contamination is essential for orderly response and decontamination operations. HVAC filters process large volumes of air and therefore collect highly representative particulate samples in buildings. HVAC filter extraction may have great utility in rapidly estimating the extent of building contamination following a large-scale incident. However, until now, no studies have been conducted comparing the two most appropriate sampling approaches for HVAC filter materials: direct extraction and vacuum-based sampling.

  7. A simulative comparison of respondent driven sampling with incentivized snowball sampling--the "strudel effect".

    Science.gov (United States)

    Gyarmathy, V Anna; Johnston, Lisa G; Caplinskiene, Irma; Caplinskas, Saulius; Latkin, Carl A

    2014-02-01

    Respondent driven sampling (RDS) and incentivized snowball sampling (ISS) are two sampling methods that are commonly used to reach people who inject drugs (PWID). We generated a set of simulated RDS samples on an actual sociometric ISS sample of PWID in Vilnius, Lithuania ("original sample") to assess if the simulated RDS estimates were statistically significantly different from the original ISS sample prevalences for HIV (9.8%), Hepatitis A (43.6%), Hepatitis B (Anti-HBc 43.9% and HBsAg 3.4%), Hepatitis C (87.5%), syphilis (6.8%) and Chlamydia (8.8%) infections and for selected behavioral risk characteristics. The original sample consisted of a large component of 249 people (83% of the sample) and 13 smaller components with 1-12 individuals. Generally, as long as all seeds were recruited from the large component of the original sample, the simulation samples simply recreated the large component. There were no significant differences between the large component and the entire original sample for the characteristics of interest. Altogether 99.2% of 360 simulation sample point estimates were within the confidence interval of the original prevalence values for the characteristics of interest. When population characteristics are reflected in large network components that dominate the population, RDS and ISS may produce samples that have statistically non-different prevalence values, even though some isolated network components may be under-sampled and/or statistically significantly different from the main groups. This so-called "strudel effect" is discussed in the paper. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  8. Scanning tunneling spectroscopy under large current flow through the sample.

    Science.gov (United States)

    Maldonado, A; Guillamón, I; Suderow, H; Vieira, S

    2011-07-01

    We describe a method for scanning tunneling microscopy/spectroscopy imaging at very low temperatures while driving a constant electric current of up to some tens of mA through the sample. It provides a new local probe, which we term current-driven scanning tunneling microscopy/spectroscopy. We show spectroscopic and topographic measurements under the application of a current in superconducting Al and NbSe2 at 100 mK. Prospective applications of this local imaging method include local vortex motion experiments and Doppler-shift studies of the local density of states.

  9. Characterizing the zenithal night sky brightness in large territories: how many samples per square kilometre are needed?

    Science.gov (United States)

    Bará, Salvador

    2018-01-01

    A recurring question arises when trying to characterize, by means of measurements or theoretical calculations, the zenithal night sky brightness throughout a large territory: how many samples per square kilometre are needed? The optimum sampling distance should allow reconstructing, with sufficient accuracy, the continuous zenithal brightness map across the whole region, whilst at the same time avoiding unnecessary and redundant oversampling. This paper attempts to provide some tentative answers to this issue, using two complementary tools: the luminance structure function and the Nyquist-Shannon spatial sampling theorem. The analysis of several regions of the world, based on the data from the New world atlas of artificial night sky brightness, suggests that, as a rule of thumb, about one measurement per square kilometre could be sufficient for determining the zenithal night sky brightness of artificial origin at any point in a region to within ±0.1 mag arcsec⁻² (in the root-mean-square sense) of its true value in the Johnson-Cousins V band. The exact reconstruction of the zenithal night sky brightness maps from samples taken at the Nyquist rate seems to be considerably more demanding.
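    The rule of thumb above follows from the Nyquist-Shannon criterion: to reconstruct a band-limited brightness map exactly, sample spacing must not exceed half the smallest spatial period present in the map. A minimal sketch (the 2 km smallest period is an illustrative assumption, not a value from the paper):

```python
def max_sampling_spacing(min_period_km):
    """Nyquist-Shannon: spacing must be at most half the smallest period."""
    return min_period_km / 2.0

def samples_per_km2(spacing_km):
    """Samples per square kilometre on a square grid of the given spacing."""
    return 1.0 / spacing_km ** 2

spacing = max_sampling_spacing(2.0)   # 1.0 km spacing
print(samples_per_km2(spacing))       # one sample per square kilometre
```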

  10. Biological Sampling Variability Study

    Energy Technology Data Exchange (ETDEWEB)

    Amidan, Brett G. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hutchison, Janine R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-11-08

    There are many sources of variability in the sample collection and analysis process. This paper addresses many, but not all, of them. The main focus was to better understand and estimate variability due to differences between samplers. Variability between days was also studied, as well as random variability within each sampler. Experiments were performed using multiple surface materials (ceramic and stainless steel), multiple contaminant concentrations (10 spores and 100 spores), and with and without the presence of interfering material. All testing was done with sponge sticks on 10-inch by 10-inch coupons. Bacillus atrophaeus was used as the BA surrogate. Spores were deposited using wet deposition. Grime was coated onto the coupons designated to include the interfering material (Section 3.3). Samples were prepared and analyzed at PNNL using the CDC protocol (Section 3.4) and then cultured and counted. Five samplers were trained so that samples were taken using the same protocol. Each sampler randomly sampled eight coupons each day: four coupons with 10 spores deposited and four with 100 spores deposited. Each day consisted of one material being tested. The clean samples (no interfering materials) were run first, followed by the dirty samples (coated with interfering material). There was a significant difference in recovery efficiency between the coupons with 10 spores deposited (mean of 48.9%) and those with 100 spores deposited (mean of 59.8%). There was no general significant difference between the clean and dirty (containing interfering material) coupons or between the two surface materials; however, there was a significant interaction between concentration amount and presence of interfering material. The recovery efficiency was close to the same for coupons with 10 spores deposited, but for the coupons with 100 spores deposited, the recovery efficiency for the dirty samples was significantly larger (65
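    The headline comparison is a difference of group mean recovery efficiencies; a minimal sketch (the per-coupon values are synthetic, chosen only to reproduce the reported group means of 48.9% and 59.8%):

```python
from statistics import mean

# Synthetic recovery efficiencies (%); not the report's raw data
low_conc = [47.2, 50.1, 46.8, 51.5, 48.9]    # coupons with 10 spores
high_conc = [58.3, 61.0, 59.2, 60.7, 59.8]   # coupons with 100 spores

diff = mean(high_conc) - mean(low_conc)
print(f"mean recovery: {mean(low_conc):.1f}% vs {mean(high_conc):.1f}% "
      f"(difference {diff:.1f} percentage points)")
```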

  11. Methodology for Quantitative Analysis of Large Liquid Samples with Prompt Gamma Neutron Activation Analysis using Am-Be Source

    International Nuclear Information System (INIS)

    Idiri, Z.; Mazrou, H.; Beddek, S.; Amokrane, A.

    2009-01-01

    An optimized set-up for prompt gamma neutron activation analysis (PGNAA) with an Am-Be source is described and used for the analysis of large liquid samples. A methodology for quantitative analysis is proposed: it consists of normalizing the prompt gamma count rates with thermal neutron flux measurements carried out with a He-3 detector and gamma attenuation factors calculated using MCNP-5. Both relative and absolute methods are considered. This methodology is then applied to the determination of cadmium in industrial phosphoric acid. The same sample is then analyzed by the inductively coupled plasma (ICP) method. Our results are in good agreement with those obtained with the ICP method.
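    The relative method described normalizes each prompt-gamma count rate by the measured thermal neutron flux and the calculated gamma attenuation factor, then scales against a reference of known concentration; a hedged sketch with entirely hypothetical values:

```python
def concentration_relative(rate_s, flux_s, att_s,
                           rate_r, flux_r, att_r, conc_r):
    """Relative PGNAA quantification: normalize sample and reference
    count rates by flux and attenuation, then scale by the reference
    concentration."""
    norm_sample = rate_s / (flux_s * att_s)
    norm_ref = rate_r / (flux_r * att_r)
    return conc_r * norm_sample / norm_ref

# Hypothetical count rates, fluxes, attenuation factors and reference concentration
conc = concentration_relative(120.0, 1.0e4, 0.85,
                              100.0, 1.0e4, 0.90, 50.0)
print(f"estimated concentration: {conc:.1f} (same units as reference)")
```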

  12. Sample preparation

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    Sample preparation prior to HPLC analysis is certainly one of the most important steps to consider in trace or ultratrace analysis. For many years scientists have tried to simplify the sample preparation process. It is rarely possible to inject a neat liquid sample, and sample preparation is seldom no more complex than dissolution of the sample in a given solvent. Dissolution alone can remove insoluble materials, which is especially helpful with samples in complex matrices if other interactions do not affect extraction. Here, it is very likely that a large number of components will not dissolve and are therefore eliminated by a simple filtration step. In most cases, however, sample preparation is not as simple as dissolving the component of interest. At times, enrichment is necessary: the component of interest is present in a very large volume or mass of material and needs to be concentrated in some manner so that a small volume of the concentrated or enriched sample can be injected into the HPLC. 88 refs

  13. Reliability of different sampling densities for estimating and mapping lichen diversity in biomonitoring studies

    International Nuclear Information System (INIS)

    Ferretti, M.; Brambilla, E.; Brunialti, G.; Fornasier, F.; Mazzali, C.; Giordani, P.; Nimis, P.L.

    2004-01-01

    Sampling requirements related to lichen biomonitoring include optimal sampling density for obtaining precise and unbiased estimates of population parameters and maps of known reliability. Two available datasets on a sub-national scale in Italy were used to determine a cost-effective sampling density to be adopted in medium-to-large-scale biomonitoring studies. As expected, the relative error in the mean Lichen Biodiversity (Italian acronym: BL) values and the error associated with the interpolation of BL values for (unmeasured) grid cells increased as the sampling density decreased. However, the increase in size of the error was not linear and even a considerable reduction (up to 50%) in the original sampling effort led to a far smaller increase in errors in the mean estimates (<6%) and in mapping (<18%) as compared with the original sampling densities. A reduction in the sampling effort can result in considerable savings of resources, which can then be used for a more detailed investigation of potentially problematic areas. It is, however, necessary to decide the acceptable level of precision at the design stage of the investigation, so as to select the proper sampling density. - An acceptable level of precision must be decided before determining a sampling design

  14. Monolith Chromatography as Sample Preparation Step in Virome Studies of Water Samples.

    Science.gov (United States)

    Gutiérrez-Aguirre, Ion; Kutnjak, Denis; Rački, Nejc; Rupar, Matevž; Ravnikar, Maja

    2018-01-01

    Viruses exist in aquatic media and many of them use this medium as a transmission route. Next-generation sequencing (NGS) technologies have opened new doors in virus research, also revealing a hidden diversity of viral species in aquatic environments. Not surprisingly, many of the newly discovered viruses are found in environmental fresh and marine waters. One problem in virome research is the low amount of viral nucleic acids present in a sample relative to the background nucleic acids (host, eukaryotic, prokaryotic, environmental). Therefore, virus enrichment prior to NGS is necessary in many cases. In water samples, an added problem is the low concentration of viruses typically present in aquatic media. Different concentration strategies have been used to overcome such limitations. CIM monoliths are a new generation of chromatographic supports that, owing to their particular structural characteristics, are very efficient in the concentration and purification of viruses. In this chapter, we describe the use of CIM monolithic chromatography as a sample preparation step in NGS studies targeting viruses in fresh or marine water. The step-by-step protocol includes a case study in which CIM concentration was used to study the virome of a wastewater sample using NGS.

  15. A large-scale study of epilepsy in Ecuador: methodological aspects.

    Science.gov (United States)

    Placencia, M; Suarez, J; Crespo, F; Sander, J W; Shorvon, S D; Ellison, R H; Cascante, S M

    1992-01-01

    The methodology is presented of a large-scale study of epilepsy carried out in a highland area of northern Ecuador, South America, covering a population of 72,121 people. The study was carried out in two phases. The first, cross-sectional phase consisted of a house-to-house survey of all persons in this population, screening for epileptic seizures using a specially designed questionnaire. Possible cases identified in screening were assessed in a cascade diagnostic procedure applied by general doctors and neurologists. Its objectives were: to establish a comprehensive epidemiological profile of epileptic seizures; to describe the clinical phenomenology of this condition in the community; to validate methods for diagnosis and classification of epileptic seizures by a non-specialised team; and to ascertain the community's knowledge, attitudes and practices regarding epilepsy. A sample was selected in this phase in order to study the social aspects of epilepsy in this community. The second, longitudinal phase assessed non-specialist care in the treatment of epilepsy. It consisted of a prospective clinical trial of antiepileptic therapy in untreated patients using two standard anti-epileptic drugs. Patients were followed for 12 months by a multidisciplinary team consisting of a primary health worker, rural doctor, neurologist, anthropologist, and psychologist. Standardised, reproducible instruments and methods were used. This study was carried out through co-operation between the medical profession, political agencies and the pharmaceutical industry, at an international level. We consider this a model for further large-scale studies of this type.

  16. HPV self-sampling in Japanese women: A feasibility study in a population with limited experience of tampon use.

    Science.gov (United States)

    Hanley, Sharon Jb; Fujita, Hiromasa; Yokoyama, Susumu; Kunisawa, Shiori; Tamakoshi, Akiko; Dong, Peixin; Kobayashi, Noriko; Watari, Hidemichi; Kudo, Masataka; Sakuragi, Noriaki

    2016-09-01

    Cervical cancer incidence and mortality are increasing in Japanese women under age 50. Screening uptake is low and proactive recommendations for human papillomavirus vaccination have been suspended. Other cervical cancer prevention initiatives are urgently needed. We assessed whether human papillomavirus self-sampling might be an acceptable alternative to physician-led screening, particularly in women with limited experience of tampon use. We also sought to identify any practical, logistical, or safety issues in women already attending for screening, before carrying out further large-scale studies in non-responders. In total, 203 women aged 20-49 attending their annual workplace health check in Sapporo, northern Japan, performed unsupervised human papillomavirus self-sampling before undergoing a physician-led cervical smear and human papillomavirus test, and completing a measure of acceptability for both tests. Ninety per cent of participants stated they would use self-sampling again. They found the instructions easy to follow and reported no issues with the usability of the self-sampling device. Compared with physician-led testing, women found self-sampling significantly less painful and less embarrassing, and could relax more (p < .05). Women lacked confidence that the test had been performed correctly, despite there being no unsatisfactory samples. No safety issues were reported. Self-sampling was highly acceptable in this population of women. They could perform the test safely unsupervised, but lacked confidence that the test had been carried out correctly. Japanese women need to be educated about the accuracy of human papillomavirus self-sampling, and further large-scale studies are necessary in non-responders. © The Author(s) 2016.

  17. Assessing the Validity of Single-item Life Satisfaction Measures: Results from Three Large Samples

    Science.gov (United States)

    Cheung, Felix; Lucas, Richard E.

    2014-01-01

    Purpose: The present paper assessed the validity of single-item life satisfaction measures by comparing them to the Satisfaction with Life Scale (SWLS), a more psychometrically established measure. Methods: Two large samples from Washington (N=13,064) and Oregon (N=2,277) recruited by the Behavioral Risk Factor Surveillance System (BRFSS) and a representative German sample (N=1,312) recruited by the German Socio-Economic Panel (GSOEP) were included in the present analyses. Single-item life satisfaction measures and the SWLS were correlated with theoretically relevant variables, such as demographics, subjective health, domain satisfaction, and affect. The correlations between the two life satisfaction measures and these variables were examined to assess the construct validity of single-item life satisfaction measures. Results: Consistent across the three samples, single-item life satisfaction measures demonstrated a substantial degree of criterion validity with the SWLS (zero-order r = 0.62–0.64; disattenuated r = 0.78–0.80). Patterns of statistical significance for correlations with theoretically relevant variables were the same across single-item measures and the SWLS. Single-item measures did not produce systematically different correlations compared to the SWLS (average difference = 0.001–0.005), and the absolute differences in the magnitudes of the correlations were very small (average absolute difference = 0.015–0.042). Conclusions: Single-item life satisfaction measures performed very similarly to the multiple-item SWLS. Social scientists would obtain virtually identical answers to substantive questions regardless of which measure they use. PMID:24890827
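    The disattenuated coefficients quoted above follow from the standard Spearman correction for attenuation, which divides the observed correlation by the geometric mean of the two measures' reliabilities. A minimal sketch; the reliability values below are hypothetical, not those reported for the BRFSS or GSOEP samples:

```python
from math import sqrt

def disattenuate(r, rel_x, rel_y):
    """Correct an observed correlation r for measurement error,
    given the reliabilities of the two measures (Spearman correction)."""
    return r / sqrt(rel_x * rel_y)
```

With an observed r of 0.6 and assumed reliabilities of 0.8 for each measure, the corrected estimate is 0.75, which mirrors the jump from the zero-order to the disattenuated values reported above.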

  18. Gasoline prices, gasoline consumption, and new-vehicle fuel economy: Evidence for a large sample of countries

    International Nuclear Information System (INIS)

    Burke, Paul J.; Nishitateno, Shuhei

    2013-01-01

    Countries differ considerably in the price drivers pay for gasoline. This paper uses data for 132 countries for the period 1995–2008 to investigate the implications of these differences for the consumption of gasoline for road transport. To address the potential for simultaneity bias, we use both a country's oil reserves and the international crude oil price as instruments for a country's average gasoline pump price. We obtain estimates of the long-run price elasticity of gasoline demand of between −0.2 and −0.5. Using newly available data for a sub-sample of 43 countries, we also find that higher gasoline prices induce consumers to substitute toward more fuel-efficient vehicles, with an estimated elasticity of +0.2. Despite the small size of our elasticity estimates, there is considerable scope for low-price countries to achieve gasoline savings and vehicle fuel-economy improvements by reducing gasoline subsidies and/or increasing gasoline taxes. - Highlights: ► We estimate the determinants of gasoline demand and new-vehicle fuel economy. ► Estimates are for a large sample of countries for the period 1995–2008. ► We instrument for gasoline prices using oil reserves and the world crude oil price. ► Gasoline demand and fuel economy are inelastic with respect to the gasoline price. ► Large energy efficiency gains are possible via higher gasoline prices.
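    Under the constant-elasticity demand curve that such estimates assume, the quantity response to a price change is (1 + ΔP/P)^ε − 1. A minimal sketch of that arithmetic, using illustrative numbers drawn from the elasticity range above (not the paper's actual calculations):

```python
def quantity_change(price_change, elasticity):
    """Fractional change in demand implied by a constant-elasticity
    demand curve: Q2/Q1 = (P2/P1) ** elasticity."""
    return (1.0 + price_change) ** elasticity - 1.0
```

For example, a 10% price rise combined with a long-run elasticity of −0.5 implies roughly a 4.7% fall in gasoline consumption.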

  19. Comparative Study of Seven Commercial Kits for Human DNA Extraction from Urine Samples Suitable for DNA Biomarker-Based Public Health Studies

    Science.gov (United States)

    El Bali, Latifa; Diman, Aurélie; Bernard, Alfred; Roosens, Nancy H. C.; De Keersmaecker, Sigrid C. J.

    2014-01-01

    Human genomic DNA extracted from urine could be an interesting tool for large-scale public health studies involving characterization of genetic variations or DNA biomarkers, as a result of the simple and noninvasive collection method. Such studies, involving many samples, require a rapid, easy, and standardized extraction protocol. Moreover, for practicability, urine must often be collected at a moment other than the first void and stored appropriately until analysis. The present study compared seven commercial kits to select the most appropriate urinary human DNA extraction procedure for epidemiological studies. DNA yield was determined using different quantification methods: two classical methods (NanoDrop and PicoGreen) and two species-specific real-time quantitative (q)PCR assays, because DNA extracted from urine contains not only human but also microbial DNA, which contributes substantially to the total DNA yield. In addition, the kits giving a good yield were also tested for the presence of PCR inhibitors. Further comparisons were performed regarding the sampling time and the storage conditions. Finally, as a proof of concept, an important gene related to smoking was genotyped using the developed tools. We could select one well-performing kit for human DNA extraction from urine suitable for molecular diagnostic real-time qPCR-based assays targeting genetic variations and applicable to large-scale studies. In addition, successful genotyping was possible using DNA extracted from urine stored at −20°C for several months, and an acceptable yield could also be obtained from urine collected at different moments during the day, which is particularly important for public health studies. PMID:25365790

  20. Comparative study of seven commercial kits for human DNA extraction from urine samples suitable for DNA biomarker-based public health studies.

    Science.gov (United States)

    El Bali, Latifa; Diman, Aurélie; Bernard, Alfred; Roosens, Nancy H C; De Keersmaecker, Sigrid C J

    2014-12-01

    Human genomic DNA extracted from urine could be an interesting tool for large-scale public health studies involving characterization of genetic variations or DNA biomarkers, as a result of the simple and noninvasive collection method. Such studies, involving many samples, require a rapid, easy, and standardized extraction protocol. Moreover, for practicability, urine must often be collected at a moment other than the first void and stored appropriately until analysis. The present study compared seven commercial kits to select the most appropriate urinary human DNA extraction procedure for epidemiological studies. DNA yield was determined using different quantification methods: two classical methods (NanoDrop and PicoGreen) and two species-specific real-time quantitative (q)PCR assays, because DNA extracted from urine contains not only human but also microbial DNA, which contributes substantially to the total DNA yield. In addition, the kits giving a good yield were also tested for the presence of PCR inhibitors. Further comparisons were performed regarding the sampling time and the storage conditions. Finally, as a proof of concept, an important gene related to smoking was genotyped using the developed tools. We could select one well-performing kit for human DNA extraction from urine suitable for molecular diagnostic real-time qPCR-based assays targeting genetic variations and applicable to large-scale studies. In addition, successful genotyping was possible using DNA extracted from urine stored at -20°C for several months, and an acceptable yield could also be obtained from urine collected at different moments during the day, which is particularly important for public health studies.

  1. Does Shyness Vary According to Attained Social Roles? Trends Across Age Groups in a Large British Sample.

    Science.gov (United States)

    Van Zalk, Nejra; Lamb, Michael E; Jason Rentfrow, Peter

    2017-12-01

    The current study investigated (a) how a composite measure of shyness comprising introversion and neuroticism relates to other well-known constructs involving social fears, and (b) whether mean levels of shyness vary for men and women depending on the adoption of various social roles. Study 1 used a sample of 211 UK participants aged 17-70 (64% female; M age = 47.90). Study 2 used data from a large cross-sectional data set of UK participants aged 17-70 (N target = 552,663; 64% female; M age = 34.19 years). Study 1 showed that shyness measured as a composite of introversion and neuroticism was highly correlated with other constructs involving social fears. Study 2 indicated that, controlling for various sociodemographic variables, women had higher levels of shyness than men. Men and women in employment had lower shyness levels than those who were not; among occupations, those in unskilled jobs had the highest and those in sales the lowest levels of shyness. Participants in relationships had lower levels of shyness than those not in relationships, but parenthood was not associated with shyness. Mean levels of shyness are likely to vary according to adopted social roles, gender, and age. © 2016 Wiley Periodicals, Inc.

  2. Cycling to School Is Associated With Lower BMI and Lower Odds of Being Overweight or Obese in a Large Population-Based Study of Danish Adolescents

    DEFF Research Database (Denmark)

    Østergaard, Lars; Grøntved, Anders; Bjørkelund Børrestad, Line Anita

    2012-01-01

    Previous studies have been inconclusive concerning the effect of active transport on BMI. Our objective was to investigate the association between travel mode and BMI in a large community-based sample of Danish adolescents.

  3. Statistical searches for microlensing events in large, non-uniformly sampled time-domain surveys: A test using palomar transient factory data

    Energy Technology Data Exchange (ETDEWEB)

    Price-Whelan, Adrian M.; Agüeros, Marcel A. [Department of Astronomy, Columbia University, 550 W 120th Street, New York, NY 10027 (United States); Fournier, Amanda P. [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93106 (United States); Street, Rachel [Las Cumbres Observatory Global Telescope Network, Inc., 6740 Cortona Drive, Suite 102, Santa Barbara, CA 93117 (United States); Ofek, Eran O. [Benoziyo Center for Astrophysics, Weizmann Institute of Science, 76100 Rehovot (Israel); Covey, Kevin R. [Lowell Observatory, 1400 West Mars Hill Road, Flagstaff, AZ 86001 (United States); Levitan, David; Sesar, Branimir [Division of Physics, Mathematics, and Astronomy, California Institute of Technology, Pasadena, CA 91125 (United States); Laher, Russ R.; Surace, Jason, E-mail: adrn@astro.columbia.edu [Spitzer Science Center, California Institute of Technology, Mail Stop 314-6, Pasadena, CA 91125 (United States)

    2014-01-20

    Many photometric time-domain surveys are driven by specific goals, such as searches for supernovae or transiting exoplanets, which set the cadence with which fields are re-imaged. In the case of the Palomar Transient Factory (PTF), several sub-surveys are conducted in parallel, leading to non-uniform sampling over its ∼20,000 deg² footprint. While the median 7.26 deg² PTF field has been imaged ∼40 times in the R band, ∼2300 deg² have been observed >100 times. We use PTF data to study the trade-off between searching for microlensing events in a survey whose footprint is much larger than that of typical microlensing searches, but with far-from-optimal time sampling. To examine the probability that microlensing events can be recovered in these data, we test statistics used on uniformly sampled data to identify variables and transients. We find that the von Neumann ratio performs best for identifying simulated microlensing events in our data. We develop a selection method using this statistic and apply it to data from fields with >10 R-band observations (1.1 × 10⁹ light curves), uncovering three candidate microlensing events. We lack simultaneous, multi-color photometry to confirm these as microlensing events. However, their number is consistent with predictions for the event rate in the PTF footprint over the survey's three years of operations, as estimated from near-field microlensing models. This work can help constrain all-sky event rate predictions and tests microlensing signal recovery in large data sets, which will be useful to future time-domain surveys, such as that planned with the Large Synoptic Survey Telescope.
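    The von Neumann ratio the authors select compares the mean squared successive difference of a light curve to its variance: uncorrelated noise gives values near 2, while a smooth excursion such as a microlensing bump drives the ratio well below 2. A minimal sketch on synthetic series (not PTF data):

```python
from statistics import variance

def von_neumann_ratio(x):
    """Mean squared successive difference divided by the sample variance.
    Values near 2 indicate uncorrelated scatter; values far below 2
    indicate a smooth, correlated signal (e.g. a microlensing bump)."""
    n = len(x)
    delta2 = sum((x[i + 1] - x[i]) ** 2 for i in range(n - 1)) / (n - 1)
    return delta2 / variance(x)
```

A monotonic ramp (a maximally smooth "event") yields a ratio near zero, whereas an alternating series (pure point-to-point scatter) yields a ratio near 4, bracketing the white-noise value of 2.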

  4. MZDASoft: a software architecture that enables large-scale comparison of protein expression levels over multiple samples based on liquid chromatography/tandem mass spectrometry.

    Science.gov (United States)

    Ghanat Bari, Mehrab; Ramirez, Nelson; Wang, Zhiwei; Zhang, Jianqiu Michelle

    2015-10-15

    Without accurate peak linking/alignment, only the expression levels of a small percentage of proteins can be compared across multiple samples in Liquid Chromatography/Mass Spectrometry/Tandem Mass Spectrometry (LC/MS/MS) due to the selective nature of tandem MS peptide identification. This greatly hampers biomedical research that aims at finding biomarkers for disease diagnosis, treatment, and the understanding of disease mechanisms. A recent algorithm, PeakLink, has allowed the accurate linking of LC/MS peaks without tandem MS identifications to their corresponding ones with identifications across multiple samples collected from different instruments, tissues and labs, which greatly enhanced the ability of comparing proteins. However, PeakLink cannot be implemented practically for large numbers of samples based on existing software architectures, because it requires access to peak elution profiles from multiple LC/MS/MS samples simultaneously. We propose a new architecture based on parallel processing, which extracts LC/MS peak features, and saves them in database files to enable the implementation of PeakLink for multiple samples. The software has been deployed in High-Performance Computing (HPC) environments. The core part of the software, MZDASoft Parallel Peak Extractor (PPE), can be downloaded with a user and developer's guide, and it can be run on HPC centers directly. The quantification applications, MZDASoft TandemQuant and MZDASoft PeakLink, are written in Matlab, which are compiled with a Matlab runtime compiler. A sample script that incorporates all necessary processing steps of MZDASoft for LC/MS/MS quantification in a parallel processing environment is available. The project webpage is http://compgenomics.utsa.edu/zgroup/MZDASoft. The proposed architecture enables the implementation of PeakLink for multiple samples. Significantly more (100%-500%) proteins can be compared over multiple samples with better quantification accuracy in test cases. 

  5. Feasibility of a large cohort study in sub-Saharan Africa assessed through a four-country study

    Directory of Open Access Journals (Sweden)

    Shona Dalal

    2015-05-01

    Background: Large prospective epidemiologic studies are vital in determining disease etiology and forming national health policy, yet such studies do not exist in sub-Saharan Africa (SSA), notwithstanding the growing burden of chronic diseases. Objective: We explored the feasibility of establishing a large-scale multicountry prospective study at five sites in four sub-Saharan countries. Design: Based on country-specific considerations of feasibility, Nigeria enrolled health care professionals, South Africa and Tanzania enrolled teachers, and Uganda enrolled village residents at one rural and one periurban site each. All sites used a 6-month follow-up period but different approaches to data collection, namely standardized questionnaires filled out by participants or face-to-face interviews. Results: We enrolled 1415 participants from five sites (range 200-489 per site) with a median age of 41 years. Approximately half had access to clean-burning cooking fuel and 70% to piped drinking water, yet 92% had access to a mobile phone. The prevalence of chronic diseases was 49% among 45- to 54-year-olds and was dominated by hypertension (21.7% overall, ranging from 4.5 to 31.2% across sites) and a serious injury in the past 12 months (12.4% overall). About 80% of participants indicated willingness to provide blood samples. At 6-month follow-up, 68% completed a questionnaire (45 to 96% across sites), with evidence that mobile phones were particularly useful. Conclusions: Our pilot study indicates that a large-scale prospective study in SSA is feasible, and the burden of chronic disease in SSA may already be substantial, necessitating urgent etiologic research and primary prevention.

  6. Systematic versus random sampling in stereological studies.

    Science.gov (United States)

    West, Mark J

    2012-12-01

    The sampling that takes place at all levels of an experimental design must be random if the estimate is to be unbiased in a statistical sense. There are two fundamental ways by which one can make a random sample of the sections and positions to be probed on the sections. Using a card-sampling analogy, one can pick any card at all out of a deck of cards. This is referred to as independent random sampling, because each card is sampled without reference to the position of the other cards. The other approach is to pick a card at random within the first interval of a fixed size and then take further cards at equal intervals through the deck. Systematic sampling along one axis of many biological structures is more efficient than independent random sampling, because most biological structures are not randomly organized. This article discusses the merits of systematic versus random sampling in stereological studies.
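    The card analogy can be made concrete in a few lines. A hedged sketch, assuming an ordered list of equally accessible items such as serial sections (the function names are illustrative, not from the article):

```python
import random

def independent_sample(items, k, rng=random):
    """'Pick any cards at all': every subset of size k is equally likely."""
    return rng.sample(items, k)

def systematic_sample(items, k, start=None, rng=random):
    """Random start within the first interval, then every interval-th
    item: the stereological 'card at equal intervals' scheme."""
    interval = len(items) // k
    if start is None:
        start = rng.randrange(interval)
    return items[start::interval][:k]
```

Both schemes are random (the systematic one through its random start), but the systematic sample spreads evenly along the axis, which is why it is more efficient for structures that are not randomly organized.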

  7. BROAD ABSORPTION LINE DISAPPEARANCE ON MULTI-YEAR TIMESCALES IN A LARGE QUASAR SAMPLE

    Energy Technology Data Exchange (ETDEWEB)

    Filiz Ak, N.; Brandt, W. N.; Schneider, D. P. [Department of Astronomy and Astrophysics, Pennsylvania State University, University Park, PA 16802 (United States); Hall, P. B. [Department of Physics and Astronomy, York University, 4700 Keele St., Toronto, Ontario M3J 1P3 (Canada); Anderson, S. F.; Gibson, R. R. [Astronomy Department, University of Washington, Seattle, WA 98195 (United States); Lundgren, B. F. [Department of Physics, Yale University, New Haven, CT 06511 (United States); Myers, A. D. [Department of Physics and Astronomy, University of Wyoming, Laramie, WY 82071 (United States); Petitjean, P. [Institut d' Astrophysique de Paris, Universite Paris 6, F-75014, Paris (France); Ross, Nicholas P. [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 92420 (United States); Shen Yue [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, MS-51, Cambridge, MA 02138 (United States); York, D. G. [Department of Astronomy and Astrophysics, and Enrico Fermi Institute, University of Chicago, 5640 S. Ellis Ave., Chicago, IL 60637 (United States); Bizyaev, D.; Brinkmann, J.; Malanushenko, E.; Oravetz, D. J.; Pan, K.; Simmons, A. E. [Apache Point Observatory, P.O. Box 59, Sunspot, NM 88349-0059 (United States); Weaver, B. A., E-mail: nfilizak@astro.psu.edu [Center for Cosmology and Particle Physics, New York University, New York, NY 10003 (United States)

    2012-10-01

    We present 21 examples of C IV broad absorption line (BAL) trough disappearance in 19 quasars selected from systematic multi-epoch observations of 582 bright BAL quasars (1.9 < z < 4.5) by the Sloan Digital Sky Survey-I/II (SDSS-I/II) and SDSS-III. The observations span 1.1-3.9 yr rest-frame timescales, longer than have been sampled in many previous BAL variability studies. On these timescales, ≈2.3% of C IV BAL troughs disappear and ≈3.3% of BAL quasars show a disappearing trough. These observed frequencies suggest that many C IV BAL absorbers spend on average at most a century along our line of sight to their quasar. Ten of the 19 BAL quasars showing C IV BAL disappearance have apparently transformed from BAL to non-BAL quasars; these are the first reported examples of such transformations. The BAL troughs that disappear tend to be those with small-to-moderate equivalent widths, relatively shallow depths, and high outflow velocities. Other non-disappearing C IV BALs in the nine objects having multiple troughs tend to weaken when one of them disappears, indicating a connection between the disappearing and non-disappearing troughs, even for velocity separations as large as 10,000-15,000 km s⁻¹. We discuss possible origins of this connection including disk-wind rotation and changes in shielding gas.

  8. Further examination of embedded performance validity indicators for the Conners' Continuous Performance Test and Brief Test of Attention in a large outpatient clinical sample.

    Science.gov (United States)

    Sharland, Michael J; Waring, Stephen C; Johnson, Brian P; Taran, Allise M; Rusin, Travis A; Pattock, Andrew M; Palcher, Jeanette A

    2018-01-01

    Assessing test performance validity is a standard clinical practice, and although studies have examined the utility of cognitive/memory measures, few have examined attention measures as indicators of performance validity beyond the Reliable Digit Span. The current study further investigates the classification probability of embedded Performance Validity Tests (PVTs) within the Brief Test of Attention (BTA) and the Conners' Continuous Performance Test (CPT-II) in a large clinical sample. This was a retrospective study of 615 patients consecutively referred for comprehensive outpatient neuropsychological evaluation. Non-credible performance was defined in two ways: failure on one or more PVTs and failure on two or more PVTs. Classification probability of the BTA and CPT-II against these non-credible groups was assessed, and sensitivity, specificity, positive predictive value, and negative predictive value were derived to identify clinically relevant cut-off scores. When failure on two or more PVTs was used as the indicator of non-credible responding, rather than failure on one or more PVTs, the highest classification probability, measured as area under the curve (AUC), was achieved by the BTA (AUC = .87 vs. .79). CPT-II Omission, Commission, and Total Errors also exhibited higher classification probability. Overall, these findings corroborate previous findings, extending them to a large clinical sample. The BTA and CPT-II are useful embedded performance validity indicators within a clinical battery but should not be used in isolation from other performance validity indicators.
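    Sensitivity, specificity, positive predictive value, and negative predictive value for a candidate cut-off all derive from the same 2×2 table of PVT failures against test classifications. A minimal sketch with hypothetical counts (not the study's data):

```python
def validity_metrics(tp, fn, fp, tn):
    """Standard diagnostic metrics for one cut-off score, where a
    'positive' case is a non-credible performance correctly flagged."""
    return {
        "sensitivity": tp / (tp + fn),   # flagged among truly non-credible
        "specificity": tn / (tn + fp),   # passed among truly credible
        "ppv": tp / (tp + fp),           # truly non-credible among flagged
        "npv": tn / (tn + fn),           # truly credible among passed
    }
```

Note that PPV and NPV, unlike sensitivity and specificity, shift with the base rate of non-credible responding, which is one reason cut-offs validated in one referral population need re-checking in another.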

  9. The study of chemical forms of mercury in human hair and other bio-environmental samples

    International Nuclear Information System (INIS)

    Kratzer, K.; Benes, P.

    1995-01-01

    The results of continued studies on methods of analysis for the forms of mercury in hair, and of the distribution of mercury among inorganic and organic forms in human hair, are described. A new method for determining methylmercury has been developed, based on the selective leaching of methylmercury from hair using 2 M hydrochloric acid. This method was used, in combination with the determination of mercury by atomic absorption spectrometry, for the analysis of mercury forms in three samples of human hair and two samples of fish homogenate. Good reproducibility of parallel determinations was obtained. The results were compared with literature data for the samples, or with the results of the analysis of the same samples by an extraction method described earlier, and good agreement was found between these methods. Further experiments concerned the effect of radiation sterilization on the forms of mercury in hair, the speciation of ²⁰³Hg formed by irradiation of hair in a nuclear reactor, and the labelling of a large batch of human hair with methylmercury. (author)

  10. Evaluation of bacterial motility from non-Gaussianity of finite-sample trajectories using the large deviation principle

    International Nuclear Information System (INIS)

    Hanasaki, Itsuo; Kawano, Satoyuki

    2013-01-01

    Motility of bacteria is usually recognized in the trajectory data and compared with Brownian motion, but the diffusion coefficient is insufficient to evaluate it. In this paper, we propose a method based on the large deviation principle. We show that it can be used to evaluate the non-Gaussian characteristics of model Escherichia coli motions and to distinguish combinations of the mean running duration and running speed that lead to the same diffusion coefficient. Our proposed method does not require chemical stimuli to induce the chemotaxis in a specific direction, and it is applicable to various types of self-propelling motions for which no a priori information of, for example, threshold parameters for run and tumble or head/tail direction is available. We also address the issue of the finite-sample effect on the large deviation quantities, but we propose to make use of it to characterize the nature of motility. (paper)

  11. Study of lead pollution in air, soil and water samples of Quetta city

    International Nuclear Information System (INIS)

    Khan, M.; Khan, G.M.; Akbar, S.; Panezai, M.A.; Haq, Z.U.

    2011-01-01

    This study briefly presents the collected data on lead pollution in the environment of Quetta City in Balochistan, Pakistan. Samples were collected from different sites, and lead was analyzed in underground water samples, in the exhaust of different vehicles, and in roadside and sewage soils from selected points of Quetta City. The average lead deposition from exhaust was 29.12 g/h for motorcycles, 44.47 g/h for cars, 176.54 g/h for wagons, and 141.52 g/h for buses; the maximum, 222.96 g/h, came from auto-rickshaws. The amount of lead in the smoke of different vehicles is quite high when extrapolated to the large number of such vehicles operating over a longer time. The concentration of lead in roadside soil varied from 73.3 mg/kg (T and T closed colony) to 731.9 mg/kg (Sirki road bus-stop), and the average content of lead in sewage soil of City Nala was 1250.6 mg/kg; these levels exceed WHO standards for such soils. The lead content in all 24 tube-well water samples was slightly above the WHO standard (10 μg/L). The results of this study are comparable to those of a similar study in the twin cities of Rawalpindi and Islamabad. (author)

  12. Comparison of blood RNA isolation methods from samples stabilized in Tempus tubes and stored at a large human biobank.

    Science.gov (United States)

    Aarem, Jeanette; Brunborg, Gunnar; Aas, Kaja K; Harbak, Kari; Taipale, Miia M; Magnus, Per; Knudsen, Gun Peggy; Duale, Nur

    2016-09-01

    More than 50,000 adult and cord blood samples were collected in Tempus tubes and stored at the Norwegian Institute of Public Health Biobank for future use. In this study, we systematically evaluated and compared five blood-RNA isolation protocols: three optimized for simultaneous isolation of all blood-RNA species (MagMAX RNA Isolation Kit, both manual and semi-automated protocols; and Norgen Preserved Blood RNA kit I) and two optimized for large RNAs only (Tempus Spin RNA, and Tempus 6-port isolation kit). We estimated the following parameters: RNA quality, RNA yield, processing time, cost per sample, and RNA transcript stability of six selected mRNAs and 13 miRNAs using real-time qPCR. Whole blood samples from adults (n = 59 tubes) and umbilical cord blood samples (n = 18 tubes) collected in Tempus tubes were analyzed. High-quality blood RNA with average RIN values above seven was extracted using all five RNA isolation protocols. The transcript levels of the six selected genes showed minimal variation between the five protocols. Unexplained differences in the transcript levels of the 13 miRNAs were observed; however, the 13 miRNAs showed the same expression direction and were within the same order of magnitude. Some differences in RNA processing time and cost were noted. Sufficient amounts of high-quality RNA were obtained using all five protocols, and the Tempus blood RNA system therefore does not appear to depend on one specific RNA isolation method.
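    Transcript-level comparisons of this kind are commonly summarized with the 2^(−ΔΔCt) method, which normalizes a target gene to a reference gene and expresses each condition relative to a calibrator (here, one isolation protocol against another). A hedged sketch of that standard calculation, not the paper's own pipeline; the Ct values below are invented:

```python
def fold_change(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Relative transcript level by the 2^-ddCt method: target gene
    normalized to a reference gene, relative to a calibrator sample."""
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** -ddct
```

A transcript whose normalized Ct is one cycle lower than in the calibrator comes out as a two-fold higher expression, which is the scale on which "same order of magnitude" comparisons across protocols are usually judged.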

  13. Sample size calculations for case-control studies

    Science.gov (United States)

    This R package can be used to calculate the required sample size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as a binary, ordinal, or continuous exposure, and can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.
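    The package itself performs multivariate logistic calculations in R, but the underlying idea can be illustrated with the classical two-proportion approximation for an unmatched design. A simplified sketch of that textbook formula, not the package's algorithm (it ignores confounders and matching):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p0, p1, alpha=0.05, power=0.80):
    """Approximate cases (and controls) needed to detect a difference
    in exposure prevalence p0 (controls) vs p1 (cases), two-sided test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_b = NormalDist().inv_cdf(power)           # critical value for power
    num = (z_a + z_b) ** 2 * (p0 * (1 - p0) + p1 * (1 - p1))
    return ceil(num / (p1 - p0) ** 2)
```

Detecting a rise in exposure prevalence from 40% in controls to 50% in cases at 80% power and a two-sided alpha of 0.05 requires roughly 385 subjects per group under this approximation; adjusting for confounders, as the package does, generally increases that number.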

  14. Defining And Characterizing Sample Representativeness For DWPF Melter Feed Samples

    Energy Technology Data Exchange (ETDEWEB)

    Shine, E. P.; Poirier, M. R.

    2013-10-29

    statisticians used carefully thought out designs that systematically and economically provided plans for data collection from the DWPF process. Key shared features of the sampling designs used at DWPF and the Gy sampling methodology were the specification of a standard for sample representativeness, an investigation that produced data from the process to study the sampling function, and a decision framework used to assess whether the specification was met based on the data. Without going into detail with regard to the seven errors identified by Pierre Gy, as excellent summaries are readily available such as Pitard [1989] and Smith [2001], SRS engineers understood, for example, that samplers can be biased (Gy's extraction error), and developed plans to mitigate those biases. Experiments that compared installed samplers with more representative samples obtained directly from the tank may not have resulted in systematically partitioning sampling errors into the now well-known error categories of Gy, but did provide overall information on the suitability of sampling systems. Most of the designs in this report are related to the DWPF vessels, not the large SRS Tank Farm tanks. Samples from the DWPF Slurry Mix Evaporator (SME), which contains the feed to the DWPF melter, are characterized using standardized analytical methods with known uncertainty. The analytical error is combined with the established error from sampling and processing in DWPF to determine the melter feed composition. This composition is used with the known uncertainty of the models in the Product Composition Control System (PCCS) to ensure that the wasteform that is produced is comfortably within the acceptable processing and product performance region. Having the advantage of many years of processing that meets the waste glass product acceptance criteria, the DWPF process has provided a considerable amount of data about itself in addition to the data from many special studies. 

  15. Defining And Characterizing Sample Representativeness For DWPF Melter Feed Samples

    International Nuclear Information System (INIS)

    Shine, E. P.; Poirier, M. R.

    2013-01-01

    statisticians used carefully thought out designs that systematically and economically provided plans for data collection from the DWPF process. Key shared features of the sampling designs used at DWPF and the Gy sampling methodology were the specification of a standard for sample representativeness, an investigation that produced data from the process to study the sampling function, and a decision framework used to assess whether the specification was met based on the data. Without going into detail with regard to the seven errors identified by Pierre Gy, as excellent summaries are readily available such as Pitard [1989] and Smith [2001], SRS engineers understood, for example, that samplers can be biased (Gy's extraction error), and developed plans to mitigate those biases. Experiments that compared installed samplers with more representative samples obtained directly from the tank may not have resulted in systematically partitioning sampling errors into the now well-known error categories of Gy, but did provide overall information on the suitability of sampling systems. Most of the designs in this report are related to the DWPF vessels, not the large SRS Tank Farm tanks. Samples from the DWPF Slurry Mix Evaporator (SME), which contains the feed to the DWPF melter, are characterized using standardized analytical methods with known uncertainty. The analytical error is combined with the established error from sampling and processing in DWPF to determine the melter feed composition. This composition is used with the known uncertainty of the models in the Product Composition Control System (PCCS) to ensure that the wasteform that is produced is comfortably within the acceptable processing and product performance region. Having the advantage of many years of processing that meets the waste glass product acceptance criteria, the DWPF process has provided a considerable amount of data about itself in addition to the data from many special studies. 
Demonstrating representative sampling

  16. Mars Sample Return Architecture Assessment Study

    Science.gov (United States)

    Centuori, S.; Hermosín, P.; Martín, J.; De Zaiacomo, G.; Colin, S.; Godfrey, A.; Myles, J.; Johnson, H.; Sachdev, T.; Ahmed, R.

    2018-04-01

    This paper presents the results of the ESA-funded activity "Mars Sample Return Architecture Assessment Study", carried out by DEIMOS Space, Lockheed Martin UK Ampthill, and MDA Corporation, in which more than 500 mission design options were studied.

  17. Reinforced dynamics for enhanced sampling in large atomic and molecular systems

    Science.gov (United States)

    Zhang, Linfeng; Wang, Han; E, Weinan

    2018-03-01

    A new approach for efficiently exploring the configuration space and computing the free energy of large atomic and molecular systems is proposed, motivated by an analogy with reinforcement learning. There are two major components in this new approach. Like metadynamics, it allows for an efficient exploration of the configuration space by adding an adaptively computed biasing potential to the original dynamics. Like deep reinforcement learning, this biasing potential is trained on the fly using deep neural networks, with data collected judiciously from the exploration and an uncertainty indicator from the neural network model playing the role of the reward function. Parameterization using neural networks makes it feasible to handle cases with a large set of collective variables. This has the potential advantage that selecting precisely the right set of collective variables has now become less critical for capturing the structural transformations of the system. The method is illustrated by studying the full-atom explicit solvent models of alanine dipeptide and tripeptide, as well as the system of a polyalanine-10 molecule with 20 collective variables.

  18. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    Science.gov (United States)

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields like perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, statistically significant but practically meaningless effects could be detected. This implies that researchers who could not obtain large enough effect sizes would use larger samples to obtain significant results.
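
    The trade-off this abstract describes — large effects missed by small samples, trivial effects flagged by large ones — follows directly from the power function of a standard significance test. A minimal sketch using the normal approximation for a two-tailed two-sample t-test (an illustration of the statistic, not the authors' computation):

```python
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-tailed two-sample t-test for
    Cohen's effect size d, using the normal approximation."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2)          # critical value
    ncp = d * (n_per_group / 2) ** 0.5     # noncentrality parameter
    # probability the test statistic falls outside +/- z
    return nd.cdf(ncp - z) + nd.cdf(-ncp - z)

# A large effect (d = 0.8) with only 15 subjects per group is underpowered,
# while a tiny effect (d = 0.2) with 500 per group is detected almost surely.
small_n = power_two_sample(0.8, 15)
big_n = power_two_sample(0.2, 500)
```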

  19. The 4-vessel Sampling Approach to Integrative Studies of Human Placental Physiology In Vivo.

    Science.gov (United States)

    Holme, Ane M; Holm, Maia B; Roland, Marie C P; Horne, Hildegunn; Michelsen, Trond M; Haugen, Guttorm; Henriksen, Tore

    2017-08-02

    The human placenta is highly inaccessible for research while still in utero. The current understanding of human placental physiology in vivo is therefore largely based on animal studies, despite the high diversity among species in placental anatomy, hemodynamics and duration of the pregnancy. The vast majority of human placenta studies are ex vivo perfusion studies or in vitro trophoblast studies. Although in vitro studies and animal models are essential, extrapolation of the results from such studies to the human placenta in vivo is uncertain. We aimed to study human placenta physiology in vivo at term, and present a detailed protocol of the method. Exploiting the intraabdominal access to the uterine vein just before the uterine incision during planned cesarean section, we collect blood samples from the incoming and outgoing vessels on the maternal and fetal sides of the placenta. When combining concentration measurements from blood samples with volume blood flow measurements, we are able to quantify placental and fetal uptake and release of any compound. Furthermore, placental tissue samples from the same mother-fetus pairs can provide measurements of transporter density and activity and other aspects of placental functions in vivo. Through this integrative use of the 4-vessel sampling method we are able to test some of the current concepts of placental nutrient transfer and metabolism in vivo, both in normal and pathological pregnancies. Furthermore, this method enables the identification of substances secreted by the placenta to the maternal circulation, which could be an important contribution to the search for biomarkers of placenta dysfunction.

  20. Large Truck Crash Causation Study (LTCCS)

    Data.gov (United States)

    Department of Transportation — The Large Truck* Crash Causation Study (LTCCS) is based on a three-year data collection project conducted by the Federal Motor Carrier Safety Administration (FMCSA)...

  1. Clustering of risk-related modifiable behaviours and their association with overweight and obesity among a large sample of youth in the COMPASS study.

    Science.gov (United States)

    Laxer, Rachel E; Brownson, Ross C; Dubin, Joel A; Cooke, Martin; Chaurasia, Ashok; Leatherdale, Scott T

    2017-01-21

    Canadian youth exhibit a number of risky behaviours, some of which are associated with overweight and obesity. The purpose of this study was to examine the prevalence of 15 modifiable risk behaviours in a large sample of Canadian youth, to identify underlying subgroups based on patterns of health behaviours, and to examine the association between identified subgroups and overweight/obesity. Data from 18,587 grades 9-12 students in Year 1 (2012-13) of the COMPASS study and latent class analysis were used to identify patterns and clustering among 15 health behaviours (e.g., physical inactivity, sedentary behaviour, unhealthy eating, substance use). A logistic regression model examined the associations between these clusters and overweight/obesity status. Four distinct classes were identified: traditional school athletes, inactive screenagers, health conscious, and moderately active substance users. Each behavioural cluster demonstrated a distinct pattern of behaviours, some with a greater number of risk factors than others. Traditional school athletes (odds ratio (OR) 1.15, 95% CI 1.03-1.29), inactive screenagers (OR 1.33; 1.19-1.48), and moderately active substance users (OR 1.27; 1.14-1.43) were all significantly more likely to be overweight/obese compared to the health conscious group. Four distinct subpopulations of youth were identified based on their patterns of health and risk behaviours. The three clusters demonstrating poorer health behaviour were all at an increased risk of being overweight/obese compared to their somewhat healthier peers. Obesity-related public health interventions and health promotion efforts might be more effective if consideration is given to population segments with certain behavioural patterns, targeting subgroups at greatest risk of overweight or obesity.
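
    The odds ratios above come from a logistic regression model; for a single binary exposure, the unadjusted odds ratio and its Wald confidence interval can be sketched from a 2×2 table. The counts below are illustrative placeholders, not COMPASS data:

```python
from math import exp, log, sqrt
from statistics import NormalDist

def odds_ratio_ci(a, b, c, d, alpha=0.05):
    """Odds ratio and Wald CI from a 2x2 table:
    a = exposed cases,   b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)     # SE of log(OR)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    lo = exp(log(or_) - z * se)
    hi = exp(log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: overweight/obese vs. not, by behavioural cluster.
or_, lo, hi = odds_ratio_ci(10, 20, 5, 40)
```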

  2. Reliability and statistical power analysis of cortical and subcortical FreeSurfer metrics in a large sample of healthy elderly.

    Science.gov (United States)

    Liem, Franziskus; Mérillat, Susan; Bezzola, Ladina; Hirsiger, Sarah; Philipp, Michel; Madhyastha, Tara; Jäncke, Lutz

    2015-03-01

    FreeSurfer is a tool to quantify cortical and subcortical brain anatomy automatically and noninvasively. Previous studies have reported reliability and statistical power analyses in relatively small samples or only selected one aspect of brain anatomy. Here, we investigated reliability and statistical power of cortical thickness, surface area, volume, and the volume of subcortical structures in a large sample (N=189) of healthy elderly subjects (64+ years). Reliability (intraclass correlation coefficient) of cortical and subcortical parameters is generally high (cortical: ICCs>0.87, subcortical: ICCs>0.95). Surface-based smoothing increases reliability of cortical thickness maps, while it decreases reliability of cortical surface area and volume. Nevertheless, statistical power of all measures benefits from smoothing. When aiming to detect a 10% difference between groups, the number of subjects required to test effects with sufficient power over the entire cortex varies between cortical measures (cortical thickness: N=39, surface area: N=21, volume: N=81; 10mm smoothing, power=0.8, α=0.05). For subcortical regions this number is between 16 and 76 subjects, depending on the region. We also demonstrate the advantage of within-subject designs over between-subject designs. Furthermore, we publicly provide a tool that allows researchers to perform a priori power analysis and sensitivity analysis to help evaluate previously published studies and to design future studies with sufficient statistical power. Copyright © 2014 Elsevier Inc. All rights reserved.
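
    Intraclass correlations of the kind reported above (cortical ICCs > 0.87) quantify how much of the total variance lies between subjects rather than between repeated measurements of the same subject. A minimal one-way random-effects ICC(1,1) sketch; the study's actual reliability pipeline is more elaborate, so this only illustrates the statistic:

```python
def icc_oneway(ratings):
    """ICC(1,1) for a table ratings[subject][measurement],
    one-way random-effects model, k measurements per subject."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    # between-subject mean square
    ms_between = k * sum((sum(r) / k - grand) ** 2 for r in ratings) / (n - 1)
    # within-subject mean square
    ss_within = sum((x - sum(r) / k) ** 2 for r in ratings for x in r)
    ms_within = ss_within / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Perfect test-retest agreement yields ICC = 1.0; small measurement
# noise relative to between-subject spread keeps ICC close to 1.
icc_perfect = icc_oneway([[1, 1], [2, 2], [3, 3]])
icc_noisy = icc_oneway([[1.0, 1.2], [2.0, 1.9], [3.0, 3.1]])
```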

  3. Characteristic Performance Evaluation of a new SAGe Well Detector for Small and Large Sample Geometries

    International Nuclear Information System (INIS)

    Adekola, A.S.; Colaresi, J.; Douwen, J.; Jaederstroem, H.; Mueller, W.F.; Yocum, K.M.; Carmichael, K.

    2015-01-01

    concentrations compared to Traditional Well detectors. The SAGe Well detectors are compatible with Marinelli beakers and compete very well with semi-planar and coaxial detectors for large samples in many applications. (authors)

  4. Characteristic Performance Evaluation of a new SAGe Well Detector for Small and Large Sample Geometries

    Energy Technology Data Exchange (ETDEWEB)

    Adekola, A.S.; Colaresi, J.; Douwen, J.; Jaederstroem, H.; Mueller, W.F.; Yocum, K.M.; Carmichael, K. [Canberra Industries Inc., 800 Research Parkway, Meriden, CT 06450 (United States)

    2015-07-01

    concentrations compared to Traditional Well detectors. The SAGe Well detectors are compatible with Marinelli beakers and compete very well with semi-planar and coaxial detectors for large samples in many applications. (authors)

  5. CASP10-BCL::Fold efficiently samples topologies of large proteins.

    Science.gov (United States)

    Heinze, Sten; Putnam, Daniel K; Fischer, Axel W; Kohlmann, Tim; Weiner, Brian E; Meiler, Jens

    2015-03-01

    During CASP10 in summer 2012, we tested BCL::Fold for prediction of free modeling (FM) and template-based modeling (TBM) targets. BCL::Fold assembles the tertiary structure of a protein from predicted secondary structure elements (SSEs), omitting more flexible loop regions early on. This approach enables the sampling of conformational space for larger proteins with more complex topologies. In preparation for CASP11, we analyzed the quality of CASP10 models throughout the prediction pipeline to understand BCL::Fold's ability to sample the native topology, identify native-like models by scoring and/or clustering approaches, and our ability to add loop regions and side chains to initial SSE-only models. The standout observation is that BCL::Fold sampled topologies with a GDT_TS score > 33% for 12 of 18 and with a topology score > 0.8 for 11 of 18 test cases de novo. Despite the sampling success of BCL::Fold, significant challenges still exist in the clustering and loop generation stages of the pipeline. The clustering approach employed for model selection often failed to identify the most native-like assembly of SSEs for further refinement and submission. It was also observed that for some β-strand proteins model refinement failed, as β-strands were not properly aligned to form hydrogen bonds, removing otherwise accurate models from the pool. Further, BCL::Fold frequently samples non-natural topologies that require loop regions to pass through the center of the protein. © 2015 Wiley Periodicals, Inc.

  6. Large-scale alcohol use and socioeconomic position of origin: longitudinal study from ages 15 to 19 years

    DEFF Research Database (Denmark)

    Andersen, Anette; Holstein, Bjørn E; Due, Pernille

    2008-01-01

    AIM: To examine socioeconomic position (SEP) of origin as predictor of large-scale alcohol use in adolescence. METHODS: The study population was a random sample of 15-year-olds at baseline (n=843) with a first follow-up 4 years later (n=729). Excess alcohol intake was assessed by consumption last...

  7. Sample size calculation in metabolic phenotyping studies.

    Science.gov (United States)

    Billoir, Elise; Navratil, Vincent; Blaise, Benjamin J

    2015-09-01

    The number of samples needed to identify significant effects is a key question in biomedical studies, with consequences for experimental designs, costs and potential discoveries. In metabolic phenotyping studies, sample size determination remains a complex step. This is due particularly to the multiple hypothesis-testing framework and the top-down hypothesis-free approach, with no a priori known metabolic target. Until now, no standard procedure has been available for this purpose. In this review, we discuss sample size estimation procedures for metabolic phenotyping studies. We release an automated implementation of the Data-driven Sample size Determination (DSD) algorithm for MATLAB and GNU Octave. Original research concerning DSD was published elsewhere. DSD allows the determination of an optimized sample size in metabolic phenotyping studies. The procedure uses analytical data only from a small pilot cohort to generate an expanded data set. The statistical recoupling of variables procedure is used to identify metabolic variables, and their intensity distributions are estimated by kernel smoothing or log-normal density fitting. Statistically significant metabolic variations are evaluated using the Benjamini-Yekutieli correction and processed for data sets of various sizes. Optimal sample size determination is achieved in a context of biomarker discovery (at least one statistically significant variation) or metabolic exploration (a maximum of statistically significant variations). The DSD toolbox is encoded in MATLAB R2008A (Mathworks, Natick, MA) for kernel and log-normal estimates, and in GNU Octave for log-normal estimates (kernel density estimates are not robust enough in GNU Octave). It is available at http://www.prabi.fr/redmine/projects/dsd/repository, with a tutorial at http://www.prabi.fr/redmine/projects/dsd/wiki. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
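
    The Benjamini-Yekutieli correction used by DSD controls the false discovery rate under arbitrary dependence between tests, which matters here because metabolic variables are strongly correlated. A minimal sketch of the step-up procedure (an illustration, not the DSD implementation):

```python
def benjamini_yekutieli(pvals, q=0.05):
    """Return the (sorted) indices of hypotheses rejected by the
    Benjamini-Yekutieli step-up FDR procedure at level q.
    Valid under arbitrary dependence between the tests."""
    m = len(pvals)
    c_m = sum(1.0 / i for i in range(1, m + 1))   # harmonic number H_m
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    # largest rank whose p-value falls under the BY threshold
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= rank * q / (m * c_m):
            k = rank
    return sorted(order[:k])

# Hypothetical p-values from five metabolic variables.
rejected = benjamini_yekutieli([0.001, 0.2, 0.03, 0.8, 0.004])
```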

  8. Does Decision Quality (Always) Increase with the Size of Information Samples? Some Vicissitudes in Applying the Law of Large Numbers

    Science.gov (United States)

    Fiedler, Klaus; Kareev, Yaakov

    2006-01-01

    Adaptive decision making requires that contingencies between decision options and their relative assets be assessed accurately and quickly. The present research addresses the challenging notion that contingencies may be more visible from small than from large samples of observations. An algorithmic account for such a seemingly paradoxical effect…

  9. Toward Rapid Unattended X-ray Tomography of Large Planar Samples at 50-nm Resolution

    International Nuclear Information System (INIS)

    Rudati, J.; Tkachuk, A.; Gelb, J.; Hsu, G.; Feng, Y.; Pastrick, R.; Lyon, A.; Trapp, D.; Beetz, T.; Chen, S.; Hornberger, B.; Seshadri, S.; Kamath, S.; Zeng, X.; Feser, M.; Yun, W.; Pianetta, P.; Andrews, J.; Brennan, S.; Chu, Y. S.

    2009-01-01

    X-ray tomography at sub-50 nm resolution of small areas (∼15 μm × 15 μm) is routinely performed with both laboratory and synchrotron sources. Optics and detectors for laboratory systems have been optimized to approach the theoretical efficiency limit. Limited by the availability of relatively low-brightness laboratory X-ray sources, exposure times for 3-D data sets at 50 nm resolution are still many hours up to a full day. However, for bright synchrotron sources, the use of these optimized imaging systems results in extremely short exposure times, approaching live-camera speeds at the Advanced Photon Source at Argonne National Laboratory near Chicago in the US. These speeds make it possible to acquire a full tomographic dataset at 50 nm resolution in less than a minute of true X-ray exposure time. However, limits in the control and positioning system lead to large overhead that currently results in typical exposure times of ∼15 min. We present our work on the reduction and elimination of system overhead and toward complete automation of the data acquisition process. The enhancements underway are primarily to boost the scanning rate, sample positioning speed, and illumination homogeneity to performance levels necessary for unattended tomography of large areas (many mm² in size). We present first results on this ongoing project.

  10. Sample holder for studying temperature dependent particle guiding

    International Nuclear Information System (INIS)

    Bereczky, R.J.; Toekesi, K.; Kowarik, G.; Aumayr, F.

    2011-01-01

    Complete text of publication follows. The so-called guiding effect is a complex process involving the interplay of a large number of charged particles with a solid. Although many research groups have joined this field and carried out various experiments with insulator capillaries, many details of the interactions are still unknown. We investigated the temperature dependence of the guiding since it opens new possibilities both for a fundamental understanding of the guiding phenomenon and for applications. For the temperature dependent guiding experiments a completely new heatable sample holder was designed. We developed and built such a heatable sample holder to make accurate and reproducible studies of the temperature dependence of the ion guiding effect possible. The target holder (for an exploded view see Fig. 1) consists of two main parts, the front and the back plates. The two plates of the sample holder, which function as an oven, are made of copper. These parts surround the capillary in order to guarantee a uniform temperature along the whole tube. The temperature of the copper parts is monitored by a K-type thermocouple. Stainless steel coaxial heaters surrounding the oven are used for heating. The heating power, up to a few watts, is regulated by a PID controller. Cooling of the capillary is achieved by a copper feed-through connected to a liquid nitrogen bath outside the UHV chamber. This solution allows us to change the temperature of the sample from −30 °C up to 90 °C. Our experiments with this newly developed temperature regulated capillary holder show that the glass temperature (i.e. conductivity) can be used to control the guiding properties of the glass capillary and adjust the conditions from guiding at room temperature to simple geometrical transmission at elevated temperatures. This holds the promise to investigate the effect of conductivity on particle transport (build-up and removal of charge patches) through capillaries in more detail

  11. Neuromuscular dose-response studies: determining sample size.

    Science.gov (United States)

    Kopman, A F; Lien, C A; Naguib, M

    2011-02-01

    Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10-20%. The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasingly larger sample sizes (e.g. an 'n' of 37 for a ±12% error). On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance.
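
    The abstract's arithmetic can be approximated with the standard sample-size formula for estimating a mean to within a given relative error, driven by the coefficient of variation. The sketch below uses the normal approximation; the authors' one-sample t-test gives slightly larger numbers (n = 24 rather than 22 for a ±15% error at COV = 25%), since the t quantile exceeds the z quantile at small n:

```python
from math import ceil
from statistics import NormalDist

def n_for_relative_error(cov, rel_error, alpha=0.05, power=0.80):
    """Sample size so a mean (e.g. an ED50) is estimated to within
    +/- rel_error, given coefficient of variation cov.
    Normal (z) approximation to the one-sample two-tailed t-test."""
    z = NormalDist().inv_cdf
    n = ((z(1 - alpha / 2) + z(power)) * cov / rel_error) ** 2
    return ceil(n)

# COV = 25%, allowable error +/-15%, power 80%: z-approximation gives 22;
# the exact t-based calculation in the abstract gives 24.
n_15pct = n_for_relative_error(0.25, 0.15)
n_12pct = n_for_relative_error(0.25, 0.12)   # tighter error -> larger n
```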

  12. Study of phosphorus determination in biological samples

    International Nuclear Information System (INIS)

    Oliveira, Rosangela Magda de.

    1994-01-01

    In this paper, phosphorus determination by neutron activation analysis in milk and bone samples was studied employing both instrumental and radiochemical separation methods. The analysis with radiochemical separation consisted of the simultaneous irradiation of the samples and standards during 30 minutes, dissolution of the samples with addition of a hold-back carrier, precipitation of phosphorus with ammonium phosphomolybdate (A.M.P.), and counting of phosphorus-32 using a Geiger-Mueller detector. The instrumental analysis consisted of the simultaneous irradiation of the samples and standards during 30 minutes, transfer of the samples into a counting planchet and measurement of the beta radiation emitted by phosphorus-32, after a suitable decay period. After the phosphorus analysis methods were established, they were applied to both commercial milk and animal bone samples, and the data obtained by the instrumental and radiochemical separation methods were compared for each sample. In this work, it became possible to obtain analysis methods for phosphorus that can be applied regardless of the sample quantity available, the phosphorus content of the samples, or interferences that may be present in them. (author). 51 refs., 7 figs., 4 tabs

  13. Dental anomalies: prevalence and associations between them in a large sample of non-orthodontic subjects, a cross-sectional study.

    Science.gov (United States)

    Laganà, G; Venza, N; Borzabadi-Farahani, A; Fabi, F; Danesi, C; Cozza, P

    2017-03-11

    To analyze the prevalence and associations between dental anomalies detectable on panoramic radiographs in a sample of non-orthodontic growing subjects. For this cross-sectional study, digital panoramic radiographs of 5005 subjects were initially screened from a single radiographic center in Rome. Inclusion criteria were: subjects who were aged 8-12 years, Caucasian, and had good diagnostic quality radiographs. Syndromic subjects, those with craniofacial malformation, or orthodontic patients were excluded and this led to a sample of 4706 subjects [mean (SD) age = 9.6 (1.2) years, 2366 males and 2340 females]. The sample was subsequently divided into four subgroups (8, 9, 10, and 11-12 year-old groups). Two operators examined panoramic radiographs to observe the presence of common dental anomalies. The prevalence and associations between dental anomalies were also investigated. The overall prevalence of dental anomalies was 20.9%. Approximately 17.9% showed only one anomaly, 2.7% two anomalies, while only 0.3% had more than two anomalies. The most frequent anomalies were the displacement of maxillary canine (7.5%), hypodontia (7.1%), impacted teeth (3.9%), tooth ankylosis (2.8%), and tooth transposition (1.4%). The lower right second premolar was the most frequently missing tooth; 3.7% had only one tooth agenesis, and 0.08% had six or more missing teeth (oligodontia). Mesiodens was the most common type of supernumerary tooth (0.66%). Two subjects had a taurodontic tooth (0.04%). Tooth transpositions and displacement of maxillary canine were seen in 1.4 and 7.5%, respectively (approximately 69 and 58% were in the 8 and 9 year-old groups, respectively). Significant associations were detected between the different dental anomalies, providing further evidence to support common etiological factors.

  14. Using Extreme Phenotype Sampling to Identify the Rare Causal Variants of Quantitative Traits in Association Studies

    OpenAIRE

    Li, Dalin; Lewinger, Juan Pablo; Gauderman, William J.; Murcray, Cassandra Elizabeth; Conti, David

    2011-01-01

    Variants identified in recent genome-wide association studies based on the common-disease common-variant hypothesis are far from fully explaining the heritability of complex traits. Rare variants may, in part, explain some of the missing heritability. Here, we explored the advantage of extreme phenotype sampling in rare-variant analysis and refined this design framework for future large-scale association studies on quantitative traits. We first proposed a power calculation approach fo...

  15. Mind-Body Practice and Body Weight Status in a Large Population-Based Sample of Adults.

    Science.gov (United States)

    Camilleri, Géraldine M; Méjean, Caroline; Bellisle, France; Hercberg, Serge; Péneau, Sandrine

    2016-04-01

    In industrialized countries characterized by a high prevalence of obesity and chronic stress, mind-body practices such as yoga or meditation may facilitate body weight control. However, virtually no data are available to ascertain whether practicing mind-body techniques is associated with weight status. The purpose of this study is to examine the relationship between the practice of mind-body techniques and weight status in a large population-based sample of adults. A total of 61,704 individuals aged ≥18 years participating in the NutriNet-Santé study (2009-2014) were included in this cross-sectional analysis conducted in 2014. Data on mind-body practices were collected, as well as self-reported weight and height. The association between the practice of mind-body techniques and weight status was assessed using multiple linear and multinomial logistic regression models adjusted for sociodemographic, lifestyle, and dietary factors. After adjusting for sociodemographic and lifestyle factors, regular users of mind-body techniques were less likely to be overweight (OR=0.68, 95% CI=0.63, 0.74) or obese (OR=0.55, 95% CI=0.50, 0.61) than never users. In addition, regular users had a lower BMI than never users (-3.19%, 95% CI=-3.71, -2.68). These data provide novel information about an inverse relationship between mind-body practice and weight status. If causal links were demonstrated in further prospective studies, such practice could be fostered in obesity prevention and treatment. Copyright © 2016 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  16. A simulative comparison of respondent driven sampling with incentivized snowball sampling – the “strudel effect”

    Science.gov (United States)

    Gyarmathy, V. Anna; Johnston, Lisa G.; Caplinskiene, Irma; Caplinskas, Saulius; Latkin, Carl A.

    2014-01-01

    Background Respondent driven sampling (RDS) and Incentivized Snowball Sampling (ISS) are two sampling methods that are commonly used to reach people who inject drugs (PWID). Methods We generated a set of simulated RDS samples on an actual sociometric ISS sample of PWID in Vilnius, Lithuania (“original sample”) to assess if the simulated RDS estimates were statistically significantly different from the original ISS sample prevalences for HIV (9.8%), Hepatitis A (43.6%), Hepatitis B (Anti-HBc 43.9% and HBsAg 3.4%), Hepatitis C (87.5%), syphilis (6.8%) and Chlamydia (8.8%) infections and for selected behavioral risk characteristics. Results The original sample consisted of a large component of 249 people (83% of the sample) and 13 smaller components with 1 to 12 individuals. Generally, as long as all seeds were recruited from the large component of the original sample, the simulation samples simply recreated the large component. There were no significant differences between the large component and the entire original sample for the characteristics of interest. Altogether 99.2% of 360 simulation sample point estimates were within the confidence interval of the original prevalence values for the characteristics of interest. Conclusions When population characteristics are reflected in large network components that dominate the population, RDS and ISS may produce samples that have statistically non-different prevalence values, even though some isolated network components may be under-sampled and/or statistically significantly different from the main groups. This so-called “strudel effect” is discussed in the paper. PMID:24360650
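
    Checking whether simulated point estimates fall inside the original sample's confidence interval, as done for the 360 comparisons above, requires an interval for each prevalence. The abstract does not state which interval method was used; a Wilson score interval is a common, illustrative choice:

```python
from math import sqrt
from statistics import NormalDist

def wilson_ci(p_hat, n, alpha=0.05):
    """Wilson score interval for a prevalence p_hat observed in n subjects."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    denom = 1 + z * z / n
    center = (p_hat + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n))
    return center - half, center + half

# E.g. a 50% prevalence in 100 subjects; a simulated estimate of 0.45
# would fall inside this interval and count as "not different".
lo, hi = wilson_ci(0.5, 100)
```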

  17. Psychological Predictors of Seeking Help from Mental Health Practitioners among a Large Sample of Polish Young Adults

    Directory of Open Access Journals (Sweden)

    Lidia Perenc

    2016-10-01

    Although the corresponding literature contains a substantial number of studies on the relationship between psychological factors and attitudes towards seeking professional psychological help, the role of some determinants remains unexplored, especially among Polish young adults. The present study investigated diversity among a large cohort of Polish university students in attitudes towards help-seeking and the regulative roles of gender, level of university education, health locus of control and sense of coherence. The total sample comprised 1706 participants who completed the following measures: Attitude Toward Seeking Professional Psychological Help Scale-SF, Multidimensional Health Locus of Control Scale, and Orientation to Life Questionnaire (SOC-29). They were recruited from various university faculties and courses by means of random selection. The findings revealed that, among socio-demographic variables, female gender moderately, and graduate-level university study strongly, predict attitudes towards seeking help. Internal locus of control and all domains of sense of coherence are significantly correlated with help-seeking attitude scores. Attitudes toward psychological help-seeking are significantly related to female gender, graduate university education, internal health locus of control and sense of coherence. Further research must be performed in Poland in order to validate these results in different age and social groups.

  18. SWOT ANALYSIS ON SAMPLING METHOD

    Directory of Open Access Journals (Sweden)

    CHIS ANCA OANA

    2014-07-01

    Audit sampling involves the application of audit procedures to less than 100% of items within an account balance or class of transactions. Our article studies audit sampling in the audit of financial statements. As a widely used audit technique, in both its statistical and non-statistical forms, the method is very important for auditors. It should be applied correctly for a fair view of financial statements and to satisfy the needs of all financial users. To be applied correctly, the method must be understood by all its users and mainly by auditors; otherwise, incorrect application risks loss of reputation and discredit, litigation and even prison. Since there is no unitary practice and methodology for applying the technique, the risk of applying it incorrectly is quite high. SWOT analysis is a technique that shows the advantages, disadvantages, threats and opportunities. We applied SWOT analysis to the sampling method from the perspective of three players: the audit company, the audited entity and the users of financial statements. The study shows that by applying the sampling method the audit company and the audited entity both save time, effort and money. The disadvantages of the method are the difficulty of applying it and of fully understanding it. Being widely used as an audit method and being a factor in a correct audit opinion, the sampling method's advantages, disadvantages, threats and opportunities must be understood by auditors.

  19. 99Mo Yield Using Large Sample Mass of MoO3 for Sustainable Production of 99Mo

    Science.gov (United States)

    Tsukada, Kazuaki; Nagai, Yasuki; Hashimoto, Kazuyuki; Kawabata, Masako; Minato, Futoshi; Saeki, Hideya; Motoishi, Shoji; Itoh, Masatoshi

    2018-04-01

    A neutron source from the C(d,n) reaction has the unique capability of producing medical radioisotopes such as 99Mo with a minimum level of radioactive waste. Precise data on the neutron flux are crucial to determine the best conditions for obtaining the maximum yield of 99Mo. The measured yield of 99Mo produced by the 100Mo(n,2n)99Mo reaction from a large sample mass of MoO3 agrees well with the numerical result estimated with the latest neutron data, which are a factor of two larger than the other existing data. This result establishes an important finding for the domestic production of 99Mo: approximately 50% of the demand for 99Mo in Japan could be met using a 100 g 100MoO3 sample mass with a single accelerator of 40 MeV, 2 mA deuteron beams.
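A yield estimate of this kind rests on the standard activation relation A = N·σ·φ·(1 − e^(−λt)). The back-of-envelope sketch below is purely illustrative: the cross section and flux values are assumptions, not the paper's measured data; only the 99Mo half-life (~66 h) and the molar-mass arithmetic are standard.

```python
import numpy as np

# Illustrative activation estimate for 100Mo(n,2n)99Mo.
# sigma and flux below are ASSUMED placeholder values, not the paper's.
N_A = 6.022e23
mass_g, molar_mass = 100.0, 144.0     # 100 g of (100)MoO3, ~144 g/mol
N_target = mass_g / molar_mass * N_A  # one 100Mo atom per MoO3 formula unit

sigma = 1.5e-24                       # assumed (n,2n) cross section, cm^2
flux = 1.0e10                         # assumed neutron flux, n/cm^2/s
half_life_s = 66.0 * 3600.0           # 99Mo half-life, ~66 h
lam = np.log(2) / half_life_s
t = 24.0 * 3600.0                     # 1 day of irradiation

# Activity grows toward the saturation value N*sigma*flux
A_Bq = N_target * sigma * flux * (1.0 - np.exp(-lam * t))
```

The `(1 − e^(−λt))` factor is why irradiations much longer than a few half-lives gain little extra activity: the product saturates at N·σ·φ.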

  20. Sampling study in milk storage tanks by INAA

    International Nuclear Information System (INIS)

    Santos, L.G.C.; Nadai Fernandes de, E.A.; Bacchi, M.A.; Tagliaferro, F.S.

    2008-01-01

    This study investigated the representativeness of samples for assessing chemical elements in milk bulk tanks. Milk samples were collected from a closed tank in a dairy plant and from an open top tank in a dairy farm. Samples were analyzed for chemical elements by instrumental neutron activation analysis (INAA). For both experiments, Br, Ca, Cs, K, Na, Rb and Zn did not present significant differences between samples thereby indicating the appropriateness of the sampling procedure adopted to evaluate the analytes of interest. (author)

  1. Experimental study of UC polycrystals in the prospect of improving the as-fabricated sample purity

    Energy Technology Data Exchange (ETDEWEB)

    Raveu, Gaëlle, E-mail: gaelle.raveu@cea.fr [CEA, DEC, 13108 Saint-Paul-Lez-Durance (France); Martin, Guillaume; Fiquet, Olivier; Garcia, Philippe; Carlot, Gaëlle; Palancher, Hervé [CEA, DEC, 13108 Saint-Paul-Lez-Durance (France); Bonnin, Anne [ESRF, 6, rue J. Horowitz, 38500 Grenoble Cedex (France); Khodja, Hicham [CEA, DEC, 13108 Saint-Paul-Lez-Durance (France); Raepsaet, Caroline [CEA, IRAMIS, LEEL, 91191 Gif-Sur-Yvette (France); Sauvage, Thierry; Barthe, Marie-France [CNRS – CEMHTI, 3a Rue de la Férolerie, 45071 Orleans (France)

    2014-12-15

Uranium and plutonium carbides are candidate fuels for Generation IV nuclear reactors. This study is focused on the characterization of uranium monocarbide samples. The successive fabrication steps were carried out under atmospheres containing low oxygen and moisture concentrations (typically less than 100 ppm) but sample transfers occurred in air. Six samples were sliced from four pellets elaborated by carbothermic reaction under vacuum. Little UC2 is expected in these samples. The α-UC2 phase was indeed detected within one of these UC samples during an XRD experiment performed with synchrotron radiation. Moreover, the oxygen content at the surface of these samples was depth profiled using a recently developed nuclear reaction analysis method. Large oxygen concentrations were measured in the first micron below the sample surface and particularly in the first 100–150 nm. UC2 inclusions were found to be more oxidized than the surrounding matrix. This work points out that more care must be taken at each step of UC fabrication since the material readily reacts with oxygen and moisture. A new glovebox facility using a highly purified atmosphere is currently being built in order to obtain single-phase UC samples of better purity.

  2. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

    Directory of Open Access Journals (Sweden)

    Simon Boitard

    2016-03-01

    Full Text Available Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey, PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
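A key point of the abstract is that the folded allele frequency spectrum needs neither phasing nor an ancestral-allele assignment. A minimal sketch of how it can be computed from diploid genotype calls (the function name and the 0/1/2 allele-count coding are illustrative assumptions, not PopSizeABC's actual interface):

```python
import numpy as np

def folded_afs(genotypes):
    """Folded allele frequency spectrum from a (sites x individuals) matrix
    of unphased diploid genotypes coded as alt-allele counts 0/1/2."""
    n_chrom = 2 * genotypes.shape[1]              # number of sampled chromosomes
    counts = genotypes.sum(axis=1)                # alt-allele count per site
    minor = np.minimum(counts, n_chrom - counts)  # fold: take the minor-allele count
    # bins 1 .. n_chrom//2; monomorphic sites (minor count 0) are excluded
    return np.bincount(minor, minlength=n_chrom // 2 + 1)[1:]
```

Folding makes the spectrum invariant to which allele is labeled "reference", which is exactly why it is usable on unpolarized SNP data.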

  3. Efficient computation of smoothing splines via adaptive basis sampling

    KAUST Repository

    Ma, Ping

    2015-06-24

© 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n³). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.
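The core idea, fitting a penalized spline on far fewer basis functions than data points, can be shown with a toy univariate sketch. This is not the paper's adaptive (response-driven) sampling scheme; it only demonstrates the reduced-basis penalized least-squares step, with an invented function name and a simple cubic radial basis:

```python
import numpy as np

def spline_fit_subset(x, y, knots, lam=1e-3):
    """Penalized regression on a reduced basis: |knots| << n columns
    instead of one basis function per observation."""
    # cubic radial basis |x - k|^3 plus an unpenalized linear part [1, x]
    B = np.abs(x[:, None] - knots[None, :]) ** 3
    X = np.hstack([np.ones((len(x), 1)), x[:, None], B])
    P = np.zeros((X.shape[1], X.shape[1]))
    P[2:, 2:] = np.eye(len(knots))        # penalize only the basis coefficients
    # solve (X'X + lam*P) c = X'y  -- a (|knots|+2)-sized system, not n-sized
    coef = np.linalg.solve(X.T @ X + lam * P, X.T @ y)
    return X @ coef
```

Because the linear system is (|knots|+2)×(|knots|+2) rather than n×n, the cost scales with the basis size, which is the scalability gain the abstract describes.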

  4. Efficient computation of smoothing splines via adaptive basis sampling

    KAUST Repository

    Ma, Ping; Huang, Jianhua Z.; Zhang, Nan

    2015-01-01

© 2015 Biometrika Trust. Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions and its computational complexity is generally O(n³). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.

  5. Preliminary study on element mass fraction determination on catfish samples from Paraguay

    International Nuclear Information System (INIS)

    Moreira, Edson G.; Catharino, Marilia G.M.; Vasconcellos, Marina B.A.; Frutos, Sixto A.; Insaurralde, Mario S.

    2013-01-01

South American catfish (Pseudoplatystoma), commonly known in Spanish as atigrado or surubi and in Portuguese as surubim or pintado, is a large fish that typically reaches 1 m in length, weighs 60 kg to 80 kg and may be found in the basins of the Amazon, the Sao Francisco and de la Plata rivers, usually in riverbeds and deep wells. Being a much appreciated fish for human consumption, it is quite sought after by fishermen, who have been contributing to the reduction of the stocks. This fact attracted the attention of the Paraguayan authorities to the point of imposing restrictions on free fishing and commercialization. This study aims to assist the conservation efforts towards this fish by investigating its exposure to possible pollutants. Preliminary results on element determination in six samples of catfish from Paraguayan rivers are presented. Cs, Co, Fe, Se and Zn were determined by applying an Instrumental Neutron Activation Analysis method. While these element levels were below the limits set by legislation for human consumption, the elements As, Cr and La were not detected in the samples as they are below the detection limit of the method employed. Atomic Absorption Spectrometry was used to investigate the presence of Cd, Hg and Pb in the samples. Hg was detected in the samples while Cd and Pb were below the detection limit of the method. (author)

  6. Preliminary study on element mass fraction determination on catfish samples from Paraguay

    Energy Technology Data Exchange (ETDEWEB)

    Moreira, Edson G.; Catharino, Marilia G.M.; Vasconcellos, Marina B.A., E-mail: emoreira@ipen.br, E-mail: mbvascon@ipen.br, E-mail: mariliasemmler@uol.com.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Frutos, Sixto A.; Insaurralde, Mario S., E-mail: tony8013@hotmail.com, E-mail: insaurraldemar9@hotmail.com [Universidad Nacional de Asuncion (FCV/UNA), San Lorenzo (Paraguay). Facultad de Ciencias Veterinarias. Departamento de Pesca y Acuicultura

    2013-07-01

South American catfish (Pseudoplatystoma), commonly known in Spanish as atigrado or surubi and in Portuguese as surubim or pintado, is a large fish that typically reaches 1 m in length, weighs 60 kg to 80 kg and may be found in the basins of the Amazon, the Sao Francisco and de la Plata rivers, usually in riverbeds and deep wells. Being a much appreciated fish for human consumption, it is quite sought after by fishermen, who have been contributing to the reduction of the stocks. This fact attracted the attention of the Paraguayan authorities to the point of imposing restrictions on free fishing and commercialization. This study aims to assist the conservation efforts towards this fish by investigating its exposure to possible pollutants. Preliminary results on element determination in six samples of catfish from Paraguayan rivers are presented. Cs, Co, Fe, Se and Zn were determined by applying an Instrumental Neutron Activation Analysis method. While these element levels were below the limits set by legislation for human consumption, the elements As, Cr and La were not detected in the samples as they are below the detection limit of the method employed. Atomic Absorption Spectrometry was used to investigate the presence of Cd, Hg and Pb in the samples. Hg was detected in the samples while Cd and Pb were below the detection limit of the method. (author)

  7. Validation of the MOS Social Support Survey 6-item (MOS-SSS-6) measure with two large population-based samples of Australian women.

    Science.gov (United States)

    Holden, Libby; Lee, Christina; Hockey, Richard; Ware, Robert S; Dobson, Annette J

    2014-12-01

    This study aimed to validate a 6-item 1-factor global measure of social support developed from the Medical Outcomes Study Social Support Survey (MOS-SSS) for use in large epidemiological studies. Data were obtained from two large population-based samples of participants in the Australian Longitudinal Study on Women's Health. The two cohorts were aged 53-58 and 28-33 years at data collection (N = 10,616 and 8,977, respectively). Items selected for the 6-item 1-factor measure were derived from the factor structure obtained from unpublished work using an earlier wave of data from one of these cohorts. Descriptive statistics, including polychoric correlations, were used to describe the abbreviated scale. Cronbach's alpha was used to assess internal consistency and confirmatory factor analysis to assess scale validity. Concurrent validity was assessed using correlations between the new 6-item version and established 19-item version, and other concurrent variables. In both cohorts, the new 6-item 1-factor measure showed strong internal consistency and scale reliability. It had excellent goodness-of-fit indices, similar to those of the established 19-item measure. Both versions correlated similarly with concurrent measures. The 6-item 1-factor MOS-SSS measures global functional social support with fewer items than the established 19-item measure.
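The internal consistency reported for the 6-item scale is Cronbach's alpha, which has a simple closed form: α = k/(k−1) · (1 − Σ item variances / total-score variance). A minimal sketch of that formula (not the study's own analysis code; the function name is illustrative):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
    return k / (k - 1) * (1.0 - item_vars / total_var)
```

Alpha rises when items covary strongly relative to their individual variances, which is why a well-chosen 6-item subset can retain the reliability of the 19-item original.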

  8. A large-scale study of misophonia

    NARCIS (Netherlands)

    Rouw, R.; Erfanian, M.

    2018-01-01

    Objective We aim to elucidate misophonia, a condition in which particular sounds elicit disproportionally strong aversive reactions. Method A large online study extensively surveyed personal, developmental, and clinical characteristics of over 300 misophonics. Results Most participants indicated

  9. Self-esteem development across the life span: a longitudinal study with a large sample from Germany.

    Science.gov (United States)

    Orth, Ulrich; Maes, Jürgen; Schmitt, Manfred

    2015-02-01

    The authors examined the development of self-esteem across the life span. Data came from a German longitudinal study with 3 assessments across 4 years of a sample of 2,509 individuals ages 14 to 89 years. The self-esteem measure used showed strong measurement invariance across assessments and birth cohorts. Latent growth curve analyses indicated that self-esteem follows a quadratic trajectory across the life span, increasing during adolescence, young adulthood, and middle adulthood, reaching a peak at age 60 years, and then declining in old age. No cohort effects on average levels of self-esteem or on the shape of the trajectory were found. Moreover, the trajectory did not differ across gender, level of education, or for individuals who had lived continuously in West versus East Germany (i.e., the 2 parts of Germany that had been separate states from 1949 to 1990). However, the results suggested that employment status, household income, and satisfaction in the domains of work, relationships, and health contribute to a more positive life span trajectory of self-esteem. The findings have significant implications, because they call attention to developmental stages in which individuals may be vulnerable because of low self-esteem (such as adolescence and old age) and to factors that predict successful versus problematic developmental trajectories. PsycINFO Database Record (c) 2015 APA, all rights reserved.
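A quadratic growth trajectory peaks at the vertex −b₁/(2·b₂) of the fitted parabola, which is how a "peak at age 60" falls out of the model. A toy illustration with synthetic numbers (the data values are invented; only the vertex arithmetic mirrors the reported result):

```python
import numpy as np

# Synthetic (age, self-esteem) means constructed to peak at age 60
ages = np.array([15.0, 25.0, 35.0, 45.0, 55.0, 60.0, 65.0, 75.0, 85.0])
esteem = -0.01 * (ages - 60.0) ** 2 + 4.0

# Fit the quadratic mean trajectory and locate its vertex
c2, c1, c0 = np.polyfit(ages, esteem, 2)
peak_age = -c1 / (2.0 * c2)   # maximum of the parabola since c2 < 0
```

In a real latent growth curve model the quadratic coefficients are latent-variable means estimated with SEM software, but the peak-age calculation from those coefficients is the same.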

  10. Clustering of risk-related modifiable behaviours and their association with overweight and obesity among a large sample of youth in the COMPASS study

    Directory of Open Access Journals (Sweden)

    Rachel E. Laxer

    2017-01-01

Full Text Available Abstract Background Canadian youth exhibit a number of risky behaviours, some of which are associated with overweight and obesity. The purpose of this study was to examine the prevalence of 15 modifiable risk behaviours in a large sample of Canadian youth, to identify underlying subgroups based on patterns of health behaviours, and to examine the association between identified subgroups and overweight/obesity. Methods Data from 18,587 grades 9–12 students in Year 1 (2012–13) of the COMPASS study and latent class analysis were used to identify patterns and clustering among 15 health behaviours (e.g., physical inactivity, sedentary behaviour, unhealthy eating, substance use). A logistic regression model examined the associations between these clusters and overweight/obesity status. Results Four distinct classes were identified: traditional school athletes, inactive screenagers, health conscious, and moderately active substance users. Each behavioural cluster demonstrated a distinct pattern of behaviours, some with a greater number of risk factors than others. Traditional school athletes (odds ratio (OR) 1.15, 95% CI 1.03–1.29), inactive screenagers (OR 1.33; 1.19–1.48), and moderately active substance users (OR 1.27; 1.14–1.43) were all significantly more likely to be overweight/obese compared to the health conscious group. Conclusions Four distinct subpopulations of youth were identified based on their patterns of health and risk behaviours. The three clusters demonstrating poorer health behaviour were all at an increased risk of being overweight/obese compared to their somewhat healthier peers. Obesity-related public health interventions and health promotion efforts might be more effective if consideration is given to population segments with certain behavioural patterns, targeting subgroups at greatest risk of overweight or obesity.
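The ORs and CIs above come from a logistic regression over latent classes. For intuition about what those numbers mean, a crude (unadjusted) odds ratio with a Woolf-method 95% CI can be computed from a 2×2 table; this is a simplification, not the study's model:

```python
import numpy as np

def odds_ratio(exp_cases, exp_noncases, unexp_cases, unexp_noncases):
    """Crude odds ratio and Woolf 95% CI from a 2x2 exposure/outcome table."""
    or_ = (exp_cases * unexp_noncases) / (exp_noncases * unexp_cases)
    # standard error of log(OR): sqrt of the sum of reciprocal cell counts
    se = np.sqrt(1.0 / exp_cases + 1.0 / exp_noncases
                 + 1.0 / unexp_cases + 1.0 / unexp_noncases)
    lo, hi = or_ * np.exp(-1.96 * se), or_ * np.exp(1.96 * se)
    return or_, (lo, hi)
```

An OR of 1.33 with CI 1.19–1.48, as reported for inactive screenagers, is significant precisely because the interval excludes 1.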

  11. The Effects of Organizational Justice on Positive Organizational Behavior: Evidence from a Large-Sample Survey and a Situational Experiment

    Science.gov (United States)

    Pan, Xiaofu; Chen, Mengyan; Hao, Zhichao; Bi, Wenfen

    2018-01-01

Employees' positive organizational behavior (POB) not only promotes organizational function but also improves individual and organizational performance. As an important concept in organizational research, organizational justice is thought to be a universal predictor of employee and organizational outcomes. The current set of two studies examined the effects of organizational justice (OJ) on the POB of employees with two different studies, a large-sample survey and a situational experiment. In study 1, a total of 2,566 employees from 45 manufacturing enterprises completed paper-and-pencil questionnaires assessing organizational justice (OJ) and positive organizational behavior (POB) of employees. In study 2, 747 employees were randomly sampled to participate in the situational experiment with a 2 × 2 between-subjects design. They were asked to read one of four situational stories, to imagine that the situation happened to the person in the story or to themselves, and then to imagine how the person in the story (or they themselves) would have felt and what the person (or they) would subsequently have done. The results of study 1 suggested that OJ was correlated with the POB of employees and that OJ is a positive predictor of POB. The results of study 2 suggested that OJ had significant effects on POB and negative organizational behavior (NOB). Procedural justice accounted for significantly more variance than distributive justice in the POB of employees. Distributive justice and procedural justice have different influences on POB and NOB in terms of effectiveness and direction. The effect of OJ on POB was greater than that on NOB. In addition, path analysis indicated that the direct effect of OJ on POB was smaller than its indirect effect. Thus, many intermediary effects could possibly lie between them. PMID:29375434

  12. The Effects of Organizational Justice on Positive Organizational Behavior: Evidence from a Large-Sample Survey and a Situational Experiment.

    Science.gov (United States)

    Pan, Xiaofu; Chen, Mengyan; Hao, Zhichao; Bi, Wenfen

    2017-01-01

Employees' positive organizational behavior (POB) not only promotes organizational function but also improves individual and organizational performance. As an important concept in organizational research, organizational justice is thought to be a universal predictor of employee and organizational outcomes. The current set of two studies examined the effects of organizational justice (OJ) on the POB of employees with two different studies, a large-sample survey and a situational experiment. In study 1, a total of 2,566 employees from 45 manufacturing enterprises completed paper-and-pencil questionnaires assessing organizational justice (OJ) and positive organizational behavior (POB) of employees. In study 2, 747 employees were randomly sampled to participate in the situational experiment with a 2 × 2 between-subjects design. They were asked to read one of four situational stories, to imagine that the situation happened to the person in the story or to themselves, and then to imagine how the person in the story (or they themselves) would have felt and what the person (or they) would subsequently have done. The results of study 1 suggested that OJ was correlated with the POB of employees and that OJ is a positive predictor of POB. The results of study 2 suggested that OJ had significant effects on POB and negative organizational behavior (NOB). Procedural justice accounted for significantly more variance than distributive justice in the POB of employees. Distributive justice and procedural justice have different influences on POB and NOB in terms of effectiveness and direction. The effect of OJ on POB was greater than that on NOB. In addition, path analysis indicated that the direct effect of OJ on POB was smaller than its indirect effect. Thus, many intermediary effects could possibly lie between them.

  13. Spearman's "law of diminishing returns" and the role of test reliability investigated in a large sample of Danish military draftees

    DEFF Research Database (Denmark)

    Teasdale, Thomas William; Hartmann, P.

    2005-01-01

The present article investigates Spearman's "Law of Diminishing Returns" (SLODR), which hypothesizes that the g saturation of cognitive tests is lower for high-ability subjects than for low-ability subjects. This hypothesis was tested in a large sample of Danish military draftees (N = 6757) who were representative of the young adult male population, aged 18-19, and tested with a group-administered intelligence test comprised of four subtests. The aim of the study was twofold. The first was to reproduce previous SLODR findings by the present authors. The second was to examine whether differences in reliability could account for the difference in g saturation across ability groups. The results showed that the reliability was larger for the High ability group, thereby not explaining the present findings.

  14. Lunar sample studies

    International Nuclear Information System (INIS)

    1977-01-01

    Lunar samples discussed and the nature of their analyses are: (1) an Apollo 15 breccia which is thoroughly analyzed as to the nature of the mature regolith from which it derived and the time and nature of the lithification process, (2) two Apollo 11 and one Apollo 12 basalts analyzed in terms of chemistry, Cross-Iddings-Pirsson-Washington norms, mineralogy, and petrography, (3) eight Apollo 17 mare basalts, also analyzed in terms of chemistry, Cross-Iddings-Pirsson-Washington norms, mineralogy, and petrography. The first seven are shown to be chemically similar although of two main textural groups; the eighth is seen to be distinct in both chemistry and mineralogy, (4) a troctolitic clast from a Fra Mauro breccia, analyzed and contrasted with other high-temperature lunar mineral assemblages. Two basaltic clasts from the same breccia are shown to have affinities with rock 14053, and (5) the uranium-thorium-lead systematics of three Apollo 16 samples are determined; serious terrestrial-lead contamination of the first two samples is attributed to bandsaw cutting in the lunar curatorial facility

  15. Everyday temptations: an experience sampling study of desire, conflict, and self-control.

    Science.gov (United States)

    Hofmann, Wilhelm; Baumeister, Roy F; Förster, Georg; Vohs, Kathleen D

    2012-06-01

How often and how strongly do people experience desires, to what extent do their desires conflict with other goals, and how often and successfully do people exercise self-control to resist their desires? To investigate desire and attempts to control desire in everyday life, we conducted a large-scale experience sampling study based on a conceptual framework integrating desire strength, conflict, resistance (use of self-control), and behavior enactment. A sample of 205 adults wore beepers for a week. They furnished 7,827 reports of desire episodes and completed personality measures of behavioral inhibition system/behavior activation system (BIS/BAS) sensitivity, trait self-control, perfectionism, and narcissistic entitlement. Results suggest that desires are frequent, variable in intensity, and largely unproblematic. Those urges that do conflict with other goals tend to elicit resistance, with uneven success. Desire strength, conflict, resistance, and self-regulatory success were moderated in multiple ways by personality variables as well as by situational and interpersonal factors such as alcohol consumption, the mere presence of others, and the presence of others who already had enacted the desire in question. Whereas personality generally had a stronger impact on the dimensions of desire that emerged early in its course (desire strength and conflict), situational factors showed relatively more influence on components later in the process (resistance and behavior enactment). In total, these findings offer a novel and detailed perspective on the nature of everyday desires and associated self-regulatory successes and failures. (c) 2012 APA, all rights reserved

  16. Experimental Study and Mathematical Modeling of Asphaltene Deposition Mechanism in Core Samples

    Directory of Open Access Journals (Sweden)

    Jafari Behbahani T.

    2015-11-01

Full Text Available In this work, experimental studies were conducted to determine the effect of asphaltene deposition on the permeability reduction and porosity reduction of carbonate, sandstone and dolomite rock samples using an Iranian bottom hole live oil sample which is close to reservoir conditions, whereas in the majority of previous work recombined oil (a mixture of dead oil and associated gas) was injected into a core sample, which is far from reservoir conditions. The effect of the oil injection rate on asphaltene deposition and permeability reduction was studied. The experimental results showed that an increase in the oil injection flow rate can result in an increase in asphaltene deposition and permeability reduction. Also, it can be observed that at lower injection flow rates, a monotonic decrease in permeability of the rock samples can be attained upon increasing the injection flow rate, while at higher injection rates, after a decrease in rock permeability, an increasing trend is observed before a steady-state condition can be reached. The experimental results also showed that the rock type can affect the amount of asphaltene deposition, and the asphaltene deposition has different mechanisms in sandstone and carbonate core samples. It can be seen that the adsorption and plugging mechanisms have a more important role in asphaltene deposition in carbonate core samples than in sandstone core samples. From the results, it can be observed that the pore volumes of the injected crude oil are higher for sandstone cores compared with the carbonate cores. Also, it can be inferred that three depositional types may take place during the crude oil injection, i.e., continuous deposition for low-permeability cores, slow, steady plugging for high-permeability cores and steady deposition for medium-permeability cores. It can be seen from the experimental results that damage to the core samples was found to increase when the production pressures were
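In core-flood experiments of this kind, permeability is typically back-calculated from Darcy's law, k = q·μ·L/(A·Δp), and formation damage is reported as the fraction of the initial permeability retained. A minimal sketch under the assumption of a consistent unit system (the function names are illustrative, not from the paper):

```python
def darcy_permeability(q, mu, L, A, dp):
    """Darcy's law: k = q * mu * L / (A * dp).
    q: flow rate, mu: viscosity, L: core length, A: cross-section,
    dp: pressure drop -- any mutually consistent unit system."""
    return q * mu * L / (A * dp)

def permeability_ratio(k_damaged, k_initial):
    """Fraction of initial permeability retained after asphaltene deposition."""
    return k_damaged / k_initial
```

Tracking `permeability_ratio` against injected pore volumes is what distinguishes the continuous-deposition, steady-plugging, and steady-deposition regimes described above.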

  17. Large Aspect Ratio Tokamak Study

    International Nuclear Information System (INIS)

    Reid, R.L.; Holmes, J.A.; Houlberg, W.A.; Peng, Y.K.M.; Strickler, D.J.; Brown, T.G.; Wiseman, G.W.

    1980-06-01

The Large Aspect Ratio Tokamak Study (LARTS) at Oak Ridge National Laboratory (ORNL) investigated the potential for producing a viable long-burn tokamak reactor by enhancing the volt-second capability of the ohmic heating transformer through the use of high aspect ratio designs. The plasma physics, engineering, and economic implications of high aspect ratio tokamaks were assessed in the context of extended burn operation. Using a one-dimensional transport code, plasma startup and burn parameters were addressed. The pulsed electrical power requirements for the poloidal field system, which have a major impact on reactor economics, were minimized by optimizing the startup and shutdown portions of the tokamak cycle. A representative large aspect ratio tokamak with an aspect ratio of 8 was found to achieve a burn time of 3.5 h at a capital cost only about 25% greater than that of a moderate aspect ratio tokamak design.

  18. Sampling design and procedures for fixed surface-water sites in the Georgia-Florida coastal plain study unit, 1993

    Science.gov (United States)

    Hatzell, H.H.; Oaksford, E.T.; Asbury, C.E.

    1995-01-01

The implementation of design guidelines for the National Water-Quality Assessment (NAWQA) Program has resulted in the development of new sampling procedures and the modification of existing procedures commonly used in the Water Resources Division of the U.S. Geological Survey. The Georgia-Florida Coastal Plain (GAFL) study unit began the intensive data collection phase of the program in October 1992. This report documents the implementation of the NAWQA guidelines by describing the sampling design and procedures for collecting surface-water samples in the GAFL study unit in 1993. This documentation is provided for agencies that use water-quality data and for future study units that will be entering the intensive phase of data collection. The sampling design is intended to account for large- and small-scale spatial variations, and temporal variations, in water quality for the study area. Nine fixed sites were selected in drainage basins of different sizes and different land-use characteristics located in different land-resource provinces. Each of the nine fixed sites was sampled regularly for a combination of six constituent groups composed of physical and chemical constituents: field measurements, major ions and metals, nutrients, organic carbon, pesticides, and suspended sediments. Some sites were also sampled during high-flow conditions and storm events. Discussion of the sampling procedure is divided into three phases: sample collection, sample splitting, and sample processing. A cone splitter was used to split water samples for the analysis of the sampling constituent groups except organic carbon from approximately nine liters of stream water collected at four fixed sites that were sampled intensively. An example of the sample splitting schemes designed to provide the sample volumes required for each sample constituent group is described in detail. Information about onsite sample processing has been organized into a flowchart that describes a pathway for each of

  19. The suicidality continuum in a large sample of obsessive-compulsive disorder (OCD) patients.

    Science.gov (United States)

    Velloso, P; Piccinato, C; Ferrão, Y; Aliende Perin, E; Cesar, R; Fontenelle, L; Hounie, A G; do Rosário, M C

    2016-10-01

    Obsessive-compulsive disorder (OCD) has a chronic course with a profound impact on the patient's functioning. Suicidal thoughts and attempts are much more frequent in OCD subjects than previously thought. To empirically investigate whether the suicidal phenomena could be analyzed as a suicidality severity continuum, and their association with obsessive-compulsive (OC) symptom dimensions and quality of life (QoL), in a large OCD sample. Cross-sectional study with 548 patients diagnosed with OCD according to the DSM-IV criteria, interviewed in the Brazilian OCD Consortium (C-TOC) sites. Patients were evaluated by OCD experts using standardized instruments including: Yale-Brown Obsessive-Compulsive Scale (YBOCS); Dimensional Yale-Brown Obsessive-Compulsive Scale (DYBOCS); Beck Depression and Anxiety Inventories; Structured Clinical Interview for DSM-IV (SCID); and the SF-36 QoL Health Survey. There were extremely high correlations between all the suicidal phenomena. OCD patients with suicidality had significantly lower QoL, higher severity in the "sexual/religious", "aggression" and "symmetry/ordering" OC symptom dimensions, higher Beck Depression and Anxiety scores, and a higher frequency of suicide attempts in a family member. In the regression analysis, the factors that most impacted suicidality were the sexual dimension severity, the SF-36 QoL Mental Health domain, the severity of depressive symptoms, and a relative with a history of attempted suicide. Suicidality can be analyzed as a severity continuum, and patients should be carefully monitored when they present with suicidal ideation. Lower QoL scores, higher scores on the sexual dimension and a family history of suicide attempts should be considered risk factors for suicidality among OCD patients. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  20. Study of depression influencing factors with zero-inflated regression models in a large-scale population survey

    OpenAIRE

    Xu, Tao; Zhu, Guangjin; Han, Shaomei

    2017-01-01

    Objectives The number of depression symptoms can be treated as count data in order to obtain complete and accurate findings in studies of depression. This study aims to compare the goodness of fit of four count-outcome models on a large survey sample, to identify the optimum model for a risk factor study of the number of depression symptoms. Methods 15 820 subjects, aged 10 to 80 years old, who were not suffering from serious chronic diseases and had not run a high fever in the past ...
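The idea behind the zero-inflated models the abstract evaluates can be illustrated with a minimal sketch: an intercept-only zero-inflated Poisson, fitted by EM to simulated counts. The model form, data and EM routine here are my own illustration, not the four models or the survey data of the study:

```python
import numpy as np

def fit_zip_em(y, n_iter=200):
    """Fit an intercept-only zero-inflated Poisson by EM.
    Returns (pi, lam): structural-zero probability and Poisson mean."""
    pi, lam = 0.5, max(y.mean(), 0.1)
    for _ in range(n_iter):
        # E-step: posterior probability that each observed zero is structural
        w = np.where(y == 0, pi / (pi + (1 - pi) * np.exp(-lam)), 0.0)
        # M-step: update mixing weight and Poisson mean
        pi = w.mean()
        lam = ((1 - w) * y).sum() / (1 - w).sum()
    return pi, lam

# Simulated data: 30% structural zeros, otherwise Poisson with mean 2
rng = np.random.default_rng(42)
n = 50_000
structural = rng.random(n) < 0.3
y = np.where(structural, 0, rng.poisson(2.0, size=n))

pi_hat, lam_hat = fit_zip_em(y)
```

A plain Poisson fit to these data would be forced to explain the excess zeros by shrinking its mean; separating the two processes is exactly what makes zero-inflated models fit depression-symptom counts better.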

  1. Sampling challenges in a study examining refugee resettlement.

    Science.gov (United States)

    Sulaiman-Hill, Cheryl Mr; Thompson, Sandra C

    2011-03-15

    As almost half of all refugees currently under United Nations protection are from Afghanistan or Iraq and significant numbers have already been resettled outside the region of origin, it is likely that future research will examine their resettlement needs. A number of methodological challenges confront researchers working with culturally and linguistically diverse groups; however, few detailed articles are available to inform other studies. The aim of this paper is to outline challenges with sampling and recruitment of socially invisible refugee groups, describing the method adopted for a mixed methods exploratory study assessing mental health, subjective wellbeing and resettlement perspectives of Afghan and Kurdish refugees living in New Zealand and Australia. Sampling strategies used in previous studies with similar refugee groups were considered before determining the approach to recruitment. A snowball approach was adopted for the study, with multiple entry points into the communities being used to choose as wide a range of people as possible to provide further contacts and reduce selection bias. Census data was used to assess the representativeness of the sample. A sample of 193 former refugee participants was recruited in Christchurch (n = 98) and Perth (n = 95); 47% were of Afghan and 53% Kurdish ethnicity. A good gender balance (males 52%, females 48%) was achieved overall, mainly as a result of the sampling method used. Differences in the demographic composition of groups in each location were observed, especially in relation to the length of time spent in a refugee situation and time since arrival, reflecting variations in national humanitarian quota intakes. Although some measures were problematic, Census data comparison to assess reasonable representativeness of the study sample was generally reassuring.
Snowball sampling, with multiple initiation points to reduce selection bias, was necessary to locate and identify participants, provide reassurance and
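The chain-referral procedure the abstract describes can be sketched as code. The contact network, names and referral cap below are hypothetical; the sketch only illustrates why seeding several independent entry points reaches more of the population than one referral chain:

```python
import random
from collections import deque

def snowball_sample(contacts, seeds, target_size, max_referrals=3, seed=0):
    """Chain-referral (snowball) sampling over a contact network.
    Multiple independent entry points (seeds) reduce the selection bias
    that a single referral chain would introduce."""
    rng = random.Random(seed)
    sampled = set(seeds)
    queue = deque(seeds)
    while queue and len(sampled) < target_size:
        person = queue.popleft()
        referrals = [c for c in contacts.get(person, []) if c not in sampled]
        rng.shuffle(referrals)  # do not privilege any particular referral
        for c in referrals[:max_referrals]:
            if len(sampled) >= target_size:
                break
            sampled.add(c)
            queue.append(c)
    return sampled

# Toy contact network with two disconnected community clusters: a single
# seed could only ever reach one cluster, two seeds cover both.
contacts = {
    "a1": ["a2", "a3"], "a2": ["a1", "a4"], "a3": ["a1"], "a4": ["a2"],
    "b1": ["b2", "b3"], "b2": ["b1", "b4"], "b3": ["b1"], "b4": ["b2"],
}
sample = snowball_sample(contacts, seeds=["a1", "b1"], target_size=6)
```

Comparing the demographic mix of the resulting sample against census figures, as the study did, is then a separate validation step on top of the recruitment itself.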

  2. Sampling challenges in a study examining refugee resettlement

    Directory of Open Access Journals (Sweden)

    Thompson Sandra C

    2011-03-01

    Full Text Available Abstract Background As almost half of all refugees currently under United Nations protection are from Afghanistan or Iraq and significant numbers have already been resettled outside the region of origin, it is likely that future research will examine their resettlement needs. A number of methodological challenges confront researchers working with culturally and linguistically diverse groups; however, few detailed articles are available to inform other studies. The aim of this paper is to outline challenges with sampling and recruitment of socially invisible refugee groups, describing the method adopted for a mixed methods exploratory study assessing mental health, subjective wellbeing and resettlement perspectives of Afghan and Kurdish refugees living in New Zealand and Australia. Sampling strategies used in previous studies with similar refugee groups were considered before determining the approach to recruitment. Methods A snowball approach was adopted for the study, with multiple entry points into the communities being used to choose as wide a range of people as possible to provide further contacts and reduce selection bias. Census data was used to assess the representativeness of the sample. Results A sample of 193 former refugee participants was recruited in Christchurch (n = 98) and Perth (n = 95); 47% were of Afghan and 53% Kurdish ethnicity. A good gender balance (males 52%, females 48%) was achieved overall, mainly as a result of the sampling method used. Differences in the demographic composition of groups in each location were observed, especially in relation to the length of time spent in a refugee situation and time since arrival, reflecting variations in national humanitarian quota intakes. Although some measures were problematic, Census data comparison to assess reasonable representativeness of the study sample was generally reassuring. Conclusions Snowball sampling, with multiple initiation points to reduce selection bias, was

  3. Metabolic fingerprinting of fresh lymphoma samples used to discriminate between follicular and diffuse large B-cell lymphomas.

    Science.gov (United States)

    Barba, Ignasi; Sanz, Carolina; Barbera, Angels; Tapia, Gustavo; Mate, José-Luis; Garcia-Dorado, David; Ribera, Josep-Maria; Oriol, Albert

    2009-11-01

    To investigate whether proton nuclear magnetic resonance ((1)H NMR) spectroscopy-based metabolic profiling is able to differentiate follicular lymphoma (FL) from diffuse large B-cell lymphoma (DLBCL), and to study which metabolites are responsible for the differences. High-resolution (1)H NMR spectra were obtained from fresh samples of lymph node biopsies obtained consecutively at one center (14 FL and 17 DLBCL). Spectra were processed using pattern-recognition methods. Discriminant models were able to differentiate between the two tumor types with an 86% sensitivity and a 76% specificity; the metabolites that most contributed to the discrimination were a relative increase of alanine in the case of DLBCL and a relative increase of taurine in FL. Metabolic models had a significant but weak correlation with Ki67 expression (r(2)=0.42; p=0.002). We have shown that it is possible to differentiate between FL and DLBCL based on their NMR metabolic profiles. This approach may potentially be applicable as a noninvasive tool for diagnosis and treatment follow-up in the clinical setting using conventional magnetic resonance systems.
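The classifier figures reported above come straight from a confusion matrix, which a two-line helper makes explicit. The counts below are hypothetical, chosen only to land near the reported sensitivity and specificity, not the study's actual confusion matrix:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for a two-class discriminant model evaluated on a
# small biopsy series (positives = one lymphoma subtype).
sens, spec = sensitivity_specificity(tp=12, fn=2, tn=13, fp=4)
```

With samples this scarce, a cross-validated estimate of these two rates is the standard way to judge whether the metabolic discrimination generalises.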

  4. How much will afforestation of former cropland influence soil C stocks? A synthesis of paired sampling, chronosequence sampling and repeated sampling studies

    Science.gov (United States)

    Vesterdal, Lars; Hansen, K.; Stupak, I.; Don, Axel; Poeplau, C.; Leifeld, Jens; van Wesemael, Bas

    2010-05-01

    The need for documentation of land-use change effects on soil C is high on the agenda in most signatory countries to the Kyoto Protocol. Large land areas in Europe have experienced land-use change from cropland to forest since 1990 by direct afforestation as well as abandonment and regrowth of marginally productive cropland. Soil C dynamics following land-use change remain highly uncertain due to a limited number of available studies and due to the influence of interacting factors such as land-use history, soil type, and climate. Common approaches for estimation of potential soil C changes following land-use change are i) paired sampling of plots with a long legacy of different land uses, ii) chronosequence studies of land-use change, and iii) repeated sampling of plots subject to changed land use. This paper will synthesize the quantitative effects of cropland afforestation on soil C sequestration based on all three approaches and will report on related work within COST 639. Paired plots of forest and cropland were used to study the general differences between soil C stocks in the two land uses. At 27 sites in Denmark, distributed among different regions and soil types, forest floor and mineral soil were sampled in and around soil pits. Soil C stocks were higher in forest than cropland (mean difference 22 Mg C ha-1 to 1 m depth). This difference was caused solely by the presence of a forest floor in forests; mineral soil C stocks were similar (108 vs. 109 Mg C ha-1) in the two land uses regardless of soil type and the soil layers considered. The chronosequence approach was employed in the AFFOREST project for evaluation of C sequestration in biomass and soils following afforestation of cropland. Two oak (Quercus robur) and four Norway spruce (Picea abies) afforestation chronosequences (age range 1 to 90 years) were studied in Denmark, Sweden and the Netherlands. 
Forest floor and mineral soil (0-25 cm) C contents were as a minimum unchanged and in most cases there

  5. Next generation sensing platforms for extended deployments in large-scale, multidisciplinary, adaptive sampling and observational networks

    Science.gov (United States)

    Cross, J. N.; Meinig, C.; Mordy, C. W.; Lawrence-Slavas, N.; Cokelet, E. D.; Jenkins, R.; Tabisola, H. M.; Stabeno, P. J.

    2016-12-01

    New autonomous sensors have dramatically increased the resolution and accuracy of oceanographic data collection, enabling rapid sampling over extremely fine scales. Innovative new autonomous platforms like floats, gliders, drones, and crawling moorings leverage the full potential of these new sensors by extending spatiotemporal reach across varied environments. During 2015 and 2016, the Innovative Technology for Arctic Exploration Program at the Pacific Marine Environmental Laboratory tested several new types of fully autonomous platforms with increased speed, durability, and power and payload capacity designed to deliver cutting-edge ecosystem assessment sensors to remote or inaccessible environments. The Expendable Ice-Tracking (EXIT) float developed by the NOAA Pacific Marine Environmental Laboratory (PMEL) is moored near the bottom during the ice-free season and released on an autonomous timer beneath the ice during the following winter. The float collects a rapid profile during ascent, and continues to collect critical, poorly-accessible under-ice data until melt, when data is transmitted via satellite. The autonomous Oculus sub-surface glider developed by the University of Washington and PMEL has a large power and payload capacity and an enhanced buoyancy engine. This 'coastal truck' is designed for the rapid water column ascent required by optical imaging systems. The Saildrone is a solar and wind powered unmanned surface vessel (USV) developed by Saildrone, Inc. in partnership with PMEL. This large-payload (200 lbs), fast (1-7 kts), durable (46 kts winds) platform was equipped with 15 sensors designed for ecosystem assessment during 2016, including passive and active acoustic systems specially redesigned for autonomous vehicle deployments. The sensors deployed on these platforms achieved rigorous accuracy and precision standards. These innovative platforms provide new sampling capabilities and cost efficiencies in high-resolution sensor deployment.

  6. Study of a large scale neutron measurement channel

    International Nuclear Information System (INIS)

    Amarouayache, Anissa; Ben Hadid, Hayet.

    1982-12-01

    A large scale measurement channel allows the processing of the signal coming from a single neutron sensor during three different running modes: pulse, fluctuation and current. The study described in this note includes three parts: - A theoretical study of the large scale channel and a brief description of it are given. The results obtained so far in that domain are presented. - The fluctuation mode is thoroughly studied and the improvements to be made are defined. The study of a fluctuation linear channel with automatic commutation of scales is described and the results of the tests are given. In this large scale channel, the data processing method is analog. - To become independent of the problems generated by the use of analog processing of the fluctuation signal, a digital method of data processing is tested. The validity of that method is demonstrated. The results obtained on a test system realized according to this method are given and a preliminary plan for further research is defined [fr]

  7. Note: Design and development of wireless controlled aerosol sampling network for large scale aerosol dispersion experiments

    International Nuclear Information System (INIS)

    Gopalakrishnan, V.; Subramanian, V.; Baskaran, R.; Venkatraman, B.

    2015-01-01

    A wireless-based, custom-built aerosol sampling network has been designed, developed, and implemented for environmental aerosol sampling. These aerosol sampling systems were used in a field measurement campaign in which sodium aerosol dispersion experiments were conducted as a part of environmental impact studies related to the sodium cooled fast reactor. The sampling network contains 40 aerosol sampling units, each containing a custom-built sampling head and wireless control networking designed with Programmable System on Chip (PSoC™) and Xbee Pro RF modules. The base station control is designed using the graphical programming language LabVIEW. The sampling network is programmed to operate at a preset time, and the running status of the samplers in the network is visualized from the base station. The system is developed in such a way that it can be used for any other environmental sampling system deployed over a wide area and uneven terrain, where manual operation is difficult due to the requirement of simultaneous operation and status logging.

  8. Note: Design and development of wireless controlled aerosol sampling network for large scale aerosol dispersion experiments

    Energy Technology Data Exchange (ETDEWEB)

    Gopalakrishnan, V.; Subramanian, V.; Baskaran, R.; Venkatraman, B. [Radiation Impact Assessment Section, Radiological Safety Division, Indira Gandhi Centre for Atomic Research, Kalpakkam 603 102 (India)

    2015-07-15

    A wireless-based, custom-built aerosol sampling network has been designed, developed, and implemented for environmental aerosol sampling. These aerosol sampling systems were used in a field measurement campaign in which sodium aerosol dispersion experiments were conducted as a part of environmental impact studies related to the sodium cooled fast reactor. The sampling network contains 40 aerosol sampling units, each containing a custom-built sampling head and wireless control networking designed with Programmable System on Chip (PSoC™) and Xbee Pro RF modules. The base station control is designed using the graphical programming language LabVIEW. The sampling network is programmed to operate at a preset time, and the running status of the samplers in the network is visualized from the base station. The system is developed in such a way that it can be used for any other environmental sampling system deployed over a wide area and uneven terrain, where manual operation is difficult due to the requirement of simultaneous operation and status logging.

  9. TRAN-STAT: statistics for environmental studies, Number 22. Comparison of soil-sampling techniques for plutonium at Rocky Flats

    International Nuclear Information System (INIS)

    Gilbert, R.O.; Bernhardt, D.E.; Hahn, P.B.

    1983-01-01

    A summary of a field soil sampling study conducted around the Rocky Flats, Colorado, plant in May 1977 is presented. Several different soil sampling techniques that had been used in the area were applied at four different sites. One objective was to compare the average 239-240Pu concentration values obtained by the various soil sampling techniques used. There was also interest in determining whether there are differences in the reproducibility of the various techniques and how the techniques compared with the proposed EPA technique of sampling to a 1 cm depth. Statistically significant differences in average concentrations between the techniques were found. The differences could be largely related to the differences in sampling depth, the primary physical variable distinguishing the techniques. The reproducibility of the techniques was evaluated by comparing coefficients of variation. Differences between coefficients of variation were not statistically significant. Average (median) coefficients ranged from 21 to 42 percent for the five sampling techniques. A laboratory study indicated that various sample treatment and particle sizing techniques could increase the concentration of plutonium in the less than 10 micrometer size fraction by up to a factor of about 4 compared to the 2 mm size fraction.
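The reproducibility statistic used above is simple to compute. The replicate concentrations and technique labels below are hypothetical, included only to show the calculation:

```python
import statistics

def coefficient_of_variation(values):
    """Sample coefficient of variation in percent: 100 * stdev / mean."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical replicate Pu concentrations (arbitrary units) for two
# sampling techniques; the study's actual medians were 21-42 percent.
replicates = {
    "shallow scoop": [8.0, 10.0, 12.0],
    "deeper core":   [4.5, 5.0, 5.5],
}
cv_by_technique = {t: coefficient_of_variation(v)
                   for t, v in replicates.items()}
```

Because the CV normalizes spread by the mean, it allows techniques with very different sampling depths, and hence very different average concentrations, to be compared on one scale.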

  10. Sample Size in Qualitative Interview Studies: Guided by Information Power.

    Science.gov (United States)

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit

    2015-11-27

    Sample sizes must be ascertained in qualitative studies, as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is "saturation." Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept "information power" to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds that is relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model where these elements of information and their relevant dimensions are related to information power. Application of this model in the planning and during data collection of a qualitative study is discussed. © The Author(s) 2015.

  11. Neutron activation analysis of archaeological artifacts using the conventional relative method: a realistic approach for analysis of large samples

    International Nuclear Information System (INIS)

    Bedregal, P.S.; Mendoza, A.; Montoya, E.H.; Cohen, I.M.; Universidad Tecnologica Nacional, Buenos Aires; Oscar Baltuano

    2012-01-01

    A new approach for the analysis of entire potsherds of archaeological interest by INAA, using the conventional relative method, is described. The analytical method proposed involves, primarily, the preparation of replicates of the original archaeological pottery with well-known chemical composition (standards), destined to be irradiated simultaneously with the original object (sample) in a well-thermalized external neutron beam of the RP-10 reactor. The basic advantage of this proposal is that it avoids the need to perform complicated corrections for neutron self-shielding, neutron self-thermalization and gamma-ray attenuation effects when dealing with large samples. In addition, and in contrast with the other methods, the main advantages are the possibility of evaluating the uncertainty of the results and, fundamentally, of validating the overall methodology. (author)
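The arithmetic of the conventional relative method is a single ratio: when sample and standard are irradiated and counted under the same conditions, matrix and flux effects largely cancel. A minimal sketch, with hypothetical counts, masses and standard concentration (decay and geometry corrections omitted for brevity):

```python
def relative_inaa(counts_sample, counts_standard,
                  mass_sample, mass_standard, conc_standard):
    """Conventional relative INAA: the unknown concentration follows from
    the ratio of specific count rates of the sample and a co-irradiated
    standard of known composition."""
    specific_sample = counts_sample / mass_sample
    specific_standard = counts_standard / mass_standard
    return conc_standard * specific_sample / specific_standard

# Hypothetical numbers: a replicate standard containing 5.0 wt% of the
# element of interest, counted alongside the object.
conc = relative_inaa(counts_sample=24_000, counts_standard=12_000,
                     mass_sample=300.0, mass_standard=300.0,
                     conc_standard=5.0)
```

The point of irradiating a replicate of the same shape and matrix is precisely that the self-shielding and attenuation factors appear in both numerator and denominator of this ratio and drop out.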

  12. Personality traits and eating habits in a large sample of Estonians.

    Science.gov (United States)

    Mõttus, René; Realo, Anu; Allik, Jüri; Deary, Ian J; Esko, Tõnu; Metspalu, Andres

    2012-11-01

    Diet has health consequences, which makes knowing the psychological correlates of dietary habits important. Associations between dietary habits and personality traits were examined in a large sample of Estonians (N = 1,691) aged between 18 and 89 years. Dietary habits were measured using 11 items, which grouped into two factors reflecting (a) health aware and (b) traditional dietary patterns. The health aware diet factor was defined by eating more cereal and dairy products, fish, vegetables and fruits. The traditional diet factor was defined by eating more potatoes, meat and meat products, and bread. Personality was assessed by participants themselves and by people who knew them well. The questionnaire used was the NEO Personality Inventory-3, which measures the broad Five-Factor Model personality traits of Neuroticism, Extraversion, Openness, Agreeableness, and Conscientiousness, along with six facets for each trait. Gender, age and educational level were controlled for. Higher scores on the health aware diet factor were associated with lower Neuroticism, and higher Extraversion, Openness and Conscientiousness (effect sizes were modest: r = 0.11 to 0.17 in self-ratings, and r = 0.08 to 0.11 in informant-ratings, ps < 0.01 or lower). Higher scores on the traditional diet factor were related to lower levels of Openness (r = -0.14 and -0.13, p < 0.001, self- and informant-ratings, respectively). Endorsement of healthy and avoidance of traditional dietary items are associated with people's personality trait levels, especially higher Openness. The results may inform dietary interventions with respect to possible barriers to diet change.

  13. Subtype-independent near full-length HIV-1 genome sequencing and assembly to be used in large molecular epidemiological studies and clinical management.

    Science.gov (United States)

    Grossmann, Sebastian; Nowak, Piotr; Neogi, Ujjwal

    2015-01-01

    HIV-1 near full-length genome (HIV-NFLG) sequencing from plasma is an attractive multidimensional tool to apply in large-scale population-based molecular epidemiological studies. It also enables genotypic resistance testing (GRT) for all drug target sites allowing effective intervention strategies for control and prevention in high-risk population groups. Thus, the main objective of this study was to develop a simplified subtype-independent, cost- and labour-efficient HIV-NFLG protocol that can be used in clinical management as well as in molecular epidemiological studies. Plasma samples (n=30) were obtained from HIV-1B (n=10), HIV-1C (n=10), CRF01_AE (n=5) and CRF01_AG (n=5) infected individuals with minimum viral load >1120 copies/ml. The amplification was performed with two large amplicons of 5.5 kb and 3.7 kb, sequenced with 17 primers to obtain HIV-NFLG. GRT was validated against ViroSeq™ HIV-1 Genotyping System. After excluding four plasma samples with low-quality RNA, a total of 26 samples were attempted. Among them, NFLG was obtained from 24 (92%) samples with the lowest viral load being 3000 copies/ml. High (>99%) concordance was observed between HIV-NFLG and ViroSeq™ when determining the drug resistance mutations (DRMs). The N384I connection mutation was additionally detected by NFLG in two samples. Our high efficiency subtype-independent HIV-NFLG is a simple and promising approach to be used in large-scale molecular epidemiological studies. It will facilitate the understanding of the HIV-1 pandemic population dynamics and outline effective intervention strategies. Furthermore, it can potentially be applicable in clinical management of drug resistance by evaluating DRMs against all available antiretrovirals in a single assay.

  14. Diversity in the stellar velocity dispersion profiles of a large sample of brightest cluster galaxies z ≤ 0.3

    Science.gov (United States)

    Loubser, S. I.; Hoekstra, H.; Babul, A.; O'Sullivan, E.

    2018-06-01

    We analyse spatially resolved deep optical spectroscopy of brightest cluster galaxies (BCGs) located in 32 massive clusters with redshifts of 0.05 ≤ z ≤ 0.30 to investigate their velocity dispersion profiles. We compare these measurements to those of other massive early-type galaxies, as well as central group galaxies, where relevant. This unique, large sample extends to the most extreme of massive galaxies, spanning MK between -25.7 and -27.8 mag, and host cluster halo mass M500 up to 1.7 × 1015 M⊙. To compare the kinematic properties between brightest group and cluster members, we analyse similar spatially resolved long-slit spectroscopy for 23 nearby brightest group galaxies (BGGs) from the Complete Local-Volume Groups Sample. We find a surprisingly large variety in velocity dispersion slopes for BCGs, with a significantly larger fraction of positive slopes, unique compared to other (non-central) early-type galaxies as well as the majority of the brightest members of the groups. We find that the velocity dispersion slopes of the BCGs and BGGs correlate with the luminosity of the galaxies, and we quantify this correlation. It is not clear whether the full diversity in velocity dispersion slopes that we see is reproduced in simulations.
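A velocity dispersion "slope" of the kind discussed above is commonly quantified as the exponent of a power-law fit to the radial profile. The synthetic rising profile below is purely illustrative; the normalisation, radii and exponent are assumptions, not values from the paper:

```python
import numpy as np

def dispersion_slope(radii, sigma):
    """Exponent eta of a power-law fit sigma(R) ~ R**eta, obtained from a
    least-squares straight-line fit in log-log space."""
    eta, _ = np.polyfit(np.log10(radii), np.log10(sigma), 1)
    return eta

# Synthetic, gently rising profile: sigma = 250 km/s * (R / 1 kpc)**0.08
radii = np.linspace(1.0, 20.0, 30)   # kpc (illustrative)
sigma = 250.0 * radii**0.08          # km/s (illustrative)
```

A positive eta corresponds to the rising profiles found to be unusually common among the BCGs, while ordinary early-type galaxies typically show flat or falling profiles (negative eta).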

  15. Gravimetric dust sampling for control purposes and occupational dust sampling.

    CSIR Research Space (South Africa)

    Unsted, AD

    1997-02-01

    Full Text Available Prior to the introduction of gravimetric dust sampling, konimeters had been used for dust sampling, which was largely for control purposes. Whether or not absolute results were achievable was not an issue since relative results were used to evaluate...

  16. Malaria diagnosis from pooled blood samples: comparative analysis of real-time PCR, nested PCR and immunoassay as a platform for the molecular and serological diagnosis of malaria on a large-scale

    Directory of Open Access Journals (Sweden)

    Giselle FMC Lima

    2011-09-01

    Full Text Available Malaria diagnosis has traditionally been made using thick blood smears, but more sensitive and faster techniques are required to process large numbers of samples in clinical and epidemiological studies and in blood donor screening. Here, we evaluated molecular and serological tools to build a screening platform for pooled samples aimed at reducing both the time and the cost of these diagnoses. Positive and negative samples were analysed in individual and pooled experiments using real-time polymerase chain reaction (PCR), nested PCR and an immunochromatographic test. For the individual tests, 46/49 samples were positive by real-time PCR, 46/49 were positive by nested PCR and 32/46 were positive by the immunochromatographic test. For the assays performed using pooled samples, 13/15 samples were positive by real-time PCR and nested PCR and 11/15 were positive by the immunochromatographic test. These molecular methods demonstrated sensitivity and specificity for both the individual and pooled samples. Due to the advantages of real-time PCR, such as fast processing and the closed system, this method should be indicated as the first choice for use in large-scale diagnosis, and nested PCR should be used for species differentiation. However, additional field isolates should be tested to confirm the results achieved using cultured parasites, and the serological test should only be adopted as a complementary method for malaria diagnosis.
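The cost saving that motivates pooled screening is easy to quantify with the classic two-stage (Dorfman) scheme: test each pool once, then retest individuals only from positive pools. This is a generic sketch of that arithmetic, assuming a perfect assay, not a calculation from the study itself:

```python
def dorfman_tests_per_sample(pool_size, prevalence):
    """Expected number of tests per sample under two-stage (Dorfman)
    pooled testing: one pooled assay shared across the pool, plus
    individual retests whenever the pool is positive."""
    p_pool_positive = 1.0 - (1.0 - prevalence) ** pool_size
    return 1.0 / pool_size + p_pool_positive

# At 1% prevalence, pools of 10 need roughly 0.2 tests per sample
# instead of 1 test per sample for purely individual screening.
cost = dorfman_tests_per_sample(pool_size=10, prevalence=0.01)
```

The saving evaporates as prevalence rises (most pools turn positive), which is why pooling suits low-prevalence settings such as blood donor screening.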

  17. A Preliminary Study on Sensitivity and Uncertainty Analysis with Statistic Method: Uncertainty Analysis with Cross Section Sampling from Lognormal Distribution

    Energy Technology Data Exchange (ETDEWEB)

    Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of); Noh, Jae Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    The uncertainty evaluation with the statistical method is performed by repeating the transport calculation while sampling the directly perturbed nuclear data. Hence, a reliable uncertainty result can be obtained by analyzing the results of the numerous transport calculations. One of the problems in uncertainty analysis with the statistical approach is that cross section sampling from the normal (Gaussian) distribution with a relatively large standard deviation leads to sampling errors, such as the sampling of negative cross sections. Some correction methods have been noted; however, those methods can distort the distribution of the sampled cross sections. In this study, a sampling method for the nuclear data using the lognormal distribution is proposed. After that, criticality calculations with the sampled nuclear data are performed and the results are compared with those from the normal distribution, which was conventionally used in previous studies. In this study, the statistical sampling method of the cross section with the lognormal distribution was proposed to increase the sampling accuracy without negative sampling errors. Also, a stochastic cross section sampling and writing program was developed. For the sensitivity and uncertainty analysis, the cross section sampling was performed with the normal and lognormal distributions. The uncertainties caused by the covariance of (n,γ) cross sections were evaluated by solving the GODIVA problem. The results show that the sampling method with the lognormal distribution can efficiently solve the negative sampling problem referred to in previous studies. It is expected that this study will contribute to increasing the accuracy of sampling-based uncertainty analysis.
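The core trick can be sketched in a few lines: match the first two moments of a lognormal to the evaluated cross section and its uncertainty, then sample. The numbers below (a unit cross section with 50% relative uncertainty) are illustrative, not from the study:

```python
import numpy as np

def sample_cross_sections(mean, rel_std, size, seed=0):
    """Draw cross-section samples from a lognormal whose mean and standard
    deviation match the evaluated value (mean) and its absolute uncertainty
    (rel_std * mean). Unlike normal sampling with a large standard
    deviation, every draw is strictly positive, so negative cross sections
    cannot occur."""
    var = (rel_std * mean) ** 2
    # Moment matching: if X ~ LogNormal(mu, sigma), then
    # E[X] = exp(mu + sigma^2/2) and Var[X] = (exp(sigma^2) - 1) * E[X]^2.
    sigma2 = np.log(1.0 + var / mean**2)
    mu = np.log(mean) - sigma2 / 2.0
    rng = np.random.default_rng(seed)
    return rng.lognormal(mu, np.sqrt(sigma2), size)

xs = sample_cross_sections(mean=1.0, rel_std=0.5, size=200_000)
```

With the same mean and a 50% relative standard deviation, a normal distribution would put roughly 2.3% of draws below zero; the lognormal reproduces both moments with none.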

  18. A Preliminary Study on Sensitivity and Uncertainty Analysis with Statistic Method: Uncertainty Analysis with Cross Section Sampling from Lognormal Distribution

    International Nuclear Information System (INIS)

    Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung; Noh, Jae Man

    2013-01-01

    The uncertainty evaluation with the statistical method is performed by repeating the transport calculation while sampling the directly perturbed nuclear data. Hence, a reliable uncertainty result can be obtained by analyzing the results of the numerous transport calculations. One of the problems in uncertainty analysis with the statistical approach is that cross section sampling from the normal (Gaussian) distribution with a relatively large standard deviation leads to sampling errors, such as the sampling of negative cross sections. Some correction methods have been noted; however, those methods can distort the distribution of the sampled cross sections. In this study, a sampling method for the nuclear data using the lognormal distribution is proposed. After that, criticality calculations with the sampled nuclear data are performed and the results are compared with those from the normal distribution, which was conventionally used in previous studies. In this study, the statistical sampling method of the cross section with the lognormal distribution was proposed to increase the sampling accuracy without negative sampling errors. Also, a stochastic cross section sampling and writing program was developed. For the sensitivity and uncertainty analysis, the cross section sampling was performed with the normal and lognormal distributions. The uncertainties caused by the covariance of (n,γ) cross sections were evaluated by solving the GODIVA problem. The results show that the sampling method with the lognormal distribution can efficiently solve the negative sampling problem referred to in previous studies. It is expected that this study will contribute to increasing the accuracy of sampling-based uncertainty analysis.

  19. Analysis of IFR samples at ANL-E

    International Nuclear Information System (INIS)

    Bowers, D.L.; Sabau, C.S.

    1993-01-01

The Analytical Chemistry Laboratory analyzes a variety of samples submitted by the different research groups within IFR. This talk describes the analytical work on samples generated by the Plutonium Electrorefiner, Large Scale Electrorefiner and Waste Treatment Studies. The majority of these samples contain transuranics and necessitate facilities that safely contain these radioisotopes. Details such as sample receiving, dissolution techniques, chemical separations, instrumentation used, and reporting of results are discussed. The importance of interactions between customer and analytical personnel is also demonstrated

  20. Studies of diffractive scattering of photons at large momentum transfer and of the VFPS detector at HERA

    International Nuclear Information System (INIS)

    Hreus, Tomas

    2008-09-01

In this thesis, two studies of diffractive phenomena in electron-proton collisions with the H1 detector at HERA are presented. The first is a study of inclusive elastic diffractive events ep → eXp in the regime of high photon virtuality (Q² > a few GeV²), with the scattered proton detected by the Very Forward Proton Spectrometer (VFPS). The VFPS detector, designed to measure diffractively scattered protons with high acceptance, was installed in 2004 to benefit from the HERA II luminosity increase. The selected event sample, with an integrated luminosity of 130.2 pb⁻¹, was collected in the years 2006-2007. Data sample distributions are compared to the prediction based on the diffractive parton distribution functions, as extracted from the H1 measurement of the diffractive structure function F₂^D(3) at HERA I. After a study of the VFPS efficiency, the VFPS acceptance as a function of x_P is estimated and studied in relation to the forward proton beam optics. The second study leads to the cross section measurement of the diffractive scattering of quasi-real photons off protons, γp → γY, at large momentum transfer |t|. The final-state photon is separated from the proton dissociation system, Y, by a large rapidity gap and has a large transverse momentum, p_T > 2 GeV. The large p_T implies the presence of the hard scale t (|t| ≈ p_T²) and allows predictions of perturbative QCD to be applied. The measurement is based on an integrated luminosity of 46.2 pb⁻¹ of data collected in the 1999-2000 running period. Cross sections σ(W) as a function of the incident photon-proton centre-of-mass energy W, and dσ/d|t|, are measured in the range Q 2 2 , 175 2 and y_P < 0.05. The cross section measurements have been compared to predictions of LLA BFKL calculations. (orig.)

  1. Evaluation of Inflammatory Markers in a Large Sample of Obstructive Sleep Apnea Patients without Comorbidities

    Directory of Open Access Journals (Sweden)

    Izolde Bouloukaki

    2017-01-01

Systemic inflammation is important in obstructive sleep apnea (OSA) pathophysiology and its comorbidity. We aimed to assess the levels of inflammatory biomarkers in a large sample of OSA patients and to investigate any correlation between these biomarkers and clinical and polysomnographic (PSG) parameters. This was a cross-sectional study in which 2983 patients who had undergone a polysomnography for OSA diagnosis were recruited. Patients with known comorbidities were excluded. Included patients (n=1053) were grouped according to apnea-hypopnea index (AHI) as mild, moderate, and severe. Patients with AHI < 5 served as controls. Demographics, PSG data, and levels of high-sensitivity C-reactive protein (hs-CRP), fibrinogen, erythrocyte sedimentation rate (ESR), and uric acid (UA) were measured and compared between groups. A significant difference was found between groups in hs-CRP, fibrinogen, and UA. All biomarkers were independently associated with OSA severity and gender (p<0.05). Females had increased levels of hs-CRP, fibrinogen, and ESR (p<0.001) compared to men. In contrast, UA levels were higher in men (p<0.001). Our results suggest that inflammatory markers significantly increase in patients with OSA without known comorbidities and correlate with OSA severity. These findings may have important implications regarding OSA diagnosis, monitoring, treatment, and prognosis. This trial is registered with ClinicalTrials.gov number NCT03070769.

  2. Problematic Social Media Use: Results from a Large-Scale Nationally Representative Adolescent Sample.

    Science.gov (United States)

    Bányai, Fanni; Zsila, Ágnes; Király, Orsolya; Maraz, Aniko; Elekes, Zsuzsanna; Griffiths, Mark D; Andreassen, Cecilie Schou; Demetrovics, Zsolt

    2017-01-01

    Despite social media use being one of the most popular activities among adolescents, prevalence estimates among teenage samples of social media (problematic) use are lacking in the field. The present study surveyed a nationally representative Hungarian sample comprising 5,961 adolescents as part of the European School Survey Project on Alcohol and Other Drugs (ESPAD). Using the Bergen Social Media Addiction Scale (BSMAS) and based on latent profile analysis, 4.5% of the adolescents belonged to the at-risk group, and reported low self-esteem, high level of depression symptoms, and elevated social media use. Results also demonstrated that BSMAS has appropriate psychometric properties. It is concluded that adolescents at-risk of problematic social media use should be targeted by school-based prevention and intervention programs.

  3. Present state of tandem accelerator analysis facility of the National Institute for Environmental Studies. 2. Sample preparation

    Energy Technology Data Exchange (ETDEWEB)

    Kumamoto, Yuichiro; Shibata, Yasuyuki; Kume, Hiroshi; Tanaka, Atsushi; Yoneda, Minoru; Uehiro, Takashi; Morita, Masatoshi [National Inst. for Environmental Studies, Tsukuba, Ibaraki (Japan)

    1996-12-01

An AMS facility built around a 5 MV tandem accelerator (15SDH-2) was introduced at the National Institute for Environmental Studies in September 1995. The injection part of the apparatus combines a negative ion source for solid samples (MC-SNICS) with a sequential injection system, and a negative ion source for gas samples (MGF-SNICS) with a simultaneous injection system. In this study, graphite targets for 14C measurement were prepared, with the immediate aim of supplying the solid samples for MC-SNICS needed to tune the apparatus. As a result, it was found that most of the isotope fractionation during graphite formation from oxalic acid could be neglected. However, since the efficiency of graphite formation is reported to change markedly when trace amounts of gases such as sulfur dioxide are mixed in, future work will investigate such isotope fractionation effects in samples other than oxalic acid. In order to evaluate 14C contamination during sample preparation, graphite is currently being prepared from organic matter and carbon dioxide obtained from fossil fuels, which contain no 14C. (G.K.)

  4. Developing Sampling Frame for Case Study: Challenges and Conditions

    Science.gov (United States)

    Ishak, Noriah Mohd; Abu Bakar, Abu Yazid

    2014-01-01

Because of its reliance on inferential statistical analysis, the issue of random sampling is pertinent to any quantitative study. Unlike quantitative studies, the elimination of inferential statistical analysis allows qualitative researchers to be more creative in dealing with the sampling issue. Since results from a qualitative study cannot be generalized to the bigger population,…

  5. Soil Characterization by Large Scale Sampling of Soil Mixed with Buried Construction Debris at a Former Uranium Fuel Fabrication Facility

    International Nuclear Information System (INIS)

    Nardi, A.J.; Lamantia, L.

    2009-01-01

Recent soil excavation activities on a site identified the presence of buried uranium-contaminated building construction debris. The site was previously the location of a low-enriched uranium fuel fabrication facility. This resulted in the collection of excavated materials from the two locations where contaminated subsurface debris was identified. The excavated material was temporarily stored in two piles on the site until a determination could be made as to the appropriate disposition of the material. Characterization of the excavated material was undertaken in a manner that involved the collection of large-scale samples of the excavated material in 1 cubic meter Super Sacks. Twenty bags were filled with excavated material that consisted of the mixture of both the construction debris and the associated soil. In order to obtain information on the level of activity associated with the construction debris, ten additional bags were filled with construction debris that had been separated, to the extent possible, from the associated soil. Radiological surveys were conducted of the resulting bags of collected materials and the soil associated with the waste mixture. The 30 large samples, collected as bags, were counted using an In-Situ Object Counting System (ISOCS) unit to determine the average concentration of U-235 present in each bag. The soil fraction was sampled by the collection of 40 samples of soil for analysis in an on-site laboratory. A fraction of these samples were also sent to an off-site laboratory for additional analysis. This project provided the necessary soil characterization information to allow consideration of alternate options for disposition of the material. The identified contaminant was verified to be low-enriched uranium. Concentrations of uranium in the waste were found to be lower than the calculated site-specific derived concentration guideline levels (DCGLs) but higher than the NRC's screening values. The methods and results are presented

  6. The relationship between the Five-Factor Model personality traits and peptic ulcer disease in a large population-based adult sample.

    Science.gov (United States)

    Realo, Anu; Teras, Andero; Kööts-Ausmees, Liisi; Esko, Tõnu; Metspalu, Andres; Allik, Jüri

    2015-12-01

The current study examined the relationship between the Five-Factor Model personality traits and physician-confirmed peptic ulcer disease (PUD) diagnosis in a large population-based adult sample, controlling for the relevant behavioral and sociodemographic factors. Personality traits were assessed by participants themselves and by knowledgeable informants using the NEO Personality Inventory-3 (NEO PI-3). When controlling for age, sex, education, and cigarette smoking, only one of the five NEO PI-3 domain scales (higher Neuroticism) and two facet scales (lower A1: Trust and higher C1: Competence) made a small yet significant contribution to predicting PUD. The findings thus point to personality traits that are associated with the diagnosis of PUD at a particular point in time. Further prospective studies with a longitudinal design and multiple assessments would be needed to fully understand whether the FFM personality traits serve as risk factors for the development of PUD. © 2015 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  7. Ecstasy use and depression: a 4-year longitudinal study among an Australian general community sample.

    Science.gov (United States)

    George, Amanda M; Olesen, Sarah; Tait, Robert J

    2013-10-01

Longitudinal, population-based studies can better assess the relationship of ecstasy use with depression. We examined whether change in ecstasy use was associated with change in depressive symptoms/probable depression over a 4-year period, among a large Australian sample. The Personality and Total Health project is a longitudinal general community study of Australians from Canberra and Queanbeyan. Data from the youngest cohort when aged 24-30 (N = 2,128) and 4 years later (N = 1,977) were included. The Goldberg depression scale and the Brief Patient Health Questionnaire measured depressive symptoms and probable depression, respectively. Multilevel growth models also considered demographics, psychosocial characteristics, and other drug use. Ecstasy use was not associated with long-term depressive symptoms or greater odds of depression in multivariate analyses. Users had more self-reported depressive symptoms when using ecstasy compared to not using. However, differences between people who had and had not ever used ecstasy largely accounted for this. Other factors were more important in the prediction of depression. It would be premature to conclude that ecstasy use is not related to the development of long-term depressive symptoms, given the relatively low level of ecstasy and other drug use in this community sample. Results showed that other factors need to be considered when investigating ecstasy use and depression.

  8. Sampling based uncertainty analysis of 10% hot leg break LOCA in large scale test facility

    International Nuclear Information System (INIS)

    Sengupta, Samiran; Kraina, V.; Dubey, S. K.; Rao, R. S.; Gupta, S. K.

    2010-01-01

Sampling-based uncertainty analysis was carried out to quantify the uncertainty in predictions of the best-estimate code RELAP5/MOD3.2 for a thermal-hydraulic test (10% hot leg break LOCA) performed in the Large Scale Test Facility (LSTF) as part of an IAEA coordinated research project. The nodalisation of the test facility was qualified at both the steady-state and transient level by systematically applying the procedures of the uncertainty methodology based on accuracy extrapolation (UMAE); uncertainty analysis was carried out using the Latin hypercube sampling (LHS) method to evaluate the uncertainty in ten input parameters. Sixteen output parameters were selected for uncertainty evaluation, and the uncertainty band between the 5th and 95th percentiles of each output parameter was evaluated. It was observed that the uncertainty band for the primary pressure during two-phase blowdown is larger than that of the remaining period. Similarly, a larger uncertainty band is observed for the accumulator injection flow during the reflood phase. Importance analysis was also carried out, and standardized rank regression coefficients were computed to quantify the effect of each individual input parameter on the output parameters. It was observed that the break discharge coefficient is the most important uncertain parameter for the prediction of all the primary side parameters and that the steam generator (SG) relief pressure setting is the most important parameter in predicting the SG secondary pressure
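The sampling step can be sketched with a generic Latin hypercube routine (not the one used in the study; the discharge-coefficient range below is hypothetical): each uncertain input has its range divided into as many strata as code runs, and every stratum is sampled exactly once, in a random order per input.

```python
import numpy as np

def latin_hypercube(n_samples, n_params, rng):
    """Minimal Latin hypercube sampler on the unit hypercube: each
    parameter's range is split into n_samples equal strata and each
    stratum is hit exactly once, in a random order per parameter."""
    u = rng.uniform(size=(n_samples, n_params))   # jitter inside each stratum
    strata = np.array([rng.permutation(n_samples) for _ in range(n_params)]).T
    return (strata + u) / n_samples

rng = np.random.default_rng(42)
lhs = latin_hypercube(10, 2, rng)   # e.g. 10 code runs over 2 uncertain inputs

# Scale column 0 to a hypothetical break discharge-coefficient range [0.8, 1.2]
discharge = 0.8 + 0.4 * lhs[:, 0]
```

Compared with plain random sampling, this guarantees the full range of every input is covered even with a small number of code runs.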

  9. Sampling pig farms at the abattoir in a cross-sectional study − Evaluation of a sampling method

    DEFF Research Database (Denmark)

    Birkegård, Anna Camilla; Hisham Beshara Halasa, Tariq; Toft, Nils

    2017-01-01

A cross-sectional study design is relatively inexpensive, fast and easy to conduct when compared to other study designs. Careful planning is essential to obtaining a representative sample of the population, and the recommended approach is to use simple random sampling from an exhaustive list of units in the target population. This approach is rarely feasible in practice, and other sampling procedures must often be adopted. For example, when slaughter pigs are the target population, sampling the pigs on the slaughter line may be an alternative to on-site sampling at a list of farms. However, in this study it was not possible to visit farms within the designated timeframe. Therefore, it was decided to use convenience sampling at the abattoir. Our approach was carried out in three steps: 1) planning: using data from meat inspection to plan at which abattoirs and how many farms to sample; 2)…

  10. Examining gray matter structure associated with academic performance in a large sample of Chinese high school students.

    Science.gov (United States)

    Wang, Song; Zhou, Ming; Chen, Taolin; Yang, Xun; Chen, Guangxiang; Wang, Meiyun; Gong, Qiyong

    2017-04-18

Achievement in school is crucial for students to be able to pursue successful careers and lead happy lives in the future. Although many psychological attributes have been found to be associated with academic performance, the neural substrates of academic performance remain largely unknown. Here, we investigated the relationship between brain structure and academic performance in a large sample of high school students via structural magnetic resonance imaging (S-MRI), using a voxel-based morphometry (VBM) approach. The whole-brain regression analyses showed that higher academic performance was related to greater regional gray matter density (rGMD) of the left dorsolateral prefrontal cortex (DLPFC), which is considered a neural center at the intersection of cognitive and non-cognitive functions. Furthermore, mediation analyses suggested that general intelligence partially mediated the impact of left DLPFC density on academic performance. These results persisted even after adjusting for the effect of family socioeconomic status (SES). In short, our findings reveal a potential neuroanatomical marker for academic performance and highlight the role of general intelligence in explaining the relationship between brain structure and academic performance.

  11. Relationships between anhedonia, suicidal ideation and suicide attempts in a large sample of physicians.

    Directory of Open Access Journals (Sweden)

    Gwenolé Loas

The relationships between anhedonia and suicidal ideation or suicide attempts were explored in a large sample of physicians using the interpersonal psychological theory of suicide. We tested two hypotheses: firstly, that there is a significant relationship between anhedonia and suicidality and, secondly, that anhedonia could mediate the relationships between suicidal ideation or suicide attempts and thwarted belongingness or perceived burdensomeness. In a cross-sectional study, 557 physicians filled out several questionnaires measuring suicide risk and depression, using the abridged version of the Beck Depression Inventory (BDI-13), as well as demographic and job-related information. Ratings of anhedonia, perceived burdensomeness and thwarted belongingness were then extracted from the BDI-13 and the other questionnaires. Significant relationships were found between anhedonia and suicidal ideation or suicide attempts, even when significant variables or covariates were taken into account, in particular depressive symptoms. Mediation analyses showed significant partial or complete mediations, where anhedonia mediated the relationships between suicidal ideation (lifetime or recent) and perceived burdensomeness or thwarted belongingness. For suicide attempts, complete mediation was found only between anhedonia and thwarted belongingness. When the different components of anhedonia were taken into account, dissatisfaction (not the loss of interest or work inhibition) had significant relationships with suicidal ideation, whereas work inhibition had significant relationships with suicide attempts. Anhedonia and its component of dissatisfaction could be a risk factor for suicidal ideation and could mediate the relationship between suicidal ideation and perceived burdensomeness or thwarted belongingness in physicians. Dissatisfaction, in particular in the workplace, may be explored as a strong predictor of suicidal ideation in physicians.

  12. Statistics and sampling in transuranic studies

    International Nuclear Information System (INIS)

    Eberhardt, L.L.; Gilbert, R.O.

    1980-01-01

    The existing data on transuranics in the environment exhibit a remarkably high variability from sample to sample (coefficients of variation of 100% or greater). This chapter stresses the necessity of adequate sample size and suggests various ways to increase sampling efficiency. Objectives in sampling are regarded as being of great importance in making decisions as to sampling methodology. Four different classes of sampling methods are described: (1) descriptive sampling, (2) sampling for spatial pattern, (3) analytical sampling, and (4) sampling for modeling. A number of research needs are identified in the various sampling categories along with several problems that appear to be common to two or more such areas
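The chapter's point about adequate sample size can be made concrete with the standard normal-approximation formula n ≈ (z·CV/d)² for estimating a mean to within a relative margin d. This is a textbook result, not taken from the chapter itself:

```python
import math

def sample_size_for_relative_error(cv, rel_error, z=1.96):
    """Approximate number of samples needed so the sample mean falls
    within +/- rel_error of the true mean with ~95% confidence, given
    a coefficient of variation cv (normal approximation)."""
    return math.ceil((z * cv / rel_error) ** 2)

# CV of 100% (typical for transuranics, per the text), 20% target precision
print(sample_size_for_relative_error(1.0, 0.20))   # -> 97
```

Doubling the coefficient of variation quadruples the required sample size, which is why the high sample-to-sample variability reported for transuranics makes sampling efficiency so important.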

  13. Proteomic Biomarker Discovery in 1000 Human Plasma Samples with Mass Spectrometry.

    Science.gov (United States)

    Cominetti, Ornella; Núñez Galindo, Antonio; Corthésy, John; Oller Moreno, Sergio; Irincheeva, Irina; Valsesia, Armand; Astrup, Arne; Saris, Wim H M; Hager, Jörg; Kussmann, Martin; Dayon, Loïc

    2016-02-05

    The overall impact of proteomics on clinical research and its translation has lagged behind expectations. One recognized caveat is the limited size (subject numbers) of (pre)clinical studies performed at the discovery stage, the findings of which fail to be replicated in larger verification/validation trials. Compromised study designs and insufficient statistical power are consequences of the to-date still limited capacity of mass spectrometry (MS)-based workflows to handle large numbers of samples in a realistic time frame, while delivering comprehensive proteome coverages. We developed a highly automated proteomic biomarker discovery workflow. Herein, we have applied this approach to analyze 1000 plasma samples from the multicentered human dietary intervention study "DiOGenes". Study design, sample randomization, tracking, and logistics were the foundations of our large-scale study. We checked the quality of the MS data and provided descriptive statistics. The data set was interrogated for proteins with most stable expression levels in that set of plasma samples. We evaluated standard clinical variables that typically impact forthcoming results and assessed body mass index-associated and gender-specific proteins at two time points. We demonstrate that analyzing a large number of human plasma samples for biomarker discovery with MS using isobaric tagging is feasible, providing robust and consistent biological results.

  14. Problematic Social Media Use: Results from a Large-Scale Nationally Representative Adolescent Sample.

    Directory of Open Access Journals (Sweden)

    Fanni Bányai

Despite social media use being one of the most popular activities among adolescents, prevalence estimates among teenage samples of social media (problematic) use are lacking in the field. The present study surveyed a nationally representative Hungarian sample comprising 5,961 adolescents as part of the European School Survey Project on Alcohol and Other Drugs (ESPAD). Using the Bergen Social Media Addiction Scale (BSMAS) and based on latent profile analysis, 4.5% of the adolescents belonged to the at-risk group, and reported low self-esteem, high level of depression symptoms, and elevated social media use. Results also demonstrated that BSMAS has appropriate psychometric properties. It is concluded that adolescents at-risk of problematic social media use should be targeted by school-based prevention and intervention programs.

  15. Proteomic biomarker discovery in 1000 human plasma samples with mass spectrometry

    DEFF Research Database (Denmark)

    Cominetti, Ornella; Núñez Galindo, Antonio; Corthésy, John

    2016-01-01

We developed a highly automated proteomic biomarker discovery workflow. Herein, we have applied this approach to analyze 1000 plasma samples from the multicentered human dietary intervention study "DiOGenes". Study design, sample randomization, tracking, and logistics were the foundations of our large-scale study. We checked…

  16. Continuous sampling from distributed streams

    DEFF Research Database (Denmark)

    Graham, Cormode; Muthukrishnan, S.; Yi, Ke

    2012-01-01

A fundamental problem in data management is to draw and maintain a sample of a large data set, for approximate query answering, selectivity estimation, and query planning. With large, streaming data sets, this problem becomes particularly difficult when the data is shared across multiple distributed sites. The main challenge is to ensure that a sample is drawn uniformly across the union of the data while minimizing the communication needed to run the protocol on the evolving data. At the same time, it is also necessary to make the protocol lightweight, by keeping the space and time costs low for each participant. In this article, we present communication-efficient protocols for continuously maintaining a sample (both with and without replacement) from k distributed streams. These apply to the case when we want a sample from the full streams, and to the sliding window cases of only the W most…
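The single-site core of such protocols is classic reservoir sampling (Algorithm R), sketched below. The distributed, communication-efficient versions described in the article build on this idea but are not reproduced here:

```python
import random

def reservoir_sample(stream, k, rng=random):
    """Algorithm R: maintain a uniform without-replacement sample of
    size k over a stream of unknown length, in a single pass."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)          # fill the reservoir first
        else:
            j = rng.randrange(i + 1)        # item i+1 survives with prob k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

random.seed(1)
sample = reservoir_sample(range(10_000), 20)
```

Every item ends up in the reservoir with probability exactly k/n, regardless of the stream length n, which is what makes the technique suitable for unbounded streams.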

  17. Does increasing the size of bi-weekly samples of records influence results when using the Global Trigger Tool? An observational study of retrospective record reviews of two different sample sizes.

    Science.gov (United States)

    Mevik, Kjersti; Griffin, Frances A; Hansen, Tonje E; Deilkås, Ellen T; Vonen, Barthold

    2016-04-25

To investigate the impact of increasing the sample of records reviewed bi-weekly with the Global Trigger Tool method to identify adverse events in hospitalised patients. Retrospective observational study. A Norwegian 524-bed general hospital trust. 1920 medical records selected from 1 January to 31 December 2010. Rate, type and severity of adverse events identified in two different sample sizes of records, selected as 10 and 70 records bi-weekly. In the large sample, 1.45 (95% CI 1.07 to 1.97) times more adverse events per 1000 patient days (39.3 adverse events/1000 patient days) were identified than in the small sample (27.2 adverse events/1000 patient days). Hospital-acquired infections were the most common category of adverse events in both samples, and the distributions of the other categories of adverse events did not differ significantly between the samples. The distribution of severity levels of adverse events did not differ between the samples. The findings suggest that while the distribution of categories and severity is not dependent on the sample size, the rate of adverse events is. Further studies are needed to conclude whether the optimal sample size may need to be adjusted based on the hospital size in order to detect a more accurate rate of adverse events. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
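As an illustration of the rate comparison being made (with hypothetical event counts and patient-day denominators chosen only to mimic the reported magnitudes, not the study's raw data), a Poisson rate ratio with a log-transform confidence interval can be computed as:

```python
import math

def rate_per_1000(events, patient_days):
    """Adverse events per 1000 patient days."""
    return 1000.0 * events / patient_days

def rate_ratio_ci(e1, d1, e2, d2, z=1.96):
    """Ratio of two Poisson rates with an approximate 95% CI
    (log-transform method)."""
    rr = (e1 / d1) / (e2 / d2)
    se = math.sqrt(1.0 / e1 + 1.0 / e2)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Hypothetical counts: 220 events in 5600 patient days (large sample)
# versus 76 events in 2800 patient days (small sample)
print(rate_per_1000(220, 5600))          # ~39.3 per 1000 patient days
print(rate_per_1000(76, 2800))           # ~27.1 per 1000 patient days
print(rate_ratio_ci(220, 5600, 76, 2800))
```

The confidence interval excludes 1 only when the two review strategies detect genuinely different rates, which is the comparison underlying the 1.45 (95% CI 1.07 to 1.97) figure above.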

  18. Polymeric ionic liquid-based portable tip microextraction device for on-site sample preparation of water samples.

    Science.gov (United States)

    Chen, Lei; Pei, Junxian; Huang, Xiaojia; Lu, Min

    2018-06-05

On-site sample preparation is highly desirable because it avoids the transportation of large-volume samples and ensures the accuracy of the analytical results. In this work, a portable prototype of a tip microextraction device (TMD) was designed and developed for on-site sample pretreatment. The assembly procedure of the TMD is quite simple. Firstly, a polymeric ionic liquid (PIL)-based adsorbent was prepared in situ in a pipette tip. After that, the tip was connected to a syringe driven by a bidirectional motor, which accurately controlled the flow rates in the adsorption and desorption steps. To evaluate the practicability of the developed device, the TMD was used for on-site preparation of water samples and combined with high-performance liquid chromatography with diode array detection to measure trace estrogens. Under the most favorable conditions, the limits of detection (LODs, S/N = 3) for the target analytes were in the range of 4.9-22 ng/L, with good coefficients of determination. A confirmatory study provides clear evidence that the extraction performance of the TMD is comparable to that of the traditional laboratory solid-phase extraction process, while the proposed TMD is simpler and more convenient. At the same time, the TMD avoids complicated sampling and transfer steps for large-volume water samples. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. Feasibility and accuracy evaluation of three human papillomavirus assays for FTA card-based sampling: a pilot study in cervical cancer screening

    OpenAIRE

    Wang, Shao-Ming; Hu, Shang-Ying; Chen, Wen; Chen, Feng; Zhao, Fang-Hui; He, Wei; Ma, Xin-Ming; Zhang, Yu-Qing; Wang, Jian; Sivasubramaniam, Priya; Qiao, You-Lin

    2015-01-01

    Background Liquid-state specimen carriers are inadequate for sample transportation in large-scale screening projects in low-resource settings, which necessitates the exploration of novel non-hazardous solid-state alternatives. Studies investigating the feasibility and accuracy of a solid-state human papillomavirus (HPV) sampling medium in combination with different down-stream HPV DNA assays for cervical cancer screening are needed. Methods We collected two cervical specimens from 396 women, ...

  20. Likelihood inference of non-constant diversification rates with incomplete taxon sampling.

    Science.gov (United States)

    Höhna, Sebastian

    2014-01-01

Large-scale phylogenies provide a valuable source to study background diversification rates and to investigate whether the rates have changed over time. Unfortunately most large-scale, dated phylogenies are sparsely sampled (fewer than 5% of the described species) and taxon sampling is not uniform. Instead, taxa are frequently sampled to obtain at least one representative per subgroup (e.g. family) and thus to maximize diversity (diversified sampling). So far, such complications have been ignored, potentially biasing the conclusions that have been reached. In this study I derive the likelihood of a birth-death process with non-constant (time-dependent) diversification rates and diversified taxon sampling. Using simulations I test whether the true parameters and the sampling method can be recovered when the trees are small or medium sized (fewer than 200 taxa). The results show that the diversification rates can be inferred and the estimates are unbiased for large trees but are biased for small trees (fewer than 50 taxa). Furthermore, model selection by means of Akaike's Information Criterion favors the true model if the true rates differ sufficiently from alternative models (e.g. the birth-death model is recovered over a pure-birth model if the extinction rate is large). Finally, I applied six different diversification rate models, ranging from a constant-rate pure-birth process to a birth-death process with decreasing speciation rate, but excluding any rate-shift models, to three large-scale empirical phylogenies (ants, mammals and snakes with respectively 149, 164 and 41 sampled species). All three phylogenies were constructed by diversified taxon sampling, as stated by the authors. However, only the snake phylogeny supported diversified taxon sampling. Moreover, a parametric bootstrap test revealed that none of the tested models provided a good fit to the observed data. The model assumptions, such as homogeneous rates across species or no rate shifts, appear to be…
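The model-selection step by Akaike's Information Criterion reduces to a small computation; the log-likelihood values below are hypothetical, not taken from the paper:

```python
def aic(log_likelihood, n_params):
    """Akaike's Information Criterion: 2k - 2 ln L (lower is better)."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fits of two diversification models to the same phylogeny
models = {
    "pure-birth (1 parameter)": aic(-1234.2, 1),
    "birth-death (2 parameters)": aic(-1230.0, 2),
}
best = min(models, key=models.get)
print(best, models)
```

The extra extinction parameter is favored only when it improves the log-likelihood by more than one unit, which is the sense in which AIC "recovers" the richer model only if the true rates differ sufficiently.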

  1. Study of Efficiency Calibrations of HPGe Detectors for Radioactivity Measurements of Environmental Samples

    International Nuclear Information System (INIS)

    Harb, S.; Salahel Din, K.; Abbady, A.

    2009-01-01

In this paper, we describe a method of calibrating the efficiency of HPGe gamma-ray spectrometers for bulk environmental samples (tea, crops, water, and soil), a significant part of environmental radioactivity measurements. We discuss the full-energy peak efficiency (FEPE) of three HPGe detectors; because the FEPE depends on the measurement conditions, it is essential that the efficiency is determined for each set-up employed. Besides, to take full advantage of gamma-ray spectrometry, a set of efficiencies covering a wide energy range is needed: the wider the range, the larger the number of radionuclides whose concentration can be determined. To measure the main natural gamma-ray emitters, the efficiency should be known at least from 46.54 keV (210Pb) to 1836 keV (88Y). Radioactive sources were prepared from two different standards: a first mixed standard, QCY40, containing 210Pb, 241Am, 109Cd, and 57Co, and a second, QCY48, containing 241Am, 109Cd, 57Co, 139Ce, 113Sn, 85Sr, 137Cs, 88Y, and 60Co, which is necessary in order to calculate the activity of the different radionuclides contained in a sample. In this work, we study the efficiency calibration as a function of several parameters: gamma-ray energy from 46.54 keV (210Pb) to 1836 keV (88Y); three different detectors, A, B, and C; container geometry (point source, Marinelli beaker, and 1 L cylindrical bottle); height of standard soil samples in a 250 ml bottle; and density of the standard environmental samples. These standard environmental samples must be measured before the standard solution is added, because the same samples are used in order to account for self-absorption and composition effects, especially in the case of volume samples.
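A common way to interpolate such an efficiency calibration between the measured energies is to fit ln(efficiency) as a low-order polynomial in ln(E). The calibration points below are invented for illustration (the energies match typical lines of the radionuclides named above, but the efficiency values are not the paper's data):

```python
import numpy as np

# Hypothetical FEPE calibration points: energy in keV, absolute efficiency
energy = np.array([46.54, 59.5, 88.0, 122.1, 165.9, 391.7,
                   661.7, 898.0, 1173.2, 1332.5, 1836.1])
eff = np.array([0.020, 0.028, 0.041, 0.052, 0.049, 0.031,
                0.022, 0.018, 0.015, 0.014, 0.011])

# Fit ln(eff) as a low-order polynomial in ln(E), a common empirical form
coeffs = np.polyfit(np.log(energy), np.log(eff), deg=4)

def fepe(e_kev):
    """Interpolated full-energy peak efficiency at e_kev
    (valid only inside the calibrated range, 46.5-1836 keV)."""
    return np.exp(np.polyval(coeffs, np.log(e_kev)))
```

A separate curve of this form would be fitted per detector, geometry, and sample density, which is exactly why the abstract enumerates those parameters.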

  2. Comparison of sampling designs for estimating deforestation from Landsat TM and MODIS imagery: a case study in Mato Grosso, Brazil.

    Science.gov (United States)

    Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin

    2014-01-01

    Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling, with strata construction and sample allocation using the MODIS-derived deforestation hotspots, provided more precise estimates than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block.
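
    The gain from stratifying on an auxiliary indicator such as MODIS-derived hotspots can be illustrated with a toy expansion estimator. The block counts and per-block deforestation figures below are invented for illustration:

```python
# Stratified expansion estimator of total deforested area (km^2), with strata
# built from an auxiliary indicator (e.g. MODIS-derived deforestation hotspots).
# All numbers are hypothetical.
strata = {
    "hotspot":     {"n_blocks": 4,  "sampled_defor": [10.0, 12.0]},
    "non_hotspot": {"n_blocks": 16, "sampled_defor": [1.0, 3.0]},
}

def stratified_total(strata):
    """Sum over strata of (stratum size) x (mean deforestation of sampled blocks)."""
    total = 0.0
    for s in strata.values():
        mean = sum(s["sampled_defor"]) / len(s["sampled_defor"])
        total += s["n_blocks"] * mean
    return total

print(stratified_total(strata))  # -> 76.0
```

    A simple random expansion of the same four sampled blocks would multiply their overall mean (6.5) by the 20 blocks, giving 130; stratification prevents the rare, high-deforestation hotspot blocks from dominating (or vanishing from) the sample.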

  3. An exploratory study of the heterogeneity of the jealousy phenomenon and its associations with affective temperaments and psychopathological dimensions in a large Brazilian sample.

    Science.gov (United States)

    Lima, Amanda B; Köhler, Cristiano A; Stubbs, Brendon; Quevedo, João; Hyphantis, Thomas N; Koyanagi, Ai; Marazziti, Donatella; Soares, Jair C; Vieta, Eduard; Carvalho, André F

    2017-04-01

    Jealousy is a heterogeneous emotion on a spectrum from normality to psychopathology. The relationship between different jealousy subtypes/dimensions and affective temperaments remains unknown. In addition, few large surveys have investigated the associations between jealousy subtypes and psychopathological dimensions. A Brazilian Portuguese version of the "Questionario della Gelosia" (QUEGE) was developed. We obtained data from an anonymous web-based research platform. Socio-demographic data were obtained, and participants answered the QUEGE, the TEMPS-Rio de Janeiro, and the Symptom Checklist-90-Revised (SCL-90-R). A total of 2042 participants (29% male, 71% female; mean age±SD: 28.9±8.8 years) took part in this survey. Confirmatory factor analysis provided a five-factor model for the QUEGE, with self-esteem, paranoia, interpersonal sensitivity, fear of being abandoned, and obsessive dimensions. The anxious, irritable, cyclothymic, and depressive temperaments were independently associated with jealousy dimensions, whereas the hyperthymic temperament was associated with lower scores on the self-esteem jealousy dimension (N=2042). Jealousy subtypes were differentially associated with SCL-90-R psychopathological dimensions, although the 'obsessive' jealousy dimension was not significantly associated with SCL-90-R dimension scores. We found no independent influence of gender on any jealousy dimension. A convenience web-based sample was employed, and the cross-sectional design precludes the establishment of causal inferences. Our data indicate that a five-factor solution may provide the best-fit model for the QUEGE. Different jealousy subtypes were independently associated with affective temperaments and psychopathological dimensions. The associations reported herein should be confirmed in prospective studies. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. A propidium monoazide–quantitative PCR method for the detection and quantification of viable Enterococcus faecalis in large-volume samples of marine waters

    KAUST Repository

    Salam, Khaled W.; El-Fadel, Mutasem E.; Barbour, Elie K.; Saikaly, Pascal

    2014-01-01

    The development of rapid detection assays of cell viability is essential for monitoring the microbiological quality of water systems. Coupling propidium monoazide with quantitative PCR (PMA-qPCR) has been successfully applied in different studies for the detection and quantification of viable cells in small-volume samples (0.25-1.00 mL), but it has not been evaluated sufficiently in marine environments or in large-volume samples. In this study, we successfully integrated blue light-emitting diodes for photoactivating PMA and membrane filtration into the PMA-qPCR assay for the rapid detection and quantification of viable Enterococcus faecalis cells in 10-mL samples of marine waters. The assay was optimized in phosphate-buffered saline and seawater, reducing the qPCR signal of heat-killed E. faecalis cells by 4 log10 and 3 log10 units, respectively. Results suggest that high total dissolved solid concentration (32 g/L) in seawater can reduce PMA activity. Optimal PMA-qPCR standard curves with a 6-log dynamic range and detection limit of 10² cells/mL were generated for quantifying viable E. faecalis cells in marine waters. The developed assay was compared with the standard membrane filter (MF) method by quantifying viable E. faecalis cells in seawater samples exposed to solar radiation. The results of the developed PMA-qPCR assay did not match that of the standard MF method. This difference in the results reflects the different physiological states of E. faecalis cells in seawater. In conclusion, the developed assay is a rapid (∼5 h) method for the quantification of viable E. faecalis cells in marine recreational waters, which should be further improved and tested in different seawater settings. © 2014 Springer-Verlag Berlin Heidelberg.
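
    The reported log10 signal reductions are computed directly from qPCR-estimated cell equivalents with and without PMA treatment. A minimal sketch with hypothetical counts (the 4-log and 3-log outcomes echo the abstract; the raw numbers are invented):

```python
import math

def log10_reduction(signal_without_pma, signal_with_pma):
    """Log10 units by which PMA treatment suppresses the qPCR signal of dead cells."""
    return math.log10(signal_without_pma / signal_with_pma)

# Heat-killed cells: hypothetical qPCR-estimated cell equivalents per mL.
print(log10_reduction(1e6, 1e2))  # PBS:      -> 4.0
print(log10_reduction(1e6, 1e3))  # seawater: -> 3.0
```

    The smaller reduction in seawater is consistent with the abstract's observation that high total dissolved solids can inhibit PMA activity.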

  5. A propidium monoazide–quantitative PCR method for the detection and quantification of viable Enterococcus faecalis in large-volume samples of marine waters

    KAUST Repository

    Salam, Khaled W.

    2014-08-23

    The development of rapid detection assays of cell viability is essential for monitoring the microbiological quality of water systems. Coupling propidium monoazide with quantitative PCR (PMA-qPCR) has been successfully applied in different studies for the detection and quantification of viable cells in small-volume samples (0.25-1.00 mL), but it has not been evaluated sufficiently in marine environments or in large-volume samples. In this study, we successfully integrated blue light-emitting diodes for photoactivating PMA and membrane filtration into the PMA-qPCR assay for the rapid detection and quantification of viable Enterococcus faecalis cells in 10-mL samples of marine waters. The assay was optimized in phosphate-buffered saline and seawater, reducing the qPCR signal of heat-killed E. faecalis cells by 4 log10 and 3 log10 units, respectively. Results suggest that high total dissolved solid concentration (32 g/L) in seawater can reduce PMA activity. Optimal PMA-qPCR standard curves with a 6-log dynamic range and detection limit of 10² cells/mL were generated for quantifying viable E. faecalis cells in marine waters. The developed assay was compared with the standard membrane filter (MF) method by quantifying viable E. faecalis cells in seawater samples exposed to solar radiation. The results of the developed PMA-qPCR assay did not match that of the standard MF method. This difference in the results reflects the different physiological states of E. faecalis cells in seawater. In conclusion, the developed assay is a rapid (∼5 h) method for the quantification of viable E. faecalis cells in marine recreational waters, which should be further improved and tested in different seawater settings. © 2014 Springer-Verlag Berlin Heidelberg.

  6. Design study on sodium-cooled large-scale reactor

    International Nuclear Information System (INIS)

    Shimakawa, Yoshio; Nibe, Nobuaki; Hori, Toru

    2002-05-01

    In Phase 1 of the 'Feasibility Study on Commercialized Fast Reactor Cycle Systems (F/S)', an advanced loop-type reactor was selected as a promising concept for a sodium-cooled large-scale reactor with the possibility of fulfilling the design requirements of the F/S. In Phase 2 of the F/S, it is planned to proceed with a preliminary conceptual design of a sodium-cooled large-scale reactor based on the design of the advanced loop-type reactor. Through the design study, it is intended to construct a plant concept that can demonstrate its attractiveness and competitiveness as a commercialized reactor. This report summarizes the results of the design study on the sodium-cooled large-scale reactor performed in JFY2001, the first year of Phase 2. In the JFY2001 design study, a plant concept was constructed based on the design of the advanced loop-type reactor, and fundamental specifications of the main systems and components were set. Furthermore, critical subjects related to safety, structural integrity, thermal hydraulics, operability, maintainability and economy were examined and evaluated. As a result of this study, a plant concept of the sodium-cooled large-scale reactor has been constructed which has a prospect of satisfying the economic goal (construction cost: less than 200,000 yen/kWe, etc.) and of resolving the critical subjects. From now on, reflecting the results of elemental experiments, the preliminary conceptual design of this plant will proceed toward the selection for narrowing down candidate concepts at the end of Phase 2. (author)

  7. Xenosurveillance reflects traditional sampling techniques for the identification of human pathogens: A comparative study in West Africa.

    Directory of Open Access Journals (Sweden)

    Joseph R Fauver

    2018-03-01

    Full Text Available Novel surveillance strategies are needed to detect the rapid and continuous emergence of infectious disease agents. Ideally, new sampling strategies should be simple to implement, technologically uncomplicated, and applicable to areas where emergence events are known to occur. To this end, xenosurveillance is a technique that makes use of blood collected by hematophagous arthropods to monitor and identify vertebrate pathogens. Mosquitoes are largely ubiquitous animals that often exist in sizable populations, and many domestic or peridomestic species preferentially take blood-meals from humans, making them a unique and largely untapped reservoir of human blood. We sought to take advantage of this phenomenon by systematically collecting blood-fed mosquitoes during a field trial in Northern Liberia to determine whether pathogen sequences from blood-engorged mosquitoes accurately mirror those obtained directly from humans. Specifically, blood was collected from humans via finger-stick and by aspirating blood-fed mosquitoes from the inside of houses. Shotgun metagenomic sequencing of RNA and DNA derived from these specimens was performed to detect pathogen sequences. Samples obtained from xenosurveillance and from finger-stick blood collection produced a similar number and quality of reads aligning to two human viruses, GB virus C and hepatitis B virus. This study represents the first systematic comparison between xenosurveillance and more traditional sampling methodologies, while also demonstrating the viability of xenosurveillance as a tool to sample human blood for circulating pathogens.

  8. Distribution and assemblages of large branchiopods (Crustacea: Branchiopoda) of northern Western Ghats, India

    Directory of Open Access Journals (Sweden)

    Sameer M. Padhye

    2015-01-01

    Full Text Available The present study reports on the distribution and assemblages of large branchiopods from the Western Ghats of Maharashtra. Different types of water bodies were sampled, including pools on lateritic outcrops. Eight species of large branchiopods were found in 72 samples collected over a period of 4 years. We found 7 large branchiopod species in rock pools, while the cyclestheriid Cyclestheria hislopi was observed only in rivers and water reservoirs. In twenty-five percent of the samples, multiple species co-occurred, with a maximum of 4 species in a single sample. Streptocephalus dichotomus was the most commonly observed species, while Streptocephalus sahyadriensis was noted only in rock pools. Altitude and aquatic vegetation were identified as important factors for the distribution of large branchiopods in the studied area. Triops granarius was the species most commonly found co-occurring with other species, followed by S. sahyadriensis. Cyclestheria hislopi and Eulimnadia indocylindrova always occurred alone.

  9. New models for large prospective studies: is there a risk of throwing out the baby with the bathwater?

    Science.gov (United States)

    Bracken, Michael B; Baker, Dean; Cauley, Jane A; Chambers, Christina; Culhane, Jennifer; Dabelea, Dana; Dearborn, Dorr; Drews-Botsch, Carolyn D; Dudley, Donald J; Durkin, Maureen; Entwisle, Barbara; Flick, Louise; Hale, Daniel; Holl, Jane; Hovell, Melbourne; Hudak, Mark; Paneth, Nigel; Specker, Bonny; Wilhelm, Mari; Wyatt, Sharon

    2013-02-15

    Manolio et al. (Am J Epidemiol. 2012;175:859-866) proposed that large cohort studies adopt novel models using "temporary assessment centers" to enroll up to a million participants to answer research questions about rare diseases and "harmonize" clinical endpoints collected from administrative records. Extreme selection bias, we are told, will not harm internal validity, and "process expertise to maximize efficiency of high-throughput operations is as important as scientific rigor" (p. 861). In this article, we describe serious deficiencies in this model as applied to the United States. Key points include: 1) the need for more, not less, specification of disease endpoints; 2) the limited utility of data collected from existing administrative and clinical databases; and 3) the value of university-based centers in providing scientific expertise and achieving high recruitment and retention rates through community and healthcare provider engagement. Careful definition of sampling frames and high response rates are crucial to avoid bias and ensure inclusion of important subpopulations, especially the medically underserved. Prospective hypotheses are essential to refine study design, determine sample size, develop pertinent data collection protocols, and achieve alliances with participants and communities. It is premature to reject the strengths of large national cohort studies in favor of a new model for which evidence of efficiency is insufficient.

  10. Religion and the Unmaking of Prejudice toward Muslims: Evidence from a Large National Sample

    Science.gov (United States)

    Shaver, John H.; Troughton, Geoffrey; Sibley, Chris G.; Bulbulia, Joseph A.

    2016-01-01

    In the West, anti-Muslim sentiments are widespread. It has been theorized that inter-religious tensions fuel anti-Muslim prejudice, yet previous attempts to isolate sectarian motives have been inconclusive. Factors contributing to ambiguous results are: (1) failures to assess and adjust for multi-level denomination effects; (2) inattention to demographic covariates; (3) inadequate methods for comparing anti-Muslim prejudice relative to other minority group prejudices; and (4) ad hoc theories for the mechanisms that underpin prejudice and tolerance. Here we investigate anti-Muslim prejudice using a large national sample of non-Muslim New Zealanders (N = 13,955) who responded to the 2013 New Zealand Attitudes and Values Study. We address previous shortcomings by: (1) building Bayesian multivariate, multi-level regression models with denominations modeled as random effects; (2) including high-resolution demographic information that adjusts for factors known to influence prejudice; (3) simultaneously evaluating the relative strength of anti-Muslim prejudice by comparing it to anti-Arab prejudice and anti-immigrant prejudice within the same statistical model; and (4) testing predictions derived from the Evolutionary Lag Theory of religious prejudice and tolerance. This theory predicts that in countries such as New Zealand, with historically low levels of conflict, religion will tend to increase tolerance generally, and extend to minority religious groups. Results show that anti-Muslim and anti-Arab sentiments are confounded, widespread, and substantially higher than anti-immigrant sentiments. In support of the theory, the intensity of religious commitments was associated with a general increase in tolerance toward minority groups, including a poorly tolerated religious minority group: Muslims. Results clarify religion’s power to enhance tolerance in peaceful societies that are nevertheless afflicted by prejudice. PMID:26959976

  11. Religion and the Unmaking of Prejudice toward Muslims: Evidence from a Large National Sample.

    Science.gov (United States)

    Shaver, John H; Troughton, Geoffrey; Sibley, Chris G; Bulbulia, Joseph A

    2016-01-01

    In the West, anti-Muslim sentiments are widespread. It has been theorized that inter-religious tensions fuel anti-Muslim prejudice, yet previous attempts to isolate sectarian motives have been inconclusive. Factors contributing to ambiguous results are: (1) failures to assess and adjust for multi-level denomination effects; (2) inattention to demographic covariates; (3) inadequate methods for comparing anti-Muslim prejudice relative to other minority group prejudices; and (4) ad hoc theories for the mechanisms that underpin prejudice and tolerance. Here we investigate anti-Muslim prejudice using a large national sample of non-Muslim New Zealanders (N = 13,955) who responded to the 2013 New Zealand Attitudes and Values Study. We address previous shortcomings by: (1) building Bayesian multivariate, multi-level regression models with denominations modeled as random effects; (2) including high-resolution demographic information that adjusts for factors known to influence prejudice; (3) simultaneously evaluating the relative strength of anti-Muslim prejudice by comparing it to anti-Arab prejudice and anti-immigrant prejudice within the same statistical model; and (4) testing predictions derived from the Evolutionary Lag Theory of religious prejudice and tolerance. This theory predicts that in countries such as New Zealand, with historically low levels of conflict, religion will tend to increase tolerance generally, and extend to minority religious groups. Results show that anti-Muslim and anti-Arab sentiments are confounded, widespread, and substantially higher than anti-immigrant sentiments. In support of the theory, the intensity of religious commitments was associated with a general increase in tolerance toward minority groups, including a poorly tolerated religious minority group: Muslims. Results clarify religion's power to enhance tolerance in peaceful societies that are nevertheless afflicted by prejudice.

  12. Relationships between anhedonia, suicidal ideation and suicide attempts in a large sample of physicians

    Science.gov (United States)

    Lefebvre, Guillaume; Rotsaert, Marianne; Englert, Yvon

    2018-01-01

    Background The relationships between anhedonia and suicidal ideation or suicide attempts were explored in a large sample of physicians using the interpersonal psychological theory of suicide. We tested two hypotheses: firstly, that there is a significant relationship between anhedonia and suicidality; and secondly, that anhedonia could mediate the relationships between suicidal ideation or suicide attempts and thwarted belongingness or perceived burdensomeness. Methods In a cross-sectional study, 557 physicians filled out several questionnaires measuring suicide risk and depression, using the abridged version of the Beck Depression Inventory (BDI-13), together with demographic and job-related information. Ratings of anhedonia, perceived burdensomeness and thwarted belongingness were then extracted from the BDI-13 and the other questionnaires. Results Significant relationships were found between anhedonia and suicidal ideation or suicide attempts, even when significant variables or covariates were taken into account, in particular depressive symptoms. Mediation analyses showed significant partial or complete mediations, where anhedonia mediated the relationships between suicidal ideation (lifetime or recent) and perceived burdensomeness or thwarted belongingness. For suicide attempts, complete mediation was found only between anhedonia and thwarted belongingness. When the different components of anhedonia were taken into account, dissatisfaction (not the loss of interest or work inhibition) had significant relationships with suicidal ideation, whereas work inhibition had significant relationships with suicide attempts. Conclusions Anhedonia, and its dissatisfaction component in particular, could be a risk factor for suicidal ideation and could mediate the relationship between suicidal ideation and perceived burdensomeness or thwarted belongingness in physicians. Dissatisfaction, in particular in the workplace, may be explored as a strong predictor of suicidal ideation in physicians.
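
    The mediation logic can be sketched with a product-of-coefficients check on synthetic data. The variable names and effect sizes below are invented for illustration and are not the study's model; the sketch only shows the exact OLS decomposition (total effect = direct effect + indirect effect) that underlies partial/complete mediation claims:

```python
import numpy as np

def ols(X, y):
    """Least-squares coefficients, with an intercept column prepended and then dropped."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

rng = np.random.default_rng(42)
n = 500
x = rng.normal(size=n)                                  # e.g. perceived burdensomeness
m = 0.6 * x + rng.normal(scale=0.5, size=n)             # e.g. anhedonia (mediator)
y = 0.5 * m + 0.1 * x + rng.normal(scale=0.5, size=n)   # e.g. suicidal ideation

a = ols(x.reshape(-1, 1), m)[0]              # path X -> M
total = ols(x.reshape(-1, 1), y)[0]          # total effect c
direct, b = ols(np.column_stack([x, m]), y)  # direct effect c' and path M -> Y
indirect = a * b                             # mediated (indirect) effect

# For OLS with an intercept the decomposition c = c' + a*b holds exactly:
print(bool(abs(total - (direct + indirect)) < 1e-8))  # -> True
```

    Complete mediation corresponds to a direct effect c' near zero once the mediator is in the model; partial mediation leaves a non-trivial c' alongside a significant a*b.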

  13. Comparison of Sampling Designs for Estimating Deforestation from Landsat TM and MODIS Imagery: A Case Study in Mato Grosso, Brazil

    Directory of Open Access Journals (Sweden)

    Shanyou Zhu

    2014-01-01

    Full Text Available Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block.

  14. A large-scale investigation of the quality of groundwater in six major districts of Central India during the 2010-2011 sampling campaign.

    Science.gov (United States)

    Khare, Peeyush

    2017-09-01

    This paper investigates the groundwater quality in six major districts of Madhya Pradesh in central India, namely Balaghat, Chhindwara, Dhar, Jhabua, Mandla, and Seoni, during the 2010-2011 sampling campaign, and discusses improvements made in the supplied water quality between the years 2011 and 2017. Groundwater is the main source of water for a combined rural population of over 7 million in these districts, so its contamination could have a huge impact on public health. We analyzed the data collected from a large-scale water sampling campaign carried out by the Public Health Engineering Department (PHED), Government of Madhya Pradesh, between 2010 and 2011, during which all rural tube wells and dug wells in these six districts were sampled. In total, 831 dug wells and 47,606 tube wells were sampled and analyzed for turbidity, hardness, iron, nitrate, fluoride, chloride, and sulfate ion concentrations. Our study found water in 21 of the 228 dug wells in Chhindwara district unfit for drinking due to fluoride contamination, while all dug wells in Balaghat had fluoride within the permissible limit. Twenty-six of the 56 dug wells and 4825 of the 9390 tube wells in Dhar district exceeded the permissible limit for nitrate, while 100% of dug wells in Balaghat, Seoni, and Chhindwara had low levels of nitrate. Twenty-four of the 228 dug wells and 1669 of 6790 tube wells in Chhindwara had high iron concentrations. The median pH value in both dug wells and tube wells varied between 6 and 8 in all six districts; still, a significant number of tube wells exceeded a pH of 8.5, especially in Mandla and Seoni districts. In conclusion, this study shows that parts of inhabited rural Madhya Pradesh were potentially exposed to contaminated subsurface water during 2010-2011. The analysis has been correlated with rural health survey results, wherever available, to estimate the visible impact, and we further highlight that the quality of the supplied drinking water improved substantially between 2011 and 2017.

  15. Measurement and structural invariance of the US version of the Birth Satisfaction Scale-Revised (BSS-R) in a large sample.

    Science.gov (United States)

    Martin, Colin R; Hollins Martin, Caroline J; Burduli, Ekaterina; Barbosa-Leiker, Celestina; Donovan-Batson, Colleen; Fleming, Susan E

    2017-08-01

    The 10-item Birth Satisfaction Scale-Revised (BSS-R) is being increasingly used internationally. The use of the measure and the concept has gathered traction in the United States following the development of a US version of the tool. A limitation of previous studies of the measurement characteristics of the BSS-R is modest sample size. Unplanned pregnancy is recognised as being associated with a range of negative birth outcomes, but the relationship to birth satisfaction has received little attention, despite the importance of birth satisfaction to a range of postnatal outcomes. The current investigation sought to evaluate the measurement characteristics of the BSS-R in a large postpartum sample. Multiple Groups Confirmatory Factor Analysis (MGCFA) was used to evaluate a series of measurement and structural models of the BSS-R to evaluate fundamental invariance characteristics using planned/unplanned pregnancy status to differentiate groups. Complete data from N=2116 women revealed that the US version of the BSS-R offers an excellent fit to data and demonstrates full measurement and structural invariance. Little difference was observed between women on the basis of planned/unplanned pregnancy stratification on measures of birth satisfaction. The established relationship between unplanned pregnancy and negative perinatal outcomes was not found to extend to birth satisfaction in the current study. The BSS-R demonstrated exemplary measurement and structural invariance characteristics. The current study strongly supports the use of the US version of the BSS-R to compare birth satisfaction across different groups of women with theoretical and measurement confidence. Copyright © 2016 Australian College of Midwives. Published by Elsevier Ltd. All rights reserved.

  16. Studies of diffractive scattering of photons at large momentum transfer and of the VFPS detector at HERA

    Energy Technology Data Exchange (ETDEWEB)

    Hreus, Tomas

    2008-11-15

    In this thesis, two studies of diffractive phenomena in electron-proton collisions with the H1 detector at HERA are presented. The first is the study of inclusive elastic diffractive events ep → eXp in the regime of high photon virtuality (Q² > a few GeV²), with the scattered proton detected by the Very Forward Proton Spectrometer (VFPS). The VFPS detector, designed to measure diffractively scattered protons with high acceptance, was installed in 2004 to benefit from the HERA II luminosity increase. The selected event sample, of an integrated luminosity of 130.2 pb⁻¹, was collected in the years 2006-2007. Data sample distributions are compared to the prediction based on the diffractive parton distribution functions, as extracted from the H1 measurement of the diffractive structure function F₂^D(3) at HERA I. After the study of the VFPS efficiency, the VFPS acceptance as a function of x_P is estimated and studied in relation to the forward proton beam optics. The second study leads to the cross-section measurement of the diffractive scattering of quasi-real photons off protons, γp → γY, with large momentum transfer |t|. The final-state photon is separated from the proton dissociation system, Y, by a large rapidity gap and has a large transverse momentum, p_T > 2 GeV. Large p_T implies the presence of the hard scale t (|t| ≈ p_T²) and allows predictions of perturbative QCD to be applied. The measurement is based on an integrated luminosity of 46.2 pb⁻¹ of data collected in the 1999-2000 running period. Cross sections σ(W) as a function of the incident photon-proton centre-of-mass energy, W, and dσ/d|t| are measured in the range Q² < 0.01 GeV², 175

  17. Exploring Anxiety Symptoms in a Large-Scale Twin Study of Children with Autism Spectrum Disorders, Their Co-Twins and Controls

    Science.gov (United States)

    Hallett, Victoria; Ronald, Angelica; Colvert, Emma; Ames, Catherine; Woodhouse, Emma; Lietz, Stephanie; Garnett, Tracy; Gillan, Nicola; Rijsdijk, Fruhling; Scahill, Lawrence; Bolton, Patrick; Happé, Francesca

    2013-01-01

    Background: Although many children with autism spectrum disorders (ASDs) experience difficulties with anxiety, the manifestation of these difficulties remains unresolved. The current study assessed anxiety in a large population-based twin sample, aged 10-15 years. Phenotypic analyses were used to explore anxiety symptoms in children with ASDs,…

  18. Prevalence of suicidal behaviour and associated factors in a large sample of Chinese adolescents.

    Science.gov (United States)

    Liu, X C; Chen, H; Liu, Z Z; Wang, J Y; Jia, C X

    2017-10-12

    Suicidal behaviour is prevalent among adolescents and is a significant predictor of future suicide attempts (SAs) and suicide death. Data on the prevalence and epidemiological characteristics of suicidal behaviour in Chinese adolescents are limited. This study aimed to examine the prevalence, characteristics and risk factors of suicidal behaviour, including suicidal thought (ST), suicide plan (SP) and SA, in a large sample of Chinese adolescents. This report presents the first-wave data of an ongoing longitudinal study, the Shandong Adolescent Behavior and Health Cohort. Participants included 11 831 adolescent students from three counties of Shandong, China. The mean age of participants was 15.0 years (s.d. = 1.5) and 51% were boys. In November-December 2015, participants completed a structured adolescent health questionnaire covering ST, SP and SA, characteristics of the most recent SA, demographics, substance use, hopelessness, impulsivity, and internalising and externalising behavioural problems. The lifetime and last-year prevalence rates were 17.6 and 10.7% for ST in males, 23.5 and 14.7% for ST in females, 8.9 and 2.9% for SP in males, 10.7 and 3.8% for SP in females, 3.4 and 1.3% for SA in males, and 4.6 and 1.8% for SA in females, respectively. The mean age at first SA was 12-13 years. Stabbing/cutting was the most common method of attempting suicide. Approximately 24% of male attempters and 16% of female attempters were medically treated. More than 70% of attempters had no preparatory action. Female gender, smoking, drinking, internalising and externalising problems, hopelessness, suicidal history of friends and acquaintances, poor family economic status and poor parental relationship were all significantly associated with increased risk of suicidal behaviour. Suicidal behaviour in Chinese adolescents is prevalent but less so than previously reported in Western peers. While females are more likely to attempt suicide, males are more likely to use lethal methods.

  19. Large-scale prospective T cell function assays in shipped, unfrozen blood samples

    DEFF Research Database (Denmark)

    Hadley, David; Cheung, Roy K; Becker, Dorothy J

    2014-01-01

    …, for measuring core T cell functions. The Trial to Reduce Insulin-dependent diabetes mellitus in the Genetically at Risk (TRIGR) type 1 diabetes prevention trial used consecutive measurements of T cell proliferative responses in prospectively collected fresh heparinized blood samples shipped by courier within North America. In this article, we report on the quality control implications of this simple and pragmatic shipping practice and the interpretation of positive- and negative-control analytes in our assay. We used polyclonal and postvaccination responses in 4,919 samples to analyze the development of T cell immunocompetence. We have found that the vast majority of the samples were viable up to 3 days from the blood draw, yet meaningful responses were found in a proportion of those with longer travel times. Furthermore, the shipping time of uncooled samples significantly decreased both the viabilities…

  20. Product-selective blot: a technique for measuring enzyme activities in large numbers of samples and in native electrophoresis gels

    International Nuclear Information System (INIS)

    Thompson, G.A.; Davies, H.M.; McDonald, N.

    1985-01-01

    A method termed product-selective blotting has been developed for screening large numbers of samples for enzyme activity. The technique is particularly well suited to detection of enzymes in native electrophoresis gels. The principle of the method was demonstrated by blotting samples from glutaminase or glutamate synthase reactions into an agarose gel embedded with ion-exchange resin under conditions favoring binding of product (glutamate) over substrates and other substances in the reaction mixture. After washes to remove these unbound substances, the product was measured using either fluorometric staining or radiometric techniques. Glutaminase activity in native electrophoresis gels was visualized by a related procedure in which substrates and products from reactions run in the electrophoresis gel were blotted directly into a resin-containing image gel. Considering the selective-binding materials available for use in the image gel, along with the possible detection systems, this method has potentially broad application

  1. Design study on sodium cooled large-scale reactor

    International Nuclear Information System (INIS)

    Murakami, Tsutomu; Hishida, Masahiko; Kisohara, Naoyuki

    2004-07-01

    In Phase 1 of the 'Feasibility Studies on Commercialized Fast Reactor Cycle Systems (F/S)', an advanced loop-type reactor was selected as a promising concept for a sodium-cooled large-scale reactor with the potential to fulfill the design requirements of the F/S. In Phase 2, design improvement for further cost reduction and establishment of the plant concept has been performed. This report summarizes the results of the design study on the sodium-cooled large-scale reactor performed in JFY2003, the third year of Phase 2. In the JFY2003 design study, critical subjects related to safety, structural integrity and thermal hydraulics which were found in the last fiscal year were examined and the plant concept was modified. Furthermore, fundamental specifications of the main systems and components were set and the economy was evaluated. In addition, as the interim evaluation of the candidate concepts of the FBR fuel cycle is to be conducted, cost effectiveness and achievability of the development goal were evaluated and the data for the three large-scale reactor candidate concepts were prepared. As a result of this study, a plant concept for the sodium-cooled large-scale reactor has been constructed which has a prospect of satisfying the economic goal (construction cost: less than 200,000 yen/kWe, etc.) and of resolving the critical subjects. From now on, reflecting the results of elemental experiments, the preliminary conceptual design of this plant will proceed toward the selection for narrowing down candidate concepts at the end of Phase 2. (author)

  2. Dedicated Tool for Irradiation and Electrical Measurement of Large Surface Samples on the Beamline of a 2.5 Mev Pelletron Electron Accelerator: Application to Solar Cells

    OpenAIRE

    Lefèvre Jérémie; Le Houedec Patrice; Losco Jérôme; Cavani Olivier; Boizot Bruno

    2017-01-01

    We designed a tool allowing irradiation of large samples over an A5-sized surface by means of a 2.5 MeV Pelletron electron accelerator. In situ electrical measurements (I-V, conductivity, etc.) can also be performed, in the dark or under illumination, to study radiation effects in materials. Irradiations and electrical measurements are achievable over a temperature range from 100 K to 300 K. The setup was initially developed to test real-size triple junction solar cells at low t...

  3. Sampling Key Populations for HIV Surveillance: Results From Eight Cross-Sectional Studies Using Respondent-Driven Sampling and Venue-Based Snowball Sampling.

    Science.gov (United States)

    Rao, Amrita; Stahlman, Shauna; Hargreaves, James; Weir, Sharon; Edwards, Jessie; Rice, Brian; Kochelani, Duncan; Mavimbela, Mpumelelo; Baral, Stefan

    2017-10-20

    In using regularly collected or existing surveillance data to characterize engagement in human immunodeficiency virus (HIV) services among marginalized populations, differences in sampling methods may produce different pictures of the target population and may therefore result in different priorities for response. The objective of this study was to use existing data to evaluate the sample distribution of eight studies of female sex workers (FSW) and men who have sex with men (MSM), who were recruited using different sampling approaches in two locations within Sub-Saharan Africa: Manzini, Swaziland and Yaoundé, Cameroon. MSM and FSW participants were recruited using either respondent-driven sampling (RDS) or venue-based snowball sampling. Recruitment took place between 2011 and 2016. Participants at each study site were administered a face-to-face survey to assess sociodemographics, along with the prevalence of self-reported HIV status, frequency of HIV testing, stigma, and other HIV-related characteristics. Crude and RDS-adjusted prevalence estimates were calculated. Crude prevalence estimates from the venue-based snowball samples were compared with the RDS-adjusted prevalence estimates for both FSW and MSM in Cameroon and Swaziland. RDS samples tended to be younger (MSM aged 18-21 years in Swaziland: 47.6% [139/310] in RDS vs 24.3% [42/173] in snowball; in Cameroon: 47.9% [99/306] in RDS vs 20.1% [52/259] in snowball; FSW aged 18-21 years in Swaziland: 42.5% [82/325] in RDS vs 8.0% [20/249] in snowball; in Cameroon: 15.6% [75/576] in RDS vs 8.1% [25/306] in snowball) and less educated (MSM: primary school completed or less in Swaziland 42.6% [109/310] in RDS vs 4.0% [7/173] in snowball; in Cameroon 46.2% [138/306] in RDS vs 14.3% [37/259] in snowball; FSW: primary school completed or less in Swaziland 86.6% [281/325] in RDS vs 23.9% [59/247] in snowball; in Cameroon 87.4% [520/576] in RDS vs 77.5% [238/307] in snowball) than the snowball samples.
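The RDS adjustment used in these studies is commonly implemented with an estimator that weights each respondent by the inverse of their reported network size, such as the RDS-II (Volz-Heckathorn) estimator. A minimal sketch with hypothetical toy data (the outcome and degree values are illustrative, not from these studies):

```python
def rds_ii_prevalence(outcomes, degrees):
    """RDS-II (Volz-Heckathorn) estimator: each respondent is weighted by
    the inverse of their reported network size (degree)."""
    if len(outcomes) != len(degrees):
        raise ValueError("outcomes and degrees must have equal length")
    weights = [1.0 / d for d in degrees]
    return sum(w * y for w, y in zip(weights, outcomes)) / sum(weights)

# Hypothetical toy data: y = 1 if the trait is reported, degree = network size.
outcomes = [1, 0, 1, 1, 0, 0]
degrees = [10, 2, 5, 20, 4, 8]
print(round(rds_ii_prevalence(outcomes, degrees), 3))  # → 0.286
crude = sum(outcomes) / len(outcomes)                  # 0.5, for comparison
```

With equal degrees the estimator reduces to the crude sample proportion; crude and RDS-adjusted estimates diverge when recruitment over-represents well-connected individuals, which is one reason the two sampling approaches above paint different pictures of the same population.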

  4. Low temperature X-ray diffraction studies of natural gas hydrate samples from the Gulf of Mexico

    Energy Technology Data Exchange (ETDEWEB)

    Rawn, C.J. [Oak Ridge National Laboratory, Oak Ridge, TN (United States). Materials Science and Technology Div.; Sassen, R. [Texas A and M Univ., College Station, TX (United States). Geochemical and Environmental Research Group; Ulrich, S.M.; Phelps, T.J. [Oak Ridge National Laboratory, Oak Ridge, TN (United States). Biosciences Div.; Chakoumakos, B.C. [Oak Ridge National Laboratory, Oak Ridge, TN (United States). Neutron Scattering Science Div.; Payzant, E.A. [Oak Ridge National Laboratory, Oak Ridge, TN (United States). Center for Nanophase Materials Science

    2008-07-01

    Quantitative studies of natural clathrate hydrates are hampered by the difficulties associated with obtaining pristine samples from the sea floor without compromising their integrity. This paper discussed X-ray powder diffraction studies conducted to measure natural gas hydrate samples obtained from the Green Canyon in the Gulf of Mexico. Data on the hydrate deposits were initially collected in 2002. The X-ray diffraction data were collected in order to examine the structure II (sII) gas hydrates as functions of temperature and time. A diffractometer with a theta-theta goniometer modified with a helium closed-cycle refrigerator and temperature controller was used. Aragonite, quartz and halite phases were identified in the decomposed sample. Refined phase fractions for both the ice and the sII hydrate were obtained as a function of temperature. Results of the study demonstrated that the amount of hydrate decreased with increasing temperature and time. Large pieces of the hydrate showed heterogeneous ice content. Dissociation rates were higher at lower temperatures. It was concluded that unusual trends observed for the smaller lattice parameter of the hydrates resulted from the formation of ice layers that acted as barriers to the released gases and caused increased isostatic pressures around the hydrate core. 9 refs., 6 figs.

  5. Reconsidering the relation between serum homocysteine and red blood cell distribution width: a cross-sectional study of a large cohort.

    Science.gov (United States)

    Margalit, Ili; Cohen, Eytan; Goldberg, Elad; Krause, Ilan

    2018-07-01

    In a recent small-sample study, red blood cell distribution width (RDW) was suggested as a predictor of homocysteine levels. The current study aimed to reexamine this association in a large-scale sample. A retrospective cross-sectional study of healthy adults was conducted at Rabin Medical Center during 2000-2014. Data were retrieved from the medical charts and a logistic regression controlling for interfering factors was carried out. Sensitivity analysis was implemented by exclusion of individuals with anaemia. Five thousand five hundred and fifty-four healthy individuals were included. Mean serum homocysteine level was 10.10 (SD 2.72) μmol/L; 34.4% of the study population had a homocysteine level higher than the upper limit of normal (10.8 μmol/L). Homocysteine showed no association with RDW (OR 1.00; 95% CI 0.97-1.03), but increased with age (OR 1.05; 95% CI 1.04-1.06) and decreased with a rise in haemoglobin (OR 0.77; 95% CI 0.71-0.83) and in the mean corpuscular volume (OR 0.86; 95% CI 0.85-0.88). Exclusion of individuals with anaemia did not reveal an association between homocysteine and RDW but found a somewhat smaller association between haemoglobin and RDW (OR 0.82; 95% CI 0.73-0.91). In our large-scale sample we did not find an association between RDW and serum homocysteine.

  6. Relationship between homocysteine and coronary artery disease. Results from a large prospective cohort study.

    Science.gov (United States)

    Schaffer, Alon; Verdoia, Monica; Cassetti, Ettore; Marino, Paolo; Suryapranata, Harry; De Luca, Giuseppe

    2014-08-01

    Coronary artery disease (CAD) still represents the major cause of mortality in developed countries. Large research programs have focused on the identification of new risk factors to prevent CAD, with special attention to homocysteine (Hcy), due to its known association with increased thrombogenicity, oxidative stress and endothelial dysfunction. However, controversy still exists on the association between Hcy and CAD. Therefore, the aim of the current study was to investigate the association of Hcy with the prevalence and extent of CAD in a large consecutive cohort of patients undergoing coronary angiography. Our population comprised a total of 3056 consecutive patients undergoing coronary angiography at the Azienda Ospedaliera "Maggiore della Carità", Novara, Italy. Fasting samples were collected for homocysteine level assessment. Coronary disease was defined as at least one vessel stenosis >50% as evaluated by QCA. The study population was divided according to Hcy tertiles (… 18.2 nmol/ml). High plasmatic levels of homocysteine were related with age (p …) … benefits from vitamin administration in patients with elevated Hcy to prevent the occurrence and progression of CAD. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Patient-reported causes of heart failure in a large European sample

    DEFF Research Database (Denmark)

    Timmermans, Ivy; Denollet, Johan; Pedersen, Susanne S.

    2018-01-01

    Background: Patients diagnosed with chronic diseases develop perceptions about their disease and its causes, which may influence health behavior and emotional well-being. This is the first study to examine patient-reported causes and their correlates in patients with heart failure. Methods: … ), psychosocial (35%, mainly (work-related) stress), and natural causes (32%, mainly heredity). There were socio-demographic, clinical and psychological group differences between the various categories, and large discrepancies between prevalence of physical risk factors according to medical records and patient… distress (OR = 1.54, 95% CI = 0.94–2.51, p = 0.09), and behavioral causes and a less threatening view of heart failure (OR = 0.64, 95% CI = 0.40–1.01, p = 0.06). Conclusion: European patients most frequently reported comorbidities, smoking, stress, and heredity as heart failure causes, but their causal…

  8. Impacts of suppression on emotional responses and performance outcomes: an experience-sampling study in younger and older workers.

    Science.gov (United States)

    Yeung, Dannii Y; Fung, Helene H

    2012-11-01

    Past studies have demonstrated that older adults used less emotional suppression to regulate their emotions than did younger adults, but the effectiveness of using this emotion regulatory strategy on psychosocial well-being across age remains largely unexplored. The present study adopted an experience-sampling method to examine whether the impacts of momentary employment of emotional suppression on momentary positive and negative emotions and job performance would be different by age. Eighty-seven Chinese insurance workers, aged between 18 and 61 years, participated in a 5-day sampling study. Their affective responses at work, momentary task performance, and sales productivity were recorded. Results showed that older workers' greater use of suppression at work was associated with lower intensity of negative emotions, whereas such association was not found among younger workers. Moreover, greater use of suppression over the sampling period was significantly predictive of sales productivity of older workers, but such a positive association was not shown in younger workers. These findings reveal that the use of suppression at work may be more effective for older workers than for younger workers.

  9. High throughput sample processing and automated scoring

    Directory of Open Access Journals (Sweden)

    Gunnar Brunborg

    2014-10-01

    The comet assay is a sensitive and versatile method for assessing DNA damage in cells. In the traditional version of the assay, there are many manual steps involved and few samples can be treated in one experiment. High throughput modifications have been developed during recent years, and they are reviewed and discussed. These modifications include accelerated scoring of comets; other important elements that have been studied and adapted to high throughput are cultivation and manipulation of cells or tissues before and after exposure, and freezing of treated samples until comet analysis and scoring. High throughput methods save time and money, but they are also useful for other reasons: large-scale experiments may be performed which are otherwise not practicable (e.g., analysis of many organs from exposed animals, and human biomonitoring studies), and automation gives more uniform sample treatment and less dependence on operator performance. The high throughput modifications now available vary widely in their versatility, capacity, complexity and costs. The bottleneck for further increase of throughput appears to be the scoring.

  10. Method for Determination of Neptunium in Large-Sized Urine Samples Using Manganese Dioxide Coprecipitation and 242Pu as Yield Tracer

    DEFF Research Database (Denmark)

    Qiao, Jixin; Hou, Xiaolin; Roos, Per

    2013-01-01

    A novel method for bioassay of large volumes of human urine samples using manganese dioxide coprecipitation for preconcentration was developed for rapid determination of 237Np. 242Pu was utilized as a nonisotopic tracer to monitor the chemical yield of 237Np. A sequential injection extraction chromatography … and rapid analysis of neptunium contamination level for emergency preparedness.

  11. Evidence from a Large Sample on the Effects of Group Size and Decision-Making Time on Performance in a Marketing Simulation Game

    Science.gov (United States)

    Treen, Emily; Atanasova, Christina; Pitt, Leyland; Johnson, Michael

    2016-01-01

    Marketing instructors using simulation games as a way of inducing some realism into a marketing course are faced with many dilemmas. Two important quandaries are the optimal size of groups and how much of the students' time should ideally be devoted to the game. Using evidence from a very large sample of teams playing a simulation game, the study…

  12. Application of tandem accelerator mass spectrometor to the chronological study of archaeological samples on Ryukyu Islands

    International Nuclear Information System (INIS)

    Taira, Hatsuo; Higa, Kenichi; Nakai, Nobuyuki; Nakamura, Toshio.

    1987-01-01

    Along with the urbanization of rural areas on the Ryukyu Islands, many shell mounds and pre-historic sites have been found in recent years. Chronological studies of shell samples from these mounds will lead to a better understanding of the cultural background of pre-historic human activities on the Ryukyu Islands. C-14 dating by beta counting is the common method to obtain the ages of archaeological samples. It is, however, very limited in obtaining absolute ages due to the large sample sizes required and the time consumed, and many newly obtained archaeological samples remain unstudied in detail. An alternative is accelerator mass spectrometry (AMS) with the tandem accelerator installed at Nagoya University, which can measure very low concentrations of C-14 in archaeological samples. The system has been designed particularly to measure radiocarbon and has the advantages of requiring small sample sizes and very little measurement time as compared with beta counting. The aim of this work is to apply the above AMS to obtaining the absolute ages of archaeological samples. The results agreed well with those estimated by the earthenware method (a relative method of dating), which ranged from 500 to 6000 y.b.p. The results may be helpful for the chronological arrangement of the samples and for the understanding of pre-historical human activities on the Ryukyu Islands. (author)
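Whether the 14C is measured by beta counting or by AMS, the conventional radiocarbon age follows from the sample's 14C activity relative to the modern standard via the Libby mean life. A sketch of that standard conversion (calibration to calendar ages is a separate step not shown here):

```python
from math import log

LIBBY_MEAN_LIFE = 8033.0  # years; Libby half-life of 5568 yr divided by ln 2

def conventional_c14_age(fraction_modern):
    """Conventional radiocarbon age (years BP) from the measured 14C
    activity expressed as a fraction of the modern standard."""
    if not 0.0 < fraction_modern <= 1.0:
        raise ValueError("fraction modern must be in (0, 1]")
    return -LIBBY_MEAN_LIFE * log(fraction_modern)

print(round(conventional_c14_age(0.5)))  # one half-life → 5568 yr BP
```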

  13. Identification and Correction of Sample Mix-Ups in Expression Genetic Data: A Case Study.

    Science.gov (United States)

    Broman, Karl W; Keller, Mark P; Broman, Aimee Teo; Kendziorski, Christina; Yandell, Brian S; Sen, Śaunak; Attie, Alan D

    2015-08-19

    In a mouse intercross with more than 500 animals and genome-wide gene expression data on six tissues, we identified a high proportion (18%) of sample mix-ups in the genotype data. Local expression quantitative trait loci (eQTL; genetic loci influencing gene expression) with extremely large effect were used to form a classifier to predict an individual's eQTL genotype based on expression data alone. By considering multiple eQTL and their related transcripts, we identified numerous individuals whose predicted eQTL genotypes (based on their expression data) did not match their observed genotypes, and then went on to identify other individuals whose genotypes did match the predicted eQTL genotypes. The concordance of predictions across six tissues indicated that the problem was due to mix-ups in the genotypes (although we further identified a small number of sample mix-ups in each of the six panels of gene expression microarrays). Consideration of the plate positions of the DNA samples indicated a number of off-by-one and off-by-two errors, likely the result of pipetting errors. Such sample mix-ups can be a problem in any genetic study, but eQTL data allow us to identify, and even correct, such problems. Our methods have been implemented in an R package, R/lineup. Copyright © 2015 Broman et al.
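The classifier idea described above can be sketched in a few lines: predict each individual's genotype at a set of large-effect local eQTL from expression alone, then find which genotyped individual each expression profile best matches. This is only an illustrative toy under hypothetical cutpoints and data; the authors' actual implementation is the R/lineup package:

```python
def predict_genotype(expr_value, cutpoints):
    """Call a 0/1/2 eQTL genotype from one expression value (toy rule)."""
    lo, hi = cutpoints
    if expr_value < lo:
        return 0
    return 1 if expr_value < hi else 2

def mismatch(predicted, observed):
    """Number of eQTL at which the expression-predicted genotype disagrees."""
    return sum(p != g for p, g in zip(predicted, observed))

def best_matches(expression, genotypes, cutpoints):
    """For each expression profile, index of the best-matching genotype row."""
    predictions = [
        [predict_genotype(value, cutpoints[j]) for j, value in enumerate(row)]
        for row in expression
    ]
    return [
        min(range(len(genotypes)), key=lambda k: mismatch(pred, genotypes[k]))
        for pred in predictions
    ]

# Toy example: individuals 0 and 1 are swapped in the genotype data.
genotypes = [[0, 0, 0], [2, 2, 2], [1, 1, 1]]
expression = [[2.0, 1.9, 2.1], [0.1, 0.2, 0.0], [1.0, 1.1, 0.9]]
cutpoints = [(0.5, 1.5)] * 3
print(best_matches(expression, genotypes, cutpoints))  # → [1, 0, 2]
```

A result other than the identity permutation flags candidate mix-ups, mirroring the concordance check across tissues described in the abstract.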

  14. Weather and headache onset: a large-scale study of headache medicine purchases

    Science.gov (United States)

    Ozeki, Kayoko; Noda, Tatsuya; Nakamura, Mieko; Ojima, Toshiyuki

    2015-04-01

    It is widely recognized that weather changes can trigger headache onset. Most people who develop headaches choose to self-medicate rather than visit a hospital or clinic. We investigated the association between weather and headache onset using large-sample sales of the headache medicine, loxoprofen. We collected daily sales figures of loxoprofen and over-the-counter drugs over a 1-year period from a drugstore chain in western Shizuoka prefecture, Japan. To adjust for changes in daily sales of loxoprofen due to social environmental factors, we calculated a proportion of loxoprofen daily sales to over-the-counter drug daily sales. At the same time, we obtained weather data for the study region from the website of the Japan Meteorological Agency. We performed linear regression analysis to ascertain the association between weather conditions and the loxoprofen daily sales proportion. We also conducted a separate questionnaire survey at the same drugstores to determine the reason why people purchased loxoprofen. Over the study period, we surveyed the sale of hundreds of thousands of loxoprofen tablets. Most people purchased loxoprofen because they had a headache. We found that the sales proportion of loxoprofen increased when average barometric pressure decreased, and that precipitation, average humidity, and minimum humidity increased on loxoprofen purchase days compared to the previous day of purchases. This study, performed using a large dataset that was easy-to-collect and representative of the general population, revealed that sales of loxoprofen, which can represent the onset and aggravation of headache, significantly increased with worsening weather conditions.
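The analysis described above (a daily sales proportion regressed on weather variables) can be sketched with ordinary least squares; the numbers below are made up for illustration only:

```python
def daily_proportion(loxoprofen, otc_total):
    """Loxoprofen share of all OTC sales, damping store-traffic effects."""
    return [l / t for l, t in zip(loxoprofen, otc_total)]

def ols(x, y):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Hypothetical daily data: as pressure falls, the headache-medicine share rises.
pressure_hpa = [1010.0, 1005.0, 1000.0]
share = daily_proportion([40, 45, 50], [1000, 1000, 1000])
slope, intercept = ols(pressure_hpa, share)
print(round(slope, 6))  # → -0.001
```

The negative slope in this toy mirrors the study's finding that loxoprofen's sales share increased as average barometric pressure decreased.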

  15. A large-scale study of the ultrawideband microwave dielectric properties of normal breast tissue obtained from reduction surgeries.

    Science.gov (United States)

    Lazebnik, Mariya; McCartney, Leah; Popovic, Dijana; Watkins, Cynthia B; Lindstrom, Mary J; Harter, Josephine; Sewall, Sarah; Magliocco, Anthony; Booske, John H; Okoniewski, Michal; Hagness, Susan C

    2007-05-21

    The efficacy of emerging microwave breast cancer detection and treatment techniques will depend, in part, on the dielectric properties of normal breast tissue. However, knowledge of these properties at microwave frequencies has been limited due to gaps and discrepancies in previously reported small-scale studies. To address these issues, we experimentally characterized the wideband microwave-frequency dielectric properties of a large number of normal breast tissue samples obtained from breast reduction surgeries at the University of Wisconsin and University of Calgary hospitals. The dielectric spectroscopy measurements were conducted from 0.5 to 20 GHz using a precision open-ended coaxial probe. The tissue composition within the probe's sensing region was quantified in terms of percentages of adipose, fibroconnective and glandular tissues. We fit a one-pole Cole-Cole model to the complex permittivity data set obtained for each sample and determined median Cole-Cole parameters for three groups of normal breast tissues, categorized by adipose tissue content (0-30%, 31-84% and 85-100%). Our analysis of the dielectric properties data for 354 tissue samples reveals that there is a large variation in the dielectric properties of normal breast tissue due to substantial tissue heterogeneity. We observed no statistically significant difference between the within-patient and between-patient variability in the dielectric properties.
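The one-pole Cole-Cole model fitted above has a standard closed form. A sketch evaluating it, with placeholder parameter values rather than the paper's fitted medians:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def cole_cole(freq_hz, eps_inf, delta_eps, tau_s, alpha, sigma_s):
    """Complex relative permittivity of the one-pole Cole-Cole model:
    eps(w) = eps_inf + delta_eps / (1 + (j*w*tau)^(1-alpha)) + sigma_s / (j*w*EPS0)
    where w = 2*pi*f, alpha broadens the dispersion and sigma_s is the
    static conductivity (S/m)."""
    w = 2 * math.pi * freq_hz
    return (eps_inf
            + delta_eps / (1 + (1j * w * tau_s) ** (1 - alpha))
            + sigma_s / (1j * w * EPS0))

# Placeholder parameters (NOT the paper's fits), evaluated at 6 GHz.
print(cole_cole(6e9, 4.0, 45.0, 7e-12, 0.1, 0.7))
```

In the low-frequency limit (with conductivity off) the real part approaches eps_inf + delta_eps, and at high frequency it approaches eps_inf, which is a quick sanity check on any fit.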

  16. A large-scale study of the ultrawideband microwave dielectric properties of normal breast tissue obtained from reduction surgeries

    International Nuclear Information System (INIS)

    Lazebnik, Mariya; McCartney, Leah; Popovic, Dijana; Watkins, Cynthia B; Lindstrom, Mary J; Harter, Josephine; Sewall, Sarah; Magliocco, Anthony; Booske, John H; Okoniewski, Michal; Hagness, Susan C

    2007-01-01

    The efficacy of emerging microwave breast cancer detection and treatment techniques will depend, in part, on the dielectric properties of normal breast tissue. However, knowledge of these properties at microwave frequencies has been limited due to gaps and discrepancies in previously reported small-scale studies. To address these issues, we experimentally characterized the wideband microwave-frequency dielectric properties of a large number of normal breast tissue samples obtained from breast reduction surgeries at University of Wisconsin and University of Calgary hospitals. The dielectric spectroscopy measurements were conducted from 0.5 to 20 GHz using a precision open-ended coaxial probe. The tissue composition within the probe's sensing region was quantified in terms of percentages of adipose, fibroconnective and glandular tissues. We fit a one-pole Cole-Cole model to the complex permittivity data set obtained for each sample and determined median Cole-Cole parameters for three groups of normal breast tissues, categorized by adipose tissue content (0-30%, 31-84% and 85-100%). Our analysis of the dielectric properties data for 354 tissue samples reveals that there is a large variation in the dielectric properties of normal breast tissue due to substantial tissue heterogeneity. We observed no statistically significant difference between the within-patient and between-patient variability in the dielectric properties

  17. Concepts in sample size determination

    Directory of Open Access Journals (Sweden)

    Umadevi K Rao

    2012-01-01

    Investigators involved in clinical, epidemiological or translational research have the drive to publish their results so that they can extrapolate their findings to the population. This begins with the preliminary step of deciding the topic to be studied, the subjects and the type of study design. In this context, the researcher must determine how many subjects would be required for the proposed study. Thus, the number of individuals to be included in the study, i.e., the sample size, is an important consideration in the design of many clinical studies. The sample size determination should be based on the difference in the outcome between the two groups studied, as in an analytical study, as well as on the accepted p value for statistical significance and the required statistical power to test a hypothesis. The accepted risk of type I error, or alpha value, which by convention is set at the 0.05 level in biomedical research, defines the cutoff point at which the p value obtained in the study is judged as significant or not. The power in clinical research is the likelihood of finding a statistically significant result when it exists and is typically set to >80%. This is necessary since even the most rigorously executed studies may fail to answer the research question if the sample size is too small. Alternatively, a study with too large a sample size will be difficult and will result in waste of time and resources. Thus, the goal of sample size planning is to estimate an appropriate number of subjects for a given study design. This article describes the concepts in estimating the sample size.
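The alpha and power conventions described above plug directly into the standard formula for comparing two group means, n per group = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2, where d is the standardized effect size. A minimal sketch using the normal approximation:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Subjects per group for a two-sided, two-sample comparison of means,
    where effect_size is the standardized difference |mu1 - mu2| / sigma."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # 0.84 for power = 0.80
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

print(n_per_group(0.5))  # medium standardized effect → 63 per group
```

Halving the detectable effect size quadruples the required sample, which is why underpowered small studies and wasteful oversized ones are both easy mistakes to make.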

  18. Likelihood inference of non-constant diversification rates with incomplete taxon sampling.

    Directory of Open Access Journals (Sweden)

    Sebastian Höhna

    Large-scale phylogenies provide a valuable source to study background diversification rates and investigate if the rates have changed over time. Unfortunately most large-scale, dated phylogenies are sparsely sampled (fewer than 5% of the described species) and taxon sampling is not uniform. Instead, taxa are frequently sampled to obtain at least one representative per subgroup (e.g. family) and thus to maximize diversity (diversified sampling). So far, such complications have been ignored, potentially biasing the conclusions that have been reached. In this study I derive the likelihood of a birth-death process with non-constant (time-dependent) diversification rates and diversified taxon sampling. Using simulations I test if the true parameters and the sampling method can be recovered when the trees are small or medium sized (fewer than 200 taxa). The results show that the diversification rates can be inferred and the estimates are unbiased for large trees but are biased for small trees (fewer than 50 taxa). Furthermore, model selection by means of Akaike's Information Criterion favors the true model if the true rates differ sufficiently from alternative models (e.g. the birth-death model is recovered if the extinction rate is large, compared to a pure-birth model). Finally, I applied six different diversification rate models--ranging from a constant-rate pure birth process to a decreasing speciation rate birth-death process but excluding any rate shift models--on three large-scale empirical phylogenies (ants, mammals and snakes) with respectively 149, 164 and 41 sampled species. All three phylogenies were constructed by diversified taxon sampling, as stated by the authors. However only the snake phylogeny supported diversified taxon sampling. Moreover, a parametric bootstrap test revealed that none of the tested models provided a good fit to the observed data.
The model assumptions, such as homogeneous rates across species or no rate shifts, appear
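For intuition, the simplest special case of these likelihoods (a completely sampled, constant-rate pure-birth process) has a closed form: while k lineages exist, the waiting time to the next speciation is exponential with rate k*lambda. A sketch under that simplifying assumption (every interval here is taken to end in a speciation event; the paper's time-dependent, diversified-sampling likelihood is considerably more involved):

```python
from math import log

def pure_birth_loglik(lam, waiting_times):
    """Log-likelihood of a completely sampled, constant-rate pure-birth tree.
    waiting_times[i] is the interval during which k = i + 2 lineages existed
    (observation starts from the two root lineages), and each interval is
    assumed to end in a speciation event, so each contributes an
    Exponential(k * lam) density term."""
    return sum(log((i + 2) * lam) - (i + 2) * lam * t
               for i, t in enumerate(waiting_times))

def mle_speciation_rate(waiting_times):
    """Closed-form maximizer: lambda_hat = m / sum_i(k_i * t_i)."""
    m = len(waiting_times)
    return m / sum((i + 2) * t for i, t in enumerate(waiting_times))
```

Incomplete or diversified sampling breaks exactly this simplicity: the per-interval rates no longer reflect the true number of coexisting lineages, which is the complication the paper's derivation addresses.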

  19. DNA cards: determinants of DNA yield and quality in collecting genetic samples for pharmacogenetic studies.

    Science.gov (United States)

    Mas, Sergi; Crescenti, Anna; Gassó, Patricia; Vidal-Taboada, Jose M; Lafuente, Amalia

    2007-08-01

    As pharmacogenetic studies frequently require establishment of DNA banks containing large cohorts with multi-centric designs, inexpensive methods for collecting and storing high-quality DNA are needed. The aims of this study were two-fold: to compare the amount and quality of DNA obtained from two different DNA cards (IsoCode Cards or FTA Classic Cards, Whatman plc, Brentford, Middlesex, UK); and to evaluate the effects of time and storage temperature, as well as the influence of anticoagulant ethylenediaminetetraacetic acid on the DNA elution procedure. The samples were genotyped by several methods typically used in pharmacogenetic studies: multiplex PCR, PCR-restriction fragment length polymorphism, single nucleotide primer extension, and allelic discrimination assay. In addition, they were amplified by whole genome amplification to increase genomic DNA mass. Time, storage temperature and ethylenediaminetetraacetic acid had no significant effects on either DNA card. This study reveals the importance of drying blood spots prior to isolation to avoid haemoglobin interference. Moreover, our results demonstrate that re-isolation protocols could be applied to increase the amount of DNA recovered. The samples analysed were accurately genotyped with all the methods examined herein. In conclusion, our study shows that both DNA cards, IsoCode Cards and FTA Classic Cards, facilitate genetic and pharmacogenetic testing for routine clinical practice.

  20. Relativistic rise measurements with very fine sampling intervals

    International Nuclear Information System (INIS)

    Ludlam, T.; Platner, E.D.; Polychronakos, V.A.; Lindenbaum, S.J.; Kramer, M.A.; Teramoto, Y.

    1980-01-01

    The motivation of this work was to determine whether the technique of charged particle identification via the relativistic rise in the ionization loss can be significantly improved by virtue of very small sampling intervals. A fast-sampling ADC and a longitudinal drift geometry were used to provide a large number of samples from a single drift chamber gap, achieving sampling intervals roughly 10 times smaller than any previous study. A single layer drift chamber was used, and tracks of 1 meter length were simulated by combining together samples from many identified particles in this detector. These data were used to study the resolving power for particle identification as a function of sample size, averaging technique, and the number of discrimination levels (ADC bits) used for pulse height measurements
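The averaging technique for many ionization samples is conventionally a truncated mean: only the smallest deposits are averaged, taming the long Landau tail of the energy-loss distribution. A sketch (the 60% keep fraction is a typical but illustrative choice, not the value used in this study):

```python
def truncated_mean(samples, keep_fraction=0.6):
    """dE/dx estimator: average only the lowest keep_fraction of the
    ionization samples, suppressing the Landau tail of large deposits."""
    n_keep = max(1, round(len(samples) * keep_fraction))
    kept = sorted(samples)[:n_keep]
    return sum(kept) / len(kept)

# Five samples from one track; the 20.0 outlier sits in the Landau tail.
print(truncated_mean([1.0, 2.0, 3.0, 4.0, 20.0]))  # → 2.0
```

Finer sampling intervals give more samples per unit track length, which tightens this estimator and is exactly the resolving-power question the study examines.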

  1. Test sample handling apparatus

    International Nuclear Information System (INIS)

    1981-01-01

    A test sample handling apparatus using automatic scintillation counting for gamma detection, for use in such fields as radioimmunoassay, is described. The apparatus automatically and continuously counts large numbers of samples rapidly and efficiently by the simultaneous counting of two samples. By means of sequential ordering of non-sequential counting data, it is possible to obtain precisely ordered data while utilizing sample carrier holders having a minimum length. (U.K.)

  2. Characterization of large-insert DNA libraries from soil for environmental genomic studies of Archaea

    DEFF Research Database (Denmark)

    Treusch, Alexander H; Kletzin, Arnulf; Raddatz, Guenter

    2004-01-01

    Complex genomic libraries are increasingly being used to retrieve complete genes, operons or large genomic fragments directly from environmental samples, without the need to cultivate the respective microorganisms. We report on the construction of three large-insert fosmid libraries in total...... (approximately 1% each) have been captured in our libraries. The diversity of putative protein-encoding genes, as reflected by their distribution into different COG clusters, was comparable to that encoded in complete genomes of cultivated microorganisms. A huge variety of genomic fragments has been captured...

  3. Feasibility study on large pool-type LMFBR

    International Nuclear Information System (INIS)

    1984-01-01

A feasibility study was conducted from FY 1981 to FY 1983 to evaluate the feasibility of a large pool-type LMFBR under the Japanese seismic and safety design conditions. The study aimed to establish an original reactor structure concept meeting those design conditions especially required in Japan. In the first year, preceding design concepts were reviewed and several new concepts suited to Japan were developed; test programs were planned for two representative concepts selected by preliminary analysis. In the second year, more than twenty tests with basic models were conducted under severe conditions, while analytical approaches were developed concurrently. In the last year, larger model tests were conducted and analytical methods were verified concerning hydrodynamic effects on structure vibration, thermo-hydraulic behaviours in reactor plena, and so on. Finally, the reactor structure concepts for a large pool-type LMFBR were found to be feasible in Japan. (author)

  4. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, confidence level, expected proportion of the outcome variable (for categorical variables) or standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) from the study. The greater the precision required, the larger the required sample size. Sampling Techniques: The probability sampling techniques applied for health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are more recommended than the nonprobability sampling techniques, because the results of the study can be generalized to the target population.
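The proportion-based sample-size estimate described above can be sketched as follows. This is a minimal illustration, not code from the article; the 95% z-value, the 50% expected proportion, and the finite-population correction are standard textbook assumptions:

```python
import math

def sample_size_proportion(p, margin, z=1.96, population=None):
    """Required n to estimate a proportion p within +/- margin at the
    confidence level implied by z (1.96 corresponds to 95%).
    If a finite population size is given, the usual finite-population
    correction is applied before rounding up."""
    n = (z ** 2) * p * (1 - p) / margin ** 2
    if population is not None:
        n = n / (1 + (n - 1) / population)  # finite-population correction
    return math.ceil(n)

# Expected outcome prevalence 50%, +/-5% precision, 95% confidence:
print(sample_size_proportion(0.5, 0.05))                  # 385
# Same precision, but drawing from a population of only 2000:
print(sample_size_proportion(0.5, 0.05, population=2000))  # 323
```

As the abstract notes, tightening the margin (the "precision required") grows n quadratically: halving the margin to 0.025 roughly quadruples the required sample size.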

  5. An Optimization Study on Listening Experiments to Improve the Comparability of Annoyance Ratings of Noise Samples from Different Experimental Sample Sets.

    Science.gov (United States)

    Di, Guoqing; Lu, Kuanguang; Shi, Xiaofan

    2018-03-08

Annoyance ratings obtained from listening experiments are widely used in studies on the health effects of environmental noise. In listening experiments, participants usually give the annoyance rating of each noise sample according to its relative annoyance degree among all samples in the experimental sample set if there are no reference sound samples, which leads to poor comparability between experimental results obtained from different experimental sample sets. To solve this problem, this study proposed adding several pink noise samples with certain loudness levels into experimental sample sets as reference sound samples. On this basis, the standard curve between logarithmic mean annoyance and loudness level of pink noise was used to calibrate the experimental results, and the calibration procedures were described in detail. Furthermore, as a case study, six different types of noise sample sets were selected to conduct listening experiments using this method to examine its applicability. Results showed that the differences in the annoyance ratings of each identical noise sample from different experimental sample sets were markedly decreased after calibration. The determination coefficient (R²) of linear fitting functions between psychoacoustic annoyance (PA) and mean annoyance (MA) of noise samples from different experimental sample sets increased obviously after calibration. The case study indicated that the method above is applicable to calibrating annoyance ratings obtained from different types of noise sample sets. After calibration, the comparability of annoyance ratings of noise samples from different experimental sample sets can be distinctly improved.

  6. A protocol for large scale genomic DNA isolation for cacao genetics ...

    African Journals Online (AJOL)

    Advances in DNA technology, such as marker assisted selection, detection of quantitative trait loci and genomic selection also require the isolation of DNA from a large number of samples and the preservation of tissue samples for future use in cacao genome studies. The present study proposes a method for the ...

  7. Results of Self-Absorption Study on the Versapor 3000 Filters for Radioactive Particulate Air Sampling

    International Nuclear Information System (INIS)

    Barnett, J.M.

    2008-01-01

Since the mid-1980s the Pacific Northwest National Laboratory (PNNL) has used a value of 0.85 as a correction factor for the self-absorption of activity of particulate radioactive air samples. More recently, an effort was made to evaluate the current particulate radioactive air sample filters (Versapor® 3000) used at PNNL for self-absorption effects. Two methods were used in the study: (1) comparing the radioactivity concentration obtained by direct gas-flow proportional counting of the filter to the results obtained after acid digestion of the filter and counting again by gas-flow proportional detection, and (2) evaluating sample filters by high-resolution visual/infrared microscopy to determine the depth of material loading on or in the filter fiber material. Sixty samples were selected from the archive for acid digestion in the first method, and about 30 samples were selected for high-resolution visual/infrared microscopy. Mass loading effects were also considered. From the sample filter analysis, a large error is associated with the average self-absorption factor; however, when the data are compared directly one-to-one, there appears to be good statistical correlation between the two analytical methods. The mass loading of the filters evaluated was <0.2 mg cm-2 and was also compared against other published results. The microscopy analysis shows that the sample material remains on top of the filter paper and does not embed into the filter media. Results of the microscopy evaluation lead to the conclusion that there is no mechanism for significant self-absorption. The overall conclusion is that self-absorption is not a significant factor in the analysis of filters used at PNNL for radioactive air stack sampling of radionuclide particulates, and that an applied correction factor is conservative in determining overall sample activity. A new self-absorption factor of 1.0 is recommended.

  8. Attenuation of species abundance distributions by sampling

    Science.gov (United States)

    Shimadzu, Hideyasu; Darnell, Ross

    2015-01-01

Quantifying biodiversity aspects such as species presence/absence, richness and abundance is an important challenge to answer scientific and resource management questions. In practice, biodiversity can only be assessed from biological material taken by surveys, a difficult task given limited time and resources. A type of random sampling, often called sub-sampling, is a commonly used technique to reduce the amount of time and effort for investigating large quantities of biological samples. However, it is not immediately clear how (sub-)sampling affects the estimate of biodiversity aspects from a quantitative perspective. This paper specifies the effect of (sub-)sampling as attenuation of the species abundance distribution (SAD), and articulates how the sampling bias is induced to the SAD by random sampling. The framework presented also reveals some confusion in previous theoretical studies. PMID:26064626
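The attenuation described above can be illustrated with binomial thinning: if each individual independently survives the sub-sample with probability q, rare species drop out entirely and the observed abundance distribution shifts left. A minimal simulation, where the community composition and q = 0.1 are invented for the example (this is not the authors' model code):

```python
import random

random.seed(1)

# A hypothetical community: species name -> true abundance.
community = {f"sp{i}": n for i, n in enumerate([500, 120, 40, 12, 4, 1])}

def subsample(community, q):
    """Binomial thinning: each individual is retained independently
    with probability q; species with zero survivors disappear."""
    observed = {}
    for sp, n in community.items():
        k = sum(random.random() < q for _ in range(n))
        if k > 0:
            observed[sp] = k
    return observed

obs = subsample(community, q=0.1)
print(len(community), "species in truth,", len(obs), "species observed")
```

Rare species (true abundance 1-4 here) are the ones most likely to vanish, which is exactly the richness bias the abstract refers to.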

  9. Keeping the Beat: A Large Sample Study of Bouncing and Clapping to Music

    Science.gov (United States)

    Tranchant, Pauline; Vuvan, Dominique T.; Peretz, Isabelle

    2016-01-01

The vast majority of humans move in time with a musical beat. This behaviour has been mostly studied through finger-tapping synchronization. Here, we evaluate naturalistic synchronization responses to music (bouncing and clapping) in 100 university students. Their ability to match the period of their bounces and claps to those of a metronome and musical clips varying in beat saliency was assessed. In general, clapping was better synchronized with the beat than bouncing, suggesting that the choice of a specific movement type is an important factor to consider in the study of sensorimotor synchronization processes. Performance improved as a function of beat saliency, indicating that beat abstraction plays a significant role in synchronization. Fourteen percent of the population exhibited marked difficulties with matching the beat. Yet, at a group level, poor synchronizers showed similar sensitivity to movement type and beat saliency as normal synchronizers. These results suggest the presence of quantitative rather than qualitative variations when losing the beat. PMID:27471854

  10. Keeping the Beat: A Large Sample Study of Bouncing and Clapping to Music.

    Directory of Open Access Journals (Sweden)

    Pauline Tranchant

The vast majority of humans move in time with a musical beat. This behaviour has been mostly studied through finger-tapping synchronization. Here, we evaluate naturalistic synchronization responses to music (bouncing and clapping) in 100 university students. Their ability to match the period of their bounces and claps to those of a metronome and musical clips varying in beat saliency was assessed. In general, clapping was better synchronized with the beat than bouncing, suggesting that the choice of a specific movement type is an important factor to consider in the study of sensorimotor synchronization processes. Performance improved as a function of beat saliency, indicating that beat abstraction plays a significant role in synchronization. Fourteen percent of the population exhibited marked difficulties with matching the beat. Yet, at a group level, poor synchronizers showed similar sensitivity to movement type and beat saliency as normal synchronizers. These results suggest the presence of quantitative rather than qualitative variations when losing the beat.

  11. Performance of informative priors skeptical of large treatment effects in clinical trials: A simulation study.

    Science.gov (United States)

    Pedroza, Claudia; Han, Weilu; Thanh Truong, Van Thi; Green, Charles; Tyson, Jon E

    2018-01-01

One of the main advantages of Bayesian analyses of clinical trials is their ability to formally incorporate skepticism about large treatment effects through the use of informative priors. We conducted a simulation study to assess the performance of informative normal, Student-t, and beta distributions in estimating relative risk (RR) or odds ratio (OR) for binary outcomes. Simulation scenarios varied the prior standard deviation (SD; level of skepticism of large treatment effects), outcome rate in the control group, true treatment effect, and sample size. We compared the priors with regard to bias, mean squared error (MSE), and coverage of 95% credible intervals. Simulation results show that the prior SD influenced the posterior to a greater degree than the particular distributional form of the prior. For RR, priors with a 95% interval of 0.50-2.0 performed well in terms of bias, MSE, and coverage under most scenarios. For OR, priors with a wider 95% interval of 0.23-4.35 had good performance. We recommend the use of informative priors that exclude implausibly large treatment effects in analyses of clinical trials, particularly for major outcomes such as mortality.
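The skeptical-prior construction referred to above has a simple arithmetic core for the normal case: a normal prior on log(RR) centred at 0 whose 95% interval on the ratio scale runs from 1/U to U has SD = ln(U)/1.96. A sketch of that calculation (an illustration under the symmetric-interval assumption, not the authors' simulation code):

```python
import math

def skeptical_prior_sd(lower, upper, z=1.96):
    """SD of a mean-zero normal prior on the log scale whose central
    95% interval on the ratio scale runs from `lower` to `upper`.
    Assumes the interval is symmetric around RR (or OR) = 1."""
    assert math.isclose(math.log(lower), -math.log(upper)), \
        "interval must be symmetric on the log scale"
    return math.log(upper) / z

# 95% prior interval 0.50-2.0 for a relative risk:
print(round(skeptical_prior_sd(0.5, 2.0), 3))        # 0.354
# Wider 95% prior interval for an odds ratio, up to 4.35:
print(round(skeptical_prior_sd(1 / 4.35, 4.35), 3))  # 0.75
```

A prior SD of about 0.354 on log(RR) is what makes effects outside 0.50-2.0 a priori implausible, which is the kind of skepticism the abstract recommends for major outcomes.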

  12. Methodological Considerations in Estimation of Phenotype Heritability Using Genome-Wide SNP Data, Illustrated by an Analysis of the Heritability of Height in a Large Sample of African Ancestry Adults.

    Directory of Open Access Journals (Sweden)

    Fang Chen

Height has an extremely polygenic pattern of inheritance. Genome-wide association studies (GWAS) have revealed hundreds of common variants that are associated with human height at genome-wide levels of significance. However, only a small fraction of phenotypic variation can be explained by the aggregate of these common variants. In a large study of African-American men and women (n = 14,419), we genotyped and analyzed 966,578 autosomal SNPs across the entire genome using a linear mixed model variance components approach implemented in the program GCTA (Yang et al., Nat Genet 2010), and estimated an additive heritability of 44.7% (se: 3.7%) for this phenotype in a sample of evidently unrelated individuals. While this estimated value is similar to that given by Yang et al. in their analyses, we remain concerned about two related issues: (1) whether, in the complete absence of hidden relatedness, variance components methods have adequate power to estimate heritability when a very large number of SNPs are used in the analysis; and (2) whether estimation of heritability may be biased, in real studies, by low levels of residual hidden relatedness. We addressed the first question in a semi-analytic fashion by directly simulating the distribution of the score statistic for a test of zero heritability with and without low levels of relatedness. The second question was addressed by a very careful comparison of the behavior of estimated heritability for both observed (self-reported) height and simulated phenotypes compared to imputation R² as a function of the number of SNPs used in the analysis. These simulations help to address the important question about whether today's GWAS SNPs will remain useful for imputing causal variants that are discovered using very large sample sizes in future studies of height, or whether the causal variants themselves will need to be genotyped de novo in order to build a prediction model that ultimately captures a large fraction of the

  13. Large scale nuclear structure studies

    International Nuclear Information System (INIS)

    Faessler, A.

    1985-01-01

Results of large-scale nuclear structure studies are reported. The starting point is the Hartree-Fock-Bogoliubov solution with angular momentum and proton and neutron number projection after variation. For number- and spin-projected two-quasiparticle excitations with realistic forces, this model yields results in sd-shell nuclei comparable in quality to the 'exact' shell-model calculations. Here the authors present results for the pf-shell nucleus 46Ti and for the A=130 mass region, where they studied 58 different nuclei with the same single-particle energies and the same effective force derived from a meson exchange potential. They carried out a Hartree-Fock-Bogoliubov variation after mean field projection in realistic model spaces. In this way, they determine for each yrast state the optimal mean Hartree-Fock-Bogoliubov field. They apply this method to 130Ce and 128Ba using the same effective nucleon-nucleon interaction. (Auth.)

  14. Overweight and Obesity: Prevalence and Correlates in a Large Clinical Sample of Children with Autism Spectrum Disorder

    Science.gov (United States)

    Zuckerman, Katharine E.; Hill, Alison P.; Guion, Kimberly; Voltolina, Lisa; Fombonne, Eric

    2014-01-01

    Autism Spectrum Disorders (ASDs) and childhood obesity (OBY) are rising public health concerns. This study aimed to evaluate the prevalence of overweight (OWT) and OBY in a sample of 376 Oregon children with ASD, and to assess correlates of OWT and OBY in this sample. We used descriptive statistics, bivariate, and focused multivariate analyses to…

  15. Acceptance sampling using judgmental and randomly selected samples

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl

    2010-09-01

We present a Bayesian model for acceptance sampling where the population consists of two groups, each with different levels of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail, and discuss the conditions necessary to ensure that required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems, and, in particular, to environmental sampling where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.
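In the fully random, single-group special case the "all samples acceptable" inference has a closed form: with a uniform Beta(1, 1) prior on the defect fraction p and n sampled items all found acceptable, the posterior is Beta(1, n + 1), so P(p <= p0) = 1 - (1 - p0)^(n + 1). The following sketch shows that textbook special case only, not the paper's two-group judgmental model:

```python
def posterior_prob_clean(n, p0):
    """P(defect fraction <= p0 | n randomly sampled items, all acceptable),
    under a uniform Beta(1, 1) prior: the Beta(1, n + 1) CDF at p0,
    which simplifies to 1 - (1 - p0)**(n + 1)."""
    return 1 - (1 - p0) ** (n + 1)

# How many clean samples are needed to be 95% confident that
# fewer than 5% of the items in the population are unacceptable?
n = 0
while posterior_prob_clean(n, 0.05) < 0.95:
    n += 1
print(n)  # 58
```

Judgmentally targeting the high-risk items first, as the model above does, is precisely what lets the required random-sample size drop below this single-group baseline.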

  16. Environmental monitoring study of linear alkylbenzene sulfonates and insoluble soap in Spanish sewage sludge samples.

    Science.gov (United States)

    Cantarero, Samuel; Zafra-Gómez, Alberto; Ballesteros, Oscar; Navalón, Alberto; Reis, Marco S; Saraiva, Pedro M; Vílchez, José L

    2011-01-01

In this work we present a monitoring study of linear alkylbenzene sulfonates (LAS) and insoluble soap performed on Spanish sewage sludge samples. This work focuses on finding statistical relations between the concentrations of LAS and insoluble soap in sewage sludge samples and variables related to wastewater treatment plants such as water hardness, population and treatment type. In total, 38 samples collected from different Spanish regions were studied. Principal Component Analysis (PCA) was used to reduce the number of response variables. The analysis of variance (ANOVA) test and a non-parametric test, the Kruskal-Wallis test, were also applied through estimation of the p-value (the probability of obtaining a test statistic at least as extreme as the one actually observed, assuming that the null hypothesis is true) in order to study possible relations between the concentration of both analytes and the remaining variables. We also compared the behaviour of LAS and insoluble soap. In addition, the results obtained for LAS (mean value) were compared with the limit value proposed by the future Directive entitled "Working Document on Sludge". According to the results, the mean values obtained for soap and LAS were 26.49 g kg(-1) and 6.15 g kg(-1), respectively. It is worth noting that the LAS mean was significantly higher than the proposed limit value (2.6 g kg(-1)). In addition, LAS and soap concentrations depend largely on water hardness. However, only the LAS concentration depends on treatment type.
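The nonparametric comparison mentioned above can be illustrated with a hand-rolled Kruskal-Wallis H statistic on tie-free toy data (the concentrations and hardness grouping are invented; a real analysis would use a library routine and convert H to a p-value via the chi-square distribution):

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H for k groups, assuming no tied values:
    H = 12 / (N (N + 1)) * sum_i n_i * rbar_i**2 - 3 (N + 1),
    where rbar_i is the mean rank of group i over the pooled sample."""
    pooled = sorted(x for g in groups for x in g)
    rank = {x: i + 1 for i, x in enumerate(pooled)}  # ranks 1..N, no ties
    n_total = len(pooled)
    s = sum(len(g) * (sum(rank[x] for x in g) / len(g)) ** 2 for g in groups)
    return 12.0 * s / (n_total * (n_total + 1)) - 3 * (n_total + 1)

# Toy LAS concentrations (g/kg) grouped by water hardness class:
soft, medium, hard = [2.1, 3.0, 2.5], [4.2, 5.1, 4.8], [7.9, 6.6, 8.3]
print(round(kruskal_wallis_h(soft, medium, hard), 2))  # 7.2
```

With three perfectly separated groups of three, H reaches its maximum of 7.2 for this layout; overlapping groups yield smaller H and hence larger p-values.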

  17. The Study on Mental Health at Work: Design and sampling.

    Science.gov (United States)

    Rose, Uwe; Schiel, Stefan; Schröder, Helmut; Kleudgen, Martin; Tophoven, Silke; Rauch, Angela; Freude, Gabriele; Müller, Grit

    2017-08-01

The Study on Mental Health at Work (S-MGA) generates the first nationwide representative survey enabling the exploration of the relationship between working conditions, mental health and functioning. This paper describes the study design, sampling procedures and data collection, and presents a summary of the sample characteristics. S-MGA is a representative study of German employees aged 31-60 years subject to social security contributions. The sample was drawn from the employment register based on a two-stage cluster sampling procedure. Firstly, 206 municipalities were randomly selected from a pool of 12,227 municipalities in Germany. Secondly, 13,590 addresses were drawn from the selected municipalities for the purpose of conducting 4500 face-to-face interviews. The questionnaire covers psychosocial working and employment conditions, measures of mental health, work ability and functioning. Data from personal interviews were combined with employment histories from register data. Descriptive statistics of socio-demographic characteristics and logistic regression analyses were used for comparing population, gross sample and respondents. In total, 4511 face-to-face interviews were conducted. A test for sampling bias revealed that individuals in older cohorts participated more often, while individuals with an unknown educational level, residing in major cities or with a non-German ethnic background were slightly underrepresented. There is no indication of major deviations in characteristics between the basic population and the sample of respondents. Hence, S-MGA provides representative data for research on work and health, designed as a cohort study with plans to rerun the survey 5 years after the first assessment.

  18. The Study on Mental Health at Work: Design and sampling

    Science.gov (United States)

    Rose, Uwe; Schiel, Stefan; Schröder, Helmut; Kleudgen, Martin; Tophoven, Silke; Rauch, Angela; Freude, Gabriele; Müller, Grit

    2017-01-01

Aims: The Study on Mental Health at Work (S-MGA) generates the first nationwide representative survey enabling the exploration of the relationship between working conditions, mental health and functioning. This paper describes the study design, sampling procedures and data collection, and presents a summary of the sample characteristics. Methods: S-MGA is a representative study of German employees aged 31–60 years subject to social security contributions. The sample was drawn from the employment register based on a two-stage cluster sampling procedure. Firstly, 206 municipalities were randomly selected from a pool of 12,227 municipalities in Germany. Secondly, 13,590 addresses were drawn from the selected municipalities for the purpose of conducting 4500 face-to-face interviews. The questionnaire covers psychosocial working and employment conditions, measures of mental health, work ability and functioning. Data from personal interviews were combined with employment histories from register data. Descriptive statistics of socio-demographic characteristics and logistic regression analyses were used for comparing population, gross sample and respondents. Results: In total, 4511 face-to-face interviews were conducted. A test for sampling bias revealed that individuals in older cohorts participated more often, while individuals with an unknown educational level, residing in major cities or with a non-German ethnic background were slightly underrepresented. Conclusions: There is no indication of major deviations in characteristics between the basic population and the sample of respondents. Hence, S-MGA provides representative data for research on work and health, designed as a cohort study with plans to rerun the survey 5 years after the first assessment. PMID:28673202
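The two-stage cluster draw described in both versions of this record can be sketched with standard-library sampling. The municipality and address counts mirror the abstract; the identifier strings and the equal per-cluster allocation are invented simplifications, not the S-MGA sampling plan:

```python
import random

random.seed(42)

# Stage 1: draw 206 municipalities from the 12,227 in the register.
municipalities = [f"muni_{i:05d}" for i in range(12_227)]
stage1 = random.sample(municipalities, 206)

# Stage 2: draw addresses within the selected municipalities. Here we
# use an equal allocation of 66 addresses per municipality, which gets
# close to the gross sample of 13,590 (206 * 66 = 13,596).
addresses = {m: [f"{m}/addr_{j}" for j in range(500)] for m in stage1}
per_cluster = 66
stage2 = [a for m in stage1
            for a in random.sample(addresses[m], per_cluster)]

print(len(stage1), len(stage2))
```

Clustering the draw this way keeps interviewer travel feasible (few municipalities, many addresses each) at the cost of a design effect relative to a simple random sample of the same size.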

  19. Explaining health care expenditure variation: large-sample evidence using linked survey and health administrative data.

    Science.gov (United States)

    Ellis, Randall P; Fiebig, Denzil G; Johar, Meliyanni; Jones, Glenn; Savage, Elizabeth

    2013-09-01

    Explaining individual, regional, and provider variation in health care spending is of enormous value to policymakers but is often hampered by the lack of individual level detail in universal public health systems because budgeted spending is often not attributable to specific individuals. Even rarer is self-reported survey information that helps explain this variation in large samples. In this paper, we link a cross-sectional survey of 267 188 Australians age 45 and over to a panel dataset of annual healthcare costs calculated from several years of hospital, medical and pharmaceutical records. We use this data to distinguish between cost variations due to health shocks and those that are intrinsic (fixed) to an individual over three years. We find that high fixed expenditures are positively associated with age, especially older males, poor health, obesity, smoking, cancer, stroke and heart conditions. Being foreign born, speaking a foreign language at home and low income are more strongly associated with higher time-varying expenditures, suggesting greater exposure to adverse health shocks. Copyright © 2013 John Wiley & Sons, Ltd.

  20. Optimism and self-esteem are related to sleep. Results from a large community-based sample.

    Science.gov (United States)

    Lemola, Sakari; Räikkönen, Katri; Gomez, Veronica; Allemand, Mathias

    2013-12-01

There is evidence that positive personality characteristics, such as optimism and self-esteem, are important for health. Less is known about possible determinants of positive personality characteristics. The aim was to test the relationship of optimism and self-esteem with insomnia symptoms and sleep duration. Sleep parameters, optimism, and self-esteem were assessed by self-report in a community-based sample of 1,805 adults aged between 30 and 84 years in the USA. Moderation of the relation between sleep and positive characteristics by gender and age, as well as potential confounding of the association by depressive disorder, was tested. Individuals with insomnia symptoms scored lower on optimism and self-esteem largely independent of age and sex, controlling for symptoms of depression and sleep duration. Short sleep duration (less than the 7-8 h reference) was related to lower optimism and self-esteem when compared to individuals sleeping 7-8 h, controlling for depressive symptoms. Long sleep duration (>9 h) was also related to low optimism and self-esteem independent of age and sex. Good and sufficient sleep is associated with positive personality characteristics. This relationship is independent of the association between poor sleep and depression.

  1. Study of β-NMR for Liquid Biological Samples

    CERN Document Server

    Beattie, Caitlin

    2017-01-01

    β-NMR is an exotic form of NMR spectroscopy that allows for the characterization of matter based on the anisotropic β-decay of radioactive probe nuclei. This has been shown to be an effective spectroscopic technique for many different compounds, but its use for liquid biological samples is relatively unexplored. The work at the VITO line of ISOLDE seeks to employ this technique to study such samples. Currently, preparations are being made for an experiment to characterize DNA G-quadruplexes and their interactions with stabilizing cations. More specifically, the work in which I engaged as a summer student focused on the experiment’s liquid handling system and the stability of the relevant biological samples under vacuum.

  2. The CUPID (Cultural and Psychosocial Influences on Disability) study: methods of data collection and characteristics of study sample.

    Directory of Open Access Journals (Sweden)

    David Coggon

The CUPID (Cultural and Psychosocial Influences on Disability) study was established to explore the hypothesis that common musculoskeletal disorders (MSDs) and associated disability are importantly influenced by culturally determined health beliefs and expectations. This paper describes the methods of data collection and various characteristics of the study sample. A standardised questionnaire covering musculoskeletal symptoms, disability and potential risk factors was used to collect information from 47 samples of nurses, office workers, and other (mostly manual) workers in 18 countries from six continents. In addition, local investigators provided data on economic aspects of employment for each occupational group. Participation exceeded 80% in 33 of the 47 occupational groups, and after pre-specified exclusions, analysis was based on 12,426 subjects (92 to 1018 per occupational group). As expected, there was high usage of computer keyboards by office workers, while nurses had the highest prevalence of heavy manual lifting in all but one country. There was substantial heterogeneity between occupational groups in economic and psychosocial aspects of work; three- to five-fold variation in awareness of someone outside work with musculoskeletal pain; and more than ten-fold variation in the prevalence of adverse health beliefs about back and arm pain, and in awareness of terms such as "repetitive strain injury" (RSI). The large differences in psychosocial risk factors (including knowledge and beliefs about MSDs) between occupational groups should allow the study hypothesis to be addressed effectively.

  3. The CUPID (Cultural and Psychosocial Influences on Disability) Study: Methods of Data Collection and Characteristics of Study Sample

    Science.gov (United States)

    Coggon, David; Ntani, Georgia; Palmer, Keith T.; Felli, Vanda E.; Harari, Raul; Barrero, Lope H.; Felknor, Sarah A.; Gimeno, David; Cattrell, Anna; Serra, Consol; Bonzini, Matteo; Solidaki, Eleni; Merisalu, Eda; Habib, Rima R.; Sadeghian, Farideh; Kadir, Masood; Warnakulasuriya, Sudath S. P.; Matsudaira, Ko; Nyantumbu, Busisiwe; Sim, Malcolm R.; Harcombe, Helen; Cox, Ken; Marziale, Maria H.; Sarquis, Leila M.; Harari, Florencia; Freire, Rocio; Harari, Natalia; Monroy, Magda V.; Quintana, Leonardo A.; Rojas, Marianela; Salazar Vega, Eduardo J.; Harris, E. Clare; Vargas-Prada, Sergio; Martinez, J. Miguel; Delclos, George; Benavides, Fernando G.; Carugno, Michele; Ferrario, Marco M.; Pesatori, Angela C.; Chatzi, Leda; Bitsios, Panos; Kogevinas, Manolis; Oha, Kristel; Sirk, Tuuli; Sadeghian, Ali; Peiris-John, Roshini J.; Sathiakumar, Nalini; Wickremasinghe, A. Rajitha; Yoshimura, Noriko; Kielkowski, Danuta; Kelsall, Helen L.; Hoe, Victor C. W.; Urquhart, Donna M.; Derett, Sarah; McBride, David; Gray, Andrew

    2012-01-01

    Background The CUPID (Cultural and Psychosocial Influences on Disability) study was established to explore the hypothesis that common musculoskeletal disorders (MSDs) and associated disability are importantly influenced by culturally determined health beliefs and expectations. This paper describes the methods of data collection and various characteristics of the study sample. Methods/Principal Findings A standardised questionnaire covering musculoskeletal symptoms, disability and potential risk factors, was used to collect information from 47 samples of nurses, office workers, and other (mostly manual) workers in 18 countries from six continents. In addition, local investigators provided data on economic aspects of employment for each occupational group. Participation exceeded 80% in 33 of the 47 occupational groups, and after pre-specified exclusions, analysis was based on 12,426 subjects (92 to 1018 per occupational group). As expected, there was high usage of computer keyboards by office workers, while nurses had the highest prevalence of heavy manual lifting in all but one country. There was substantial heterogeneity between occupational groups in economic and psychosocial aspects of work; three- to five-fold variation in awareness of someone outside work with musculoskeletal pain; and more than ten-fold variation in the prevalence of adverse health beliefs about back and arm pain, and in awareness of terms such as “repetitive strain injury” (RSI). Conclusions/Significance The large differences in psychosocial risk factors (including knowledge and beliefs about MSDs) between occupational groups should allow the study hypothesis to be addressed effectively. PMID:22792189

  4. Large Deployable Reflector (LDR) feasibility study update

    Science.gov (United States)

    Alff, W. H.; Banderman, L. W.

    1983-01-01

    In 1982 a workshop was held to refine the science rationale for large deployable reflectors (LDR) and develop technology requirements that support the science rationale. At the end of the workshop, a set of LDR consensus systems requirements was established. The subject study was undertaken to update the initial LDR study using the new systems requirements. The study included mirror materials selection and configuration, thermal analysis, structural concept definition and analysis, dynamic control analysis and recommendations for further study. The primary emphasis was on the dynamic controls requirements and the sophistication of the controls system needed to meet LDR performance goals.

  5. Large aspect ratio tokamak study

    International Nuclear Information System (INIS)

    Reid, R.L.; Holmes, J.A.; Houlberg, W.A.; Peng, Y.K.M.; Strickler, D.J.; Brown, T.G.; Sardella, C.; Wiseman, G.W.

    1979-01-01

The Large Aspect Ratio Tokamak Study (LARTS) investigated the potential for producing a viable long burn tokamak reactor through enhanced volt-second capability of the ohmic heating transformer by employing high aspect ratio designs. The plasma physics, engineering, and economic implications of high aspect ratio tokamaks were assessed in the context of extended burn operation. Plasma startup and burn parameters were addressed using a one-dimensional transport code. The pulsed electrical power requirements for the poloidal field system, which have a major impact on reactor economics, were minimized by optimizing the field in the ohmic heating coil and the wave shape of the ohmic heating discharge. A high aspect ratio reference reactor was chosen and configured.

  6. Acoustically levitated droplets: a contactless sampling method for fluorescence studies.

    Science.gov (United States)

    Leiterer, Jork; Grabolle, Markus; Rurack, Knut; Resch-Genger, Ute; Ziegler, Jan; Nann, Thomas; Panne, Ulrich

    2008-01-01

    Acoustic levitation is used as a new tool to study concentration-dependent processes in fluorescence spectroscopy. With this technique, small amounts of liquid and solid samples can be measured without the need for sample supports or containers, which often limits signal acquisition and can even alter sample properties due to interactions with the support material. We demonstrate that, because of the small sample volume, fluorescence measurements at high concentrations of an organic dye are possible without the limitation of inner-filter effects, which hamper such experiments in conventional, cuvette-based measurements. Furthermore, we show that acoustic levitation of liquid samples provides an experimentally simple way to study distance-dependent fluorescence modulations in semiconductor nanocrystals. The evaporation of the solvent during levitation leads to a continuous increase of solute concentration and can easily be monitored by laser-induced fluorescence.

  7. Large magnetoresistance in intercalated Cu oxides

    OpenAIRE

    Grigoryan, L.; Furusawa, M.; Hori, H.; Tokumoto, M.

    1997-01-01

    Magnetism and electrical resistance as a function of magnetic field, temperature, and chemical composition are studied in Cu oxides intercalated with metal phthalocyanines MPc, where M is Fe or Ni, and Pc is C_H_N_. An unusually large positive magnetoresistance (MR) of ~ 1200% is observed in FePc-intercalated Bi_Sr_Ca_Cu_O_ samples with two Cu-O layers in the unit cell (n=2). The magnitude of the MR decreased to 40% and ~ 0% in the FePc-intercalated n=3 and n=4 samples, respectively, and to ~...

  8. Efficiently sampling conformations and pathways using the concurrent adaptive sampling (CAS) algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Surl-Hee; Grate, Jay W.; Darve, Eric F.

    2017-08-21

    Molecular dynamics (MD) simulations are useful in obtaining thermodynamic and kinetic properties of bio-molecules but are limited by the timescale barrier, i.e., we may be unable to efficiently obtain properties because we need to run microseconds or longer simulations using femtosecond time steps. While there are several existing methods to overcome this timescale barrier and efficiently sample thermodynamic and/or kinetic properties, problems remain in regard to being able to sample unknown systems, deal with high-dimensional space of collective variables, and focus the computational effort on slow timescales. Hence, a new sampling method, called the “Concurrent Adaptive Sampling (CAS) algorithm,” has been developed to tackle these three issues and efficiently obtain conformations and pathways. The method is not constrained to use only one or two collective variables, unlike most reaction coordinate-dependent methods. Instead, it can use a large number of collective variables and uses macrostates (a partition of the collective variable space) to enhance the sampling. The exploration is done by running a large number of short simulations, and a clustering technique is used to accelerate the sampling. In this paper, we introduce the new methodology and show results from two-dimensional models and bio-molecules, such as penta-alanine and triazine polymer.
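
    The core idea of macrostate-based adaptive sampling can be illustrated with a toy sketch: walkers carrying probability weights take short "simulation" steps, the collective-variable axis is partitioned into macrostates, and walkers are split/merged per macrostate so rarely visited regions keep getting explored. This is not the authors' CAS implementation; the 1D double-well potential, step sizes, and bin widths are all invented for illustration.

    ```python
    import math
    import random

    def short_md_step(x, beta=1.0):
        """One short 'simulation': a Metropolis step on a double-well potential."""
        def U(z):
            return (z * z - 1.0) ** 2  # minima at z = -1 and z = +1
        trial = x + random.uniform(-0.2, 0.2)
        if random.random() < min(1.0, math.exp(-beta * (U(trial) - U(x)))):
            return trial
        return x

    def macrostate(x, width=0.25):
        """Partition the collective-variable axis into bins (macrostates)."""
        return int(x // width)

    def resample(walkers, per_bin=4):
        """Split/merge walkers so each occupied macrostate holds `per_bin`
        walkers while the total probability weight is conserved."""
        bins = {}
        for x, w in walkers:
            bins.setdefault(macrostate(x), []).append((x, w))
        new = []
        for members in bins.values():
            total = sum(w for _, w in members)
            xs = [x for x, _ in members]
            ws = [w for _, w in members]
            for _ in range(per_bin):
                new.append((random.choices(xs, weights=ws)[0], total / per_bin))
        return new

    random.seed(0)
    walkers = [(-1.0, 1.0 / 8)] * 8        # all walkers start in the left well
    for _ in range(50):                    # CAS-style cycles: short runs, then resample
        walkers = [(short_md_step(x), w) for x, w in walkers]
        walkers = resample(walkers)

    print(len(walkers), sum(w for _, w in walkers))
    ```

    The resampling keeps sparsely populated macrostates alive with a fixed number of walkers, which is what lets such methods cross barriers that plain MD would rarely sample.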

  9. Use of Sequenom sample ID Plus® SNP genotyping in identification of FFPE tumor samples.

    Directory of Open Access Journals (Sweden)

    Jessica K Miller

    Full Text Available Short tandem repeat (STR) analysis, such as the AmpFlSTR® Identifiler® Plus kit, is a standard, PCR-based human genotyping method used in the field of forensics. Misidentification of cell line and tissue DNA can be costly if not detected early; therefore it is necessary to have quality control measures such as STR profiling in place. A major issue in large-scale research studies involving archival formalin-fixed, paraffin-embedded (FFPE) tissues is that varying levels of DNA degradation can result in failure to correctly identify samples using STR genotyping. PCR amplification of STRs of several hundred base pairs is not always possible when DNA is degraded. The Sample ID Plus® panel from Sequenom allows for human DNA identification and authentication using SNP genotyping. In comparison to lengthy STR amplicons, this multiplexing PCR assay requires amplification of only 76-139 base pairs, and utilizes 47 SNPs to discriminate between individual samples. In this study, we evaluated both STR and SNP genotyping methods of sample identification, with a focus on paired FFPE tumor/normal DNA samples intended for next-generation sequencing (NGS). The ability to successfully validate the identity of FFPE samples can enable cost savings by reducing rework.
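
    The discriminating power of a 47-SNP panel can be checked with standard population genetics: for a biallelic SNP with allele frequency p, two unrelated individuals share a genotype with probability p⁴ + (2p(1−p))² + (1−p)⁴, and independent loci multiply. The sketch below assumes the idealized best case of independent SNPs all at frequency 0.5 (real panel frequencies will differ).

    ```python
    # Random match probability for a panel of independent biallelic SNPs.
    def genotype_match_prob(p):
        """Probability two unrelated individuals share a genotype at one SNP."""
        q = 1.0 - p
        return p ** 4 + (2.0 * p * q) ** 2 + q ** 4

    # Best case for discrimination: all 47 SNPs at allele frequency 0.5
    rmp = genotype_match_prob(0.5) ** 47
    print(f"random match probability with 47 SNPs: {rmp:.1e}")
    ```

    Even under these simplifying assumptions the match probability is on the order of 10⁻²⁰, which is why 47 short-amplicon SNPs suffice to authenticate degraded FFPE samples.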

  10. Assessing the sustainable construction of large construction companies in Malaysia

    Science.gov (United States)

    Adewale, Bamgbade Jibril; Mohammed, Kamaruddeen Ahmed; Nasrun, Mohd Nawi Mohd

    2016-08-01

    Considering the increasing concerns for the consideration of sustainability issues in construction project delivery within the construction industry, this paper assesses the extent of sustainable construction among Malaysian large contractors, in order to ascertain the level of the industry's impacts on both the environment and the society. Sustainable construction explains the construction industry's responsibility to efficiently utilise the finite resources while also reducing construction impacts on both humans and the environment throughout the phases of construction. This study used proportionate stratified random sampling in a field study that yielded a sample of 172 contractors from 708 administered questionnaires. Data were collected from large contractors in the eleven states of peninsular Malaysia. Using a five-level rating scale (1 = Very Low; 2 = Low; 3 = Moderate; 4 = High; 5 = Very High) to describe the level of sustainable construction of Malaysian contractors based on previous studies, statistical analysis reveals that the environmental, social and economic sustainability of Malaysian large contractors is high.

  11. Chorionic villus sampling and amniocentesis.

    Science.gov (United States)

    Brambati, Bruno; Tului, Lucia

    2005-04-01

    The advantages and disadvantages of common invasive methods for prenatal diagnosis are presented in light of new investigations. Several aspects of first-trimester chorionic villus sampling and mid-trimester amniocentesis remain controversial, especially fetal loss rate, feto-maternal complications, and the extension of both sampling methods to less traditional gestational ages (early amniocentesis, late chorionic villus sampling), all of which complicate genetic counseling. A recent randomized trial involving early amniocentesis and late chorionic villus sampling has confirmed previous studies, leading to the unquestionable conclusion that transabdominal chorionic villus sampling is safer. The old dispute over whether limb reduction defects are caused by chorionic villus sampling gains new vigor, with a paper suggesting that this technique has distinctive teratogenic effects. The large experience involving maternal and fetal complications following mid-trimester amniocentesis allows a better estimate of risk for comparison with chorionic villus sampling. Transabdominal chorionic villus sampling, which appears to be the gold standard sampling method for genetic investigations between 10 and 15 completed weeks, permits rapid diagnosis in high-risk cases detected by first-trimester screening of aneuploidies. Sampling efficiency and karyotyping reliability are as high as in mid-trimester amniocentesis with fewer complications, provided the operator has the required training, skill and experience.

  12. Dynamics of acoustically levitated disk samples.

    Science.gov (United States)

    Xie, W J; Wei, B

    2004-10-01

    The acoustic levitation force on disk samples and the dynamics of large water drops in a planar standing wave are studied by solving the acoustic scattering problem through incorporating the boundary element method. The dependence of levitation force amplitude on the equivalent radius R of disks deviates seriously from the R³ law predicted by King's theory, and a larger force can be obtained for thin disks. When the disk aspect ratio γ is larger than a critical value γ* (approximately 1.9) and the disk radius a is smaller than the critical value a*(γ), the levitation force per unit volume of the sample will increase with the enlargement of the disk. The optimal acoustic field for stable levitation of a large water drop is obtained by adjusting the reflector-emitter interval H slightly above the resonant interval H(n). The simulation shows that the drop is flattened and the central parts of its top and bottom surface become concave with the increase of sound pressure level, which agrees with the experimental observation. The main frequencies of the shape oscillation under different sound pressures are slightly larger than the Rayleigh frequency because of the large shape deformation. The simulated translational frequencies of the vertical vibration under normal gravity condition agree with the theoretical analysis.

  13. The impact of study design and diagnostic approach in a large multi-centre ADHD study: Part 2: Dimensional measures of psychopathology and intelligence

    Directory of Open Access Journals (Sweden)

    Roeyers Herbert

    2011-04-01

    Full Text Available Abstract Background The International Multi-centre ADHD Genetics (IMAGE) project with 11 participating centres from 7 European countries and Israel has collected a large behavioural and genetic database for present and future research. Behavioural data were collected from 1068 probands with ADHD and 1446 unselected siblings. The aim was to describe and analyse questionnaire data and IQ measures from all probands and siblings. In particular, to investigate the influence of age, gender, family status (proband vs. sibling), informant, and centres on sample homogeneity in psychopathological measures. Methods Conners' Questionnaires, Strengths and Difficulties Questionnaires, and Wechsler Intelligence Scores were used to describe the phenotype of the sample. Data were analysed by use of robust statistical multi-way procedures. Results Besides main effects of age, gender, informant, and centre, there were considerable interaction effects on questionnaire data. The larger differences between probands and siblings at home than at school may reflect contrast effects in the parents. Furthermore, there were marked gender by status effects on the ADHD symptom ratings with girls scoring one standard deviation higher than boys in the proband sample but lower than boys in the siblings sample. The multi-centre design is another important source of heterogeneity, particularly in the interaction with the family status. To a large extent the centres differed from each other with regard to differences between proband and sibling scores. Conclusions When ADHD probands are diagnosed by use of fixed symptom counts, the severity of the disorder in the proband sample may markedly differ between boys and girls and across age, particularly in samples with a large age range. 
A multi-centre design carries the risk of considerable phenotypic differences between centres and, consequently, of additional heterogeneity of the sample even if standardized diagnostic procedures are

  14. The impact of study design and diagnostic approach in a large multi-centre ADHD study: Part 2: Dimensional measures of psychopathology and intelligence.

    LENUS (Irish Health Repository)

    Muller, Ueli C

    2011-04-07

    Abstract Background The International Multi-centre ADHD Genetics (IMAGE) project with 11 participating centres from 7 European countries and Israel has collected a large behavioural and genetic database for present and future research. Behavioural data were collected from 1068 probands with ADHD and 1446 unselected siblings. The aim was to describe and analyse questionnaire data and IQ measures from all probands and siblings. In particular, to investigate the influence of age, gender, family status (proband vs. sibling), informant, and centres on sample homogeneity in psychopathological measures. Methods Conners' Questionnaires, Strengths and Difficulties Questionnaires, and Wechsler Intelligence Scores were used to describe the phenotype of the sample. Data were analysed by use of robust statistical multi-way procedures. Results Besides main effects of age, gender, informant, and centre, there were considerable interaction effects on questionnaire data. The larger differences between probands and siblings at home than at school may reflect contrast effects in the parents. Furthermore, there were marked gender by status effects on the ADHD symptom ratings with girls scoring one standard deviation higher than boys in the proband sample but lower than boys in the siblings sample. The multi-centre design is another important source of heterogeneity, particularly in the interaction with the family status. To a large extent the centres differed from each other with regard to differences between proband and sibling scores. Conclusions When ADHD probands are diagnosed by use of fixed symptom counts, the severity of the disorder in the proband sample may markedly differ between boys and girls and across age, particularly in samples with a large age range. A multi-centre design carries the risk of considerable phenotypic differences between centres and, consequently, of additional heterogeneity of the sample even if standardized diagnostic procedures are used. These

  15. AN EXPANDED VERY LARGE ARRAY AND CARMA STUDY OF DUSTY DISKS AND TORII WITH LARGE GRAINS IN DYING STARS

    International Nuclear Information System (INIS)

    Sahai, R.; Claussen, M. J.; Schnee, S.; Morris, M. R.; Sanchez Contreras, C.

    2011-01-01

    We report the results of a pilot multiwavelength survey in the radio continuum (X, Ka, and Q bands, i.e., from 3.6 cm to 7 mm) carried out with the Expanded Very Large Array (EVLA) in order to confirm the presence of very large dust grains in dusty disks and torii around the central stars in a small sample of post-asymptotic giant branch (pAGB) objects, as inferred from millimeter (mm) and submillimeter (submm) observations. Supporting mm-wave observations were also obtained with the Combined Array for Research in Millimeter-wave Astronomy toward three of our sources. Our EVLA survey has resulted in a robust detection of our most prominent submm emission source, the pre-planetary nebula (PPN) IRAS 22036+5306, in all three bands, and the disk-prominent pAGB object, RV Tau, in one band. The observed fluxes are consistent with optically thin free-free emission, and since they are insignificant compared to their submm/mm fluxes, we conclude that the latter must come from substantial masses of cool, large (mm-sized) grains. We find that the power-law emissivity in the cm-to-submm range for the large grains in IRAS22036 varies as ν^β, with β = 1-1.3. Furthermore, the value of β in the 3-0.85 mm range for the three disk-prominent pAGB sources (β ≤ 0.4) is significantly lower than that of IRAS22036, suggesting that the grains in pAGB objects with circumbinary disks are likely larger than those in the dusty waists of pre-planetary nebulae.

  16. Functioning strategy study on control systems of large physical installations used with a digital computer

    International Nuclear Information System (INIS)

    Bel'man, L.B.; Lavrikov, S.A.; Lenskij, O.D.

    1975-01-01

    Criteria are proposed for evaluating the efficiency of control system functioning of large physical installations by means of a control computer: the object utilization factor and the computer load factor. Different strategies of control system functioning are described, and their comparative analysis is made. A choice of such important parameters as sampling time and parameter correction time is made. A single factor to evaluate the system functioning efficiency is introduced and its dependence on the sampling interval value is given. Using the attached diagrams, it is easy to find the optimum value of the sampling interval and the corresponding maximum value of the proposed single efficiency factor.

  17. Sampling free energy surfaces as slices by combining umbrella sampling and metadynamics.

    Science.gov (United States)

    Awasthi, Shalini; Kapil, Venkat; Nair, Nisanth N

    2016-06-15

    Metadynamics (MTD) is a very powerful technique to sample high-dimensional free energy landscapes, and due to its self-guiding property, the method has been successful in studying complex reactions and conformational changes. MTD sampling is based on filling the free energy basins by biasing potentials and thus for cases with flat, broad, and unbound free energy wells, the computational time to sample them becomes very large. To alleviate this problem, we combine the standard Umbrella Sampling (US) technique with MTD to sample orthogonal collective variables (CVs) in a simultaneous way. Within this scheme, we construct the equilibrium distribution of CVs from biased distributions obtained from independent MTD simulations with umbrella potentials. Reweighting is carried out by a procedure that combines US reweighting and Tiwary-Parrinello MTD reweighting within the Weighted Histogram Analysis Method (WHAM). The approach is ideal for a controlled sampling of a CV in a MTD simulation, making it computationally efficient in sampling flat, broad, and unbound free energy surfaces. This technique also allows for a distributed sampling of a high-dimensional free energy surface, further increasing the computational efficiency in sampling. We demonstrate the application of this technique in sampling high-dimensional surface for various chemical reactions using ab initio and QM/MM hybrid molecular dynamics simulations. Further, to carry out MTD bias reweighting for computing forward reaction barriers in ab initio or QM/MM simulations, we propose a computationally affordable approach that does not require recrossing trajectories. © 2016 Wiley Periodicals, Inc.
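
    The WHAM step that underlies this kind of reweighting can be sketched in one dimension: biased histograms from several umbrella windows are combined by self-consistently solving for the window free-energy shifts and the unbiased distribution. This is a generic WHAM sketch on synthetic data (a flat underlying free energy surface, so the recovered profile should be nearly flat), not the authors' combined US+MTD reweighting; all parameters are invented.

    ```python
    import numpy as np

    kT = 1.0
    centers = np.linspace(-1.0, 1.0, 5)   # umbrella window centers along the CV
    k_umb = 20.0                          # harmonic bias spring constant
    rng = np.random.default_rng(1)

    # Synthetic biased "trajectories": on a flat free energy surface, each
    # window samples a Gaussian of variance kT/k_umb around its center.
    samples = [rng.normal(c, np.sqrt(kT / k_umb), 2000) for c in centers]

    edges = np.linspace(-1.5, 1.5, 61)
    mids = 0.5 * (edges[:-1] + edges[1:])
    hist = np.array([np.histogram(s, edges)[0] for s in samples])  # n_i(x)
    N = hist.sum(axis=1)                                           # samples per window
    bias = 0.5 * k_umb * (mids[None, :] - centers[:, None]) ** 2   # w_i(x)

    # Self-consistent WHAM iteration for the unbiased distribution P(x)
    f = np.zeros(len(centers))        # window free-energy shifts
    for _ in range(1000):
        denom = (N[:, None] * np.exp((f[:, None] - bias) / kT)).sum(axis=0)
        P = hist.sum(axis=0) / denom
        P /= P.sum()
        f_new = -kT * np.log((P[None, :] * np.exp(-bias / kT)).sum(axis=1))
        f_new -= f_new[0]             # fix the arbitrary gauge
        if np.max(np.abs(f_new - f)) < 1e-12:
            break
        f = f_new

    F = -kT * np.log(np.where(P > 0, P, np.nan))   # free energy profile (in kT)
    spread = np.nanmax(F[20:40]) - np.nanmin(F[20:40])
    print(round(float(spread), 3))
    ```

    In the paper's scheme, each "window" would be an independent MTD run with an umbrella potential, and the MTD bias would be reweighted out before this histogram combination step.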

  18. Increased body mass index predicts severity of asthma symptoms but not objective asthma traits in a large sample of asthmatics

    DEFF Research Database (Denmark)

    Bildstrup, Line; Backer, Vibeke; Thomsen, Simon Francis

    2015-01-01

    AIM: To examine the relationship between body mass index (BMI) and different indicators of asthma severity in a large community-based sample of Danish adolescents and adults. METHODS: A total of 1186 subjects, 14-44 years of age, who in a screening questionnaire had reported a history of airway symptoms suggestive of asthma and/or allergy, or who were taking any medication for these conditions, were clinically examined. All participants were interviewed about respiratory symptoms; furthermore, height, weight, skin test reactivity, lung function, and airway responsiveness were measured.

  19. Audit sampling: A qualitative study on the role of statistical and non-statistical sampling approaches on audit practices in Sweden

    OpenAIRE

    Ayam, Rufus Tekoh

    2011-01-01

    PURPOSE: The two approaches to audit sampling, statistical and nonstatistical, have been examined in this study. The overall purpose of the study is to explore the current extent to which statistical and nonstatistical sampling approaches are utilized by independent auditors during auditing practices. Moreover, the study also seeks to achieve two additional purposes; the first is to find out whether auditors utilize different sampling techniques when auditing SMEs (Small and Medium-Sized Enterprises)...

  20. Optimizing liquid effluent monitoring at a large nuclear complex.

    Science.gov (United States)

    Chou, Charissa J; Barnett, D Brent; Johnson, Vernon G; Olson, Phil M

    2003-12-01

    Effluent monitoring typically requires a large number of analytes and samples during the initial or startup phase of a facility. Once a baseline is established, the analyte list and sampling frequency may be reduced. Although there is a large body of literature relevant to the initial design, few, if any, published papers exist on updating established effluent monitoring programs. This paper statistically evaluates four years of baseline data to optimize the liquid effluent monitoring efficiency of a centralized waste treatment and disposal facility at a large defense nuclear complex. Specific objectives were to: (1) assess temporal variability in analyte concentrations, (2) determine operational factors contributing to waste stream variability, (3) assess the probability of exceeding permit limits, and (4) streamline the sampling and analysis regime. Results indicated that the probability of exceeding permit limits was one in a million under normal facility operating conditions, sampling frequency could be reduced, and several analytes could be eliminated. Furthermore, indicators such as gross alpha and gross beta measurements could be used in lieu of more expensive specific isotopic analyses (radium, cesium-137, and strontium-90) for routine monitoring. Study results were used by the state regulatory agency to modify monitoring requirements for a new discharge permit, resulting in an annual cost savings of US$223,000. This case study demonstrates that statistical evaluation of effluent contaminant variability coupled with process knowledge can help plant managers and regulators streamline analyte lists and sampling frequencies based on detection history and environmental risk.
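
    A minimal sketch of the exceedance-probability idea, assuming (as is common for effluent concentrations) that baseline results are approximately lognormal: fit the distribution to baseline data and evaluate the tail probability above the permit limit. The data, limit, and units below are invented, not the facility's.

    ```python
    import math
    import random
    import statistics

    random.seed(42)
    # Hypothetical baseline: four years of monthly effluent results
    # (arbitrary concentration units), assumed lognormally distributed.
    baseline = [random.lognormvariate(0.0, 0.5) for _ in range(48)]
    limit = 50.0   # hypothetical permit limit in the same units

    # Fit a lognormal by taking moments of the log-transformed data
    logs = [math.log(x) for x in baseline]
    mu_hat = statistics.fmean(logs)
    sigma_hat = statistics.stdev(logs)

    # P(X > limit) under the fitted lognormal, via the standard normal tail
    z = (math.log(limit) - mu_hat) / sigma_hat
    p_exceed = 0.5 * math.erfc(z / math.sqrt(2))
    print(f"estimated exceedance probability: {p_exceed:.1e}")
    ```

    A vanishingly small fitted tail probability is the kind of evidence that supports reducing sampling frequency for an analyte with a long, stable detection history.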

  1. The Kinematics of the Permitted C ii λ 6578 Line in a Large Sample of Planetary Nebulae

    Energy Technology Data Exchange (ETDEWEB)

    Richer, Michael G.; Suárez, Genaro; López, José Alberto; García Díaz, María Teresa, E-mail: richer@astrosen.unam.mx, E-mail: gsuarez@astro.unam.mx, E-mail: jal@astrosen.unam.mx, E-mail: tere@astro.unam.mx [Instituto de Astronomía, Universidad Nacional Autónoma de México, Ensenada, Baja California (Mexico)

    2017-03-01

    We present spectroscopic observations of the C ii λ 6578 permitted line for 83 lines of sight in 76 planetary nebulae at high spectral resolution, most of them obtained with the Manchester Echelle Spectrograph on the 2.1 m telescope at the Observatorio Astronómico Nacional on the Sierra San Pedro Mártir. We study the kinematics of the C ii λ 6578 permitted line with respect to other permitted and collisionally excited lines. Statistically, we find that the kinematics of the C ii λ 6578 line are not those expected if this line arises from the recombination of C²⁺ ions or the fluorescence of C⁺ ions in ionization equilibrium in a chemically homogeneous nebular plasma, but instead its kinematics are those appropriate for a volume more internal than expected. The planetary nebulae in this sample have well-defined morphology and are restricted to a limited range in Hα line widths (no large values) compared to their counterparts in the Milky Way bulge; both these features could be interpreted as the result of young nebular shells, an inference that is also supported by nebular modeling. Concerning the long-standing discrepancy between chemical abundances inferred from permitted and collisionally excited emission lines in photoionized nebulae, our results imply that multiple plasma components occur commonly in planetary nebulae.

  2. RNA-seq: technical variability and sampling

    Science.gov (United States)

    2011-01-01

    Background RNA-seq is revolutionizing the way we study transcriptomes. mRNA can be surveyed without prior knowledge of gene transcripts. Alternative splicing of transcript isoforms and the identification of previously unknown exons are being reported. Initial reports of differences in exon usage, and splicing between samples as well as quantitative differences among samples are beginning to surface. Biological variation has been reported to be larger than technical variation. In addition, technical variation has been reported to be in line with expectations due to random sampling. However, strategies for dealing with technical variation will differ depending on the magnitude. The size of the technical variance and the role of sampling are examined in this manuscript. Results In this study three independent Solexa/Illumina experiments containing technical replicates are analyzed. When coverage is low, large disagreements between technical replicates are apparent. Exon detection between technical replicates is highly variable when the coverage is less than 5 reads per nucleotide, and estimates of gene expression are more likely to disagree when coverage is low, although large disagreements in the estimates of expression are observed at all levels of coverage. Conclusions Technical variability is too high to ignore. Technical variability results in inconsistent detection of exons at low levels of coverage. Further, the estimate of the relative abundance of a transcript can substantially disagree, even when coverage levels are high. This may be due to the low sampling fraction and if so, it will persist as an issue needing to be addressed in experimental design even as the next wave of technology produces larger numbers of reads. We provide practical recommendations for dealing with the technical variability, without dramatic cost increases. PMID:21645359
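
    The random-sampling effect on detection can be reproduced with a toy simulation: treat sequencing as drawing reads from genes in proportion to expression, generate two technical replicates at each depth, and count genes detected in one replicate but not the other. The transcriptome size, expression range, and read depths below are invented, not values from the study.

    ```python
    import random

    random.seed(7)
    # Hypothetical transcriptome: 1000 genes with expression spanning
    # three orders of magnitude (log-uniform).
    n_genes = 1000
    expr = [10 ** random.uniform(0, 3) for _ in range(n_genes)]
    total = sum(expr)
    weights = [e / total for e in expr]

    def technical_replicate(n_reads):
        """Sequencing modelled as random sampling of reads across genes."""
        counts = [0] * n_genes
        for g in random.choices(range(n_genes), weights=weights, k=n_reads):
            counts[g] += 1
        return counts

    def detection_disagreement(n_reads):
        """Fraction of genes detected in one replicate but not the other."""
        a = technical_replicate(n_reads)
        b = technical_replicate(n_reads)
        return sum((x > 0) != (y > 0) for x, y in zip(a, b)) / n_genes

    shallow = detection_disagreement(5_000)
    deep = detection_disagreement(200_000)
    print(shallow, deep)
    ```

    Even this idealized model shows replicate disagreement concentrated among genes whose expected read count is near one, which is the paper's point about low-coverage exon detection.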

  3. Assessing the precision of a time-sampling-based study among GPs: balancing sample size and measurement frequency.

    Science.gov (United States)

    van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald

    2017-12-04

    Our research is based on a technique for time sampling, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In this study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The required sample size for this study is important for health workforce planners to know if they want to apply this method to target groups who are hard to reach or if fewer resources are available. In this time-sampling method, however, standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sample fluctuation and not for the fluctuation of the measurements taken from each participant. We investigated the impact of the number of participants and the frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for various numbers of GPs included in the dataset and for various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 h to 3 h as the number of GPs increased from 1 to 50. Beyond that point, precision continued to increase, but each additional GP yielded a smaller gain. Likewise, the analyses showed how the number of participants required decreased if more measurements per participant were taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the
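
    The trade-off between the number of GPs and the number of measurements per GP follows directly from a two-component variance model: the CI half-width combines between-GP (sample) fluctuation and within-GP (measurement) fluctuation. The variance components below are illustrative assumptions chosen so the outputs land in the same ballpark as the reported figures; they are not the study's fitted values.

    ```python
    import math

    # Illustrative variance components (assumed, not taken from the study):
    s_between = 2.0    # SD of true weekly hours across GPs
    s_within = 80.0    # SD contributed by a single time-sampled measurement

    def ci_half_width(n_gps, n_meas, z=1.645):
        """One-tailed 95% CI half-width for mean weekly hours, combining
        sample fluctuation (between GPs) and measurement fluctuation."""
        var = s_between ** 2 / n_gps + s_within ** 2 / (n_gps * n_meas)
        return z * math.sqrt(var)

    # One SMS per 3-h slot gives 56 measurements per GP-week; hourly gives 168.
    for n in (1, 50, 100, 300):
        print(n, round(ci_half_width(n, 56), 1), round(ci_half_width(n, 168), 1))
    ```

    Under these assumed components, 300 GPs measured per 3-h slot and 100 GPs measured hourly give nearly the same precision, mirroring the trade-off reported in the abstract.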

  4. Comparison of Health Risks and Changes in Risks over Time Among a Sample of Lesbian, Gay, Bisexual, and Heterosexual Employees at a Large Firm.

    Science.gov (United States)

    Mitchell, Rebecca J; Ozminkowski, Ronald J

    2017-04-01

    The objective of this study was to estimate the prevalence of health risk factors by sexual orientation over a 4-year period within a sample of employees from a large firm. Propensity score-weighted generalized linear regression models were used to estimate the proportion of employees at high risk for health problems in each year and over time, controlling for many factors. Analyses were conducted with 6 study samples based on sex and sexual orientation. Rates of smoking, stress, and certain other health risk factors were higher for lesbian, gay, and bisexual (LGB) employees compared with rates of these risks among straight employees. Lesbian, gay, and straight employees successfully reduced risk levels in many areas. Significant reductions were realized for the proportion at risk for high stress and low life satisfaction among gay and lesbian employees, and for the proportion of smokers among gay males. Comparing changes over time for sexual orientation groups versus other employee groups showed that improvements and reductions in risk levels for most health risk factors examined occurred at similar rates among individuals employed by this firm, regardless of sexual orientation. These results can help improve understanding of LGB health and provide information on where to focus workplace health promotion efforts to meet the health needs of LGB employees.
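
    The propensity-score-weighting idea can be sketched on synthetic data: estimate the propensity of group membership within covariate strata, then weight members by the inverse propensity so the weighted group mirrors the covariate mix of the whole workforce before comparing risk prevalence. All variables, prevalences, and effect sizes below are invented; the study used regression-based propensity scores over many more covariates.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    # Hypothetical employee records: a binary covariate (say, older age
    # group), a group flag g (1 = LGB), and a binary risk indicator.
    n = 10_000
    age = rng.integers(0, 2, n)
    g = rng.binomial(1, 0.05 + 0.05 * age)            # membership varies with age
    risk = rng.binomial(1, 0.15 + 0.10 * g + 0.05 * age)

    # Propensity of group membership within each covariate stratum;
    # weighting members by 1/propensity standardizes the group to the
    # covariate distribution of the whole workforce.
    w = np.zeros(n)
    for a in (0, 1):
        propensity = g[age == a].mean()
        w[(age == a) & (g == 1)] = 1.0 / propensity

    crude = risk[g == 1].mean()
    weighted = np.average(risk[g == 1], weights=w[g == 1])
    print(round(float(crude), 3), round(float(weighted), 3))
    ```

    The weighted prevalence is the one comparable across groups, since covariate imbalance between groups has been adjusted away.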

  5. Femtosecond time-resolved studies of coherent vibrational Raman scattering in large gas-phase molecules

    International Nuclear Information System (INIS)

    Hayden, C.C.; Chandler, D.W.

    1995-01-01

    Results are presented from femtosecond time-resolved coherent Raman experiments in which we excite and monitor vibrational coherence in gas-phase samples of benzene and 1,3,5-hexatriene. Different physical mechanisms for coherence decay are seen in these two molecules. In benzene, where the Raman polarizability is largely isotropic, the Q branch of the vibrational Raman spectrum is the primary feature excited. Molecules in different rotational states have different Q-branch transition frequencies due to vibration--rotation interaction. Thus, the macroscopic polarization that is observed in these experiments decays because it has many frequency components from molecules in different rotational states, and these frequency components go out of phase with each other. In 1,3,5-hexatriene, the Raman excitation produces molecules in a coherent superposition of rotational states, through (O, P, R, and S branch) transitions that are strong due to the large anisotropy of the Raman polarizability. The coherent superposition of rotational states corresponds to initially spatially oriented, vibrationally excited, molecules that are freely rotating. The rotation of molecules away from the initial orientation is primarily responsible for the coherence decay in this case. These experiments produce large (∼10% efficiency) Raman shifted signals with modest excitation pulse energies (10 μJ) demonstrating the feasibility of this approach for a variety of gas phase studies. copyright 1995 American Institute of Physics

  6. Detecting differential protein expression in large-scale population proteomics

    Energy Technology Data Exchange (ETDEWEB)

    Ryu, Soyoung; Qian, Weijun; Camp, David G.; Smith, Richard D.; Tompkins, Ronald G.; Davis, Ronald W.; Xiao, Wenzhong

    2014-06-17

    Mass spectrometry-based high-throughput quantitative proteomics shows great potential in clinical biomarker studies, identifying and quantifying thousands of proteins in biological samples. However, methods are needed to appropriately handle issues unique to mass spectrometry data in order to detect as many biomarker proteins as possible. One issue is that different mass spectrometry experiments generate quite different total numbers of quantified peptides, which can result in more missing peptide abundances in an experiment with a smaller total number of quantified peptides. Another issue is that the quantification of peptides is sometimes absent, especially for less abundant peptides, and such missing values carry information about the peptide abundance. Here, we propose a Significance Analysis for Large-scale Proteomics Studies (SALPS) that handles missing peptide intensity values caused by the two mechanisms mentioned above. Our model has a robust performance in both simulated data and proteomics data from a large clinical study. Because variation in patient sample quality and drift in instrument performance are unavoidable for clinical studies performed over the course of several years, we believe that our approach will be useful for analyzing large-scale clinical proteomics data.

  7. Generalizability of causal inference in observational studies under retrospective convenience sampling.

    Science.gov (United States)

    Hu, Zonghui; Qin, Jing

    2018-05-20

    Many observational studies adopt what we call retrospective convenience sampling (RCS). With the sample size in each arm prespecified, RCS randomly selects subjects from the treatment-inclined subpopulation into the treatment arm and those from the control-inclined into the control arm. Samples in each arm are representative of the respective subpopulation, but the proportion of the 2 subpopulations is usually not preserved in the sample data. We show in this work that, under RCS, existing causal effect estimators actually estimate the treatment effect over the sample population instead of the underlying study population. We investigate how to correct existing methods for consistent estimation of the treatment effect over the underlying population. Although RCS is adopted in medical studies for ethical and cost-effective purposes, it also has a big advantage for statistical inference: When the tendency to receive treatment is low in a study population, treatment effect estimators under RCS, with proper correction, are more efficient than their parallels under random sampling. These properties are investigated both theoretically and through numerical demonstration. Published 2018. This article is a U.S. Government work and is in the public domain in the USA.

  8. Sample Return Robot

    Data.gov (United States)

    National Aeronautics and Space Administration — This Challenge requires demonstration of an autonomous robotic system to locate and collect a set of specific sample types from a large planetary analog area and...

  9. The Depression Anxiety Stress Scales (DASS): normative data and latent structure in a large non-clinical sample.

    Science.gov (United States)

    Crawford, John R; Henry, Julie D

    2003-06-01

    To provide UK normative data for the Depression Anxiety and Stress Scale (DASS) and test its convergent, discriminant and construct validity. Cross-sectional, correlational and confirmatory factor analysis (CFA). The DASS was administered to a non-clinical sample, broadly representative of the general adult UK population (N = 1,771) in terms of demographic variables. Competing models of the latent structure of the DASS were derived from theoretical and empirical sources and evaluated using confirmatory factor analysis. Correlational analysis was used to determine the influence of demographic variables on DASS scores. The convergent and discriminant validity of the measure was examined through correlating the measure with two other measures of depression and anxiety (the HADS and the sAD), and a measure of positive and negative affectivity (the PANAS). The best fitting model (CFI = .93) of the latent structure of the DASS consisted of three correlated factors corresponding to the depression, anxiety and stress scales, with correlated error permitted between items comprising the DASS subscales. Demographic variables had only very modest influences on DASS scores. The reliability of the DASS was excellent, and the measure possessed adequate convergent and discriminant validity. Conclusions: The DASS is a reliable and valid measure of the constructs it was intended to assess. The utility of this measure for UK clinicians is enhanced by the provision of large sample normative data.

  10. [A comparison of convenience sampling and purposive sampling].

    Science.gov (United States)

    Suen, Lee-Jen Wu; Huang, Hui-Man; Lee, Hao-Hsien

    2014-06-01

    Convenience sampling and purposive sampling are two different sampling methods. This article first explains sampling terms such as target population, accessible population, simple random sampling, intended sample, actual sample, and statistical power analysis. These terms are then used to explain the difference between "convenience sampling" and "purposive sampling." Convenience sampling is a non-probabilistic sampling technique applicable to qualitative or quantitative studies, although it is most frequently used in quantitative studies. In convenience samples, subjects more readily accessible to the researcher are more likely to be included. Thus, in quantitative studies, the opportunity to participate is not equal for all qualified individuals in the target population, and study results are not necessarily generalizable to this population. As in all quantitative studies, increasing the sample size increases the statistical power of the convenience sample. In contrast, purposive sampling is typically used in qualitative studies. Researchers who use this technique carefully select subjects based on study purpose, with the expectation that each participant will provide unique and rich information of value to the study. As a result, members of the accessible population are not interchangeable, and sample size is determined by data saturation, not by statistical power analysis.

  11. Experimental study of glass sampling devices

    International Nuclear Information System (INIS)

    Jouan, A.; Moncouyoux, J.P.; Meyere, A.

    1992-01-01

    Two high-level liquid waste containment glass sampling systems have been designed and built. The first device fits entirely inside a standard glass storage canister, and may thus be used in facilities not initially designed for this function. It has been tested successfully in the nonradioactive prototype unit at Marcoule. The work primarily covered the design and construction of an articulated arm supporting the sampling vessel, and the mechanisms necessary for filling the vessel and recovering the sample. System actuation and operation are fully automatic, and the resulting sample is representative of the glass melt. Implementation of the device is delicate, however, and its reliability is estimated at about 75%. A second device was designed specifically for new vitrification facilities. It is installed directly on the glass melting furnace, and meets process operating and quality control requirements. Tests conducted at the Marcoule prototype vitrification facility demonstrated the feasibility of the system. Special attention was given to the sampling vessel transfer mechanisms, with two filling and controlled sample cooling options

  12. A prototype splitter apparatus for dividing large catches of small fish

    Science.gov (United States)

    Stapanian, Martin A.; Edwards, William H.

    2012-01-01

    Due to financial and time constraints, it is often necessary in fisheries studies to divide large samples of fish and estimate total catch from the subsample. The subsampling procedure may involve potential human biases or may be difficult to perform in rough conditions. We present a prototype gravity-fed splitter apparatus for dividing large samples of small fish (30–100 mm TL). The apparatus features a tapered hopper with a sliding and removable shutter. The apparatus provides a comparatively stable platform for objectively obtaining subsamples, and it can be modified to accommodate different sizes of fish and different sample volumes. The apparatus is easy to build, inexpensive, and convenient to use in the field. To illustrate the performance of the apparatus, we divided three samples (total N = 2,000 fish) composed of four fish species. Our results indicated no significant bias in estimating either the number or proportion of each species from the subsample. Use of this apparatus or a similar apparatus can help to standardize subsampling procedures in large surveys of fish. The apparatus could be used for other applications that require dividing a large amount of material into one or more smaller subsamples.
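    The scale-up from subsample to total catch described above is, at its core, a simple proportional estimate. As a minimal sketch (the function name and the assumption of an unbiased, species-blind split are ours, not the authors'):

```python
def estimate_totals(subsample_counts, split_fraction):
    """Estimate the total catch per species from a subsample.

    Assumes the splitter divides the catch without species bias, so each
    species total is estimated as (subsample count) / (split fraction).
    """
    if not 0 < split_fraction <= 1:
        raise ValueError("split_fraction must be in (0, 1]")
    return {species: count / split_fraction
            for species, count in subsample_counts.items()}
```

    For example, a catch split down to one quarter of its volume with 50 individuals of a species in the subsample yields an estimated 200 in the whole catch.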

  13. Generalized sampling in Julia

    DEFF Research Database (Denmark)

    Jacobsen, Christian Robert Dahl; Nielsen, Morten; Rasmussen, Morten Grud

    2017-01-01

    Generalized sampling is a numerically stable framework for obtaining reconstructions of signals in different bases and frames from their samples. For example, one can use wavelet bases for reconstruction given frequency measurements. In this paper, we will introduce a carefully documented toolbox for performing generalized sampling in Julia. Julia is a new language for technical computing with focus on performance, which is ideally suited to handle the large size problems often encountered in generalized sampling. The toolbox provides specialized solutions for the setup of Fourier bases and wavelets. The performance of the toolbox is compared to existing implementations of generalized sampling in MATLAB.

  14. Association between the prevalence of depression and age in a large representative German sample of people aged 53 to 80 years.

    Science.gov (United States)

    Wild, Beate; Herzog, Wolfgang; Schellberg, Dieter; Lechner, Sabine; Niehoff, Doro; Brenner, Hermann; Rothenbacher, Dietrich; Stegmaier, Christa; Raum, Elke

    2012-04-01

    The aim of the study was to determine the association between the prevalence of clinically significant depression and age in a large representative sample of elderly German people. In the second follow-up (2005-2007) of the ESTHER cohort study, the 15-item geriatric depression scale (GDS-15) as well as a sociodemographic and clinical questionnaire were administered to a representative sample of 8270 people of ages 53 to 80 years. The prevalence of clinically significant depression was estimated using a GDS cut-off score of 5/6. Prevalence rates were estimated for the different age categories. Association between depression and age was analyzed using logistic regression, adjusted for gender, co-morbid medical disorders, education, marital status, physical activity, smoking, self-perceived cognitive impairment, and anti-depressive medication. Of the participants, 7878 (95.3%) completed more than twelve GDS items and were included in the study. The prevalence of clinically significant depression was 16.0% (95%CI = [15.2; 16.6]). Depression prevalence as a function of age group showed a U-shaped pattern (53-59: 21.0%, CI = [18.9; 23.3]; 60-64: 17.7%, CI = [15.7; 19.7]; 65-69: 12.6%, CI = [11.2; 14.0]; 70-74: 14.4%, CI = [12.6; 16.0]; 75-80: 17.1%, CI = [14.9; 19.4]). Adjusted odds ratios showed that the chances of being depressive decrease with the age category but remain relatively stable for people aged 65 and over. The prevalence of depression in the elderly seems to be associated with the age category. Adjusted odds ratios showed that people aged 60 and older had lower chances of being depressive than people aged 53 to 59 years. Copyright © 2011 John Wiley & Sons, Ltd.
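    Prevalence confidence intervals like those quoted above are commonly computed with a normal approximation to the binomial; a minimal sketch (the abstract does not state the study's exact CI method, so this is illustrative only):

```python
import math

def prevalence_ci(p_hat, n, z=1.96):
    """Normal-approximation 95% confidence interval for a prevalence.

    p_hat: observed proportion; n: number of respondents;
    z: standard-normal quantile (1.96 for a two-sided 95% CI).
    """
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se
```

    With p = 0.16 and n = 7878, this gives roughly [15.2%, 16.8%], in the same range as the interval reported above.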

  15. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    Science.gov (United States)

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. We aimed to guide the design of multiplier-method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
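    The estimator and its random error can be sketched concretely. This assumes a delta-method variance for N = M/P and a design effect inflating the variance of the survey proportion; it is our illustrative reading of the approach, not the authors' exact formulas:

```python
import math

def multiplier_estimate(M, p_hat, n, design_effect=1.0):
    """Multiplier-method population size estimate N = M / p_hat with a
    delta-method 95% confidence interval.

    M: unique objects distributed (assumed known exactly);
    p_hat: proportion in the RDS survey reporting receipt;
    n: survey sample size; design_effect inflates Var(p_hat).
    """
    N_hat = M / p_hat
    var_p = design_effect * p_hat * (1 - p_hat) / n
    # Delta method: Var(M / p) ~= (M / p^2)^2 * Var(p)
    se_N = (M / p_hat ** 2) * math.sqrt(var_p)
    return N_hat, (N_hat - 1.96 * se_N, N_hat + 1.96 * se_N)
```

    Note how the standard error grows as p_hat shrinks, matching the warning above that uncertainty is high when P is small.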

  16. Studies on reconstruction of large skin defects following mammary tumor excision in dogs

    Directory of Open Access Journals (Sweden)

    Sabarish Babu Malli Sadhasivan

    2017-12-01

    Aim: The main objective of the study was to describe the use of skin fold advancement flaps (SFAFs) and other reconstructive techniques for closure of large skin defects following mammary tumor excision in dogs. Materials and Methods: Twelve dogs underwent reconstruction of large ventral skin defects following mammary tumor excision with wide margins. Skin fold flaps (flank fold flap and elbow fold flap) were elevated from the flank and elbow regions, respectively, and transposed and sutured onto the large ventral skin defect following mastectomy in all the dogs. In addition to the skin fold flaps, other reconstructive techniques such as undermining, walking sutures, and tension-relieving suture techniques were used during surgery to close large skin defects without skin tension or compromised limb mobility. Skin flap viability was assessed subjectively by gross observation of the flap (color, temperature, capillary perfusion, and cosmetic appearance), and scoring (1-4) was done. Tissue samples were collected from the surgical site on days 3, 6, and 12 post-operatively for histopathological evaluation of the healing status of the skin flap. Results: All the surgical wounds healed primarily, without any major complications, and the skin flaps remained healthy throughout the healing process post-operatively. Distal flap necrosis was noticed in one case, and necrosis of the skin flap between two suture lines was noticed in another case, in which the necrotized distal portion healed by secondary intention after 7 days. The mean survival of the subdermal plexus flaps in the above cases was 98%, a subjective evaluation based on the surface area of the skin defect measured with ImageJ software and the flap dimensions. The average healing time of the skin flaps was 14.91±0.86 days. Conclusion: The SFAFs, along with other reconstructive techniques, help in the reconstruction of large ventral skin defects following mastectomy in dogs without much

  17. Sample Size and Saturation in PhD Studies Using Qualitative Interviews

    Directory of Open Access Journals (Sweden)

    Mark Mason

    2010-08-01

    A number of issues can affect sample size in qualitative research; however, the guiding principle should be the concept of saturation. This has been explored in detail by a number of authors but is still hotly debated, and some say little understood. A sample of PhD studies using qualitative approaches, and qualitative interviews as the method of data collection, was taken from theses.com and content analysed for their sample sizes. Five hundred and sixty studies were identified that fitted the inclusion criteria. Results showed that the mean sample size was 31; however, the distribution was non-random, with a statistically significant proportion of studies presenting sample sizes that were multiples of ten. These results are discussed in relation to saturation. They suggest a pre-meditated approach that is not wholly congruent with the principles of qualitative research. URN: urn:nbn:de:0114-fqs100387

  18. 'Intelligent' approach to radioimmunoassay sample counting employing a microprocessor controlled sample counter

    International Nuclear Information System (INIS)

    Ekins, R.P.; Sufi, S.; Malan, P.G.

    1977-01-01

    The enormous impact on medical science in the last two decades of microanalytical techniques employing radioisotopic labels has, in turn, generated a large demand for automatic radioisotopic sample counters. Such instruments frequently comprise the most important item of capital equipment required in the use of radioimmunoassay and related techniques, and often form a principal bottleneck in the flow of samples through a busy laboratory. It is therefore particularly imperative that such instruments should be used 'intelligently' and in an optimal fashion, to avoid both the very large capital expenditure involved in the unnecessary proliferation of instruments and the time delays arising from their sub-optimal use. The majority of the current generation of radioactive sample counters nevertheless rely on primitive control mechanisms, based on a simplistic statistical theory of radioactive sample counting, which preclude their efficient and rational use. The fundamental principle upon which this approach is based is that it is useless to continue counting a radioactive sample for a time longer than that required to yield a significant increase in the precision of the measurement. Thus, since substantial experimental errors occur during sample preparation, these errors should be assessed and related to the counting errors for that sample. It is the objective of this presentation to demonstrate that the combination of a realistic statistical assessment of radioactive sample measurement, together with the more sophisticated control mechanisms that modern microprocessor technology makes possible, may often enable savings in counter usage of the order of 5-10 fold to be made. (orig.)
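    The stopping rule described here, counting only until the Poisson counting error is negligible next to the sample-preparation error, can be sketched as follows. The 20% threshold and the function name are our illustrative choices, not part of the original presentation:

```python
def required_counting_time(count_rate, prep_cv, counting_cv_fraction=0.2):
    """Counting time (s) after which further counting buys little precision.

    The Poisson CV of a total count N is 1/sqrt(N); we require it to be at
    most `counting_cv_fraction` of the sample-preparation CV, so that the
    combined error is dominated by preparation, not counting.
    """
    target_cv = counting_cv_fraction * prep_cv
    needed_counts = 1.0 / target_cv ** 2
    return needed_counts / count_rate
```

    At 100 counts/s with a 5% preparation CV, this says counting beyond about 100 s (10,000 counts) adds essentially nothing to the overall precision, which is the kind of per-sample decision a microprocessor-controlled counter can make automatically.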

  19. Consistent associations between measures of psychological stress and CMV antibody levels in a large occupational sample

    NARCIS (Netherlands)

    Rector, J.L.; Dowd, J.B.; Loerbroks, A.; Burns, V.E.; Moss, P.A.; Jarczok, M.N.; Stalder, T.; Hoffman, K.; Fischer, J.E.; Bosch, J.A.

    2014-01-01

    Cytomegalovirus (CMV) is a herpes virus that has been implicated in biological aging and impaired health. Evidence, largely accrued from small-scale studies involving select populations, suggests that stress may promote non-clinical reactivation of this virus. However, absent is evidence from larger

  20. Examining gray matter structures associated with individual differences in global life satisfaction in a large sample of young adults

    Science.gov (United States)

    Kong, Feng; Ding, Ke; Yang, Zetian; Dang, Xiaobin; Hu, Siyuan; Song, Yiying

    2015-01-01

    Although much attention has been directed towards life satisfaction, which refers to an individual’s general cognitive evaluations of his or her life as a whole, little is known about the neural basis underlying global life satisfaction. In this study, we used voxel-based morphometry to investigate the structural neural correlates of life satisfaction in a large sample of young healthy adults (n = 299). We showed that individuals’ life satisfaction was positively correlated with the regional gray matter volume (rGMV) in the right parahippocampal gyrus (PHG), and negatively correlated with the rGMV in the left precuneus and left ventromedial prefrontal cortex. This pattern of results remained significant even after controlling for the effect of general positive and negative affect, suggesting unique structural correlates of life satisfaction. Furthermore, we found that self-esteem partially mediated the association between the PHG volume and life satisfaction as well as that between the precuneus volume and global life satisfaction. Taken together, we provide the first evidence for the structural neural basis of life satisfaction, and highlight that self-esteem might play a crucial role in cultivating an individual’s life satisfaction. PMID:25406366

  1. A large-scale study of the ultrawideband microwave dielectric properties of normal, benign and malignant breast tissues obtained from cancer surgeries

    Energy Technology Data Exchange (ETDEWEB)

    Lazebnik, Mariya [Department of Electrical and Computer Engineering, University of Wisconsin, Madison, WI (United States); Popovic, Dijana [Department of Electrical and Computer Engineering, University of Calgary, Calgary, AB (Canada); McCartney, Leah [Department of Electrical and Computer Engineering, University of Calgary, Calgary, AB (Canada); Watkins, Cynthia B [Department of Electrical and Computer Engineering, University of Wisconsin, Madison, WI (United States); Lindstrom, Mary J [Department of Biostatistics and Medical Informatics, University of Wisconsin, Madison, WI (United States); Harter, Josephine [Department of Pathology, University of Wisconsin, Madison, WI (United States); Sewall, Sarah [Department of Pathology, University of Wisconsin, Madison, WI (United States); Ogilvie, Travis [Department of Pathology, University of Calgary, Calgary, AB (Canada); Magliocco, Anthony [Department of Pathology, University of Calgary, Calgary, AB (Canada); Breslin, Tara M [Department of Surgery, University of Wisconsin, Madison, WI (United States); Temple, Walley [Department of Surgery and Oncology, University of Calgary, Calgary, AB (Canada); Mew, Daphne [Department of Surgery and Oncology, University of Calgary, Calgary, AB (Canada); Booske, John H [Department of Electrical and Computer Engineering, University of Wisconsin, Madison, WI (United States); Okoniewski, Michal [Department of Electrical and Computer Engineering, University of Calgary, Calgary, AB (Canada); Hagness, Susan C [Department of Electrical and Computer Engineering, University of Wisconsin, Madison, WI (United States)

    2007-10-21

    The development of microwave breast cancer detection and treatment techniques has been driven by reports of substantial contrast in the dielectric properties of malignant and normal breast tissues. However, definitive knowledge of the dielectric properties of normal and diseased breast tissues at microwave frequencies has been limited by gaps and discrepancies across previously published studies. To address these issues, we conducted a large-scale study to experimentally determine the ultrawideband microwave dielectric properties of a variety of normal, malignant and benign breast tissues, measured from 0.5 to 20 GHz using a precision open-ended coaxial probe. Previously, we reported the dielectric properties of normal breast tissue samples obtained from reduction surgeries. Here, we report the dielectric properties of normal (adipose, glandular and fibroconnective), malignant (invasive and non-invasive ductal and lobular carcinomas) and benign (fibroadenomas and cysts) breast tissue samples obtained from cancer surgeries. We fit a one-pole Cole-Cole model to the complex permittivity data set of each characterized sample. Our analyses show that the contrast in the microwave-frequency dielectric properties between malignant and normal adipose-dominated tissues in the breast is considerable, as large as 10:1, while the contrast in the microwave-frequency dielectric properties between malignant and normal glandular/fibroconnective tissues in the breast is no more than about 10%.
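    The one-pole Cole-Cole model used for the fits has a standard closed form, ε(ω) = ε∞ + Δε / (1 + (jωτ)^(1−α)) + σ_s / (jωε₀). A minimal evaluation sketch follows; the parameter values in the example are illustrative, not the study's fitted values:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def cole_cole(f_hz, eps_inf, delta_eps, tau, alpha, sigma_s):
    """Complex relative permittivity of a one-pole Cole-Cole model.

    f_hz: frequency in Hz; eps_inf: high-frequency permittivity;
    delta_eps: dispersion magnitude; tau: relaxation time (s);
    alpha: broadening parameter (alpha = 0 recovers Debye relaxation);
    sigma_s: static ionic conductivity (S/m).
    """
    w = 2 * math.pi * f_hz
    jw = 1j * w
    return (eps_inf
            + delta_eps / (1 + (jw * tau) ** (1 - alpha))
            + sigma_s / (jw * EPS0))
```

    With alpha = 0, zero conductivity, and ωτ = 1, the dispersive term contributes Δε/(1 + j), i.e. the classic half-real, half-imaginary Debye point.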

  2. A large-scale study of the ultrawideband microwave dielectric properties of normal, benign and malignant breast tissues obtained from cancer surgeries

    Science.gov (United States)

    Lazebnik, Mariya; Popovic, Dijana; McCartney, Leah; Watkins, Cynthia B.; Lindstrom, Mary J.; Harter, Josephine; Sewall, Sarah; Ogilvie, Travis; Magliocco, Anthony; Breslin, Tara M.; Temple, Walley; Mew, Daphne; Booske, John H.; Okoniewski, Michal; Hagness, Susan C.

    2007-10-01

    The development of microwave breast cancer detection and treatment techniques has been driven by reports of substantial contrast in the dielectric properties of malignant and normal breast tissues. However, definitive knowledge of the dielectric properties of normal and diseased breast tissues at microwave frequencies has been limited by gaps and discrepancies across previously published studies. To address these issues, we conducted a large-scale study to experimentally determine the ultrawideband microwave dielectric properties of a variety of normal, malignant and benign breast tissues, measured from 0.5 to 20 GHz using a precision open-ended coaxial probe. Previously, we reported the dielectric properties of normal breast tissue samples obtained from reduction surgeries. Here, we report the dielectric properties of normal (adipose, glandular and fibroconnective), malignant (invasive and non-invasive ductal and lobular carcinomas) and benign (fibroadenomas and cysts) breast tissue samples obtained from cancer surgeries. We fit a one-pole Cole-Cole model to the complex permittivity data set of each characterized sample. Our analyses show that the contrast in the microwave-frequency dielectric properties between malignant and normal adipose-dominated tissues in the breast is considerable, as large as 10:1, while the contrast in the microwave-frequency dielectric properties between malignant and normal glandular/fibroconnective tissues in the breast is no more than about 10%.

  3. A large-scale study of the ultrawideband microwave dielectric properties of normal, benign and malignant breast tissues obtained from cancer surgeries

    International Nuclear Information System (INIS)

    Lazebnik, Mariya; Popovic, Dijana; McCartney, Leah; Watkins, Cynthia B; Lindstrom, Mary J; Harter, Josephine; Sewall, Sarah; Ogilvie, Travis; Magliocco, Anthony; Breslin, Tara M; Temple, Walley; Mew, Daphne; Booske, John H; Okoniewski, Michal; Hagness, Susan C

    2007-01-01

    The development of microwave breast cancer detection and treatment techniques has been driven by reports of substantial contrast in the dielectric properties of malignant and normal breast tissues. However, definitive knowledge of the dielectric properties of normal and diseased breast tissues at microwave frequencies has been limited by gaps and discrepancies across previously published studies. To address these issues, we conducted a large-scale study to experimentally determine the ultrawideband microwave dielectric properties of a variety of normal, malignant and benign breast tissues, measured from 0.5 to 20 GHz using a precision open-ended coaxial probe. Previously, we reported the dielectric properties of normal breast tissue samples obtained from reduction surgeries. Here, we report the dielectric properties of normal (adipose, glandular and fibroconnective), malignant (invasive and non-invasive ductal and lobular carcinomas) and benign (fibroadenomas and cysts) breast tissue samples obtained from cancer surgeries. We fit a one-pole Cole-Cole model to the complex permittivity data set of each characterized sample. Our analyses show that the contrast in the microwave-frequency dielectric properties between malignant and normal adipose-dominated tissues in the breast is considerable, as large as 10:1, while the contrast in the microwave-frequency dielectric properties between malignant and normal glandular/fibroconnective tissues in the breast is no more than about 10%

  4. Mobile Variable Depth Sampling System Design Study

    International Nuclear Information System (INIS)

    BOGER, R.M.

    2000-01-01

    A design study is presented for a mobile, variable depth sampling system (MVDSS) that will support the treatment and immobilization of Hanford LAW and HLW. The sampler can be deployed in a 4-inch tank riser and has a design that is based on requirements identified in the Level 2 Specification (latest revision). The waste feed sequence for the MVDSS is based on Phase 1, Case 3S6 waste feed sequence. Technical information is also presented that supports the design study

  5. Mobile Variable Depth Sampling System Design Study

    Energy Technology Data Exchange (ETDEWEB)

    BOGER, R.M.

    2000-08-25

    A design study is presented for a mobile, variable depth sampling system (MVDSS) that will support the treatment and immobilization of Hanford LAW and HLW. The sampler can be deployed in a 4-inch tank riser and has a design that is based on requirements identified in the Level 2 Specification (latest revision). The waste feed sequence for the MVDSS is based on Phase 1, Case 3S6 waste feed sequence. Technical information is also presented that supports the design study.

  6. The impact of study design and diagnostic approach in a large multi-centre ADHD study. Part 1: ADHD symptom patterns

    Directory of Open Access Journals (Sweden)

    Roeyers Herbert

    2011-04-01

    Abstract Background The International Multi-centre ADHD Genetics (IMAGE) project, with 11 participating centres from 7 European countries and Israel, has collected a large behavioural and genetic database for present and future research. Behavioural data were collected from 1068 probands with the combined type of attention deficit/hyperactivity disorder (ADHD-CT) and 1446 'unselected' siblings. The aim was to analyse the IMAGE sample with respect to demographic features (gender, age, family status, and recruiting centres) and psychopathological characteristics (diagnostic subtype, symptom frequencies, age at symptom detection, and comorbidities). A particular focus was on the effects of the study design and the diagnostic procedure on the homogeneity of the sample in terms of symptom-based behavioural data, and potential consequences for further analyses based on these data. Methods Diagnosis was based on the Parental Account of Childhood Symptoms (PACS) interview and the DSM-IV items of the Conners' teacher questionnaire. Demographics of the full sample and the homogeneity of a subsample (all probands) were analysed by using robust statistical procedures which were adjusted for unequal sample sizes and skewed distributions. These procedures included multi-way analyses based on trimmed means and winsorised variances as well as bootstrapping. Results Age and proband/sibling ratios differed between participating centres. There was no significant difference in the distribution of gender between centres. There was a significant interaction between age and centre for number of inattentive, but not number of hyperactive symptoms. Higher ADHD symptom frequencies were reported by parents than teachers. The diagnostic symptoms differed from each other in their frequencies. The face-to-face interview was more sensitive than the questionnaire. The differentiation between ADHD-CT probands and unaffected siblings was mainly due to differences in hyperactive

  7. The impact of study design and diagnostic approach in a large multi-centre ADHD study. Part 1: ADHD symptom patterns.

    LENUS (Irish Health Repository)

    Muller, Ueli C

    2011-04-07

    Abstract Background The International Multi-centre ADHD Genetics (IMAGE) project with 11 participating centres from 7 European countries and Israel has collected a large behavioural and genetic database for present and future research. Behavioural data were collected from 1068 probands with the combined type of attention deficit/hyperactivity disorder (ADHD-CT) and 1446 'unselected' siblings. The aim was to analyse the IMAGE sample with respect to demographic features (gender, age, family status, and recruiting centres) and psychopathological characteristics (diagnostic subtype, symptom frequencies, age at symptom detection, and comorbidities). A particular focus was on the effects of the study design and the diagnostic procedure on the homogeneity of the sample in terms of symptom-based behavioural data, and potential consequences for further analyses based on these data. Methods Diagnosis was based on the Parental Account of Childhood Symptoms (PACS) interview and the DSM-IV items of the Conners' teacher questionnaire. Demographics of the full sample and the homogeneity of a subsample (all probands) were analysed by using robust statistical procedures which were adjusted for unequal sample sizes and skewed distributions. These procedures included multi-way analyses based on trimmed means and winsorised variances as well as bootstrapping. Results Age and proband/sibling ratios differed between participating centres. There was no significant difference in the distribution of gender between centres. There was a significant interaction between age and centre for number of inattentive, but not number of hyperactive symptoms. Higher ADHD symptom frequencies were reported by parents than teachers. The diagnostic symptoms differed from each other in their frequencies. The face-to-face interview was more sensitive than the questionnaire. The differentiation between ADHD-CT probands and unaffected siblings was mainly due to differences in hyperactive
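
The robust procedures named in this abstract (trimmed means, winsorised variances, and bootstrapping) can be sketched in a few lines. The data below are synthetic and purely illustrative, not the IMAGE data, and `scipy` is assumed to be available:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic, skewed symptom-count data for two hypothetical centres
# (illustrative only -- not the IMAGE data).
centre_a = rng.poisson(6.0, size=120).astype(float)
centre_b = rng.poisson(7.5, size=80).astype(float)

def winsorised_variance(x, prop=0.2):
    """Variance after replacing each tail's extreme `prop` fraction
    with the nearest retained value."""
    w = np.asarray(stats.mstats.winsorize(x, limits=(prop, prop)))
    return np.var(w, ddof=1)

# 20% trimmed means are robust to outliers and skew.
tm_a = stats.trim_mean(centre_a, proportiontocut=0.2)
tm_b = stats.trim_mean(centre_b, proportiontocut=0.2)

# Percentile bootstrap for the difference in trimmed means.
boot = np.empty(2000)
for i in range(boot.size):
    ra = rng.choice(centre_a, size=centre_a.size, replace=True)
    rb = rng.choice(centre_b, size=centre_b.size, replace=True)
    boot[i] = stats.trim_mean(ra, 0.2) - stats.trim_mean(rb, 0.2)
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
```

Trimming and winsorising both limit the influence of extreme observations, which is why such procedures tolerate the skewed distributions and unequal group sizes the abstract describes.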

  8. Statistical process control charts for attribute data involving very large sample sizes: a review of problems and solutions.

    Science.gov (United States)

    Mohammed, Mohammed A; Panesar, Jagdeep S; Laney, David B; Wilson, Richard

    2013-04-01

    The use of statistical process control (SPC) charts in healthcare is increasing. The primary purpose of SPC is to distinguish between common-cause variation which is attributable to the underlying process, and special-cause variation which is extrinsic to the underlying process. This is important because improvement under common-cause variation requires action on the process, whereas special-cause variation merits an investigation to first find the cause. Nonetheless, when dealing with attribute or count data (eg, number of emergency admissions) involving very large sample sizes, traditional SPC charts often produce tight control limits with most of the data points appearing outside the control limits. This can give a false impression of common and special-cause variation, and potentially misguide the user into taking the wrong actions. Given the growing availability of large datasets from routinely collected databases in healthcare, there is a need to present a review of this problem (which arises because traditional attribute charts only consider within-subgroup variation) and its solutions (which consider within and between-subgroup variation), which involve the use of the well-established measurements chart and the more recently developed attribute charts based on Laney's innovative approach. We close by making some suggestions for practice.
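
Laney's adjustment can be illustrated with a short sketch. The p′-chart recipe below follows the commonly published form (standardise the subgroup proportions, estimate between-subgroup variation from the average moving range of the z-scores, then inflate the binomial limits by that factor); the admissions data are hypothetical:

```python
import numpy as np

def laney_p_prime_limits(counts, denominators):
    """Laney p'-chart limits: standardise the subgroup proportions, estimate
    between-subgroup variation (sigma_z) from the average moving range of the
    z-scores, and widen the binomial limits by that factor."""
    x = np.asarray(counts, dtype=float)
    n = np.asarray(denominators, dtype=float)
    p = x / n
    pbar = x.sum() / n.sum()
    sigma_p = np.sqrt(pbar * (1.0 - pbar) / n)       # within-subgroup sigma
    z = (p - pbar) / sigma_p
    sigma_z = np.mean(np.abs(np.diff(z))) / 1.128    # moving-range estimate
    ucl = pbar + 3.0 * sigma_p * sigma_z
    lcl = np.clip(pbar - 3.0 * sigma_p * sigma_z, 0.0, None)
    return p, pbar, lcl, ucl

# Hypothetical data: 24 months of admissions out of very large attendances,
# with genuine month-to-month (between-subgroup) variation.
rng = np.random.default_rng(3)
n = rng.integers(8000, 12000, size=24)
true_p = np.clip(rng.normal(0.10, 0.01, size=24), 0.05, 0.15)
x = rng.binomial(n, true_p)
p, pbar, lcl, ucl = laney_p_prime_limits(x, n)
n_outside = int(((p < lcl) | (p > ucl)).sum())
```

With large denominators the pure binomial limits collapse around `pbar`, so almost every point signals; multiplying by `sigma_z` restores limits that reflect both within- and between-subgroup variation.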

  9. A direct observation method for auditing large urban centers using stratified sampling, mobile GIS technology and virtual environments.

    Science.gov (United States)

    Lafontaine, Sean J V; Sawada, M; Kristjansson, Elizabeth

    2017-02-16

    With the expansion and growth of research on neighbourhood characteristics, there is an increased need for direct observational field audits. Herein, we introduce a novel direct observational audit method and systematic social observation instrument (SSOI) for efficiently assessing neighbourhood aesthetics over large urban areas. Our audit method uses spatial random sampling stratified by residential zoning and incorporates both mobile geographic information systems technology and virtual environments. The reliability of our method was tested in two ways: first, in 15 Ottawa neighbourhoods, we compared results at audited locations over two subsequent years; and second, we audited every residential block (167 blocks) in one neighbourhood and compared the distribution of SSOI aesthetics index scores with results from the randomly audited locations. Finally, we present interrater reliability and consistency results on all observed items. The observed neighbourhood average aesthetics index score estimated from four or five stratified random audit locations is sufficient to characterize the average neighbourhood aesthetics. The SSOI was internally consistent and demonstrated good to excellent interrater reliability. At the neighbourhood level, aesthetics is positively related to SES and physical activity and negatively correlated with BMI. The proposed approach to direct neighbourhood auditing performs adequately and has the advantage of financial and temporal efficiency when auditing a large city.

  10. Characteristics of Beverage Consumption Habits among a Large Sample of French Adults: Associations with Total Water and Energy Intakes

    Directory of Open Access Journals (Sweden)

    Fabien Szabo de Edelenyi

    2016-10-01

    Background: Adequate hydration is a key factor for the correct functioning of both cognitive and physical processes. In France, public health recommendations about adequate total water intake (TWI) state only that fluid intake should be sufficient, with particular attention paid to hydration for seniors, especially during heatwave periods. The objective of this study was to calculate the total amount of water coming from food and beverages and to analyse characteristics of consumption in participants from a large French national cohort. Methods: TWI, as well as the contribution of food and beverages to TWI, was assessed among 94,939 adult participants in the Nutrinet-Santé cohort (78% women; mean age 42.9 (SE 0.04)) using three 24-h dietary records at baseline. Statistical differences in water intakes across age groups, seasons and days of the week were assessed. Results: The mean TWI was 2.3 L (standard error, SE, 4.7) for men and 2.1 L (SE 2.4) for women. A majority of the sample complied with the European Food Safety Authority (EFSA) adequate intake recommendation, especially women. Mean total energy intake (EI) was 1884 kcal/day (SE 1.5) overall (2250 kcal/day (SE 3.6) for men and 1783 kcal/day (SE 1.5) for women). The contribution of beverages to total EI was 8.3%. Water was the most consumed beverage, followed by hot beverages. The variety score, defined as the number of different categories of beverages consumed during the three 24-h records out of a maximum of 8, was positively correlated with TWI (r = 0.4) and with EI (r = 0.2), suggesting that beverage variety is an indicator of higher consumption of food and drinks. We found differences in beverage consumption and water intakes according to age and seasonality. Conclusions: The present study gives an overview of the water intake characteristics in a large population of French adults. TWI was found to be globally in line with public health recommendations.

  11. Large area optical mapping of surface contact angle.

    Science.gov (United States)

    Dutra, Guilherme; Canning, John; Padden, Whayne; Martelli, Cicero; Dligatch, Svetlana

    2017-09-04

    Top-down contact angle measurements have been validated and confirmed to be as good as, if not more reliable than, side-based measurements. A range of samples, including industrially relevant materials for roofing and printing, has been compared. Using the top-down approach, mapping in both 1-D and 2-D has been demonstrated. The method was applied to study the change in contact angle as a function of change in silver (Ag) nanoparticle size controlled by thermal evaporation. Large area mapping reveals good uniformity for commercial Aspen paper coated with black laser printer ink. A demonstration of the forensic and chemical analysis potential in 2-D is shown by uncovering the hidden CsF initials made with mineral oil on the coated Aspen paper. The method promises to revolutionize nanoscale characterization and industrial monitoring as well as chemical analyses by allowing rapid contact angle measurements over large areas or large numbers of samples in ways and times that have not been possible before.

  12. Implicit and Explicit Weight Bias in a National Sample of 4732 Medical Students: The Medical Student CHANGES Study

    OpenAIRE

    Phelan, Sean M.; Dovidio, John F.; Puhl, Rebecca M.; Burgess, Diana J.; Nelson, David B.; Yeazel, Mark W.; Hardeman, Rachel; Perry, Sylvia; van Ryn, Michelle

    2014-01-01

    Objective To examine the magnitude of explicit and implicit weight biases compared to biases against other groups; and identify student factors predicting bias in a large national sample of medical students. Design and Methods A web-based survey was completed by 4732 1st year medical students from 49 medical schools as part of a longitudinal study of medical education. The survey included a validated measure of implicit weight bias, the implicit association test, and 2 measures of explicit bi...

  13. Pilot studies for the North American Soil Geochemical Landscapes Project - Site selection, sampling protocols, analytical methods, and quality control protocols

    Science.gov (United States)

    Smith, D.B.; Woodruff, L.G.; O'Leary, R. M.; Cannon, W.F.; Garrett, R.G.; Kilburn, J.E.; Goldhaber, M.B.

    2009-01-01

    In 2004, the US Geological Survey (USGS) and the Geological Survey of Canada sampled and chemically analyzed soils along two transects across Canada and the USA in preparation for a planned soil geochemical survey of North America. This effort was a pilot study to test and refine sampling protocols, analytical methods, quality control protocols, and field logistics for the continental survey. A total of 220 sample sites were selected at approximately 40-km intervals along the two transects. The ideal sampling protocol at each site called for a sample from a depth of 0-5 cm and a composite of each of the O, A, and C horizons. Samples were analysed for Ca, Fe, K, Mg, Na, S, Ti, Ag, As, Ba, Be, Bi, Cd, Ce, Co, Cr, Cs, Cu, Ga, In, La, Li, Mn, Mo, Nb, Ni, P, Pb, Rb, Sb, Sc, Sn, Sr, Te, Th, Tl, U, V, W, Y, and Zn by inductively coupled plasma-mass spectrometry and inductively coupled plasma-atomic emission spectrometry following a near-total digestion in a mixture of HCl, HNO3, HClO4, and HF. Separate methods were used for Hg, Se, total C, and carbonate-C on this same size fraction. Only Ag, In, and Te had a large percentage of concentrations below the detection limit. Quality control (QC) of the analyses was monitored at three levels: the laboratory performing the analysis, the USGS QC officer, and the principal investigator for the study. This level of review resulted in an average of one QC sample for every 20 field samples, which proved to be minimally adequate for such a large-scale survey. Additional QC samples should be added to monitor within-batch quality to the extent that no more than 10 samples are analyzed between QC samples. Only Cr (77%), Y (82%), and Sb (80%) fell outside the acceptable limits of accuracy (% recovery between 85 and 115%) because of likely residence in mineral phases resistant to the acid digestion. A separate sample of 0-5-cm material was collected at each site for determination of organic compounds.
A subset of 73 of these samples was analyzed for a suite of
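
The QC arithmetic described above (percent recovery within 85-115%, and no more than 10 field samples between consecutive QC samples) is simple enough to encode directly; the helper names below are ours, not the survey's:

```python
def recovery_pct(measured, certified):
    """Percent recovery of a reference material."""
    return 100.0 * measured / certified

def within_accuracy(measured, certified, lo=85.0, hi=115.0):
    """Acceptance window used above: recovery between 85% and 115%."""
    return lo <= recovery_pct(measured, certified) <= hi

def n_qc_samples(n_field, batch):
    """QC samples needed so that no more than `batch` field samples are
    analysed between consecutive QC samples (ceiling division)."""
    return -(-n_field // batch)

# 220 sites at 1 QC per 20 field samples -> 11 QC analyses;
# tightening to 1 per 10 doubles that to 22.
```

Under this rule the Cr result quoted above (77% recovery) falls outside the acceptance window, consistent with its residence in acid-resistant mineral phases.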

  14. Proxy Graph: Visual Quality Metrics of Big Graph Sampling.

    Science.gov (United States)

    Nguyen, Quan Hoang; Hong, Seok-Hee; Eades, Peter; Meidiana, Amyra

    2017-06-01

    Data sampling has been extensively studied for large scale graph mining. Many analyses and tasks become more efficient when performed on graph samples of much smaller size. The use of proxy objects is common in software engineering for analysis and interaction with heavy objects or systems. In this paper, we coin the term 'proxy graph' and empirically investigate how well a proxy graph visualization can represent a big graph. Our investigation focuses on proxy graphs obtained by sampling; this is one of the most common proxy approaches. Despite the plethora of data sampling studies, this is the first evaluation of sampling in the context of graph visualization. For an objective evaluation, we propose a new family of quality metrics for visual quality of proxy graphs. Our experiments cover popular sampling techniques. Our experimental results lead to guidelines for using sampling-based proxy graphs in visualization.
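
Sampling-based proxy graphs of the kind evaluated here can be produced with very little code. The sketch below shows two common strategies, induced-subgraph node sampling and random edge sampling; it is a minimal illustration, not the paper's method:

```python
import random

def node_sample(edges, rate, seed=0):
    """Induced-subgraph node sampling: keep each vertex with probability
    `rate`; keep an edge only if both endpoints survive."""
    rng = random.Random(seed)
    nodes = {v for e in edges for v in e}
    kept = {v for v in sorted(nodes) if rng.random() < rate}
    return kept, [(u, v) for (u, v) in edges if u in kept and v in kept]

def edge_sample(edges, rate, seed=0):
    """Random edge sampling: keep each edge independently with probability
    `rate`; the proxy's vertex set is whatever the kept edges touch."""
    rng = random.Random(seed)
    kept_edges = [e for e in edges if rng.random() < rate]
    kept_nodes = {v for e in kept_edges for v in e}
    return kept_nodes, kept_edges

# A 100-vertex ring as a toy "big" graph.
ring = [(i, (i + 1) % 100) for i in range(100)]
nodes_ns, edges_ns = node_sample(ring, 0.5)
nodes_es, edges_es = edge_sample(ring, 0.5)
```

The two strategies distort different properties (node sampling fragments sparse graphs; edge sampling biases toward high-degree vertices), which is exactly the kind of trade-off the paper's visual quality metrics are designed to measure.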

  15. Evaluation of Respondent-Driven Sampling

    Science.gov (United States)

    McCreesh, Nicky; Frost, Simon; Seeley, Janet; Katongole, Joseph; Tarsh, Matilda Ndagire; Ndunguse, Richard; Jichi, Fatima; Lunel, Natasha L; Maher, Dermot; Johnston, Lisa G; Sonnenberg, Pam; Copas, Andrew J; Hayes, Richard J; White, Richard G

    2012-01-01

    Background Respondent-driven sampling is a novel variant of link-tracing sampling for estimating the characteristics of hard-to-reach groups, such as HIV prevalence in sex-workers. Despite its use by leading health organizations, the performance of this method in realistic situations is still largely unknown. We evaluated respondent-driven sampling by comparing estimates from a respondent-driven sampling survey with total-population data. Methods Total-population data on age, tribe, religion, socioeconomic status, sexual activity and HIV status were available on a population of 2402 male household-heads from an open cohort in rural Uganda. A respondent-driven sampling (RDS) survey was carried out in this population, employing current methods of sampling (RDS sample) and statistical inference (RDS estimates). Analyses were carried out for the full RDS sample and then repeated for the first 250 recruits (small sample). Results We recruited 927 household-heads. Full and small RDS samples were largely representative of the total population, but both samples under-represented men who were younger, of higher socioeconomic status, and with unknown sexual activity and HIV status. Respondent-driven-sampling statistical-inference methods failed to reduce these biases. Only 31%-37% (depending on method and sample size) of RDS estimates were closer to the true population proportions than the RDS sample proportions. Only 50%-74% of respondent-driven-sampling bootstrap 95% confidence intervals included the population proportion. Conclusions Respondent-driven sampling produced a generally representative sample of this well-connected non-hidden population. However, current respondent-driven-sampling inference methods failed to reduce bias when it occurred. Whether the data required to remove bias and measure precision can be collected in a respondent-driven sampling survey is unresolved. 
Respondent-driven sampling should be regarded as a (potentially superior) form of convenience sampling

  16. Evaluation of respondent-driven sampling.

    Science.gov (United States)

    McCreesh, Nicky; Frost, Simon D W; Seeley, Janet; Katongole, Joseph; Tarsh, Matilda N; Ndunguse, Richard; Jichi, Fatima; Lunel, Natasha L; Maher, Dermot; Johnston, Lisa G; Sonnenberg, Pam; Copas, Andrew J; Hayes, Richard J; White, Richard G

    2012-01-01

    Respondent-driven sampling is a novel variant of link-tracing sampling for estimating the characteristics of hard-to-reach groups, such as HIV prevalence in sex workers. Despite its use by leading health organizations, the performance of this method in realistic situations is still largely unknown. We evaluated respondent-driven sampling by comparing estimates from a respondent-driven sampling survey with total population data. Total population data on age, tribe, religion, socioeconomic status, sexual activity, and HIV status were available on a population of 2402 male household heads from an open cohort in rural Uganda. A respondent-driven sampling (RDS) survey was carried out in this population, using current methods of sampling (RDS sample) and statistical inference (RDS estimates). Analyses were carried out for the full RDS sample and then repeated for the first 250 recruits (small sample). We recruited 927 household heads. Full and small RDS samples were largely representative of the total population, but both samples underrepresented men who were younger, of higher socioeconomic status, and with unknown sexual activity and HIV status. Respondent-driven sampling statistical inference methods failed to reduce these biases. Only 31%-37% (depending on method and sample size) of RDS estimates were closer to the true population proportions than the RDS sample proportions. Only 50%-74% of respondent-driven sampling bootstrap 95% confidence intervals included the population proportion. Respondent-driven sampling produced a generally representative sample of this well-connected nonhidden population. However, current respondent-driven sampling inference methods failed to reduce bias when it occurred. Whether the data required to remove bias and measure precision can be collected in a respondent-driven sampling survey is unresolved. Respondent-driven sampling should be regarded as a (potentially superior) form of convenience sampling method, and caution is required
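
One widely used RDS inference method, the Volz-Heckathorn (RDS-II) estimator, weights each recruit by the inverse of their reported network degree to offset the over-recruitment of well-connected people. A minimal sketch (not necessarily the inference method used in this study):

```python
def rds_ii_estimate(traits, degrees):
    """Volz-Heckathorn (RDS-II) estimator of a population proportion:
    weight each recruit by 1/degree, since well-connected people are
    more likely to be recruited through their network ties."""
    inv = [1.0 / d for d in degrees]
    return sum(w for w, t in zip(inv, traits) if t) / sum(inv)

# Two trait-positive recruits with degree 10 and two negatives with degree 5:
# the naive sample proportion is 0.5, but the degree-adjusted estimate is 1/3.
est = rds_ii_estimate([1, 1, 0, 0], [10, 10, 5, 5])
```

The adjustment only removes bias if recruitment probability really is proportional to degree; the study's finding that inference methods "failed to reduce bias when it occurred" shows how fragile that assumption can be in practice.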

  17. Use of space-filling curves to select sample locations in natural resource monitoring studies

    Science.gov (United States)

    Andrew Lister; Charles T. Scott

    2009-01-01

    The establishment of several large area monitoring networks over the past few decades has led to increased research into ways to spatially balance sample locations across the landscape. Many of these methods are well documented and have been used in the past with great success. In this paper, we present a method using geographic information systems (GIS) and fractals...
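
The idea of using a space-filling curve for spatial balance can be illustrated with a Z-order (Morton) curve, which is simpler to code than the GIS/fractal machinery the authors describe: candidate points are ordered along the curve and sampled systematically, so selections spread apart on the map. Function names here are ours:

```python
def morton_key(ix, iy, bits=16):
    """Interleave the bits of two grid coordinates (Z-order / Morton code)."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (2 * b)
        key |= ((iy >> b) & 1) << (2 * b + 1)
    return key

def spatially_balanced_sample(points, k, bits=16):
    """Order candidate points along a Z-order curve, then take every
    (n/k)-th point: neighbours in the ordering tend to be near each other
    in space, so a systematic sample along the curve spreads out on the map."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, x1, y0, y1 = min(xs), max(xs), min(ys), max(ys)
    scale = (1 << bits) - 1

    def grid(v, lo, hi):
        return int((v - lo) / (hi - lo) * scale) if hi > lo else 0

    ordered = sorted(points,
                     key=lambda p: morton_key(grid(p[0], x0, x1),
                                              grid(p[1], y0, y1), bits))
    step = len(ordered) / k
    return [ordered[int(i * step)] for i in range(k)]

# 100 candidate locations on an irregular lattice; pick 10 spread-out sites.
candidates = [((i * 37) % 101 / 100.0, (i * 61) % 97 / 96.0) for i in range(100)]
sites = spatially_balanced_sample(candidates, 10)
```

Z-order curves preserve locality less well than Hilbert curves or the fractal approach cited here, but the principle, mapping 2-D locations to a 1-D ordering before systematic sampling, is the same.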

  18. Acute Effect of Alcohol Intake on Cardiovascular Autonomic Regulation During the First Hours of Sleep in a Large Real-World Sample of Finnish Employees: Observational Study.

    Science.gov (United States)

    Pietilä, Julia; Helander, Elina; Korhonen, Ilkka; Myllymäki, Tero; Kujala, Urho M; Lindholm, Harri

    2018-03-16

    Sleep is fundamental for good health, and poor sleep has been associated with negative health outcomes. Alcohol consumption is a universal health behavior associated with poor sleep. In controlled laboratory studies, alcohol intake has been shown to alter physiology and disturb sleep homeostasis and architecture. The association between acute alcohol intake and physiological changes has not yet been studied in noncontrolled real-world settings. The aim of this study was to assess the effects of alcohol intake on the autonomic nervous system (ANS) during sleep in a large noncontrolled sample of Finnish employees. From a larger cohort, this study included 4098 subjects (55.81%, 2287/4098 females; mean age 45.1 years) who had continuous beat-to-beat R-R interval recordings of good quality for at least 1 day with and for at least 1 day without alcohol intake. The participants underwent continuous beat-to-beat R-R interval recording during their normal everyday life and self-reported their alcohol intake as doses for each day. Heart rate (HR), HR variability (HRV), and HRV-derived indices of physiological state from the first 3 hours of sleep were used as outcomes. Within-subject analyses were conducted in a repeated measures manner by studying the differences in the outcomes between each participant's days with and without alcohol intake. For repeated measures two-way analysis of variance, the participants were divided into three groups: low (≤0.25 g/kg), moderate (>0.25-0.75 g/kg), and high (>0.75 g/kg) intake of pure alcohol. Moreover, linear models studied the differences in outcomes with respect to the amount of alcohol intake and the participant's background parameters (age; gender; body mass index, BMI; physical activity, PA; and baseline sleep HR). Alcohol intake was dose-dependently associated with increased sympathetic regulation, decreased parasympathetic regulation, and insufficient recovery. In addition to moderate and high alcohol doses, the

  19. Media Use and Source Trust among Muslims in Seven Countries: Results of a Large Random Sample Survey

    Directory of Open Access Journals (Sweden)

    Steven R. Corman

    2013-12-01

    Despite the perceived importance of media in the spread of and resistance against Islamist extremism, little is known about how Muslims use different kinds of media to get information about religious issues, and what sources they trust when doing so. This paper reports the results of a large, random-sample survey among Muslims in seven countries in Southeast Asia, West Africa and Western Europe, which helps fill this gap. Results show a diverse set of profiles of media use and source trust that differ by country, with overall low trust in mediated sources of information. Based on these findings, we conclude that mass media are still the most common source of religious information for Muslims, but that trust in mediated information is low overall. This suggests that media are probably best used to persuade opinion leaders, who will then carry anti-extremist messages through more personal means.

  20. Tissue Sampling Guides for Porcine Biomedical Models.

    Science.gov (United States)

    Albl, Barbara; Haesner, Serena; Braun-Reichhart, Christina; Streckel, Elisabeth; Renner, Simone; Seeliger, Frank; Wolf, Eckhard; Wanke, Rüdiger; Blutke, Andreas

    2016-04-01

    This article provides guidelines for organ and tissue sampling adapted to porcine animal models in translational medical research. Detailed protocols for the determination of sampling locations and numbers as well as recommendations on the orientation, size, and trimming direction of samples from ∼50 different porcine organs and tissues are provided in the Supplementary Material. The proposed sampling protocols include the generation of samples suitable for subsequent qualitative and quantitative analyses, including cryohistology, paraffin, and plastic histology; immunohistochemistry; in situ hybridization; electron microscopy; and quantitative stereology as well as molecular analyses of DNA, RNA, proteins, metabolites, and electrolytes. With regard to the planned extent of sampling efforts, time, and personnel expenses, and dependent upon the scheduled analyses, different protocols are provided. These protocols are adjusted for (I) routine screenings, as used in general toxicity studies or in analyses of gene expression patterns or histopathological organ alterations, (II) advanced analyses of single organs/tissues, and (III) large-scale sampling procedures to be applied in biobank projects. Providing a robust reference for studies of porcine models, the described protocols will ensure the efficiency of sampling, the systematic recovery of high-quality samples representing the entire organ or tissue as well as the intra-/interstudy comparability and reproducibility of results. © The Author(s) 2016.

  1. Large Country-Lot Quality Assurance Sampling : A New Method for Rapid Monitoring and Evaluation of Health, Nutrition and Population Programs at Sub-National Levels

    OpenAIRE

    Hedt, Bethany L.; Olives, Casey; Pagano, Marcello; Valadez, Joseph J.

    2008-01-01

    Sampling theory facilitates the development of economical, effective and rapid measurement of a population. While national policy makers value survey results measuring indicators representative of a large area (a country, state or province), measurement in smaller areas produces information useful for managers at the local level. It is often not possible to disaggregate a national survey to obt...

  2. HAMMER: Reweighting tool for simulated data samples

    CERN Document Server

    Duell, Stephan; Ligeti, Zoltan; Papucci, Michele; Robinson, Dean

    2016-01-01

    Modern flavour physics experiments, such as Belle II or LHCb, require large samples of generated Monte Carlo events. Monte Carlo events are often processed in a sophisticated chain that includes a simulation of the detector response. The generation and reconstruction of large samples is resource-intensive and in principle would need to be repeated if, for example, parameters responsible for the underlying models change due to new measurements or new insights. To avoid having to regenerate large samples, we work on a tool, the Helicity Amplitude Module for Matrix Element Reweighting (HAMMER), which allows one to easily reweight existing events in the context of semileptonic b → q ℓ ν̄ℓ analyses to new model parameters or new physics scenarios.
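
The underlying idea, reweighting stored events by the ratio of the new model's density to the density used at generation time, can be shown with a toy example. This is generic importance reweighting, not HAMMER's helicity-amplitude machinery:

```python
import math
import random

def reweight(events, pdf_old, pdf_new):
    """Per-event weight = density under the new model / density used at
    generation time, so an existing sample can stand in for the new model."""
    return [pdf_new(x) / pdf_old(x) for x in events]

# Toy sample generated from Exp(rate=1.0), reweighted to Exp(rate=1.3).
rng = random.Random(1)
events = [rng.expovariate(1.0) for _ in range(20000)]
weights = reweight(events,
                   lambda x: math.exp(-x),
                   lambda x: 1.3 * math.exp(-1.3 * x))
# The weighted mean should approach the new model's mean, 1/1.3 ~= 0.77.
est = sum(w * x for w, x in zip(weights, events)) / sum(weights)
```

Reweighting only works where the generation density covers the new model's support, and weight variance grows as the two models diverge; in practice this limits how far stored samples can be pushed without regeneration.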

  3. Prevalence of DSM-IV and DSM-5 Alcohol, Cocaine, Opioid, and Cannabis Use Disorders in a Largely Substance Dependent Sample

    Science.gov (United States)

    Peer, Kyle; Rennert, Lior; Lynch, Kevin G.; Farrer, Lindsay; Gelernter, Joel; Kranzler, Henry R.

    2012-01-01

    BACKGROUND The fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) will soon replace the DSM-IV, which has existed for nearly two decades. The changes in diagnostic criteria have important implications for research and for the clinical care of individuals with Substance Use Disorders (SUDs). METHODS We used the Semi-Structured Assessment for Drug Dependence and Alcoholism to evaluate the lifetime presence of DSM-IV abuse and dependence diagnoses and DSM-5 mild, moderate, or severe SUDs for alcohol, cocaine, opioids, and cannabis in a sample of 7,543 individuals recruited to participate in genetic studies of substance dependence. RESULTS Switches between diagnostic systems consistently resulted in a modestly greater prevalence for DSM-5 SUDs, based largely on the assignment of DSM-5 diagnoses to DSM-IV “diagnostic orphans” (i.e., individuals meeting one or two criteria for dependence and none for abuse, and thus not receiving a DSM-IV SUD diagnosis). The vast majority of these diagnostic switches were attributable to the requirement that only two of 11 criteria be met for a DSM-5 SUD diagnosis. We found evidence to support the omission from DSM-5 of the legal criterion due to its limited diagnostic utility. The addition of craving as a criterion in DSM-5 did not substantially affect the likelihood of an SUD diagnosis. CONCLUSION The greatest advantage of DSM-5 appears to be its ability to capture diagnostic orphans. In this sample, changes reflected in DSM-5 had a minimal impact on the prevalence of SUD diagnoses. PMID:22884164

  4. Solid phase extraction of large volume of water and beverage samples to improve detection limits for GC-MS analysis of bisphenol A and four other bisphenols.

    Science.gov (United States)

    Cao, Xu-Liang; Popovic, Svetlana

    2018-01-01

    Solid phase extraction (SPE) of large volumes of water and beverage products was investigated for the GC-MS analysis of bisphenol A (BPA), bisphenol AF (BPAF), bisphenol F (BPF), bisphenol E (BPE), and bisphenol B (BPB). While absolute recoveries of the method were improved for water and some beverage products (e.g. diet cola, iced tea), breakthrough may also have occurred during SPE of 200 mL of other beverages (e.g. BPF in cola). Improvements in method detection limits were observed with the analysis of large sample volumes for all bisphenols at ppt (pg/g) to sub-ppt levels. This improvement was found to be proportional to sample volumes for water and beverage products, with fewer interferences and lower noise levels around the analytes. Matrix effects and interferences were observed during SPE of larger volumes (100 and 200 mL) of the beverage products, and affected the accurate analysis of BPF. This improved method was used to analyse bisphenols in various beverage samples, and only BPA was detected, with levels ranging from 0.022 to 0.030 ng/g for products in PET bottles, and 0.085 to 0.32 ng/g for products in cans.

  5. Sampling methods and non-destructive examination techniques for large radioactive waste packages

    International Nuclear Information System (INIS)

    Green, T.H.; Smith, D.L.; Burgoyne, K.E.; Maxwell, D.J.; Norris, G.H.; Billington, D.M.; Pipe, R.G.; Smith, J.E.; Inman, C.M.

    1992-01-01

    Progress is reported on work undertaken to evaluate quality checking methods for radioactive wastes. A sampling rig was designed, fabricated and used to develop techniques for the destructive sampling of cemented simulant waste using remotely operated equipment. An engineered system for the containment of cooling water was designed and manufactured and successfully demonstrated with the drum and coring equipment mounted in both vertical and horizontal orientations. The preferred in-cell orientation was found to be with the drum and coring machinery mounted in a horizontal position. Small powdered samples can be taken from cemented homogeneous waste cores using a hollow drill/vacuum section technique with the preferred subsampling technique being to discard the outer 10 mm layer to obtain a representative sample of the cement core. Cement blends can be dissolved using fusion techniques and the resulting solutions are stable to gelling for periods in excess of one year. Although hydrochloric acid and nitric acid are promising solvents for dissolution of cement blends, the resultant solutions tend to form silicic acid gels. An estimate of the beta-emitter content of cemented waste packages can be obtained by a combination of non-destructive and destructive techniques. The errors will probably be in excess of +/-60 % at the 95 % confidence level. Real-time X-ray video-imaging techniques have been used to analyse drums of uncompressed, hand-compressed, in-drum compacted and high-force compacted (i.e. supercompacted) simulant waste. The results have confirmed the applicability of this technique for NDT of low-level waste. 8 refs., 12 figs., 3 tabs

  6. Phenomenology of OCD: lessons from a large multicenter study and implications for ICD-11.

    Science.gov (United States)

    Shavitt, Roseli G; de Mathis, Maria Alice; Oki, Fábio; Ferrao, Ygor A; Fontenelle, Leonar