WorldWideScience

Sample records for samples show large

  1. Large sample neutron activation analysis of a reference inhomogeneous sample

    International Nuclear Information System (INIS)

    Vasilopoulou, T.; Athens National Technical University, Athens; Tzika, F.; Stamatelatos, I.E.; Koster-Ammerlaan, M.J.J.

    2011-01-01

    A benchmark experiment was performed for Neutron Activation Analysis (NAA) of a large inhomogeneous sample. The reference sample was developed in-house and consisted of a SiO₂ matrix and an Al-Zn alloy 'inhomogeneity' body. Monte Carlo simulations were employed to derive appropriate correction factors for neutron self-shielding during irradiation as well as for self-attenuation of gamma rays and sample geometry during counting. The large sample neutron activation analysis (LSNAA) results were compared against reference values and the trueness of the technique was evaluated. An agreement within ±10% was observed between LSNAA and reference elemental mass values for all matrix and inhomogeneity elements except samarium, provided that the inhomogeneity body was fully simulated. However, in cases where the inhomogeneity was treated as unknown, the results showed reasonable agreement for most matrix elements, while large discrepancies were observed for the inhomogeneity elements. This study provided a quantification of the uncertainties associated with inhomogeneity in large sample analysis and contributed to identifying the needs for future development of LSNAA facilities for the analysis of inhomogeneous samples. (author)

  2. Large Sample Neutron Activation Analysis of Heterogeneous Samples

    International Nuclear Information System (INIS)

    Stamatelatos, I.E.; Vasilopoulou, T.; Tzika, F.

    2018-01-01

    A Large Sample Neutron Activation Analysis (LSNAA) technique was developed for non-destructive analysis of heterogeneous bulk samples. The technique incorporated collimated scanning and combined experimental measurements with Monte Carlo simulations to identify inhomogeneities in large volume samples and to correct for their effect on the interpretation of gamma-spectrometry data. Corrections were applied for the effects of neutron self-shielding, gamma-ray attenuation, geometrical factor and heterogeneous activity distribution within the sample. A benchmark experiment was performed to investigate the effect of heterogeneity on the accuracy of LSNAA. Moreover, a ceramic vase was analyzed as a whole, demonstrating the feasibility of the technique. The LSNAA results were compared against results obtained by INAA and a satisfactory agreement between the two methods was observed. This study showed that LSNAA is a technique capable of performing accurate, non-destructive, multi-elemental compositional analysis of heterogeneous objects. It also revealed the great potential of the technique for the analysis of precious objects and artefacts that need to be preserved intact and cannot be damaged for sampling purposes. (author)

  3. Sampling large random knots in a confined space

    International Nuclear Information System (INIS)

    Arsuaga, J; Blackstone, T; Diao, Y; Hinson, K; Karadayi, E; Saito, M

    2007-01-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (such as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of the order O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.

  4. Sampling large random knots in a confined space

    Science.gov (United States)

    Arsuaga, J.; Blackstone, T.; Diao, Y.; Hinson, K.; Karadayi, E.; Saito, M.

    2007-09-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (such as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of the order O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.

  5. Sampling large random knots in a confined space

    Energy Technology Data Exchange (ETDEWEB)

    Arsuaga, J [Department of Mathematics, San Francisco State University, 1600 Holloway Ave, San Francisco, CA 94132 (United States); Blackstone, T [Department of Computer Science, San Francisco State University, 1600 Holloway Ave., San Francisco, CA 94132 (United States); Diao, Y [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Hinson, K [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Karadayi, E [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States); Saito, M [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States)

    2007-09-28

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (such as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of the order O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.
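
    As an illustration of the experiment these three records describe, the sketch below generates 2D uniform random polygons (vertices drawn uniformly in the unit square, joined in cyclic order) and counts crossings among non-adjacent edges; the crossings-to-n^2 ratio leveling off is consistent with the O(n^2) average stated in the abstract. This is an independent toy reconstruction, not the authors' code.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def count_crossings(poly):
        """Crossings among non-adjacent edges of a closed polygon in the plane."""
        def orient(a, b, c):
            return np.sign((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))
        n = len(poly)
        crossings = 0
        for i in range(n):
            p1, p2 = poly[i], poly[(i + 1) % n]
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:   # these edges share the wrap-around vertex
                    continue
                q1, q2 = poly[j], poly[(j + 1) % n]
                if (orient(p1, p2, q1) != orient(p1, p2, q2) and
                        orient(q1, q2, p1) != orient(q1, q2, p2)):
                    crossings += 1
        return crossings

    for n in (10, 20, 40, 80):
        avg = np.mean([count_crossings(rng.random((n, 2))) for _ in range(20)])
        print(f"n={n:3d}  mean crossings={avg:8.1f}  crossings/n^2={avg/n**2:.3f}")
    ```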

  6. Importance sampling large deviations in nonequilibrium steady states. I

    Science.gov (United States)

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T.

    2018-03-01

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.

  7. Importance sampling large deviations in nonequilibrium steady states. I.

    Science.gov (United States)

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T

    2018-03-28

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
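
    A minimal sketch of the sampling problem the paper addresses, using the biased Brownian walker it mentions: the scaled cumulant generating function psi(lambda) = (1/T) ln⟨exp(lambda A_T)⟩ is estimated by brute-force (unguided) trajectory sampling, and the collapsing effective sample size at larger |lambda| illustrates the exponentially rare events that guided methods are designed to tame. Parameters are illustrative; this is not the paper's transition path sampling or diffusion Monte Carlo implementation.

    ```python
    import numpy as np

    # Biased Brownian walker dx = v dt + sqrt(2D) dW, observable A_T = x(T) - x(0);
    # for this process the exact answer is psi(lambda) = v*lambda + D*lambda**2.
    v, D, dt, T, n_traj = 1.0, 1.0, 0.01, 10.0, 5000   # illustrative values
    rng = np.random.default_rng(1)
    steps = int(T / dt)
    inc = v * dt + np.sqrt(2 * D * dt) * rng.standard_normal((n_traj, steps))
    A = inc.sum(axis=1)                                # A_T per trajectory

    for lam in (-0.5, -0.25, 0.0, 0.25, 0.5):
        w = np.exp(lam * A)
        psi = np.log(w.mean()) / T
        # Effective sample size collapses exponentially in |lam| -- the
        # divergence that guiding functions are meant to cure.
        ess = w.sum() ** 2 / (w ** 2).sum()
        print(f"lambda={lam:+.2f}  psi_est={psi:.3f}  "
              f"psi_exact={lam * v + D * lam**2:.3f}  ESS={ess:7.1f}")
    ```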

  8. Gibbs sampling on large lattice with GMRF

    Science.gov (United States)

    Marcotte, Denis; Allard, Denis

    2018-02-01

    Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions naturally occur when associating latent Gaussian fields to category fields obtained by discrete simulation methods like multipoint, sequential indicator simulation and object-based simulation. The latent Gaussians are often used in data assimilation and history matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfactory, as it can diverge and it does not reproduce exactly the desired covariance. A better approach is to use Gaussian Markov Random Fields (GMRF), which make it possible to compute the conditional distributions at any point without having to compute and invert the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be efficiently computed by convolution and applied with an acceptance/rejection method in the truncated case. We study empirically the speed of convergence and the effects of the choice of boundary conditions, of the correlation range and of GMRF smoothness. We show that the convergence is slower in the Gaussian case on the torus than for the finite case studied in the literature. However, in the truncated Gaussian case, we show that short scale correlation is quickly restored and the conditioning categories at each lattice point imprint the long scale correlation. Hence our approach makes it practical to apply Gibbs sampling on large 2D or 3D lattices with the desired GMRF covariance.
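
    A minimal sketch of the coding-set idea on a first-order GMRF (a conditional autoregression on a periodic lattice): black and white checkerboard cells share no neighbors, so each color can be Gibbs-updated simultaneously. The authors' convolution-based updates, truncation, and acceptance/rejection step are omitted; parameter values are illustrative.

    ```python
    import numpy as np

    # First-order GMRF (conditional autoregression) on an n x n torus:
    # x_ij | neighbors ~ N(beta * mean of its 4 neighbors, tau2), with beta < 1.
    n, beta, tau2, sweeps = 64, 0.95, 1.0, 200         # illustrative values
    rng = np.random.default_rng(2)
    x = rng.standard_normal((n, n))
    ii, jj = np.indices((n, n))
    coding_sets = [((ii + jj) % 2 == c) for c in (0, 1)]   # checkerboard colors

    def neighbor_mean(x):
        return 0.25 * (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                       np.roll(x, 1, 1) + np.roll(x, -1, 1))

    for _ in range(sweeps):
        for m in coding_sets:              # same-color cells share no neighbors,
            mu = beta * neighbor_mean(x)   # so they can be updated simultaneously
            x[m] = mu[m] + np.sqrt(tau2) * rng.standard_normal(int(m.sum()))

    print(f"sample mean {x.mean():+.3f}, sample variance {x.var():.3f}")
    ```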

  9. Optimal sampling designs for large-scale fishery sample surveys in Greece

    Directory of Open Access Journals (Sweden)

    G. BAZIGOS

    2007-12-01

    The paper deals with the optimization of the following three large scale sample surveys: biological sample survey of commercial landings (BSCL), experimental fishing sample survey (EFSS), and commercial landings and effort sample survey (CLES).

  10. The large sample size fallacy.

    Science.gov (United States)

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
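
    A small sketch of the fallacy in action, assuming a two-sample t test on synthetic data: with a trivially small true effect (Cohen's d = 0.02), the p-value becomes arbitrarily extreme as n grows while the effect size stays negligible.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    d_true = 0.02                       # trivially small true effect (Cohen's d)
    for n in (100, 10_000, 1_000_000):
        a = rng.standard_normal(n)
        b = rng.standard_normal(n) + d_true
        t, p = stats.ttest_ind(a, b)
        # pooled-SD effect size reported alongside the p-value
        d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        print(f"n={n:>9,}  p={p:9.2e}  observed d={d:+.3f}")
    ```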

  11. Sampling Large Graphs for Anticipatory Analytics

    Science.gov (United States)

    2015-05-15

    C. Random Area Sampling: Random area sampling [8] is a "snowball" sampling method in which a set of random seed vertices are selected and areas... Sampling Large Graphs for Anticipatory Analytics, Lauren Edwards, Luke Johnson, Maja Milosavljevic, Vijay Gadepally, Benjamin A. Miller, Lincoln... systems, greater human-in-the-loop involvement, or through complex algorithms. We are investigating the use of sampling to mitigate these challenges

  12. Payload specialist Reinhard Furrer show evidence of previous blood sampling

    Science.gov (United States)

    1985-01-01

    Payload specialist Reinhard Furrer shows evidence of previous blood sampling while Wubbo J. Ockels, Dutch payload specialist (only partially visible), extends his right arm after a sample has been taken. Both men show bruises on their arms.

  13. Procedure for plutonium analysis of large (100g) soil and sediment samples

    International Nuclear Information System (INIS)

    Meadows, J.W.T.; Schweiger, J.S.; Mendoza, B.; Stone, R.

    1975-01-01

    A method for the complete dissolution of large soil or sediment samples is described. This method is in routine usage at Lawrence Livermore Laboratory for the analysis of fall-out levels of Pu in soils and sediments. Intercomparison with partial dissolution (leach) techniques shows the complete dissolution method to be superior for the determination of plutonium in a wide variety of environmental samples. (author)

  14. Fast concentration of dissolved forms of cesium radioisotopes from large seawater samples

    International Nuclear Information System (INIS)

    Jan Kamenik; Henrieta Dulaiova; Ferdinand Sebesta; Kamila St'astna; Czech Technical University, Prague

    2013-01-01

    The method developed for cesium concentration from large freshwater samples was tested and adapted for the analysis of cesium radionuclides in seawater. Concentration of dissolved forms of cesium in large seawater samples (about 100 L) was performed using the composite absorbers AMP-PAN and KNiFC-PAN, with ammonium molybdophosphate and potassium–nickel hexacyanoferrate(II) as active components, respectively, and polyacrylonitrile as a binding polymer. A specially designed chromatography column with a bed volume (BV) of 25 mL allowed fast flow rates of seawater (up to 1,200 BV h⁻¹). The recovery yields were determined by ICP-MS analysis of stable cesium added to the seawater sample. Both absorbers proved usable for cesium concentration from large seawater samples. The KNiFC-PAN material was slightly more effective in cesium concentration from acidified seawater (recovery yield around 93% for 700 BV h⁻¹). This material showed similar efficiency in cesium concentration from natural seawater. The activity concentrations of ¹³⁷Cs determined in seawater from the central Pacific Ocean were 1.5 ± 0.1 and 1.4 ± 0.1 Bq m⁻³ for an offshore (January 2012) and a coastal (February 2012) locality, respectively; ¹³⁴Cs activities were below the detection limit. (author)

  15. Analysis of large soil samples for actinides

    Science.gov (United States)

    Maxwell, Sherrod L., III [Aiken, SC]

    2009-03-24

    A method of analyzing relatively large soil samples for actinides employs a separation process that includes cerium fluoride precipitation: the soil matrix is removed, plutonium, americium, and curium are precipitated with cerium and hydrofluoric acid, and these actinides are then separated using chromatography cartridges.

  16. A spinner magnetometer for large Apollo lunar samples

    Science.gov (United States)

    Uehara, M.; Gattacceca, J.; Quesnel, Y.; Lepaulard, C.; Lima, E. A.; Manfredi, M.; Rochette, P.

    2017-10-01

    We developed a spinner magnetometer to measure the natural remanent magnetization of large Apollo lunar rocks in the storage vault of the Lunar Sample Laboratory Facility (LSLF) of NASA. The magnetometer mainly consists of a commercially available three-axial fluxgate sensor and a hand-rotating sample table with an optical encoder recording the rotation angles. The distance between the sample and the sensor is adjustable according to the sample size and magnetization intensity. The sensor and the sample are placed in a two-layer mu-metal shield to measure the sample natural remanent magnetization. The magnetic signals are acquired together with the rotation angle to obtain stacking of the measured signals over multiple revolutions. The developed magnetometer has a sensitivity of 5 × 10⁻⁷ Am² at the standard sensor-to-sample distance of 15 cm. This sensitivity is sufficient to measure the natural remanent magnetization of almost all the lunar basalt and breccia samples with mass above 10 g in the LSLF vault.

  17. A spinner magnetometer for large Apollo lunar samples.

    Science.gov (United States)

    Uehara, M; Gattacceca, J; Quesnel, Y; Lepaulard, C; Lima, E A; Manfredi, M; Rochette, P

    2017-10-01

    We developed a spinner magnetometer to measure the natural remanent magnetization of large Apollo lunar rocks in the storage vault of the Lunar Sample Laboratory Facility (LSLF) of NASA. The magnetometer mainly consists of a commercially available three-axial fluxgate sensor and a hand-rotating sample table with an optical encoder recording the rotation angles. The distance between the sample and the sensor is adjustable according to the sample size and magnetization intensity. The sensor and the sample are placed in a two-layer mu-metal shield to measure the sample natural remanent magnetization. The magnetic signals are acquired together with the rotation angle to obtain stacking of the measured signals over multiple revolutions. The developed magnetometer has a sensitivity of 5 × 10⁻⁷ Am² at the standard sensor-to-sample distance of 15 cm. This sensitivity is sufficient to measure the natural remanent magnetization of almost all the lunar basalt and breccia samples with mass above 10 g in the LSLF vault.

  18. Study of a large rapid ashing apparatus and a rapid dry ashing method for biological samples and its application

    International Nuclear Information System (INIS)

    Jin Meisun; Wang Benli; Liu Wencang

    1988-04-01

    A large rapid dry-ashing apparatus and a rapid ashing method for biological samples are described. The apparatus consists of a specially made ashing furnace, a gas supply system and a temperature-programming control cabinet. Ashing experiments with the apparatus showed the following advantages: (1) high ashing speed and low consumption of electric energy; (2) a large number of samples can be ashed at a time; (3) the ashed sample is pure white (or spotless), loose and easily soluble, with little residual char; (4) fresh samples can also be ashed directly. The apparatus is suitable for ashing large quantities of environmental samples containing trace elements at low levels of radioactivity, as well as medical, food and agricultural research samples.

  19. Large Sample Neutron Activation Analysis: A Challenge in Cultural Heritage Studies

    International Nuclear Information System (INIS)

    Stamatelatos, I.E.; Tzika, F.

    2007-01-01

    Large sample neutron activation analysis complements and significantly extends the analytical tools available for cultural heritage and authentication studies, providing unique capabilities: non-destructive, multi-element analysis of materials that are too precious to damage for sampling purposes, representative sampling of heterogeneous materials, or even analysis of whole objects. In this work, correction factors for neutron self-shielding, gamma-ray attenuation and volume distribution of the activity in large volume samples composed of iron and ceramic material were derived. Moreover, the effect of inhomogeneity on the accuracy of the technique was examined.
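
    A hedged arithmetic sketch of how such multiplicative correction factors typically enter an LSNAA result; the factor names and values below are illustrative assumptions, not the paper's derived values.

    ```python
    # All names and values below are illustrative assumptions, not the paper's data.
    A_measured = 1.20e4        # net peak count rate (counts/s) from the large sample
    f_self_shield = 0.82       # neutron self-shielding: flux seen / unperturbed flux
    f_attenuation = 0.74       # gamma self-attenuation: detected / emitted fraction
    f_geometry = 0.91          # volume-source vs point-source efficiency ratio

    # Equivalent unperturbed point-source rate used in the concentration calculation:
    A_corrected = A_measured / (f_self_shield * f_attenuation * f_geometry)
    print(f"corrected rate: {A_corrected:.3e} counts/s")
    ```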

  20. Development of Large Sample Neutron Activation Technique for New Applications in Thailand

    International Nuclear Information System (INIS)

    Laoharojanaphand, S.; Tippayakul, C.; Wonglee, S.; Channuie, J.

    2018-01-01

    The development of Large Sample Neutron Activation Analysis (LSNAA) in Thailand is presented in this paper. The technique was first developed with rice as the test sample, using the Thai Research Reactor-1/Modification 1 (TRR-1/M1) as the neutron source. The first step was to select and characterize an appropriate irradiation facility for the research. An out-core irradiation facility (A4 position) was attempted first, and the results obtained with it were then used as a guide for the subsequent experiments with the thermal column facility. The characterization of the thermal column was performed with Cu wire to determine the spatial flux distribution with and without a rice sample. The flux depression without the rice sample was observed to be less than 30%, while the flux depression with the rice sample increased to as much as 60%. Flux monitors internal to the rice sample were used to determine the average flux over the sample. The gamma self-shielding effect during gamma measurement was corrected using Monte Carlo simulation: the ratio between the efficiencies of the volume source and the point source for each energy point was calculated with the MCNPX code. The research team adopted the k0-NAA methodology to calculate element concentrations. The k0-NAA program developed by the IAEA was set up to simulate the conditions of the irradiation and measurement facilities used in this research. The element concentrations in the bulk rice sample were then calculated taking into account the flux depression and gamma efficiency corrections. At the moment, the results still show large discrepancies with the reference values; however, further validation work will be performed to identify the sources of error. Moreover, this LSNAA technique was applied to the activation analysis of the IAEA archaeological mock-up. The results are provided in this report. (author)

  1. A review of methods for sampling large airborne particles and associated radioactivity

    International Nuclear Information System (INIS)

    Garland, J.A.; Nicholson, K.W.

    1990-01-01

    Radioactive particles, tens of μm or more in diameter, are unlikely to be emitted directly from nuclear facilities with exhaust gas cleansing systems, but may arise in the case of an accident or where resuspension from contaminated surfaces is significant. Such particles may dominate deposition and, according to some workers, may contribute to inhalation doses. Quantitative sampling of large airborne particles is difficult because of their inertia and large sedimentation velocities. The literature describes conditions for unbiased sampling and the magnitude of sampling errors for idealised sampling inlets in steady winds. However, few air samplers for outdoor use have been assessed for adequacy of sampling. Many size selective sampling methods are found in the literature but few are suitable at the low concentrations that are often encountered in the environment. A number of approaches for unbiased sampling of large particles have been found in the literature. Some are identified as meriting further study, for application in the measurement of airborne radioactivity. (author)
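
    The sedimentation velocities the review refers to can be estimated, in the low Reynolds number regime, with the Stokes formula v_s = (rho_p - rho_a) g d^2 / (18 mu); the sketch below applies it to a few particle diameters. The density and air properties are assumed values, and the largest diameter shown already strains the Stokes assumption.

    ```python
    def stokes_settling_velocity(d, rho_p, rho_a=1.2, mu=1.8e-5, g=9.81):
        """Terminal settling velocity (m/s) of a sphere of diameter d (m) in air,
        valid only in the Stokes (low Reynolds number) regime."""
        return (rho_p - rho_a) * g * d ** 2 / (18.0 * mu)

    # Assumed mineral-like particle density of 2500 kg/m3; at 100 um the Reynolds
    # number is already of order 5, so the Stokes value is only a rough guide there.
    for d_um in (1, 10, 30, 100):
        v = stokes_settling_velocity(d_um * 1e-6, rho_p=2500.0)
        print(f"d = {d_um:>3} um  ->  v_s = {100 * v:.4f} cm/s")
    ```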

  2. A fast learning method for large scale and multi-class samples of SVM

    Science.gov (United States)

    Fan, Yu; Guo, Huiming

    2017-06-01

    A fast learning method for multi-class SVM (Support Vector Machine) classification, based on a binary tree, is presented to address the low learning efficiency of SVMs when processing large-scale multi-class samples. A bottom-up method is adopted to build the binary-tree hierarchy; according to this hierarchy, a sub-classifier learns from the samples corresponding to each node. During learning, several class clusters are generated by a first clustering of the training samples. Central points are extracted from those clusters that contain only one type of sample. For clusters containing two types of samples, the numbers of clusters for the positive and negative samples are set according to their degree of mixture, a secondary clustering is performed, and central points are then extracted from the resulting sub-class clusters. Sub-classifiers are obtained by learning from the reduced sample set formed by the extracted central points. Simulation experiments show that this fast learning method, based on multi-level clustering, maintains high classification accuracy while greatly reducing the number of samples and effectively improving learning efficiency.
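
    A sketch of the reduction step only, on synthetic data with scikit-learn: each class is replaced by its k-means cluster centers and the SVM is trained on the much smaller reduced set. The paper's bottom-up binary-tree hierarchy and mixture-degree logic are omitted, and the reduction ratio is an assumption.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_classification
    from sklearn.svm import SVC

    # Replace each class's samples by k-means cluster centers, then train the SVM
    # on the much smaller reduced set. The reduction ratio (one center per ~200
    # samples) is an arbitrary illustrative choice.
    X, y = make_classification(n_samples=20000, n_features=10, n_informative=6,
                               n_classes=3, n_clusters_per_class=2, random_state=0)

    centers, labels = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        k = max(1, len(Xc) // 200)
        km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(Xc)
        centers.append(km.cluster_centers_)
        labels.append(np.full(k, c))

    X_red, y_red = np.vstack(centers), np.concatenate(labels)
    clf = SVC(kernel="rbf").fit(X_red, y_red)      # ~100 points instead of 20000
    print("accuracy on the full set:", round(clf.score(X, y), 3))
    ```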

  3. Feasibility studies on large sample neutron activation analysis using a low power research reactor

    International Nuclear Information System (INIS)

    Gyampo, O.

    2008-06-01

    Instrumental neutron activation analysis (INAA) using the Ghana Research Reactor-1 (GHARR-1) can be applied directly to samples with masses of several grams. Sample weights were in the range of 0.5 g to 5 g, so the representativity of the sample is improved, as well as the sensitivity. Irradiation of samples was done using a low power research reactor. The correction for neutron self-shielding within the sample is determined from measurement of the neutron flux depression just outside the sample. Correction for gamma-ray self-attenuation in the sample was performed via linear attenuation coefficients derived from transmission measurements. Quantitative and qualitative analysis of the data was done using gamma-ray spectrometry (HPGe detector). The results of this study on the possibilities of large sample NAA using a miniature neutron source reactor (MNSR) show clearly that the Ghana Research Reactor-1 (GHARR-1) at the National Nuclear Research Institute (NNRI) can be used to analyze samples of up to 5 g using the pneumatic transfer systems.
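
    The transmission-based attenuation correction mentioned in the abstract reduces to simple arithmetic; the sketch below assumes illustrative count rates and a slab-shaped sample, for which a standard closed-form self-attenuation factor exists.

    ```python
    import math

    # Transmission measurement: count rate with the sample between source and
    # detector (I) vs. without it (I0), for sample thickness x. Invented numbers.
    I0, I, x = 5400.0, 3100.0, 0.04          # counts/s, counts/s, m
    mu = -math.log(I / I0) / x               # linear attenuation coefficient (1/m)

    # Self-attenuation factor for activity distributed uniformly through a slab
    # viewed through its thickness (standard closed form):
    f_att = (1 - math.exp(-mu * x)) / (mu * x)
    print(f"mu = {mu:.1f} 1/m   slab self-attenuation factor = {f_att:.3f}")
    ```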

  4. 105-DR Large Sodium Fire Facility decontamination, sampling, and analysis plan

    International Nuclear Information System (INIS)

    Knaus, Z.C.

    1995-01-01

    This is the decontamination, sampling, and analysis plan for the closure activities at the 105-DR Large Sodium Fire Facility at the Hanford Reservation. This document supports the 105-DR Large Sodium Fire Facility Closure Plan, DOE-RL-90-25. The 105-DR LSFF, which operated from about 1972 to 1986, was a research laboratory that occupied the former ventilation supply room on the southwest side of the 105-DR Reactor facility in the 100-D Area of the Hanford Site. The LSFF was established to investigate fire fighting and safety associated with alkali metal fires in liquid metal fast breeder reactor facilities. The decontamination, sampling, and analysis plan identifies the decontamination procedures, sampling locations, any special handling requirements, quality control samples, required chemical analyses, and data validation needed to meet the requirements of the 105-DR Large Sodium Fire Facility Closure Plan in compliance with the Resource Conservation and Recovery Act.

  5. Multivariate statistics high-dimensional and large-sample approximations

    CERN Document Server

    Fujikoshi, Yasunori; Shimizu, Ryoichi

    2010-01-01

    A comprehensive examination of high-dimensional analysis of multivariate methods and their real-world applications Multivariate Statistics: High-Dimensional and Large-Sample Approximations is the first book of its kind to explore how classical multivariate methods can be revised and used in place of conventional statistical tools. Written by prominent researchers in the field, the book focuses on high-dimensional and large-scale approximations and details the many basic multivariate methods used to achieve high levels of accuracy. The authors begin with a fundamental presentation of the basic

  6. Tracing the trajectory of skill learning with a very large sample of online game players.

    Science.gov (United States)

    Stafford, Tom; Dewar, Michael

    2014-02-01

    In the present study, we analyzed data from a very large sample (N = 854,064) of players of an online game involving rapid perception, decision making, and motor responding. Use of game data allowed us to connect, for the first time, rich details of training history with measures of performance from participants engaged for a sustained amount of time in effortful practice. We showed that lawful relations exist between practice amount and subsequent performance, and between practice spacing and subsequent performance. Our methodology allowed an in situ confirmation of results long established in the experimental literature on skill acquisition. Additionally, we showed that greater initial variation in performance is linked to higher subsequent performance, a result we link to the exploration/exploitation trade-off from the computational framework of reinforcement learning. We discuss the benefits and opportunities of behavioral data sets with very large sample sizes and suggest that this approach could be particularly fecund for studies of skill acquisition.
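
    A sketch of the kind of practice-performance relation the study quantifies, assuming the classic power law of practice and synthetic data; the exponent is recovered by least squares in log-log space.

    ```python
    import numpy as np

    # Synthetic practice curve: performance = a * practice^b with lognormal noise.
    rng = np.random.default_rng(4)
    practice = np.arange(1, 201)                         # e.g. games played
    perf = 50 * practice ** 0.15 * np.exp(0.05 * rng.standard_normal(practice.size))

    # Least-squares fit in log-log space recovers the exponent.
    b, log_a = np.polyfit(np.log(practice), np.log(perf), 1)
    print(f"estimated exponent b = {b:.3f}, scale a = {np.exp(log_a):.1f}")
    ```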

  7. Large sample NAA facility and methodology development

    International Nuclear Information System (INIS)

    Roth, C.; Gugiu, D.; Barbos, D.; Datcu, A.; Aioanei, L.; Dobrea, D.; Taroiu, I. E.; Bucsa, A.; Ghinescu, A.

    2013-01-01

    A Large Sample Neutron Activation Analysis (LSNAA) facility has been developed at the TRIGA Annular Core Pulsed Reactor (ACPR) operated by the Institute for Nuclear Research in Pitesti, Romania. The central irradiation cavity of the ACPR core can accommodate a large irradiation device. The ACPR neutron flux characteristics are well known and spectrum adjustment techniques have been successfully applied to enhance the thermal component of the neutron flux in the central irradiation cavity. An analysis methodology was developed by using the MCNP code in order to estimate counting efficiency and correction factors for the major perturbing phenomena. Test experiments, comparison with classical instrumental neutron activation analysis (INAA) methods and an international inter-comparison exercise have been performed to validate the new methodology. (authors)

  8. Sample preparation method for ICP-MS measurement of 99Tc in a large amount of environmental samples

    International Nuclear Information System (INIS)

    Kondo, M.; Seki, R.

    2002-01-01

    Sample preparation for the measurement of ⁹⁹Tc in large soil and water samples by ICP-MS has been developed using ⁹⁵ᵐTc as a yield tracer. The method is based on the conventional method for small soil samples using incineration, acid digestion, extraction chromatography (TEVA resin) and ICP-MS measurement. A preconcentration of Tc by co-precipitation with ferric oxide has been introduced. Matrix materials in large samples were removed more effectively than in the previous method while maintaining a high recovery of Tc. The recovery of Tc was 70-80% for 100 g soil samples and 60-70% for 500 g soil and 500 L water samples. The detection limit of this method was evaluated as 0.054 mBq/kg for 500 g of soil and 0.032 μBq/L for 500 L of water. The value of ⁹⁹Tc determined in IAEA-375 (a soil sample collected near the Chernobyl Nuclear Reactor) was 0.25 ± 0.02 Bq/kg. (author)

  9. Utilization of AHWR critical facility for research and development work on large sample NAA

    International Nuclear Information System (INIS)

    Acharya, R.; Dasari, K.B.; Pujari, P.K.; Swain, K.K.; Reddy, A.V.R.; Verma, S.K.; De, S.K.

    2014-01-01

    The graphite reflector position of the AHWR critical facility (CF) was utilized for the analysis of large (g to kg scale) samples using internal monostandard neutron activation analysis (IM-NAA). The reactor position was characterized by the cadmium ratio method, using an In monitor, in terms of total flux and the subcadmium-to-epithermal flux ratio (f). Large sample neutron activation analysis (LSNAA) work was carried out for samples of stainless steel, ancient and new clay potteries and dross. Large as well as non-standard geometry samples (1 g to 0.5 kg) were irradiated. Radioactive assay was carried out using high resolution gamma-ray spectrometry. Concentration ratios obtained by IM-NAA were used for a provenance study of 30 clay potteries obtained from excavated Buddhist sites of AP, India. Concentrations of Au and Ag were determined in three large, rather inhomogeneous samples of dross. An X-Z rotary scanning unit has been installed for counting large and inhomogeneous samples. (author)

  10. Statistical characterization of a large geochemical database and effect of sample size

    Science.gov (United States)

    Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.

    2005-01-01

    smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests, and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histogram, stem-and-leaf, and probability plots are recommended for rough judgement of probability distribution if needed. ?? 2005 Elsevier Ltd. All rights reserved.
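
    The sample-size effect on normality testing is easy to reproduce; the sketch below assumes a mildly contaminated normal sample and shows the D'Agostino test p-value collapsing as n grows, so the test rejects for large n even though the departure is small.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    def mildly_non_normal(n):
        """95% standard normal, 5% shifted by one SD: a small, real departure."""
        z = rng.standard_normal(n)
        z[: n // 20] += 1.0
        return z

    for n in (100, 1000, 10_000, 100_000):
        p = stats.normaltest(mildly_non_normal(n)).pvalue
        print(f"n={n:>7}  D'Agostino normality test p = {p:.3g}")
    ```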

  11. Exploring Technostress: Results of a Large Sample Factor Analysis

    OpenAIRE

    Jonušauskas, Steponas; Raišienė, Agota Giedrė

    2016-01-01

    With reference to the results of a large sample factor analysis, the article aims to propose a framework for examining technostress in a population. The survey and principal component analysis of a sample consisting of 1013 individuals who use ICT in their everyday work were implemented in the research. 13 factors combine 68 questions and explain 59.13 per cent of the dispersion of the answers. Based on the factor analysis, the questionnaire was reframed and prepared to reasonably analyze the respondents' an...

  12. 105-DR Large sodium fire facility soil sampling data evaluation report

    International Nuclear Information System (INIS)

    Adler, J.G.

    1996-01-01

    This report evaluates the soil sampling activities, soil sample analysis, and soil sample data associated with the closure activities at the 105-DR Large Sodium Fire Facility. The evaluation compares these activities to the regulatory requirements for meeting clean closure. The report concludes that there is no soil contamination from the waste treatment activities.

  13. Associations between sociodemographic, sampling and health factors and various salivary cortisol indicators in a large sample without psychopathology

    NARCIS (Netherlands)

    Vreeburg, Sophie A.; Kruijtzer, Boudewijn P.; van Pelt, Johannes; van Dyck, Richard; DeRijk, Roel H.; Hoogendijk, Witte J. G.; Smit, Johannes H.; Zitman, Frans G.; Penninx, Brenda

    Background: Cortisol levels are increasingly often assessed in large-scale psychosomatic research. Although determinants of different salivary cortisol indicators have been described, they have not yet been systematically studied within the same study with a large sample size. Sociodemographic,

  14. A course in mathematical statistics and large sample theory

    CERN Document Server

    Bhattacharya, Rabi; Patrangenaru, Victor

    2016-01-01

    This graduate-level textbook is primarily aimed at graduate students of statistics, mathematics, science, and engineering who have had an undergraduate course in statistics, an upper division course in analysis, and some acquaintance with measure theoretic probability. It provides a rigorous presentation of the core of mathematical statistics. Part I of this book constitutes a one-semester course on basic parametric mathematical statistics. Part II deals with the large sample theory of statistics, parametric and nonparametric, and its contents may be covered in one semester as well. Part III provides brief accounts of a number of topics of current interest for practitioners and other disciplines whose work involves statistical methods. Large sample theory is presented with many worked examples, numerical calculations, and simulations to illustrate the theory. Appendices provide ready access to a number of standard results, with many proofs. Solutions are given to a number of selected exercises from Part I. Part II exercises with ...

  15. Field sampling, preparation procedure and plutonium analyses of large freshwater samples

    International Nuclear Information System (INIS)

    Straelberg, E.; Bjerk, T.O.; Oestmo, K.; Brittain, J.E.

    2002-01-01

    This work is part of an investigation of the mobility of plutonium in freshwater systems containing humic substances. A well-defined bog-stream system located in the catchment area of a subalpine lake, Oevre Heimdalsvatn, Norway, is being studied. During the summer of 1999, six water samples were collected from the tributary stream Lektorbekken and the lake itself. However, the analyses showed that the plutonium concentration was below the detection limit in all the samples. Therefore renewed sampling at the same sites was carried out in August 2000. The results so far are in agreement with previous analyses from the Heimdalen area. However, 100 times higher concentrations are found in the lowlands in the eastern part of Norway. The reason for this is not understood, but may be caused by differences in the concentrations of humic substances and/or the fact that the mountain areas are covered with snow for a longer period of time every year. (LN)

  16. Double sampling with multiple imputation to answer large sample meta-research questions: Introduction and illustration by evaluating adherence to two simple CONSORT guidelines

    Directory of Open Access Journals (Sweden)

    Patrice L. Capers

    2015-03-01

    BACKGROUND: Meta-research can involve manual retrieval and evaluation of research, which is resource intensive. Creation of high throughput methods (e.g., search heuristics, crowdsourcing) has improved the feasibility of large meta-research questions, but possibly at the cost of accuracy. OBJECTIVE: To evaluate the use of double sampling combined with multiple imputation (DS+MI) to address meta-research questions, using as an example the adherence of PubMed entries to two simple Consolidated Standards of Reporting Trials (CONSORT) guidelines for titles and abstracts. METHODS: For the DS large sample, we retrieved all PubMed entries satisfying the filters: RCT; human; abstract available; and English language (n=322,107). For the DS subsample, we randomly sampled 500 entries from the large sample. The large sample was evaluated with a lower rigor, higher throughput (RLOTHI) method using search heuristics, while the subsample was evaluated using a higher rigor, lower throughput (RHITLO) human rating method. Multiple imputation of the missing-completely-at-random RHITLO data for the large sample was informed by: RHITLO data from the subsample; RLOTHI data from the large sample; whether a study was an RCT; and country and year of publication. RESULTS: The RHITLO and RLOTHI methods in the subsample largely agreed (phi coefficients: title=1.00, abstract=0.92). Compliance with abstract and title criteria has increased over time, with non-US countries improving more rapidly. DS+MI logistic regression estimates were more precise than subsample estimates (e.g., 95% CI for change in title and abstract compliance by year: subsample RHITLO 1.050-1.174 vs. DS+MI 1.082-1.151). As evidence of improved accuracy, DS+MI coefficient estimates were closer to RHITLO than the large sample RLOTHI. CONCLUSIONS: Our results support our hypothesis that DS+MI results in improved precision and accuracy. This method is flexible and may provide a practical way to examine large corpora of
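
    A toy reconstruction of the DS+MI idea on synthetic data: every record carries a cheap, noisy (RLOTHI-style) label, only a random subsample carries the rigorous (RHITLO-style) label, an imputation model is fitted on the subsample, and the estimate is pooled over multiple imputations. All numbers and the logistic imputation model are assumptions for illustration.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Synthetic setup: every record has a cheap, noisy RLOTHI-style label; only a
    # random subsample of n_sub records also has the rigorous RHITLO-style label.
    rng = np.random.default_rng(9)
    N, n_sub, M = 100_000, 500, 20
    truth = rng.random(N) < 0.6                           # true compliance status
    rlothi = np.where(rng.random(N) < 0.9, truth, ~truth) # 90%-accurate heuristic
    sub = rng.choice(N, n_sub, replace=False)             # double-sampled subset

    # Imputation model fitted on the subsample, where both labels are observed.
    model = LogisticRegression().fit(rlothi[sub, None].astype(float), truth[sub])
    p = model.predict_proba(rlothi[:, None].astype(float))[:, 1]

    # M stochastic imputations of the unobserved labels; pool the estimates.
    observed = np.zeros(N, dtype=bool); observed[sub] = True
    estimates = []
    for _ in range(M):
        imputed = np.where(observed, truth, rng.random(N) < p)
        estimates.append(imputed.mean())
    print(f"DS+MI estimate: {np.mean(estimates):.3f}  (true rate {truth.mean():.3f})")
    ```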

  17. Gaussian vs. Bessel light-sheets: performance analysis in live large sample imaging

    Science.gov (United States)

    Reidt, Sascha L.; Correia, Ricardo B. C.; Donnachie, Mark; Weijer, Cornelis J.; MacDonald, Michael P.

    2017-08-01

    Lightsheet fluorescence microscopy (LSFM) has rapidly progressed in the past decade from an emerging technology into an established methodology. This progress has largely been driven by its suitability to developmental biology, where it is able to give excellent spatial-temporal resolution over relatively large fields of view with good contrast and low phototoxicity. In many respects it is superseding confocal microscopy. However, it is no magic bullet and still struggles to image deeply in more highly scattering samples. Many solutions to this challenge have been presented, including, Airy and Bessel illumination, 2-photon operation and deconvolution techniques. In this work, we show a comparison between a simple but effective Gaussian beam illumination and Bessel illumination for imaging in chicken embryos. Whilst Bessel illumination is shown to be of benefit when a greater depth of field is required, it is not possible to see any benefits for imaging into the highly scattering tissue of the chick embryo.

  18. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    Science.gov (United States)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.

  19. Sampling strategy for a large scale indoor radiation survey - a pilot project

    International Nuclear Information System (INIS)

    Strand, T.; Stranden, E.

    1986-01-01

    Optimisation of a stratified random sampling strategy for large scale indoor radiation surveys is discussed. It is based on the results from a small scale pilot project where variances in dose rates within different categories of houses were assessed. By selecting a predetermined precision level for the mean dose rate in a given region, the number of measurements needed can be optimised. The results of a pilot project in Norway are presented together with the development of the final sampling strategy for a planned large scale survey. (author)
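
    The optimisation step described here is commonly done with Neyman allocation, which minimises the variance of the stratified mean for a fixed measurement budget by allocating n_h proportionally to N_h S_h; the sketch below uses invented pilot-survey numbers.

    ```python
    import numpy as np

    # Pilot-survey inputs per house stratum: sizes N_h and estimated dose-rate
    # standard deviations S_h (invented numbers for illustration).
    N = np.array([12000, 54000, 8000, 26000])   # houses per category
    S = np.array([15.0, 8.0, 30.0, 12.0])       # nGy/h
    n_total = 800                               # total measurement budget

    n_h = n_total * (N * S) / np.sum(N * S)     # Neyman allocation
    print("measurements per stratum:", np.round(n_h).astype(int))
    ```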

  20. An open-flow pulse ionization chamber for alpha spectrometry of large-area samples

    International Nuclear Information System (INIS)

    Johansson, L.; Roos, B.; Samuelsson, C.

    1992-01-01

    The open-flow pulse ionization chamber presented here was developed to make alpha spectrometry of large-area surfaces easy. One side of the chamber is left open, where the sample is to be placed. The sample acts as a chamber wall and thereby defines the detector volume. The sample area can be as large as 400 cm². To prevent air from entering the volume, there is a constant gas flow through the detector, entering at the bottom of the chamber and leaking out at the sides of the sample. The method gives good energy resolution and has considerable applicability in retrospective radon research. Alpha spectra obtained in the retrospective measurements originate from ²¹⁰Po, built up in the sample from radon daughters recoiled into the glass surface. (au)

  1. A self-sampling method to obtain large volumes of undiluted cervicovaginal secretions.

    Science.gov (United States)

    Boskey, Elizabeth R; Moench, Thomas R; Hees, Paul S; Cone, Richard A

    2003-02-01

    Studies of vaginal physiology and pathophysiology sometimes require larger volumes of undiluted cervicovaginal secretions than can be obtained by current methods. A convenient method for self-sampling these secretions outside a clinical setting can facilitate such studies of reproductive health. The goal was to develop a vaginal self-sampling method for collecting large volumes of undiluted cervicovaginal secretions. A menstrual collection device (the Instead cup) was inserted briefly into the vagina to collect secretions that were then retrieved from the cup by centrifugation in a 50-ml conical tube. All 16 women asked to perform this procedure found it feasible and acceptable. Among 27 samples, an average of 0.5 g of secretions (range, 0.1-1.5 g) was collected. This is a rapid and convenient self-sampling method for obtaining relatively large volumes of undiluted cervicovaginal secretions. It should prove suitable for a wide range of assays, including those involving sexually transmitted diseases, microbicides, vaginal physiology, immunology, and pathophysiology.

  2. Sample preparation for large-scale bioanalytical studies based on liquid chromatographic techniques.

    Science.gov (United States)

    Medvedovici, Andrei; Bacalum, Elena; David, Victor

    2018-01-01

    The quality of the analytical data obtained in large-scale and long-term bioanalytical studies based on liquid chromatography depends on a number of experimental factors, including the choice of sample preparation method. This review discusses this tedious part of bioanalytical studies as applied to large-scale sample sets, with liquid chromatography coupled to different detector types as the core analytical technique. The main sample preparation methods covered in this paper are protein precipitation, liquid-liquid extraction, solid-phase extraction, derivatization and their variants. They are discussed in terms of analytical performance, fields of application, advantages and disadvantages. The cited literature covers mainly the analytical achievements of the last decade, although several earlier papers that have become more valuable over time are also included in this review. Copyright © 2017 John Wiley & Sons, Ltd.

  3. Gene coexpression measures in large heterogeneous samples using count statistics.

    Science.gov (United States)

    Wang, Y X Rachel; Waterman, Michael S; Huang, Haiyan

    2014-11-18

    With the advent of high-throughput technologies making large-scale gene expression data readily available, developing appropriate computational tools to process these data and distill insights into systems biology has been an important part of the "big data" challenge. Gene coexpression is one of the earliest techniques developed that is still widely in use for functional annotation, pathway analysis, and, most importantly, the reconstruction of gene regulatory networks, based on gene expression data. However, most coexpression measures do not specifically account for local features in expression profiles. For example, it is very likely that the patterns of gene association may change or only exist in a subset of the samples, especially when the samples are pooled from a range of experiments. We propose two new gene coexpression statistics based on counting local patterns of gene expression ranks to take into account the potentially diverse nature of gene interactions. In particular, one of our statistics is designed for time-course data with local dependence structures, such as time series coupled over a subregion of the time domain. We provide asymptotic analysis of their distributions and power, and evaluate their performance against a wide range of existing coexpression measures on simulated and real data. Our new statistics are fast to compute, robust against outliers, and show comparable and often better general performance.

  4. Uncertainty budget in internal monostandard NAA for small and large size samples analysis

    International Nuclear Information System (INIS)

    Dasari, K.B.; Acharya, R.

    2014-01-01

    Evaluation of the total uncertainty budget on a determined concentration value is important under a quality assurance programme. Concentration calculation in NAA is carried out by relative NAA or by the k0-based internal monostandard NAA (IM-NAA) method. The IM-NAA method has been used for the analysis of small and large samples of clay pottery. An attempt was made to identify the uncertainty components in IM-NAA, and the uncertainty budget for La in both small and large samples was evaluated and compared. (author)
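
    A hedged sketch of how a combined uncertainty budget is typically assembled: independent relative components added in quadrature, per standard GUM practice. The component list and values are illustrative assumptions, not the paper's budget.

    ```python
    import math

    # Illustrative relative standard uncertainties (%) of typical IM-NAA components.
    components = {
        "counting statistics (peak areas)": 1.8,
        "k0 and other nuclear data": 2.0,
        "flux parameters (f, alpha)": 1.5,
        "efficiency ratio (Monte Carlo)": 2.5,
        "gamma self-attenuation correction": 1.2,
    }
    u_c = math.sqrt(sum(u ** 2 for u in components.values()))
    print(f"combined: {u_c:.2f}%   expanded (k=2): {2 * u_c:.2f}%")
    ```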

  5. Efficient inference of population size histories and locus-specific mutation rates from large-sample genomic variation data.

    Science.gov (United States)

    Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S

    2015-02-01

    With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.
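
    A minimal sketch of the simplest instance of SFS-based inference the paper generalises: under a constant-size neutral coalescent the expected frequency spectrum is E[xi_i] = theta/i, and the Poisson maximum-likelihood fit recovers a Watterson-type estimator. Data below are synthetic.

    ```python
    import numpy as np

    # Constant-size neutral coalescent: E[xi_i] = theta / i for i = 1..n-1.
    rng = np.random.default_rng(8)
    n, theta_true = 50, 40.0
    expected = theta_true / np.arange(1, n)
    sfs = rng.poisson(expected)                 # synthetic observed spectrum

    # Poisson maximum likelihood gives theta_hat = S / a_n (Watterson-type),
    # where S is the total number of segregating sites and a_n = sum 1/i.
    a_n = np.sum(1.0 / np.arange(1, n))
    theta_hat = sfs.sum() / a_n
    print(f"theta_hat = {theta_hat:.1f}  (true value {theta_true})")
    ```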

  6. Exploring Technostress: Results of a Large Sample Factor Analysis

    Directory of Open Access Journals (Sweden)

    Steponas Jonušauskas

    2016-06-01

    With reference to the results of a large sample factor analysis, the article aims to propose a framework for examining technostress in a population. The survey and principal component analysis of a sample consisting of 1013 individuals who use ICT in their everyday work were implemented in the research. 13 factors combine 68 questions and explain 59.13 per cent of the dispersion of the answers. Based on the factor analysis, the questionnaire was reframed and prepared to reasonably analyze the respondents' answers, revealing technostress causes and consequences as well as technostress prevalence in the population in a statistically validated pattern. Key elements of technostress based on the factor analysis can serve for the construction of technostress measurement scales in further research.

  7. A large-scale cryoelectronic system for biological sample banking

    Science.gov (United States)

    Shirley, Stephen G.; Durst, Christopher H. P.; Fuchs, Christian C.; Zimmermann, Heiko; Ihmig, Frank R.

    2009-11-01

    We describe a polymorphic electronic infrastructure for managing biological samples stored over liquid nitrogen. As part of this system we have developed new cryocontainers and carrier plates attached to Flash memory chips to have a redundant and portable set of data at each sample. Our experimental investigations show that basic Flash operation and endurance is adequate for the application down to liquid nitrogen temperatures. This identification technology can provide the best sample identification, documentation and tracking that brings added value to each sample. The first application of the system is in a worldwide collaborative research towards the production of an AIDS vaccine. The functionality and versatility of the system can lead to an essential optimization of sample and data exchange for global clinical studies.

  8. Automated, feature-based image alignment for high-resolution imaging mass spectrometry of large biological samples

    NARCIS (Netherlands)

    Broersen, A.; Liere, van R.; Altelaar, A.F.M.; Heeren, R.M.A.; McDonnell, L.A.

    2008-01-01

    High-resolution imaging mass spectrometry of large biological samples is the goal of several research groups. In mosaic imaging, the most common method, the large sample is divided into a mosaic of small areas that are then analyzed with high resolution. Here we present an automated alignment
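
    One common building block for this kind of mosaic alignment is FFT-based phase correlation, which recovers the translation between overlapping tiles; the sketch below is a generic implementation on synthetic images and is not claimed to be the authors' feature-based method.

    ```python
    import numpy as np

    def phase_correlation_shift(a, b):
        """Integer (dy, dx) such that b is approximately a rolled by (dy, dx),
        found via FFT-based phase correlation."""
        F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
        r = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
        dy, dx = np.unravel_index(np.argmax(r), r.shape)
        if dy > a.shape[0] // 2:        # map wrap-around peaks to signed shifts
            dy -= a.shape[0]
        if dx > a.shape[1] // 2:
            dx -= a.shape[1]
        return dy, dx

    # Synthetic check: shift a random image by a known offset and recover it.
    rng = np.random.default_rng(10)
    img = rng.random((128, 128))
    print(phase_correlation_shift(img, np.roll(img, (7, -12), axis=(0, 1))))
    # expected output: (7, -12)
    ```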

  9. Sampling large landscapes with small-scale stratification-User's Manual

    Science.gov (United States)

    Bart, Jonathan

    2011-01-01

    This manual explains procedures for partitioning a large landscape into plots, assigning the plots to strata, and selecting plots in each stratum to be surveyed. These steps are referred to as the "sampling large landscapes (SLL) process." We assume that users of the manual have a moderate knowledge of ArcGIS and Microsoft® Excel. The manual is written for a single user but in many cases, some steps will be carried out by a biologist designing the survey and some steps will be carried out by a quantitative assistant. Thus, the manual essentially may be passed back and forth between these users. The SLL process primarily has been used to survey birds, and we refer to birds as the subjects of the counts. The process, however, could be used to count any objects.
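
    A hedged sketch of the three SLL steps in plain Python rather than ArcGIS/Excel: grid the bounding box into plots, assign strata from a (here invented) habitat layer, and draw a simple random sample of plots within each stratum.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # 1. Partition the landscape's bounding box into a grid of plots.
    nx, ny = 40, 25
    plots = [(i, j) for i in range(nx) for j in range(ny)]

    # 2. Assign each plot to a stratum (an invented habitat raster stands in for
    #    whatever GIS layer would drive the stratification).
    habitat = rng.integers(0, 3, size=(nx, ny))
    strata = {s: [p for p in plots if habitat[p] == s] for s in range(3)}

    # 3. Draw a simple random sample of plots to survey within each stratum.
    n_per_stratum = 12
    for s, ps in strata.items():
        chosen = [ps[k] for k in rng.choice(len(ps), n_per_stratum, replace=False)]
        print(f"stratum {s}: {len(ps)} plots, surveying {len(chosen)}, e.g. {chosen[:3]}")
    ```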

  10. Absolute activity determinations on large volume geological samples independent of self-absorption effects

    International Nuclear Information System (INIS)

    Wilson, W.E.

    1980-01-01

    This paper describes a method for measuring the absolute activity of large volume samples by γ-spectroscopy independent of self-absorption effects using Ge detectors. The method yields accurate matrix independent results at the expense of replicative counting of the unknown sample. (orig./HP)

  11. Fast sampling from a Hidden Markov Model posterior for large data

    DEFF Research Database (Denmark)

    Bonnevie, Rasmus; Hansen, Lars Kai

    2014-01-01

    Hidden Markov Models are of interest in a broad set of applications including modern data driven systems involving very large data sets. However, approximate inference methods based on Bayesian averaging are precluded in such applications as each sampling step requires a full sweep over the data...

  12. 17 CFR Appendix B to Part 420 - Sample Large Position Report

    Science.gov (United States)

    2010-04-01

    Appendix B to Part 420 of Title 17 (Commodity and Securities Exchanges), Department of the Treasury regulations, provides a sample large position report; the form tallies positions held outright and as collateral for financial derivatives and other securities transactions, together with total and memorandum entries.

  13. Large area synchrotron X-ray fluorescence mapping of biological samples

    International Nuclear Information System (INIS)

    Kempson, I.; Thierry, B.; Smith, E.; Gao, M.; De Jonge, M.

    2014-01-01

    Large area mapping of inorganic material in biological samples has suffered severely from prohibitively long acquisition times. With the advent of new detector technology we can now generate statistically relevant information for studying cell populations, inter-variability and bioinorganic chemistry in large specimens. We have been implementing ultrafast synchrotron-based XRF mapping afforded by the MAIA detector for large area mapping of biological material. For example, a 2.5 million pixel map can be acquired in 3 hours, compared to a typical synchrotron XRF set-up needing over 1 month of uninterrupted beamtime. Of particular focus to us is the fate of metals and nanoparticles in cells, 3D tissue models and animal tissues. The large area scanning has for the first time provided statistically significant information on sufficiently large numbers of cells, yielding data on intercellular variability in uptake of nanoparticles. Techniques such as flow cytometry generally require analysis of thousands of cells for statistically meaningful comparison, due to the large degree of variability. Large area XRF now gives comparable information in a quantifiable manner. Furthermore, we can now image localised deposition of nanoparticles in tissues that would be highly improbable to 'find' by typical XRF imaging. In addition, the ultrafast nature also makes it viable to conduct 3D XRF tomography over large dimensions. This technology opens new opportunities in biomonitoring and understanding metal and nanoparticle fate ex-vivo. Following from this is extension to molecular imaging through specific antibody-targeted nanoparticles to label specific tissues and monitor cellular processes or biological consequences.
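
    The quoted speed-up is easy to sanity-check with a back-of-envelope calculation, assuming a conventional scan dwells on the order of one second per pixel (an assumption, not a figure from the abstract):

```latex
% Back-of-envelope check of the quoted acquisition times
% (assumes ~1 s/pixel for a conventional scan; the MAIA rate follows
% from the 3 h map time quoted above).
\[
\underbrace{\frac{2.5\times 10^{6}\ \text{px}}{3\times 3600\ \text{s}}}_{\text{MAIA}}
\approx 230\ \text{px/s},
\qquad
\underbrace{2.5\times 10^{6}\ \text{px}\times 1\ \text{s/px}}_{\text{conventional}}
\approx 29\ \text{days}.
\]
```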

  14. Sampling of charged liquid radwaste stored in large tanks

    International Nuclear Information System (INIS)

    Tchemitcheff, E.; Domage, M.; Bernard-Bruls, X.

    1995-01-01

    The final safe disposal of radwaste, in France and elsewhere, entails, for liquid effluents, their conversion to a stable solid form, hence implying their conditioning. The production of conditioned waste with the requisite quality, traceability of the characteristics of the packages produced, and safe operation of the conditioning processes, implies at least the accurate knowledge of the chemical and radiochemical properties of the effluents concerned. The problem in sampling the normally charged effluents is aggravated for effluents that have been stored for several years in very large tanks, without stirring and retrieval systems. In 1992, SGN was asked by Cogema to study the retrieval and conditioning of LL/ML chemical sludge and spent ion-exchange resins produced in the operation of the UP2 400 plant at La Hague, and stored temporarily in rectangular silos and tanks. The sampling aspect was crucial for validating the inventories, identifying the problems liable to arise in the aging of the effluents, dimensioning the retrieval systems and checking the transferability and compatibility with the downstream conditioning process. Two innovative self-contained systems were developed and built for sampling operations, positioned above the tanks concerned. Both systems have been operated in active conditions and have proved totally satisfactory for taking representative samples. Today SGN can propose industrially proven overall solutions, adaptable to the various constraints of many spent fuel cycle operators

  15. Large magnitude gridded ionization chamber for impurity identification in alpha emitting radioactive samples

    International Nuclear Information System (INIS)

    Santos, R.N. dos.

    1992-01-01

    This paper describes a large gridded ionization chamber with high resolution used in the identification of α-emitting radioactive samples. The chamber and the electrodes are described in terms of their geometry and dimensions, and the best results obtained are listed. Several α-emitting radioactive samples were measured with a gas mixture of 90% argon plus 10% methane. We obtained α energy spectra with a resolution of around 22.14 keV, in agreement with the best results available in the literature. The α energy spectrum of ²³³U was acquired using the ionization chamber described in this work; the values found matched the calibration curve of the chamber well. Many additional measurements using different kinds of adjusted detectors were successfully performed to confirm the experimental results, leading to the identification of several members of the ²³³U decay series. These results show the possibility of using the chamber for measurements of low-activity α contamination. (author)

  16. CO2 isotope analyses using large air samples collected on intercontinental flights by the CARIBIC Boeing 767

    NARCIS (Netherlands)

    Assonov, S.S.; Brenninkmeijer, C.A.M.; Koeppel, C.; Röckmann, T.

    2009-01-01

    Analytical details for ¹³C and ¹⁸O isotope analyses of atmospheric CO₂ in large air samples are given. The large air samples of nominally 300 L were collected during the passenger aircraft-based atmospheric chemistry research project CARIBIC and analyzed for a large number of trace gases and

  17. Superwind Outflows in Seyfert Galaxies? : Large-Scale Radio Maps of an Edge-On Sample

    Science.gov (United States)

    Colbert, E.; Gallimore, J.; Baum, S.; O'Dea, C.

    1995-03-01

    Large-scale galactic winds (superwinds) are commonly found flowing out of the nuclear regions of ultraluminous infrared and powerful starburst galaxies. Stellar winds and supernovae from the nuclear starburst provide the energy to drive these superwinds. The outflowing gas escapes along the rotation axis, sweeping up and shock-heating clouds in the halo, which produces optical line emission, radio synchrotron emission, and X-rays. These features can most easily be studied in edge-on systems, so that the wind emission is not confused by that from the disk. We have begun a systematic search for superwind outflows in Seyfert galaxies. In an earlier optical emission-line survey, we found extended minor-axis emission and/or double-peaked emission-line profiles in ≳30% of the sample objects. We present here large-scale (6 cm VLA C-configuration) radio maps of 11 edge-on Seyfert galaxies, selected (without bias) from a distance-limited sample of 23 edge-on Seyferts. These data have been used to estimate the frequency of occurrence of superwinds. Preliminary results indicate that four (36%) of the 11 objects observed and six (26%) of the 23 objects in the distance-limited sample have extended radio emission oriented perpendicular to the galaxy disk. This emission may be produced by a galactic wind blowing out of the disk. Two (NGC 2992 and NGC 5506) of the nine objects for which we have both radio and optical data show good evidence for a galactic wind in both datasets. We suggest that galactic winds occur in ≳30% of all Seyferts. A goal of this work is to find a diagnostic that can be used to distinguish between large-scale outflows that are driven by starbursts and those that are driven by an AGN. The presence of starburst-driven superwinds in Seyferts, if established, would have important implications for the connection between starburst galaxies and AGN.

  18. The problem of large samples. An activation analysis study of electronic waste material

    International Nuclear Information System (INIS)

    Segebade, C.; Goerner, W.; Bode, P.

    2007-01-01

    Large-volume instrumental photon activation analysis (IPAA) was used for the investigation of shredded electronic waste material. Sample masses from 1 to 150 grams were analyzed to obtain an estimate of the minimum sample size to be taken to achieve a representativeness of the results that is satisfactory for a defined investigation task. Furthermore, the influence of irradiation and measurement parameters upon the quality of the analytical results was studied. Finally, the analytical data obtained from IPAA and instrumental neutron activation analysis (INAA), both carried out in a large-volume mode, were compared. Only some of the values were found to be in satisfactory agreement. (author)

  19. A Note on the Large Sample Properties of Estimators Based on Generalized Linear Models for Correlated Pseudo-observations

    DEFF Research Database (Denmark)

    Jacobsen, Martin; Martinussen, Torben

    2016-01-01

    Pseudo-values have proven very useful in censored data analysis in complex settings such as multi-state models. The approach was originally suggested by Andersen et al., Biometrika, 90, 2003, 335, who also suggested estimating standard errors using classical generalized estimating equation results. These results were studied more formally in Graw et al., Lifetime Data Anal., 15, 2009, 241, which derived some key results based on a second-order von Mises expansion. However, results concerning large sample properties of estimates based on regression models for pseudo-values still seem unclear. In this paper, we study these large sample properties in the simple setting of survival probabilities and show that the estimating function can be written as a U-statistic of second order, giving rise to an additional term that does not vanish asymptotically. We further show that previously advocated standard error...

  20. Large sample neutron activation analysis: establishment at CDTN/CNEN, Brazil

    Energy Technology Data Exchange (ETDEWEB)

    Menezes, Maria Angela de B.C., E-mail: menezes@cdtn.b [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil); Jacimovic, Radojko, E-mail: radojko.jacimovic@ijs.s [Jozef Stefan Institute, Ljubljana (Slovenia). Dept. of Environmental Sciences. Group for Radiochemistry and Radioecology

    2011-07-01

    In order to improve the application of the neutron activation technique at CDTN/CNEN, large sample instrumental neutron activation analysis is being established under the IAEA BRA 14798 and FAPEMIG APQ-01259-09 projects. This procedure, LS-INAA, usually requires special facilities for the activation as well as for the detection. However, the TRIGA Mark I IPR R1 reactor at CDTN/CNEN has not been adapted for such irradiation, and the usual gamma spectrometry has been carried out. To start the establishment of LS-INAA, a 5 g sample of the IAEA/Soil-7 reference material was analyzed by the k₀-standardization method. This paper is about the detector efficiency over the volume source, using KayWin v2.23 and ANGLE V3.0 software. (author)

  1. Comparative Analysis of Clinical Samples Showing Weak Serum Reaction on AutoVue System Causing ABO Blood Typing Discrepancies.

    Science.gov (United States)

    Jo, Su Yeon; Lee, Ju Mi; Kim, Hye Lim; Sin, Kyeong Hwa; Lee, Hyeon Ji; Chang, Chulhun Ludgerus; Kim, Hyung Hoi

    2017-03-01

    ABO blood typing in pre-transfusion testing is a major component of the high workload in blood banks and therefore requires automation. We often experienced discrepant results from an automated system, especially weak serum reactions. We evaluated the discrepant results with the reference manual method to confirm ABO blood typing. In total, 13,113 blood samples were tested with the AutoVue system; all samples were run in parallel with the reference manual method according to the laboratory protocol. The AutoVue system confirmed ABO blood typing of 12,816 samples (97.7%), and these results were concordant with those of the manual method. The remaining 297 samples (2.3%) showed discrepant results in the AutoVue system and were confirmed by the manual method. The discrepant results involved weak serum reactions (≤1+ reaction grade), samples from patients who had received stem cell transplants, ABO subgroups, and specific system error messages. Among the 98 samples showing ≤1+ reaction grade in the AutoVue system, 70 samples (71.4%) showed a normal serum reaction (≥2+ reaction grade) with the manual method, and 28 samples (28.6%) showed weak serum reactions in both methods. ABO blood typing of 97.7% of samples could be confirmed by the AutoVue system, and a small proportion (2.3%) needed to be re-evaluated by the manual method. Samples with a ≥2+ reaction grade in serum typing do not need to be evaluated manually, while those with ≤1+ reaction grade do.

  2. Validation Of Intermediate Large Sample Analysis (With Sizes Up to 100 G) and Associated Facility Improvement

    International Nuclear Information System (INIS)

    Bode, P.; Koster-Ammerlaan, M.J.J.

    2018-01-01

    Pragmatic rather than physical correction factors for neutron and gamma-ray shielding were studied for samples of intermediate size, i.e. up to the 10-100 gram range. It was found that for most biological and geological materials, the neutron self-shielding is less than 5% and the gamma-ray self-attenuation can easily be estimated. A trueness control material of 1 kg size was made based on the use of left-overs of materials used in laboratory intercomparisons. A design study for a large sample pool-side facility handling plate-type volumes had to be stopped because of a reduction in the human resources available for this CRP. The large sample NAA facilities were made available to guest scientists from Greece and Brazil. The laboratory for neutron activation analysis participated in the world's first laboratory intercomparison utilizing large samples. (author)

  3. Evaluation of environmental sampling methods for detection of Salmonella enterica in a large animal veterinary hospital.

    Science.gov (United States)

    Goeman, Valerie R; Tinkler, Stacy H; Hammac, G Kenitra; Ruple, Audrey

    2018-04-01

    Environmental surveillance for Salmonella enterica can be used for early detection of contamination; thus routine sampling is an integral component of infection control programs in hospital environments. At the Purdue University Veterinary Teaching Hospital (PUVTH), the technique regularly employed in the large animal hospital for sample collection uses sterile gauze sponges for environmental sampling, which has proven labor-intensive and time-consuming. Alternative sampling methods use Swiffer brand electrostatic wipes for environmental sample collection, which are reportedly effective and efficient. It was hypothesized that use of Swiffer wipes for sample collection would be more efficient and less costly than the use of gauze sponges. A head-to-head comparison between the 2 sampling methods was conducted in the PUVTH large animal hospital and relative agreement, cost-effectiveness, and sampling efficiency were compared. There was fair agreement in culture results between the 2 sampling methods, but Swiffer wipes required less time and less physical effort to collect samples and were more cost-effective.

  4. Sampling of finite elements for sparse recovery in large scale 3D electrical impedance tomography

    International Nuclear Information System (INIS)

    Javaherian, Ashkan; Moeller, Knut; Soleimani, Manuchehr

    2015-01-01

    This study proposes a method to improve performance of sparse recovery inverse solvers in 3D electrical impedance tomography (3D EIT), especially when the volume under study contains small-sized inclusions, e.g. 3D imaging of breast tumours. Initially, a quadratic regularized inverse solver is applied in a fast manner with a stopping threshold much greater than the optimum. Based on assuming a fixed level of sparsity for the conductivity field, finite elements are then sampled via applying a compressive sensing (CS) algorithm to the rough blurred estimation previously made by the quadratic solver. Finally, a sparse inverse solver is applied solely to the sampled finite elements, with the solution to the CS as its initial guess. The results show the great potential of the proposed CS-based sparse recovery in improving accuracy of sparse solution to the large-size 3D EIT. (paper)
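
    A self-contained numerical sketch of the three-stage idea, under stated assumptions (a random stand-in Jacobian, a fixed sparsity level, and plain least squares in place of the paper's solvers), might look as follows:

```python
# Sketch of the two-stage CS-based sparse recovery described above.
import numpy as np

rng = np.random.default_rng(1)
n_meas, n_elem = 200, 1000
J = rng.standard_normal((n_meas, n_elem))   # stand-in sensitivity (Jacobian) matrix
x_true = np.zeros(n_elem)
x_true[rng.choice(n_elem, 10, replace=False)] = 1.0  # small inclusions
v = J @ x_true                               # simulated boundary measurements

# Stage 1: fast quadratic (Tikhonov) solve, deliberately rough.
x_rough = np.linalg.solve(J.T @ J + 1e-1 * np.eye(n_elem), J.T @ v)

# Stage 2: sample finite elements, assuming a fixed sparsity level k.
k = 30
support = np.argsort(np.abs(x_rough))[-k:]

# Stage 3: sparse solve restricted to the sampled elements only.
coef, *_ = np.linalg.lstsq(J[:, support], v, rcond=None)
x_sparse = np.zeros(n_elem)
x_sparse[support] = coef
```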

  5. Oxalic acid as a liquid dosimeter for absorbed dose measurement in large-scale of sample solution

    International Nuclear Information System (INIS)

    Biramontri, S.; Dechburam, S.; Vitittheeranon, A.; Wanitsuksombut, W.; Thongmitr, W.

    1999-01-01

    This study shows the feasibility of applying a 2.5 mM aqueous oxalic acid solution, read out by a spectrophotometric analysis method, for absorbed dose measurement from 1 to 10 kGy in large-scale sample solutions. An optimum wavelength of 220 nm was selected. The stability of the dosimeter response over 25 days was better than 1% for unirradiated and ±2% for irradiated solutions. The reproducibility within the same batch was within 1%. The variation of the dosimeter response between batches was also studied. (author)

  6. Implicit and explicit anti-fat bias among a large sample of medical doctors by BMI, race/ethnicity and gender.

    Directory of Open Access Journals (Sweden)

    Janice A Sabin

    Overweight patients report weight discrimination in health care settings and subsequent avoidance of routine preventive health care. The purpose of this study was to examine implicit and explicit attitudes about weight among a large group of medical doctors (MDs) to determine the pervasiveness of negative attitudes about weight among MDs. Test-takers voluntarily accessed a public Web site, known as Project Implicit®, and opted to complete the Weight Implicit Association Test (IAT; N = 359,261). A sub-sample identified their highest level of education as MD (N = 2,284). Among the MDs, 55% were female, 78% reported their race as white, and 62% had a normal range BMI. This large sample of test-takers showed strong implicit anti-fat bias (Cohen's d = 1.0). MDs, on average, also showed strong implicit anti-fat bias (Cohen's d = 0.93). All test-takers and the MD sub-sample reported a strong preference for thin people rather than fat people, that is, a strong explicit anti-fat bias. We conclude that strong implicit and explicit anti-fat bias is as pervasive among MDs as it is among the general public. An important area for future research is to investigate the association between providers' implicit and explicit attitudes about weight, patient reports of weight discrimination in health care, and quality of care delivered to overweight patients.

  7. Thermal neutron self-shielding correction factors for large sample instrumental neutron activation analysis using the MCNP code

    International Nuclear Information System (INIS)

    Tzika, F.; Stamatelatos, I.E.

    2004-01-01

    Thermal neutron self-shielding within large samples was studied using the Monte Carlo neutron transport code MCNP. The code enabled a three-dimensional modeling of the actual source and geometry configuration including reactor core, graphite pile and sample. Neutron flux self-shielding correction factors derived for a set of materials of interest for large sample neutron activation analysis are presented and evaluated. Simulations were experimentally verified by measurements performed using activation foils. The results of this study can be applied in order to determine neutron self-shielding factors of unknown samples from the thermal neutron fluxes measured at the surface of the sample
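
    For orientation, the quantity being computed is commonly defined as the ratio of the volume-averaged thermal flux inside the sample to the unperturbed flux; the paper may parameterize it differently, so treat this as the textbook form:

```latex
% Standard definition of the thermal-neutron self-shielding factor
% (the paper may use a different but equivalent parameterization).
\[
G_{\mathrm{th}}
\;=\;
\frac{\overline{\Phi}_{\mathrm{sample}}}{\Phi_{0}}
\;=\;
\frac{\tfrac{1}{V}\int_{V}\Phi(\mathbf{r})\,\mathrm{d}V}{\Phi_{0}},
\qquad 0 < G_{\mathrm{th}} \le 1 .
\]
```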

  8. Radioimmunoassay of h-TSH - methodological suggestions for dealing with medium to large numbers of samples

    International Nuclear Information System (INIS)

    Mahlstedt, J.

    1977-01-01

    The article deals with practical aspects of establishing a TSH-RIA for patients, with particular regard to predetermined quality criteria. Methodological suggestions are made for medium to large numbers of samples, with the aim of reducing monotonous precision work steps by means of simple aids. The required quality criteria are well met, while the test procedure is well adapted to the rhythm of work and may be carried out without loss of precision even with large numbers of samples. (orig.)

  9. CORRELATION ANALYSIS OF A LARGE SAMPLE OF NARROW-LINE SEYFERT 1 GALAXIES: LINKING CENTRAL ENGINE AND HOST PROPERTIES

    International Nuclear Information System (INIS)

    Xu Dawei; Komossa, S.; Wang Jing; Yuan Weimin; Zhou Hongyan; Lu Honglin; Li Cheng; Grupe, Dirk

    2012-01-01

    We present a statistical study of a large, homogeneously analyzed sample of narrow-line Seyfert 1 (NLS1) galaxies, accompanied by a comparison sample of broad-line Seyfert 1 (BLS1) galaxies. Optical emission-line and continuum properties are subjected to correlation analyses, in order to identify the main drivers of the correlation space of active galactic nuclei (AGNs), and of NLS1 galaxies in particular. For the first time, we have established the density of the narrow-line region as a key parameter in Eigenvector 1 space, as important as the Eddington ratio L/L_Edd. This is important because it links the properties of the central engine with the properties of the host galaxy, i.e., the interstellar medium (ISM). We also confirm previously found correlations involving the line width of Hβ and the strength of the Fe II and [O III] λ5007 emission lines, and we confirm the important role played by L/L_Edd in driving the properties of NLS1 galaxies. A spatial correlation analysis shows that the large-scale environments of the BLS1 and NLS1 galaxies of our sample are similar. If mergers are rare in our sample, accretion-driven winds, on the one hand, or bar-driven inflows, on the other hand, may account for the strong dependence of Eigenvector 1 on ISM density.

  10. Elemental mapping of large samples by external ion beam analysis with sub-millimeter resolution and its applications

    Science.gov (United States)

    Silva, T. F.; Rodrigues, C. L.; Added, N.; Rizzutto, M. A.; Tabacniks, M. H.; Mangiarotti, A.; Curado, J. F.; Aguirre, F. R.; Aguero, N. F.; Allegro, P. R. P.; Campos, P. H. O. V.; Restrepo, J. M.; Trindade, G. F.; Antonio, M. R.; Assis, R. F.; Leite, A. R.

    2018-05-01

    The elemental mapping of large areas using ion beam techniques is a desired capability for several scientific communities, involved in topics ranging from geoscience to cultural heritage. Usually, the constraints for large-area mapping are not met in setups employing micro- and nano-probes implemented all over the world. A novel setup for mapping large samples in an external beam was recently built at the University of São Paulo, employing a broad MeV-proton probe of sub-millimeter dimension coupled to a high-precision large-range XYZ robotic stage (60 cm range on all axes and precision of 5 μm ensured by optical sensors). An important issue in large-area mapping is how to deal with irregularities of the sample's surface, which may introduce artifacts in the images due to variation of the measuring conditions. In our setup, we implemented an automatic system based on machine vision to correct the position of the sample to compensate for its surface irregularities. As an additional benefit, a 3D digital reconstruction of the scanned surface can also be obtained. Using this new and unique setup, we have produced large-area elemental maps of ceramics, stones, fossils, and other sorts of samples.

  11. Software engineering the mixed model for genome-wide association studies on large samples.

    Science.gov (United States)

    Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J

    2009-11-01

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.
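
    A rough sketch of the computational core such packages share (an EMMA/EMMAX-style simplification, not any particular package's API): rotate the data by the eigenvectors of the kinship matrix so the covariance becomes diagonal, profile out the variance-component ratio once, then scan markers with fast generalized least squares. All data here are simulated:

```python
# Simplified mixed-model GWAS core: y = x*beta + g + e, g ~ N(0, s_g^2 * K).
import numpy as np
from scipy import optimize

rng = np.random.default_rng(2)
n, m = 500, 200
G = rng.binomial(2, 0.3, size=(n, m)).astype(float)  # genotypes (0/1/2)
Gc = G - G.mean(axis=0)
K = Gc @ Gc.T / m                                     # kinship estimate
y = 0.5 * G[:, 0] + rng.multivariate_normal(np.zeros(n), 0.5 * K + 0.5 * np.eye(n))

vals, vecs = np.linalg.eigh(K)        # one-time spectral decomposition
yr = vecs.T @ (y - y.mean())

def neg_loglik(log_delta):
    # Profile likelihood in delta = sigma_e^2 / sigma_g^2 (simplified ML).
    d = vals + np.exp(log_delta)
    s2 = np.mean(yr**2 / d)
    return 0.5 * (np.sum(np.log(d)) + n * np.log(s2))

res = optimize.minimize_scalar(neg_loglik, bounds=(-10.0, 10.0), method="bounded")
w = 1.0 / np.sqrt(vals + np.exp(res.x))  # whitening weights

# Per-marker scan by generalized least squares in the rotated space.
yw = w * yr
for j in range(5):
    xw = w * (vecs.T @ (G[:, j] - G[:, j].mean()))
    beta = (xw @ yw) / (xw @ xw)
    resid = yw - beta * xw
    se = np.sqrt((resid @ resid) / (n - 2) / (xw @ xw))
    print(f"marker {j}: t = {beta / se:.2f}")
```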

  12. Rapid separation method for ²³⁷Np and Pu isotopes in large soil samples

    Energy Technology Data Exchange (ETDEWEB)

    Maxwell, Sherrod L., E-mail: sherrod.maxwell@srs.go [Savannah River Nuclear Solutions, LLC, Building 735-B, Aiken, SC 29808 (United States); Culligan, Brian K.; Noyes, Gary W. [Savannah River Nuclear Solutions, LLC, Building 735-B, Aiken, SC 29808 (United States)

    2011-07-15

    A new rapid method for the determination of ²³⁷Np and Pu isotopes in soil and sediment samples has been developed at the Savannah River Site Environmental Lab (Aiken, SC, USA) that can be used for large soil samples. The new soil method utilizes an acid leaching method, iron/titanium hydroxide precipitation, a lanthanum fluoride soil matrix removal step, and a rapid column separation process with TEVA Resin. The large soil matrix is removed easily and rapidly using these two simple precipitations, with high chemical recoveries and effective removal of interferences. Vacuum box technology and rapid flow rates are used to reduce analytical time.

  13. Sampling data summary for the ninth run of the Large Slurry Fed Melter

    International Nuclear Information System (INIS)

    Sabatino, D.M.

    1983-01-01

    The ninth experimental run of the Large Slurry Fed Melter (LSFM) was completed June 27, 1983, after 63 days of continuous operation. During the run, the various melter and off-gas streams were sampled and analyzed to determine melter material balances and to characterize off-gas emissions. Sampling methods and preliminary results were reported earlier. The emphasis was on the chemical analyses of the off-gas entrainment, deposits, and scrubber liquid. The significant sampling results from the run are summarized below: Flushing the Frit 165 with Frit 131 without bubbler agitation required 3 to 4.5 melter volumes. The off-gas cesium concentration during feeding was on the order of 36 to 56 μg Cs/scf. The cesium concentration in the melter plenum (based on air in-leakage only) was on the order of 110 to 210 μg Cs/scf. Using <1 micron as the cut point for semivolatile material, 60% of the chloride, 35% of the sodium, and less than 5% of the manganese and iron in the entrainment are present as semivolatiles. A material balance on the scrubber tank solids shows good agreement with entrainment data. An overall cesium balance using LSFM-9 data and the DWPF production rate indicates an emission of 0.11 mCi/yr of cesium from the DWPF off-gas. This is a factor of 27 less than the maximum allowable 3 mCi/yr.

  14. Matrix Sampling of Items in Large-Scale Assessments

    Directory of Open Access Journals (Sweden)

    Ruth A. Childs

    2003-07-01

    Matrix sampling of items (that is, division of a set of items into different versions of a test form) is used by several large-scale testing programs. Like other test designs, matrixed designs have both advantages and disadvantages. For example, testing time per student is less than if each student received all the items, but the comparability of student scores may decrease. Also, curriculum coverage is maintained, but reporting of scores becomes more complex. In this paper, matrixed designs are compared with more traditional designs in nine categories of costs: development costs, materials costs, administration costs, educational costs, scoring costs, reliability costs, comparability costs, validity costs, and reporting costs. In choosing among test designs, a testing program should examine the costs in light of its mandate(s), the content of the tests, and the financial resources available, among other considerations.

  15. Random sampling of elementary flux modes in large-scale metabolic networks.

    Science.gov (United States)

    Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel

    2012-09-15

    The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well-distributed sample that is representative of the complete set of EMs should be suitable for most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. dmachado@deb.uminho.pt Supplementary data are available at Bioinformatics online.
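
    The filtering step described can be sketched in a few lines (an illustration of the uniform-subsampling idea only; the surrounding canonical-basis iteration and the emsampler implementation itself are not reproduced here):

```python
# Uniform filtering of candidate mode combinations at each iteration,
# so the working set cannot grow combinatorially.
import random

def filter_candidates(candidates, max_keep, rng=random.Random(0)):
    """Keep at most max_keep candidates, each with equal probability."""
    if len(candidates) <= max_keep:
        return candidates
    return rng.sample(candidates, max_keep)

# e.g. thin 10^4 candidate combinations down to 500 per iteration:
kept = filter_candidates(list(range(10_000)), 500)
```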

  16. Spatio-temporal foreshock activity during stick-slip experiments of large rock samples

    Science.gov (United States)

    Tsujimura, Y.; Kawakata, H.; Fukuyama, E.; Yamashita, F.; Xu, S.; Mizoguchi, K.; Takizawa, S.; Hirano, S.

    2016-12-01

    Foreshock activity has sometimes been reported for large earthquakes and has been roughly classified into the following two classes. For shallow intraplate earthquakes, foreshocks occurred in the vicinity of the mainshock hypocenter (e.g., Doi and Kawakata, 2012; 2013). For interplate subduction earthquakes, foreshock hypocenters migrated toward the mainshock hypocenter (Kato et al., 2012; Yagi et al., 2014). To understand how foreshocks occur, it is useful to investigate the spatio-temporal activity of foreshocks in laboratory experiments under controlled conditions. We have conducted stick-slip experiments using a large-scale biaxial friction apparatus at NIED in Japan (e.g., Fukuyama et al., 2014). Our previous results showed that stick-slip events repeatedly occurred in a run, but only the later events were preceded by foreshocks. Kawakata et al. (2014) inferred that the gouge generated during the run was an important key to foreshock occurrence. In this study, we proceeded to carry out stick-slip experiments on large rock samples whose interface (fault plane) is 1.5 meters long and 0.5 meters wide, after some runs to generate fault gouge along the interface. In the current experiments, we investigated the spatio-temporal activity of foreshocks. We detected foreshocks from waveform records of a 3D array of piezo-electric sensors. Our new results showed that more than three foreshocks (typically about twenty) occurred during each stick-slip event, in contrast to the few foreshocks observed during previous experiments without pre-existing gouge. Next, we estimated the hypocenter locations of the stick-slip events, and found that they were located near the end opposite to the loading point. In addition, we observed a migration of foreshock hypocenters toward the hypocenter of each stick-slip event. This suggests that the foreshock activity observed in our current experiments was similar to that for the interplate earthquakes in terms of the

  17. Development of digital gamma-activation autoradiography for analysis of samples of large area

    International Nuclear Information System (INIS)

    Kolotov, V.P.; Grozdov, D.S.; Dogadkin, N.N.; Korobkov, V.I.

    2011-01-01

    Gamma-activation autoradiography is a prospective method for screening detection of inclusions of precious metals in geochemical samples. Its characteristics allow analysis of thin sections of large size (tens of cm²), which favourably distinguishes it among the other methods for local analysis. At the same time, the activating field of the accelerator bremsstrahlung displays a sharp intensity decrease with distance along the axis. A method for activation dose 'equalization' during irradiation of large thin sections has been developed. The method is based on the usage of a hardware-software system. This includes a device for moving the sample during the irradiation, a program for computer modelling of the acquired activating dose for the chosen kinematics of the sample movement, and a program for pixel-by-pixel correction of the autoradiographic images. For detection of inclusions of precious metals, a method for analysis of the acquired dose dynamics during sample decay has been developed. The method is based on software processing, pixel by pixel, of a time series of coaxial autoradiographic images and generation of secondary meta-images allowing interpretation regarding the presence of interesting inclusions based on half-lives. The method was tested on the analysis of copper-nickel polymetallic ores. The developed solutions considerably expand the possible applications of digital gamma-activation autoradiography. (orig.)

  18. Development of digital gamma-activation autoradiography for analysis of samples of large area

    Energy Technology Data Exchange (ETDEWEB)

    Kolotov, V.P.; Grozdov, D.S.; Dogadkin, N.N.; Korobkov, V.I. [Russian Academy of Sciences, Moscow (Russian Federation). Vernadsky Inst. of Geochemistry and Analytical Chemistry

    2011-07-01

    Gamma-activation autoradiography is a prospective method for screening detection of inclusions of precious metals in geochemical samples. Its characteristics allow analysis of thin sections of large size (tens of cm²), which favourably distinguishes it among the other methods for local analysis. At the same time, the activating field of the accelerator bremsstrahlung displays a sharp intensity decrease with distance along the axis. A method for activation dose 'equalization' during irradiation of large thin sections has been developed. The method is based on the usage of a hardware-software system. This includes a device for moving the sample during the irradiation, a program for computer modelling of the acquired activating dose for the chosen kinematics of the sample movement, and a program for pixel-by-pixel correction of the autoradiographic images. For detection of inclusions of precious metals, a method for analysis of the acquired dose dynamics during sample decay has been developed. The method is based on software processing, pixel by pixel, of a time series of coaxial autoradiographic images and generation of secondary meta-images allowing interpretation regarding the presence of interesting inclusions based on half-lives. The method was tested on the analysis of copper-nickel polymetallic ores. The developed solutions considerably expand the possible applications of digital gamma-activation autoradiography. (orig.)

  19. Sample-path large deviations in credit risk

    NARCIS (Netherlands)

    Leijdekker, V.J.G.; Mandjes, M.R.H.; Spreij, P.J.C.

    2011-01-01

    The event of large losses plays an important role in credit risk. As these large losses are typically rare, and portfolios usually consist of a large number of positions, large deviation theory is the natural tool to analyze the tail asymptotics of the probabilities involved. We first derive a

  20. Characterisation of large zooplankton sampled with two different gears during midwinter in Rijpfjorden, Svalbard

    Directory of Open Access Journals (Sweden)

    Błachowiak-Samołyk Katarzyna

    2017-12-01

    During a midwinter cruise north of 80°N to Rijpfjorden, Svalbard, the composition and vertical distribution of the zooplankton community were studied using two different samplers: (1) a vertically hauled multiple plankton sampler (MPS; mouth area 0.25 m², mesh size 200 μm) and (2) a horizontally towed Methot Isaacs Kidd trawl (MIK; mouth area 3.14 m², mesh size 1500 μm). Our results revealed substantially higher species diversity (49 taxa) than if a single sampler had been used (MPS: 38 taxa, MIK: 28). The youngest stage present (CIII) of Calanus spp. (including C. finmarchicus and C. glacialis) was sampled exclusively by the MPS, and the frequency of CIV copepodites in MPS samples was double that in MIK samples. In contrast, catches of the CV-CVI copepodites of Calanus spp. were substantially higher in the MIK samples (3-fold and 5-fold higher for adult males and females, respectively). The MIK sampling clearly showed that the highest abundances of all three Thysanoessa spp. were in the upper layers, although there was a tendency for the larger-sized euphausiids to occur deeper. Consistent patterns in the vertical distributions of the large zooplankters (e.g. ctenophores, euphausiids) collected by the MPS and MIK samplers provided more complete data on their abundances and sizes than obtained by a single net. Possible mechanisms contributing to the observed patterns of distribution, e.g. high abundances of both Calanus spp. and their predators (ctenophores and chaetognaths) in the upper water layers during midwinter, are discussed.

  1. Relationship of fish indices with sampling effort and land use change in a large Mediterranean river.

    Science.gov (United States)

    Almeida, David; Alcaraz-Hernández, Juan Diego; Merciai, Roberto; Benejam, Lluís; García-Berthou, Emili

    2017-12-15

    Fish are invaluable ecological indicators in freshwater ecosystems but have been less used for ecological assessments in large Mediterranean rivers. We evaluated the effects of sampling effort (transect length) on fish metrics, such as species richness and two fish indices (the new European Fish Index EFI+ and a regional index, IBICAT2b), in the mainstem of a large Mediterranean river. For this purpose, we sampled by boat electrofishing five sites each with 10 consecutive transects corresponding to a total length of 20 times the river width (European standard required by the Water Framework Directive) and we also analysed the effect of sampling area on previous surveys. Species accumulation curves and richness extrapolation estimates in general suggested that species richness was reasonably estimated with transect lengths of 10 times the river width or less. The EFI+ index was significantly affected by sampling area, both for our samplings and previous data. Surprisingly, EFI+ values in general decreased with increasing sampling area, despite the higher observed richness, likely because the expected values of metrics were higher. By contrast, the regional fish index was not dependent on sampling area, likely because it does not use a predictive model. Both fish indices, but particularly the EFI+, decreased with less forest cover percentage, even within the smaller disturbance gradient in the river type studied (mainstem of a large Mediterranean river, where environmental pressures are more general). Although the two fish-based indices are very different in terms of their development, methodology, and metrics used, they were significantly correlated and provided a similar assessment of ecological status. Our results reinforce the importance of standardization of sampling methods for bioassessment and suggest that predictive models that use sampling area as a predictor might be more affected by differences in sampling effort than simpler biotic indices.

  2. Software engineering the mixed model for genome-wide association studies on large samples

    Science.gov (United States)

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample siz...

  3. Evaluation of bacterial motility from non-Gaussianity of finite-sample trajectories using the large deviation principle

    International Nuclear Information System (INIS)

    Hanasaki, Itsuo; Kawano, Satoyuki

    2013-01-01

    Motility of bacteria is usually recognized in the trajectory data and compared with Brownian motion, but the diffusion coefficient is insufficient to evaluate it. In this paper, we propose a method based on the large deviation principle. We show that it can be used to evaluate the non-Gaussian characteristics of model Escherichia coli motions and to distinguish combinations of the mean running duration and running speed that lead to the same diffusion coefficient. Our proposed method does not require chemical stimuli to induce the chemotaxis in a specific direction, and it is applicable to various types of self-propelling motions for which no a priori information of, for example, threshold parameters for run and tumble or head/tail direction is available. We also address the issue of the finite-sample effect on the large deviation quantities, but we propose to make use of it to characterize the nature of motility. (paper)
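
    For orientation, the large deviation principle invoked here has the textbook form below (the paper's finite-sample estimator is built on these quantities; this is not its specific construction):

```latex
% Textbook statement of the large deviation principle for the empirical
% mean of displacement increments x_i.
\[
P\!\left(\tfrac{1}{n}\textstyle\sum_{i=1}^{n} x_i \approx a\right)
\asymp e^{-n I(a)},
\qquad
I(a) = \sup_{k}\bigl(ka - \lambda(k)\bigr),
\qquad
\lambda(k) = \lim_{n\to\infty}\tfrac{1}{n}\ln \mathbb{E}\, e^{k\sum_{i} x_i}.
\]
% For purely Brownian (Gaussian) motion, I(a) is exactly quadratic, so
% deviations of the empirical rate function from a parabola quantify
% non-Gaussian, self-propelled behavior.
```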

  4. Determination of 129I in large soil samples after alkaline wet disintegration

    International Nuclear Information System (INIS)

    Bunzl, K.; Kracke, W.

    1992-01-01

    Large soil samples (up to 500 g) can conveniently be disintegrated by hydrogen peroxide in a utility tank under alkaline conditions in order to subsequently determine ¹²⁹I by neutron activation analysis. Interfering elements such as Br are removed before neutron irradiation to reduce the radiation exposure of the personnel. The precision of the method was verified by determining ¹²⁹I also by the combustion method. (orig.)

  5. Determinants of salivary evening alpha-amylase in a large sample free of psychopathology

    NARCIS (Netherlands)

    Veen, Gerthe; Giltay, Erik J.; Vreeburg, Sophie A.; Licht, Carmilla M. M.; Cobbaert, Christa M.; Zitman, Frans G.; Penninx, Brenda W. J. H.

    Objective: Recently, salivary alpha-amylase (sAA) has been proposed as a suitable index for sympathetic activity and dysregulation of the autonomic nervous system (ANS). Although determinants of sAA have been described, they have not been studied within the same study with a large sample size

  6. Water pollution screening by large-volume injection of aqueous samples and application to GC/MS analysis of a river Elbe sample

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, S.; Efer, J.; Engewald, W. [Leipzig Univ. (Germany). Inst. fuer Analytische Chemie

    1997-03-01

    Large-volume injection of aqueous samples into a programmed temperature vaporizer (PTV) injector was used successfully for the target and non-target analysis of real samples. In this still rarely applied method, e.g., 1 mL of the water sample to be analyzed is slowly injected directly into the PTV. The vaporized water is eliminated through the split vent. The analytes are concentrated onto an adsorbent inside the insert and subsequently thermally desorbed. The capability of the method is demonstrated using a sample from the river Elbe. By coupling this method with a mass selective detector in SIM mode (target analysis), pollutants can be determined at concentrations down to 0.01 μg/L. Furthermore, PTV enrichment is an effective and time-saving method for non-target analysis in SCAN mode. In a sample from the river Elbe over 20 compounds were identified. (orig.)

  7. A hard-to-read font reduces the framing effect in a large sample.

    Science.gov (United States)

    Korn, Christoph W; Ries, Juliane; Schalk, Lennart; Oganian, Yulia; Saalbach, Henrik

    2018-04-01

    How can apparent decision biases, such as the framing effect, be reduced? Intriguing findings within recent years indicate that foreign language settings reduce framing effects, which has been explained in terms of deeper cognitive processing. Because hard-to-read fonts have been argued to trigger deeper cognitive processing, so-called cognitive disfluency, we tested whether hard-to-read fonts reduce framing effects. We found no reliable evidence for an effect of hard-to-read fonts on four framing scenarios in a laboratory study (final N = 158) and an online study (N = 271). However, in a preregistered online study with a rather large sample (N = 732), a hard-to-read font reduced the framing effect in the classic "Asian disease" scenario (in a one-sided test). This suggests that hard-to-read fonts can modulate decision biases, albeit with rather small effect sizes. Overall, our findings stress the importance of large samples for the reliability and replicability of modulations of decision biases.

  8. A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids

    Energy Technology Data Exchange (ETDEWEB)

    Berres, Anne Sabine [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Adhinarayanan, Vignesh [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Turton, Terece [Univ. of Texas, Austin, TX (United States); Feng, Wu [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Rogers, David Honegger [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-05-12

    Large simulation data requires a lot of time and computational resources to compute, store, analyze, visualize, and run user studies on. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance energy consumption against the cognitive value of visualizations of the resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline using regular sampling as an example. The reasons for this choice are two-fold: using a simple example reduces unnecessary complexity, as we know what to expect from the results, and it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test we ran, and we conducted user studies on Amazon Mechanical Turk (AMT) for a range of different results we produced through sampling.
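
    A minimal sketch of what "regular sampling" of an unstructured data set can mean in practice (an assumption about the pipeline's reduction step, with illustrative sizes): resample scattered node values onto a uniform lattice by nearest-node lookup.

```python
# Regular sampling of an unstructured point set onto a uniform lattice.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
points = rng.random((100_000, 3))           # unstructured node positions
values = np.sin(points[:, 0] * 10)          # field values at the nodes

res = 32                                    # lattice resolution per axis
axes = [np.linspace(0, 1, res)] * 3
lattice = np.stack(np.meshgrid(*axes, indexing="ij"), -1).reshape(-1, 3)

tree = cKDTree(points)
_, idx = tree.query(lattice)                # nearest unstructured node
sampled = values[idx].reshape(res, res, res)  # regular, reduced representation
```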

  9. Association between genetic variation in a region on chromosome 11 and schizophrenia in large samples from Europe

    DEFF Research Database (Denmark)

    Rietschel, M; Mattheisen, M; Degenhardt, F

    2012-01-01

    the recruitment of very large samples of patients and controls (that is, tens of thousands), or large, potentially more homogeneous samples that have been recruited from confined geographical areas using identical diagnostic criteria. Applying the latter strategy, we performed a genome-wide association study (GWAS) ... between emotion regulation and cognition that is structurally and functionally abnormal in SCZ and bipolar disorder. Molecular Psychiatry advance online publication, 12 July 2011; doi:10.1038/mp.2011.80.

  10. A Simple Sampling Method for Estimating the Accuracy of Large Scale Record Linkage Projects.

    Science.gov (United States)

    Boyd, James H; Guiver, Tenniel; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Anderson, Phil; Dickinson, Teresa

    2016-05-17

    Record linkage techniques allow different data collections to be brought together to provide a wider picture of the health status of individuals. Ensuring high linkage quality is important to guarantee the quality and integrity of research. Current methods for measuring linkage quality typically focus on precision (the proportion of accepted links that are correct), given the difficulty of measuring the proportion of false negatives. The aim of this work is to introduce and evaluate a sampling-based method to estimate both precision and recall following record linkage. In the sampling-based method, record-pairs from each threshold (including those below the identified cut-off for acceptance) are sampled and clerically reviewed. These results are then applied to the entire set of record-pairs, providing estimates of false positives and false negatives. This method was evaluated on a synthetically generated dataset, where the true match status (which records belonged to the same person) was known. The sampled estimates of linkage quality were relatively close to the actual linkage quality metrics calculated for the whole synthetic dataset. The precision and recall measures for seven reviewers were very consistent, with little variation in the clerical assessment results (overall agreement using the Fleiss Kappa statistic was 0.601). This method presents a possible means of accurately estimating matching quality and refining linkages in population-level linkage studies. The sampling approach is especially important for large project linkages where the number of record pairs produced may be very large, often running into millions.
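
    The estimator lends itself to a compact sketch (names and numbers are illustrative; the paper's exact weighting may differ): scale the clerically observed match rates in each score bin back up to the full bin counts, then read off precision and recall for a chosen cut-off.

```python
# Sampling-based estimation of precision and recall after record linkage.
import numpy as np

def estimate_precision_recall(bin_counts, sample_match_rate, accepted):
    """bin_counts[i]: record-pairs in score bin i; sample_match_rate[i]:
    fraction of clerically reviewed pairs in bin i that were true matches;
    accepted[i]: whether bin i lies above the acceptance cut-off."""
    counts = np.asarray(bin_counts, float)
    rate = np.asarray(sample_match_rate, float)
    acc = np.asarray(accepted, bool)
    true_matches = counts * rate                   # estimated per bin
    tp = true_matches[acc].sum()
    fp = (counts[acc] - true_matches[acc]).sum()
    fn = true_matches[~acc].sum()                  # true matches below cut-off
    return tp / (tp + fp), tp / (tp + fn)

# e.g. three score bins, only the top two accepted:
print(estimate_precision_recall([1000, 5000, 20000], [0.98, 0.7, 0.02],
                                [True, True, False]))
```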

  11. Psychometric Evaluation of the Thought–Action Fusion Scale in a Large Clinical Sample

    Science.gov (United States)

    Meyer, Joseph F.; Brown, Timothy A.

    2015-01-01

    This study examined the psychometric properties of the 19-item Thought–Action Fusion (TAF) Scale, a measure of maladaptive cognitive intrusions, in a large clinical sample (N = 700). An exploratory factor analysis (n = 300) yielded two interpretable factors: TAF Moral (TAF-M) and TAF Likelihood (TAF-L). A confirmatory bifactor analysis was conducted on the second portion of the sample (n = 400) to account for possible sources of item covariance using a general TAF factor (subsuming TAF-M) alongside the TAF-L domain-specific factor. The bifactor model provided an acceptable fit to the sample data. Results indicated that global TAF was more strongly associated with a measure of obsessive-compulsiveness than measures of general worry and depression, and the TAF-L dimension was more strongly related to obsessive-compulsiveness than depression. Overall, results support the bifactor structure of the TAF in a clinical sample and its close relationship to its neighboring obsessive-compulsiveness construct. PMID:22315482

  12. Psychometric evaluation of the thought-action fusion scale in a large clinical sample.

    Science.gov (United States)

    Meyer, Joseph F; Brown, Timothy A

    2013-12-01

    This study examined the psychometric properties of the 19-item Thought-Action Fusion (TAF) Scale, a measure of maladaptive cognitive intrusions, in a large clinical sample (N = 700). An exploratory factor analysis (n = 300) yielded two interpretable factors: TAF Moral (TAF-M) and TAF Likelihood (TAF-L). A confirmatory bifactor analysis was conducted on the second portion of the sample (n = 400) to account for possible sources of item covariance using a general TAF factor (subsuming TAF-M) alongside the TAF-L domain-specific factor. The bifactor model provided an acceptable fit to the sample data. Results indicated that global TAF was more strongly associated with a measure of obsessive-compulsiveness than measures of general worry and depression, and the TAF-L dimension was more strongly related to obsessive-compulsiveness than depression. Overall, results support the bifactor structure of the TAF in a clinical sample and its close relationship to its neighboring obsessive-compulsiveness construct.

  13. Lack of association between digit ratio (2D:4D) and assertiveness: replication in a large sample.

    Science.gov (United States)

    Voracek, Martin

    2009-12-01

    Findings regarding within-sex associations of digit ratio (2D:4D), a putative pointer to long-lasting effects of prenatal androgen action, and sexually differentiated personality traits have generally been inconsistent or unreplicable, suggesting that effects in this domain, if any, are likely small. In contrast to evidence from Wilson's important 1983 study, a forerunner of modern 2D:4D research, two recent studies, in 2005 and 2008, by Freeman et al. and Hampson et al. showed that assertiveness, a presumably male-typed personality trait, was not associated with 2D:4D; however, these studies were clearly statistically underpowered. Hence this study examined this question anew, based on a large sample of 491 men and 627 women. Assertiveness was only modestly sexually differentiated, favoring men, and was a positive correlate of age and education and a negative correlate of weight and body mass index among women, but not men. Replicating the two prior studies, 2D:4D was throughout unrelated to assertiveness scores. This null finding was preserved with controls for correlates of assertiveness, also in nonparametric analysis and with tests for curvilinear relations. Discussed are implications of this specific null finding, now replicated in a large sample, for studies of 2D:4D and personality in general, and novel research approaches to proceed in this field.

  14. Neighborhood diversity of large trees shows independent species patterns in a mixed dipterocarp forest in Sri Lanka.

    Science.gov (United States)

    Punchi-Manage, Ruwan; Wiegand, Thorsten; Wiegand, Kerstin; Getzin, Stephan; Huth, Andreas; Gunatilleke, C V Savitri; Gunatilleke, I A U Nimal

    2015-07-01

    Interactions among neighboring individuals influence plant performance and should create spatial patterns in local community structure. In order to assess the role of large trees in generating spatial patterns in local species richness, we used the individual species-area relationship (ISAR) to evaluate the species richness of trees of different size classes (and dead trees) in circular neighborhoods with varying radius around large trees of different focal species. To reveal signals of species interactions, we compared the ISAR function of the individuals of focal species with that of randomly selected nearby locations. We expected that large trees should strongly affect the community structure of smaller trees in their neighborhood, but that these effects should fade away with increasing size class. Unexpectedly, we found that only few focal species showed signals of species interactions with trees of the different size classes and that this was less likely for less abundant focal species. However, the few and relatively weak departures from independence were consistent with expectations of the effect of competition for space and the dispersal syndrome on spatial patterns. A noisy signal of competition for space found for large trees built up gradually with increasing life stage; it was not yet present for large saplings but detectable for intermediates. Additionally, focal species with animal-dispersed seeds showed higher species richness in their neighborhood than those with gravity- and gyration-dispersed seeds. Our analysis across the entire ontogeny from recruits to large trees supports the hypothesis that stochastic effects dilute deterministic species interactions in highly diverse communities. Stochastic dilution is a consequence of the stochastic geometry of biodiversity in species-rich communities where the identities of the nearest neighbors of a given plant are largely unpredictable. While the outcome of local species interactions is governed for each

  15. Investigating sex differences in psychological predictors of snack intake among a large representative sample

    NARCIS (Netherlands)

    Adriaanse, M.A.; Evers, C.; Verhoeven, A.A.C.; de Ridder, D.T.D.

    It is often assumed that there are substantial sex differences in eating behaviour (e.g. women are more likely to be dieters or emotional eaters than men). The present study investigates this assumption in a large representative community sample while incorporating a comprehensive set of

  16. Using Co-Occurrence to Evaluate Belief Coherence in a Large Non Clinical Sample

    Science.gov (United States)

    Pechey, Rachel; Halligan, Peter

    2012-01-01

    Much of the recent neuropsychological literature on false beliefs (delusions) has tended to focus on individual or single beliefs, with few studies actually investigating the relationship or co-occurrence between different types of co-existing beliefs. Quine and Ullian proposed the hypothesis that our beliefs form an interconnected web in which the beliefs that make up that system must somehow “cohere” with one another and avoid cognitive dissonance. As such, beliefs are unlikely to be encapsulated (i.e., exist in isolation from other beliefs). The aim of this preliminary study was to empirically evaluate the probability of belief co-occurrence as one indicator of coherence in a large sample of subjects involving three different thematic sets of beliefs (delusion-like, paranormal & religious, and societal/cultural). Results showed that the degree of belief co-endorsement between beliefs within thematic groupings was greater than random occurrence, lending support to Quine and Ullian’s coherentist account. Some associations, however, were relatively weak, providing for well-established examples of cognitive dissonance. PMID:23155383
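
    The co-occurrence test described here amounts to comparing observed pairwise co-endorsement rates against the rates expected if beliefs were endorsed independently. A minimal sketch, assuming a hypothetical boolean subjects-by-beliefs endorsement matrix:

        import numpy as np

        def co_endorsement_excess(endorsed):
            """Observed minus independence-expected co-endorsement rates.
            endorsed: (n_subjects, n_beliefs) boolean matrix. Positive
            off-diagonal entries mark belief pairs endorsed together more
            often than chance would predict; the diagonal is ignored."""
            e = np.asarray(endorsed, dtype=float)
            p = e.mean(axis=0)                    # marginal endorsement rates
            observed = (e.T @ e) / len(e)         # pairwise co-endorsement rates
            return observed - np.outer(p, p)      # excess over independence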

  17. Using co-occurrence to evaluate belief coherence in a large non clinical sample.

    Directory of Open Access Journals (Sweden)

    Rachel Pechey

    Full Text Available Much of the recent neuropsychological literature on false beliefs (delusions) has tended to focus on individual or single beliefs, with few studies actually investigating the relationship or co-occurrence between different types of co-existing beliefs. Quine and Ullian proposed the hypothesis that our beliefs form an interconnected web in which the beliefs that make up that system must somehow "cohere" with one another and avoid cognitive dissonance. As such, beliefs are unlikely to be encapsulated (i.e., exist in isolation from other beliefs). The aim of this preliminary study was to empirically evaluate the probability of belief co-occurrence as one indicator of coherence in a large sample of subjects involving three different thematic sets of beliefs (delusion-like, paranormal & religious, and societal/cultural). Results showed that the degree of belief co-endorsement between beliefs within thematic groupings was greater than random occurrence, lending support to Quine and Ullian's coherentist account. Some associations, however, were relatively weak, providing for well-established examples of cognitive dissonance.

  18. Large-volume constant-concentration sampling technique coupling with surface-enhanced Raman spectroscopy for rapid on-site gas analysis.

    Science.gov (United States)

    Zhang, Zhuomin; Zhan, Yisen; Huang, Yichun; Li, Gongke

    2017-08-05

    In this work, a portable large-volume constant-concentration (LVCC) sampling technique coupled with surface-enhanced Raman spectroscopy (SERS) was developed for rapid on-site gas analysis based on suitable derivatization methods. The LVCC sampling technique mainly consisted of a specially designed sampling cell, comprising a rigid sample container and a flexible sampling bag, and an absorption-derivatization module with a portable pump and a gas flowmeter. The LVCC sampling technique allowed a large, alterable and well-controlled sampling volume, which kept the concentration of the gas target in the headspace phase constant during the entire sampling process and made the sampling result more representative. Moreover, absorption and derivatization of the gas target during the LVCC sampling process were efficiently merged in one step using bromine-thiourea and OPA-NH4+ strategies for ethylene and SO2, respectively, which made the LVCC sampling technique conveniently adaptable to subsequent SERS analysis. Finally, a new LVCC sampling-SERS method was developed and successfully applied to the rapid analysis of trace ethylene and SO2 from fruits. Trace ethylene and SO2 from real fruit samples could be accurately quantified by this method, and the concentration fluctuations of ethylene and SO2 during the entire LVCC sampling process were confirmed to be minor. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Sample preparation and analysis of large 238PuO2 and ThO2 spheres

    International Nuclear Information System (INIS)

    Wise, R.L.; Selle, J.E.

    1975-01-01

    A program was initiated to determine the density gradient across a large spherical 238PuO2 sample produced by vacuum hot pressing. Due to the high thermal output of the ceramic, a thin section was necessary to prevent overheating of the plastic mount. Techniques were developed for cross sectioning, mounting, grinding, and polishing of the sample. The polished samples were then analyzed on a quantitative image analyzer to determine the density as a function of location across the sphere. The techniques for indexing, analyzing, and reducing the data are described. Typical results obtained on a ThO2 simulant sphere are given

  20. Application of Conventional and K0-Based Internal Monostandard NAA Using Reactor Neutrons for Compositional Analysis of Large Samples

    International Nuclear Information System (INIS)

    Reddy, A.V.R.; Acharya, R.; Swain, K. K.; Pujari, P.K.

    2018-01-01

    Large sample neutron activation analysis (LSNAA) work was carried out for samples of coal, uranium ore, stainless steel, ancient and new clay potteries, dross and a clay pottery replica from Peru using low-flux, highly thermalized irradiation sites. Large as well as non-standard geometry samples (1 g - 0.5 kg) were irradiated using the thermal column (TC) facility of the Apsara reactor as well as the graphite reflector position of the critical facility (CF) at Bhabha Atomic Research Centre, Mumbai. Small size (10 - 500 mg) samples were also irradiated at the core position of the Apsara reactor, the pneumatic carrier facility (PCF) of the Dhruva reactor and the pneumatic fast transfer facility (PFTS) of the KAMINI reactor. Irradiation positions were characterized using an indium flux monitor for TC and CF, whereas multiple monitors were used at the other positions. Radioactive assay was carried out using high resolution gamma ray spectrometry. The k0-based internal monostandard NAA (IM-NAA) method was used to determine elemental concentration ratios with respect to Na in the coal and uranium ore samples, Sc in the pottery samples and Fe in stainless steel. In-situ relative detection efficiency for each irradiated sample was obtained using γ rays of the activation products in the required energy range. Representative sample sizes were arrived at for coal and uranium ore from the plots of La/Na ratios as a function of sample mass. For the stainless steel sample of SS 304L, absolute concentrations were calculated from concentration ratios by a mass balance approach, since all the major elements (Fe, Cr, Ni and Mn) were amenable to NAA. Concentration ratios obtained by IM-NAA were used for a provenance study of 30 clay potteries obtained from excavated Buddhist sites of AP, India. The La to Ce concentration ratios were used for preliminary grouping, and concentration ratios of 15 elements with respect to Sc were used in statistical cluster analysis for confirmation of the grouping. Concentrations of Au and Ag were determined in not so
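
    For orientation, the working equation of k0-based IM-NAA has the following general form; this is an assumption drawn from the standard k0-NAA literature, not a formula quoted from this report:

        \frac{c_x}{c_m} = \frac{A_x}{A_m}\,
            \frac{k_{0,\mathrm{Au}}(m)}{k_{0,\mathrm{Au}}(x)}\,
            \frac{f + Q_{0,m}(\alpha)}{f + Q_{0,x}(\alpha)}\,
            \frac{\varepsilon_m}{\varepsilon_x}

    Here c_x/c_m is the concentration ratio of element x to the internal monostandard m, A are decay-corrected gamma peak areas, k0,Au are tabulated k0 factors, f is the thermal-to-epithermal flux ratio, Q0(α) is the resonance-integral-to-cross-section ratio, and ε are the in-situ relative detection efficiencies at the respective gamma energies.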

  1. Large-volume constant-concentration sampling technique coupling with surface-enhanced Raman spectroscopy for rapid on-site gas analysis

    Science.gov (United States)

    Zhang, Zhuomin; Zhan, Yisen; Huang, Yichun; Li, Gongke

    2017-08-01

    In this work, a portable large-volume constant-concentration (LVCC) sampling technique coupled with surface-enhanced Raman spectroscopy (SERS) was developed for rapid on-site gas analysis based on suitable derivatization methods. The LVCC sampling technique mainly consisted of a specially designed sampling cell, comprising a rigid sample container and a flexible sampling bag, and an absorption-derivatization module with a portable pump and a gas flowmeter. The LVCC sampling technique allowed a large, alterable and well-controlled sampling volume, which kept the concentration of the gas target in the headspace phase constant during the entire sampling process and made the sampling result more representative. Moreover, absorption and derivatization of the gas target during the LVCC sampling process were efficiently merged in one step using bromine-thiourea and OPA-NH4+ strategies for ethylene and SO2, respectively, which made the LVCC sampling technique conveniently adaptable to subsequent SERS analysis. Finally, a new LVCC sampling-SERS method was developed and successfully applied to the rapid analysis of trace ethylene and SO2 from fruits. Trace ethylene and SO2 from real fruit samples could be accurately quantified by this method. Concentration fluctuations of ethylene and SO2 during the entire LVCC sampling process were confirmed to be minor, and recoveries from real samples were achieved in the range of 95.0-101% and 97.0-104%, respectively. It is expected that the portable LVCC sampling technique will pave the way for rapid on-site analysis of accurate concentrations of trace gas targets from real samples by SERS.

  2. Examining gray matter structure associated with academic performance in a large sample of Chinese high school students.

    Science.gov (United States)

    Wang, Song; Zhou, Ming; Chen, Taolin; Yang, Xun; Chen, Guangxiang; Wang, Meiyun; Gong, Qiyong

    2017-04-18

    Achievement in school is crucial for students to be able to pursue successful careers and lead happy lives in the future. Although many psychological attributes have been found to be associated with academic performance, the neural substrates of academic performance remain largely unknown. Here, we investigated the relationship between brain structure and academic performance in a large sample of high school students via structural magnetic resonance imaging (S-MRI), using a voxel-based morphometry (VBM) approach. The whole-brain regression analyses showed that higher academic performance was related to greater regional gray matter density (rGMD) of the left dorsolateral prefrontal cortex (DLPFC), which is considered a neural center at the intersection of cognitive and non-cognitive functions. Furthermore, mediation analyses suggested that general intelligence partially mediated the impact of the left DLPFC density on academic performance. These results persisted even after adjusting for the effect of family socioeconomic status (SES). In short, our findings reveal a potential neuroanatomical marker for academic performance and highlight the role of general intelligence in explaining the relationship between brain structure and academic performance.
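
    The partial mediation reported above is typically quantified with the product-of-coefficients approach. A minimal ordinary-least-squares sketch on plain numeric arrays (the imaging preprocessing is ignored entirely; variable names are illustrative):

        import numpy as np
        import statsmodels.api as sm

        def indirect_effect(x, mediator, y):
            """Product-of-coefficients mediation: path a (x -> mediator)
            times path b (mediator -> y, controlling for x)."""
            a = sm.OLS(mediator, sm.add_constant(x)).fit().params[1]
            fit_y = sm.OLS(y, sm.add_constant(np.column_stack([x, mediator]))).fit()
            b, direct = fit_y.params[2], fit_y.params[1]
            return a * b, direct   # indirect (mediated) and direct effects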

  3. 19 CFR 113.75 - Bond conditions for deferral of duty on large yachts imported for sale at United States boat shows.

    Science.gov (United States)

    2010-04-01

    ... yachts imported for sale at United States boat shows. 113.75 Section 113.75 Customs Duties U.S. CUSTOMS... Customs Bond Conditions § 113.75 Bond conditions for deferral of duty on large yachts imported for sale at....C. 1484b for a dutiable large yacht imported for sale at a United States boat show must conform to...

  4. Cosmological implications of a large complete quasar sample.

    Science.gov (United States)

    Segal, I E; Nicoll, J F

    1998-04-28

    Objective and reproducible determinations of the probabilistic significance levels of the deviations between theoretical cosmological prediction and direct model-independent observation are made for the Large Bright Quasar Sample [Foltz, C., Chaffee, F. H., Hewett, P. C., MacAlpine, G. M., Turnshek, D. A., et al. (1987) Astron. J. 94, 1423-1460]. The Expanding Universe model as represented by the Friedmann-Lemaître cosmology with parameters q0 = 0, Λ = 0, denoted as C1, and chronometric cosmology (no relevant adjustable parameters), denoted as C2, are the cosmologies considered. The mean and the dispersion of the apparent magnitudes and the slope of the apparent magnitude-redshift relation are the directly observed statistics predicted. The C1 predictions of these cosmology-independent quantities are deviant by as much as 11σ from direct observation; none of the C2 predictions deviate by >2σ. The C1 deviations may be reconciled with theory by the hypothesis of quasar "evolution," which, however, appears incapable of being substantiated through direct observation. The excellent quantitative agreement of the C1 deviations with those predicted by C2 without adjustable parameters for the results of analysis predicated on C1 indicates that the evolution hypothesis may well be a theoretical artifact.

  5. A study of diabetes mellitus within a large sample of Australian twins

    DEFF Research Database (Denmark)

    Condon, Julianne; Shaw, Joanne E; Luciano, Michelle

    2008-01-01

    Twin studies of diabetes mellitus can help elucidate genetic and environmental factors in etiology and can provide valuable biological samples for testing functional hypotheses, for example using expression and methylation studies of discordant pairs. We searched the volunteer Australian Twin Registry (19,387 pairs) for twins with diabetes using disease checklists from nine different surveys conducted from 1980-2000. After follow-up questionnaires to the twins and their doctors to confirm diagnoses, we eventually identified 46 pairs where one or both had type 1 diabetes (T1D), 113 pairs with type 2 diabetes (T2D), 41 female pairs with gestational diabetes (GD), 5 pairs with impaired glucose tolerance (IGT) and one pair with MODY. Heritabilities of T1D, T2D and GD were all high, but our samples did not have the power to detect effects of shared environment unless they were very large.

  6. BROAD ABSORPTION LINE DISAPPEARANCE ON MULTI-YEAR TIMESCALES IN A LARGE QUASAR SAMPLE

    Energy Technology Data Exchange (ETDEWEB)

    Filiz Ak, N.; Brandt, W. N.; Schneider, D. P. [Department of Astronomy and Astrophysics, Pennsylvania State University, University Park, PA 16802 (United States); Hall, P. B. [Department of Physics and Astronomy, York University, 4700 Keele St., Toronto, Ontario M3J 1P3 (Canada); Anderson, S. F.; Gibson, R. R. [Astronomy Department, University of Washington, Seattle, WA 98195 (United States); Lundgren, B. F. [Department of Physics, Yale University, New Haven, CT 06511 (United States); Myers, A. D. [Department of Physics and Astronomy, University of Wyoming, Laramie, WY 82071 (United States); Petitjean, P. [Institut d'Astrophysique de Paris, Université Paris 6, F-75014, Paris (France); Ross, Nicholas P. [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 92420 (United States); Shen Yue [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, MS-51, Cambridge, MA 02138 (United States); York, D. G. [Department of Astronomy and Astrophysics, and Enrico Fermi Institute, University of Chicago, 5640 S. Ellis Ave., Chicago, IL 60637 (United States); Bizyaev, D.; Brinkmann, J.; Malanushenko, E.; Oravetz, D. J.; Pan, K.; Simmons, A. E. [Apache Point Observatory, P.O. Box 59, Sunspot, NM 88349-0059 (United States); Weaver, B. A., E-mail: nfilizak@astro.psu.edu [Center for Cosmology and Particle Physics, New York University, New York, NY 10003 (United States)

    2012-10-01

    We present 21 examples of C IV broad absorption line (BAL) trough disappearance in 19 quasars selected from systematic multi-epoch observations of 582 bright BAL quasars (1.9 < z < 4.5) by the Sloan Digital Sky Survey-I/II (SDSS-I/II) and SDSS-III. The observations span 1.1-3.9 yr rest-frame timescales, longer than have been sampled in many previous BAL variability studies. On these timescales, ≈2.3% of C IV BAL troughs disappear and ≈3.3% of BAL quasars show a disappearing trough. These observed frequencies suggest that many C IV BAL absorbers spend on average at most a century along our line of sight to their quasar. Ten of the 19 BAL quasars showing C IV BAL disappearance have apparently transformed from BAL to non-BAL quasars; these are the first reported examples of such transformations. The BAL troughs that disappear tend to be those with small-to-moderate equivalent widths, relatively shallow depths, and high outflow velocities. Other non-disappearing C IV BALs in those nine objects having multiple troughs tend to weaken when one of them disappears, indicating a connection between the disappearing and non-disappearing troughs, even for velocity separations as large as 10,000-15,000 km s⁻¹. We discuss possible origins of this connection including disk-wind rotation and changes in shielding gas.

  7. Is Business Failure Due to Lack of Effort? Empirical Evidence from a Large Administrative Sample

    NARCIS (Netherlands)

    Ejrnaes, M.; Hochguertel, S.

    2013-01-01

    Does insurance provision reduce entrepreneurs' effort to avoid business failure? We exploit unique features of the voluntary Danish unemployment insurance (UI) scheme, that is available to the self-employed. Using a large sample of self-employed individuals, we estimate the causal effect of

  8. In-situ high resolution particle sampling by large time sequence inertial spectrometry

    International Nuclear Information System (INIS)

    Prodi, V.; Belosi, F.

    1990-09-01

    In situ sampling is always preferred, when possible, because of the artifacts that can arise when the aerosol has to flow through long sampling lines. On the other hand, the amount of possible losses can be calculated with some confidence only when the size distribution can be measured with a sufficient precision and the losses are not too large. This makes it desirable to sample directly in the vicinity of the aerosol source or containment. High temperature sampling devices with a detailed aerodynamic separation are extremely useful to this purpose. Several measurements are possible with the inertial spectrometer (INSPEC), but not with cascade impactors or cyclones. INSPEC - INertial SPECtrometer - has been conceived to measure the size distribution of aerosols by separating the particles while airborne according to their size and collecting them on a filter. It consists of a channel of rectangular cross-section with a 90 degree bend. Clean air is drawn through the channel, with a thin aerosol sheath injected close to the inner wall. Due to the bend, the particles are separated according to their size, leaving the original streamline by a distance which is a function of particle inertia and resistance, i.e. of aerodynamic diameter. The filter collects all the particles of the same aerodynamic size at the same distance from the inlet, in a continuous distribution. INSPEC particle separation at high temperature (up to 800 °C) has been tested with Zirconia particles as calibration aerosols. The feasibility study has been concerned with resolution and time sequence sampling capabilities under high temperature (700 °C)

  9. Effects of large volume injection of aliphatic alcohols as sample diluents on the retention of low hydrophobic solutes in reversed-phase liquid chromatography.

    Science.gov (United States)

    David, Victor; Galaon, Toma; Aboul-Enein, Hassan Y

    2014-01-03

    Recent studies showed that injection of large volumes of hydrophobic solvents used as sample diluents can be applied in reversed-phase liquid chromatography (RP-LC). This study reports systematic research focused on the influence of a series of aliphatic alcohols (from methanol to 1-octanol) on the retention process in RP-LC when large volumes of sample are injected on the column. Several model analytes with low hydrophobic character were studied by RP-LC, for mobile phases containing methanol or acetonitrile as organic modifiers in different proportions with the aqueous component. It was found that, starting with 1-butanol, the aliphatic alcohols can be used as sample solvents and can be injected in high volumes, but they may influence the retention factor and peak shape of the dissolved solutes. The dependence of the retention factor of the studied analytes on the injection volume of these alcohols is linear, with its value decreasing as the sample volume is increased. The retention process when injecting up to 200 μL of the higher alcohols also depends on the content of the organic modifier (methanol or acetonitrile) in the mobile phase. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. A topological analysis of large-scale structure, studied using the CMASS sample of SDSS-III

    International Nuclear Information System (INIS)

    Parihar, Prachi; Gott, J. Richard III; Vogeley, Michael S.; Choi, Yun-Young; Kim, Juhan; Kim, Sungsoo S.; Speare, Robert; Brownstein, Joel R.; Brinkmann, J.

    2014-01-01

    We study the three-dimensional genus topology of large-scale structure using the northern region of the CMASS Data Release 10 (DR10) sample of the SDSS-III Baryon Oscillation Spectroscopic Survey. We select galaxies with redshift 0.452 < z < 0.625 and with a stellar mass M_stellar > 10^11.56 M_☉. We study the topology at two smoothing lengths: R_G = 21 h⁻¹ Mpc and R_G = 34 h⁻¹ Mpc. The genus topology studied at the R_G = 21 h⁻¹ Mpc scale results in the highest genus amplitude observed to date. The CMASS sample yields a genus curve that is characteristic of one produced by Gaussian random phase initial conditions. The data thus support the standard model of inflation where random quantum fluctuations in the early universe produced Gaussian random phase initial conditions. Modest deviations in the observed genus from random phase are as expected from shot noise effects and the nonlinear evolution of structure. We suggest the use of a fitting formula motivated by perturbation theory to characterize the shift and asymmetries in the observed genus curve with a single parameter. We construct 54 mock SDSS CMASS surveys along the past light cone from the Horizon Run 3 (HR3) N-body simulations, where gravitationally bound dark matter subhalos are identified as the sites of galaxy formation. We study the genus topology of the HR3 mock surveys with the same geometry and sampling density as the observational sample and find the observed genus topology to be consistent with ΛCDM as simulated by the HR3 mock samples. We conclude that the topology of the large-scale structure in the SDSS CMASS sample is consistent with cosmological models having primordial Gaussian density fluctuations growing in accordance with general relativity to form galaxies in massive dark matter halos.
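
    For context, the genus curve of a Gaussian random field, against which the CMASS curve is compared, has the standard form (a textbook result assumed here, not taken from the paper)

        g(\nu) = N\,(1 - \nu^{2})\,e^{-\nu^{2}/2},

    where ν is the density threshold in units of the standard deviation of the smoothed field and the amplitude N depends on the power spectrum and the smoothing length R_G; the fitting formula mentioned above parameterizes shifts and asymmetries of the observed curve relative to this shape.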

  11. Report: Independent Environmental Sampling Shows Some Properties Designated by EPA as Available for Use Had Some Contamination

    Science.gov (United States)

    Report #15-P-0221, July 21, 2015. Some OIG sampling results showed contamination was still present at sites designated by the EPA as ready for reuse. This was unexpected and could signal a need to implement changes to ensure human health protection.

  12. Waardenburg syndrome: Novel mutations in a large Brazilian sample.

    Science.gov (United States)

    Bocángel, Magnolia Astrid Pretell; Melo, Uirá Souto; Alves, Leandro Ucela; Pardono, Eliete; Lourenço, Naila Cristina Vilaça; Marcolino, Humberto Vicente Cezar; Otto, Paulo Alberto; Mingroni-Netto, Regina Célia

    2018-06-01

    This paper deals with the molecular investigation of Waardenburg syndrome (WS) in a sample of 49 clinically diagnosed probands (most from southeastern Brazil), 24 of them having the type 1 (WS1) variant (10 familial and 14 isolated cases) and 25 being affected by the type 2 (WS2) variant (five familial and 20 isolated cases). Sequential Sanger sequencing of all coding exons of PAX3, MITF, EDN3, EDNRB, SOX10 and SNAI2 genes, followed by CNV detection by MLPA of PAX3, MITF and SOX10 genes in selected cases revealed many novel pathogenic variants. Molecular screening, performed in all patients, revealed 19 causative variants (19/49 = 38.8%), six of them being large whole-exon deletions detected by MLPA, seven (four missense and three nonsense substitutions) resulting from single nucleotide substitutions (SNV), and six representing small indels. A pair of dizygotic affected female twins presented the c.430delC variant in SOX10, but the mutation, imputed to gonadal mosaicism, was not found in their unaffected parents. At least 10 novel causative mutations, described in this paper, were found in this Brazilian sample. Copy-number-variation detected by MLPA identified the causative mutation in 12.2% of our cases, corresponding to 31.6% of all causative mutations. In the majority of cases, the deletions were sporadic, since they were not present in the parents of isolated cases. Our results, as a whole, reinforce the fact that the screening of copy-number-variants by MLPA is a powerful tool to identify the molecular cause in WS patients. Copyright © 2018 Elsevier Masson SAS. All rights reserved.

  13. 19 CFR Appendix C to Part 113 - Bond for Deferral of Duty on Large Yachts Imported for Sale at United States Boat Shows

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 1 2010-04-01 2010-04-01 false Bond for Deferral of Duty on Large Yachts Imported... Appendix C to Part 113—Bond for Deferral of Duty on Large Yachts Imported for Sale at United States Boat Shows Bond for Deferral of Duty on Large Yachts Imported for Sale at United States Boat Shows ____, as...

  14. Imaging a Large Sample with Selective Plane Illumination Microscopy Based on Multiple Fluorescent Microsphere Tracking

    Science.gov (United States)

    Ryu, Inkeon; Kim, Daekeun

    2018-04-01

    A typical selective plane illumination microscopy (SPIM) image is fundamentally limited in size by the field of view, which is a characteristic of the objective lens. If an image larger than the imaging area of the sample is to be obtained, image stitching, which combines step-scanned images into a single panoramic image, is required. However, accurately registering the step-scanned images is very difficult because the SPIM system uses a customized sample mount in which uncertainties exist for both the translational and the rotational motions. In this paper, an image registration technique based on multiple fluorescent microsphere tracking is proposed, with a view to quantifying the constellations of, and measuring the distances between, at least two fluorescent microspheres embedded in the sample. Image stitching results are demonstrated for optically cleared large tissue with various staining methods. Compensation for the effect of the sample rotation that occurs during the translational motion in the sample mount is also discussed.
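
    Registration from matched fluorescent microspheres, as proposed above, reduces to estimating the rigid transform that best maps one set of bead centroids onto another. A minimal sketch using the Kabsch (SVD) solution, with matched centroid arrays as assumed inputs:

        import numpy as np

        def rigid_transform(src, dst):
            """Least-squares rotation R and translation t with dst ≈ R @ src + t
            (Kabsch algorithm on matched microsphere centroids)."""
            src, dst = np.asarray(src, float), np.asarray(dst, float)
            mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
            H = (src - mu_s).T @ (dst - mu_d)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
            D = np.diag([1.0] * (H.shape[0] - 1) + [d])
            R = Vt.T @ D @ U.T
            return R, mu_d - R @ mu_s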

  15. The Brief Negative Symptom Scale (BNSS): Independent validation in a large sample of Italian patients with schizophrenia.

    Science.gov (United States)

    Mucci, A; Galderisi, S; Merlotti, E; Rossi, A; Rocca, P; Bucci, P; Piegari, G; Chieffi, M; Vignapiano, A; Maj, M

    2015-07-01

    The Brief Negative Symptom Scale (BNSS) was developed to address the main limitations of the existing scales for the assessment of negative symptoms of schizophrenia. The initial validation of the scale by the group involved in its development demonstrated good convergent and discriminant validity, and a factor structure confirming the two domains of negative symptoms (reduced emotional/verbal expression and anhedonia/asociality/avolition). However, only relatively small samples of patients with schizophrenia were investigated. Further independent validation in large clinical samples might be instrumental to the broad diffusion of the scale in clinical research. The present study aimed to examine the BNSS inter-rater reliability, convergent/discriminant validity and factor structure in a large Italian sample of outpatients with schizophrenia. Our results confirmed the excellent inter-rater reliability of the BNSS (the intraclass correlation coefficient ranged from 0.81 to 0.98 for individual items and was 0.98 for the total score). The convergent validity measures had r values from 0.62 to 0.77, while the divergent validity measures had r values from 0.20 to 0.28 in the main sample (n=912) and in a subsample without clinically significant levels of depression and extrapyramidal symptoms (n=496). The BNSS factor structure was supported in both groups. The study confirms that the BNSS is a promising measure for quantifying negative symptoms of schizophrenia in large multicenter clinical studies. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  16. Large-volume injection of sample diluents not miscible with the mobile phase as an alternative approach in sample preparation for bioanalysis: an application for fenspiride bioequivalence.

    Science.gov (United States)

    Medvedovici, Andrei; Udrescu, Stefan; Albu, Florin; Tache, Florentin; David, Victor

    2011-09-01

    Liquid-liquid extraction of target compounds from biological matrices followed by the injection of a large volume from the organic layer into the chromatographic column operated under reversed-phase (RP) conditions would successfully combine the selectivity and the straightforward character of the procedure in order to enhance sensitivity, compared with the usual approach of involving solvent evaporation and residue re-dissolution. Large-volume injection of samples in diluents that are not miscible with the mobile phase was recently introduced in chromatographic practice. The risk of random errors produced during the manipulation of samples is also substantially reduced. A bioanalytical method designed for the bioequivalence of fenspiride containing pharmaceutical formulations was based on a sample preparation procedure involving extraction of the target analyte and the internal standard (trimetazidine) from alkalinized plasma samples in 1-octanol. A volume of 75 µl from the octanol layer was directly injected on a Zorbax SB C18 Rapid Resolution, 50 mm length × 4.6 mm internal diameter × 1.8 µm particle size column, with the RP separation being carried out under gradient elution conditions. Detection was made through positive ESI and MS/MS. Aspects related to method development and validation are discussed. The bioanalytical method was successfully applied to assess bioequivalence of a modified release pharmaceutical formulation containing 80 mg fenspiride hydrochloride during two different studies carried out as single-dose administration under fasting and fed conditions (four arms), and multiple doses administration, respectively. The quality attributes assigned to the bioanalytical method, as resulting from its application to the bioequivalence studies, are highlighted and fully demonstrate that sample preparation based on large-volume injection of immiscible diluents has an increased potential for application in bioanalysis.

  17. Psychometric Properties of the Penn State Worry Questionnaire for Children in a Large Clinical Sample

    Science.gov (United States)

    Pestle, Sarah L.; Chorpita, Bruce F.; Schiffman, Jason

    2008-01-01

    The Penn State Worry Questionnaire for Children (PSWQ-C; Chorpita, Tracey, Brown, Collica, & Barlow, 1997) is a 14-item self-report measure of worry in children and adolescents. Although the PSWQ-C has demonstrated favorable psychometric properties in small clinical and large community samples, this study represents the first psychometric…

  18. Analysis of reflection-peak wavelengths of sampled fiber Bragg gratings with large chirp.

    Science.gov (United States)

    Zou, Xihua; Pan, Wei; Luo, Bin

    2008-09-10

    The reflection-peak wavelengths (RPWs) in the spectra of sampled fiber Bragg gratings with large chirp (SFBGs-LC) are theoretically investigated. Such RPWs are divided into two parts, the RPWs of equivalent uniform SFBGs (U-SFBGs) and the wavelength shift caused by the large chirp in the grating period (CGP). We propose a quasi-equivalent transform to deal with the CGP. That is, the CGP is transferred into quasi-equivalent phase shifts to directly derive the Fourier transform of the refractive index modulation. Then, in the case of both the direct and the inverse Talbot effect, the wavelength shift is obtained from the Fourier transform. Finally, the RPWs of SFBGs-LC can be achieved by combining the wavelength shift and the RPWs of equivalent U-SFBGs. Several simulations are shown to numerically confirm these predicted RPWs of SFBGs-LC.

  19. Application of k0-based internal monostandard NAA for large sample analysis of clay pottery. As a part of inter comparison exercise

    International Nuclear Information System (INIS)

    Acharya, R.; Dasari, K.B.; Pujari, P.K.; Swain, K.K.; Shinde, A.D.; Reddy, A.V.R.

    2014-01-01

    As a part of an inter-comparison exercise of an IAEA Coordinated Research Project on large sample neutron activation analysis, a large size and non-standard geometry pottery replica (obtained from Peru) was analyzed by k0-based internal monostandard neutron activation analysis (IM-NAA). Two large sub-samples (0.40 and 0.25 kg) were irradiated at the graphite reflector position of the AHWR Critical Facility at BARC, Trombay, Mumbai, India. Small samples (100-200 mg) were also analyzed by IM-NAA for comparison purposes. Radioactive assay was carried out using a 40% relative efficiency HPGe detector. To examine the homogeneity of the sample, counting was also carried out using an X-Z rotary scanning unit. In situ relative detection efficiency was evaluated using gamma rays of the activation products in the irradiated sample in the energy range of 122-2,754 keV. Elemental concentration ratios with respect to Na of small size (100 mg mass) as well as large size (15 and 400 g) samples were used to check the homogeneity of the samples. Concentration ratios of 18 elements such as K, Sc, Cr, Mn, Fe, Co, Zn, As, Rb, Cs, La, Ce, Sm, Eu, Yb, Lu, Hf and Th with respect to Na (internal monostandard) were calculated using IM-NAA. Absolute concentrations were arrived at for both large and small samples using the Na concentration obtained from the relative method of NAA. The percentage combined uncertainties at the ±1s confidence limit on the determined values were in the range of 3-9%. Two IAEA reference materials, SL-1 and SL-3, were analyzed by IM-NAA to evaluate the accuracy of the method. (author)

  20. Scanning tunneling spectroscopy under large current flow through the sample.

    Science.gov (United States)

    Maldonado, A; Guillamón, I; Suderow, H; Vieira, S

    2011-07-01

    We describe a method for scanning tunneling microscopy/spectroscopy imaging at very low temperatures while driving a constant electric current of up to some tens of mA through the sample. It provides a new local probe, which we term current-driven scanning tunneling microscopy/spectroscopy. We show spectroscopic and topographic measurements under the application of a current in superconducting Al and NbSe2 at 100 mK. Prospective applications of this local imaging method include local vortex motion experiments and Doppler-shift studies of the local density of states.

  1. Neurocognitive impairment in a large sample of homeless adults with mental illness.

    Science.gov (United States)

    Stergiopoulos, V; Cusi, A; Bekele, T; Skosireva, A; Latimer, E; Schütz, C; Fernando, I; Rourke, S B

    2015-04-01

    This study examines neurocognitive functioning in a large, well-characterized sample of homeless adults with mental illness and assesses demographic and clinical factors associated with neurocognitive performance. A total of 1500 homeless adults with mental illness enrolled in the At Home Chez Soi study completed neuropsychological measures assessing speed of information processing, memory, and executive functioning. Sociodemographic and clinical data were also collected. Linear regression analyses were conducted to examine factors associated with neurocognitive performance. Approximately half of our sample met criteria for psychosis, major depressive disorder, and alcohol or substance use disorder, and nearly half had experienced severe traumatic brain injury. Overall, 72% of participants demonstrated cognitive impairment, including deficits in processing speed (48%), verbal learning (71%) and recall (67%), and executive functioning (38%). The overall statistical model explained 19.8% of the variance in the neurocognitive summary score, with reduced neurocognitive performance associated with older age, lower education, first language other than English or French, Black or Other ethnicity, and the presence of psychosis. Homeless adults with mental illness experience impairment in multiple neuropsychological domains. Much of the variance in our sample's cognitive performance remains unexplained, highlighting the need for further research in the mechanisms underlying cognitive impairment in this population. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  2. On the accuracy of protein determination in large biological samples by prompt gamma neutron activation analysis

    International Nuclear Information System (INIS)

    Kasviki, K.; Stamatelatos, I.E.; Yannakopoulou, E.; Papadopoulou, P.; Kalef-Ezra, J.

    2007-01-01

    A prompt gamma neutron activation analysis (PGNAA) facility has been developed for the determination of nitrogen and thus total protein in large volume biological samples or the whole body of small animals. In the present work, the accuracy of nitrogen determination by PGNAA in phantoms of known composition as well as in four raw ground meat samples of about 1 kg mass was examined. Dumas combustion and Kjeldahl techniques were also used for the assessment of nitrogen concentration in the meat samples. No statistically significant differences were found between the concentrations assessed by the three techniques. The results of this work demonstrate the applicability of PGNAA for the assessment of total protein in biological samples of 0.25-1.5 kg mass, such as a meat sample or the body of small animal even in vivo with an equivalent radiation dose of about 40 mSv

  3. On the accuracy of protein determination in large biological samples by prompt gamma neutron activation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kasviki, K. [Institute of Nuclear Technology and Radiation Protection, NCSR 'Demokritos', Aghia Paraskevi, Attikis 15310 (Greece); Medical Physics Laboratory, Medical School, University of Ioannina, Ioannina 45110 (Greece); Stamatelatos, I.E. [Institute of Nuclear Technology and Radiation Protection, NCSR 'Demokritos', Aghia Paraskevi, Attikis 15310 (Greece)], E-mail: ion@ipta.demokritos.gr; Yannakopoulou, E. [Institute of Physical Chemistry, NCSR 'Demokritos', Aghia Paraskevi, Attikis 15310 (Greece); Papadopoulou, P. [Institute of Technology of Agricultural Products, NAGREF, Lycovrissi, Attikis 14123 (Greece); Kalef-Ezra, J. [Medical Physics Laboratory, Medical School, University of Ioannina, Ioannina 45110 (Greece)

    2007-10-15

    A prompt gamma neutron activation analysis (PGNAA) facility has been developed for the determination of nitrogen and thus total protein in large volume biological samples or the whole body of small animals. In the present work, the accuracy of nitrogen determination by PGNAA in phantoms of known composition as well as in four raw ground meat samples of about 1 kg mass was examined. Dumas combustion and Kjeldahl techniques were also used for the assessment of nitrogen concentration in the meat samples. No statistically significant differences were found between the concentrations assessed by the three techniques. The results of this work demonstrate the applicability of PGNAA for the assessment of total protein in biological samples of 0.25-1.5 kg mass, such as a meat sample or the body of small animal even in vivo with an equivalent radiation dose of about 40 mSv.

  4. Large sample NAA of a pottery replica utilizing thermal neutron flux at AHWR critical facility and X-Z rotary scanning unit

    International Nuclear Information System (INIS)

    Acharya, R.; Dasari, K.B.; Pujari, P.K.; Swain, K.K.; Shinde, A.D.; Reddy, A.V.R.

    2013-01-01

    Large sample neutron activation analysis (LSNAA) of a clay pottery replica from Peru was carried out using the low neutron flux graphite reflector position of the Advanced Heavy Water Reactor (AHWR) critical facility. This work was taken up as a part of an inter-comparison exercise under the IAEA CRP on LSNAA of archaeological objects. The irradiated large size sample, placed on an X-Z rotary scanning unit, was assayed using a 40% relative efficiency HPGe detector. The k0-based internal monostandard NAA (IM-NAA) in conjunction with in-situ relative detection efficiency was used to calculate concentration ratios of 12 elements with respect to Na. Analyses of both small and large size samples were carried out to check homogeneity and to arrive at absolute concentrations. (author)

  5. Evaluation of single and two-stage adaptive sampling designs for estimation of density and abundance of freshwater mussels in a large river

    Science.gov (United States)

    Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.

    2011-01-01

    Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.
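
    The simulation logic can be illustrated compactly: generate a clustered point pattern, sample it repeatedly with randomly placed quadrats, and summarize the precision of the density estimate. The toy below uses a Thomas cluster process and plain random quadrat sampling only; the adaptive and two-stage designs evaluated by the authors are more involved.

        import numpy as np

        rng = np.random.default_rng(7)

        def thomas_process(n_parents, mean_kids, sigma, extent=100.0):
            """Clustered point pattern: uniform parents, Gaussian offspring."""
            parents = rng.uniform(0, extent, size=(n_parents, 2))
            kids = [p + rng.normal(0, sigma, size=(rng.poisson(mean_kids), 2))
                    for p in parents]
            return np.vstack(kids), extent

        def srs_density(points, extent, n_quadrats, side=1.0):
            """Density estimate from randomly placed square quadrats."""
            corners = rng.uniform(0, extent - side, size=(n_quadrats, 2))
            counts = np.array([np.sum(np.all((points >= c) &
                                             (points <= c + side), axis=1))
                               for c in corners])
            return counts.mean() / side ** 2

        pts, extent = thomas_process(50, 20, 2.0)
        estimates = [srs_density(pts, extent, 50) for _ in range(200)]
        print("CV of density estimate:", np.std(estimates) / np.mean(estimates))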

  6. Some connections between importance sampling and enhanced sampling methods in molecular dynamics.

    Science.gov (United States)

    Lie, H C; Quer, J

    2017-11-21

    In molecular dynamics, enhanced sampling methods enable the collection of better statistics of rare events from a reference or target distribution. We show that a large class of these methods is based on the idea of importance sampling from mathematical statistics. We illustrate this connection by comparing the Hartmann-Schütte method for rare event simulation (J. Stat. Mech. Theor. Exp. 2012, P11004) and the Valsson-Parrinello method of variationally enhanced sampling [Phys. Rev. Lett. 113, 090601 (2014)]. We use this connection in order to discuss how recent results from the Monte Carlo methods literature can guide the development of enhanced sampling methods.
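
    The identity at the heart of this connection is E_p[f] = E_q[f p/q]: expectations under a target p can be estimated from draws from a proposal q, reweighted by the likelihood ratio. A self-normalized Python sketch with a toy rare-event example (densities, proposal and threshold are all illustrative):

        import numpy as np

        rng = np.random.default_rng(0)

        def importance_estimate(f, log_p, sample_q, log_q, n=100_000):
            """Self-normalized importance sampling estimate of E_p[f(X)];
            works with unnormalized log-densities."""
            x = sample_q(n)
            log_w = log_p(x) - log_q(x)
            w = np.exp(log_w - log_w.max())      # subtract max to avoid overflow
            w /= w.sum()
            return np.sum(w * f(x))

        # Rare event P(X > 4) under N(0, 1), sampled from a proposal
        # shifted into the rare region (crude variance reduction).
        log_p = lambda x: -0.5 * x ** 2
        log_q = lambda x: -0.5 * (x - 4.5) ** 2
        sample_q = lambda n: rng.normal(4.5, 1.0, n)
        f = lambda x: (x > 4.0).astype(float)    # indicator of the rare event
        print(importance_estimate(f, log_p, sample_q, log_q))  # ~3.2e-5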

  7. "Best Practices in Using Large, Complex Samples: The Importance of Using Appropriate Weights and Design Effect Compensation"

    Directory of Open Access Journals (Sweden)

    Jason W. Osborne

    2011-09-01

    Full Text Available Large surveys often use probability sampling in order to obtain representative samples, and these data sets are valuable tools for researchers in all areas of science. Yet many researchers are not formally prepared to appropriately utilize these resources. Indeed, users of one popular dataset were generally found not to have modeled the analyses to take account of the complex sample (Johnson & Elliott, 1998), even when publishing in highly-regarded journals. It is well known that failure to appropriately model the complex sample can substantially bias the results of the analysis. Examples presented in this paper highlight the risk of error of inference and mis-estimation of parameters from failure to analyze these data sets appropriately.
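
    The minimal correction discussed here is to apply the probability weights and to deflate the nominal sample size by the design effect. A sketch of the first-order Kish approximation (it captures unequal weighting only; clustering and stratification require the full design information):

        import numpy as np

        def weighted_mean_with_deff(y, w):
            """Weighted mean, its approximate SE, and the Kish design effect."""
            y, w = np.asarray(y, float), np.asarray(w, float)
            mean = np.sum(w * y) / np.sum(w)
            n_eff = np.sum(w) ** 2 / np.sum(w ** 2)  # Kish effective sample size
            deff = len(y) / n_eff                    # >1 under unequal weighting
            var = np.sum(w * (y - mean) ** 2) / np.sum(w)
            return mean, np.sqrt(var / n_eff), deff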

  8. Large scale sample management and data analysis via MIRACLE

    DEFF Research Database (Denmark)

    Block, Ines; List, Markus; Pedersen, Marlene Lemvig

    Reverse-phase protein arrays (RPPAs) allow sensitive quantification of relative protein abundance in thousands of samples in parallel. In the past years the technology advanced based on improved methods and protocols concerning sample preparation and printing, antibody selection, optimization...... of staining conditions and mode of signal analysis. However, the sample management and data analysis still poses challenges because of the high number of samples, sample dilutions, customized array patterns, and various programs necessary for array construction and data processing. We developed...... a comprehensive and user-friendly web application called MIRACLE (MIcroarray R-based Analysis of Complex Lysate Experiments), which bridges the gap between sample management and array analysis by conveniently keeping track of the sample information from lysate preparation, through array construction and signal...

  9. Examining gray matter structure associated with academic performance in a large sample of Chinese high school students

    OpenAIRE

    Song Wang; Ming Zhou; Taolin Chen; Xun Yang; Guangxiang Chen; Meiyun Wang; Qiyong Gong

    2017-01-01

    Achievement in school is crucial for students to be able to pursue successful careers and lead happy lives in the future. Although many psychological attributes have been found to be associated with academic performance, the neural substrates of academic performance remain largely unknown. Here, we investigated the relationship between brain structure and academic performance in a large sample of high school students via structural magnetic resonance imaging (S-MRI) using voxel-based morphome...

  10. Deal or No Deal? Decision Making under Risk in a Large-Stake TV Game Show and Related Experiments

    NARCIS (Netherlands)

    M.J. van den Assem (Martijn)

    2008-01-01

    textabstractThe central theme of this dissertation is the analysis of risky choice. The first two chapters analyze the choice behavior of contestants in a TV game show named “Deal or No Deal” (DOND). DOND provides a unique opportunity to study risk behavior, because it is characterized by very large

  11. Crowdsourcing for large-scale mosquito (Diptera: Culicidae) sampling

    Science.gov (United States)

    Sampling a cosmopolitan mosquito (Diptera: Culicidae) species throughout its range is logistically challenging and extremely resource intensive. Mosquito control programmes and regional networks operate at the local level and often conduct sampling activities across much of North America. A method f...

  12. Adolescent Internet Abuse: A Study on the Role of Attachment to Parents and Peers in a Large Community Sample.

    Science.gov (United States)

    Ballarotto, Giulia; Volpi, Barbara; Marzilli, Eleonora; Tambelli, Renata

    2018-01-01

    Adolescents are the main users of new technologies, and their main purpose of use is social interaction. Although new technologies are useful to teenagers in addressing their developmental tasks, recent studies have shown that they may be an obstacle to their growth. Research shows that teenagers with Internet addiction experience lower quality in their relationships with parents and more individual difficulties. However, limited research is available on the role played by adolescents' attachment to parents and peers, considering their psychological profiles. In a large community sample of adolescents (N = 1105), we evaluated Internet use/abuse, adolescents' attachment to parents and peers, and their psychological profiles. Hierarchical regression analyses were conducted to verify the influence of parental and peer attachment on Internet use/abuse, considering the moderating effect of adolescents' psychopathological risk. Results showed that adolescents' attachment to parents had a significant effect on Internet use. Adolescents' psychopathological risk had a moderating effect on the relationship between attachment to mothers and Internet use. Our study shows that further research is needed, taking into account both individual and family variables.

  13. Adolescent Internet Abuse: A Study on the Role of Attachment to Parents and Peers in a Large Community Sample

    Directory of Open Access Journals (Sweden)

    Giulia Ballarotto

    2018-01-01

    Full Text Available Adolescents are the main users of new technologies and their main purpose of use is social interaction. Although new technologies are useful to teenagers, in addressing their developmental tasks, recent studies have shown that they may be an obstacle in their growth. Research shows that teenagers with Internet addiction experience lower quality in their relationships with parents and more individual difficulties. However, limited research is available on the role played by adolescents’ attachment to parents and peers, considering their psychological profiles. We evaluated in a large community sample of adolescents (N = 1105) the Internet use/abuse, the adolescents’ attachment to parents and peers, and their psychological profiles. Hierarchical regression analyses were conducted to verify the influence of parental and peer attachment on Internet use/abuse, considering the moderating effect of adolescents’ psychopathological risk. Results showed that adolescents’ attachment to parents had a significant effect on Internet use. Adolescents’ psychopathological risk had a moderating effect on the relationship between attachment to mothers and Internet use. Our study shows that further research is needed, taking into account both individual and family variables.

  14. SyPRID sampler: A large-volume, high-resolution, autonomous, deep-ocean precision plankton sampling system

    Science.gov (United States)

    Billings, Andrew; Kaiser, Carl; Young, Craig M.; Hiebert, Laurel S.; Cole, Eli; Wagner, Jamie K. S.; Van Dover, Cindy Lee

    2017-03-01

    The current standard for large-volume (thousands of cubic meters) zooplankton sampling in the deep sea is the MOCNESS, a system of multiple opening-closing nets, typically lowered to within 50 m of the seabed and towed obliquely to the surface to obtain low-spatial-resolution samples that integrate across tens of meters of water depth. The SyPRID (Sentry Precision Robotic Impeller Driven) sampler is an innovative, deep-rated (6000 m) plankton sampler that partners with the Sentry Autonomous Underwater Vehicle (AUV) to obtain paired, large-volume plankton samples at specified depths and survey lines to within 1.5 m of the seabed and with simultaneous collection of sensor data. SyPRID uses a perforated Ultra-High-Molecular-Weight (UHMW) plastic tube to support a fine mesh net within an outer carbon composite tube (tube-within-a-tube design), with an axial flow pump located aft of the capture filter. The pump facilitates flow through the system and reduces or possibly eliminates the bow wave at the mouth opening. The cod end, a hollow truncated cone, is also made of UHMW plastic and includes a collection volume designed to provide an area where zooplankton can collect, out of the high flow region. SyPRID attaches as a saddle-pack to the Sentry vehicle. Sentry itself is configured with a flight control system that enables autonomous survey paths to low altitudes. In its verification deployment at the Blake Ridge Seep (2160 m) on the US Atlantic Margin, SyPRID was operated for 6 h at an altitude of 5 m. It recovered plankton samples, including delicate living larvae, from the near-bottom stratum that is seldom sampled by a typical MOCNESS tow. The prototype SyPRID and its next generations will enable studies of plankton or other particulate distributions associated with localized physico-chemical strata in the water column or above patchy habitats on the seafloor.

  15. Method for the radioimmunoassay of large numbers of samples using quantitative autoradiography of multiple-well plates

    International Nuclear Information System (INIS)

    Luner, S.J.

    1978-01-01

    A double antibody assay for thyroxine using 125I as label was carried out on 10-μl samples in Microtiter V-plates. After an additional centrifugation to compact the precipitates, the plates were placed in contact with x-ray film overnight and the spots were scanned. In the 20 to 160 ng/ml range the average coefficient of variation for thyroxine concentration determined on the basis of film spot optical density was 11 percent, compared to 4.8 percent obtained using a standard gamma counter. Eliminating the need for each sample to spend on the order of 1 min in a crystal well detector makes the method convenient for large-scale applications involving more than 3000 samples per day

  16. Media Use and Source Trust among Muslims in Seven Countries: Results of a Large Random Sample Survey

    Directory of Open Access Journals (Sweden)

    Steven R. Corman

    2013-12-01

    Full Text Available Despite the perceived importance of media in the spread of and resistance against Islamist extremism, little is known about how Muslims use different kinds of media to get information about religious issues, and what sources they trust when doing so. This paper reports the results of a large, random-sample survey among Muslims in seven countries in Southeast Asia, West Africa and Western Europe, which helps fill this gap. Results show a diverse set of profiles of media use and source trust that differ by country, with overall low trust in mediated sources of information. Based on these findings, we conclude that mass media are still the most common source of religious information for Muslims, but that trust in mediated information is low overall. This suggests that media are probably best used to persuade opinion leaders, who will then carry anti-extremist messages through more personal means.

  17. Efficiently sampling conformations and pathways using the concurrent adaptive sampling (CAS) algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Surl-Hee; Grate, Jay W.; Darve, Eric F.

    2017-08-21

    Molecular dynamics (MD) simulations are useful in obtaining thermodynamic and kinetic properties of bio-molecules, but are limited by the timescale barrier: we may be unable to efficiently obtain properties because we need to run microseconds or longer simulations using femtosecond time steps. While several existing methods overcome this timescale barrier and efficiently sample thermodynamic and/or kinetic properties, problems remain in regard to sampling unknown systems, dealing with high-dimensional spaces of collective variables, and focusing the computational effort on slow timescales. Hence, a new sampling method, called the “Concurrent Adaptive Sampling (CAS) algorithm,” has been developed to tackle these three issues and efficiently obtain conformations and pathways. The method is not constrained to use only one or two collective variables, unlike most reaction coordinate-dependent methods. Instead, it can use a large number of collective variables and uses macrostates (a partition of the collective variable space) to enhance the sampling. The exploration is done by running a large number of short simulations, and a clustering technique is used to accelerate the sampling. In this paper, we introduce the new methodology and show results from two-dimensional models and bio-molecules, such as penta-alanine and a triazine polymer.
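
    The macrostate bookkeeping behind such methods can be sketched in a few lines: run many short simulations, bin the walkers into macrostates, then split and merge them so each occupied bin keeps a fixed number of walkers while total probability weight is conserved. The toy below uses fixed one-dimensional bins in a double-well potential; CAS proper constructs macrostates adaptively by clustering, which is omitted here.

        import numpy as np

        rng = np.random.default_rng(1)

        def short_run(x, n_steps=100, dt=1e-3, beta=4.0):
            """Overdamped Langevin dynamics in the double well V = (x^2 - 1)^2."""
            for _ in range(n_steps):
                grad = 4.0 * x * (x ** 2 - 1.0)
                x = x - grad * dt + np.sqrt(2.0 * dt / beta) * rng.normal(size=x.shape)
            return x

        def resample_round(walkers, weights, edges, per_bin=5):
            """Split/merge walkers so each occupied bin keeps per_bin walkers
            carrying that bin's total weight (weight is conserved)."""
            new_x, new_w = [], []
            bins = np.digitize(walkers, edges)
            for b in np.unique(bins):
                idx = np.where(bins == b)[0]
                total = weights[idx].sum()
                chosen = rng.choice(idx, size=per_bin, replace=True,
                                    p=weights[idx] / total)
                new_x.append(walkers[chosen])
                new_w.append(np.full(per_bin, total / per_bin))
            return np.concatenate(new_x), np.concatenate(new_w)

        walkers = np.full(50, -1.0)          # all walkers start in the left well
        weights = np.full(50, 1.0 / 50)
        edges = np.linspace(-2.0, 2.0, 21)
        for _ in range(20):                  # alternate dynamics and resampling
            walkers = short_run(walkers)
            walkers, weights = resample_round(walkers, weights, edges)
        print("probability mass in right well:", weights[walkers > 0].sum())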

  18. Ultrasensitive multiplex optical quantification of bacteria in large samples of biofluids

    Science.gov (United States)

    Pazos-Perez, Nicolas; Pazos, Elena; Catala, Carme; Mir-Simon, Bernat; Gómez-de Pedro, Sara; Sagales, Juan; Villanueva, Carlos; Vila, Jordi; Soriano, Alex; García de Abajo, F. Javier; Alvarez-Puebla, Ramon A.

    2016-01-01

    Efficient treatments in bacterial infections require the fast and accurate recognition of pathogens, with concentrations as low as one per milliliter in the case of septicemia. Detecting and quantifying bacteria in such low concentrations is challenging and typically demands cultures of large samples of blood (~1 milliliter) extending over 24–72 hours. This delay seriously compromises the health of patients. Here we demonstrate a fast microorganism optical detection system for the exhaustive identification and quantification of pathogens in volumes of biofluids with clinical relevance (~1 milliliter) in minutes. We drive each type of bacteria to accumulate antibody-functionalized SERS-labelled silver nanoparticles. Particle aggregation on the bacteria membranes renders dense arrays of inter-particle gaps in which the Raman signal is exponentially amplified by several orders of magnitude relative to the dispersed particles. This enables a multiplex identification of the microorganisms through the molecule-specific spectral fingerprints. PMID:27364357

  19. Efficient computation of smoothing splines via adaptive basis sampling

    KAUST Repository

    Ma, Ping

    2015-06-24

    Smoothing splines provide flexible nonparametric regression estimators. However, the high computational cost of smoothing splines for large datasets has hindered their wide application. In this article, we develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Except for the univariate case, where the Reinsch algorithm is applicable, a smoothing spline for a regression problem with sample size n can be expressed as a linear combination of n basis functions, and its computational complexity is generally O(n^3). We achieve a more scalable computation in the multivariate case by evaluating the smoothing spline using a smaller set of basis functions, obtained by an adaptive sampling scheme that uses values of the response variable. Our asymptotic analysis shows that smoothing splines computed via adaptive basis sampling converge to the true function at the same rate as full basis smoothing splines. Using simulation studies and a large-scale deep earth core-mantle boundary imaging study, we show that the proposed method outperforms a sampling method that does not use the values of response variables.
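
    The idea can be illustrated with a crude one-dimensional sketch (mine, not the authors' algorithm): instead of one basis function per data point, fit a penalized regression spline on a small set of knots sampled with probability proportional to the local roughness of the response.

    ```python
    # Penalized spline on an adaptively sampled basis: cost drops from
    # O(n^3) to roughly O(n q^2) for q << n basis functions.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 5000
    x = np.sort(rng.uniform(0, 1, n))
    y = np.sin(8 * np.pi * x**2) + rng.normal(0, 0.3, n)

    # Adaptive knot sampling: weight by the local second difference of y.
    rough = np.abs(np.diff(y, 2))
    w = np.concatenate([[rough[0]], rough, [rough[-1]]]) + 1e-6
    knots = np.sort(rng.choice(x, size=40, replace=False, p=w / w.sum()))

    def basis(x, knots):
        """Truncated power cubic basis: 1, x, x^2, x^3, (x - k)_+^3."""
        cols = [np.ones_like(x), x, x**2, x**3]
        cols += [np.clip(x - k, 0, None) ** 3 for k in knots]
        return np.column_stack(cols)

    B = basis(x, knots)
    lam = 1e-4                                  # smoothing parameter
    beta = np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T @ y)
    y_hat = B @ beta                            # fitted smooth
    ```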

  1. Similar brain activation during false belief tasks in a large sample of adults with and without autism.

    Science.gov (United States)

    Dufour, Nicholas; Redcay, Elizabeth; Young, Liane; Mavros, Penelope L; Moran, Joseph M; Triantafyllou, Christina; Gabrieli, John D E; Saxe, Rebecca

    2013-01-01

    Reading about another person's beliefs engages 'Theory of Mind' processes and elicits highly reliable brain activation across individuals and experimental paradigms. Using functional magnetic resonance imaging, we examined activation during a story task designed to elicit Theory of Mind processing in a very large sample of neurotypical (N = 462) individuals, and a group of high-functioning individuals with autism spectrum disorders (N = 31), using both region-of-interest and whole-brain analyses. This large sample allowed us to investigate group differences in brain activation to Theory of Mind tasks with unusually high sensitivity. There were no differences between neurotypical participants and those diagnosed with autism spectrum disorder. These results imply that the social cognitive impairments typical of autism spectrum disorder can occur without measurable changes in the size, location or response magnitude of activity during explicit Theory of Mind tasks administered to adults.

  2. Similar brain activation during false belief tasks in a large sample of adults with and without autism.

    Directory of Open Access Journals (Sweden)

    Nicholas Dufour

    Reading about another person's beliefs engages 'Theory of Mind' processes and elicits highly reliable brain activation across individuals and experimental paradigms. Using functional magnetic resonance imaging, we examined activation during a story task designed to elicit Theory of Mind processing in a very large sample of neurotypical (N = 462) individuals, and a group of high-functioning individuals with autism spectrum disorders (N = 31), using both region-of-interest and whole-brain analyses. This large sample allowed us to investigate group differences in brain activation to Theory of Mind tasks with unusually high sensitivity. There were no differences between neurotypical participants and those diagnosed with autism spectrum disorder. These results imply that the social cognitive impairments typical of autism spectrum disorder can occur without measurable changes in the size, location or response magnitude of activity during explicit Theory of Mind tasks administered to adults.

  3. Presence and significant determinants of cognitive impairment in a large sample of patients with multiple sclerosis.

    Directory of Open Access Journals (Sweden)

    Martina Borghi

    OBJECTIVES: To investigate the presence and the nature of cognitive impairment in a large sample of patients with Multiple Sclerosis (MS), and to identify clinical and demographic determinants of cognitive impairment in MS. METHODS: 303 patients with MS and 279 healthy controls were administered the Brief Repeatable Battery of Neuropsychological tests (BRB-N); measures of pre-morbid verbal competence and neuropsychiatric measures were also administered. RESULTS: Patients and healthy controls were matched for age, gender, education and pre-morbid verbal Intelligence Quotient. Patients presenting with cognitive impairment were 108/303 (35.6%). In the overall group of participants, the significant predictors of the most sensitive BRB-N scores were: presence of MS, age, education, and vocabulary. The significant predictors when considering MS patients only were: course of MS, age, education, vocabulary, and depression. Using logistic regression analyses, significant determinants of the presence of cognitive impairment in relapsing-remitting MS patients were: duration of illness (OR = 1.053, 95% CI = 1.010-1.097, p = 0.015), Expanded Disability Status Scale score (OR = 1.247, 95% CI = 1.024-1.517, p = 0.028), and vocabulary (OR = 0.960, 95% CI = 0.936-0.984, p = 0.001), while in the smaller group of progressive MS patients these predictors did not play a significant role in determining the cognitive outcome. CONCLUSIONS: Our results corroborate the evidence about the presence and the nature of cognitive impairment in a large sample of patients with MS. Furthermore, our findings identify significant clinical and demographic determinants of cognitive impairment in a large sample of MS patients for the first time. Implications for further research and clinical practice were discussed.
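
    The reported odds ratios are exponentiated logistic-regression coefficients, OR = exp(beta), with 95% CIs from exponentiating the coefficient interval. A hedged sketch of that calculation (simulated data; variable names are mine, not the study's):

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 300
    duration = rng.uniform(0, 30, n)          # illness duration, years
    edss = rng.uniform(0, 7, n)               # disability score
    logit = -2 + np.log(1.053) * duration + np.log(1.247) * edss
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X = sm.add_constant(np.column_stack([duration, edss]))
    fit = sm.Logit(y, X).fit(disp=0)
    odds_ratios = np.exp(fit.params[1:])      # OR per unit increase
    ci = np.exp(fit.conf_int()[1:])           # 95% confidence intervals
    print(odds_ratios, ci)
    ```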

  4. The ESO Diffuse Interstellar Bands Large Exploration Survey (EDIBLES) . I. Project description, survey sample, and quality assessment

    Science.gov (United States)

    Cox, Nick L. J.; Cami, Jan; Farhang, Amin; Smoker, Jonathan; Monreal-Ibero, Ana; Lallement, Rosine; Sarre, Peter J.; Marshall, Charlotte C. M.; Smith, Keith T.; Evans, Christopher J.; Royer, Pierre; Linnartz, Harold; Cordiner, Martin A.; Joblin, Christine; van Loon, Jacco Th.; Foing, Bernard H.; Bhatt, Neil H.; Bron, Emeric; Elyajouri, Meriem; de Koter, Alex; Ehrenfreund, Pascale; Javadi, Atefeh; Kaper, Lex; Khosroshadi, Habib G.; Laverick, Mike; Le Petit, Franck; Mulas, Giacomo; Roueff, Evelyne; Salama, Farid; Spaans, Marco

    2017-10-01

    The carriers of the diffuse interstellar bands (DIBs) are largely unidentified molecules ubiquitously present in the interstellar medium (ISM). After decades of study, two strong and possibly three weak near-infrared DIBs have recently been attributed to the C60^+ fullerene based on observational and laboratory measurements. There is great promise for the identification of the over 400 other known DIBs, as this result could provide chemical hints towards other possible carriers. In an effort to systematically study the properties of the DIB carriers, we have initiated a new large-scale observational survey: the ESO Diffuse Interstellar Bands Large Exploration Survey (EDIBLES). The main objective is to build on and extend existing DIB surveys to make a major step forward in characterising the physical and chemical conditions for a statistically significant sample of interstellar lines-of-sight, with the goal to reverse-engineer key molecular properties of the DIB carriers. EDIBLES is a filler Large Programme using the Ultraviolet and Visual Echelle Spectrograph at the Very Large Telescope at Paranal, Chile. It is designed to provide an observationally unbiased view of the presence and behaviour of the DIBs towards early-spectral-type stars whose lines-of-sight probe the diffuse-to-translucent ISM. Such a complete dataset will provide a deep census of the atomic and molecular content, physical conditions, chemical abundances and elemental depletion levels for each sightline. Achieving these goals requires a homogeneous set of high-quality data in terms of resolution (R ~ 70 000-100 000), sensitivity (S/N up to 1000 per resolution element), and spectral coverage (305-1042 nm), as well as a large sample size (100+ sightlines). In this first paper the goals, objectives and methodology of the EDIBLES programme are described and an initial assessment of the data is provided.

  5. Detecting superior face recognition skills in a large sample of young British adults

    Directory of Open Access Journals (Sweden)

    Anna Katarzyna Bobak

    2016-09-01

    The Cambridge Face Memory Test Long Form (CFMT+) and Cambridge Face Perception Test (CFPT) are typically used to assess the face processing ability of individuals who believe they have superior face recognition skills. Previous large-scale studies have presented norms for the CFPT but not the CFMT+. However, previous research has also highlighted the necessity of establishing country-specific norms for these tests, indicating that norming data are required for both tests using young British adults. The current study addressed this issue in 254 British participants. In addition to providing the first norm for performance on the CFMT+ in any large sample, we also report the first UK-specific cut-off for superior face recognition on the CFPT. Further analyses identified a small advantage for females on both tests, and only small associations between objective face recognition skills and self-report measures. A secondary aim of the study was to examine the relationship between trait or social anxiety and face processing ability, and no associations were noted. The implications of these findings for the classification of super-recognisers are discussed.

  6. Prevalence of learned grapheme-color pairings in a large online sample of synesthetes.

    Directory of Open Access Journals (Sweden)

    Nathan Witthoft

    In this paper we estimate the minimum prevalence of grapheme-color synesthetes with letter-color matches learned from an external stimulus, by analyzing a large sample of English-speaking grapheme-color synesthetes. We find that at least 6% (400/6588 participants) of the total sample learned many of their matches from a widely available colored letter toy. Among those born in the decade after the toy began to be manufactured, the proportion of synesthetes with learned letter-color pairings approaches 15% for some 5-year periods. Among those born 5 years or more before it was manufactured, none have colors learned from the toy. Analysis of the letter-color matching data suggests the only difference between synesthetes with matches to the toy and those without is exposure to the stimulus. These data indicate learning of letter-color pairings from external contingencies can occur in a substantial fraction of synesthetes, and are consistent with the hypothesis that grapheme-color synesthesia is a kind of conditioned mental imagery.
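
    As a quick sanity check of the "at least 6%" figure (my own arithmetic, not from the paper), a Wilson 95% interval around 400/6588 can be computed directly:

    ```python
    from statsmodels.stats.proportion import proportion_confint

    low, high = proportion_confint(count=400, nobs=6588, alpha=0.05,
                                   method="wilson")
    print(400 / 6588, low, high)   # ~0.061, roughly (0.055, 0.067)
    ```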

  7. Analysis of plant hormones by microemulsion electrokinetic capillary chromatography coupled with on-line large volume sample stacking.

    Science.gov (United States)

    Chen, Zongbao; Lin, Zian; Zhang, Lin; Cai, Yan; Zhang, Lan

    2012-04-07

    A novel method of microemulsion electrokinetic capillary chromatography (MEEKC) coupled with on-line large volume sample stacking was developed for the analysis of six plant hormones including indole-3-acetic acid, indole-3-butyric acid, indole-3-propionic acid, 1-naphthaleneacetic acid, abscisic acid and salicylic acid. Baseline separation of the six plant hormones was achieved within 10 min by using a microemulsion background electrolyte containing 97.2% (w/w) 10 mM borate buffer at pH 9.2, 1.0% (w/w) ethyl acetate as oil droplets, 0.6% (w/w) sodium dodecyl sulphate as surfactant and 1.2% (w/w) 1-butanol as cosurfactant. In addition, an on-line concentration method based on a large volume sample stacking technique and multiple-wavelength detection was adopted to improve the detection sensitivity in order to determine trace-level hormones in real samples. The optimized method provided about a 50-100-fold increase in detection sensitivity compared with the single MEEKC method, and the detection limits (S/N = 3) were between 0.005 and 0.02 μg mL⁻¹. The proposed method was simple, rapid and sensitive and could be applied to the determination of six plant hormones in spiked water samples, tobacco leaves and 1-naphthylacetic acid in leaf fertilizer. The recoveries ranged from 76.0% to 119.1%, and good reproducibilities were obtained with relative standard deviations (RSDs) less than 6.6%.

  8. $b$-Tagging and Large Radius Jet Modelling in a $g\\rightarrow b\\bar{b}$ rich sample at ATLAS

    CERN Document Server

    Jiang, Zihao; The ATLAS collaboration

    2016-01-01

    Studies of b-tagging performance and jet properties in double b-tagged, large-radius jets from √s = 8 TeV pp collisions recorded by the ATLAS detector at the LHC are presented. The double b-tag requirement yields a sample rich in high-pT jets originating from the g → bb̄ process. Using this sample, the performance of b-tagging and the modelling of jet substructure variables at small b-quark angular separation are probed.

  9. A large-capacity sample-changer for automated gamma-ray spectroscopy

    International Nuclear Information System (INIS)

    Andeweg, A.H.

    1980-01-01

    An automatic sample-changer has been developed at the National Institute for Metallurgy for use in gamma-ray spectroscopy with a lithium-drifted germanium detector. The sample-changer features remote storage, which prevents cross-talk and reduces background. It has a capacity of 200 samples and a sample container that takes liquid or solid samples. The rotation and vibration of samples during counting ensure that powdered samples are compacted, and improve the precision and reproducibility of the counting geometry.

  10. Examining the interrater reliability of the Hare Psychopathy Checklist-Revised across a large sample of trained raters.

    Science.gov (United States)

    Blais, Julie; Forth, Adelle E; Hare, Robert D

    2017-06-01

    The goal of the current study was to assess the interrater reliability of the Psychopathy Checklist-Revised (PCL-R) among a large sample of trained raters (N = 280). All raters completed PCL-R training at some point between 1989 and 2012 and subsequently provided complete coding for the same 6 practice cases. Overall, 3 major conclusions can be drawn from the results: (a) reliability of individual PCL-R items largely fell below any appropriate standards, while the estimates for Total PCL-R scores and factor scores were good (but not excellent); (b) the cases representing individuals with high psychopathy scores showed better reliability than did the cases of individuals in the moderate to low PCL-R score range; and (c) there was a high degree of variability among raters; however, rater-specific differences had no consistent effect on scoring the PCL-R. Therefore, despite low reliability estimates for individual items, Total scores and factor scores can be scored reliably by trained raters. We temper these conclusions by noting that scoring standardized videotaped case studies does not allow the rater to interact directly with the offender. Real-world PCL-R assessments typically involve a face-to-face interview and much more extensive collateral information. We offer recommendations for new web-based training procedures.
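
    For designs like this, where many raters score the same small set of cases, interrater reliability is usually summarised with a two-way random-effects intraclass correlation. A sketch of ICC(2,1) on simulated scores (my code and numbers, not the study's):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_cases, n_raters = 6, 280
    truth = rng.normal(20, 8, size=(n_cases, 1))          # case effects
    scores = truth + rng.normal(0, 3, size=(n_cases, n_raters))

    n, k = scores.shape
    grand = scores.mean()
    row_m = scores.mean(axis=1, keepdims=True)            # per-case means
    col_m = scores.mean(axis=0, keepdims=True)            # per-rater means

    msr = k * ((row_m - grand) ** 2).sum() / (n - 1)      # cases
    msc = n * ((col_m - grand) ** 2).sum() / (k - 1)      # raters
    mse = (((scores - row_m - col_m + grand) ** 2).sum()
           / ((n - 1) * (k - 1)))                          # residual

    # Shrout & Fleiss ICC(2,1): single-rater, two-way random effects.
    icc_2_1 = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    print(icc_2_1)
    ```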

  11. Economic and Humanistic Burden of Osteoarthritis: A Systematic Review of Large Sample Studies.

    Science.gov (United States)

    Xie, Feng; Kovic, Bruno; Jin, Xuejing; He, Xiaoning; Wang, Mengxiao; Silvestre, Camila

    2016-11-01

    Osteoarthritis (OA) consumes a significant amount of healthcare resources and impairs the health-related quality of life (HRQoL) of patients. Previous reviews have consistently found substantial variations in the costs of OA across studies and countries. The comparability between studies was poor and limited the detection of the true differences between these studies. To review large sample studies on measuring the economic and/or humanistic burden of OA published since May 2006. We searched MEDLINE and EMBASE databases using comprehensive search strategies to identify studies reporting the economic burden and HRQoL of OA. We included large sample studies if they had a sample size ≥1000 and measured the cost and/or HRQoL of OA. Reviewers worked independently and in duplicate, performing a cross-check between groups to verify agreement. Within- and between-group consolidation was performed to resolve discrepancies, with outstanding discrepancies being resolved by an arbitrator. The Kappa statistic was reported to assess the agreement between the reviewers. All costs were adjusted in their original currency to year 2015 using published inflation rates for the country where the study was conducted, and then converted to 2015 US dollars. A total of 651 articles were screened by title and abstract, 94 were reviewed in full text, and 28 were included in the final review. The Kappa value was 0.794. Twenty studies reported direct costs and nine reported indirect costs. The total annual average direct costs varied from US$1442 to US$21,335, both in the USA. The annual average indirect costs ranged from US$238 to US$29,935. Twelve studies measured HRQoL using various instruments. The Short Form 12 version 2 scores ranged from 35.0 to 51.3 for the physical component, and from 43.5 to 55.0 for the mental component. Health utilities varied from 0.30 for severe OA to 0.77 for mild OA. Per-patient OA costs are considerable and a patient's quality of life remains poor. Variations in
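
    The cost normalisation the review describes is a two-step adjustment: inflate to 2015 in the study's own currency, then convert to US dollars. A toy sketch (the rates below are placeholders, not the published figures):

    ```python
    def to_2015_usd(cost, annual_inflation, years, fx_to_usd_2015):
        """Inflate a cost to 2015 in its own currency, then convert."""
        local_2015 = cost * (1 + annual_inflation) ** years
        return local_2015 * fx_to_usd_2015

    # e.g. a 2010 cost of 9,000 local units, 2%/yr inflation, converted
    # at a hypothetical 1.1 USD per unit in 2015:
    print(to_2015_usd(9000, 0.02, 5, 1.1))
    ```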

  12. Sampling or gambling

    Energy Technology Data Exchange (ETDEWEB)

    Gy, P.M.

    1981-12-01

    Sampling can be compared to no other technique. A mechanical sampler must above all be selected according to its aptitude for suppressing or reducing all components of the sampling error. Sampling is said to be correct when it gives all elements making up the batch of matter submitted to sampling a uniform probability of being selected. A sampler must be correctly designed, built, installed, operated and maintained. When the conditions of sampling correctness are not strictly respected, the sampling error can no longer be controlled and can, unknown to the user, be unacceptably large: the sample is no longer representative. The implementation of an incorrect sampler is a form of gambling, and this paper intends to show that at this game the user is nearly always the loser in the long run. The users' and the manufacturers' interests may diverge, and the standards which should safeguard the users' interests very often fail to do so by tolerating or even recommending incorrect techniques such as the use of too-narrow cutters travelling too fast through the stream to be sampled.
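
    A tiny Monte Carlo illustration of the point (mine, not Gy's): if increments are not taken with uniform probability over the whole stream, a segregated stream yields a systematically biased assay, and nothing computed from the sample itself can reveal that.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    stream = np.linspace(5.0, 1.0, 10_000)       # grade drifts along the batch
    stream += rng.normal(0, 0.2, stream.size)

    correct = rng.choice(stream, size=50, replace=False)    # uniform probability
    incorrect = stream[:2_000][rng.integers(0, 2_000, 50)]  # only the early flow

    print(stream.mean(), correct.mean(), incorrect.mean())
    # The correct sample scatters around the true mean; the incorrect
    # one is systematically high.
    ```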

  13. Comparison of blood RNA isolation methods from samples stabilized in Tempus tubes and stored at a large human biobank.

    Science.gov (United States)

    Aarem, Jeanette; Brunborg, Gunnar; Aas, Kaja K; Harbak, Kari; Taipale, Miia M; Magnus, Per; Knudsen, Gun Peggy; Duale, Nur

    2016-09-01

    More than 50,000 adult and cord blood samples were collected in Tempus tubes and stored at the Norwegian Institute of Public Health Biobank for future use. In this study, we systematically evaluated and compared five blood-RNA isolation protocols: three blood-RNA isolation protocols optimized for simultaneous isolation of all blood-RNA species (MagMAX RNA Isolation Kit, both manual and semi-automated protocols; and Norgen Preserved Blood RNA kit I); and two protocols optimized for large RNAs only (Tempus Spin RNA, and Tempus 6-port isolation kit). We estimated the following parameters: RNA quality, RNA yield, processing time, cost per sample, and RNA transcript stability of six selected mRNAs and 13 miRNAs using real-time qPCR. Whole blood samples from adults (n = 59 tubes) and umbilical cord blood samples (n = 18 tubes) collected in Tempus tubes were analyzed. High-quality blood-RNAs with average RIN-values above seven were extracted using all five RNA isolation protocols. The transcript levels of the six selected genes showed minimal variation between the five protocols. Unexplained differences within the transcript levels of the 13 miRNAs were observed; however, the 13 miRNAs had similar expression direction and they were within the same order of magnitude. Some differences in the RNA processing time and cost were noted. Sufficient amounts of high-quality RNA were obtained using all five protocols, and the Tempus blood RNA system therefore seems not to be dependent on one specific RNA isolation method.

  14. LOGISTICS OF ECOLOGICAL SAMPLING ON LARGE RIVERS

    Science.gov (United States)

    The objectives of this document are to provide an overview of the logistical problems associated with the ecological sampling of boatable rivers and to suggest solutions to those problems. It is intended to be used as a resource for individuals preparing to collect biological dat...

  15. Large-Scale No-Show Patterns and Distributions for Clinic Operational Research

    Directory of Open Access Journals (Sweden)

    Michael L. Davies

    2016-02-01

    Patient no-shows for scheduled primary care appointments are common. Unused appointment slots reduce patient quality of care, access to services and provider productivity while increasing loss to follow-up and medical costs. This paper describes patterns of no-show variation by patient age, gender, appointment age, and type of appointment request for six individual service lines in the United States Veterans Health Administration (VHA). This retrospective observational descriptive project examined 25,050,479 VHA appointments contained in individual-level records for eight years (FY07-FY14) for 555,183 patients. Multifactor analysis of variance (ANOVA) was performed, with no-show rate as the dependent variable, and gender, age group, appointment age, new patient status, and service line as factors. The analyses revealed that males had higher no-show rates than females to age 65, at which point males and females exhibited similar rates. The average no-show rates decreased with age until 75–79, whereupon rates increased. As appointment age increased, males and new patients had increasing no-show rates. Younger patients are especially prone to no-show as appointment age increases. These findings provide novel information to healthcare practitioners and management scientists to more accurately characterize no-show and attendance rates and the impact of certain patient factors. Future general population data could determine whether findings from VHA data generalize to others.

  16. Large-Scale No-Show Patterns and Distributions for Clinic Operational Research.

    Science.gov (United States)

    Davies, Michael L; Goffman, Rachel M; May, Jerrold H; Monte, Robert J; Rodriguez, Keri L; Tjader, Youxu C; Vargas, Dominic L

    2016-02-16

    Patient no-shows for scheduled primary care appointments are common. Unused appointment slots reduce patient quality of care, access to services and provider productivity while increasing loss to follow-up and medical costs. This paper describes patterns of no-show variation by patient age, gender, appointment age, and type of appointment request for six individual service lines in the United States Veterans Health Administration (VHA). This retrospective observational descriptive project examined 25,050,479 VHA appointments contained in individual-level records for eight years (FY07-FY14) for 555,183 patients. Multifactor analysis of variance (ANOVA) was performed, with no-show rate as the dependent variable, and gender, age group, appointment age, new patient status, and service line as factors. The analyses revealed that males had higher no-show rates than females to age 65, at which point males and females exhibited similar rates. The average no-show rates decreased with age until 75-79, whereupon rates increased. As appointment age increased, males and new patients had increasing no-show rates. Younger patients are especially prone to no-show as appointment age increases. These findings provide novel information to healthcare practitioners and management scientists to more accurately characterize no-show and attendance rates and the impact of certain patient factors. Future general population data could determine whether findings from VHA data generalize to others.
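
    The analysis described is a multifactor ANOVA with no-show rate as the dependent variable. A hedged sketch of that setup on simulated data (column names and effect sizes are mine; the real data are per-patient appointment records):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(5)
    n = 2_000
    df = pd.DataFrame({
        "gender": rng.choice(["M", "F"], n),
        "age_group": rng.choice(["<45", "45-64", "65-74", "75+"], n),
        "new_patient": rng.choice([0, 1], n),
        "appt_age": rng.integers(0, 90, n),   # days from request to visit
    })
    df["no_show_rate"] = (0.10 + 0.02 * (df.gender == "M") +
                          0.001 * df.appt_age + rng.normal(0, 0.05, n))

    model = smf.ols("no_show_rate ~ C(gender) + C(age_group) + "
                    "C(new_patient) + appt_age", data=df).fit()
    print(anova_lm(model, typ=2))
    ```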

  17. On incomplete sampling under birth-death models and connections to the sampling-based coalescent.

    Science.gov (United States)

    Stadler, Tanja

    2009-11-07

    The constant rate birth-death process is used as a stochastic model for many biological systems, for example phylogenies or disease transmission. As the biological data are usually not fully available, it is crucial to understand the effect of incomplete sampling. In this paper, we analyze the constant rate birth-death process with incomplete sampling. We derive the density of the bifurcation events for trees on n leaves which evolved under this birth-death-sampling process. This density is used for calculating prior distributions in Bayesian inference programs and for efficiently simulating trees. We show that the birth-death-sampling process can be interpreted as a birth-death process with reduced rates and complete sampling. This shows that joint inference of birth rate, death rate and sampling probability is not possible. The birth-death-sampling process is compared to the sampling-based population genetics model, the coalescent. It is shown that despite many similarities between these two models, the distribution of bifurcation times remains different even in the case of very large population sizes. We illustrate these findings on a hepatitis C virus dataset from Egypt. We show that the transmission time estimates are significantly different: the widely used Gamma statistic even changes its sign from negative to positive when switching from the coalescent to the birth-death process.
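
    One way to write the equivalence the abstract refers to is the rate transformation usually quoted for this result (readers should check the paper for the exact parametrisation; the form below is my recollection of it):

    ```latex
    % Birth rate \lambda, death rate \mu, sampling probability \rho:
    % the sampled trees are distributed as under a completely sampled
    % birth-death process with reduced rates
    \[
      \lambda' = \rho\,\lambda ,
      \qquad
      \mu' = \mu - \lambda\,(1-\rho) ,
      \qquad
      \lambda' - \mu' = \lambda - \mu .
    \]
    % Only the two combinations (\lambda', \mu') are identifiable, so
    % joint inference of \lambda, \mu and \rho is impossible.
    ```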

  18. Remote sensing data with the conditional latin hypercube sampling and geostatistical approach to delineate landscape changes induced by large chronological physical disturbances.

    Science.gov (United States)

    Lin, Yu-Pin; Chu, Hone-Jay; Wang, Cheng-Long; Yu, Hsiao-Hsuan; Wang, Yung-Chieh

    2009-01-01

    This study applies variogram analyses of normalized difference vegetation index (NDVI) images derived from SPOT HRV images obtained before and after the Chi-Chi earthquake in the Chenyulan watershed, Taiwan, as well as images after four large typhoons, to delineate the spatial patterns, spatial structures and spatial variability of landscapes caused by these large disturbances. The conditional Latin hypercube sampling approach was applied to select samples from the multiple NDVI images. Kriging and sequential Gaussian simulation with sufficient samples were then used to generate maps of the NDVI images. The variography of the NDVI images demonstrates that spatial patterns of disturbed landscapes were successfully delineated by variogram analysis in the study areas. The high-magnitude Chi-Chi earthquake created spatial landscape variations in the study area. After the earthquake, the cumulative impacts of typhoons on landscape patterns depended on the magnitudes and paths of the typhoons, but were not always evident in the spatiotemporal variability of landscapes in the study area. The statistics and spatial structures of the multiple NDVI images were captured by 3,000 samples from the 62,500 grid cells of the NDVI images. Kriging and sequential Gaussian simulation with the 3,000 samples effectively reproduced the spatial patterns of the NDVI images. The proposed approach, which integrates conditional Latin hypercube sampling, variograms, kriging and sequential Gaussian simulation on remotely sensed images, efficiently monitors, samples and maps the effects of large chronological disturbances on the spatial characteristics of landscape changes, including spatial variability and heterogeneity.
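
    The statistic underlying these variogram analyses is the empirical semivariance, gamma(h) = 0.5 * E[(z(s) - z(s+h))^2], binned by separation distance. A minimal sketch (mine, not the study's code; the "NDVI" values are simulated):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    coords = rng.uniform(0, 100, size=(500, 2))    # sampled pixel locations
    z = np.sin(coords[:, 0] / 15) + rng.normal(0, 0.2, 500)  # stand-in NDVI

    # All pairwise distances and squared half-differences.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2

    bins = np.linspace(1, 50, 11)
    gamma = [sq[(d >= lo) & (d < hi)].mean()
             for lo, hi in zip(bins[:-1], bins[1:])]
    print(np.round(gamma, 3))    # semivariance rising toward a sill
    ```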

  19. Spearman's "law of diminishing returns" and the role of test reliability investigated in a large sample of Danish military draftees

    DEFF Research Database (Denmark)

    Teasdale, Thomas William; Hartmann, P.

    2005-01-01

    The present article investigates Spearman's "Law of Diminishing Returns" (SLODR), which hypothesizes that the g saturation of cognitive tests is lower for high-ability subjects than for low-ability subjects. This hypothesis was tested in a large sample of Danish military draftees (N = 6757) who were representative of the young adult male population, aged 18-19, and tested with a group-administered intelligence test comprising four subtests. The aim of the study was twofold. The first was to reproduce previous SLODR findings by the present authors. This was done by replicating ... in reliability could account for the difference in g saturation across ability groups. The results showed that the reliability was larger for the High ability group, thereby not explaining the present findings.

  20. A Survey for Spectroscopic Binaries in a Large Sample of G Dwarfs

    Science.gov (United States)

    Udry, S.; Mayor, M.; Latham, D. W.; Stefanik, R. P.; Torres, G.; Mazeh, T.; Goldberg, D.; Andersen, J.; Nordstrom, B.

    For more than 5 years now, the radial velocities for a large sample of G dwarfs (3,347 stars) have been monitored in order to obtain an unequaled set of orbital parameters for solar-type stars (~400 orbits, up to now). This survey provides a considerable improvement on the classical systematic study by Duquennoy and Mayor (1991; DM91). The observational part of the survey has been carried out as a collaboration between the Geneva Observatory, on the two CORAVEL spectrometers for the southern sky, and CfA, at the Oak Ridge and Whipple Observatories for the northern sky. As a first glance at these new results, we address in this contribution a special aspect of the orbital eccentricity distribution, namely the disappearance of the void observed in DM91 for quasi-circular orbits with periods larger than 10 days.

  1. SWOT ANALYSIS ON SAMPLING METHOD

    Directory of Open Access Journals (Sweden)

    CHIS ANCA OANA

    2014-07-01

    Audit sampling involves the application of audit procedures to less than 100% of the items within an account balance or class of transactions. Our article aims to study audit sampling in the audit of financial statements. As a widely used audit technique, in both its statistical and nonstatistical forms, the method is very important for auditors. It should be applied correctly for a fair view of financial statements, to satisfy the needs of all financial users. In order to be applied correctly, the method must be understood by all its users and mainly by auditors; otherwise, incorrect application risks loss of reputation and discredit, litigation and even prison. Since there is no unitary practice and methodology for applying the technique, the risk of applying it incorrectly is fairly high. SWOT analysis is a technique that shows the advantages, disadvantages, threats and opportunities of a subject. We applied SWOT analysis to the sampling method from the perspective of three players: the audit company, the audited entity and the users of financial statements. The study shows that by applying the sampling method the audit company and the audited entity both save time, effort and money. The disadvantages of the method are the difficulty of applying and understanding it. Being largely used as an audit method and being a factor in a correct audit opinion, the sampling method's advantages, disadvantages, threats and opportunities must be understood by auditors.

  2. Sampling in schools and large institutional buildings: Implications for regulations, exposure and management of lead and copper.

    Science.gov (United States)

    Doré, Evelyne; Deshommes, Elise; Andrews, Robert C; Nour, Shokoufeh; Prévost, Michèle

    2018-04-21

    Legacy lead and copper components are ubiquitous in the plumbing of large buildings, including schools that serve children most vulnerable to lead exposure. Lead and copper samples must be collected after varying stagnation times and interpreted in reference to different thresholds. A total of 130 outlets (fountains, bathroom and kitchen taps) were sampled for dissolved and particulate lead as well as copper. Sampling was conducted at 8 schools and 3 institutional (non-residential) buildings served by municipal water of varying corrosivity, with and without corrosion control (CC), and without a lead service line. Samples included first draw following overnight stagnation (>8 h), partially (30 s) and fully (5 min) flushed samples, and first draw after 30 min of stagnation. Total lead concentrations in first-draw samples after overnight stagnation varied widely from 0.07 to 19.9 μg Pb/L (median: 1.7 μg Pb/L) for large buildings served with non-corrosive water. Higher concentrations were observed in schools with corrosive water without CC (0.9-201 μg Pb/L, median: 14.3 μg Pb/L), while levels in schools with CC ranged from 0.2 to 45.1 μg Pb/L (median: 2.1 μg Pb/L). Partial flushing (30 s) and full flushing (5 min) reduced concentrations by 88% and 92% respectively for corrosive waters without CC. Lead concentrations after 30 min of stagnation were 45% of the values in first-draw samples collected after overnight stagnation. Concentrations of particulate Pb varied widely (≥0.02-846 μg Pb/L) and were found to be the cause of the very high total Pb concentrations in the 2% of samples exceeding 50 μg Pb/L. Pb levels across outlets within the same building varied widely (up to 1000X), especially in corrosive water (0.85-851 μg Pb/L after 30 min of stagnation), confirming the need to sample at each outlet to identify high-risk taps. Based on the much higher concentrations observed in first-draw samples, even after a short stagnation, the first 250 mL should be discarded unless no sources

  3. Sample-based Attribute Selective AnDE for Large Data

    DEFF Research Database (Denmark)

    Chen, Shenglei; Martinez, Ana; Webb, Geoffrey

    2017-01-01

    In the past decade, more and more applications have come with large data sets. However, existing algorithms cannot guarantee to scale well to large data. Averaged n-Dependence Estimators (AnDE) allow for flexible learning from out-of-core data by varying the value of n (the number of super parents). Henc
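
    A sketch of AODE, the n = 1 member of the AnDE family, written from the standard description in the literature (my code, not the authors'): each attribute takes a turn as "super parent", and the ensemble averages the joint estimates P(y, x_i) * prod_j P(x_j | y, x_i). Training is a single counting pass, which is what makes out-of-core learning straightforward.

    ```python
    from collections import defaultdict

    class AODE:
        def fit(self, X, y, n_values=2):
            self.n, self.d, self.n_values = len(X), len(X[0]), n_values
            self.classes = sorted(set(y))
            self.sp = defaultdict(int)     # counts of (class, i, x_i)
            self.pair = defaultdict(int)   # counts of (class, i, x_i, j, x_j)
            for row, c in zip(X, y):
                for i, v in enumerate(row):
                    self.sp[(c, i, v)] += 1
                    for j, w in enumerate(row):
                        self.pair[(c, i, v, j, w)] += 1
            return self

        def predict_one(self, row):
            scores = {}
            for c in self.classes:
                total = 0.0
                for i, v in enumerate(row):            # super-parent average
                    sp = self.sp[(c, i, v)]
                    est = (sp + 1) / (self.n + len(self.classes) * self.n_values)
                    for j, w in enumerate(row):
                        if j == i:
                            continue                   # Laplace-smoothed P(x_j | y, x_i)
                        est *= (self.pair[(c, i, v, j, w)] + 1) / (sp + self.n_values)
                    total += est
                scores[c] = total
            return max(scores, key=scores.get)

    # Tiny usage example with binary attributes:
    X = [(0, 0, 1), (0, 1, 1), (1, 1, 0), (1, 0, 0)]
    y = [0, 0, 1, 1]
    print(AODE().fit(X, y).predict_one((1, 1, 0)))   # -> 1
    ```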

  4. A large sample of Kohonen selected E+A (post-starburst) galaxies from the Sloan Digital Sky Survey

    Science.gov (United States)

    Meusinger, H.; Brünecke, J.; Schalldach, P.; in der Au, A.

    2017-01-01

    Context. The galaxy population in the contemporary Universe is characterised by a clear bimodality: blue galaxies with significant ongoing star formation and red galaxies with only a little. The migration between the blue and the red cloud of galaxies is an issue of active research. Post-starburst (PSB) galaxies are thought to be observed in the short-lived transition phase. Aims: We aim to create a large sample of local PSB galaxies from the Sloan Digital Sky Survey (SDSS) to study their characteristic properties, particularly morphological features indicative of gravitational distortions and indications for active galactic nuclei (AGNs). Another aim is to present a tool set for an efficient search in a large database of SDSS spectra based on Kohonen self-organising maps (SOMs). Methods: We computed a huge Kohonen SOM for ∼10^6 spectra from SDSS data release 7. The SOM is made fully available, in combination with an interactive user interface, for the astronomical community. We selected a large sample of PSB galaxies taking advantage of the clustering behaviour of the SOM. The morphologies of both PSB galaxies and randomly selected galaxies from a comparison sample in SDSS Stripe 82 (S82) were inspected on deep co-added SDSS images to search for indications of gravitational distortions. We used the Portsmouth galaxy property computations to study the evolutionary stage of the PSB galaxies and archival multi-wavelength data to search for hidden AGNs. Results: We compiled a catalogue of 2665 PSB galaxies with redshifts z 3 Å and z cloud, in agreement with the idea that PSB galaxies represent the transitioning phase between actively and passively evolving galaxies. The relative frequency of distorted PSB galaxies is at least 57% for EW(Hδ) > 5 Å, significantly higher than in the comparison sample. The search for AGNs based on conventional selection criteria in the radio and MIR results in a low AGN fraction of ∼2-3%. We confirm an MIR excess in the mean SED of
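
    The selection relies on the clustering behaviour of a self-organising map: similar spectra end up in neighbouring cells. A minimal SOM training sketch (mine; the paper's map is vastly larger, trained on ∼10^6 SDSS spectra):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    data = rng.normal(size=(2000, 16))        # stand-in for rebinned spectra
    rows = cols = 10
    w = rng.normal(size=(rows, cols, 16))     # codebook vectors

    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)

    for t, x in enumerate(data[rng.permutation(len(data))]):
        lr = 0.5 * np.exp(-t / 1000)                      # learning rate decay
        sigma = 3.0 * np.exp(-t / 1000) + 0.5             # neighbourhood width
        bmu = np.unravel_index(np.argmin(((w - x) ** 2).sum(-1)), (rows, cols))
        dist2 = ((grid - np.array(bmu)) ** 2).sum(-1)
        h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]  # neighbourhood kernel
        w += lr * h * (x - w)                             # pull cells toward x

    # Map each object to its best-matching unit; neighbours are similar.
    bmus = [np.unravel_index(np.argmin(((w - x) ** 2).sum(-1)), (rows, cols))
            for x in data]
    ```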

  5. Scalability on LHS (Latin Hypercube Sampling) samples for use in uncertainty analysis of large numerical models

    International Nuclear Information System (INIS)

    Baron, Jorge H.; Nunez Mac Leod, J.E.

    2000-01-01

    The present paper deals with the utilization of advanced statistical sampling methods to perform uncertainty and sensitivity analysis on numerical models. Such models may represent physical phenomena, logical structures (such as boolean expressions) or other systems, and various intrinsic parameters and/or input variables of these models are usually treated as random variables simultaneously. In the present paper a simple method to scale up Latin Hypercube Sampling (LHS) samples is presented, starting with a small sample and duplicating its size at each step, making it possible to use the numerical model results already obtained with the smaller sample. The method does not distort the statistical properties of the random variables and does not add any bias to the samples. The result is that a significant reduction in numerical model running time can be achieved (by re-using the previously run samples), while keeping all the advantages of LHS, until an acceptable representation level is achieved in the output variables. (author)
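
    One way to realise such a doubling step (my reconstruction from the abstract, not the authors' code): split each of the n strata per dimension in two, note which of the 2n finer strata the old points already occupy, and place the n new points only in the empty ones. The old model runs remain valid, and the union is again a Latin hypercube sample.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def lhs(n, d):
        """Plain LHS on [0,1]^d: one point per stratum in each dimension."""
        u = np.column_stack([rng.permutation(n) for _ in range(d)])
        return (u + rng.uniform(size=(n, d))) / n

    def double_lhs(old):
        """Extend an n-point LHS to a 2n-point LHS, re-using the old points."""
        n, d = old.shape
        new = np.empty((n, d))
        for j in range(d):
            occupied = np.floor(old[:, j] * 2 * n).astype(int)
            empty = np.setdiff1d(np.arange(2 * n), occupied)
            rng.shuffle(empty)
            new[:, j] = (empty + rng.uniform(size=n)) / (2 * n)
        return np.vstack([old, new])

    x = lhs(8, 2)
    x = double_lhs(x)      # 16 points; the first 8 model runs are re-used
    ```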

  6. On the aspiration characteristics of large-diameter, thin-walled aerosol sampling probes at yaw orientations with respect to the wind

    International Nuclear Information System (INIS)

    Vincent, J.H.; Mark, D.; Smith, T.A.; Stevens, D.C.; Marshall, M.

    1986-01-01

    Experiments were carried out in a large wind tunnel to investigate the aspiration efficiencies of thin-walled aerosol sampling probes of large diameter (up to 50 mm) at orientations with respect to the wind direction ranging from 0 to 180 degrees. Sampling conditions ranged from sub- to super-isokinetic. The experiments employed test dusts of close-graded fused alumina and were conducted under conditions of controlled freestream turbulence. For orientations up to and including 90 degrees, the results were qualitatively and quantitatively consistent with a new physical model which takes account of the fact that the sampled air not only diverges or converges (depending on the relationship between wind speed and sampling velocity) but also turns to pass through the plane of the sampling orifice. The previously published results of Durham and Lundgren (1980) and Davies and Subari (1982) for smaller probes were also in good agreement with the new model. The model breaks down, however, for orientations greater than 90 degrees due to the increasing effect of particle impaction onto the blunt leading edge of the probe body. For the probe facing directly away from the wind (180 degree orientation), aspiration efficiency is dominated almost entirely by this effect. (author)
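
    For background only (this is not the paper's model, which I do not have): the widely quoted Belyaev-Levin fit gives the aspiration efficiency A of a thin-walled probe sampling isoaxially as a function of Stokes number and the velocity ratio R = U0/U; the study's contribution is precisely to extend this picture to yawed probes, where the sampled air must also turn through the plane of the orifice.

    ```python
    def aspiration_efficiency(stk, r):
        """Belyaev-Levin fit, isoaxial thin-walled probe, as usually quoted.
        stk: particle Stokes number; r: freestream/sampling velocity U0/U."""
        k = 2.0 + 0.617 / r
        return 1.0 + (r - 1.0) * (1.0 - 1.0 / (1.0 + k * stk))

    print(aspiration_efficiency(stk=0.5, r=2.0))   # sub-isokinetic: A > 1
    ```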

  7. Human blood RNA stabilization in samples collected and transported for a large biobank

    Science.gov (United States)

    2012-01-01

    Background: The Norwegian Mother and Child Cohort Study (MoBa) is a nation-wide population-based pregnancy cohort initiated in 1999, comprising more than 108,000 pregnancies recruited between 1999 and 2008. In this study we evaluated the feasibility of integrating RNA analyses into existing MoBa protocols. We compared two different blood RNA collection tube systems – the PAXgene™ Blood RNA system and the Tempus™ Blood RNA system – and assessed the effects of suboptimal blood volumes in collection tubes and of transportation of blood samples by standard mail. Endpoints to characterize the samples were RNA quality and yield, and the RNA transcript stability of selected genes. Findings: High-quality RNA could be extracted from blood samples stabilized with both PAXgene and Tempus tubes. The RNA yields obtained from the blood samples collected in Tempus tubes were consistently higher than from PAXgene tubes. Higher RNA yields were obtained from cord blood (3-4 times) compared to adult blood with both types of tubes. Transportation of samples by standard mail had moderate effects on RNA quality and RNA transcript stability; the overall RNA quality of the transported samples was high. Some unexplained changes in gene expression were noted, which seemed to correlate with suboptimal blood volumes collected in the tubes. Temperature variations during transportation may also be of some importance. Conclusions: Our results strongly suggest that special collection tubes are necessary for RNA stabilization and they should be used for establishing new biobanks. We also show that the 50,000 samples collected in the MoBa biobank provide RNA of high quality and in sufficient amounts to allow gene expression analyses for studying the association of disease with altered patterns of gene expression. PMID:22988904

  8. Local heterogeneity effects on small-sample worths

    International Nuclear Information System (INIS)

    Schaefer, R.W.

    1986-01-01

    One of the parameters usually measured in a fast reactor critical assembly is the reactivity associated with inserting a small sample of a material into the core (sample worth). Local heterogeneities introduced by the worth measurement techniques can have a significant effect on the sample worth. Unfortunately, the capability to model some of the heterogeneity effects associated with the experimental technique traditionally used at ANL (the radial tube technique) is lacking. It has been suggested that these effects could account for a large portion of what remains of the longstanding central worth discrepancy. The purpose of this paper is to describe a large body of experimental data - most of which has never been reported - that shows the effect of radial tube-related local heterogeneities.

  9. Pattern transfer on large samples using a sub-aperture reactive ion beam

    Energy Technology Data Exchange (ETDEWEB)

    Miessler, Andre; Mill, Agnes; Gerlach, Juergen W.; Arnold, Thomas [Leibniz-Institut fuer Oberflaechenmodifizierung (IOM), Permoserstrasse 15, D-04318 Leipzig (Germany)

    2011-07-01

    In comparison to pure Ar ion beam sputtering, Reactive Ion Beam Etching (RIBE) offers the main advantage of increased selectivity for different kinds of materials, due to chemical contributions during the material removal. RIBE is therefore an excellent candidate for pattern transfer applications. The goal of the present study is to apply a sub-aperture reactive ion beam for pattern transfer on large fused silica samples. In this matter, the etching behavior in the ion beam periphery plays a decisive role. Using CF₄ as the reactive gas, XPS measurements of the modified surface expose impurities like Ni, Fe and Cr, which belong to chemically eroded material from the plasma pot, as well as an accumulation of carbon (up to 40 atomic percent) in the beam periphery. The substitution of CF₄ by NF₃ as the reactive gas reveals several benefits: more stable ion beam conditions, in combination with a reduction of the beam size down to a diameter of 5 mm, and a reduced amount of Ni, Fe and Cr contamination. However, the formation of a silicon nitride layer handicaps the chemical contribution to the etching process. These negative side effects influence the transfer of trench structures onto quartz by changing the selectivity due to the altered chemical reaction of the modified resist layer. We therefore investigate the pattern transfer on large fused silica plates using NF₃ sub-aperture RIBE.

  10. Optimum method to determine radioactivity in large tracts of land. In-situ gamma spectroscopy or sampling followed by laboratory measurement

    International Nuclear Information System (INIS)

    Bronson, Frazier

    2008-01-01

    In the process of decommissioning contaminated facilities, and in the conduct of normal operations involving radioactive material, it is frequently required to show that large areas of land are not contaminated, or, if contaminated, that the amount is below an acceptable level. However, it is quite rare for the radioactivity in the soil to be uniformly distributed; rather, it is generally in a few isolated and probably unknown locations. One way to ascertain the status of the land is to take soil samples for subsequent measurement in the laboratory. Another way is to use in-situ gamma spectroscopy. In both cases, the non-uniform distribution of radioactivity can greatly compromise the accuracy of the assay and makes uncertainty estimates much more complicated than simple propagation of counting statistics. This paper examines the process of determining the best way to estimate the activity on the tract of land, and gives quantitative estimates of measurement uncertainty for various conditions of radioactivity. When the distribution of radioactivity in the soil is not homogeneous, the sampling uncertainty is likely to be larger than the in-situ measurement uncertainty. (author)

  11. Practical characterization of large networks using neighborhood information

    KAUST Repository

    Wang, Pinghui

    2018-02-14

    Characterizing large complex networks such as online social networks through node querying is a challenging task. Network service providers often impose severe constraints on the query rate, hence limiting the sample size to a small fraction of the total network of interest. Various ad hoc subgraph sampling methods have been proposed, but many of them give biased estimates with no theoretical basis for their accuracy. In this work, we focus on developing sampling methods for large networks where querying a node also reveals partial structural information about its neighbors. Our methods are optimized for NoSQL graph databases (if the database can be accessed directly), or utilize Web APIs available on most major large networks for graph sampling. We show that our sampling method has provable convergence guarantees on being an unbiased estimator, and it is more accurate than state-of-the-art methods. We also explore methods to uncover shortest paths between a subset of nodes and detect high degree nodes by sampling only a small fraction of the network of interest. Our results demonstrate that utilizing neighborhood information yields methods that are two orders of magnitude faster than state-of-the-art methods.
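
    The basic ingredient such methods build on can be sketched in a few lines (my illustration, not the paper's estimator): a random walk over an API-like neighbour oracle, with 1/degree importance weights to undo the walk's degree bias when estimating a node-level mean.

    ```python
    import networkx as nx
    import numpy as np

    rng = np.random.default_rng(9)
    g = nx.barabasi_albert_graph(10_000, 3)   # stand-in large network
    attr = {v: rng.random() for v in g}       # node property of interest

    v = 0
    samples, weights = [], []
    for _ in range(5_000):                    # each step = one node query
        nbrs = list(g.neighbors(v))           # "partial neighbor info"
        v = nbrs[rng.integers(len(nbrs))]
        samples.append(attr[v])
        weights.append(1.0 / g.degree(v))     # correct the degree bias

    est = np.average(samples, weights=weights)
    print(est, np.mean(list(attr.values())))  # estimate vs. true mean
    ```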

  12. Association between time perspective and organic food consumption in a large sample of adults.

    Science.gov (United States)

    Bénard, Marc; Baudry, Julia; Méjean, Caroline; Lairon, Denis; Giudici, Kelly Virecoulon; Etilé, Fabrice; Reach, Gérard; Hercberg, Serge; Kesse-Guyot, Emmanuelle; Péneau, Sandrine

    2018-01-05

    Organic food intake has risen in many countries during the past decades. Even though motivations associated with such choice have been studied, psychological traits preceding these motivations have rarely been explored. Consideration of future consequences (CFC) represents the extent to which individuals consider future versus immediate consequences of their current behaviors. Consequently, a future oriented personality may be an important characteristic of organic food consumers. The objective was to analyze the association between CFC and organic food consumption in a large sample of the adult general population. In 2014, a sample of 27,634 participants from the NutriNet-Santé cohort study completed the CFC questionnaire and an Organic-Food Frequency questionnaire. For each food group (17 groups), non-organic food consumers were compared to organic food consumers across quartiles of the CFC using multiple logistic regressions. Moreover, adjusted means of proportions of organic food intakes out of total food intakes were compared between quartiles of the CFC. Analyses were adjusted for socio-demographic, lifestyle and dietary characteristics. Participants with higher CFC were more likely to consume organic food (OR quartile 4 (Q4) vs. Q1 = 1.88, 95% CI: 1.62, 2.20). Overall, future oriented participants were more likely to consume 14 food groups. The strongest associations were observed for starchy refined foods (OR = 1.78, 95% CI: 1.63, 1.94), and fruits and vegetables (OR = 1.74, 95% CI: 1.58, 1.92). The contribution of organic food intake out of total food intake was 33% higher in the Q4 compared to Q1. More precisely, the contribution of organic food consumed was higher in the Q4 for 16 food groups. The highest relative differences between Q4 and Q1 were observed for starchy refined foods (22%) and non-alcoholic beverages (21%). Seafood was the only food group without a significant difference. This study provides information on the personality of

  13. Large sample hydrology in NZ: Spatial organisation in process diagnostics

    Science.gov (United States)

    McMillan, H. K.; Woods, R. A.; Clark, M. P.

    2013-12-01

    A key question in hydrology is how to predict the dominant runoff generation processes in any given catchment. This knowledge is vital for a range of applications in forecasting hydrological response and related processes such as nutrient and sediment transport. A step towards this goal is to map dominant processes in locations where data is available. In this presentation, we use data from 900 flow gauging stations and 680 rain gauges in New Zealand, to assess hydrological processes. These catchments range in character from rolling pasture, to alluvial plains, to temperate rainforest, to volcanic areas. By taking advantage of so many flow regimes, we harness the benefits of large-sample and comparative hydrology to study patterns and spatial organisation in runoff processes, and their relationship to physical catchment characteristics. The approach we use to assess hydrological processes is based on the concept of diagnostic signatures. Diagnostic signatures in hydrology are targeted analyses of measured data which allow us to investigate specific aspects of catchment response. We apply signatures which target the water balance, the flood response and the recession behaviour. We explore the organisation, similarity and diversity in hydrological processes across the New Zealand landscape, and how these patterns change with scale. We discuss our findings in the context of the strong hydro-climatic gradients in New Zealand, and consider the implications for hydrological model building on a national scale.
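
    Two of the signature families mentioned can be made concrete with minimal definitions (mine; the study's exact formulations may differ): a runoff ratio targets the water balance, and a recession exponent summarises how quickly flow decays after rain.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    rain = rng.gamma(0.3, 10, 365)            # synthetic daily rainfall, mm
    flow = np.convolve(rain, np.exp(-np.arange(30) / 5.0), "full")[:365] * 0.04

    runoff_ratio = flow.sum() / rain.sum()    # water-balance signature

    # Recession signature: fit log(-dQ/dt) = log(a) + b*log(Q) on falling days.
    dq = np.diff(flow)
    falling = dq < 0
    b, log_a = np.polyfit(np.log(flow[:-1][falling]), np.log(-dq[falling]), 1)
    print(runoff_ratio, b)   # b near 1 suggests a near-linear reservoir
    ```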

  14. Solid-Phase Extraction and Large-Volume Sample Stacking-Capillary Electrophoresis for Determination of Tetracycline Residues in Milk

    Directory of Open Access Journals (Sweden)

    Gabriela Islas

    2018-01-01

    Solid-phase extraction in combination with large-volume sample stacking-capillary electrophoresis (SPE-LVSS-CE) was applied to measure chlortetracycline, doxycycline, oxytetracycline, and tetracycline in milk samples. Under optimal conditions, the proposed method had a linear range of 29 to 200 µg·L⁻¹, with limits of detection ranging from 18.6 to 23.8 µg·L⁻¹ with inter- and intraday repeatabilities < 10% (as a relative standard deviation) in all cases. The enrichment factors obtained were from 50.33 to 70.85 for all the TCs compared with a conventional capillary zone electrophoresis (CZE). This method is adequate to analyze tetracyclines below the most restrictive established maximum residue limits. The proposed method was employed in the analysis of 15 milk samples from different brands. Two of the tested samples were positive for the presence of oxytetracycline with concentrations of 95 and 126 µg·L⁻¹. SPE-LVSS-CE is a robust, easy, and efficient strategy for online preconcentration of tetracycline residues in complex matrices.

  15. Hot seeding using large Y-123 seeds

    International Nuclear Information System (INIS)

    Scruggs, S J; Putman, P T; Zhou, Y X; Fang, H; Salama, K

    2006-01-01

    There are several motivations for increasing the diameter of melt-textured single-domain discs. The maximum magnetic field produced by a trapped-field magnet is proportional to the radius of the sample. Furthermore, the availability of trapped-field magnets with large diameter could enable their use in applications that have traditionally been considered to require wound electromagnets, such as beam-bending magnets for particle accelerators and electric propulsion. We have investigated the possibility of using large-area epitaxial growth instead of the conventional point-nucleation growth mechanism. This process involves the use of large Y-123 seeds for the purpose of increasing the maximum achievable Y-123 single-domain size. The hot seeding technique using large Y-123 seeds was employed to seed Y-123 samples. Trapped field measurements indicate that single-domain samples were indeed grown by this technique. Microstructural evaluation indicates that growth can be characterized by a rapid nucleation followed by the usual peritectic grain growth which occurs when large seeds are used. Critical temperature measurements show that no local Tc suppression occurs in the vicinity of the seed. This work supports the suggestion of using an iterative method for increasing the size of Y-123 single domains that can be grown.

  16. Specific Antibodies Reacting with SV40 Large T Antigen Mimotopes in Serum Samples of Healthy Subjects.

    Directory of Open Access Journals (Sweden)

    Mauro Tognon

    Full Text Available Simian Virus 40, experimentally assayed in vitro in different animal and human cells and in vivo in rodents, was classified as a small DNA tumor virus. In previous studies, many groups identified Simian Virus 40 sequences in healthy individuals and cancer patients using PCR techniques, whereas others failed to detect the viral sequences in human specimens. These conflicting results prompted us to develop a novel indirect ELISA with synthetic peptides, mimicking Simian Virus 40 capsid viral protein antigens, named mimotopes. This immunologic assay allowed us to investigate the presence of serum antibodies against Simian Virus 40 and to verify whether Simian Virus 40 is circulating in humans. In this investigation, two mimotopes from Simian Virus 40 large T antigen, the viral replication protein and oncoprotein, were employed to assay human sera for specific antibody reactions. This indirect ELISA with synthetic peptides from Simian Virus 40 large T antigen was used to assay a new collection of serum samples from healthy subjects. This novel assay revealed that serum antibodies against Simian Virus 40 large T antigen mimotopes are detectable, at low titer, in healthy subjects aged 18 to 65 years. The overall prevalence of reactivity with the two Simian Virus 40 large T antigen peptides was 20%. This new ELISA with two mimotopes of the early viral regions is able to detect Simian Virus 40 large T antigen antibody responses in a specific manner.

  17. Thinking about dying and trying and intending to die: results on suicidal behavior from a large Web-based sample.

    Science.gov (United States)

    de Araújo, Rafael M F; Mazzochi, Leonardo; Lara, Diogo R; Ottoni, Gustavo L

    2015-03-01

    Suicide is an important worldwide public health problem. The aim of this study was to characterize risk factors of suicidal behavior using a large Web-based sample. The data were collected by the Brazilian Internet Study on Temperament and Psychopathology (BRAINSTEP) from November 2010 to July 2011. Suicidal behavior was assessed by an instrument based on the Suicidal Behaviors Questionnaire. The final sample consisted of 48,569 volunteers (25.9% men) with a mean ± SD age of 30.7 ± 10.1 years. More than 60% of the sample reported having had at least a passing thought of killing themselves, and 6.8% of subjects had previously attempted suicide (64% unplanned). The demographic features with the highest risk of attempting suicide were female gender (OR = 1.82, 95% CI = 1.65 to 2.00); elementary school as highest education level completed (OR = 2.84, 95% CI = 2.48 to 3.25); being unable to work (OR = 5.32, 95% CI = 4.15 to 6.81); having no religion (OR = 2.08, 95% CI = 1.90 to 2.29); and, only for female participants, being married (OR = 1.19, 95% CI = 1.08 to 1.32) or divorced (OR = 1.66, 95% CI = 1.41 to 1.96). A family history of a suicide attempt and of a completed suicide showed the same increment in the risk of suicidal behavior. The higher the number of suicide attempts, the higher was the real intention to die (P < .05). Those who really wanted to die reported more emptiness/loneliness (OR = 1.58, 95% CI = 1.35 to 1.85), disconnection (OR = 1.54, 95% CI = 1.30 to 1.81), and hopelessness (OR = 1.74, 95% CI = 1.49 to 2.03), but their methods were not different from the methods of those who did not mean to die. This large Web survey confirmed results from previous studies on suicidal behavior and pointed out the relevance of the number of previous suicide attempts and of a positive family history, even for a noncompleted suicide, as important risk factors. © Copyright 2015 Physicians Postgraduate Press, Inc.

  18. Assessing the validity of single-item life satisfaction measures: results from three large samples.

    Science.gov (United States)

    Cheung, Felix; Lucas, Richard E

    2014-12-01

    The present paper assessed the validity of single-item life satisfaction measures by comparing single-item measures to the Satisfaction with Life Scale (SWLS), a more psychometrically established measure. Two large samples from Washington (N = 13,064) and Oregon (N = 2,277) recruited by the Behavioral Risk Factor Surveillance System and a representative German sample (N = 1,312) recruited by the Germany Socio-Economic Panel were included in the present analyses. Single-item life satisfaction measures and the SWLS were correlated with theoretically relevant variables, such as demographics, subjective health, domain satisfaction, and affect. The correlations between the two life satisfaction measures and these variables were examined to assess the construct validity of single-item life satisfaction measures. Consistent across three samples, single-item life satisfaction measures demonstrated a substantial degree of criterion validity with the SWLS (zero-order r = 0.62-0.64; disattenuated r = 0.78-0.80). Patterns of statistical significance for correlations with theoretically relevant variables were the same across single-item measures and the SWLS. Single-item measures did not produce systematically different correlations compared to the SWLS (average difference = 0.001-0.005). The average absolute difference in the magnitudes of the correlations produced by single-item measures and the SWLS was very small (average absolute difference = 0.015-0.042). Single-item life satisfaction measures performed very similarly compared to the multiple-item SWLS. Social scientists would get virtually identical answers to substantive questions regardless of which measure they use.
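
    The disattenuation step quoted above has a simple closed form (Spearman's correction for attenuation). The sketch below applies it; the reliability values are illustrative stand-ins, not figures reported for these samples.

```python
import math

def disattenuate(r_xy, rel_x, rel_y):
    """Spearman's correction: r_true = r_observed / sqrt(rel_x * rel_y)."""
    return r_xy / math.sqrt(rel_x * rel_y)

# With an observed r of 0.63, a single-item reliability of ~0.70 and an
# SWLS reliability of ~0.87 (assumed values), the corrected correlation
# lands near the 0.78-0.80 range quoted in the abstract:
print(disattenuate(0.63, 0.70, 0.87))
```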

  19. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    Science.gov (United States)

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields like perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In other fields, because of the large sample sizes, even meaningless effects could be detected. This implies that researchers who could not obtain large enough effect sizes would use larger samples to obtain significant results.
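
    The kind of power audit described above can be reproduced with standard tooling. The sketch below, assuming a two-sample t-test and Cohen's d as the effect size metric, contrasts a small-sample/large-effect study with a large-sample/tiny-effect study; all numbers are illustrative.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Small study, large effect (as in perception/cognition/learning):
# even a strong effect may be missed at n = 15 per group.
print(analysis.power(effect_size=0.8, nobs1=15, alpha=0.05, ratio=1.0))

# Very large study, tiny (arguably meaningless) effect: easily "significant".
print(analysis.power(effect_size=0.1, nobs1=2000, alpha=0.05, ratio=1.0))
```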

  20. A simulative comparison of respondent driven sampling with incentivized snowball sampling--the "strudel effect".

    Science.gov (United States)

    Gyarmathy, V Anna; Johnston, Lisa G; Caplinskiene, Irma; Caplinskas, Saulius; Latkin, Carl A

    2014-02-01

    Respondent driven sampling (RDS) and incentivized snowball sampling (ISS) are two sampling methods that are commonly used to reach people who inject drugs (PWID). We generated a set of simulated RDS samples on an actual sociometric ISS sample of PWID in Vilnius, Lithuania ("original sample") to assess if the simulated RDS estimates were statistically significantly different from the original ISS sample prevalences for HIV (9.8%), Hepatitis A (43.6%), Hepatitis B (Anti-HBc 43.9% and HBsAg 3.4%), Hepatitis C (87.5%), syphilis (6.8%) and Chlamydia (8.8%) infections and for selected behavioral risk characteristics. The original sample consisted of a large component of 249 people (83% of the sample) and 13 smaller components with 1-12 individuals. Generally, as long as all seeds were recruited from the large component of the original sample, the simulation samples simply recreated the large component. There were no significant differences between the large component and the entire original sample for the characteristics of interest. Altogether 99.2% of 360 simulation sample point estimates were within the confidence interval of the original prevalence values for the characteristics of interest. When population characteristics are reflected in large network components that dominate the population, RDS and ISS may produce samples that have statistically non-different prevalence values, even though some isolated network components may be under-sampled and/or statistically significantly different from the main groups. This so-called "strudel effect" is discussed in the paper. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
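
    A toy version of this comparison can be written in a few lines: build a network with one dominant component plus a handful of tiny ones, recruit seeds inside the large component, and compare the recruited sample's prevalence with the input value. This is a simplified sketch under assumed graph and coupon parameters, not the authors' simulation code.

```python
import random
import networkx as nx

random.seed(1)
G = nx.barabasi_albert_graph(249, 2)                           # dominant component
G.add_edges_from((300 + i, 301 + i) for i in range(0, 26, 2))  # 13 tiny components
hiv = {v: random.random() < 0.098 for v in G.nodes}            # ~9.8% prevalence

def rds_sample(G, seeds, size, coupons=3):
    """Simplified chain-referral recruitment with a fixed coupon count."""
    sampled, queue = set(seeds), list(seeds)
    while queue and len(sampled) < size:
        ego = queue.pop(0)
        nbrs = [v for v in G.neighbors(ego) if v not in sampled]
        for v in random.sample(nbrs, min(coupons, len(nbrs))):
            sampled.add(v)
            queue.append(v)
    return sampled

largest = max(nx.connected_components(G), key=len)
seeds = random.sample(sorted(largest), 6)        # all seeds in the big component
s = rds_sample(G, seeds, 200)                    # recruitment recreates that component
est = sum(hiv[v] for v in s) / len(s)
print(f"simulated chain-referral prevalence: {est:.1%}")   # close to the 9.8% input
```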

  1. Relationships between anhedonia, suicidal ideation and suicide attempts in a large sample of physicians.

    Directory of Open Access Journals (Sweden)

    Gwenolé Loas

    Full Text Available The relationships between anhedonia and suicidal ideation or suicide attempts were explored in a large sample of physicians using the interpersonal psychological theory of suicide. We tested two hypotheses: firstly, that there is a significant relationship between anhedonia and suicidality and, secondly, that anhedonia could mediate the relationships between suicidal ideation or suicide attempts and thwarted belongingness or perceived burdensomeness. In a cross-sectional study, 557 physicians filled out several questionnaires measuring suicide risk, depression (using the abridged version of the Beck Depression Inventory, BDI-13), and demographic and job-related information. Ratings of anhedonia, perceived burdensomeness and thwarted belongingness were then extracted from the BDI-13 and the other questionnaires. Significant relationships were found between anhedonia and suicidal ideation or suicide attempts, even when significant variables or covariates were taken into account, in particular depressive symptoms. Mediation analyses showed significant partial or complete mediations, where anhedonia mediated the relationships between suicidal ideation (lifetime or recent) and perceived burdensomeness or thwarted belongingness. For suicide attempts, complete mediation was found only between anhedonia and thwarted belongingness. When the different components of anhedonia were taken into account, dissatisfaction, not the loss of interest or work inhibition, had significant relationships with suicidal ideation, whereas work inhibition had significant relationships with suicide attempts. Anhedonia and its component of dissatisfaction could be a risk factor for suicidal ideation and could mediate the relationship between suicidal ideation and perceived burdensomeness or thwarted belongingness in physicians. Dissatisfaction, in particular in the workplace, may be explored as a strong predictor of suicidal ideation in physicians.

  2. Characterizing the zenithal night sky brightness in large territories: how many samples per square kilometre are needed?

    Science.gov (United States)

    Bará, Salvador

    2018-01-01

    A recurring question arises when trying to characterize, by means of measurements or theoretical calculations, the zenithal night sky brightness throughout a large territory: how many samples per square kilometre are needed? The optimum sampling distance should allow reconstructing, with sufficient accuracy, the continuous zenithal brightness map across the whole region, whilst at the same time avoiding unnecessary and redundant oversampling. This paper attempts to provide some tentative answers to this issue, using two complementary tools: the luminance structure function and the Nyquist-Shannon spatial sampling theorem. The analysis of several regions of the world, based on the data from the New world atlas of artificial night sky brightness, suggests that, as a rule of thumb, about one measurement per square kilometre could be sufficient for determining the zenithal night sky brightness of artificial origin at any point in a region to within ±0.1 mag arcsec−2 (in the root-mean-square sense) of its true value in the Johnson-Cousins V band. The exact reconstruction of the zenithal night sky brightness maps from samples taken at the Nyquist rate seems to be considerably more demanding.
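
    The structure-function tool mentioned above reduces to a simple computation on a gridded brightness map: the RMS difference between pixels as a function of their separation, which indicates how far apart samples can be before a ±0.1 mag arcsec−2 error budget is exceeded. The sketch below uses a synthetic smoothed map, not atlas data.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(2)
field = rng.normal(21.0, 0.5, (200, 200))      # mag arcsec^-2 on a 1 km grid
kernel = np.ones((15, 15)) / 15**2             # smooth it to mimic real skyglow
smooth = convolve2d(field, kernel, mode="same", boundary="symm")

def sf_rms(img, lag):
    """RMS brightness difference between pixels `lag` km apart (one axis)."""
    d = img[:, lag:] - img[:, :-lag]
    return np.sqrt(np.mean(d**2))

for lag_km in (1, 2, 5, 10, 20):
    print(f"lag {lag_km:2d} km: RMS difference {sf_rms(smooth, lag_km):.3f} mag")
```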

  3. Detecting the Land-Cover Changes Induced by Large-Physical Disturbances Using Landscape Metrics, Spatial Sampling, Simulation and Spatial Analysis

    Directory of Open Access Journals (Sweden)

    Hone-Jay Chu

    2009-08-01

    Full Text Available The objectives of the study are to integrate conditional Latin Hypercube Sampling (cLHS), sequential Gaussian simulation (SGS) and spatial analysis in remotely sensed images, to monitor the effects of large chronological disturbances on spatial characteristics of landscape changes including spatial heterogeneity and variability. The multiple NDVI images demonstrate that spatial patterns of disturbed landscapes were successfully delineated by spatial analysis such as variogram, Moran's I and landscape metrics in the study area. The hybrid method delineates the spatial patterns and spatial variability of landscapes caused by these large disturbances. The cLHS approach is applied to select samples from Normalized Difference Vegetation Index (NDVI) images from SPOT HRV images in the Chenyulan watershed of Taiwan, and then SGS with sufficient samples is used to generate maps of NDVI images. Finally, the simulated NDVI maps are verified using indexes such as the correlation coefficient and mean absolute error (MAE). Therefore, the statistics and spatial structures of multiple NDVI images present a very robust behavior, which advocates the use of the index for the quantification of the landscape spatial patterns and land cover change. In addition, the results transferred by Open Geospatial techniques can be accessed from web-based and end-user applications of the watershed management.
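
    As an illustration of the variogram analysis named above, the sketch below bins pairwise semivariances of point-sampled NDVI values by lag distance. Locations and values are synthetic stand-ins for cLHS-selected SPOT HRV samples.

```python
import numpy as np

rng = np.random.default_rng(7)
xy = rng.uniform(0, 1000, (400, 2))                # sample locations (m)
ndvi = 0.3 * np.sin(xy[:, 0] / 200.0) + rng.normal(0, 0.05, 400)

d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
semi = 0.5 * (ndvi[:, None] - ndvi[None, :]) ** 2  # pairwise semivariances
upper = np.triu(np.ones(d.shape, dtype=bool), 1)   # count each pair once

for lo in range(0, 500, 100):                      # 100 m lag bins
    mask = upper & (d > lo) & (d <= lo + 100)
    print(f"{lo:3d}-{lo + 100} m: gamma = {semi[mask].mean():.4f}")
```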

  4. Methodology for Quantitative Analysis of Large Liquid Samples with Prompt Gamma Neutron Activation Analysis using Am-Be Source

    International Nuclear Information System (INIS)

    Idiri, Z.; Mazrou, H.; Beddek, S.; Amokrane, A.

    2009-01-01

    An optimized set-up for prompt gamma neutron activation analysis (PGNAA) with an Am-Be source is described and used for the analysis of large liquid samples. A methodology for quantitative analysis is proposed: it consists of normalizing the prompt gamma count rates with thermal neutron flux measurements carried out with a He-3 detector, and with gamma attenuation factors calculated using MCNP-5. The relative and absolute methods are considered. This methodology is then applied to the determination of cadmium in industrial phosphoric acid. The same sample is then analyzed by the inductively coupled plasma (ICP) method. Our results are in good agreement with those obtained with the ICP method.
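
    The relative method described above amounts to comparing flux- and attenuation-normalized count rates between the sample and a standard of known content. A minimal sketch follows; all rates, fluxes, attenuation factors and the standard concentration are invented for illustration.

```python
def cd_concentration(rate_sample, flux_sample, att_sample,
                     rate_std, flux_std, att_std, conc_std_ppm):
    """Relative PGNAA: normalized count-rate ratio times the standard's content."""
    norm_sample = rate_sample / (flux_sample * att_sample)
    norm_std = rate_std / (flux_std * att_std)
    return conc_std_ppm * norm_sample / norm_std

# e.g. a phosphoric-acid sample measured against a 10 ppm Cd standard
# (count rates in s^-1, fluxes in n cm^-2 s^-1, all values assumed):
print(cd_concentration(rate_sample=84.0, flux_sample=1.0e6, att_sample=0.91,
                       rate_std=120.0, flux_std=1.2e6, att_std=0.95,
                       conc_std_ppm=10.0))
```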

  5. Sample preparation

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    Sample preparation prior to HPLC analysis is certainly one of the most important steps to consider in trace or ultratrace analysis. For many years scientists have tried to simplify the sample preparation process. It is rarely possible to inject a neat liquid sample, or a sample whose preparation is no more complex than dissolution in a given solvent. The latter process alone can remove insoluble materials, which is especially helpful with samples in complex matrices, provided other interactions do not affect extraction. Here it is very likely that a large number of components will not dissolve and are therefore eliminated by a simple filtration process. In most cases, the process of sample preparation is not as simple as dissolution of the component of interest. At times enrichment is necessary, that is, the component of interest is present in a very large volume or mass of material. It needs to be concentrated in some manner so a small volume of the concentrated or enriched sample can be injected into HPLC. 88 refs

  6. Fast crawling methods of exploring content distributed over large graphs

    KAUST Repository

    Wang, Pinghui

    2018-03-15

    Despite recent effort to estimate topology characteristics of large graphs (e.g., online social networks and peer-to-peer networks), little attention has been given to developing a formal crawling methodology to characterize the vast amount of content distributed over these networks. Due to the large-scale nature of these networks and a limited query rate imposed by network service providers, exhaustively crawling and enumerating content maintained by each vertex is computationally prohibitive. In this paper, we show how one can obtain content properties by crawling only a small fraction of vertices and collecting their content. We first show that when sampling is naively applied, this can produce a huge bias in content statistics (i.e., average number of content replicas). To remove this bias, one may use maximum likelihood estimation to estimate content characteristics. However, our experimental results show that this straightforward method requires sampling most vertices to obtain accurate estimates. To address this challenge, we propose two efficient estimators: the special copy estimator (SCE) and the weighted copy estimator (WCE), to estimate content characteristics using available information in sampled content. SCE uses the special content copy indicator to compute the estimate, while WCE derives the estimate based on meta-information in sampled vertices. We conduct experiments on a variety of real-world and synthetic datasets, and the results show that WCE and SCE are cost effective and also “asymptotically unbiased”. Our methodology provides a new tool for researchers to efficiently query content distributed in large-scale networks.
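
    The bias being removed can be demonstrated with a simplified estimator in the spirit of WCE (not the authors' exact formulation): copies sampled uniformly over-represent heavily replicated content, and weighting each sampled copy by the inverse of its replica count, read from meta-information, recovers the true mean number of replicas.

```python
import random

random.seed(3)
copy_counts = [random.choice([1, 1, 1, 2, 5, 50]) for _ in range(10000)]
copies = [c for c in copy_counts for _ in range(c)]   # one entry per stored copy

sample = random.sample(copies, 500)                   # copies sampled uniformly

naive = sum(sample) / len(sample)                     # biased upward: popular
                                                      # content is seen more often
weighted = len(sample) / sum(1.0 / c for c in sample) # inverse-weighting de-biases

true = sum(copy_counts) / len(copy_counts)
print(true, naive, weighted)                          # weighted ~ true
```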

  7. Assessing the Validity of Single-item Life Satisfaction Measures: Results from Three Large Samples

    Science.gov (United States)

    Cheung, Felix; Lucas, Richard E.

    2014-01-01

    Purpose The present paper assessed the validity of single-item life satisfaction measures by comparing single-item measures to the Satisfaction with Life Scale (SWLS), a more psychometrically established measure. Methods Two large samples from Washington (N=13,064) and Oregon (N=2,277) recruited by the Behavioral Risk Factor Surveillance System (BRFSS) and a representative German sample (N=1,312) recruited by the Germany Socio-Economic Panel (GSOEP) were included in the present analyses. Single-item life satisfaction measures and the SWLS were correlated with theoretically relevant variables, such as demographics, subjective health, domain satisfaction, and affect. The correlations between the two life satisfaction measures and these variables were examined to assess the construct validity of single-item life satisfaction measures. Results Consistent across three samples, single-item life satisfaction measures demonstrated a substantial degree of criterion validity with the SWLS (zero-order r = 0.62-0.64; disattenuated r = 0.78-0.80). Patterns of statistical significance for correlations with theoretically relevant variables were the same across single-item measures and the SWLS. Single-item measures did not produce systematically different correlations compared to the SWLS (average difference = 0.001-0.005). The average absolute difference in the magnitudes of the correlations produced by single-item measures and the SWLS was very small (average absolute difference = 0.015-0.042). Conclusions Single-item life satisfaction measures performed very similarly compared to the multiple-item SWLS. Social scientists would get virtually identical answers to substantive questions regardless of which measure they use. PMID:24890827

  8. Gasoline prices, gasoline consumption, and new-vehicle fuel economy: Evidence for a large sample of countries

    International Nuclear Information System (INIS)

    Burke, Paul J.; Nishitateno, Shuhei

    2013-01-01

    Countries differ considerably in terms of the price drivers pay for gasoline. This paper uses data for 132 countries for the period 1995–2008 to investigate the implications of these differences for the consumption of gasoline for road transport. To address the potential for simultaneity bias, we use both a country's oil reserves and the international crude oil price as instruments for a country's average gasoline pump price. We obtain estimates of the long-run price elasticity of gasoline demand of between −0.2 and −0.5. Using newly available data for a sub-sample of 43 countries, we also find that higher gasoline prices induce consumers to substitute to vehicles that are more fuel-efficient, with an estimated elasticity of +0.2. Despite the small size of our elasticity estimates, there is considerable scope for low-price countries to achieve gasoline savings and vehicle fuel economy improvements via reducing gasoline subsidies and/or increasing gasoline taxes. - Highlights: ► We estimate the determinants of gasoline demand and new-vehicle fuel economy. ► Estimates are for a large sample of countries for the period 1995–2008. ► We instrument for gasoline prices using oil reserves and the world crude oil price. ► Gasoline demand and fuel economy are inelastic with respect to the gasoline price. ► Large energy efficiency gains are possible via higher gasoline prices
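
    The instrumenting strategy described above is standard two-stage least squares. The sketch below reproduces the idea on synthetic data: the log pump price is first regressed on the instruments, and the fitted price then explains log gasoline consumption. All coefficients and distributions are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
crude = rng.normal(4.0, 0.3, n)                  # log world crude oil price
reserves = rng.normal(2.0, 1.0, n)               # log oil reserves
price = 0.6 * crude - 0.15 * reserves + rng.normal(0, 0.2, n)
demand = -0.35 * price + rng.normal(0, 0.3, n)   # true elasticity = -0.35

# First stage: project price onto the instruments.
X1 = np.column_stack([np.ones(n), crude, reserves])
price_hat = X1 @ np.linalg.lstsq(X1, price, rcond=None)[0]

# Second stage: regress demand on the fitted (exogenous) price.
X2 = np.column_stack([np.ones(n), price_hat])
beta = np.linalg.lstsq(X2, demand, rcond=None)[0]
print(beta[1])   # recovered price elasticity, close to -0.35
```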

  9. Heritability of psoriasis in a large twin sample

    DEFF Research Database (Denmark)

    Lønnberg, Ann Sophie; Skov, Liselotte; Skytthe, A

    2013-01-01

    AIM: To study the concordance of psoriasis in a population-based twin sample. METHODS: Data on psoriasis in 10,725 twin pairs, 20-71 years of age, from the Danish Twin Registry was collected via a questionnaire survey. The concordance and heritability of psoriasis were estimated. RESULTS: In total...

  10. The vascular disrupting agent ZD6126 shows increased antitumor efficacy and enhanced radiation response in large, advanced tumors

    International Nuclear Information System (INIS)

    Siemann, Dietmar W.; Rojiani, Amyn M.

    2005-01-01

    Purpose: ZD6126 is a vascular-targeting agent that induces selective effects on the morphology of proliferating and immature endothelial cells by disrupting the tubulin cytoskeleton. The efficacy of ZD6126 was investigated in large vs. small tumors in a variety of animal models. Methods and Materials: Three rodent tumor models (KHT, SCCVII, RIF-1) and three human tumor xenografts (Caki-1, KSY-1, SKBR3) were used. Mice bearing leg tumors ranging in size from 0.1-2.0 g were injected intraperitoneally with a single 150 mg/kg dose of ZD6126. The response was assessed by morphologic and morphometric means as well as an in vivo to in vitro clonogenic cell survival assay. To examine the impact of tumor size on the extent of enhancement of radiation efficacy by ZD6126, KHT sarcomas of three different sizes were irradiated locally with a range of radiation doses, and cell survival was determined. Results: All rodent tumors and human tumor xenografts evaluated showed a strong correlation between increasing tumor size and treatment effect as determined by clonogenic cell survival. Detailed evaluation of KHT sarcomas treated with ZD6126 showed a reduction in patent tumor blood vessels that was ∼20% in small (<0.3 g) tumors and >90% in large (>1.0 g) tumors. Histologic assessment revealed that the extent of tumor necrosis after ZD6126 treatment, although minimal in small KHT sarcomas, became more extensive with increasing tumor size. Clonogenic cell survival after ZD6126 exposure showed a decrease in tumor surviving fraction from approximately 3 × 10−1 to 1 × 10−4 with increasing tumor size. When combined with radiotherapy, ZD6126 treatment resulted in little enhancement of the antitumor effect of radiation in small (<0.3 g) tumors but marked increases in cell kill in tumors larger than 1.0 g. Conclusions: Because bulky neoplastic disease is typically the most difficult to manage, the present findings provide further support for the continued development of vascular disrupting agents such as ZD6126.

  11. Extreme value statistics and thermodynamics of earthquakes. Large earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Lavenda, B. [Camerino Univ., Camerino, MC (Italy); Cipollone, E. [ENEA, Centro Ricerche Casaccia, S. Maria di Galeria, RM (Italy). National Centre for Research on Thermodynamics

    2000-06-01

    A compound Poisson process is used to derive a new shape parameter which can be used to discriminate between large earthquakes and aftershock sequences. Sample exceedance distributions of large earthquakes are fitted to the Pareto tail and the actual distribution of the maximum to the Frechet distribution, while the sample distribution of aftershocks is fitted to a Beta distribution and the distribution of the minimum to the Weibull distribution for the smallest value. The transition between initial sample distributions and asymptotic extreme value distributions shows that self-similar power laws are transformed into non-scaling exponential distributions, so that neither self-similarity nor the Gutenberg-Richter law can be considered universal. The energy-magnitude transformation converts the Frechet distribution into the Gumbel distribution, originally proposed by Epstein and Lomnitz, and not the Gompertz distribution as in the Lomnitz-Adler and Lomnitz generalization of the Gutenberg-Richter law. Numerical comparison is made with the Lomnitz-Adler and Lomnitz analysis using the same catalogue of Chinese earthquakes. An analogy is drawn between large earthquakes and high energy particle physics. A generalized equation of state is used to transform the Gamma density into the order-statistic Frechet distribution. Earthquake temperature and volume are determined as functions of the energy. Large insurance claims based on the Pareto distribution, which does not have a right endpoint, show why there cannot be a maximum earthquake energy.
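
    The two tail fits named above (Pareto for exceedances, Frechet-type extreme value for block maxima) can be sketched with standard statistical tooling. The synthetic energies below stand in for a real catalogue; note that scipy's genextreme parameterization places the Frechet domain at a negative shape parameter.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
energies = stats.pareto.rvs(b=1.8, size=20000, random_state=rng)

u = np.quantile(energies, 0.99)                 # high threshold
tail = energies[energies > u]
b, loc, scale = stats.pareto.fit(tail, floc=0)  # Pareto shape of the tail
print("Pareto tail index:", b)

maxima = energies.reshape(200, 100).max(axis=1) # block maxima
c, loc, scale = stats.genextreme.fit(maxima)
print("GEV shape:", c)                          # c < 0 => Frechet domain
```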

  12. MZDASoft: a software architecture that enables large-scale comparison of protein expression levels over multiple samples based on liquid chromatography/tandem mass spectrometry.

    Science.gov (United States)

    Ghanat Bari, Mehrab; Ramirez, Nelson; Wang, Zhiwei; Zhang, Jianqiu Michelle

    2015-10-15

    Without accurate peak linking/alignment, only the expression levels of a small percentage of proteins can be compared across multiple samples in Liquid Chromatography/Mass Spectrometry/Tandem Mass Spectrometry (LC/MS/MS) due to the selective nature of tandem MS peptide identification. This greatly hampers biomedical research that aims at finding biomarkers for disease diagnosis, treatment, and the understanding of disease mechanisms. A recent algorithm, PeakLink, has allowed the accurate linking of LC/MS peaks without tandem MS identifications to their corresponding ones with identifications across multiple samples collected from different instruments, tissues and labs, which greatly enhanced the ability of comparing proteins. However, PeakLink cannot be implemented practically for large numbers of samples based on existing software architectures, because it requires access to peak elution profiles from multiple LC/MS/MS samples simultaneously. We propose a new architecture based on parallel processing, which extracts LC/MS peak features, and saves them in database files to enable the implementation of PeakLink for multiple samples. The software has been deployed in High-Performance Computing (HPC) environments. The core part of the software, MZDASoft Parallel Peak Extractor (PPE), can be downloaded with a user and developer's guide, and it can be run on HPC centers directly. The quantification applications, MZDASoft TandemQuant and MZDASoft PeakLink, are written in Matlab, which are compiled with a Matlab runtime compiler. A sample script that incorporates all necessary processing steps of MZDASoft for LC/MS/MS quantification in a parallel processing environment is available. The project webpage is http://compgenomics.utsa.edu/zgroup/MZDASoft. The proposed architecture enables the implementation of PeakLink for multiple samples. Significantly more (100%-500%) proteins can be compared over multiple samples with better quantification accuracy in test cases. MZDASoft

  13. Association between the prevalence of depression and age in a large representative German sample of people aged 53 to 80 years.

    Science.gov (United States)

    Wild, Beate; Herzog, Wolfgang; Schellberg, Dieter; Lechner, Sabine; Niehoff, Doro; Brenner, Hermann; Rothenbacher, Dietrich; Stegmaier, Christa; Raum, Elke

    2012-04-01

    The aim of the study was to determine the association between the prevalence of clinically significant depression and age in a large representative sample of elderly German people. In the second follow-up (2005-2007) of the ESTHER cohort study, the 15-item geriatric depression scale (GDS-15) as well as a sociodemographic and clinical questionnaire were administered to a representative sample of 8270 people of ages 53 to 80 years. The prevalence of clinically significant depression was estimated using a GDS cut-off score of 5/6. Prevalence rates were estimated for the different age categories. The association between depression and age was analyzed using logistic regression, adjusted for gender, co-morbid medical disorders, education, marital status, physical activity, smoking, self-perceived cognitive impairment, and anti-depressive medication. Of the participants, 7878 (95.3%) completed more than twelve GDS items and were included in the study. The prevalence of clinically significant depression was 16.0% (95%CI = [15.2; 16.6]). Depression prevalence as a function of age group showed a U-shaped pattern (53-59: 21.0%, CI = [18.9; 23.3]; 60-64: 17.7%, CI = [15.7; 19.7]; 65-69: 12.6%, CI = [11.2; 14.0]; 70-74: 14.4%, CI = [12.6; 16.0]; 75-80: 17.1%, CI = [14.9; 19.4]). Adjusted odds ratios showed that the chances of being depressive decrease with the age category but remain relatively stable for people aged 65 and over. The prevalence of depression in the elderly seems to be associated with the age category. Adjusted odds ratios showed that people aged 60 and older had lower chances of being depressive than people aged 53 to 59 years. Copyright © 2011 John Wiley & Sons, Ltd.
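
    Age-banded prevalences of this kind can be reproduced from counts with standard binomial confidence intervals. In the sketch below the per-band counts are approximate reconstructions from the reported percentages and the total of 7878 participants, used purely for illustration.

```python
from statsmodels.stats.proportion import proportion_confint

groups = {            # age band: (cases above GDS cut-off, total), approximate
    "53-59": (399, 1900),
    "60-64": (301, 1700),
    "65-69": (214, 1700),
    "70-74": (216, 1500),
    "75-80": (185, 1078),
}
for band, (k, n) in groups.items():
    lo, hi = proportion_confint(k, n, alpha=0.05, method="wilson")
    print(f"{band}: {k / n:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```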

  14. Strategies and equipment for sampling suspended sediment and associated toxic chemicals in large rivers - with emphasis on the Mississippi River

    Science.gov (United States)

    Meade, R.H.; Stevens, H.H.

    1990-01-01

    A Lagrangian strategy for sampling large rivers, which was developed and tested in the Orinoco and Amazon Rivers of South America during the early 1980s, is now being applied to the study of toxic chemicals in the Mississippi River. A series of 15-20 cross-sections of the Mississippi mainstem and its principal tributaries is sampled by boat in downstream sequence, beginning upriver of St. Louis and concluding downriver of New Orleans 3 weeks later. The timing of the downstream sampling sequence approximates the travel time of the river water. Samples at each cross-section are discharge-weighted to provide concentrations of dissolved and suspended constituents that are converted to fluxes. Water-sediment mixtures are collected from 10-40 equally spaced points across the river width by sequential depth integration at a uniform vertical transit rate. Essential equipment includes (i) a hydraulic winch, for sensitive control of vertical transit rates, and (ii) a collapsible-bag sampler, which allows integrated samples to be collected at all depths in the river. A section is usually sampled in 4-8 h, for a total sample recovery of 100-120 l. Sampled concentrations of suspended silt and clay are reproducible within 3%.

  15. Religion and the Unmaking of Prejudice toward Muslims: Evidence from a Large National Sample

    Science.gov (United States)

    Shaver, John H.; Troughton, Geoffrey; Sibley, Chris G.; Bulbulia, Joseph A.

    2016-01-01

    In the West, anti-Muslim sentiments are widespread. It has been theorized that inter-religious tensions fuel anti-Muslim prejudice, yet previous attempts to isolate sectarian motives have been inconclusive. Factors contributing to ambiguous results are: (1) failures to assess and adjust for multi-level denomination effects; (2) inattention to demographic covariates; (3) inadequate methods for comparing anti-Muslim prejudice relative to other minority group prejudices; and (4) ad hoc theories for the mechanisms that underpin prejudice and tolerance. Here we investigate anti-Muslim prejudice using a large national sample of non-Muslim New Zealanders (N = 13,955) who responded to the 2013 New Zealand Attitudes and Values Study. We address previous shortcomings by: (1) building Bayesian multivariate, multi-level regression models with denominations modeled as random effects; (2) including high-resolution demographic information that adjusts for factors known to influence prejudice; (3) simultaneously evaluating the relative strength of anti-Muslim prejudice by comparing it to anti-Arab prejudice and anti-immigrant prejudice within the same statistical model; and (4) testing predictions derived from the Evolutionary Lag Theory of religious prejudice and tolerance. This theory predicts that in countries such as New Zealand, with historically low levels of conflict, religion will tend to increase tolerance generally, and extend to minority religious groups. Results show that anti-Muslim and anti-Arab sentiments are confounded, widespread, and substantially higher than anti-immigrant sentiments. In support of the theory, the intensity of religious commitments was associated with a general increase in tolerance toward minority groups, including a poorly tolerated religious minority group: Muslims. Results clarify religion’s power to enhance tolerance in peaceful societies that are nevertheless afflicted by prejudice. PMID:26959976

  16. Religion and the Unmaking of Prejudice toward Muslims: Evidence from a Large National Sample.

    Science.gov (United States)

    Shaver, John H; Troughton, Geoffrey; Sibley, Chris G; Bulbulia, Joseph A

    2016-01-01

    In the West, anti-Muslim sentiments are widespread. It has been theorized that inter-religious tensions fuel anti-Muslim prejudice, yet previous attempts to isolate sectarian motives have been inconclusive. Factors contributing to ambiguous results are: (1) failures to assess and adjust for multi-level denomination effects; (2) inattention to demographic covariates; (3) inadequate methods for comparing anti-Muslim prejudice relative to other minority group prejudices; and (4) ad hoc theories for the mechanisms that underpin prejudice and tolerance. Here we investigate anti-Muslim prejudice using a large national sample of non-Muslim New Zealanders (N = 13,955) who responded to the 2013 New Zealand Attitudes and Values Study. We address previous shortcomings by: (1) building Bayesian multivariate, multi-level regression models with denominations modeled as random effects; (2) including high-resolution demographic information that adjusts for factors known to influence prejudice; (3) simultaneously evaluating the relative strength of anti-Muslim prejudice by comparing it to anti-Arab prejudice and anti-immigrant prejudice within the same statistical model; and (4) testing predictions derived from the Evolutionary Lag Theory of religious prejudice and tolerance. This theory predicts that in countries such as New Zealand, with historically low levels of conflict, religion will tend to increase tolerance generally, and extend to minority religious groups. Results show that anti-Muslim and anti-Arab sentiments are confounded, widespread, and substantially higher than anti-immigrant sentiments. In support of the theory, the intensity of religious commitments was associated with a general increase in tolerance toward minority groups, including a poorly tolerated religious minority group: Muslims. Results clarify religion's power to enhance tolerance in peaceful societies that are nevertheless afflicted by prejudice.

  17. Relationships between anhedonia, suicidal ideation and suicide attempts in a large sample of physicians

    Science.gov (United States)

    Lefebvre, Guillaume; Rotsaert, Marianne; Englert, Yvon

    2018-01-01

    Background The relationships between anhedonia and suicidal ideation or suicide attempts were explored in a large sample of physicians using the interpersonal psychological theory of suicide. We tested two hypotheses: firstly, that there is a significant relationship between anhedonia and suicidality and, secondly, that anhedonia could mediate the relationships between suicidal ideation or suicide attempts and thwarted belongingness or perceived burdensomeness. Methods In a cross-sectional study, 557 physicians filled out several questionnaires measuring suicide risk, depression, using the abridged version of the Beck Depression Inventory (BDI-13), and demographic and job-related information. Ratings of anhedonia, perceived burdensomeness and thwarted belongingness were then extracted from the BDI-13 and the other questionnaires. Results Significant relationships were found between anhedonia and suicidal ideation or suicide attempts, even when significant variables or covariates were taken into account and, in particular, depressive symptoms. Mediation analyses showed significant partial or complete mediations, where anhedonia mediated the relationships between suicidal ideation (lifetime or recent) and perceived burdensomeness or thwarted belongingness. For suicide attempts, complete mediation was found only between anhedonia and thwarted belongingness. When the different components of anhedonia were taken into account, dissatisfaction, not the loss of interest or work inhibition, had significant relationships with suicidal ideation, whereas work inhibition had significant relationships with suicide attempts. Conclusions Anhedonia and its component of dissatisfaction could be a risk factor for suicidal ideation and could mediate the relationship between suicidal ideation and perceived burdensomeness or thwarted belongingness in physicians. Dissatisfaction, in particular in the workplace, may be explored as a strong predictor of suicidal ideation in physicians.

  18. Age differences in personality traits from 10 to 65: Big Five domains and facets in a large cross-sectional sample.

    Science.gov (United States)

    Soto, Christopher J; John, Oliver P; Gosling, Samuel D; Potter, Jeff

    2011-02-01

    Hypotheses about mean-level age differences in the Big Five personality domains, as well as 10 more specific facet traits within those domains, were tested in a very large cross-sectional sample (N = 1,267,218) of children, adolescents, and adults (ages 10-65) assessed over the World Wide Web. The results supported several conclusions. First, late childhood and adolescence were key periods. Across these years, age trends for some traits (a) were especially pronounced, (b) were in a direction different from the corresponding adult trends, or (c) first indicated the presence of gender differences. Second, there were some negative trends in psychosocial maturity from late childhood into adolescence, whereas adult trends were overwhelmingly in the direction of greater maturity and adjustment. Third, the related but distinguishable facet traits within each broad Big Five domain often showed distinct age trends, highlighting the importance of facet-level research for understanding life span age differences in personality. (PsycINFO Database Record (c) 2010 APA, all rights reserved).

  19. Prevalence and correlates of problematic smartphone use in a large random sample of Chinese undergraduates.

    Science.gov (United States)

    Long, Jiang; Liu, Tie-Qiao; Liao, Yan-Hui; Qi, Chang; He, Hao-Yu; Chen, Shu-Bao; Billieux, Joël

    2016-11-17

    Smartphones are becoming a daily necessity for most undergraduates in Mainland China. Because the present scenario of problematic smartphone use (PSU) is largely unexplored, in the current study we aimed to estimate the prevalence of PSU and to screen suitable predictors for PSU among Chinese undergraduates in the framework of the stress-coping theory. A sample of 1062 undergraduate smartphone users was recruited by means of the stratified cluster random sampling strategy between April and May 2015. The Problematic Cellular Phone Use Questionnaire was used to identify PSU. We evaluated five candidate risk factors for PSU by using logistic regression analysis while controlling for demographic characteristics and specific features of smartphone use. The prevalence of PSU among Chinese undergraduates was estimated to be 21.3%. The risk factors for PSU were majoring in the humanities, high monthly income from the family (≥1500 RMB), serious emotional symptoms, high perceived stress, and perfectionism-related factors (high doubts about actions, high parental expectations). PSU among undergraduates appears to be ubiquitous and thus constitutes a public health issue in Mainland China. Although further longitudinal studies are required to test whether PSU is a transient phenomenon or a chronic and progressive condition, our study successfully identified socio-demographic and psychological risk factors for PSU. These results, obtained from a random and thus representative sample of undergraduates, open up new avenues in terms of prevention and regulation policies.

  20. An examination of smoking behavior and opinions about smoke-free environments in a large sample of sexual and gender minority community members.

    Science.gov (United States)

    McElroy, Jane A; Everett, Kevin D; Zaniletti, Isabella

    2011-06-01

    The purpose of this study is to more completely quantify smoking rate and support for smoke-free policies in private and public environments from a large sample of self-identified sexual and gender minority (SGM) populations. A targeted sampling strategy recruited participants from 4 Missouri Pride Festivals and online surveys targeted to SGM populations during the summer of 2008. A 24-item survey gathered information on gender and sexual orientation, smoking status, and questions assessing behaviors and preferences related to smoke-free policies. The project recruited participants through Pride Festivals (n = 2,676) and Web-based surveys (n = 231) representing numerous sexual and gender orientations and the racial composite of the state of Missouri. Differences were found between the Pride Festivals sample and the Web-based sample, including smoking rates: current smoking in the Web-based sample (22%) was significantly lower than in the Pride Festivals sample (37%). SGM participants were more likely to be current smokers than the study's heterosexual group (n = 436; p = .005). Statistically fewer SGM racial minorities (33%) are current smokers compared with SGM Whites (37%; p = .04). Support and preferences for public and private smoke-free environments were generally low in the SGM population. The strategic targeting method achieved a large and diverse sample. The findings of high rates of smoking coupled with generally low levels of support for smoke-free public policies in the SGM community highlight the need for additional research to inform programmatic attempts to reduce tobacco use and increase support for smoke-free environments.

  1. Characteristic Performance Evaluation of a new SAGe Well Detector for Small and Large Sample Geometries

    International Nuclear Information System (INIS)

    Adekola, A.S.; Colaresi, J.; Douwen, J.; Jaederstroem, H.; Mueller, W.F.; Yocum, K.M.; Carmichael, K.

    2015-01-01

    concentrations compared to Traditional Well detectors. The SAGe Well detectors are compatible with Marinelli beakers and compete very well with semi-planar and coaxial detectors for large samples in many applications. (authors)

  2. Characteristic Performance Evaluation of a new SAGe Well Detector for Small and Large Sample Geometries

    Energy Technology Data Exchange (ETDEWEB)

    Adekola, A.S.; Colaresi, J.; Douwen, J.; Jaederstroem, H.; Mueller, W.F.; Yocum, K.M.; Carmichael, K. [Canberra Industries Inc., 800 Research Parkway, Meriden, CT 06450 (United States)

    2015-07-01

    concentrations compared to Traditional Well detectors. The SAGe Well detectors are compatible with Marinelli beakers and compete very well with semi-planar and coaxial detectors for large samples in many applications. (authors)

  3. CASP10-BCL::Fold efficiently samples topologies of large proteins.

    Science.gov (United States)

    Heinze, Sten; Putnam, Daniel K; Fischer, Axel W; Kohlmann, Tim; Weiner, Brian E; Meiler, Jens

    2015-03-01

    During CASP10 in summer 2012, we tested BCL::Fold for prediction of free modeling (FM) and template-based modeling (TBM) targets. BCL::Fold assembles the tertiary structure of a protein from predicted secondary structure elements (SSEs), omitting more flexible loop regions early on. This approach enables the sampling of conformational space for larger proteins with more complex topologies. In preparation for CASP11, we analyzed the quality of CASP10 models throughout the prediction pipeline to understand BCL::Fold's ability to sample the native topology, identify native-like models by scoring and/or clustering approaches, and our ability to add loop regions and side chains to initial SSE-only models. The standout observation is that BCL::Fold sampled topologies with a GDT_TS score > 33% for 12 of 18 and with a topology score > 0.8 for 11 of 18 test cases de novo. Despite the sampling success of BCL::Fold, significant challenges still exist in the clustering and loop generation stages of the pipeline. The clustering approach employed for model selection often failed to identify the most native-like assembly of SSEs for further refinement and submission. It was also observed that for some β-strand proteins model refinement failed, as β-strands were not properly aligned to form hydrogen bonds, removing otherwise accurate models from the pool. Further, BCL::Fold frequently samples non-natural topologies that require loop regions to pass through the center of the protein. © 2015 Wiley Periodicals, Inc.

  4. Does Shyness Vary According to Attained Social Roles? Trends Across Age Groups in a Large British Sample.

    Science.gov (United States)

    Van Zalk, Nejra; Lamb, Michael E; Jason Rentfrow, Peter

    2017-12-01

    The current study investigated (a) how a composite measure of shyness comprising introversion and neuroticism relates to other well-known constructs involving social fears, and (b) whether mean levels of shyness vary for men and women depending on the adoption of various social roles. Study 1 used a sample of 211 UK participants aged 17-70 (64% female; M_age = 47.90). Study 2 used data from a large cross-sectional data set with UK participants aged 17-70 (N_target = 552,663; 64% female; M_age = 34.19 years). Study 1 showed that shyness measured as a composite of introversion and neuroticism was highly correlated with other constructs involving social fears. Study 2 indicated that, controlling for various sociodemographic variables, females appeared to have higher levels, whereas males appeared to have lower levels of shyness. Males and females who were in employment had the lowest shyness levels, whereas those working in unskilled jobs had the highest levels and people working in sales the lowest levels of shyness. Participants in relationships had lower levels of shyness than those not in relationships, but parenthood was not associated with shyness. Mean levels of shyness are likely to vary according to adopted social roles, gender, and age. © 2016 Wiley Periodicals, Inc.

  5. Does Decision Quality (Always) Increase with the Size of Information Samples? Some Vicissitudes in Applying the Law of Large Numbers

    Science.gov (United States)

    Fiedler, Klaus; Kareev, Yaakov

    2006-01-01

    Adaptive decision making requires that contingencies between decision options and their relative assets be assessed accurately and quickly. The present research addresses the challenging notion that contingencies may be more visible from small than from large samples of observations. An algorithmic account for such a seemingly paradoxical effect…

  6. Toward Rapid Unattended X-ray Tomography of Large Planar Samples at 50-nm Resolution

    International Nuclear Information System (INIS)

    Rudati, J.; Tkachuk, A.; Gelb, J.; Hsu, G.; Feng, Y.; Pastrick, R.; Lyon, A.; Trapp, D.; Beetz, T.; Chen, S.; Hornberger, B.; Seshadri, S.; Kamath, S.; Zeng, X.; Feser, M.; Yun, W.; Pianetta, P.; Andrews, J.; Brennan, S.; Chu, Y. S.

    2009-01-01

    X-ray tomography at sub-50 nm resolution of small areas (∼15 μm × 15 μm) is routinely performed with both laboratory and synchrotron sources. Optics and detectors for laboratory systems have been optimized to approach the theoretical efficiency limit. Limited by the availability of relatively low-brightness laboratory X-ray sources, exposure times for 3-D data sets at 50 nm resolution are still many hours up to a full day. However, for bright synchrotron sources, the use of these optimized imaging systems results in extremely short exposure times, approaching live-camera speeds at the Advanced Photon Source at Argonne National Laboratory near Chicago in the US. These speeds make it possible to acquire a full tomographic dataset at 50 nm resolution in less than a minute of true X-ray exposure time. However, limits in the control and positioning system lead to large overhead that currently results in typical exposure times of ∼15 min. We present our work on the reduction and elimination of system overhead and toward complete automation of the data acquisition process. The enhancements underway are primarily to boost the scanning rate, sample positioning speed, and illumination homogeneity to performance levels necessary for unattended tomography of large areas (many mm² in size). We present first results on this ongoing project.

  7. Extreme value statistics and thermodynamics of earthquakes: large earthquakes

    Directory of Open Access Journals (Sweden)

    B. H. Lavenda

    2000-06-01

    Full Text Available A compound Poisson process is used to derive a new shape parameter which can be used to discriminate between large earthquakes and aftershock sequences. Sample exceedance distributions of large earthquakes are fitted to the Pareto tail and the actual distribution of the maximum to the Fréchet distribution, while the sample distribution of aftershocks is fitted to a Beta distribution and the distribution of the minimum to the Weibull distribution for the smallest value. The transition between initial sample distributions and asymptotic extreme value distributions shows that self-similar power laws are transformed into nonscaling exponential distributions so that neither self-similarity nor the Gutenberg-Richter law can be considered universal. The energy-magnitude transformation converts the Fréchet distribution into the Gumbel distribution, originally proposed by Epstein and Lomnitz, and not the Gompertz distribution as in the Lomnitz-Adler and Lomnitz generalization of the Gutenberg-Richter law. Numerical comparison is made with the Lomnitz-Adler and Lomnitz analysis using the same Catalogue of Chinese Earthquakes. An analogy is drawn between large earthquakes and high energy particle physics. A generalized equation of state is used to transform the Gamma density into the order-statistic Fréchet distribution. Earthquake temperature and volume are determined as functions of the energy. Large insurance claims based on the Pareto distribution, which does not have a right endpoint, show why there cannot be a maximum earthquake energy.

  8. A simulative comparison of respondent driven sampling with incentivized snowball sampling – the “strudel effect”

    Science.gov (United States)

    Gyarmathy, V. Anna; Johnston, Lisa G.; Caplinskiene, Irma; Caplinskas, Saulius; Latkin, Carl A.

    2014-01-01

    Background Respondent driven sampling (RDS) and Incentivized Snowball Sampling (ISS) are two sampling methods that are commonly used to reach people who inject drugs (PWID). Methods We generated a set of simulated RDS samples on an actual sociometric ISS sample of PWID in Vilnius, Lithuania (“original sample”) to assess if the simulated RDS estimates were statistically significantly different from the original ISS sample prevalences for HIV (9.8%), Hepatitis A (43.6%), Hepatitis B (Anti-HBc 43.9% and HBsAg 3.4%), Hepatitis C (87.5%), syphilis (6.8%) and Chlamydia (8.8%) infections and for selected behavioral risk characteristics. Results The original sample consisted of a large component of 249 people (83% of the sample) and 13 smaller components with 1 to 12 individuals. Generally, as long as all seeds were recruited from the large component of the original sample, the simulation samples simply recreated the large component. There were no significant differences between the large component and the entire original sample for the characteristics of interest. Altogether 99.2% of 360 simulation sample point estimates were within the confidence interval of the original prevalence values for the characteristics of interest. Conclusions When population characteristics are reflected in large network components that dominate the population, RDS and ISS may produce samples that have statistically non-different prevalence values, even though some isolated network components may be under-sampled and/or statistically significantly different from the main groups. This so-called “strudel effect” is discussed in the paper. PMID:24360650

  9. On Matrix Sampling and Imputation of Context Questionnaires with Implications for the Generation of Plausible Values in Large-Scale Assessments

    Science.gov (United States)

    Kaplan, David; Su, Dan

    2016-01-01

    This article presents findings on the consequences of matrix sampling of context questionnaires for the generation of plausible values in large-scale assessments. Three studies are conducted. Study 1 uses data from PISA 2012 to examine several different forms of missing data imputation within the chained equations framework: predictive mean…

  10. An examination of the RCMAS-2 scores across gender, ethnic background, and age in a large Asian school sample.

    Science.gov (United States)

    Ang, Rebecca P; Lowe, Patricia A; Yusof, Noradlin

    2011-12-01

    The present study investigated the factor structure, reliability, convergent and discriminant validity, and U.S. norms of the Revised Children's Manifest Anxiety Scale, Second Edition (RCMAS-2; C. R. Reynolds & B. O. Richmond, 2008a) scores in a Singapore sample of 1,618 school-age children and adolescents. Although there were small statistically significant differences in the average RCMAS-2 T scores found across various demographic groupings, on the whole, the U.S. norms appear adequate for use in the Asian Singapore sample. Results from item bias analyses suggested that biased items detected had small effects and were counterbalanced across gender and ethnicity, and hence, their relative impact on test score variation appears to be minimal. Results of factor analyses on the RCMAS-2 scores supported the presence of a large general anxiety factor, the Total Anxiety factor, and the 5-factor structure found in U.S. samples was replicated. Both the large general anxiety factor and the 5-factor solution were invariant across gender and ethnic background. Internal consistency estimates ranged from adequate to good, and 2-week test-retest reliability estimates were comparable to previous studies. Evidence providing support for convergent and discriminant validity of the RCMAS-2 scores was also found. Taken together, findings provide additional cross-cultural evidence of the appropriateness and usefulness of the RCMAS-2 as a measure of anxiety in Asian Singaporean school-age children and adolescents.

  11. Investigating sex differences in psychological predictors of snack intake among a large representative sample.

    Science.gov (United States)

    Adriaanse, Marieke A; Evers, Catharine; Verhoeven, Aukje A C; de Ridder, Denise T D

    2016-03-01

    It is often assumed that there are substantial sex differences in eating behaviour (e.g. women are more likely to be dieters or emotional eaters than men). The present study investigates this assumption in a large representative community sample while incorporating a comprehensive set of psychological eating-related variables. A community sample was employed to: (i) determine sex differences in (un)healthy snack consumption and psychological eating-related variables (e.g. emotional eating, intention to eat healthily); (ii) examine whether sex predicts energy intake from (un)healthy snacks over and above psychological variables; and (iii) investigate the relationship between psychological variables and snack intake for men and women separately. Snack consumption was assessed with a 7 d snack diary; the psychological eating-related variables with questionnaires. Participants were members of an Internet survey panel that is based on a true probability sample of households in the Netherlands. Men and women (n = 1292; 45% male), with a mean age of 51.23 (SD 16.78) years and a mean BMI of 25.62 (SD 4.75) kg/m². Results revealed that women consumed more healthy and less unhealthy snacks than men and they scored higher than men on emotional and restrained eating. Women also more often reported appearance and health-related concerns about their eating behaviour, but men and women did not differ with regard to external eating or their intentions to eat more healthily. The relationships between psychological eating-related variables and snack intake were similar for men and women, indicating that snack intake is predicted by the same variables for men and women. It is concluded that some small sex differences in psychological eating-related variables exist, but based on the present data there is no need for interventions aimed at promoting healthy eating to target different predictors according to sex.

  12. Large contribution of human papillomavirus in vaginal neoplastic lesions: a worldwide study in 597 samples.

    Science.gov (United States)

    Alemany, L; Saunier, M; Tinoco, L; Quirós, B; Alvarado-Cabrero, I; Alejo, M; Joura, E A; Maldonado, P; Klaustermeier, J; Salmerón, J; Bergeron, C; Petry, K U; Guimerà, N; Clavero, O; Murillo, R; Clavel, C; Wain, V; Geraets, D T; Jach, R; Cross, P; Carrilho, C; Molina, C; Shin, H R; Mandys, V; Nowakowski, A M; Vidal, A; Lombardi, L; Kitchener, H; Sica, A R; Magaña-León, C; Pawlita, M; Quint, W; Bravo, I G; Muñoz, N; de Sanjosé, S; Bosch, F X

    2014-11-01

    This work describes the human papillomavirus (HPV) prevalence and the HPV type distribution in a large series of vaginal intraepithelial neoplasia (VAIN) grades 2/3 and vaginal cancer worldwide. We analysed 189 VAIN 2/3 and 408 invasive vaginal cancer cases collected from 31 countries from 1986 to 2011. After histopathological evaluation of sectioned formalin-fixed paraffin-embedded samples, HPV DNA detection and typing was performed using the SPF-10/DNA enzyme immunoassay (DEIA)/LiPA25 system (version 1). A subset of 146 vaginal cancers was tested for p16(INK4a) expression, a cellular surrogate marker for HPV transformation. Prevalence ratios were estimated using multivariate Poisson regression with robust variance. HPV DNA was detected in 74% (95% confidence interval (CI): 70-78%) of invasive cancers and in 96% (95% CI: 92-98%) of VAIN 2/3. Among cancers, the highest detection rates were observed in the warty-basaloid subtype of squamous cell carcinomas and at younger ages. Concerning the type-specific distribution, HPV16 was the most frequently detected type in both precancerous and cancerous lesions (59%). p16(INK4a) overexpression was found in 87% of HPV DNA positive vaginal cancer cases. HPV was identified in a large proportion of invasive vaginal cancers and in almost all VAIN 2/3. HPV16 was the most common type detected. A large impact on the reduction of the burden of vaginal neoplastic lesions is expected among vaccinated cohorts. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. 99Mo Yield Using Large Sample Mass of MoO3 for Sustainable Production of 99Mo

    Science.gov (United States)

    Tsukada, Kazuaki; Nagai, Yasuki; Hashimoto, Kazuyuki; Kawabata, Masako; Minato, Futoshi; Saeki, Hideya; Motoishi, Shoji; Itoh, Masatoshi

    2018-04-01

    A neutron source from the C(d,n) reaction has the unique capability of producing medical radioisotopes such as 99Mo with a minimum level of radioactive waste. Precise data on the neutron flux are crucial to determine the best conditions for obtaining the maximum yield of 99Mo. The measured yield of 99Mo produced by the 100Mo(n,2n)99Mo reaction from a large sample mass of MoO3 agrees well with the numerical result estimated with the latest neutron data, which are a factor of two larger than the other existing data. This result establishes an important finding for the domestic production of 99Mo: approximately 50% of the demand for 99Mo in Japan could be met using a 100 g 100MoO3 sample mass with a single accelerator of 40 MeV, 2 mA deuteron beams.

  14. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

    Directory of Open Access Journals (Sweden)

    Simon Boitard

    2016-03-01

    Full Text Available Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
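
    The two summary-statistic classes named above are easy to sketch. The snippet below is a minimal illustration, not PopSizeABC's code: it computes the folded allele frequency spectrum and binned genotype-correlation (zygotic) linkage disequilibrium from an unphased 0/1/2 genotype matrix; the toy data and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ind, n_snp = 25, 5_000                         # e.g. 25 unphased diploid genomes
geno = rng.integers(0, 3, size=(n_ind, n_snp))   # 0/1/2 alt-allele counts per SNP
pos = np.sort(rng.integers(0, 50_000_000, size=n_snp))  # bp positions

def folded_afs(geno):
    """Folded AFS: SNP counts by minor-allele count (no polarization required)."""
    n_chrom = 2 * geno.shape[0]
    ac = geno.sum(axis=0)                        # alt-allele count per SNP
    mac = np.minimum(ac, n_chrom - ac)           # fold onto the minor allele
    return np.bincount(mac, minlength=n_chrom // 2 + 1)

def mean_r2_by_distance(geno, pos, bins, n_pairs=2_000):
    """Average genotype-correlation r^2 (zygotic LD) per physical-distance bin."""
    out = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        r2s = []
        for i in rng.integers(0, len(pos) - 1, size=n_pairs):
            j = np.searchsorted(pos, pos[i] + lo, side="right")  # nearest SNP >= lo away
            if j < len(pos) and pos[j] - pos[i] < hi:
                r = np.corrcoef(geno[:, i], geno[:, j])[0, 1]
                if np.isfinite(r):
                    r2s.append(r * r)
        out.append(np.mean(r2s) if r2s else np.nan)
    return np.array(out)

print(folded_afs(geno)[:10])
print(mean_r2_by_distance(geno, pos, np.array([0, 1e4, 1e5, 1e6, 1e7])))
```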

  15. The ESO Diffuse Interstellar Band Large Exploration Survey (EDIBLES)

    Science.gov (United States)

    Cami, J.; Cox, N. L.; Farhang, A.; Smoker, J.; Elyajouri, M.; Lallement, R.; Bacalla, X.; Bhatt, N. H.; Bron, E.; Cordiner, M. A.; de Koter, A.; Ehrenfreund, P.; Evans, C.; Foing, B. H.; Javadi, A.; Joblin, C.; Kaper, L.; Khosroshahi, H. G.; Laverick, M.; Le Petit, F.; Linnartz, H.; Marshall, C. C.; Monreal-Ibero, A.; Mulas, G.; Roueff, E.; Royer, P.; Salama, F.; Sarre, P. J.; Smith, K. T.; Spaans, M.; van Loon, J. T.; Wade, G.

    2018-03-01

    The ESO Diffuse Interstellar Band Large Exploration Survey (EDIBLES) is a Large Programme that is collecting high-signal-to-noise (S/N) spectra with UVES of a large sample of O and B-type stars covering a large spectral range. The goal of the programme is to extract a unique sample of high-quality interstellar spectra from these data, representing different physical and chemical environments, and to characterise these environments in great detail. An important component of interstellar spectra is the diffuse interstellar bands (DIBs), a set of hundreds of unidentified interstellar absorption lines. With the detailed line-of-sight information and the high-quality spectra, EDIBLES will derive strong constraints on the potential DIB carrier molecules. EDIBLES will thus guide the laboratory experiments necessary to identify these interstellar “mystery molecules”, and turn DIBs into powerful diagnostics of their environments in our Milky Way Galaxy and beyond. We present some preliminary results showing the unique capabilities of the EDIBLES programme.

  16. Dynamics of acoustically levitated disk samples.

    Science.gov (United States)

    Xie, W J; Wei, B

    2004-10-01

    The acoustic levitation force on disk samples and the dynamics of large water drops in a planar standing wave are studied by solving the acoustic scattering problem through incorporating the boundary element method. The dependence of levitation force amplitude on the equivalent radius R of disks deviates seriously from the R3 law predicted by King's theory, and a larger force can be obtained for thin disks. When the disk aspect ratio gamma is larger than a critical value gamma(*) (approximately 1.9) and the disk radius a is smaller than the critical value a(*)(gamma), the levitation force per unit volume of the sample will increase with the enlargement of the disk. The acoustic levitation force on thin-disk samples is also analyzed. The key to obtaining an acoustic field for stable levitation of a large water drop is to adjust the reflector-emitter interval H slightly above the resonant interval H(n). The simulation shows that the drop is flattened and the central parts of its top and bottom surface become concave with the increase of sound pressure level, which agrees with the experimental observation. The main frequencies of the shape oscillation under different sound pressures are slightly larger than the Rayleigh frequency because of the large shape deformation. The simulated translational frequencies of the vertical vibration under normal gravity condition agree with the theoretical analysis.

  17. Sample contamination with NMP-oxidation products and byproduct-free NMP removal from sample solutions

    Energy Technology Data Exchange (ETDEWEB)

    Cesar Berrueco; Patricia Alvarez; Silvia Venditti; Trevor J. Morgan; Alan A. Herod; Marcos Millan; Rafael Kandiyoti [Imperial College London, London (United Kingdom). Department of Chemical Engineering

    2009-05-15

    1-Methyl-2-pyrrolidinone (NMP) is widely used as a solvent for coal-derived products and as eluent in size exclusion chromatography. It was observed that sample contamination may take place, through reactions of NMP, during extraction under refluxing conditions and during the process of NMP evaporation to concentrate or isolate samples. In this work, product distributions from experiments carried out in contact with air and under a blanket of oxygen-free nitrogen have been compared. Gas chromatography/mass spectrometry (GC-MS) clearly shows that oxidation products form when NMP is heated in the presence of air. Upon further heating, these oxidation products appear to polymerize, forming material with large molecular masses. Potentially severe levels of interference have been encountered in the size exclusion chromatography (SEC) of actual samples. Laser desorption mass spectrometry and SEC agree in showing an upper mass limit of nearly 7000 u for a residue left after distilling 'pure' NMP in contact with air. Furthermore, experiments have shown that these effects could be completely avoided by a strict exclusion of air during the refluxing and evaporation of NMP to dryness. 45 refs., 13 figs.

  18. Validation of the MOS Social Support Survey 6-item (MOS-SSS-6) measure with two large population-based samples of Australian women.

    Science.gov (United States)

    Holden, Libby; Lee, Christina; Hockey, Richard; Ware, Robert S; Dobson, Annette J

    2014-12-01

    This study aimed to validate a 6-item 1-factor global measure of social support developed from the Medical Outcomes Study Social Support Survey (MOS-SSS) for use in large epidemiological studies. Data were obtained from two large population-based samples of participants in the Australian Longitudinal Study on Women's Health. The two cohorts were aged 53-58 and 28-33 years at data collection (N = 10,616 and 8,977, respectively). Items selected for the 6-item 1-factor measure were derived from the factor structure obtained from unpublished work using an earlier wave of data from one of these cohorts. Descriptive statistics, including polychoric correlations, were used to describe the abbreviated scale. Cronbach's alpha was used to assess internal consistency and confirmatory factor analysis to assess scale validity. Concurrent validity was assessed using correlations between the new 6-item version and established 19-item version, and other concurrent variables. In both cohorts, the new 6-item 1-factor measure showed strong internal consistency and scale reliability. It had excellent goodness-of-fit indices, similar to those of the established 19-item measure. Both versions correlated similarly with concurrent measures. The 6-item 1-factor MOS-SSS measures global functional social support with fewer items than the established 19-item measure.

  19. Sample Selection for Training Cascade Detectors.

    Science.gov (United States)

    Vállez, Noelia; Deniz, Oscar; Bueno, Gloria

    2015-01-01

    Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. Training datasets are such that the positive set has few samples and/or the negative set should represent anything except the object of interest. In this respect, the negative set typically contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our attention on a negative sample selection method to properly balance the training data for cascade detectors. The method is based on the selection of the most informative false positive samples generated in one stage to feed the next stage. The results show that the proposed cascade detector with sample selection obtains on average better partial AUC and smaller standard deviation than the other compared cascade detectors.
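
    The stage-wise negative selection described above is essentially hard-negative mining: after each stage is trained, only the negatives that the cascade so far still (wrongly) accepts are kept to train the next stage. A minimal sketch with a placeholder linear scorer standing in for the paper's detector; all data and names are illustrative.

```python
import numpy as np

def train_stage(pos, neg):
    """Placeholder stage trainer: a thresholded linear scorer, not the paper's detector."""
    w = pos.mean(axis=0) - neg.mean(axis=0)       # toy discriminant direction
    thr = np.quantile(pos @ w, 0.01)              # keep ~99% of positives
    return lambda x: x @ w >= thr

def build_cascade(pos, neg_pool, n_stages=5, per_stage=1_000):
    stages = []
    neg = neg_pool[:per_stage]                    # initial negative set
    for _ in range(n_stages):
        stages.append(train_stage(pos, neg))
        mask = np.ones(len(neg_pool), dtype=bool) # negatives still accepted by
        for s in stages:                          # the whole cascade so far...
            mask &= s(neg_pool)
        hard = neg_pool[mask]                     # ...are its false positives
        if len(hard) == 0:
            break                                 # nothing informative is left
        neg = hard[:per_stage]                    # balanced, most informative set
    return stages

rng = np.random.default_rng(1)
pos = rng.normal(1.0, 1.0, size=(500, 16))          # scarce positives
neg_pool = rng.normal(0.0, 1.0, size=(50_000, 16))  # negatives dominate
print(len(build_cascade(pos, neg_pool)), "stages trained")
```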

  20. Sleep habits, insomnia, and daytime sleepiness in a large and healthy community-based sample of New Zealanders.

    Science.gov (United States)

    Wilsmore, Bradley R; Grunstein, Ronald R; Fransen, Marlene; Woodward, Mark; Norton, Robyn; Ameratunga, Shanthi

    2013-06-15

    To determine the relationship between sleep complaints, primary insomnia, excessive daytime sleepiness, and lifestyle factors in a large community-based sample. Cross-sectional study. Blood donor sites in New Zealand. 22,389 individuals aged 16-84 years volunteering to donate blood. N/A. A comprehensive self-administered questionnaire including personal demographics and validated questions assessing sleep disorders (snoring, apnea), sleep complaints (sleep quantity, sleep dissatisfaction), insomnia symptoms, excessive daytime sleepiness, mood, and lifestyle factors such as work patterns, smoking, alcohol, and illicit substance use. Additionally, direct measurements of height and weight were obtained. One in three participants reported sleep complaints. Excessive daytime sleepiness in this healthy sample was associated with insomnia (odds ratio [OR] 1.75, 95% confidence interval [CI] 1.50 to 2.05), depression (OR 2.01, CI 1.74 to 2.32), and sleep disordered breathing (OR 1.92, CI 1.59 to 2.32). Long work hours, alcohol dependence, and rotating work shifts also increase the risk of daytime sleepiness. Even in this relatively young, healthy, non-clinical sample, sleep complaints and primary insomnia with subsequent excess daytime sleepiness were common. There were clear associations between many personal and lifestyle factors-such as depression, long work hours, alcohol dependence, and rotating shift work-and sleep problems or excessive daytime sleepiness.

  1. Next generation sensing platforms for extended deployments in large-scale, multidisciplinary, adaptive sampling and observational networks

    Science.gov (United States)

    Cross, J. N.; Meinig, C.; Mordy, C. W.; Lawrence-Slavas, N.; Cokelet, E. D.; Jenkins, R.; Tabisola, H. M.; Stabeno, P. J.

    2016-12-01

    New autonomous sensors have dramatically increased the resolution and accuracy of oceanographic data collection, enabling rapid sampling over extremely fine scales. Innovative new autonomous platforms like floats, gliders, drones, and crawling moorings leverage the full potential of these new sensors by extending spatiotemporal reach across varied environments. During 2015 and 2016, the Innovative Technology for Arctic Exploration Program at the Pacific Marine Environmental Laboratory tested several new types of fully autonomous platforms with increased speed, durability, and power and payload capacity, designed to deliver cutting-edge ecosystem assessment sensors to remote or inaccessible environments. The Expendable Ice-Tracking (EXIT) float developed by the NOAA Pacific Marine Environmental Laboratory (PMEL) is moored near bottom during the ice-free season and released on an autonomous timer beneath the ice during the following winter. The float collects a rapid profile during ascent, and continues to collect critical, poorly-accessible under-ice data until melt, when data are transmitted via satellite. The autonomous Oculus sub-surface glider developed by the University of Washington and PMEL has a large power and payload capacity and an enhanced buoyancy engine. This 'coastal truck' is designed for the rapid water-column ascent required by optical imaging systems. The Saildrone is a solar- and wind-powered unmanned surface vessel (USV) developed by Saildrone, Inc. in partnership with PMEL. This large-payload (200 lbs), fast (1-7 kts), durable (46 kts winds) platform was equipped with 15 sensors designed for ecosystem assessment during 2016, including passive and active acoustic systems specially redesigned for autonomous vehicle deployments. The sensors deployed on these platforms achieved rigorous accuracy and precision standards. These innovative platforms provide new sampling capabilities and cost efficiencies in high-resolution sensor deployment.

  2. Comparison of Two Methods for Estimating the Sampling-Related Uncertainty of Satellite Rainfall Averages Based on a Large Radar Data Set

    Science.gov (United States)

    Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.

    2002-01-01

    The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.
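
    The non-parametric variant of such an estimate is simple to illustrate: treat a continuous rain record as truth, average only the snapshots a satellite with revisit interval dt would see, and measure the spread over all phase offsets. The synthetic series below is an assumption for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(2)
hours = 30 * 24                                    # a 30-day accumulation
rain = rng.gamma(0.1, 2.0, size=hours)             # intermittent rainfall, mm/h

true_mean = rain.mean()
for dt in (1, 3, 6, 8, 12):                        # sampling intervals in hours
    # every possible phase offset of the satellite revisit schedule
    estimates = np.array([rain[off::dt].mean() for off in range(dt)])
    rel_rms = np.sqrt(np.mean((estimates - true_mean) ** 2)) / true_mean
    print(f"dt = {dt:2d} h: relative sampling uncertainty {rel_rms:.1%}")
```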

  3. Likelihood inference of non-constant diversification rates with incomplete taxon sampling.

    Science.gov (United States)

    Höhna, Sebastian

    2014-01-01

    Large-scale phylogenies provide a valuable source to study background diversification rates and investigate if the rates have changed over time. Unfortunately most large-scale, dated phylogenies are sparsely sampled (fewer than 5% of the described species) and taxon sampling is not uniform. Instead, taxa are frequently sampled to obtain at least one representative per subgroup (e.g. family) and thus to maximize diversity (diversified sampling). So far, such complications have been ignored, potentially biasing the conclusions that have been reached. In this study I derive the likelihood of a birth-death process with non-constant (time-dependent) diversification rates and diversified taxon sampling. Using simulations I test if the true parameters and the sampling method can be recovered when the trees are small or medium sized (fewer than 200 taxa). The results show that the diversification rates can be inferred and the estimates are unbiased for large trees but are biased for small trees (fewer than 50 taxa). Furthermore, model selection by means of Akaike's Information Criterion favors the true model if the true rates differ sufficiently from alternative models (e.g. the birth-death model is recovered if the extinction rate is large and compared to a pure-birth model). Finally, I applied six different diversification rate models--ranging from a constant-rate pure birth process to a decreasing speciation rate birth-death process but excluding any rate shift models--on three large-scale empirical phylogenies (ants, mammals and snakes with respectively 149, 164 and 41 sampled species). All three phylogenies were constructed by diversified taxon sampling, as stated by the authors. However only the snake phylogeny supported diversified taxon sampling. Moreover, a parametric bootstrap test revealed that none of the tested models provided a good fit to the observed data. The model assumptions, such as homogeneous rates across species or no rate shifts, appear to be
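
    The AIC step in the abstract follows the standard recipe, AIC = 2k - 2 ln L, minimized over candidate models. The sketch below shows only those mechanics on generic waiting-time data with off-the-shelf likelihoods; it does not reproduce the paper's birth-death likelihood with diversified sampling.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
waits = rng.gamma(2.0, 1.5, size=200)       # toy inter-event waiting times

# Model 1: constant rate (exponential), k = 1 free parameter
lam = 1.0 / waits.mean()                    # maximum-likelihood rate
ll_exp = stats.expon.logpdf(waits, scale=1.0 / lam).sum()
aic_exp = 2 * 1 - 2 * ll_exp

# Model 2: a more flexible alternative (gamma), k = 2 free parameters
a, _, scale = stats.gamma.fit(waits, floc=0)
ll_gam = stats.gamma.logpdf(waits, a, loc=0, scale=scale).sum()
aic_gam = 2 * 2 - 2 * ll_gam

print(f"AIC exponential = {aic_exp:.1f}, AIC gamma = {aic_gam:.1f}")
print("favored model:", "gamma" if aic_gam < aic_exp else "exponential")
```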

  4. Examining gray matter structures associated with individual differences in global life satisfaction in a large sample of young adults

    Science.gov (United States)

    Kong, Feng; Ding, Ke; Yang, Zetian; Dang, Xiaobin; Hu, Siyuan; Song, Yiying

    2015-01-01

    Although much attention has been directed towards life satisfaction that refers to an individual’s general cognitive evaluations of his or her life as a whole, little is known about the neural basis underlying global life satisfaction. In this study, we used voxel-based morphometry to investigate the structural neural correlates of life satisfaction in a large sample of young healthy adults (n = 299). We showed that individuals’ life satisfaction was positively correlated with the regional gray matter volume (rGMV) in the right parahippocampal gyrus (PHG), and negatively correlated with the rGMV in the left precuneus and left ventromedial prefrontal cortex. This pattern of results remained significant even after controlling for the effect of general positive and negative affect, suggesting unique structural correlates of life satisfaction. Furthermore, we found that self-esteem partially mediated the association between the PHG volume and life satisfaction as well as that between the precuneus volume and global life satisfaction. Taken together, we provide the first evidence for the structural neural basis of life satisfaction, and highlight that self-esteem might play a crucial role in cultivating an individual’s life satisfaction. PMID:25406366

  5. Neutron activation analysis of archaeological artifacts using the conventional relative method: a realistic approach for analysis of large samples

    International Nuclear Information System (INIS)

    Bedregal, P.S.; Mendoza, A.; Montoya, E.H.; Cohen, I.M.; Universidad Tecnologica Nacional, Buenos Aires; Oscar Baltuano

    2012-01-01

    A new approach for analysis of entire potsherds of archaeological interest by INAA, using the conventional relative method, is described. The proposed analytical method involves, primarily, the preparation of replicates of the original archaeological pottery with well-known chemical composition (the standard), to be irradiated simultaneously with the original object (the sample) in a well-thermalized external neutron beam of the RP-10 reactor. The basic advantage of this proposal is to avoid the need of performing complicated effect corrections when dealing with large samples, due to neutron self-shielding, neutron self-thermalization and gamma-ray attenuation. In addition, and in contrast with the other methods, the main advantages are the possibility of evaluating the uncertainty of the results and, fundamentally, validating the overall methodology. (author)
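
    The conventional relative method underlying this proposal reduces to a ratio: if sample and standard are irradiated and counted under identical conditions, the unknown mass follows from the decay-corrected peak count rates. A minimal sketch with illustrative numbers; the decay-correction detail and all values are assumptions of this toy, not the authors' procedure.

```python
import math

def relative_naa_mass(m_std, counts_sample, counts_std,
                      t_decay_sample, t_decay_std, half_life):
    """m_x = m_std * (A_x / A_std), counts corrected back to end of irradiation."""
    lam = math.log(2.0) / half_life
    a_x = counts_sample * math.exp(lam * t_decay_sample)
    a_std = counts_std * math.exp(lam * t_decay_std)
    return m_std * a_x / a_std

# e.g. a Na-24 gamma line (half-life ~15 h), with the sample counted 24 h and
# the standard 26 h after the common irradiation:
m_x = relative_naa_mass(m_std=0.050, counts_sample=12_400, counts_std=9_800,
                        t_decay_sample=24.0, t_decay_std=26.0, half_life=15.0)
print(f"estimated element mass in sample: {m_x:.4f} g")
```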

  6. Analysis of the research sample collections of Uppsala biobank.

    Science.gov (United States)

    Engelmark, Malin T; Beskow, Anna H

    2014-10-01

    Uppsala Biobank is the joint and only biobank organization of the two principals, Uppsala University and Uppsala University Hospital. Biobanks are required to have updated registries on sample collection composition and management in order to fulfill legal regulations. We report here the results from the first comprehensive and overall analysis of the 131 research sample collections organized in the biobank. The results show that the median of the number of samples in the collections was 700 and that the number of samples varied from less than 500 to over one million. Blood samples, such as whole blood, serum, and plasma, were included in the vast majority, 84.0%, of the research sample collections. Also, as much as 95.5% of the newly collected samples within healthcare included blood samples, which further supports the concept that blood samples have fundamental importance for medical research. Tissue samples were also commonly used and occurred in 39.7% of the research sample collections, often combined with other types of samples. In total, 96.9% of the 131 sample collections included samples collected for healthcare, showing the importance of healthcare as a research infrastructure. Of the collections that had accessed existing samples from healthcare, as much as 96.3% included tissue samples from the Department of Pathology, which shows the importance of pathology samples as a resource for medical research. Analysis of different research areas shows that the most common of known public health diseases are covered. Collections that had generated the most publications, up to over 300, contained a large number of samples collected systematically and repeatedly over many years. More knowledge about existing biobank materials, together with public registries on sample collections, will support research collaborations, improve transparency, and bring us closer to the goals of biobanks, which is to save and prolong human lives and improve health and quality of life.

  7. Personality traits and eating habits in a large sample of Estonians.

    Science.gov (United States)

    Mõttus, René; Realo, Anu; Allik, Jüri; Deary, Ian J; Esko, Tõnu; Metspalu, Andres

    2012-11-01

    Diet has health consequences, which makes knowing the psychological correlates of dietary habits important. Associations between dietary habits and personality traits were examined in a large sample of Estonians (N = 1,691) aged between 18 and 89 years. Dietary habits were measured using 11 items, which grouped into two factors reflecting (a) health-aware and (b) traditional dietary patterns. The health-aware diet factor was defined by eating more cereal and dairy products, fish, vegetables and fruits. The traditional diet factor was defined by eating more potatoes, meat and meat products, and bread. Personality was assessed by participants themselves and by people who knew them well. The questionnaire used was the NEO Personality Inventory-3, which measures the Five-Factor Model personality broad traits of Neuroticism, Extraversion, Openness, Agreeableness, and Conscientiousness, along with six facets for each trait. Gender, age and educational level were controlled for. Higher scores on the health-aware diet factor were associated with lower Neuroticism, and higher Extraversion, Openness and Conscientiousness (effect sizes were modest: r = 0.11 to 0.17 in self-ratings, and r = 0.08 to 0.11 in informant-ratings, ps < 0.01 or lower). Higher scores on the traditional diet factor were related to lower levels of Openness (r = -0.14 and -0.13, p < 0.001, self- and informant-ratings, respectively). Endorsement of healthy and avoidance of traditional dietary items are associated with people's personality trait levels, especially higher Openness. The results may inform dietary interventions with respect to possible barriers to diet change.

  8. Diversity in the stellar velocity dispersion profiles of a large sample of brightest cluster galaxies z ≤ 0.3

    Science.gov (United States)

    Loubser, S. I.; Hoekstra, H.; Babul, A.; O'Sullivan, E.

    2018-06-01

    We analyse spatially resolved deep optical spectroscopy of brightest cluster galaxies (BCGs) located in 32 massive clusters with redshifts of 0.05 ≤ z ≤ 0.30 to investigate their velocity dispersion profiles. We compare these measurements to those of other massive early-type galaxies, as well as central group galaxies, where relevant. This unique, large sample extends to the most extreme of massive galaxies, spanning MK between -25.7 and -27.8 mag, and host cluster halo mass M500 up to 1.7 × 1015 M⊙. To compare the kinematic properties between brightest group and cluster members, we analyse similar spatially resolved long-slit spectroscopy for 23 nearby brightest group galaxies (BGGs) from the Complete Local-Volume Groups Sample. We find a surprisingly large variety in velocity dispersion slopes for BCGs, with a significantly larger fraction of positive slopes, unique compared to other (non-central) early-type galaxies as well as the majority of the brightest members of the groups. We find that the velocity dispersion slopes of the BCGs and BGGs correlate with the luminosity of the galaxies, and we quantify this correlation. It is not clear whether the full diversity in velocity dispersion slopes that we see is reproduced in simulations.

  9. Gravimetric dust sampling for control purposes and occupational dust sampling.

    CSIR Research Space (South Africa)

    Unsted, AD

    1997-02-01

    Full Text Available Prior to the introduction of gravimetric dust sampling, konimeters had been used for dust sampling, which was largely for control purposes. Whether or not absolute results were achievable was not an issue since relative results were used to evaluate...

  10. Self-Esteem Development across the Life Span: A Longitudinal Study with a Large Sample from Germany

    Science.gov (United States)

    Orth, Ulrich; Maes, Jürgen; Schmitt, Manfred

    2015-01-01

    The authors examined the development of self-esteem across the life span. Data came from a German longitudinal study with 3 assessments across 4 years of a sample of 2,509 individuals ages 14 to 89 years. The self-esteem measure used showed strong measurement invariance across assessments and birth cohorts. Latent growth curve analyses indicated…

  11. Effect of NaOH on large-volume sample stacking of haloacetic acids in capillary zone electrophoresis with a low-pH buffer.

    Science.gov (United States)

    Tu, Chuanhong; Zhu, Lingyan; Ang, Chay Hoon; Lee, Hian Kee

    2003-06-01

    Large-volume sample stacking (LVSS) is an effective on-capillary sample concentration method in capillary zone electrophoresis, which can be applied to the sample in a low-conductivity matrix. NaOH solution is commonly used to back-extract acidic compounds from organic solvent in sample pretreatment. The effect of NaOH as sample matrix on LVSS of haloacetic acids was investigated in this study. It was found that the presence of NaOH in sample did not compromise, but rather help the sample stacking performance if a low pH background electrolyte (BGE) was used. The sensitivity enhancement factor was higher than the case when sample was dissolved in pure water or diluted BGE. Compared with conventional injection (0.4% capillary volume), 97-120-fold sensitivity enhancement in terms of peak height was obtained without deterioration of separation with an injection amount equal to 20% of the capillary volume. This method was applied to determine haloacetic acids in tap water by combination with liquid-liquid extraction and back-extraction into NaOH solution. Limits of detection at sub-ppb levels were obtained for real samples with direct UV detection.

  12. Predicting sample lifetimes in creep fracture of heterogeneous materials

    Science.gov (United States)

    Koivisto, Juha; Ovaska, Markus; Miksic, Amandine; Laurson, Lasse; Alava, Mikko J.

    2016-08-01

    Materials flow—under creep or constant loads—and, finally, fail. The prediction of sample lifetimes is an important and highly challenging problem because of the inherently heterogeneous nature of most materials that results in large sample-to-sample lifetime fluctuations, even under the same conditions. We study creep deformation of paper sheets as one heterogeneous material and thus show how to predict lifetimes of individual samples by exploiting the "universal" features in the sample-inherent creep curves, particularly the passage to an accelerating creep rate. Using simulations of a viscoelastic fiber bundle model, we illustrate how deformation localization controls the shape of the creep curve and thus the degree of lifetime predictability.

  13. Soil Characterization by Large Scale Sampling of Soil Mixed with Buried Construction Debris at a Former Uranium Fuel Fabrication Facility

    International Nuclear Information System (INIS)

    Nardi, A.J.; Lamantia, L.

    2009-01-01

    Recent soil excavation activities on a site identified the presence of buried uranium-contaminated building construction debris. The site was previously the location of a low-enriched uranium fuel fabrication facility. This resulted in the collection of excavated materials from the two locations where contaminated subsurface debris was identified. The excavated material was temporarily stored in two piles on the site until a determination could be made as to the appropriate disposition of the material. Characterization of the excavated material was undertaken in a manner that involved the collection of large scale samples of the excavated material in 1 cubic meter Super Sacks. Twenty bags were filled with excavated material that consisted of the mixture of both the construction debris and the associated soil. In order to obtain information on the level of activity associated with the construction debris, ten additional bags were filled with construction debris that had been separated, to the extent possible, from the associated soil. Radiological surveys were conducted of the resulting bags of collected materials and the soil associated with the waste mixture. The 30 large samples, collected as bags, were counted using an In-Situ Object Counting System (ISOCS) unit to determine the average concentration of U-235 present in each bag. The soil fraction was sampled by the collection of 40 samples of soil for analysis in an on-site laboratory. A fraction of these samples were also sent to an off-site laboratory for additional analysis. This project provided the necessary soil characterization information to allow consideration of alternate options for disposition of the material. The identified contaminant was verified to be low-enriched uranium. Concentrations of uranium in the waste were found to be lower than the calculated site-specific derived concentration guideline levels (DCGLs) but higher than the NRC's screening values. The methods and results are presented.

  14. Sample Selection for Training Cascade Detectors.

    Directory of Open Access Journals (Sweden)

    Noelia Vállez

    Full Text Available Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. Training datasets are such that the positive set has few samples and/or the negative set should represent anything except the object of interest. In this respect, the negative set typically contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our attention on a negative sample selection method to properly balance the training data for cascade detectors. The method is based on the selection of the most informative false positive samples generated in one stage to feed the next stage. The results show that the proposed cascade detector with sample selection obtains on average better partial AUC and smaller standard deviation than the other compared cascade detectors.

  15. Sampling based uncertainty analysis of 10% hot leg break LOCA in large scale test facility

    International Nuclear Information System (INIS)

    Sengupta, Samiran; Kraina, V.; Dubey, S. K.; Rao, R. S.; Gupta, S. K.

    2010-01-01

    Sampling based uncertainty analysis was carried out to quantify uncertainty in predictions of the best estimate code RELAP5/MOD3.2 for a thermal hydraulic test (10% hot leg break LOCA) performed in the Large Scale Test Facility (LSTF) as a part of an IAEA coordinated research project. The nodalisation of the test facility was qualified at both steady state and transient level by systematically applying the procedures led by uncertainty methodology based on accuracy extrapolation (UMAE); uncertainty analysis was carried out using the Latin hypercube sampling (LHS) method to evaluate uncertainty for ten input parameters. Sixteen output parameters were selected for uncertainty evaluation, and the uncertainty bands between the 5th and 95th percentiles of the output parameters were evaluated. It was observed that the uncertainty band for the primary pressure during two phase blowdown is larger than that of the remaining period. Similarly, a larger uncertainty band is observed relating to accumulator injection flow during the reflood phase. Importance analysis was also carried out and standardized rank regression coefficients were computed to quantify the effect of each individual input parameter on output parameters. It was observed that the break discharge coefficient is the most important uncertain parameter relating to the prediction of all the primary side parameters and that the steam generator (SG) relief pressure setting is the most important parameter in predicting the SG secondary pressure
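
    The LHS step itself is compact. A hedged sketch using scipy's quasi-Monte Carlo module, with placeholder parameter ranges and a stand-in function in place of the thermal-hydraulic code; the 59-run count echoes the common one-sided 95/95 Wilks criterion, not necessarily this study's choice.

```python
import numpy as np
from scipy.stats import qmc

n_runs, n_params = 59, 10                 # 59 runs meets one-sided 95/95 Wilks
sampler = qmc.LatinHypercube(d=n_params, seed=4)
unit = sampler.random(n=n_runs)           # stratified samples in [0, 1)^10

lower = np.full(n_params, 0.8)            # e.g. multipliers on nominal values
upper = np.full(n_params, 1.2)
inputs = qmc.scale(unit, lower, upper)    # map to the physical parameter ranges

def model(x):                             # placeholder for one code run
    return 15.0 + 3.0 * x[0] - 2.0 * x[3] + 0.5 * x.sum()

outputs = np.array([model(x) for x in inputs])
print("5th-95th percentile band:",
      np.percentile(outputs, 5), "-", np.percentile(outputs, 95))
```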

  16. Large area strain analysis using scanning transmission electron microscopy across multiple images

    International Nuclear Information System (INIS)

    Oni, A. A.; Sang, X.; LeBeau, J. M.; Raju, S. V.; Saxena, S.; Dumpala, S.; Broderick, S.; Rajan, K.; Kumar, A.; Sinnott, S.

    2015-01-01

    Here, we apply revolving scanning transmission electron microscopy to measure lattice strain across a sample using a single reference area. To do so, we remove image distortion introduced by sample drift, which usually restricts strain analysis to a single image. Overcoming this challenge, we show that it is possible to use strain reference areas elsewhere in the sample, thereby enabling reliable strain mapping across large areas. As a prototypical example, we determine the strain present within the microstructure of a Ni-based superalloy directly from atom column positions as well as geometric phase analysis. While maintaining atomic resolution, we quantify strain within nanoscale regions and demonstrate that large, unit-cell level strain fluctuations are present within the intermetallic phase

  17. Maternal bereavement and childhood asthma-analyses in two large samples of Swedish children.

    Directory of Open Access Journals (Sweden)

    Fang Fang

    Full Text Available Prenatal factors such as prenatal psychological stress might influence the development of childhood asthma. We assessed the association between maternal bereavement shortly before and during pregnancy, as a proxy for prenatal stress, and the risk of childhood asthma in the offspring, based on two samples of children aged 1-4 (n = 426,334) and 7-12 (n = 493,813) years assembled from the Swedish Medical Birth Register. Exposure was maternal bereavement of a close relative from one year before pregnancy to childbirth. Asthma event was defined by a hospital contact for asthma or at least two dispenses of inhaled corticosteroids or montelukast. In the younger sample we calculated hazard ratios (HRs) of a first-ever asthma event using Cox models and in the older sample odds ratios (ORs) of an asthma attack during 12 months using logistic regression. Compared to unexposed boys, exposed boys seemed to have a weakly higher risk of a first-ever asthma event at 1-4 years (HR: 1.09; 95% confidence interval [CI]: 0.98, 1.22) as well as an asthma attack during 12 months at 7-12 years (OR: 1.10; 95% CI: 0.96, 1.24). No association was suggested for girls. Boys exposed during the second trimester had a significantly higher risk of an asthma event at 1-4 years (HR: 1.55; 95% CI: 1.19, 2.02) and an asthma attack at 7-12 years if the lost relative was an older child (OR: 1.58; 95% CI: 1.11, 2.25). The associations tended to be stronger if the bereavement was due to a traumatic death compared to natural death, but the difference was not statistically significant. Our results showed some evidence for a positive association between prenatal stress and childhood asthma among boys but not girls.
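
    The two estimators named above (Cox models for time to the first-ever event, logistic regression for 12-month attack odds) can be sketched on simulated data. Column names, effect sizes, and the cohort below are assumptions of this toy; lifelines and statsmodels supply the fitters.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n = 10_000
df = pd.DataFrame({
    "exposed": rng.integers(0, 2, size=n),       # bereavement-exposure proxy
    "boy": rng.integers(0, 2, size=n),
})
# simulate follow-up with a slightly higher hazard for the exposed group
hazard = 0.05 * np.exp(0.09 * df["exposed"])
df["time"] = rng.exponential(1.0 / hazard)
df["event"] = (df["time"] < 4.0).astype(int)     # first event within 4 years
df["time"] = df["time"].clip(upper=4.0)          # censor at end of follow-up

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary["exp(coef)"])                  # hazard ratios

# 12-month attack odds in the older sample: logistic regression odds ratios
y = rng.binomial(1, 0.10 + 0.01 * df["exposed"])
X = sm.add_constant(df[["exposed", "boy"]])
print(np.exp(sm.Logit(y, X).fit(disp=0).params))
```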

  18. Global climate change: Mitigation opportunities high efficiency large chiller technology

    Energy Technology Data Exchange (ETDEWEB)

    Stanga, M.V.

    1997-12-31

    This paper, consisting of presentation viewgraphs, examines the impact of high-efficiency large chiller technology on world electricity consumption and carbon dioxide emissions. Background data are summarized, and sample calculations are presented. Calculations show that presently available high-efficiency chiller technology has the ability to substantially reduce energy consumption from large chillers. If this technology is widely implemented on a global basis, it could reduce carbon dioxide emissions by 65 million tons by 2010.

  19. Why large cells dominate estuarine phytoplankton

    Science.gov (United States)

    Cloern, James E.

    2018-01-01

    Surveys across the world oceans have shown that phytoplankton biomass and production are dominated by small cells (picoplankton) where nutrient concentrations are low, but large cells (microplankton) dominate when nutrient-rich deep water is mixed to the surface. I analyzed phytoplankton size structure in samples collected over 25 yr in San Francisco Bay, a nutrient-rich estuary. Biomass was dominated by large cells because their biomass selectively grew during blooms. Large-cell dominance appears to be a characteristic of ecosystems at the land–sea interface, and these places may therefore function as analogs to oceanic upwelling systems. Simulations with a size-structured NPZ model showed that runs of positive net growth rate persisted long enough for biomass of large, but not small, cells to accumulate. Model experiments showed that small cells would dominate in the absence of grazing, at lower nutrient concentrations, and at elevated (+5°C) temperatures. Underlying these results are two fundamental scaling laws: (1) large cells are grazed more slowly than small cells, and (2) grazing rate increases with temperature faster than growth rate. The model experiments suggest testable hypotheses about phytoplankton size structure at the land–sea interface: (1) anthropogenic nutrient enrichment increases cell size; (2) this response varies with temperature and only occurs at mid-high latitudes; (3) large-cell blooms can only develop when temperature is below a critical value, around 15°C; (4) cell size diminishes along temperature gradients from high to low latitudes; and (5) large-cell blooms will diminish or disappear where planetary warming increases temperature beyond their critical threshold.
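
    The two scaling laws can be made concrete with a stripped-down, two-size-class version of such a model; all rates below are illustrative assumptions, not the paper's calibration. Large cells are grazed more slowly (law 1), and grazing responds to temperature more strongly than growth (law 2), so warming flips the small-cell class into net loss first.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rates(t, P, nutrient, temp):
    small, large = P
    mu = 1.2 * nutrient / (nutrient + 1.0)        # shared nutrient-limited growth, 1/d
    q10_growth, q10_graze = 1.9, 2.8              # grazing rises faster with T (law 2)
    mu_t = mu * q10_growth ** ((temp - 15) / 10)
    g = 0.9 * q10_graze ** ((temp - 15) / 10)     # grazing-rate scale
    return [small * (mu_t - g * 1.0),             # small cells grazed at full rate
            large * (mu_t - g * 0.4)]             # large cells grazed more slowly (law 1)

for temp in (10.0, 20.0):
    sol = solve_ivp(rates, (0, 30), [0.1, 0.1], args=(5.0, temp))
    small, large = sol.y[:, -1]
    print(f"T = {temp:4.1f} C: 30-day biomass, small = {small:9.3g}, large = {large:9.3g}")
```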

  20. Applications of Asymptotic Sampling on High Dimensional Structural Dynamic Problems

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Bucher, Christian

    2011-01-01

    The paper presents the application of asymptotic sampling to various structural models subjected to random excitations. A detailed study on the effect of different distributions of the so-called support points is performed. This study shows that the distribution of the support points has a considerable effect on the accuracy of the estimates, and that the support points should be chosen such that this error is minimized. Next, the method is applied to different cases of linear and nonlinear systems with a large number of random variables representing the dynamic excitation. The results show that asymptotic sampling is capable of providing good approximations of low failure probability events for very high dimensional reliability problems in structural dynamics.
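
    For readers unfamiliar with the method, the core loop can be sketched briefly: inflate all standard deviations by 1/f (f < 1) so failures become observable with few samples, estimate the reliability index beta(f) at a few support points, fit the scaling form beta(f) = A*f + B/f used in Bucher's formulation, and extrapolate to f = 1. The limit state below is a toy chosen so the exact answer is known; all parameter choices are assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
n_dim, n_mc = 100, 20_000
limit = 35.0                                   # fail when sum(x) > limit

fs = np.array([0.4, 0.5, 0.6, 0.7])            # support points
betas = []
for f in fs:
    x = rng.normal(0.0, 1.0 / f, size=(n_mc, n_dim))  # inflated std devs
    pf = np.mean(x.sum(axis=1) > limit)        # failures now frequent enough
    betas.append(-norm.ppf(pf))                # reliability index of scaled problem

# least-squares fit of beta(f) = A*f + B/f, then extrapolate to f = 1
M = np.column_stack([fs, 1.0 / fs])
A, B = np.linalg.lstsq(M, np.array(betas), rcond=None)[0]
print(f"estimated beta(1) = {A + B:.2f}  (exact: {limit / np.sqrt(n_dim):.2f})")
```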

  1. Continuous sampling from distributed streams

    DEFF Research Database (Denmark)

    Graham, Cormode; Muthukrishnan, S.; Yi, Ke

    2012-01-01

    A fundamental problem in data management is to draw and maintain a sample of a large data set, for approximate query answering, selectivity estimation, and query planning. With large, streaming data sets, this problem becomes particularly difficult when the data is shared across multiple distributed sites. The main challenge is to ensure that a sample is drawn uniformly across the union of the data while minimizing the communication needed to run the protocol on the evolving data. At the same time, it is also necessary to make the protocol lightweight, by keeping the space and time costs low for each participant. In this article, we present communication-efficient protocols for continuously maintaining a sample (both with and without replacement) from k distributed streams. These apply to the case when we want a sample from the full streams, and to the sliding window cases of only the W most recent elements.
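
    The single-site building block of such protocols is classical reservoir sampling, which keeps a uniform without-replacement sample of a stream in O(k) space. The article's actual contribution, the communication-efficient coordination across distributed sites, is not reproduced in this sketch.

```python
import random

def reservoir_sample(stream, k, seed=7):
    """Uniform k-sample from a stream of unknown length (Vitter's Algorithm R)."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)         # fill the reservoir first
        else:
            j = rng.randrange(i + 1)       # item survives with probability k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

print(reservoir_sample(range(1_000_000), k=10))
```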

  2. Risk Aversion in Game Shows

    DEFF Research Database (Denmark)

    Andersen, Steffen; Harrison, Glenn W.; Lau, Morten I.

    2008-01-01

    We review the use of behavior from television game shows to infer risk attitudes. These shows provide evidence when contestants are making decisions over very large stakes, and in a replicated, structured way. Inferences are generally confounded by the subjective assessment of skill in some games, and the dynamic nature of the task in most games. We consider the game shows Card Sharks, Jeopardy!, Lingo, and finally Deal Or No Deal. We provide a detailed case study of the analyses of Deal Or No Deal, since it is suitable for inference about risk attitudes and has attracted considerable attention.

  3. A large-scale investigation of the quality of groundwater in six major districts of Central India during the 2010-2011 sampling campaign.

    Science.gov (United States)

    Khare, Peeyush

    2017-09-01

    This paper investigates the groundwater quality in six major districts of Madhya Pradesh in central India, namely, Balaghat, Chhindwara, Dhar, Jhabua, Mandla, and Seoni during the 2010-2011 sampling campaign, and discusses improvements made in the supplied water quality between the years 2011 and 2017. Groundwater is the main source of water for a combined rural population of over 7 million in these districts. Its contamination could have a huge impact on public health. We analyzed the data collected from a large-scale water sampling campaign carried out by the Public Health Engineering Department (PHED), Government of Madhya Pradesh between 2010 and 2011 during which all rural tube wells and dug wells were sampled in these six districts. Eight hundred thirty-one dug wells and 47,606 tube wells were sampled in total and were analyzed for turbidity, hardness, iron, nitrate, fluoride, chloride, and sulfate ion concentrations. Our study found water in 21 out of the 228 dug wells in Chhindwara district unfit for drinking due to fluoride contamination while all dug wells in Balaghat had fluoride within the permissible limit. Twenty-six of the 56 dug wells and 4825 of the 9390 tube wells in Dhar district exceeded the permissible limit for nitrate while 100% dug wells in Balaghat, Seoni, and Chhindwara had low levels of nitrate. Twenty-four of the 228 dug wells and 1669 of 6790 tube wells in Chhindwara had high iron concentration. The median pH value in both dug wells and tube wells varied between 6 and 8 in all six districts. Still, a significant number of tube wells exceeded a pH of 8.5 especially in Mandla and Seoni districts. In conclusion, this study shows that parts of inhabited rural Madhya Pradesh were potentially exposed to contaminated subsurface water during 2010-2011. The analysis has been correlated with rural health survey results wherever available to estimate the visible impact. We next highlight that the quality of drinking water has enormously improved

  4. The significance of Sampling Design on Inference: An Analysis of Binary Outcome Model of Children’s Schooling Using Indonesian Large Multi-stage Sampling Data

    OpenAIRE

    Ekki Syamsulhakim

    2008-01-01

    This paper aims to exercise a rather recent trend in applied microeconometrics, namely the effect of sampling design on statistical inference, especially on binary outcome models. Much theoretical research in econometrics has shown the inappropriateness of applying i.i.d.-assumed statistical analysis to non-i.i.d. data. This research has provided proofs showing that applying the i.i.d.-assumed analysis to non-i.i.d. observations would result in inflated standard errors which could make the esti...
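
    In practice, the adjustment argued for here is often implemented as standard errors clustered on the primary sampling unit rather than assuming i.i.d. observations. A hedged sketch on simulated two-stage data; the names, effect sizes, and cluster structure are placeholders, not the paper's Indonesian survey.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(8)
n_clusters, per_cluster = 200, 30               # e.g. villages x households
cluster = np.repeat(np.arange(n_clusters), per_cluster)
u = rng.normal(0, 0.8, n_clusters)[cluster]     # shared within-cluster shock
income = rng.normal(0, 1, n_clusters * per_cluster)
p = 1 / (1 + np.exp(-(0.3 + 0.5 * income + u)))
enrolled = rng.binomial(1, p)                   # binary schooling outcome

X = sm.add_constant(pd.DataFrame({"income": income}))
naive = sm.Logit(enrolled, X).fit(disp=0)       # i.i.d.-assumed inference
clustered = sm.Logit(enrolled, X).fit(disp=0, cov_type="cluster",
                                      cov_kwds={"groups": cluster})
print("naive SE:    ", naive.bse.round(4).to_dict())
print("clustered SE:", clustered.bse.round(4).to_dict())
```

    On data like these, the clustered standard errors are typically noticeably larger than the naive ones, which is exactly the inflation the paper warns is hidden by i.i.d.-assumed analysis.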

  5. Examining gray matter structures associated with individual differences in global life satisfaction in a large sample of young adults.

    Science.gov (United States)

    Kong, Feng; Ding, Ke; Yang, Zetian; Dang, Xiaobin; Hu, Siyuan; Song, Yiying; Liu, Jia

    2015-07-01

    Although much attention has been directed towards life satisfaction that refers to an individual's general cognitive evaluations of his or her life as a whole, little is known about the neural basis underlying global life satisfaction. In this study, we used voxel-based morphometry to investigate the structural neural correlates of life satisfaction in a large sample of young healthy adults (n = 299). We showed that individuals' life satisfaction was positively correlated with the regional gray matter volume (rGMV) in the right parahippocampal gyrus (PHG), and negatively correlated with the rGMV in the left precuneus and left ventromedial prefrontal cortex. This pattern of results remained significant even after controlling for the effect of general positive and negative affect, suggesting unique structural correlates of life satisfaction. Furthermore, we found that self-esteem partially mediated the association between the PHG volume and life satisfaction as well as that between the precuneus volume and global life satisfaction. Taken together, we provide the first evidence for the structural neural basis of life satisfaction, and highlight that self-esteem might play a crucial role in cultivating an individual's life satisfaction. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  6. Likelihood inference of non-constant diversification rates with incomplete taxon sampling.

    Directory of Open Access Journals (Sweden)

    Sebastian Höhna

    Full Text Available Large-scale phylogenies provide a valuable source to study background diversification rates and investigate if the rates have changed over time. Unfortunately most large-scale, dated phylogenies are sparsely sampled (fewer than 5% of the described species) and taxon sampling is not uniform. Instead, taxa are frequently sampled to obtain at least one representative per subgroup (e.g. family) and thus to maximize diversity (diversified sampling). So far, such complications have been ignored, potentially biasing the conclusions that have been reached. In this study I derive the likelihood of a birth-death process with non-constant (time-dependent) diversification rates and diversified taxon sampling. Using simulations I test if the true parameters and the sampling method can be recovered when the trees are small or medium sized (fewer than 200 taxa). The results show that the diversification rates can be inferred and the estimates are unbiased for large trees but are biased for small trees (fewer than 50 taxa). Furthermore, model selection by means of Akaike's Information Criterion favors the true model if the true rates differ sufficiently from alternative models (e.g. the birth-death model is recovered if the extinction rate is large and compared to a pure-birth model). Finally, I applied six different diversification rate models--ranging from a constant-rate pure birth process to a decreasing speciation rate birth-death process but excluding any rate shift models--on three large-scale empirical phylogenies (ants, mammals and snakes with respectively 149, 164 and 41 sampled species). All three phylogenies were constructed by diversified taxon sampling, as stated by the authors. However only the snake phylogeny supported diversified taxon sampling. Moreover, a parametric bootstrap test revealed that none of the tested models provided a good fit to the observed data. The model assumptions, such as homogeneous rates across species or no rate shifts, appear

  7. Extreme Quantum Memory Advantage for Rare-Event Sampling

    Science.gov (United States)

    Aghamohammadi, Cina; Loomis, Samuel P.; Mahoney, John R.; Crutchfield, James P.

    2018-02-01

    We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r, for any large real number r. Then, for a sequence of processes each labeled by an integer size N, we compare how the classical and quantum required memories scale with N. In this setting, since both memories can diverge as N→∞, the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N→∞, but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit memory advantage for sampling of almost all of their rare-event classes.

  8. Extreme Quantum Memory Advantage for Rare-Event Sampling

    Directory of Open Access Journals (Sweden)

    Cina Aghamohammadi

    2018-02-01

    Full Text Available We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r, for any large real number r. Then, for a sequence of processes each labeled by an integer size N, we compare how the classical and quantum required memories scale with N. In this setting, since both memories can diverge as N→∞, the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N→∞, but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit memory advantage for sampling of almost all of their rare-event classes.

  9. Sampling pig farms at the abattoir in a cross-sectional study - Evaluation of a sampling method.

    Science.gov (United States)

    Birkegård, Anna Camilla; Halasa, Tariq; Toft, Nils

    2017-09-15

    A cross-sectional study design is relatively inexpensive, fast and easy to conduct when compared to other study designs. Careful planning is essential to obtaining a representative sample of the population, and the recommended approach is to use simple random sampling from an exhaustive list of units in the target population. This approach is rarely feasible in practice, and other sampling procedures must often be adopted. For example, when slaughter pigs are the target population, sampling the pigs on the slaughter line may be an alternative to on-site sampling at a list of farms. However, it is difficult to sample a large number of farms from an exact predefined list, due to the logistics and workflow of an abattoir. Therefore, it is necessary to have a systematic sampling procedure and to evaluate the obtained sample with respect to the study objective. We propose a method for 1) planning, 2) conducting, and 3) evaluating the representativeness and reproducibility of a cross-sectional study when simple random sampling is not possible. We used an example of a cross-sectional study with the aim of quantifying the association of antimicrobial resistance and antimicrobial consumption in Danish slaughter pigs. It was not possible to visit farms within the designated timeframe. Therefore, it was decided to use convenience sampling at the abattoir. Our approach was carried out in three steps: 1) planning: using data from meat inspection to plan at which abattoirs and how many farms to sample; 2) conducting: sampling was carried out at five abattoirs; 3) evaluation: representativeness was evaluated by comparing sampled and non-sampled farms, and the reproducibility of the study was assessed through simulated sampling based on meat inspection data from the period where the actual data collection was carried out. In the cross-sectional study samples were taken from 681 Danish pig farms, during five weeks from February to March 2015. The evaluation showed that the sampling

  10. The effect of sample preparation methods on glass performance

    International Nuclear Information System (INIS)

    Oh, M.S.; Oversby, V.M.

    1990-01-01

    A series of experiments was conducted using SRL 165 synthetic waste glass to investigate the effects of surface preparation and leaching solution composition on the alteration of the glass. Samples of glass with as-cast surfaces produced smooth reaction layers and some evidence for precipitation of secondary phases from solution. Secondary phases were more abundant in samples reacted in deionized water (DIW) than in those reacted in a silicate solution. Samples with saw-cut surfaces showed a large reduction in surface roughness after 7 days of reaction in either solution. Reaction in silicate solution for up to 91 days produced no further change in surface morphology, while reaction in DIW produced a spongy surface that formed the substrate for further surface layer development. The differences in the surface morphology of the samples may create microclimates that control the details of development of alteration layers on the glass; however, the concentrations of elements in leaching solutions show differences of 50% or less between samples prepared with different surface conditions for tests of a few months' duration. 6 refs., 7 figs., 1 tab

  11. Simulated tempering distributed replica sampling: A practical guide to enhanced conformational sampling

    Energy Technology Data Exchange (ETDEWEB)

    Rauscher, Sarah; Pomes, Regis, E-mail: pomes@sickkids.ca

    2010-11-01

    Simulated tempering distributed replica sampling (STDR) is a generalized-ensemble method designed specifically for simulations of large molecular systems on shared and heterogeneous computing platforms [Rauscher, Neale and Pomes (2009) J. Chem. Theor. Comput. 5, 2640]. The STDR algorithm consists of an alternation of two steps: (1) a short molecular dynamics (MD) simulation; and (2) a stochastic temperature jump. Repeating these steps thousands of times results in a random walk in temperature, which allows the system to overcome energetic barriers, thereby enhancing conformational sampling. The aim of the present paper is to provide a practical guide to applying STDR to complex biomolecular systems. We discuss the details of our STDR implementation, which is a highly-parallel algorithm designed to maximize computational efficiency while simultaneously minimizing network communication and data storage requirements. Using a 35-residue disordered peptide in explicit water as a test system, we characterize the efficiency of the STDR algorithm with respect to both diffusion in temperature space and statistical convergence of structural properties. Importantly, we show that STDR provides a dramatic enhancement of conformational sampling compared to a canonical MD simulation.
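
    The temperature-jump step is easy to make concrete. Below is a minimal Python sketch of one simulated-tempering move, assuming a standard Metropolis acceptance rule on a fixed temperature ladder with precomputed weights g; the names, units and the stubbed-out MD step are illustrative, not the authors' STDR implementation.

        import math, random

        KB = 0.0083145  # Boltzmann constant in kJ/(mol K); illustrative units

        def tempering_jump(i, energy, temps, g):
            """Propose a jump from temperature index i to a neighbour j and
            accept it with the simulated-tempering Metropolis probability
            min(1, exp(-(beta_j - beta_i) * E + g_j - g_i))."""
            j = i + random.choice((-1, 1))
            if j < 0 or j >= len(temps):
                return i                      # proposal fell off the ladder
            beta_i = 1.0 / (KB * temps[i])
            beta_j = 1.0 / (KB * temps[j])
            log_acc = -(beta_j - beta_i) * energy + (g[j] - g[i])
            return j if math.log(random.random()) < log_acc else i

        # alternating short MD segments with this jump, e.g.
        # energy = run_md(...); i = tempering_jump(i, energy, temps, g),
        # produces the random walk in temperature described above.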

  12. Circumpolar assessment of rhizosphere priming shows limited increase in carbon loss estimates for permafrost soils but large regional variability

    Science.gov (United States)

    Wild, B.; Keuper, F.; Kummu, M.; Beer, C.; Blume-Werry, G.; Fontaine, S.; Gavazov, K.; Gentsch, N.; Guggenberger, G.; Hugelius, G.; Jalava, M.; Koven, C.; Krab, E. J.; Kuhry, P.; Monteux, S.; Richter, A.; Shazhad, T.; Dorrepaal, E.

    2017-12-01

    Predictions of soil organic carbon (SOC) losses in the northern circumpolar permafrost area converge around 15% (± 3% standard error) of the initial C pool by 2100 under the RCP 8.5 warming scenario. Yet, none of these estimates consider plant-soil interactions such as the rhizosphere priming effect (RPE). While laboratory experiments have shown that the input of plant-derived compounds can stimulate SOC losses by up to 1200%, the magnitude of the RPE in natural ecosystems is unknown and no methods for upscaling exist so far. Here we present the first spatially and depth-explicit RPE model that allows estimates of the RPE on a large scale (PrimeSCale). We combine available spatial data (SOC, C/N, GPP, ALT and ecosystem type) and new ecological insights to assess the importance of the RPE at the circumpolar scale. We use a positive saturating relationship between the RPE and belowground C allocation and two ALT-dependent rooting-depth distribution functions (for tundra and boreal forest) to proportionally assign belowground C allocation and RPE to individual soil depth increments. The model makes it possible to take into account reasonable limiting factors on additional SOC losses by the RPE, including interactions between spatial and/or depth variation in GPP, plant root density, SOC stocks and ALT. We estimate potential RPE-induced SOC losses at 9.7 Pg C (5-95% CI: 1.5-23.2 Pg C) by 2100 (RCP 8.5). This corresponds to an increase of the current permafrost SOC-loss estimate from 15% of the initial C pool to about 16%. If we apply an additional molar C/N threshold of 20 to account for microbial C limitation as a requirement for the RPE, SOC losses by the RPE are further reduced to 6.5 Pg C (5-95% CI: 1.0-16.8 Pg C) by 2100 (RCP 8.5). Although our results show that current estimates of permafrost soil C losses are robust without taking into account the RPE, our model also highlights high RPE risk in Siberian lowland areas and Alaska north of the Brooks Range. The small overall impact of
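
    The shape of such a depth-explicit scheme can be sketched in a few lines. The Python below is a loose toy illustration only: the exponential rooting profile (Jackson-style, truncated at the active-layer thickness ALT), the 50% belowground share of GPP, and the Michaelis-Menten-type saturating RPE response are all assumed placeholder choices, not the PrimeSCale parameterisation.

        import numpy as np

        def rpe_by_depth(gpp, alt_m, n_layers=10, beta=0.94,
                         rpe_max=0.5, k_half=200.0):
            """Toy depth-explicit RPE: spread belowground C allocation over
            soil layers with an exponential rooting profile truncated at the
            active-layer thickness, then apply a saturating RPE response.
            All constants are illustrative assumptions."""
            bounds = np.linspace(0.0, alt_m, n_layers + 1)   # layer bounds, m
            cum_roots = 1.0 - beta ** (100.0 * bounds)       # cumulative root fraction
            frac = np.diff(cum_roots) / cum_roots[-1]        # per-layer share, renormalised
            alloc = 0.5 * gpp * frac                         # assumed belowground C input per layer
            return rpe_max * alloc / (k_half + alloc)        # saturating priming response

        # e.g. rpe_by_depth(gpp=600.0, alt_m=0.8) -> per-layer priming multipliers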

  13. Large-scale prospective T cell function assays in shipped, unfrozen blood samples

    DEFF Research Database (Denmark)

    Hadley, David; Cheung, Roy K; Becker, Dorothy J

    2014-01-01

    ... for measuring core T cell functions. The Trial to Reduce Insulin-dependent diabetes mellitus in the Genetically at Risk (TRIGR) type 1 diabetes prevention trial used consecutive measurements of T cell proliferative responses in prospectively collected fresh heparinized blood samples shipped by courier within North America. In this article, we report on the quality control implications of this simple and pragmatic shipping practice and the interpretation of positive- and negative-control analytes in our assay. We used polyclonal and postvaccination responses in 4,919 samples to analyze the development of T cell immunocompetence. We have found that the vast majority of the samples were viable up to 3 days from the blood draw, yet meaningful responses were found in a proportion of those with longer travel times. Furthermore, the shipping time of uncooled samples significantly decreased both the viabilities...

  14. Product-selective blot: a technique for measuring enzyme activities in large numbers of samples and in native electrophoresis gels

    International Nuclear Information System (INIS)

    Thompson, G.A.; Davies, H.M.; McDonald, N.

    1985-01-01

    A method termed product-selective blotting has been developed for screening large numbers of samples for enzyme activity. The technique is particularly well suited to detection of enzymes in native electrophoresis gels. The principle of the method was demonstrated by blotting samples from glutaminase or glutamate synthase reactions into an agarose gel embedded with ion-exchange resin under conditions favoring binding of product (glutamate) over substrates and other substances in the reaction mixture. After washes to remove these unbound substances, the product was measured using either fluorometric staining or radiometric techniques. Glutaminase activity in native electrophoresis gels was visualized by a related procedure in which substrates and products from reactions run in the electrophoresis gel were blotted directly into a resin-containing image gel. Considering the selective-binding materials available for use in the image gel, along with the possible detection systems, this method has potentially broad application

  15. A large replication study and meta-analysis in European samples provides further support for association of AHI1 markers with schizophrenia

    DEFF Research Database (Denmark)

    Ingason, Andrés; Giegling, Ina; Cichon, Sven

    2010-01-01

    The Abelson helper integration site 1 (AHI1) gene locus on chromosome 6q23 is among a group of candidate loci for schizophrenia susceptibility that were initially identified by linkage followed by linkage disequilibrium mapping, and subsequent replication of the association in an independent sample. Here, we present results of a replication study of AHI1 locus markers, previously implicated in schizophrenia, in a large European sample (in total 3907 affected and 7429 controls). Furthermore, we perform a meta-analysis of the implicated markers in 4496 affected and 18,920 controls. Both ... as the neighbouring phosphodiesterase 7B (PDE7B), may be considered candidates for involvement in the genetic aetiology of schizophrenia.

  16. Comparison of Health Risks and Changes in Risks over Time Among a Sample of Lesbian, Gay, Bisexual, and Heterosexual Employees at a Large Firm.

    Science.gov (United States)

    Mitchell, Rebecca J; Ozminkowski, Ronald J

    2017-04-01

    The objective of this study was to estimate the prevalence of health risk factors by sexual orientation over a 4-year period within a sample of employees from a large firm. Propensity score-weighted generalized linear regression models were used to estimate the proportion of employees at high risk for health problems in each year and over time, controlling for many factors. Analyses were conducted with 6 study samples based on sex and sexual orientation. Rates of smoking, stress, and certain other health risk factors were higher for lesbian, gay, and bisexual (LGB) employees compared with rates of these risks among straight employees. Lesbian, gay, and straight employees successfully reduced risk levels in many areas. Significant reductions were realized for the proportion at risk for high stress and low life satisfaction among gay and lesbian employees, and for the proportion of smokers among gay males. Comparing changes over time for sexual orientation groups versus other employee groups showed that improvements and reductions in risk levels for most health risk factors examined occurred at similar rates among individuals employed by this firm, regardless of sexual orientation. These results can help improve understanding of LGB health and provide information on where to focus workplace health promotion efforts to meet the health needs of LGB employees.

  17. Classical boson sampling algorithms with superior performance to near-term experiments

    Science.gov (United States)

    Neville, Alex; Sparrow, Chris; Clifford, Raphaël; Johnston, Eric; Birchall, Patrick M.; Montanaro, Ashley; Laing, Anthony

    2017-12-01

    It is predicted that quantum computers will dramatically outperform their conventional counterparts. However, large-scale universal quantum computers are yet to be built. Boson sampling is a rudimentary quantum algorithm tailored to the platform of linear optics, which has sparked interest as a rapid way to demonstrate such quantum supremacy. Photon statistics are governed by intractable matrix functions, which suggests that sampling from the distribution obtained by injecting photons into a linear optical network could be solved more quickly by a photonic experiment than by a classical computer. The apparently low resource requirements for large boson sampling experiments have raised expectations of a near-term demonstration of quantum supremacy by boson sampling. Here we present classical boson sampling algorithms and theoretical analyses of prospects for scaling boson sampling experiments, showing that near-term quantum supremacy via boson sampling is unlikely. Our classical algorithm, based on Metropolised independence sampling, allowed the boson sampling problem to be solved for 30 photons with standard computing hardware. Compared to current experiments, a demonstration of quantum supremacy over a successful implementation of these classical methods on a supercomputer would require the number of photons and experimental components to increase by orders of magnitude, while tackling exponentially scaling photon loss.
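
    The backbone of such an algorithm fits in a page. The toy Python below runs Metropolised independence sampling for a small, lossless boson sampling instance, proposing outcomes from the distinguishable-photon distribution and scoring them with permanents computed by Ryser's formula; it illustrates the idea at toy scale and is not the authors' optimized 30-photon implementation.

        import itertools, math
        import numpy as np

        def permanent(a):
            """Ryser's formula, O(2^n * n): fine for small matrices."""
            n = a.shape[0]
            total = 0.0 + 0.0j
            for r in range(1, n + 1):
                for cols in itertools.combinations(range(n), r):
                    total += (-1) ** r * np.prod(a[:, cols].sum(axis=1))
            return (-1) ** n * total

        def random_unitary(m, rng):
            z = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
            q, r = np.linalg.qr(z)
            d = np.diag(r)
            return q * (d / np.abs(d))          # fix column phases

        def _sub(u, n, x):                      # rows: output modes, cols: inputs 0..n-1
            return u[np.array(x)][:, :n]

        def _mult(x, m):                        # prod of s_j! over output occupations
            return math.prod(math.factorial(int(c))
                             for c in np.bincount(np.array(x), minlength=m))

        def target_p(u, n, x):                  # |Perm(U_S)|^2 / prod(s_j!)
            return abs(permanent(_sub(u, n, x))) ** 2 / _mult(x, u.shape[0])

        def proposal_p(u, n, x):                # distinguishable photons
            return permanent(np.abs(_sub(u, n, x)) ** 2).real / _mult(x, u.shape[0])

        def propose(u, n, rng):
            probs = np.abs(u[:, :n]) ** 2       # column i: where photon i lands
            return tuple(sorted(int(rng.choice(u.shape[0], p=probs[:, i]))
                                for i in range(n)))

        def mis_boson_sampler(m=8, n=3, steps=2000, seed=0):
            rng = np.random.default_rng(seed)
            u = random_unitary(m, rng)
            x, chain = propose(u, n, rng), []
            for _ in range(steps):
                y = propose(u, n, rng)
                ratio = (target_p(u, n, y) * proposal_p(u, n, x)) / \
                        (target_p(u, n, x) * proposal_p(u, n, y) + 1e-300)
                if rng.random() < ratio:
                    x = y
                chain.append(x)
            return chain                        # discard burn-in and thin before use

    Because the proposal is already close to the target for modest photon numbers, acceptance rates stay high and the chain mixes quickly, which is what makes this classical baseline competitive.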

  18. Method for Determination of Neptunium in Large-Sized Urine Samples Using Manganese Dioxide Coprecipitation and 242Pu as Yield Tracer

    DEFF Research Database (Denmark)

    Qiao, Jixin; Hou, Xiaolin; Roos, Per

    2013-01-01

    A novel method for bioassay of large volumes of human urine samples using manganese dioxide coprecipitation for preconcentration was developed for rapid determination of 237Np. 242Pu was utilized as a nonisotopic tracer to monitor the chemical yield of 237Np. A sequential injection extraction chromatography ... and rapid analysis of neptunium contamination levels for emergency preparedness.

  19. Evidence from a Large Sample on the Effects of Group Size and Decision-Making Time on Performance in a Marketing Simulation Game

    Science.gov (United States)

    Treen, Emily; Atanasova, Christina; Pitt, Leyland; Johnson, Michael

    2016-01-01

    Marketing instructors using simulation games as a way of inducing some realism into a marketing course are faced with many dilemmas. Two important quandaries are the optimal size of groups and how much of the students' time should ideally be devoted to the game. Using evidence from a very large sample of teams playing a simulation game, the study…

  20. Post-traumatic stress syndrome in a large sample of older adults: determinants and quality of life.

    Science.gov (United States)

    Lamoureux-Lamarche, Catherine; Vasiliadis, Helen-Maria; Préville, Michel; Berbiche, Djamal

    2016-01-01

    The aims of this study are to assess, in a sample of older adults consulting in primary care practices, the determinants and quality of life associated with post-traumatic stress syndrome (PTSS). Data used came from a large sample of 1765 community-dwelling older adults who were waiting to receive health services in primary care clinics in the province of Quebec. PTSS was measured with the PTSS scale. Socio-demographic and clinical characteristics were used as potential determinants of PTSS. Quality of life was measured with the EuroQol-5D-3L (EQ-5D-3L), the EQ Visual Analog Scale, and the Satisfaction With Your Life Scale. Multivariate logistic and linear regression models were used to study the presence of PTSS and different measures of health-related quality of life and quality of life as a function of study variables. The six-month prevalence of PTSS was 11.0%. PTSS was associated with age, marital status, number of chronic disorders and the presence of an anxiety disorder. PTSS was also associated with the EQ-5D-3L and the Satisfaction With Your Life Scale. PTSS is prevalent in patients consulting in primary care practices. Primary care physicians should be aware that PTSS is also associated with a decrease in quality of life, which can further negatively impact health status.

  1. Space-time relationship in continuously moving table method for large FOV peripheral contrast-enhanced magnetic resonance angiography

    International Nuclear Information System (INIS)

    Sabati, M; Lauzon, M L; Frayne, R

    2003-01-01

    Data acquisition using a continuously moving table approach is a method capable of generating large field-of-view (FOV) 3D MR angiograms. However, in order to obtain venous contamination-free contrast-enhanced (CE) MR angiograms in the lower limbs, one of the major challenges is to acquire all necessary k-space data during the restricted arterial phase of the contrast agent. Preliminary investigation of the space-time relationship of continuously acquired peripheral angiography is performed in this work. Deterministic and stochastic undersampled hybrid-space (x, k_y, k_z) acquisitions are simulated for large FOV peripheral runoff studies. Initial results show the possibility of acquiring isotropic large FOV images of the entire peripheral vascular system. An optimal trade-off between the spatial and temporal sampling properties was found that produced a high-spatial-resolution peripheral CE-MR angiogram. The deterministic sampling pattern was capable of reconstructing the global structure of the peripheral arterial tree and showed slightly better global quantitative results than stochastic patterns. Optimal stochastic sampling patterns, on the other hand, enhanced small vessels and had more favourable local quantitative results. These simulations demonstrate the complex spatial-temporal relationship when sampling large FOV peripheral runoff studies. They also suggest that more investigation is required to maximize image quality as a function of hybrid-space coverage, acquisition repetition time and sampling pattern parameters

  2. High-throughput genotyping assay for the large-scale genetic characterization of Cryptosporidium parasites from human and bovine samples.

    Science.gov (United States)

    Abal-Fabeiro, J L; Maside, X; Llovo, J; Bello, X; Torres, M; Treviño, M; Moldes, L; Muñoz, A; Carracedo, A; Bartolomé, C

    2014-04-01

    The epidemiological study of human cryptosporidiosis requires the characterization of species and subtypes involved in human disease in large sample collections. Molecular genotyping is costly and time-consuming, making the implementation of low-cost, highly efficient technologies increasingly necessary. Here, we designed a protocol based on MALDI-TOF mass spectrometry for the high-throughput genotyping of a panel of 55 single nucleotide variants (SNVs) selected as markers for the identification of common gp60 subtypes of four Cryptosporidium species that infect humans. The method was applied to a panel of 608 human and 63 bovine isolates and the results were compared with control samples typed by Sanger sequencing. The method allowed the identification of species in 610 specimens (90·9%) and gp60 subtype in 605 (90·2%). It displayed excellent performance, with sensitivity and specificity values of 87·3 and 98·0%, respectively. Up to nine genotypes from four different Cryptosporidium species (C. hominis, C. parvum, C. meleagridis and C. felis) were detected in humans; the most common ones were C. hominis subtype Ib, and C. parvum IIa (61·3 and 28·3%, respectively). 96·5% of the bovine samples were typed as IIa. The method performs as well as the widely used Sanger sequencing and is more cost-effective and less time consuming.

  3. CHRONICITY OF DEPRESSION AND MOLECULAR MARKERS IN A LARGE SAMPLE OF HAN CHINESE WOMEN.

    Science.gov (United States)

    Edwards, Alexis C; Aggen, Steven H; Cai, Na; Bigdeli, Tim B; Peterson, Roseann E; Docherty, Anna R; Webb, Bradley T; Bacanu, Silviu-Alin; Flint, Jonathan; Kendler, Kenneth S

    2016-04-25

    Major depressive disorder (MDD) has been associated with changes in mean telomere length and mitochondrial DNA (mtDNA) copy number. This study investigates if clinical features of MDD differentially impact these molecular markers. Data from a large, clinically ascertained sample of Han Chinese women with recurrent MDD were used to examine whether symptom presentation, severity, and comorbidity were related to salivary telomere length and/or mtDNA copy number (maximum N = 5,284 for both molecular and phenotypic data). Structural equation modeling revealed that duration of longest episode was positively associated with mtDNA copy number, while earlier age of onset of most severe episode and a history of dysthymia were associated with shorter telomeres. Other factors, such as symptom presentation, family history of depression, and other comorbid internalizing disorders, were not associated with these molecular markers. Chronicity of depressive symptoms is related to more pronounced telomere shortening and increased mtDNA copy number among individuals with a history of recurrent MDD. As these molecular markers have previously been implicated in physiological aging and morbidity, individuals who experience prolonged depressive symptoms are potentially at greater risk of adverse medical outcomes. © 2016 Wiley Periodicals, Inc.

  4. Replicability of time-varying connectivity patterns in large resting state fMRI samples.

    Science.gov (United States)

    Abrol, Anees; Damaraju, Eswar; Miller, Robyn L; Stephen, Julia M; Claus, Eric D; Mayer, Andrew R; Calhoun, Vince D

    2017-12-01

    The past few years have seen an emergence of approaches that leverage temporal changes in whole-brain patterns of functional connectivity (the chronnectome). In this chronnectome study, we investigate the replicability of the human brain's inter-regional coupling dynamics during rest by evaluating two different dynamic functional network connectivity (dFNC) analysis frameworks using 7 500 functional magnetic resonance imaging (fMRI) datasets. To quantify the extent to which the emergent functional connectivity (FC) patterns are reproducible, we characterize the temporal dynamics by deriving several summary measures across multiple large, independent age-matched samples. Reproducibility was demonstrated through the existence of basic connectivity patterns (FC states) amidst an ensemble of inter-regional connections. Furthermore, application of the methods to conservatively configured (statistically stationary, linear and Gaussian) surrogate datasets revealed that some of the studied state summary measures were indeed statistically significant and also suggested that this class of null model did not explain the fMRI data fully. This extensive testing of reproducibility of similarity statistics also suggests that the estimated FC states are robust against variation in data quality, analysis, grouping, and decomposition methods. We conclude that future investigations probing the functional and neurophysiological relevance of time-varying connectivity assume critical importance. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
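
    For orientation, one widely used dFNC pipeline of the kind evaluated here computes sliding-window correlations and clusters the windowed FC patterns into recurring states with k-means. The sketch below is a generic illustration of that class of methods (window length, step and k are arbitrary placeholder values), not the paper's two specific frameworks.

        import numpy as np
        from sklearn.cluster import KMeans

        def dfnc_states(ts, win=44, step=2, k=5, seed=0):
            """ts: (timepoints, components) array of fMRI component time courses.
            Returns per-window FC vectors and their k-means state labels."""
            t, c = ts.shape
            iu = np.triu_indices(c, k=1)
            windows = []
            for start in range(0, t - win + 1, step):
                seg = ts[start:start + win]
                fc = np.corrcoef(seg, rowvar=False)   # windowed correlation matrix
                windows.append(fc[iu])                # keep upper triangle only
            x = np.asarray(windows)
            labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(x)
            return x, labels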

  5. Test sample handling apparatus

    International Nuclear Information System (INIS)

    1981-01-01

    A test sample handling apparatus using automatic scintillation counting for gamma detection, for use in such fields as radioimmunoassay, is described. The apparatus automatically and continuously counts large numbers of samples rapidly and efficiently by the simultaneous counting of two samples. By means of sequential ordering of non-sequential counting data, it is possible to obtain precisely ordered data while utilizing sample carrier holders having a minimum length. (U.K.)

  6. Insights into a spatially embedded social network from a large-scale snowball sample

    Science.gov (United States)

    Illenberger, J.; Kowald, M.; Axhausen, K. W.; Nagel, K.

    2011-12-01

    Much research has been conducted to obtain insights into the basic laws governing human travel behaviour. While the traditional travel survey has been for a long time the main source of travel data, recent approaches to use GPS data, mobile phone data, or the circulation of bank notes as a proxy for human travel behaviour are promising. The present study proposes a further source of such proxy-data: the social network. We collect data using an innovative snowball sampling technique to obtain details on the structure of a leisure-contacts network. We analyse the network with respect to its topology, the individuals' characteristics, and its spatial structure. We further show that a multiplication of the functions describing the spatial distribution of leisure contacts and the frequency of physical contacts results in a trip distribution that is consistent with data from the Swiss travel survey.

  7. Statistical searches for microlensing events in large, non-uniformly sampled time-domain surveys: A test using palomar transient factory data

    Energy Technology Data Exchange (ETDEWEB)

    Price-Whelan, Adrian M.; Agüeros, Marcel A. [Department of Astronomy, Columbia University, 550 W 120th Street, New York, NY 10027 (United States); Fournier, Amanda P. [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93106 (United States); Street, Rachel [Las Cumbres Observatory Global Telescope Network, Inc., 6740 Cortona Drive, Suite 102, Santa Barbara, CA 93117 (United States); Ofek, Eran O. [Benoziyo Center for Astrophysics, Weizmann Institute of Science, 76100 Rehovot (Israel); Covey, Kevin R. [Lowell Observatory, 1400 West Mars Hill Road, Flagstaff, AZ 86001 (United States); Levitan, David; Sesar, Branimir [Division of Physics, Mathematics, and Astronomy, California Institute of Technology, Pasadena, CA 91125 (United States); Laher, Russ R.; Surace, Jason, E-mail: adrn@astro.columbia.edu [Spitzer Science Center, California Institute of Technology, Mail Stop 314-6, Pasadena, CA 91125 (United States)

    2014-01-20

    Many photometric time-domain surveys are driven by specific goals, such as searches for supernovae or transiting exoplanets, which set the cadence with which fields are re-imaged. In the case of the Palomar Transient Factory (PTF), several sub-surveys are conducted in parallel, leading to non-uniform sampling over its ∼20,000 deg² footprint. While the median 7.26 deg² PTF field has been imaged ∼40 times in the R band, ∼2300 deg² have been observed >100 times. We use PTF data to study the trade-off between searching for microlensing events in a survey whose footprint is much larger than that of typical microlensing searches, but with far-from-optimal time sampling. To examine the probability that microlensing events can be recovered in these data, we test statistics used on uniformly sampled data to identify variables and transients. We find that the von Neumann ratio performs best for identifying simulated microlensing events in our data. We develop a selection method using this statistic and apply it to data from fields with >10 R-band observations, 1.1 × 10⁹ light curves, uncovering three candidate microlensing events. We lack simultaneous, multi-color photometry to confirm these as microlensing events. However, their number is consistent with predictions for the event rate in the PTF footprint over the survey's three years of operations, as estimated from near-field microlensing models. This work can help constrain all-sky event rate predictions and tests microlensing signal recovery in large data sets, which will be useful to future time-domain surveys, such as that planned with the Large Synoptic Survey Telescope.
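
    The selection statistic is simple enough to state in full. A minimal sketch of the von Neumann ratio for a light curve follows (illustrative code, not the survey pipeline); values well below the white-noise expectation of 2 flag the smooth, correlated brightening expected from a microlensing event.

        import numpy as np

        def von_neumann_ratio(mag):
            """Mean square successive difference divided by the sample variance."""
            mag = np.asarray(mag, dtype=float)
            return np.mean(np.diff(mag) ** 2) / np.var(mag, ddof=1)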

  8. Fast Ordered Sampling of DNA Sequence Variants

    Directory of Open Access Journals (Sweden)

    Anthony J. Greenberg

    2018-05-01

    Full Text Available Explosive growth in the amount of genomic data is matched by increasing power of consumer-grade computers. Even applications that require powerful servers can be quickly tested on desktop or laptop machines if we can generate representative samples from large data sets. I describe a fast and memory-efficient implementation of an on-line sampling method developed for tape drives 30 years ago. Focusing on genotype files, I test the performance of this technique on modern solid-state and spinning hard drives, and show that it performs well compared to a simple sampling scheme. I illustrate its utility by developing a method to quickly estimate genome-wide patterns of linkage disequilibrium (LD decay with distance. I provide open-source software that samples loci from several variant format files, a separate program that performs LD decay estimates, and a C++ library that lets developers incorporate these methods into their own projects.
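
    A classic order-preserving, single-pass scheme from the tape-drive era is selection sampling, Knuth's Algorithm S. The Python sketch below is an illustrative stand-in: the paper's own implementation is the C++ library mentioned above and may differ in detail.

        import random

        def ordered_sample(records, n_total, k):
            """Stream once through n_total records, yielding k of them
            uniformly at random and in their original order (Algorithm S)."""
            seen = kept = 0
            for rec in records:
                # keep with probability (k - kept) / (n_total - seen)
                if random.random() * (n_total - seen) < (k - kept):
                    yield rec
                    kept += 1
                    if kept == k:
                        return
                seen += 1

        # e.g. list(ordered_sample(range(10**6), 10**6, 1000)) yields 1000
        # indices in ascending order, mimicking an ordered draw of loci.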

  9. Fast Ordered Sampling of DNA Sequence Variants.

    Science.gov (United States)

    Greenberg, Anthony J

    2018-05-04

    Explosive growth in the amount of genomic data is matched by increasing power of consumer-grade computers. Even applications that require powerful servers can be quickly tested on desktop or laptop machines if we can generate representative samples from large data sets. I describe a fast and memory-efficient implementation of an on-line sampling method developed for tape drives 30 years ago. Focusing on genotype files, I test the performance of this technique on modern solid-state and spinning hard drives, and show that it performs well compared to a simple sampling scheme. I illustrate its utility by developing a method to quickly estimate genome-wide patterns of linkage disequilibrium (LD) decay with distance. I provide open-source software that samples loci from several variant format files, a separate program that performs LD decay estimates, and a C++ library that lets developers incorporate these methods into their own projects. Copyright © 2018 Greenberg.

  10. Dealing with trade-offs in destructive sampling designs for occupancy surveys.

    Directory of Open Access Journals (Sweden)

    Stefano Canessa

    Full Text Available Occupancy surveys should be designed to minimise false absences. This is commonly achieved by increasing replication or increasing the efficiency of surveys. In the case of destructive sampling designs, in which searches of individual microhabitats represent the repeat surveys, minimising false absences leads to an inherent trade-off. Surveyors can sample more low quality microhabitats, bearing the resultant financial costs and producing wider-spread impacts, or they can target high quality microhabitats were the focal species is more likely to be found and risk more severe impacts on local habitat quality. We show how this trade-off can be solved with a decision-theoretic approach, using the Millewa Skink Hemiergis millewae from southern Australia as a case study. Hemiergis millewae is an endangered reptile that is best detected using destructive sampling of grass hummocks. Within sites that were known to be occupied by H. millewae, logistic regression modelling revealed that lizards were more frequently detected in large hummocks. If this model is an accurate representation of the detection process, searching large hummocks is more efficient and requires less replication, but this strategy also entails destruction of the best microhabitats for the species. We developed an optimisation tool to calculate the minimum combination of the number and size of hummocks to search to achieve a given cumulative probability of detecting the species at a site, incorporating weights to reflect the sensitivity of the results to a surveyor's priorities. The optimisation showed that placing high weight on minimising volume necessitates impractical replication, whereas placing high weight on minimising replication requires searching very large hummocks which are less common and may be vital for H. millewae. While destructive sampling methods are sometimes necessary, surveyors must be conscious of the ecological impacts of these methods. This study provides a

  11. EFFECTS OF LONG-TERM ALENDRONATE TREATMENT ON A LARGE SAMPLE OF PEDIATRIC PATIENTS WITH OSTEOGENESIS IMPERFECTA.

    Science.gov (United States)

    Lv, Fang; Liu, Yi; Xu, Xiaojie; Wang, Jianyi; Ma, Doudou; Jiang, Yan; Wang, Ou; Xia, Weibo; Xing, Xiaoping; Yu, Wei; Li, Mei

    2016-12-01

    Osteogenesis imperfecta (OI) is a group of inherited diseases characterized by reduced bone mass, recurrent bone fractures, and progressive bone deformities. Here, we evaluate the efficacy and safety of long-term treatment with alendronate in a large sample of Chinese children and adolescents with OI. In this prospective study, a total of 91 children and adolescents with OI were included. The patients received 3 years' treatment with 70 mg alendronate weekly and 500 mg calcium daily. During the treatment, fracture incidence, bone mineral density (BMD), and serum levels of the bone turnover biomarkers (alkaline phosphatase [ALP] and cross-linked C-telopeptide of type I collagen [β-CTX]) were evaluated. Linear growth speed and parameters of safety were also measured. After 3 years of treatment, the mean annual fracture incidence decreased from 1.2 ± 0.8 to 0.2 ± 0.3 (P ...). ... OI = osteogenesis imperfecta; PTH = parathyroid hormone.

  12. Acceptance sampling using judgmental and randomly selected samples

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl

    2010-09-01

    We present a Bayesian model for acceptance sampling where the population consists of two groups, each with different levels of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail, and discuss the conditions necessary to ensure that required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems, and, in particular, to environmental sampling where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.
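
    The simpler Bayesian formulation that the authors describe as a special case reduces to a one-liner. Assuming a Beta prior on the per-item defect probability and that all n sampled items were acceptable, the posterior probability that the defect rate is below (1 - frac) follows directly; this is an illustrative sketch, not the paper's two-group model.

        from scipy.stats import beta

        def prob_clean(n_sampled, frac=0.95, a=1.0, b=1.0):
            """Posterior probability that the unacceptable-item rate is below
            (1 - frac) after observing n_sampled items, all acceptable.
            Beta(a, b) prior on the defect rate gives a Beta(a, b + n) posterior."""
            return beta.cdf(1.0 - frac, a, b + n_sampled)

        # e.g. prob_clean(59) with a uniform prior is about 0.95, recovering
        # the familiar "59 clean samples for 95%/95%" rule of thumb.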

  13. Characteristic Polynomials of Sample Covariance Matrices: The Non-Square Case

    OpenAIRE

    Kösters, Holger

    2009-01-01

    We consider the sample covariance matrices of large data matrices which have i.i.d. complex matrix entries and which are non-square in the sense that the difference between the number of rows and the number of columns tends to infinity. We show that the second-order correlation function of the characteristic polynomial of the sample covariance matrix is asymptotically given by the sine kernel in the bulk of the spectrum and by the Airy kernel at the edge of the spectrum. Similar results are g...

  14. Large-D gravity and low-D strings.

    Science.gov (United States)

    Emparan, Roberto; Grumiller, Daniel; Tanabe, Kentaro

    2013-06-21

    We show that in the limit of a large number of dimensions a wide class of nonextremal neutral black holes has a universal near-horizon limit. The limiting geometry is the two-dimensional black hole of string theory with a two-dimensional target space. Its conformal symmetry explains the properties of massless scalars found recently in the large-D limit. For black branes with string charges, the near-horizon geometry is that of the three-dimensional black strings of Horne and Horowitz. The analogies between the α' expansion in string theory and the large-D expansion in gravity suggest a possible effective string description of the large-D limit of black holes. We comment on applications to several subjects, in particular to the problem of critical collapse.

  15. Large abnormal peak on capillary zone electrophoresis due to contrast agent.

    Science.gov (United States)

    Wheeler, Rachel D; Zhang, Liqun; Sheldon, Joanna

    2017-01-01

    Background: Some iodinated radio-contrast media absorb ultraviolet light and can therefore be detected by capillary zone electrophoresis. If seen, these peaks are typically small with 'quantifications' of below 5 g/L. Here, we describe the detection of a large peak on capillary zone electrophoresis that was due to the radio-contrast agent, Omnipaque™. Methods: Serum from a patient was analysed by capillary zone electrophoresis, and the IgG, IgA, IgM and total protein concentrations were measured. The serum sample was further analysed by gel electrophoresis and immunofixation. Results: Capillary zone electrophoresis results for the serum sample showed a large peak with a concentration high enough to warrant urgent investigation. However, careful interpretation alongside the serum immunoglobulin concentrations and total protein concentration showed that the abnormal peak was a pseudoparaprotein rather than a monoclonal immunoglobulin. This was confirmed by analysis with gel electrophoresis and also serum immunofixation. The patient had had a CT angiogram with the radio-contrast agent Omnipaque™; addition of Omnipaque™ to a normal serum sample gave a peak with comparable mobility to the pseudoparaprotein in the patient's serum. Conclusions: Pseudoparaproteins can appear as a large band on capillary zone electrophoresis. This case highlights the importance of a laboratory process that detects significant electrophoretic abnormalities promptly and interprets them in the context of the immunoglobulin concentrations. This should avoid incorrect reporting of pseudoparaproteins, which could result in the patient having unnecessary investigations.

  16. Explaining health care expenditure variation: large-sample evidence using linked survey and health administrative data.

    Science.gov (United States)

    Ellis, Randall P; Fiebig, Denzil G; Johar, Meliyanni; Jones, Glenn; Savage, Elizabeth

    2013-09-01

    Explaining individual, regional, and provider variation in health care spending is of enormous value to policymakers but is often hampered by the lack of individual level detail in universal public health systems because budgeted spending is often not attributable to specific individuals. Even rarer is self-reported survey information that helps explain this variation in large samples. In this paper, we link a cross-sectional survey of 267 188 Australians age 45 and over to a panel dataset of annual healthcare costs calculated from several years of hospital, medical and pharmaceutical records. We use this data to distinguish between cost variations due to health shocks and those that are intrinsic (fixed) to an individual over three years. We find that high fixed expenditures are positively associated with age, especially older males, poor health, obesity, smoking, cancer, stroke and heart conditions. Being foreign born, speaking a foreign language at home and low income are more strongly associated with higher time-varying expenditures, suggesting greater exposure to adverse health shocks. Copyright © 2013 John Wiley & Sons, Ltd.

  17. Optimism and self-esteem are related to sleep. Results from a large community-based sample.

    Science.gov (United States)

    Lemola, Sakari; Räikkönen, Katri; Gomez, Veronica; Allemand, Mathias

    2013-12-01

    There is evidence that positive personality characteristics, such as optimism and self-esteem, are important for health. Less is known about possible determinants of positive personality characteristics. To test the relationship of optimism and self-esteem with insomnia symptoms and sleep duration, sleep parameters, optimism, and self-esteem were assessed by self-report in a community-based sample of 1,805 adults aged between 30 and 84 years in the USA. Moderation of the relation between sleep and positive characteristics by gender and age, as well as potential confounding of the association by depressive disorder, was tested. Individuals with insomnia symptoms scored lower on optimism and self-esteem largely independent of age and sex, controlling for symptoms of depression and sleep duration. Short sleep duration (...) was related to low optimism and self-esteem when compared to individuals sleeping 7-8 h, controlling for depressive symptoms. Long sleep duration (>9 h) was also related to low optimism and self-esteem independent of age and sex. Good and sufficient sleep is associated with positive personality characteristics. This relationship is independent of the association between poor sleep and depression.

  18. A Principle Component Analysis of Galaxy Properties from a Large, Gas-Selected Sample

    Directory of Open Access Journals (Sweden)

    Yu-Yen Chang

    2012-01-01

    ... concluded that this is in conflict with the CDM model. Considering the importance of the issue, we reinvestigate the problem using principal component analysis on a fivefold larger sample and additional near-infrared data. We use databases from the Arecibo Legacy Fast Arecibo L-band Feed Array Survey for the gas properties, the Sloan Digital Sky Survey for the optical properties, and the Two Micron All Sky Survey for the near-infrared properties. We confirm that the parameters are indeed correlated, where a single physical parameter can explain 83% of the variations. When color (g-i) is included, the first component still dominates but it develops a second principal component. In addition, the near-infrared color (i-J) shows an obvious second principal component that might provide evidence of the complex old star formation. Based on our data, we suggest that it is premature to pronounce the failure of the CDM model and it motivates more theoretical work.

  19. Construct validity of the Groningen Frailty Indicator established in a large sample of home-dwelling elderly persons : Evidence of stability across age and gender

    NARCIS (Netherlands)

    Peters, L. L.; Boter, H.; Burgerhof, J. G. M.; Slaets, J. P. J.; Buskens, E.

    Background: The primary objective of the present study was to evaluate the validity of the Groningen frailty Indicator (GFI) in a sample of Dutch elderly persons participating in LifeLines, a large population-based cohort study. Additional aims were to assess differences between frail and non-frail

  20. Geographical affinities of the HapMap samples.

    Directory of Open Access Journals (Sweden)

    Miao He

    Full Text Available The HapMap samples were collected for medical-genetic studies, but are also widely used in population-genetic and evolutionary investigations. Yet the ascertainment of the samples differs from most population-genetic studies which collect individuals who live in the same local region as their ancestors. What effects could this non-standard ascertainment have on the interpretation of HapMap results?We compared the HapMap samples with more conventionally-ascertained samples used in population- and forensic-genetic studies, including the HGDP-CEPH panel, making use of published genome-wide autosomal SNP data and Y-STR haplotypes, as well as producing new Y-STR data. We found that the HapMap samples were representative of their broad geographical regions of ancestry according to all tests applied. The YRI and JPT were indistinguishable from independent samples of Yoruba and Japanese in all ways investigated. However, both the CHB and the CEU were distinguishable from all other HGDP-CEPH populations with autosomal markers, and both showed Y-STR similarities to unusually large numbers of populations, perhaps reflecting their admixed origins.The CHB and JPT are readily distinguished from one another with both autosomal and Y-chromosomal markers, and results obtained after combining them into a single sample should be interpreted with caution. The CEU are better described as being of Western European ancestry than of Northern European ancestry as often reported. Both the CHB and CEU show subtle but detectable signs of admixture. Thus the YRI and JPT samples are well-suited to standard population-genetic studies, but the CHB and CEU less so.

  1. Size and shape characteristics of drumlins, derived from a large sample, and associated scaling laws

    Science.gov (United States)

    Clark, Chris D.; Hughes, Anna L. C.; Greenwood, Sarah L.; Spagnolo, Matteo; Ng, Felix S. L.

    2009-04-01

    Ice sheets flowing across a sedimentary bed usually produce a landscape of blister-like landforms streamlined in the direction of the ice flow and with each bump of the order of 10² to 10³ m in length and 10¹ m in relief. Such landforms, known as drumlins, have mystified investigators for over a hundred years. A satisfactory explanation for their formation, and thus an appreciation of their glaciological significance, has remained elusive. A recent advance has been in numerical modelling of the land-forming process. In anticipation of future modelling endeavours, this paper is motivated by the requirement for robust data on drumlin size and shape for model testing. From a systematic programme of drumlin mapping from digital elevation models and satellite images of Britain and Ireland, we used a geographic information system to compile a range of statistics on length L, width W, and elongation ratio E (where E = L/W) for a large sample. Mean L is found to be 629 m (n = 58,983), mean W is 209 m, and mean E is 2.9 (n = 37,043). Most drumlins are between 250 and 1000 metres in length; between 120 and 300 metres in width; and between 1.7 and 4.1 times as long as they are wide. Analysis of such data and plots of drumlin width against length reveals some new insights. All frequency distributions are unimodal, from which we infer that the geomorphological label of 'drumlin' is fair in that this is a true single population of landforms, rather than an amalgam of different landform types. Drumlin size shows a clear minimum bound of around 100 m (horizontal). Maybe drumlins are generated at many scales and this is the minimum, or this value may be an indication of the fundamental scale of bump generation ('proto-drumlins') prior to them growing and elongating. A relationship between drumlin width and length is found (with r² = 0.48), and it is approximately W = 7L^(1/2) when measured in metres. A surprising and sharply-defined line bounds the data cloud plotted in E-W
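
    As a rough plausibility check (an illustrative calculation, not part of the original analysis), evaluating the fitted law at the mean length gives

        W(\bar{L}) \approx 7\sqrt{629\ \mathrm{m}} \approx 176\ \mathrm{m},

    which is of the same order as the reported mean width of 209 m; exact agreement is not expected from a fit that explains only about half the variance (r² = 0.48).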

  2. A large sample of Kohonen-selected SDSS quasars with weak emission lines: selection effects and statistical properties

    Science.gov (United States)

    Meusinger, H.; Balafkan, N.

    2014-08-01

    Aims: A tiny fraction of the quasar population shows remarkably weak emission lines. Several hypotheses have been developed, but the weak line quasar (WLQ) phenomenon still remains puzzling. The aim of this study was to create a sizeable sample of WLQs and WLQ-like objects and to evaluate various properties of this sample. Methods: We performed a search for WLQs in the spectroscopic data from the Sloan Digital Sky Survey Data Release 7 based on Kohonen self-organising maps for nearly 10^5 quasar spectra. The final sample consists of 365 quasars in the redshift range z = 0.6-4.2 (mean z = 1.50 ± 0.45) and includes in particular a subsample of 46 WLQs with Mg II equivalent widths W(Mg II) ... attention was paid to selection effects. Results: The WLQs have, on average, significantly higher luminosities, Eddington ratios, and accretion rates. About half of the excess comes from a selection bias, but an intrinsic excess remains probably caused primarily by higher accretion rates. The spectral energy distribution shows a bluer continuum at rest-frame wavelengths ≳1500 Å. The variability in the optical and UV is relatively low, even taking the variability-luminosity anti-correlation into account. The percentage of radio-detected quasars and of core-dominant radio sources is significantly higher than for the control sample, whereas the mean radio-loudness is lower. Conclusions: The properties of our WLQ sample can be consistently understood assuming that it consists of a mix of quasars at the beginning of a stage of increased accretion activity and of beamed radio-quiet quasars. The higher luminosities and Eddington ratios in combination with a bluer spectral energy distribution can be explained by hotter continua, i.e. higher accretion rates. If quasar activity consists of subphases with different accretion rates, a change towards a higher rate is probably accompanied by an only slow development of the broad line region. The composite WLQ spectrum can be reasonably matched by the

  3. Big Data, Small Sample.

    Science.gov (United States)

    Gerlovina, Inna; van der Laan, Mark J; Hubbard, Alan

    2017-05-20

    Multiple comparisons and small sample size, common characteristics of many types of "Big Data" including those that are produced by genomic studies, present specific challenges that affect reliability of inference. Use of multiple testing procedures necessitates calculation of very small tail probabilities of a test statistic distribution. Results based on large deviation theory provide a formal condition that is necessary to guarantee error rate control given practical sample sizes, linking the number of tests and the sample size; this condition, however, is rarely satisfied. Using methods that are based on Edgeworth expansions (relying especially on the work of Peter Hall), we explore the impact of departures of sampling distributions from typical assumptions on actual error rates. Our investigation illustrates how far the actual error rates can be from the declared nominal levels, suggesting potentially wide-spread problems with error rate control, specifically excessive false positives. This is an important factor that contributes to "reproducibility crisis". We also review some other commonly used methods (such as permutation and methods based on finite sampling inequalities) in their application to multiple testing/small sample data. We point out that Edgeworth expansions, providing higher order approximations to the sampling distribution, offer a promising direction for data analysis that could improve reliability of studies relying on large numbers of comparisons with modest sample sizes.
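
    For background, the one-term Edgeworth expansion for the standardized sample mean (a standard textbook form, quoted here for orientation rather than from the paper) is

        P\left(\frac{\sqrt{n}(\bar{X}-\mu)}{\sigma} \le x\right)
          = \Phi(x) - \phi(x)\,\frac{\gamma\,(x^{2}-1)}{6\sqrt{n}} + O(n^{-1}),

    where Φ and φ are the standard normal CDF and density and γ is the skewness of X. The n^(-1/2) correction term shows why the extreme tail probabilities required by multiple-testing corrections converge slowly to their nominal values whenever γ ≠ 0 and n is modest.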

  4. Slip-Stick Mechanism in Training the Superconducting Magnets in the Large Hadron Collider

    CERN Document Server

    Granieri, P P; Todesco, E

    2011-01-01

    Superconducting magnets can exhibit training quenches during successive powerings before reaching nominal performance. The slip–stick motion of the conductors is considered to be one of the mechanisms of training. In this paper, we present a simple quantitative model where the training is described as a discrete dynamical system matching the equilibrium between the energy margin of the superconducting cable and the frictional energy released during the conductor motion. The model can be explicitly solved in the linearized case, showing that the short sample limit is reached via a power law. Training phenomena have a large random component. A large set of data from the large hadron collider magnet tests is postprocessed according to previously defined methods to extract an average training curve for dipoles and quadrupoles. These curves show the asymptotic power law predicted by the model. The curves are then fit through the model, which has two free parameters. The model shows good agreement over a large range, bu...

  5. Slip-Stick Mechanism in Training the Superconducting Magnets in the Large Hadron Collider

    CERN Document Server

    Granieri, P P; Lorin, C

    2011-01-01

    Superconducting magnets can exhibit training quenches during successive powerings before reaching nominal performance. The slip-stick motion of the conductors is considered to be one of the mechanisms of training. In this paper, we present a simple quantitative model where the training is described as a discrete dynamical system matching the equilibrium between the energy margin of the superconducting cable and the frictional energy released during the conductor motion. The model can be explicitly solved in the linearized case, showing that the short sample limit is reached via a power law. Training phenomena have a large random component. A large set of data from the large hadron collider magnet tests is postprocessed according to previously defined methods to extract an average training curve for dipoles and quadrupoles. These curves show the asymptotic power law predicted by the model. The curves are then fit through the model, which has two free parameters. The model shows good agreement over a large range, but ...

  6. Large-Scale Outflows in Seyfert Galaxies

    Science.gov (United States)

    Colbert, E. J. M.; Baum, S. A.

    1995-12-01

    Highly collimated outflows extend out to Mpc scales in many radio-loud active galaxies. In Seyfert galaxies, which are radio-quiet, the outflows extend out to kpc scales and do not appear to be as highly collimated. In order to study the nature of large-scale (≳1 kpc) outflows in Seyferts, we have conducted optical, radio and X-ray surveys of a distance-limited sample of 22 edge-on Seyfert galaxies. Results of the optical emission-line imaging and spectroscopic survey imply that large-scale outflows are present in ≳1/4 of all Seyferts. The radio (VLA) and X-ray (ROSAT) surveys show that large-scale radio and X-ray emission is present at about the same frequency. Kinetic luminosities of the outflows in Seyferts are comparable to those in starburst-driven superwinds. Large-scale radio sources in Seyferts appear diffuse, but do not resemble radio halos found in some edge-on starburst galaxies (e.g. M82). We discuss the feasibility of the outflows being powered by the active nucleus (e.g. a jet) or a circumnuclear starburst.

  7. Increased body mass index predicts severity of asthma symptoms but not objective asthma traits in a large sample of asthmatics

    DEFF Research Database (Denmark)

    Bildstrup, Line; Backer, Vibeke; Thomsen, Simon Francis

    2015-01-01

    AIM: To examine the relationship between body mass index (BMI) and different indicators of asthma severity in a large community-based sample of Danish adolescents and adults. METHODS: A total of 1186 subjects, 14-44 years of age, who in a screening questionnaire had reported a history of airway symptoms suggestive of asthma and/or allergy, or who were taking any medication for these conditions, were clinically examined. All participants were interviewed about respiratory symptoms, and furthermore height and weight, skin test reactivity, lung function, and airway responsiveness were measured...

  8. Exact Covariance Thresholding into Connected Components for Large-Scale Graphical Lasso.

    Science.gov (United States)

    Mazumder, Rahul; Hastie, Trevor

    2012-03-01

    We consider the sparse inverse covariance regularization problem or graphical lasso with regularization parameter λ. Suppose the sample covariance graph formed by thresholding the entries of the sample covariance matrix at λ is decomposed into connected components. We show that the vertex-partition induced by the connected components of the thresholded sample covariance graph (at λ) is exactly equal to that induced by the connected components of the estimated concentration graph, obtained by solving the graphical lasso problem for the same λ. This characterizes a very interesting property of a path of graphical lasso solutions. Furthermore, this simple rule, when used as a wrapper around existing algorithms for the graphical lasso, leads to enormous performance gains. For a range of values of λ, our proposal splits a large graphical lasso problem into smaller tractable problems, making it possible to solve an otherwise infeasible large-scale problem. We illustrate the graceful scalability of our proposal via synthetic and real-life microarray examples.
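
    A minimal sketch of the wrapper idea follows, assuming scikit-learn's graphical_lasso as the per-block solver and SciPy for the graph decomposition; it is illustrative, not the authors' code.

        import numpy as np
        from scipy.sparse import csr_matrix
        from scipy.sparse.csgraph import connected_components
        from sklearn.covariance import graphical_lasso

        def glasso_by_components(s, lam):
            """Threshold the sample covariance s at lam, split the variables
            into connected components, and solve the graphical lasso on each
            block separately; returns the assembled precision matrix."""
            p = s.shape[0]
            adj = (np.abs(s) > lam) & ~np.eye(p, dtype=bool)
            n_comp, labels = connected_components(csr_matrix(adj), directed=False)
            theta = np.zeros_like(s)
            for c in range(n_comp):
                idx = np.where(labels == c)[0]
                if len(idx) == 1:                  # isolated node: diagonal entry only
                    theta[idx[0], idx[0]] = 1.0 / s[idx[0], idx[0]]
                    continue
                block = s[np.ix_(idx, idx)]
                _, prec = graphical_lasso(block, alpha=lam)
                theta[np.ix_(idx, idx)] = prec
            return theta

    Per the result described above, this block-wise solution coincides with solving the full problem at the same λ, which is what makes the wrapper exact rather than approximate.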

  9. Number of core samples: Mean concentrations and confidence intervals

    International Nuclear Information System (INIS)

    Jensen, L.; Cromar, R.D.; Wilmarth, S.R.; Heasler, P.G.

    1995-01-01

    This document provides estimates of how well the mean concentration of analytes is known as a function of the number of core samples, composite samples, and replicate analyses. The estimates are based upon core composite data from nine recently sampled single-shell tanks. The results can be used when determining the number of core samples needed to ''characterize'' the waste from similar single-shell tanks. A standard way of expressing uncertainty in the estimate of a mean is with a 95% confidence interval (CI). The authors investigate how the width of a 95% CI on the mean concentration decreases as the number of observations increases. Specifically, the tables and figures show how the relative half-width (RHW) of a 95% CI decreases as the number of core samples increases. The RHW of a CI is a unit-less measure of uncertainty. The general conclusions are as follows: (1) the RHW decreases dramatically as the number of core samples is increased; the decrease is much smaller when the number of composited samples or the number of replicate analyses is increased; (2) if the mean concentration of an analyte needs to be estimated with a small RHW, then a large number of core samples is required. The estimated numbers of core samples given in the tables and figures were determined by specifying different sizes of the RHW. Four nominal sizes were examined: 10%, 25%, 50%, and 100% of the observed mean concentration. For a majority of analytes the number of core samples required to achieve an accuracy within 10% of the mean concentration is extremely large. In many cases, however, two or three core samples are sufficient to achieve a RHW of approximately 50 to 100%. Because many of the analytes in the data have small concentrations, this level of accuracy may be satisfactory for some applications
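
    Under the usual t-interval, the tabulated quantity can be sketched directly. The illustrative Python below computes the RHW of a 95% CI on the mean as a function of the number of core samples, assuming RHW is defined as the CI half-width divided by the mean:

        import numpy as np
        from scipy import stats

        def rel_half_width(mean, sd, n, conf=0.95):
            """Relative half-width of the conf-level CI on the mean:
            t-quantile * (sd / sqrt(n)) / mean."""
            t = stats.t.ppf(0.5 + conf / 2.0, df=n - 1)
            return t * sd / (np.sqrt(n) * mean)

        # e.g. with a 100% relative standard deviation (sd == mean):
        # n = 2 -> RHW ~ 8.98, n = 3 -> ~2.48, n = 10 -> ~0.72, n = 50 -> ~0.28

    The steep drop from two to three samples and the slow decay thereafter mirror the qualitative conclusions stated above.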

  10. Large variation in lipid content, ΣPCB and δ13C within individual Atlantic salmon (Salmo salar)

    International Nuclear Information System (INIS)

    Persson, Maria E.; Larsson, Per; Holmqvist, Niklas; Stenroth, Patrik

    2007-01-01

    Many studies that investigate pollutant levels, or use stable isotope ratios to define trophic level or animal origin, use different standard ways of sampling (dorsal, whole filet or whole body samples). This study shows that lipid content, ΣPCB and δ 13 C display large differences within muscle samples taken from a single Atlantic salmon. Lipid- and PCB-content was lowest in tail muscles, intermediate in anterior-dorsal muscles and highest in the stomach (abdominal) muscle area. Stable isotopes of carbon (δ 13 C) showed a lipid accumulation in the stomach muscle area and a depletion in tail muscles. We conclude that it is important to choose an appropriate sample location within an animal based on what processes are to be studied. Care should be taken when attributing persistent pollutant levels or stable isotope data to specific environmental processes before controlling for within-animal variation in these variables. - Lipid content, ΣPCB and δ 13 C vary to a large extent within Atlantic salmon; therefore, the sampling technique for individual fish is of utmost importance for the proper interpretation of data.

  11. Sample Return Robot

    Data.gov (United States)

    National Aeronautics and Space Administration — This Challenge requires demonstration of an autonomous robotic system to locate and collect a set of specific sample types from a large planetary analog area and...

  12. The Depression Anxiety Stress Scales (DASS): normative data and latent structure in a large non-clinical sample.

    Science.gov (United States)

    Crawford, John R; Henry, Julie D

    2003-06-01

    To provide UK normative data for the Depression Anxiety and Stress Scale (DASS) and test its convergent, discriminant and construct validity. Cross-sectional, correlational and confirmatory factor analysis (CFA). The DASS was administered to a non-clinical sample, broadly representative of the general adult UK population (N = 1,771) in terms of demographic variables. Competing models of the latent structure of the DASS were derived from theoretical and empirical sources and evaluated using confirmatory factor analysis. Correlational analysis was used to determine the influence of demographic variables on DASS scores. The convergent and discriminant validity of the measure was examined through correlating the measure with two other measures of depression and anxiety (the HADS and the sAD), and a measure of positive and negative affectivity (the PANAS). The best fitting model (CFI = .93) of the latent structure of the DASS consisted of three correlated factors corresponding to the depression, anxiety and stress scales with correlated error permitted between items comprising the DASS subscales. Demographic variables had only very modest influences on DASS scores. The reliability of the DASS was excellent, and the measure possessed adequate convergent and discriminant validity. Conclusions: The DASS is a reliable and valid measure of the constructs it was intended to assess. The utility of this measure for UK clinicians is enhanced by the provision of large sample normative data.

  13. Generalized sampling in Julia

    DEFF Research Database (Denmark)

    Jacobsen, Christian Robert Dahl; Nielsen, Morten; Rasmussen, Morten Grud

    2017-01-01

    Generalized sampling is a numerically stable framework for obtaining reconstructions of signals in different bases and frames from their samples. For example, one can use wavelet bases for reconstruction given frequency measurements. In this paper, we will introduce a carefully documented toolbox for performing generalized sampling in Julia. Julia is a new language for technical computing with focus on performance, which is ideally suited to handle the large size problems often encountered in generalized sampling. The toolbox provides specialized solutions for the setup of Fourier bases and wavelets. The performance of the toolbox is compared to existing implementations of generalized sampling in MATLAB.
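
    A minimal numerical sketch of the generalized sampling idea (plain NumPy, not the Julia toolbox itself; signal, basis, and sizes are invented): coefficients in a reconstruction basis are recovered from oversampled Fourier measurements by least squares.

    ```python
    # Recover coefficients of f in a polynomial reconstruction basis from
    # M >= N Fourier samples via a stable least-squares fit.
    import numpy as np

    x = np.linspace(0, 1, 2000, endpoint=False)    # fine quadrature grid
    f = np.exp(-3 * x) * np.sin(6 * x)             # signal to reconstruct

    N, M = 8, 24                                   # N basis functions, M samples
    phi = np.stack([x**n for n in range(N)], axis=1)       # reconstruction basis
    freqs = np.arange(-M // 2, M // 2)
    E = np.exp(-2j * np.pi * np.outer(freqs, x)) / len(x)  # Fourier sampling map

    b = E @ f                                      # measured Fourier samples of f
    A = E @ phi                                    # change-of-basis matrix
    c, *_ = np.linalg.lstsq(A, b, rcond=None)      # oversampled least squares

    err = np.max(np.abs(phi @ c.real - f))
    print(f"max reconstruction error: {err:.2e}")
    ```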

  14. Detecting differential protein expression in large-scale population proteomics

    Energy Technology Data Exchange (ETDEWEB)

    Ryu, Soyoung; Qian, Weijun; Camp, David G.; Smith, Richard D.; Tompkins, Ronald G.; Davis, Ronald W.; Xiao, Wenzhong

    2014-06-17

    Mass spectrometry-based high-throughput quantitative proteomics shows great potential in clinical biomarker studies, identifying and quantifying thousands of proteins in biological samples. However, methods are needed to appropriately handle issues/challenges unique to mass spectrometry data in order to detect as many biomarker proteins as possible. One issue is that different mass spectrometry experiments generate quite different total numbers of quantified peptides, which can result in more missing peptide abundances in an experiment with a smaller total number of quantified peptides. Another issue is that the quantification of peptides is sometimes absent, especially for less abundant peptides, and such missing values carry information about the peptide abundance. Here, we propose a Significance Analysis for Large-scale Proteomics Studies (SALPS) that handles missing peptide intensity values caused by the two mechanisms mentioned above. Our model has a robust performance in both simulated data and proteomics data from a large clinical study. Because variation in patient sample quality and instrument performance is unavoidable in clinical studies performed over the course of several years, we believe that our approach will be useful for analyzing large-scale clinical proteomics data.

  15. Health indicators: eliminating bias from convenience sampling estimators.

    Science.gov (United States)

    Hedt, Bethany L; Pagano, Marcello

    2011-02-28

    Public health practitioners are often called upon to make inference about a health indicator for a population at large when the sole available information are data gathered from a convenience sample, such as data gathered on visitors to a clinic. These data may be of the highest quality and quite extensive, but the biases inherent in a convenience sample preclude the legitimate use of powerful inferential tools that are usually associated with a random sample. In general, we know nothing about those who do not visit the clinic beyond the fact that they do not visit the clinic. An alternative is to take a random sample of the population. However, we show that this solution would be wasteful if it excluded the use of available information. Hence, we present a simple annealing methodology that combines a relatively small, and presumably far less expensive, random sample with the convenience sample. This allows us to not only take advantage of powerful inferential tools, but also provides more accurate information than that available from just using data from the random sample alone. Copyright © 2011 John Wiley & Sons, Ltd.
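
    The paper's annealing methodology is not spelled out in the abstract; as a hedged toy illustration of the general idea of pooling a large, biased convenience sample with a small random sample, the sketch below shrinks toward the convenience estimate only as far as its estimated squared bias allows (all numbers are invented, and this is not the authors' estimator).

    ```python
    # Toy combined estimator: weight the unbiased random-sample estimate
    # against the precise-but-biased convenience estimate by estimated MSE.
    import numpy as np

    rng = np.random.default_rng(1)
    p_true = 0.30                                   # population prevalence
    p_clinic = 0.40                                 # prevalence among clinic visitors

    conv = rng.binomial(1, p_clinic, size=2000)     # large convenience sample
    rand = rng.binomial(1, p_true, size=100)        # small random sample

    p_c, p_r = conv.mean(), rand.mean()
    var_r = p_r * (1 - p_r) / len(rand)             # variance of random estimate
    bias2 = max((p_r - p_c) ** 2 - var_r, 0.0)      # crude squared-bias estimate

    w = bias2 / (bias2 + var_r)                     # MSE-minimizing weight
    combined = w * p_r + (1 - w) * p_c
    print(f"convenience {p_c:.3f}, random {p_r:.3f}, combined {combined:.3f}")
    ```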

  16. 'Intelligent' approach to radioimmunoassay sample counting employing a microprocessor controlled sample counter

    International Nuclear Information System (INIS)

    Ekins, R.P.; Sufi, S.; Malan, P.G.

    1977-01-01

    The enormous impact on medical science in the last two decades of microanalytical techniques employing radioisotopic labels has, in turn, generated a large demand for automatic radioisotopic sample counters. Such instruments frequently comprise the most important item of capital equipment required in the use of radioimmunoassay and related techniques and often form a principal bottleneck in the flow of samples through a busy laboratory. It is therefore particularly imperative that such instruments should be used 'intelligently' and in an optimal fashion to avoid both the very large capital expenditure involved in the unnecessary proliferation of instruments and the time delays arising from their sub-optimal use. The majority of the current generation of radioactive sample counters nevertheless rely on primitive control mechanisms based on a simplistic statistical theory of radioactive sample counting which preclude their efficient and rational use. The fundamental principle upon which this approach is based is that it is useless to continue counting a radioactive sample for a time longer than that required to yield a significant increase in precision of the measurement. Thus, since substantial experimental errors occur during sample preparation, these errors should be assessed and must be related to the counting errors for that sample. It is the objective of this presentation to demonstrate that the combination of a realistic statistical assessment of radioactive sample measurement, together with the more sophisticated control mechanisms that modern microprocessor technology make possible, may often enable savings in counter usage of the order of 5-10 fold to be made. (orig.) [de]
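
    The stopping rule argued for here can be made concrete with Poisson counting statistics: the counting CV of N accumulated counts is 1/sqrt(N), so counting beyond the point where this is small relative to the preparation error buys almost nothing. A minimal sketch, assuming a 5% preparation CV (an illustrative value, not from the paper, and not the instrument's actual firmware logic):

    ```python
    # Stop counting once Poisson counting error is small next to the
    # assay's non-counting (sample preparation) error.
    import math

    def counts_needed(prep_cv, ratio=0.25):
        """Counts at which counting CV falls to `ratio` times the prep CV."""
        target_cv = ratio * prep_cv
        return math.ceil(1.0 / target_cv**2)

    def total_cv(prep_cv, n_counts):
        """Combined CV from preparation error plus Poisson counting error."""
        return math.sqrt(prep_cv**2 + 1.0 / n_counts)

    prep_cv = 0.05                     # assumed 5% preparation error
    n = counts_needed(prep_cv)
    print(f"{n} counts suffice; total CV = {total_cv(prep_cv, n):.4f} "
          f"vs preparation CV = {prep_cv:.4f}")
    ```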

  17. Statistical process control charts for attribute data involving very large sample sizes: a review of problems and solutions.

    Science.gov (United States)

    Mohammed, Mohammed A; Panesar, Jagdeep S; Laney, David B; Wilson, Richard

    2013-04-01

    The use of statistical process control (SPC) charts in healthcare is increasing. The primary purpose of SPC is to distinguish between common-cause variation which is attributable to the underlying process, and special-cause variation which is extrinsic to the underlying process. This is important because improvement under common-cause variation requires action on the process, whereas special-cause variation merits an investigation to first find the cause. Nonetheless, when dealing with attribute or count data (eg, number of emergency admissions) involving very large sample sizes, traditional SPC charts often produce tight control limits with most of the data points appearing outside the control limits. This can give a false impression of common and special-cause variation, and potentially misguide the user into taking the wrong actions. Given the growing availability of large datasets from routinely collected databases in healthcare, there is a need to present a review of this problem (which arises because traditional attribute charts only consider within-subgroup variation) and its solutions (which consider within and between-subgroup variation), which involve the use of the well-established measurements chart and the more recently developed attribute charts based on Laney's innovative approach. We close by making some suggestions for practice.
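
    Laney's adjustment mentioned above is well established: the conventional p-chart sigma is inflated by a between-subgroup factor sigma_z estimated from the moving range of the z-scores. A minimal sketch with invented monthly counts:

    ```python
    # Laney p'-chart: widen binomial limits by the between-subgroup sigma_z.
    import numpy as np

    x = np.array([520, 498, 560, 610, 543, 580, 495, 620])              # events
    n = np.array([9800, 9500, 10200, 11000, 9900, 10400, 9600, 11200])  # denominators

    p = x / n
    p_bar = x.sum() / n.sum()
    sigma_p = np.sqrt(p_bar * (1 - p_bar) / n)      # within-subgroup (binomial) sigma

    z = (p - p_bar) / sigma_p
    sigma_z = np.mean(np.abs(np.diff(z))) / 1.128   # moving-range estimate

    ucl = p_bar + 3 * sigma_p * sigma_z             # Laney p' control limits
    lcl = np.maximum(p_bar - 3 * sigma_p * sigma_z, 0)
    for i, (pi, lo, hi) in enumerate(zip(p, lcl, ucl)):
        flag = "special cause" if (pi > hi or pi < lo) else "in control"
        print(f"month {i+1}: p={pi:.4f} limits=({lo:.4f}, {hi:.4f}) {flag}")
    ```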

  18. A direct observation method for auditing large urban centers using stratified sampling, mobile GIS technology and virtual environments.

    Science.gov (United States)

    Lafontaine, Sean J V; Sawada, M; Kristjansson, Elizabeth

    2017-02-16

    With the expansion and growth of research on neighbourhood characteristics, there is an increased need for direct observational field audits. Herein, we introduce a novel direct observational audit method and systematic social observation instrument (SSOI) for efficiently assessing neighbourhood aesthetics over large urban areas. Our audit method uses spatial random sampling stratified by residential zoning and incorporates both mobile geographic information systems technology and virtual environments. The reliability of our method was tested in two ways: first, in 15 Ottawa neighbourhoods, we compared results at audited locations over two subsequent years, and second, we audited every residential block (167 blocks) in one neighbourhood and compared the distribution of SSOI aesthetics index scores with results from the randomly audited locations. Finally, we present interrater reliability and consistency results on all observed items. The observed neighbourhood average aesthetics index score estimated from four or five stratified random audit locations is sufficient to characterize the average neighbourhood aesthetics. The SSOI was internally consistent and demonstrated good to excellent interrater reliability. At the neighbourhood level, aesthetics is positively related to SES and physical activity and negatively correlated with BMI. The proposed approach to direct neighbourhood auditing performs sufficiently and has the advantage of financial and temporal efficiency when auditing a large city.

  19. Nutritional status and dental caries in a large sample of 4- and 5 ...

    African Journals Online (AJOL)

    Background. Evidence from studies involving small samples of children in Africa, India and South America suggests a higher dental caries rate in malnourished children. A comparison was done to evaluate wasting and stunting and their association with dental caries in four samples of South African children. Design.

  20. Characterization of Large Structural Genetic Mosaicism in Human Autosomes

    Science.gov (United States)

    Machiela, Mitchell J.; Zhou, Weiyin; Sampson, Joshua N.; Dean, Michael C.; Jacobs, Kevin B.; Black, Amanda; Brinton, Louise A.; Chang, I-Shou; Chen, Chu; Chen, Constance; Chen, Kexin; Cook, Linda S.; Crous Bou, Marta; De Vivo, Immaculata; Doherty, Jennifer; Friedenreich, Christine M.; Gaudet, Mia M.; Haiman, Christopher A.; Hankinson, Susan E.; Hartge, Patricia; Henderson, Brian E.; Hong, Yun-Chul; Hosgood, H. Dean; Hsiung, Chao A.; Hu, Wei; Hunter, David J.; Jessop, Lea; Kim, Hee Nam; Kim, Yeul Hong; Kim, Young Tae; Klein, Robert; Kraft, Peter; Lan, Qing; Lin, Dongxin; Liu, Jianjun; Le Marchand, Loic; Liang, Xiaolin; Lissowska, Jolanta; Lu, Lingeng; Magliocco, Anthony M.; Matsuo, Keitaro; Olson, Sara H.; Orlow, Irene; Park, Jae Yong; Pooler, Loreall; Prescott, Jennifer; Rastogi, Radhai; Risch, Harvey A.; Schumacher, Fredrick; Seow, Adeline; Setiawan, Veronica Wendy; Shen, Hongbing; Sheng, Xin; Shin, Min-Ho; Shu, Xiao-Ou; VanDen Berg, David; Wang, Jiu-Cun; Wentzensen, Nicolas; Wong, Maria Pik; Wu, Chen; Wu, Tangchun; Wu, Yi-Long; Xia, Lucy; Yang, Hannah P.; Yang, Pan-Chyr; Zheng, Wei; Zhou, Baosen; Abnet, Christian C.; Albanes, Demetrius; Aldrich, Melinda C.; Amos, Christopher; Amundadottir, Laufey T.; Berndt, Sonja I.; Blot, William J.; Bock, Cathryn H.; Bracci, Paige M.; Burdett, Laurie; Buring, Julie E.; Butler, Mary A.; Carreón, Tania; Chatterjee, Nilanjan; Chung, Charles C.; Cook, Michael B.; Cullen, Michael; Davis, Faith G.; Ding, Ti; Duell, Eric J.; Epstein, Caroline G.; Fan, Jin-Hu; Figueroa, Jonine D.; Fraumeni, Joseph F.; Freedman, Neal D.; Fuchs, Charles S.; Gao, Yu-Tang; Gapstur, Susan M.; Patiño-Garcia, Ana; Garcia-Closas, Montserrat; Gaziano, J. Michael; Giles, Graham G.; Gillanders, Elizabeth M.; Giovannucci, Edward L.; Goldin, Lynn; Goldstein, Alisa M.; Greene, Mark H.; Hallmans, Goran; Harris, Curtis C.; Henriksson, Roger; Holly, Elizabeth A.; Hoover, Robert N.; Hu, Nan; Hutchinson, Amy; Jenab, Mazda; Johansen, Christoffer; Khaw, Kay-Tee; Koh, Woon-Puay; Kolonel, Laurence N.; Kooperberg, Charles; Krogh, Vittorio; Kurtz, Robert C.; LaCroix, Andrea; Landgren, Annelie; Landi, Maria Teresa; Li, Donghui; Liao, Linda M.; Malats, Nuria; McGlynn, Katherine A.; McNeill, Lorna H.; McWilliams, Robert R.; Melin, Beatrice S.; Mirabello, Lisa; Peplonska, Beata; Peters, Ulrike; Petersen, Gloria M.; Prokunina-Olsson, Ludmila; Purdue, Mark; Qiao, You-Lin; Rabe, Kari G.; Rajaraman, Preetha; Real, Francisco X.; Riboli, Elio; Rodríguez-Santiago, Benjamín; Rothman, Nathaniel; Ruder, Avima M.; Savage, Sharon A.; Schwartz, Ann G.; Schwartz, Kendra L.; Sesso, Howard D.; Severi, Gianluca; Silverman, Debra T.; Spitz, Margaret R.; Stevens, Victoria L.; Stolzenberg-Solomon, Rachael; Stram, Daniel; Tang, Ze-Zhong; Taylor, Philip R.; Teras, Lauren R.; Tobias, Geoffrey S.; Viswanathan, Kala; Wacholder, Sholom; Wang, Zhaoming; Weinstein, Stephanie J.; Wheeler, William; White, Emily; Wiencke, John K.; Wolpin, Brian M.; Wu, Xifeng; Wunder, Jay S.; Yu, Kai; Zanetti, Krista A.; Zeleniuch-Jacquotte, Anne; Ziegler, Regina G.; de Andrade, Mariza; Barnes, Kathleen C.; Beaty, Terri H.; Bierut, Laura J.; Desch, Karl C.; Doheny, Kimberly F.; Feenstra, Bjarke; Ginsburg, David; Heit, John A.; Kang, Jae H.; Laurie, Cecilia A.; Li, Jun Z.; Lowe, William L.; Marazita, Mary L.; Melbye, Mads; Mirel, Daniel B.; Murray, Jeffrey C.; Nelson, Sarah C.; Pasquale, Louis R.; Rice, Kenneth; Wiggs, Janey L.; Wise, Anastasia; Tucker, Margaret; Pérez-Jurado, Luis A.; Laurie, Cathy C.; Caporaso, Neil E.; Yeager, Meredith; Chanock, 
Stephen J.

    2015-01-01

    Analyses of genome-wide association study (GWAS) data have revealed that detectable genetic mosaicism involving large (>2 Mb) structural autosomal alterations occurs in a fraction of individuals. We present results for a set of 24,849 genotyped individuals (total GWAS set II [TGSII]) in whom 341 large autosomal abnormalities were observed in 168 (0.68%) individuals. Merging data from the new TGSII set with data from two prior reports (the Gene-Environment Association Studies and the total GWAS set I) generated a large dataset of 127,179 individuals; we then conducted a meta-analysis to investigate the patterns of detectable autosomal mosaicism (n = 1,315 events in 925 [0.73%] individuals). Restricting to events >2 Mb in size, we observed an increase in event frequency as event size decreased. The combined results underscore that the rate of detectable mosaicism increases with age (p value = 5.5 × 10^-31) and is higher in men (p value = 0.002) but lower in participants of African ancestry (p value = 0.003). In a subset of 47 individuals from whom serial samples were collected up to 6 years apart, complex changes were noted over time and showed an overall increase in the proportion of mosaic cells as age increased. Our large combined sample allowed for a unique ability to characterize detectable genetic mosaicism involving large structural events and strengthens the emerging evidence of non-random erosion of the genome in the aging population. PMID:25748358

  1. Energy loss and straggling of MeV ions through biological samples

    International Nuclear Information System (INIS)

    Ma Lei; Wang Yugang; Xue Jianming; Chen Qizhong; Zhang Weiming; Zhang Yanwen

    2007-01-01

    Energy loss and energy straggling of energetic ions through natural dehydrated biological samples were investigated using a transmission technique. Biological samples (onion membrane, egg coat, and tomato coat) with different mass thickness were studied, together with Mylar for comparison. The energy loss and energy straggling of MeV H and He ions after penetrating the biological and Mylar samples were measured. The experimental results show that the average energy losses of MeV ions through the biological samples are consistent with SRIM predictions; however, large deviations in energy straggling are observed between the measured results and the SRIM predictions. Taking into account inhomogeneity in mass density and structure of the biological sample, an energy straggling formula is suggested, and the experimental energy straggling values are well predicted by the proposed formula.

  2. Application-Specific Graph Sampling for Frequent Subgraph Mining and Community Detection

    Energy Technology Data Exchange (ETDEWEB)

    Purohit, Sumit; Choudhury, Sutanay; Holder, Lawrence B.

    2017-12-11

    Graph mining is an important data analysis methodology, but struggles as the input graph size increases. The scalability and usability challenges posed by such large graphs make it imperative to sample the input graph and reduce its size. The critical challenge in sampling is to identify the appropriate algorithm to ensure the resulting analysis does not suffer heavily from the data reduction. Predicting the expected performance degradation for a given graph and sampling algorithm is also useful. In this paper, we present different sampling approaches for graph mining applications such as Frequent Subgraph Mining (FSM) and Community Detection (CD). We explore graph metrics such as PageRank, Triangles, and Diversity to sample a graph and conclude that for heterogeneous graphs Triangles and Diversity perform better than degree based metrics. We also present two new sampling variations for targeted graph mining applications. We present empirical results to show that knowledge of the target application, along with input graph properties, can be used to select the best sampling algorithm. We also conclude that performance degradation is an abrupt, rather than gradual, phenomenon as the sample size decreases, and the empirical results show that the degradation follows a logistic function.
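
    As a hedged sketch of one triangle-based sampling variant of the kind the abstract alludes to (our construction, not the paper's code), the snippet below keeps nodes with probability proportional to their triangle participation and takes the induced subgraph:

    ```python
    # Triangle-weighted node sampling on a synthetic heterogeneous graph.
    import networkx as nx
    import numpy as np

    rng = np.random.default_rng(7)
    G = nx.powerlaw_cluster_graph(n=2000, m=4, p=0.3, seed=7)

    tri = nx.triangles(G)                       # triangles through each node
    nodes = np.array(list(tri))
    weights = np.array([tri[v] + 1 for v in nodes], dtype=float)  # +1 avoids zeros
    weights /= weights.sum()

    target = int(0.2 * G.number_of_nodes())     # 20% node sample
    sample = rng.choice(nodes, size=target, replace=False, p=weights)
    H = G.subgraph(sample)
    print(f"sampled {H.number_of_nodes()} nodes, {H.number_of_edges()} edges "
          f"(original {G.number_of_edges()})")
    ```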

  3. Evaluation of Respondent-Driven Sampling

    Science.gov (United States)

    McCreesh, Nicky; Frost, Simon; Seeley, Janet; Katongole, Joseph; Tarsh, Matilda Ndagire; Ndunguse, Richard; Jichi, Fatima; Lunel, Natasha L; Maher, Dermot; Johnston, Lisa G; Sonnenberg, Pam; Copas, Andrew J; Hayes, Richard J; White, Richard G

    2012-01-01

    Background Respondent-driven sampling is a novel variant of link-tracing sampling for estimating the characteristics of hard-to-reach groups, such as HIV prevalence in sex-workers. Despite its use by leading health organizations, the performance of this method in realistic situations is still largely unknown. We evaluated respondent-driven sampling by comparing estimates from a respondent-driven sampling survey with total-population data. Methods Total-population data on age, tribe, religion, socioeconomic status, sexual activity and HIV status were available on a population of 2402 male household-heads from an open cohort in rural Uganda. A respondent-driven sampling (RDS) survey was carried out in this population, employing current methods of sampling (RDS sample) and statistical inference (RDS estimates). Analyses were carried out for the full RDS sample and then repeated for the first 250 recruits (small sample). Results We recruited 927 household-heads. Full and small RDS samples were largely representative of the total population, but both samples under-represented men who were younger, of higher socioeconomic status, and with unknown sexual activity and HIV status. Respondent-driven-sampling statistical-inference methods failed to reduce these biases. Only 31%-37% (depending on method and sample size) of RDS estimates were closer to the true population proportions than the RDS sample proportions. Only 50%-74% of respondent-driven-sampling bootstrap 95% confidence intervals included the population proportion. Conclusions Respondent-driven sampling produced a generally representative sample of this well-connected non-hidden population. However, current respondent-driven-sampling inference methods failed to reduce bias when it occurred. Whether the data required to remove bias and measure precision can be collected in a respondent-driven sampling survey is unresolved. Respondent-driven sampling should be regarded as a (potentially superior) form of convenience sampling.

  4. Evaluation of respondent-driven sampling.

    Science.gov (United States)

    McCreesh, Nicky; Frost, Simon D W; Seeley, Janet; Katongole, Joseph; Tarsh, Matilda N; Ndunguse, Richard; Jichi, Fatima; Lunel, Natasha L; Maher, Dermot; Johnston, Lisa G; Sonnenberg, Pam; Copas, Andrew J; Hayes, Richard J; White, Richard G

    2012-01-01

    Respondent-driven sampling is a novel variant of link-tracing sampling for estimating the characteristics of hard-to-reach groups, such as HIV prevalence in sex workers. Despite its use by leading health organizations, the performance of this method in realistic situations is still largely unknown. We evaluated respondent-driven sampling by comparing estimates from a respondent-driven sampling survey with total population data. Total population data on age, tribe, religion, socioeconomic status, sexual activity, and HIV status were available on a population of 2402 male household heads from an open cohort in rural Uganda. A respondent-driven sampling (RDS) survey was carried out in this population, using current methods of sampling (RDS sample) and statistical inference (RDS estimates). Analyses were carried out for the full RDS sample and then repeated for the first 250 recruits (small sample). We recruited 927 household heads. Full and small RDS samples were largely representative of the total population, but both samples underrepresented men who were younger, of higher socioeconomic status, and with unknown sexual activity and HIV status. Respondent-driven sampling statistical inference methods failed to reduce these biases. Only 31%-37% (depending on method and sample size) of RDS estimates were closer to the true population proportions than the RDS sample proportions. Only 50%-74% of respondent-driven sampling bootstrap 95% confidence intervals included the population proportion. Respondent-driven sampling produced a generally representative sample of this well-connected nonhidden population. However, current respondent-driven sampling inference methods failed to reduce bias when it occurred. Whether the data required to remove bias and measure precision can be collected in a respondent-driven sampling survey is unresolved. Respondent-driven sampling should be regarded as a (potentially superior) form of convenience sampling method, and caution is required.
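
    The inverse-degree (RDS-II, Volz-Heckathorn) estimator is one standard example of the class of inference methods evaluated in these studies; the sketch below, on invented data, shows how it adjusts a raw sample proportion when recruitment reaches people roughly in proportion to their network degree (the papers' exact estimators and data are not reproduced).

    ```python
    # RDS-II inverse-degree estimator on a simulated degree-biased sample.
    import numpy as np

    rng = np.random.default_rng(11)
    N = 20000
    degree = rng.integers(1, 40, size=N)            # network sizes in the population
    hiv = rng.binomial(1, np.clip(0.05 + 0.004 * degree, 0, 1))  # degree-linked outcome

    # RDS-style recruitment reaches people roughly in proportion to degree
    pick = rng.choice(N, size=927, replace=False, p=degree / degree.sum())
    d, y = degree[pick], hiv[pick]

    raw = y.mean()                                  # biased toward high-degree men
    rds2 = np.sum(y / d) / np.sum(1.0 / d)          # RDS-II inverse-degree estimate
    print(f"true {hiv.mean():.3f}, raw sample {raw:.3f}, RDS-II {rds2:.3f}")
    ```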

  5. Genetic Influences on Pulmonary Function: A Large Sample Twin Study

    DEFF Research Database (Denmark)

    Ingebrigtsen, Truls S; Thomsen, Simon F; van der Sluis, Sophie

    2011-01-01

    Heritability of forced expiratory volume in one second (FEV(1)), forced vital capacity (FVC), and peak expiratory flow (PEF) has not been previously addressed in large twin studies. We evaluated the genetic contribution to individual differences observed in FEV(1), FVC, and PEF using data from the largest population-based twin study on spirometry. Specially trained lay interviewers with previous experience in spirometric measurements tested 4,314 Danish twins (individuals), 46-68 years of age, in their homes using a hand-held spirometer, and their flow-volume curves were evaluated. Modern variance...

  6. The Work Role Functioning Questionnaire v2.0 Showed Consistent Factor Structure Across Six Working Samples

    DEFF Research Database (Denmark)

    Abma, Femke I.; Bültmann, Ute; Amick, Benjamin C.

    2017-01-01

    Objective: The Work Role Functioning Questionnaire v2.0 (WRFQ) is an outcome measure linking a person's health to the ability to meet work demands in the twenty-first century. We aimed to examine the construct validity of the WRFQ in a heterogeneous set of working samples in the Netherlands...

  7. The Work Role Functioning Questionnaire v2.0 Showed Consistent Factor Structure Across Six Working Samples

    NARCIS (Netherlands)

    Abma, F.I.; Bultmann, U.; Amick III, B.C.; Arends, I.; Dorland, P.A.; Flach, P.A.; Klink, J.J.L van der; Ven H.A., van de; Bjørner, J.B.

    2017-01-01

    Objective The Work Role Functioning Questionnaire v2.0 (WRFQ) is an outcome measure linking a person's health to the ability to meet work demands in the twenty-first century. We aimed to examine the construct validity of the WRFQ in a heterogeneous set of working samples in the Netherlands with...

  8. Large scale gas chromatographic demonstration system for hydrogen isotope separation

    International Nuclear Information System (INIS)

    Cheh, C.H.

    1988-01-01

    A large scale demonstration system was designed for a throughput of 3 mol/day of an equimolar mixture of H, D, and T. The demonstration system was assembled and an experimental program carried out. This project was funded by Kernforschungszentrum Karlsruhe, Canadian Fusion Fuel Technology Projects and Ontario Hydro Research Division. Several major design innovations were successfully implemented in the demonstration system and are discussed in detail. Many experiments were carried out in the demonstration system to study its performance in separating hydrogen isotopes at high throughput. Various temperature programming schemes were tested, heart-cutting operation was evaluated, and very large (up to 138 NL/injection) samples were separated in the system. The results of the experiments showed that the specially designed column performed well and that good separation could be achieved even when a 138 NL sample was injected.

  9. A Bayesian Justification for Random Sampling in Sample Survey

    Directory of Open Access Journals (Sweden)

    Glen Meeden

    2012-07-01

    In the usual Bayesian approach to survey sampling, the sampling design plays a minimal role, at best. Although a close relationship between exchangeable prior distributions and simple random sampling has been noted, how to formally integrate simple random sampling into the Bayesian paradigm is not clear. Recently it has been argued that the sampling design can be thought of as part of a Bayesian's prior distribution. We show here that under this scenario simple random sampling can be given a Bayesian justification in survey sampling.

  10. The presentation and preliminary validation of KIWEST using a large sample of Norwegian university staff.

    Science.gov (United States)

    Innstrand, Siw Tone; Christensen, Marit; Undebakke, Kirsti Godal; Svarva, Kyrre

    2015-12-01

    The aim of the present paper is to present and validate the Knowledge-Intensive Work Environment Survey Target (KIWEST), a questionnaire developed for assessing the psychosocial factors among people in knowledge-intensive work environments. The construct validity and reliability of the measurement model were tested on a representative sample of 3066 academic and administrative staff working at one of the largest universities in Norway. Confirmatory factor analysis provided initial support for the convergent validity and internal consistency of the 30-construct KIWEST measurement model. However, discriminant validity tests indicated that some of the constructs might overlap to some degree. Overall, the KIWEST measure showed promising psychometric properties as a psychosocial work environment measure. © 2015 the Nordic Societies of Public Health.

  11. Large Country-Lot Quality Assurance Sampling : A New Method for Rapid Monitoring and Evaluation of Health, Nutrition and Population Programs at Sub-National Levels

    OpenAIRE

    Hedt, Bethany L.; Olives, Casey; Pagano, Marcello; Valadez, Joseph J.

    2008-01-01

    Sampling theory facilitates development of economical, effective and rapid measurement of a population. While national policy makers value survey results measuring indicators representative of a large area (a country, state or province), measurement in smaller areas produces information useful for managers at the local level. It is often not possible to disaggregate a national survey to obt...

  12. HAMMER: Reweighting tool for simulated data samples

    CERN Document Server

    Duell, Stephan; Ligeti, Zoltan; Papucci, Michele; Robinson, Dean

    2016-01-01

    Modern flavour physics experiments, such as Belle II or LHCb, require large samples of generated Monte Carlo events. Monte Carlo events often are processed in a sophisticated chain that includes a simulation of the detector response. The generation and reconstruction of large samples is resource-intensive and in principle would need to be repeated if, e.g., parameters responsible for the underlying models change due to new measurements or new insights. To avoid having to regenerate large samples, we work on a tool, the Helicity Amplitude Module for Matrix Element Reweighting (HAMMER), which allows one to easily reweight existing events in the context of semileptonic b → q ℓ ν̄_ℓ analyses to new model parameters or new physics scenarios.
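
    HAMMER's actual interface is not shown in the abstract; as a hedged sketch of matrix-element reweighting in general, each stored event carries its kinematics, and a new model is imposed by weighting with the ratio of differential rates (the toy rates and parameter below are invented):

    ```python
    # Generic per-event reweighting: w_i = rate_new(x_i) / rate_old(x_i).
    import numpy as np

    def rate_old(q2):
        return 1.0 + 0.5 * q2                 # toy differential rate, old parameters

    def rate_new(q2, alpha):
        return 1.0 + alpha * q2               # toy rate under a new parameter alpha

    rng = np.random.default_rng(3)
    q2 = rng.uniform(0.0, 10.0, size=100_000)  # stored per-event kinematics

    w = rate_new(q2, alpha=0.8) / rate_old(q2)  # per-event reweighting factor
    w *= len(w) / w.sum()                       # keep the total yield fixed

    # Any observable can now be histogrammed under the new model without
    # regenerating or re-simulating the sample:
    hist, edges = np.histogram(q2, bins=20, weights=w)
    print(hist[:5])
    ```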

  13. Solid phase extraction of large volume of water and beverage samples to improve detection limits for GC-MS analysis of bisphenol A and four other bisphenols.

    Science.gov (United States)

    Cao, Xu-Liang; Popovic, Svetlana

    2018-01-01

    Solid phase extraction (SPE) of large volumes of water and beverage products was investigated for the GC-MS analysis of bisphenol A (BPA), bisphenol AF (BPAF), bisphenol F (BPF), bisphenol E (BPE), and bisphenol B (BPB). While absolute recoveries of the method were improved for water and some beverage products (e.g. diet cola, iced tea), breakthrough may also have occurred during SPE of 200 mL of other beverages (e.g. BPF in cola). Improvements in method detection limits were observed with the analysis of large sample volumes for all bisphenols at ppt (pg/g) to sub-ppt levels. This improvement was found to be proportional to sample volume for water and beverage products, with fewer interferences and lower noise levels around the analytes. Matrix effects and interferences were observed during SPE of larger volumes (100 and 200 mL) of the beverage products, and affected the accurate analysis of BPF. This improved method was used to analyse bisphenols in various beverage samples, and only BPA was detected, with levels ranging from 0.022 to 0.030 ng/g for products in PET bottles, and 0.085 to 0.32 ng/g for products in cans.

  14. Exposure to teasing on popular television shows and associations with adolescent body satisfaction.

    Science.gov (United States)

    Eisenberg, Marla E; Ward, Ellen; Linde, Jennifer A; Gollust, Sarah E; Neumark-Sztainer, Dianne

    2017-12-01

    This study uses a novel mixed methods design to examine the relationship between incidents of teasing in popular television shows and body satisfaction of adolescent viewers. Survey data were used to identify 25 favorite television shows in a large population-based sample of Minnesota adolescents (N = 2793, age = 14.4 years). Data from content analysis of teasing incidents in popular shows were linked to adolescent survey data. Linear regression models examined associations between exposure to on-screen teasing in adolescents' own favorite shows and their body satisfaction. Effect modification by adolescent weight status was tested using interaction terms. Teasing on TV was common, with 3.3 incidents per episode; over one-quarter of teasing was weight/shape-related. Exposure to weight/shape-related teasing (β=-0.43, p=0.008) and teasing with overweight targets (β=-0.03, p=0.02) was inversely associated with girls' body satisfaction; no associations were found for boys. Findings were similar regardless of the adolescent viewer's weight status. Families, health care providers, media literacy programs and the entertainment industry are encouraged to consider the negative effects exposure to weight stigmatization can have on adolescent girls. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Sampling methods and non-destructive examination techniques for large radioactive waste packages

    International Nuclear Information System (INIS)

    Green, T.H.; Smith, D.L.; Burgoyne, K.E.; Maxwell, D.J.; Norris, G.H.; Billington, D.M.; Pipe, R.G.; Smith, J.E.; Inman, C.M.

    1992-01-01

    Progress is reported on work undertaken to evaluate quality checking methods for radioactive wastes. A sampling rig was designed, fabricated and used to develop techniques for the destructive sampling of cemented simulant waste using remotely operated equipment. An engineered system for the containment of cooling water was designed and manufactured and successfully demonstrated with the drum and coring equipment mounted in both vertical and horizontal orientations. The preferred in-cell orientation was found to be with the drum and coring machinery mounted in a horizontal position. Small powdered samples can be taken from cemented homogeneous waste cores using a hollow drill/vacuum section technique, with the preferred subsampling technique being to discard the outer 10 mm layer to obtain a representative sample of the cement core. Cement blends can be dissolved using fusion techniques and the resulting solutions are stable to gelling for periods in excess of one year. Although hydrochloric acid and nitric acid are promising solvents for dissolution of cement blends, the resultant solutions tend to form silicic acid gels. An estimate of the beta-emitter content of cemented waste packages can be obtained by a combination of non-destructive and destructive techniques. The errors will probably be in excess of ±60% at the 95% confidence level. Real-time X-ray video-imaging techniques have been used to analyse drums of uncompressed, hand-compressed, in-drum compacted and high-force compacted (i.e. supercompacted) simulant waste. The results have confirmed the applicability of this technique for NDT of low-level waste. 8 refs., 12 figs., 3 tabs

  16. Large-sample neutron activation analysis in mass balance and nutritional studies

    NARCIS (Netherlands)

    van de Wiel, A.; Blaauw, Menno

    2018-01-01

    Low concentrations of elements in food can be measured with various techniques, mostly in small samples (mg). These techniques provide only reliable data when the element is distributed homogeneously in the material to be analysed either naturally or after a homogenisation procedure. When this is

  17. Evaluating hypotheses in geolocation on a very large sample of Twitter

    DEFF Research Database (Denmark)

    Salehi, Bahar; Søgaard, Anders

    2017-01-01

    Recent work in geolocation has made several hypotheses about what linguistic markers are relevant to detect where people write from. In this paper, we examine six hypotheses against a corpus consisting of all geo-tagged tweets from the US, or whose geo-tags could be inferred, in a 19% sample of Twitter...

  18. Reinforced dynamics for enhanced sampling in large atomic and molecular systems

    Science.gov (United States)

    Zhang, Linfeng; Wang, Han; E, Weinan

    2018-03-01

    A new approach for efficiently exploring the configuration space and computing the free energy of large atomic and molecular systems is proposed, motivated by an analogy with reinforcement learning. There are two major components in this new approach. Like metadynamics, it allows for an efficient exploration of the configuration space by adding an adaptively computed biasing potential to the original dynamics. Like deep reinforcement learning, this biasing potential is trained on the fly using deep neural networks, with data collected judiciously from the exploration and an uncertainty indicator from the neural network model playing the role of the reward function. Parameterization using neural networks makes it feasible to handle cases with a large set of collective variables. This has the potential advantage that selecting precisely the right set of collective variables has now become less critical for capturing the structural transformations of the system. The method is illustrated by studying the full-atom explicit solvent models of alanine dipeptide and tripeptide, as well as the system of a polyalanine-10 molecule with 20 collective variables.
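
    A faithful implementation needs neural networks and molecular dynamics; the sketch below is a deliberately crude 1-D caricature (our toy, not the authors' code) in which an ensemble of polynomial fits to the running free-energy estimate stands in for the deep network, and the ensemble spread gates the biasing force in the spirit of the uncertainty indicator described:

    ```python
    # 1-D caricature of uncertainty-gated adaptive biasing on a double well.
    import numpy as np

    rng = np.random.default_rng(0)
    dV = lambda x: 4.0 * x * (x * x - 1.0)      # gradient of V(x) = (x^2 - 1)^2

    def boot(xs, ys, rng):
        """Bootstrap-resample the free-energy training points."""
        idx = rng.integers(0, len(xs), len(xs))
        return xs[idx], ys[idx]

    beta, dt, x = 4.0, 2e-3, -1.0
    visits, models = [], []

    for step in range(1, 150_001):
        f_bias = 0.0
        if models:
            # dA/dx from each ensemble member; bias cancels the mean force
            grads = np.array([np.polyval(np.polyder(c), x) for c in models])
            if grads.std() < 2.0:               # uncertainty gate ("reward" signal)
                f_bias = grads.mean()
        x += (-dV(x) + f_bias) * dt + np.sqrt(2.0 * dt / beta) * rng.normal()
        visits.append(x)
        if step % 25_000 == 0:                  # periodically refit the bias ensemble
            h, edges = np.histogram(visits, bins=40, range=(-2, 2), density=True)
            mid = 0.5 * (edges[1:] + edges[:-1])
            ok = h > 0
            A = -np.log(h[ok]) / beta           # free-energy estimate (up to a constant)
            models = [np.polyfit(*boot(mid[ok], A, rng), 6) for _ in range(8)]

    tail = np.array(visits[-50_000:])
    print(f"fraction of recent time in the right well: {(tail > 0).mean():.2f}")
    ```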

  19. Fluid sample collection and distribution system. [qualitative analysis of aqueous samples from several points

    Science.gov (United States)

    Brooks, R. L. (Inventor)

    1979-01-01

    A multipoint fluid sample collection and distribution system is provided wherein the sample inputs are made through one or more of a number of sampling valves to a progressive cavity pump which is not susceptible to damage by large unfiltered particles. The pump output is through a filter unit that can provide a filtered multipoint sample. An unfiltered multipoint sample is also provided. An effluent sample can be taken and applied to a second progressive cavity pump for pumping to a filter unit that can provide one or more filtered effluent samples. The second pump can also provide an unfiltered effluent sample. Means are provided to periodically back flush each filter unit without shutting off the whole system.

  20. Predictive Value of Callous-Unemotional Traits in a Large Community Sample

    Science.gov (United States)

    Moran, Paul; Rowe, Richard; Flach, Clare; Briskman, Jacqueline; Ford, Tamsin; Maughan, Barbara; Scott, Stephen; Goodman, Robert

    2009-01-01

    Objective: Callous-unemotional (CU) traits in children and adolescents are increasingly recognized as a distinctive dimension of prognostic importance in clinical samples. Nevertheless, comparatively little is known about the longitudinal effects of these personality traits on the mental health of young people from the general population. Using a…

  1. Comparative Analysis of Clinical Samples Showing Weak Serum Reaction on AutoVue System Causing ABO Blood Typing Discrepancies

    OpenAIRE

    Jo, Su Yeon; Lee, Ju Mi; Kim, Hye Lim; Sin, Kyeong Hwa; Lee, Hyeon Ji; Chang, Chulhun Ludgerus; Kim, Hyung-Hoi

    2016-01-01

    Background ABO blood typing in pre-transfusion testing is a major component of the high workload in blood banks that therefore requires automation. We often experienced discrepant results from an automated system, especially weak serum reactions. We evaluated the discrepant results by the reference manual method to confirm ABO blood typing. Methods In total, 13,113 blood samples were tested with the AutoVue system; all samples were run in parallel with the reference manual method according to...

  2. Planning under uncertainty solving large-scale stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G. [Stanford Univ., CA (United States). Dept. of Operations Research]|[Technische Univ., Vienna (Austria). Inst. fuer Energiewirtschaft

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed to be intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results of large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
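
    A hedged, minimal relative of the approach described: replace the expectation in a two-stage stochastic LP by Monte Carlo scenarios (sample average approximation) and solve the deterministic equivalent with SciPy. The paper's decomposition and importance-sampling machinery is not reproduced, and the newsvendor-style problem below is invented:

    ```python
    # Two-stage stochastic LP via sample average approximation (SAA).
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(42)
    S, cost, price = 500, 1.0, 2.5
    demand = rng.lognormal(mean=3.0, sigma=0.4, size=S)   # sampled scenarios

    # Variables: z = [x, y_1, ..., y_S]; y_s = units sold in scenario s.
    c = np.concatenate([[cost], -price * np.ones(S) / S])  # min cost - E[revenue]
    A_ub = np.zeros((S, S + 1))
    A_ub[:, 0] = -1.0                                      # y_s <= x ...
    A_ub[np.arange(S), np.arange(1, S + 1)] = 1.0          # ... for each scenario
    b_ub = np.zeros(S)
    bounds = [(0, None)] + [(0, d) for d in demand]        # y_s <= d_s

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    print(f"SAA-optimal first-stage decision (order quantity): {res.x[0]:.1f}")
    ```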

  3. Treatability studies on different refinery wastewater samples using high-throughput microbial electrolysis cells (MECs)

    KAUST Repository

    Ren, Lijiao; Siegert, Michael; Ivanov, Ivan; Pisciotta, John M.; Logan, Bruce E.

    2013-01-01

    High-throughput microbial electrolysis cells (MECs) were used to perform treatability studies on many different refinery wastewater samples all having appreciably different characteristics, which resulted in large differences in current generation. A de-oiled refinery wastewater sample from one site (DOW1) produced the best results, with 2.1±0.2A/m2 (maximum current density), 79% chemical oxygen demand removal, and 82% headspace biological oxygen demand removal. These results were similar to those obtained using domestic wastewater. Two other de-oiled refinery wastewater samples also showed good performance, with a de-oiled oily sewer sample producing less current. A stabilization lagoon sample and a stripped sour wastewater sample failed to produce appreciable current. Electricity production, organics removal, and startup time were improved when the anode was first acclimated to domestic wastewater. These results show mini-MECs are an effective method for evaluating treatability of different wastewaters. © 2013 Elsevier Ltd.

  4. Treatability studies on different refinery wastewater samples using high-throughput microbial electrolysis cells (MECs)

    KAUST Repository

    Ren, Lijiao

    2013-05-01

    High-throughput microbial electrolysis cells (MECs) were used to perform treatability studies on many different refinery wastewater samples all having appreciably different characteristics, which resulted in large differences in current generation. A de-oiled refinery wastewater sample from one site (DOW1) produced the best results, with 2.1±0.2A/m2 (maximum current density), 79% chemical oxygen demand removal, and 82% headspace biological oxygen demand removal. These results were similar to those obtained using domestic wastewater. Two other de-oiled refinery wastewater samples also showed good performance, with a de-oiled oily sewer sample producing less current. A stabilization lagoon sample and a stripped sour wastewater sample failed to produce appreciable current. Electricity production, organics removal, and startup time were improved when the anode was first acclimated to domestic wastewater. These results show mini-MECs are an effective method for evaluating treatability of different wastewaters. © 2013 Elsevier Ltd.

  5. Failure analysis of burst tested fuel tube samples

    International Nuclear Information System (INIS)

    Padmaprabu, C.; Ramana Rao, S.V.; Srivatsava, R.K.

    2005-01-01

    The Total Circumferential Elongation (TCE) is an important parameter for evaluating the ductility of Zircaloy-4 fuel tubes for PHWR reactors. The TCE values of the fuel tubes were obtained using the burst testing technique. In some lots there is a large variation in the TCE values. To investigate the reasons for such a large variation in the TCE, samples were selected at appropriate intervals and sectioned at the fractured portion. The surface morphology of the fractured surfaces was examined under a Scanning Electron Microscope (SEM) equipped with an Energy Dispersive Spectrometer (EDS). The morphologies show segregation of elements at specific locations. Energy dispersive spectra were obtained from these segregated particles. According to the magnitude of the TCE value, the samples were classified into low, intermediate and high ductility. Low ductility samples were found to contain a large amount of segregation along the thickness direction of the tube. This forms a brittle region and a path for easy crack growth along the thickness direction. In the intermediate samples the segregation occurred in fewer locations than in the low ductility samples and was also confined to the circumferential direction of the outside surface of the tube. Due to this, the probability of crack formation at the surface of the tube could be high, but crack growth would be slower in the ductile matrix along the thickness direction, resulting in an enhanced TCE value compared to the low ductility samples. In the high ductility samples, the segregations were very scarce and found to be isolated and embedded in the ductile matrix. The mode of failure in these samples was found to be purely ductile. Cracks were found to originate solely from the micro voids in the material. As the probability of crack formation and its propagation is low, very high TCE values were observed in these samples. Microstructural observations of fractured surfaces and EDAX analysis was able to identify the

  6. Methods of pre-concentration of radionuclides from large volume samples

    International Nuclear Information System (INIS)

    Olahova, K.; Matel, L.; Rosskopfova, O.

    2006-01-01

    The development of radioanalytical methods for low level radionuclides in environmental samples is presented. In particular, emphasis is placed on the introduction of extraction chromatography as a tool for improving the quality of results as well as reducing the analysis time. However, the advantageous application of extraction chromatography often depends on the effective use of suitable preconcentration techniques, such as co-precipitation, to reduce the amount of matrix components which accompany the analytes of interest. On-going investigations in this field relevant to the determination of environmental levels of actinides and 90 Sr are discussed. (authors)

  7. Evaluation of Sampling Methods for Bacillus Spore ...

    Science.gov (United States)

    Following a wide area release of biological materials, mapping the extent of contamination is essential for orderly response and decontamination operations. HVAC filters process large volumes of air and therefore collect highly representative particulate samples in buildings. HVAC filter extraction may have great utility in rapidly estimating the extent of building contamination following a large-scale incident. However, until now, no studies have been conducted comparing the two most appropriate sampling approaches for HVAC filter materials: direct extraction and vacuum-based sampling.

  8. Biases in the OSSOS Detection of Large Semimajor Axis Trans-Neptunian Objects

    Science.gov (United States)

    Gladman, Brett; Shankman, Cory; OSSOS Collaboration

    2017-10-01

    The accumulating but small set of large semimajor axis trans-Neptunian objects (TNOs) shows an apparent clustering in the orientations of their orbits. This clustering must either be representative of the intrinsic distribution of these TNOs, or else have arisen as a result of observation biases and/or statistically expected variations for such a small set of detected objects. The clustered TNOs were detected across different and independent surveys, which has led to claims that the detections are therefore free of observational bias. This apparent clustering has led to the so-called “Planet 9” hypothesis that a super-Earth currently resides in the distant solar system and causes this clustering. The Outer Solar System Origins Survey (OSSOS) is a large program that ran on the Canada-France-Hawaii Telescope from 2013 to 2017, discovering more than 800 new TNOs. One of the primary design goals of OSSOS was the careful determination of observational biases that would manifest within the detected sample. We demonstrate the striking and non-intuitive biases that exist for the detection of TNOs with large semimajor axes. The eight large semimajor axis OSSOS detections are an independent data set, of comparable size to the conglomerate samples used in previous studies. We conclude that the orbital distribution of the OSSOS sample is consistent with being detected from a uniform underlying angular distribution.

  9. Association between subjective actual sleep duration, subjective sleep need, age, body mass index, and gender in a large sample of young adults.

    Science.gov (United States)

    Kalak, Nadeem; Brand, Serge; Beck, Johannes; Holsboer-Trachsler, Edith; Wollmer, M Axel

    2015-01-01

    Poor sleep is a major health concern, and there is evidence that young adults are at increased risk of suffering from poor sleep. There is also evidence that sleep duration can vary as a function of gender and body mass index (BMI). We sought to replicate these findings in a large sample of young adults, and also tested the hypothesis that a smaller gap between subjective sleep duration and subjective sleep need is associated with a greater feeling of being restored. A total of 2,929 university students (mean age 23.24±3.13 years, 69.1% female) took part in an Internet-based survey. They answered questions related to demographics and subjective sleep patterns. We found no gender differences in subjective sleep duration, subjective sleep need, BMI, age, or feeling of being restored. Nonlinear associations were observed between subjective sleep duration, BMI, and feeling of being restored. Moreover, a larger discrepancy between subjective actual sleep duration and subjective sleep need was associated with a lower feeling of being restored. The present pattern of results from a large sample of young adults suggests that males and females do not differ with respect to subjective sleep duration, BMI, or feeling of being restored. Moreover, nonlinear correlations seemed to provide a more accurate reflection of the relationship between subjective sleep and demographic variables.

  10. The suicidality continuum in a large sample of obsessive-compulsive disorder (OCD) patients.

    Science.gov (United States)

    Velloso, P; Piccinato, C; Ferrão, Y; Aliende Perin, E; Cesar, R; Fontenelle, L; Hounie, A G; do Rosário, M C

    2016-10-01

    Obsessive-compulsive disorder (OCD) has a chronic course leading to a huge impact on patients' functioning. Suicidal thoughts and attempts are much more frequent in OCD subjects than once thought. To empirically investigate whether the suicidal phenomena could be analyzed as a suicidality severity continuum, and its association with obsessive-compulsive (OC) symptom dimensions and quality of life (QoL), in a large OCD sample. Cross-sectional study with 548 patients diagnosed with OCD according to the DSM-IV criteria, interviewed in the Brazilian OCD Consortium (C-TOC) sites. Patients were evaluated by OCD experts using standardized instruments including: Yale-Brown Obsessive-Compulsive Scale (YBOCS); Dimensional Yale-Brown Obsessive-Compulsive Scale (DYBOCS); Beck Depression and Anxiety Inventories; Structured Clinical Interview for DSM-IV (SCID); and the SF-36 QoL Health Survey. There were extremely high correlations between all the suicidal phenomena. OCD patients with suicidality had significantly lower QoL, higher severity in the "sexual/religious", "aggression" and "symmetry/ordering" OC symptom dimensions, higher BDI and BAI scores, and a higher frequency of suicide attempts in a family member. In the regression analysis, the factors that most impacted suicidality were the sexual dimension severity, the SF-36 QoL Mental Health domain, the severity of depressive symptoms, and a relative with a history of attempted suicide. Suicidality could be analyzed as a severity continuum, and patients should be carefully monitored when they present with suicidal ideation. Lower QoL scores, higher scores on the sexual dimension and a family history of suicide attempts should be considered as risk factors for suicidality among OCD patients. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  11. Generalized Likelihood Uncertainty Estimation (GLUE) Using Multi-Optimization Algorithm as Sampling Method

    Science.gov (United States)

    Wang, Z.

    2015-12-01

    For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of large-scale, high-precision hydrological simulation has refined spatial descriptions of hydrological behavior. This trend, however, is accompanied by growing model complexity and numbers of parameters, which brings new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), a Monte Carlo method coupled with Bayesian estimation, has been widely used in uncertainty analysis for hydrological models. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms, which use iterative evolution, show better convergence speed and optimality-searching performance. In light of these features, this study adopted a genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets with large likelihoods. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
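
    As a hedged toy version of the proposed workflow (our construction, with an invented recession model rather than the study's hydrological models), the sketch below records every parameter set evaluated by a tiny differential-evolution loop and then applies the usual GLUE steps: a behavioral threshold, likelihood weighting, and a weighted prediction.

    ```python
    # GLUE with a heuristic optimizer's evaluation history as the sample.
    import numpy as np

    rng = np.random.default_rng(5)
    t = np.arange(30.0)
    true = 12.0 * np.exp(-0.15 * t)
    obs = true + rng.normal(0, 0.4, t.size)          # synthetic observations

    def sim(theta):                                   # toy recession "model"
        a, b = theta
        return a * np.exp(-b * t)

    def likelihood(theta):                            # Nash-Sutcliffe efficiency
        e = obs - sim(theta)
        return max(1.0 - np.sum(e**2) / np.sum((obs - obs.mean())**2), 0.0)

    # Tiny differential-evolution loop that records every evaluated point
    lo, hi = np.array([1.0, 0.01]), np.array([30.0, 1.0])
    pop = rng.uniform(lo, hi, (40, 2))
    fit = np.array([likelihood(p) for p in pop])
    history, scores = [], []
    for gen in range(60):
        for i in range(len(pop)):
            r1, r2, r3 = pop[rng.choice(len(pop), 3, replace=False)]
            trial = np.clip(r1 + 0.8 * (r2 - r3), lo, hi)
            f = likelihood(trial)
            history.append(trial)
            scores.append(f)
            if f > fit[i]:
                pop[i], fit[i] = trial, f

    history, scores = np.array(history), np.array(scores)
    behavioral = scores > 0.8                         # GLUE behavioral threshold
    w = scores[behavioral] / scores[behavioral].sum() # likelihood weights
    sims = np.array([sim(p) for p in history[behavioral]])
    mean_pred = (w[:, None] * sims).sum(axis=0)       # weighted prediction
    print(f"{behavioral.sum()} of {len(history)} evaluated sets are behavioral; "
          f"weighted-prediction RMSE: {np.sqrt(((mean_pred - true) ** 2).mean()):.3f}")
    ```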

  12. An 'intelligent' approach to radioimmunoassay sample counting employing a microprocessor-controlled sample counter

    International Nuclear Information System (INIS)

    Ekins, R.P.; Sufi, S.; Malan, P.G.

    1978-01-01

    The enormous impact on medical science in the last two decades of microanalytical techniques employing radioisotopic labels has, in turn, generated a large demand for automatic radioisotopic sample counters. Such instruments frequently comprise the most important item of capital equipment required in the use of radioimmunoassay and related techniques and often form a principle bottleneck in the flow of samples through a busy laboratory. It is therefore imperative that such instruments should be used 'intelligently' and in an optimal fashion to avoid both the very large capital expenditure involved in the unnecessary proliferation of instruments and the time delays arising from their sub-optimal use. Most of the current generation of radioactive sample counters nevertheless rely on primitive control mechanisms based on a simplistic statistical theory of radioactive sample counting which preclude their efficient and rational use. The fundamental principle upon which this approach is based is that it is useless to continue counting a radioactive sample for a time longer than that required to yield a significant increase in precision of the measurement. Thus, since substantial experimental errors occur during sample preparation, these errors should be assessed and must be related to the counting errors for that sample. The objective of the paper is to demonstrate that the combination of a realistic statistical assessment of radioactive sample measurement, together with the more sophisticated control mechanisms that modern microprocessor technology make possible, may often enable savings in counter usage of the order of 5- to 10-fold to be made. (author)

  13. ROTATION–ACTIVITY CORRELATIONS IN K AND M DWARFS. I. STELLAR PARAMETERS AND COMPILATIONS OF v sin i AND P /sin i FOR A LARGE SAMPLE OF LATE-K AND M DWARFS

    International Nuclear Information System (INIS)

    Houdebine, E. R.; Mullan, D. J.; Paletou, F.; Gebran, M.

    2016-01-01

    The reliable determination of rotation–activity correlations (RACs) depends on precise measurements of the following stellar parameters: T eff , parallax, radius, metallicity, and rotational speed v sin i . In this paper, our goal is to focus on the determination of these parameters for a sample of K and M dwarfs. In a future paper (Paper II), we will combine our rotational data with activity data in order to construct RACs. Here, we report on a determination of effective temperatures based on the ( R – I ) C color from the calibrations of Mann et al. and Kenyon and Hartmann for four samples of late-K, dM2, dM3, and dM4 stars. We also determine stellar parameters ( T eff , log( g ), and [M/H]) using the principal component analysis–based inversion technique for a sample of 105 late-K dwarfs. We compile all effective temperatures from the literature for this sample. We determine empirical radius–[M/H] correlations in our stellar samples. This allows us to propose new effective temperatures, stellar radii, and metallicities for a large sample of 612 late-K and M dwarfs. Our mean radii agree well with those of Boyajian et al. We analyze HARPS and SOPHIE spectra of 105 late-K dwarfs, and we have detected v sin i in 92 stars. In combination with our previous v sin i measurements in M and K dwarfs, we now derive P /sin i measures for a sample of 418 K and M dwarfs. We investigate the distributions of P /sin i , and we show that they are different from one spectral subtype to another at a 99.9% confidence level.

  14. TAGGING THE CHEMICAL EVOLUTION HISTORY OF THE LARGE MAGELLANIC CLOUD DISK

    International Nuclear Information System (INIS)

    Lapenna, Emilio; Mucciarelli, Alessio; Ferraro, Francesco R.; Origlia, Livia

    2012-01-01

    We have used high-resolution spectra obtained with the multifiber facility FLAMES at the Very Large Telescope of the European Southern Observatory to derive kinematic properties and chemical abundances of Fe, O, Mg, and Si for 89 stars in the disk of the Large Magellanic Cloud (LMC). The derived metallicity and [α/Fe], obtained as the average of the O, Mg, and Si abundances, allow us to draw a preliminary scheme of the star formation history of this region of the LMC. The derived metallicity distribution shows two main components: one component (comprising ∼84% of the sample) peaks at [Fe/H] = –0.48 dex and shows a slightly subsolar [α/Fe] ratio ([α/Fe] ∼ –0.1 dex). This population probably originated in the main star formation event that occurred 3-4 Gyr ago (possibly triggered by tidal capture of the Small Magellanic Cloud). The other component (comprising ∼16% of the sample) peaks at [Fe/H] ∼ 0 dex and shows [α/Fe] ∼ 0.2 dex. This population was probably generated during the long quiescent epoch of star formation between the first episode and the most recent bursts. Indeed, in our sample we do not find stars with chemical properties similar to those of the old LMC globular clusters, nor to those of the iron-rich and α-poor stars recently found in the LMC globular cluster NGC 1718 and also predicted to be in the LMC field, thus suggesting that both of these components are small (<1%) in the LMC disk population.

  15. Angular momentum-large-scale structure alignments in ΛCDM models and the SDSS

    Science.gov (United States)

    Paz, Dante J.; Stasyszyn, Federico; Padilla, Nelson D.

    2008-09-01

    We study the alignments between the angular momentum of individual objects and the large-scale structure in cosmological numerical simulations and in real data from the Sloan Digital Sky Survey, Data Release 6 (SDSS-DR6). To this end, we measure anisotropies in the two-point cross-correlation function around simulated haloes and observed galaxies, studying separately the one- and two-halo regimes. The alignment of the angular momentum of dark-matter haloes in Λ cold dark matter (ΛCDM) simulations is found to be dependent on scale and halo mass. At large distances (two-halo regime), the spins of high-mass haloes are preferentially oriented in the direction perpendicular to the distribution of matter; lower mass systems show a weaker trend that may even reverse to show an angular momentum in the plane of the matter distribution. In the one-halo regime, the angular momentum is aligned in the direction perpendicular to the matter distribution; the effect is stronger than in the two-halo regime and increases for higher mass systems. On the observational side, we focus our study on galaxies in the SDSS-DR6 with elongated apparent shapes, and study alignments with respect to the major semi-axis. We study five samples of edge-on galaxies: the full SDSS-DR6 edge-on sample, bright galaxies, faint galaxies, red galaxies and blue galaxies (the latter two consisting mainly of ellipticals and spirals, respectively). Using the two-halo term of the projected correlation function, we find an excess of structure in the direction of the major semi-axis for all samples; the red sample shows the highest alignment (2.7 ± 0.8 per cent) and indicates that the angular momentum of flattened spheroidals tends to be perpendicular to the large-scale structure. These results are in qualitative agreement with the numerical simulation results, indicating that the angular momentum of galaxies could be built up as in the Tidal Torque scenario. The one-halo term only shows a significant alignment

  16. Racing Sampling Based Microimmune Optimization Approach Solving Constrained Expected Value Programming

    Directory of Open Access Journals (Sweden)

    Kai Yang

    2016-01-01

    This work investigates a bioinspired microimmune optimization algorithm to solve a general kind of single-objective nonlinear constrained expected value programming without any prior distribution. In the design of the algorithm, two lower-bound sample estimates of random variables are theoretically developed to estimate the empirical values of individuals. Two adaptive racing sampling schemes are designed to identify the competitive individuals in a given population, by which high-quality individuals obtain large sampling sizes. An immune evolutionary mechanism, along with a local search approach, is constructed to evolve the current population. Comparative experiments show that the proposed algorithm can effectively solve higher-dimensional benchmark problems and shows potential for further applications.
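
    A rough illustration of the racing idea, not the paper's exact scheme: individuals keep receiving evaluations only while their confidence interval still overlaps that of the current best, so competitive individuals end up with the largest sampling sizes. The noisy objective evaluate and the population are hypothetical inputs:

    ```python
    import numpy as np

    def race(evaluate, population, init_n=5, max_n=100, z=1.96):
        """Return the index of the best individual under a noisy objective
        (minimisation); clearly inferior individuals stop being sampled."""
        samples = [[evaluate(ind) for _ in range(init_n)] for ind in population]
        alive = list(range(len(population)))
        while len(alive) > 1 and len(samples[alive[0]]) < max_n:
            mean = {i: np.mean(samples[i]) for i in alive}
            half = {i: z * np.std(samples[i], ddof=1) / len(samples[i]) ** 0.5
                    for i in alive}
            best = min(alive, key=mean.get)
            # Keep only individuals whose interval still overlaps the best's...
            alive = [i for i in alive
                     if mean[i] - half[i] <= mean[best] + half[best]]
            for i in alive:   # ...and grant each survivor one more evaluation
                samples[i].append(evaluate(population[i]))
        return min(alive, key=lambda i: np.mean(samples[i]))
    ```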

  17. Large area gridded ionisation chamber and electrostatic precipitator and their application to low-level alpha-spectrometry of environmental air samples

    International Nuclear Information System (INIS)

    Hoetzl, H.; Winkler, R.

    1977-01-01

    A high-resolution, parallel plate Frisch grid ionization chamber with an effective area of 3000 cm² and a large area electrostatic precipitator were developed and applied to direct alpha spectrometry of air dust. Using an argon-methane mixture (P-10 gas) at atmospheric pressure, the resolution of the detector system is 22 keV FWHM at 5 MeV. After sampling for one week and decay of the short-lived natural activity, the sensitivity of the procedure for long-lived alpha emitters is about 0.1 fCi/m³, taking 3σ of the background as the detection limit with a 1000 min counting time. (author)
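
    The quoted sensitivity follows from 3σ counting statistics. A sketch with illustrative numbers: the background rate is taken from the companion record later in this list, while the detection efficiency and sampled air volume are assumptions:

    ```python
    import math

    def detection_limit_fci_per_m3(bkg_cph, t_min, eff, air_volume_m3):
        """3-sigma detection limit for a long-lived alpha emitter, in fCi/m^3."""
        bkg_counts = bkg_cph * t_min / 60.0           # expected background counts
        min_counts = 3.0 * math.sqrt(bkg_counts)      # 3-sigma criterion
        activity_bq = min_counts / (eff * t_min * 60.0)
        return activity_bq / 3.7e-5 / air_volume_m3   # 1 fCi = 3.7e-5 Bq

    # ~15.7 counts/h background, 1000 min count, ~50% efficiency (assumed),
    # ~260 m^3 of air sampled in a week -> ~0.17 fCi/m^3, the quoted order.
    print(detection_limit_fci_per_m3(15.7, 1000.0, 0.5, 260.0))
    ```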

  18. Interstellar extinction in the Large Magellanic Cloud

    International Nuclear Information System (INIS)

    Nandy, K.; Morgan, D.H.; Willis, A.J.; Wilson, R.; Gondhalekar, P.M.

    1981-01-01

    A systematic investigation of interstellar extinction in the ultraviolet as a function of position in the Large Magellanic Cloud has been made from an enlarged sample of reddened and comparison stars distributed throughout the cloud. Except for one star, SK-69-108, the most reddened star of our sample, the shapes of the extinction curves for the LMC stars do not show significant variations. All curves show an increase in extinction towards 2200 A, but some have maxima near 2200 A and some near 1900 A. It has been shown that the feature of the extinction curve near 1900 A is caused by mismatch of the stellar Fe III 1920 A feature. The strength of this 1920 A feature as a function of luminosity and spectral type has been determined. The extinction curves have been corrected for the mismatch of the 1920 A feature, and a single mean extinction curve for the LMC, normalized to A(V) = 0 and E(B-V) = 1, is presented. For the same value of E(B-V), the LMC stars show a 2200 A feature weaker by a factor of 2 compared with galactic stars. Higher extinction shortward of 2000 A in the LMC extinction curves than in our Galaxy, as reported in earlier papers, is confirmed. (author)

  19. Peroral endoscopic myotomy can improve esophageal motility in patients with achalasia: a large-sample self-controlled study (66 patients).

    Directory of Open Access Journals (Sweden)

    Shuangzhe Yao

    Peroral endoscopic myotomy (POEM) as a new approach to achalasia attracts broad attention. The primary objective of this study was to evaluate esophageal motility after POEM through the first large-sample clinical study. We conducted a self-controlled study of all patients (205 in total) who underwent POEM from 2010 to 2014 at our Digestive Endoscopic Center, 66 of whom underwent high-resolution manometry (HRM) before and after POEM in our motility laboratory. Follow-up lasted 5.6 months on average. Outcome variables analyzed included upper esophageal sphincter pressure (UESP), upper esophageal sphincter residual pressure (UESRP), lower esophageal sphincter pressure (LESP), lower esophageal sphincter residual pressure (LESRP) and esophageal body peristalsis. Statistical analysis was performed to illustrate how POEM changes esophageal motility. The symptoms related to dysphagia were relieved in 95% of patients in the short term after POEM, and HRM showed a statistically significant reduction of UESRP, LESP and LESRP (P < 0.05); no significant difference (P > 0.05) in LESP and LESRP reduction occurred between patients with and without prior endoscopic treatment. POEM clearly relieved the symptoms related to dysphagia by lowering the pressure of the upper esophageal sphincter (UES) and lower esophageal sphincter (LES), and other endoscopic treatment before POEM did not affect the improvement of LES pressure. These results are drawn from our short-term follow-up study, while the long-term efficacy remains to be further illustrated. Chinese Clinical Trial Register ChiCTR-TRC-12002204.

  20. Discriminant WSRC for Large-Scale Plant Species Recognition

    Directory of Open Access Journals (Sweden)

    Shanwen Zhang

    2017-01-01

    In sparse representation based classification (SRC) and weighted SRC (WSRC), solving the global sparse representation problem is time-consuming. A discriminant WSRC (DWSRC) is proposed for large-scale plant species recognition, comprising two stages. First, several subdictionaries are constructed by dividing the dataset into several similar classes, and a subdictionary is chosen by the maximum similarity between the test sample and the typical sample of each similar class. Second, the weighted sparse representation of the test image is computed with respect to the chosen subdictionary, and the leaf category is assigned through the minimum reconstruction error. Unlike traditional SRC and its improved approaches, the test sample is sparsely represented on a subdictionary whose base elements are the training samples of the selected similar class, instead of on a generic overcomplete dictionary over the entire training set. The complexity of solving the sparse representation problem is thus reduced. Moreover, DWSRC accommodates newly added leaf species without rebuilding the dictionary. Experimental results on the ICL plant leaf database show that the method has low computational complexity and a high recognition rate, and that it can be clearly interpreted.
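
    A schematic rendering of the two-stage decision rule, under several assumptions: subdictionaries are precomputed as column-stacked training samples, the "typical sample" is taken to be the class centroid, and sklearn's Lasso stands in for whatever weighted-l1 solver the authors used (per-atom weights are emulated by rescaling columns):

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def dwsrc_predict(y, subdicts, alpha=0.01):
        """y: test feature vector; subdicts: list of (D, labels) pairs where
        the columns of D are training samples of one group of similar classes."""
        # Stage 1: pick the subdictionary whose centroid is closest to the test.
        centroids = [D.mean(axis=1) for D, _ in subdicts]
        k = int(np.argmin([np.linalg.norm(y - c) for c in centroids]))
        D, labels = subdicts[k]
        # Stage 2: locality weights -- closer training samples penalised less.
        w = np.linalg.norm(D - y[:, None], axis=0)
        w /= w.max()
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(D / w, y)            # column rescaling emulates weighted l1
        x = lasso.coef_ / w            # recover coefficients of original atoms
        # Assign the class with the smallest reconstruction error.
        classes = np.unique(labels)
        errs = [np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
                for c in classes]
        return classes[int(np.argmin(errs))]
    ```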

  1. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    Science.gov (United States)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous
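
    For orientation, the non-robust method-of-moments (Matheron) estimator that the study benchmarks can be written in a few lines; coordinates, values, and bin edges are illustrative inputs:

    ```python
    import numpy as np

    def empirical_variogram(coords, values, bin_edges):
        """Matheron estimator. coords: (n, 2) sampling locations;
        values: (n,) throughfall amounts; bin_edges: 1D lag-distance bins."""
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        sq = 0.5 * (values[:, None] - values[None, :]) ** 2  # semivariance pairs
        iu = np.triu_indices(len(values), k=1)               # each pair once
        d, sq = d[iu], sq[iu]
        centers, gamma = [], []
        for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
            m = (d >= lo) & (d < hi)
            if m.any():
                centers.append(0.5 * (lo + hi))
                gamma.append(sq[m].mean())                   # mean semivariance
        return np.array(centers), np.array(gamma)
    ```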

  2. Rotation-Activity Correlations in K and M Dwarfs. I. Stellar Parameters and Compilations of v sin i and P/sin i for a Large Sample of Late-K and M Dwarfs

    Science.gov (United States)

    Houdebine, E. R.; Mullan, D. J.; Paletou, F.; Gebran, M.

    2016-05-01

    The reliable determination of rotation-activity correlations (RACs) depends on precise measurements of the following stellar parameters: T eff, parallax, radius, metallicity, and rotational speed v sin i. In this paper, our goal is to focus on the determination of these parameters for a sample of K and M dwarfs. In a future paper (Paper II), we will combine our rotational data with activity data in order to construct RACs. Here, we report on a determination of effective temperatures based on the (R-I) C color from the calibrations of Mann et al. and Kenyon & Hartmann for four samples of late-K, dM2, dM3, and dM4 stars. We also determine stellar parameters (T eff, log(g), and [M/H]) using the principal component analysis-based inversion technique for a sample of 105 late-K dwarfs. We compile all effective temperatures from the literature for this sample. We determine empirical radius-[M/H] correlations in our stellar samples. This allows us to propose new effective temperatures, stellar radii, and metallicities for a large sample of 612 late-K and M dwarfs. Our mean radii agree well with those of Boyajian et al. We analyze HARPS and SOPHIE spectra of 105 late-K dwarfs, and we have detected v sin i in 92 stars. In combination with our previous v sin i measurements in M and K dwarfs, we now derive P/sin i measures for a sample of 418 K and M dwarfs. We investigate the distributions of P/sin i, and we show that they are different from one spectral subtype to another at a 99.9% confidence level. Based on observations available at Observatoire de Haute Provence and the European Southern Observatory databases and on Hipparcos parallax measurements.

  3. ROTATION–ACTIVITY CORRELATIONS IN K AND M DWARFS. I. STELLAR PARAMETERS AND COMPILATIONS OF v sin i AND P /sin i FOR A LARGE SAMPLE OF LATE-K AND M DWARFS

    Energy Technology Data Exchange (ETDEWEB)

    Houdebine, E. R. [Armagh Observatory, College Hill, BT61 9DG Armagh, Northern Ireland (United Kingdom); Mullan, D. J. [Department of Physics and Astronomy, University of Delaware, Newark, DE 19716 (United States); Paletou, F. [Université de Toulouse, UPS-Observatoire Midi-Pyrénées, IRAP, Toulouse (France); Gebran, M., E-mail: eric_houdebine@yahoo.fr [Department of Physics and Astronomy, Notre Dame University-Louaize, P.O. Box 72, Zouk Mikael (Lebanon)

    2016-05-10

    The reliable determination of rotation–activity correlations (RACs) depends on precise measurements of the following stellar parameters: T eff, parallax, radius, metallicity, and rotational speed v sin i. In this paper, our goal is to focus on the determination of these parameters for a sample of K and M dwarfs. In a future paper (Paper II), we will combine our rotational data with activity data in order to construct RACs. Here, we report on a determination of effective temperatures based on the (R–I) C color from the calibrations of Mann et al. and Kenyon and Hartmann for four samples of late-K, dM2, dM3, and dM4 stars. We also determine stellar parameters (T eff, log(g), and [M/H]) using the principal component analysis–based inversion technique for a sample of 105 late-K dwarfs. We compile all effective temperatures from the literature for this sample. We determine empirical radius–[M/H] correlations in our stellar samples. This allows us to propose new effective temperatures, stellar radii, and metallicities for a large sample of 612 late-K and M dwarfs. Our mean radii agree well with those of Boyajian et al. We analyze HARPS and SOPHIE spectra of 105 late-K dwarfs, and we have detected v sin i in 92 stars. In combination with our previous v sin i measurements in M and K dwarfs, we now derive P/sin i measures for a sample of 418 K and M dwarfs. We investigate the distributions of P/sin i, and we show that they are different from one spectral subtype to another at a 99.9% confidence level.

  4. Polygenic scores predict alcohol problems in an independent sample and show moderation by the environment.

    Science.gov (United States)

    Salvatore, Jessica E; Aliev, Fazil; Edwards, Alexis C; Evans, David M; Macleod, John; Hickman, Matthew; Lewis, Glyn; Kendler, Kenneth S; Loukola, Anu; Korhonen, Tellervo; Latvala, Antti; Rose, Richard J; Kaprio, Jaakko; Dick, Danielle M

    2014-04-10

    Alcohol problems represent a classic example of a complex behavioral outcome that is likely influenced by many genes of small effect. A polygenic approach, which examines aggregate measured genetic effects, can have predictive power in cases where individual genes or genetic variants do not. In the current study, we first tested whether polygenic risk for alcohol problems-derived from genome-wide association estimates of an alcohol problems factor score from the age 18 assessment of the Avon Longitudinal Study of Parents and Children (ALSPAC; n = 4304 individuals of European descent; 57% female)-predicted alcohol problems earlier in development (age 14) in an independent sample (FinnTwin12; n = 1162; 53% female). We then tested whether environmental factors (parental knowledge and peer deviance) moderated polygenic risk to predict alcohol problems in the FinnTwin12 sample. We found evidence for both polygenic association and for additive polygene-environment interaction. Higher polygenic scores predicted a greater number of alcohol problems (range of Pearson partial correlations 0.07-0.08, all p-values ≤ 0.01). Moreover, genetic influences were significantly more pronounced under conditions of low parental knowledge or high peer deviance (unstandardized regression coefficients (b), p-values (p), and percent of variance (R2) accounted for by interaction terms: b = 1.54, p = 0.02, R2 = 0.33%; b = 0.94, p = 0.04, R2 = 0.30%, respectively). Supplementary set-based analyses indicated that the individual top single nucleotide polymorphisms (SNPs) contributing to the polygenic scores were not individually enriched for gene-environment interaction. Although the magnitudes of the observed effects are small, this study illustrates the usefulness of polygenic approaches for understanding the pathways by which measured genetic predispositions come together with environmental factors to predict complex behavioral outcomes.

  5. Polygenic Scores Predict Alcohol Problems in an Independent Sample and Show Moderation by the Environment

    Directory of Open Access Journals (Sweden)

    Jessica E. Salvatore

    2014-04-01

    Alcohol problems represent a classic example of a complex behavioral outcome that is likely influenced by many genes of small effect. A polygenic approach, which examines aggregate measured genetic effects, can have predictive power in cases where individual genes or genetic variants do not. In the current study, we first tested whether polygenic risk for alcohol problems—derived from genome-wide association estimates of an alcohol problems factor score from the age 18 assessment of the Avon Longitudinal Study of Parents and Children (ALSPAC; n = 4304 individuals of European descent; 57% female)—predicted alcohol problems earlier in development (age 14) in an independent sample (FinnTwin12; n = 1162; 53% female). We then tested whether environmental factors (parental knowledge and peer deviance) moderated polygenic risk to predict alcohol problems in the FinnTwin12 sample. We found evidence for both polygenic association and for additive polygene-environment interaction. Higher polygenic scores predicted a greater number of alcohol problems (range of Pearson partial correlations 0.07–0.08, all p-values ≤ 0.01). Moreover, genetic influences were significantly more pronounced under conditions of low parental knowledge or high peer deviance (unstandardized regression coefficients (b), p-values (p), and percent of variance (R2) accounted for by interaction terms: b = 1.54, p = 0.02, R2 = 0.33%; b = 0.94, p = 0.04, R2 = 0.30%, respectively). Supplementary set-based analyses indicated that the individual top single nucleotide polymorphisms (SNPs) contributing to the polygenic scores were not individually enriched for gene-environment interaction. Although the magnitudes of the observed effects are small, this study illustrates the usefulness of polygenic approaches for understanding the pathways by which measured genetic predispositions come together with environmental factors to predict complex behavioral outcomes.

  6. A modified approach to estimating sample size for simple logistic regression with one continuous covariate.

    Science.gov (United States)

    Novikov, I; Fund, N; Freedman, L S

    2010-01-15

    Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
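
    For context, the widely used simple formula of Hsieh et al. for one (standardized) continuous covariate is shown below; it is the starting point, not the modified Schouten-based method the paper proposes. Here p1 is the event rate at the covariate mean and beta_star the log odds ratio per standard deviation:

    ```python
    from scipy.stats import norm

    def n_logistic_one_covariate(p1, beta_star, alpha=0.05, power=0.80):
        """Hsieh-style sample size for simple logistic regression."""
        z_a = norm.ppf(1.0 - alpha / 2.0)   # two-sided significance level
        z_b = norm.ppf(power)               # target power
        return (z_a + z_b) ** 2 / (p1 * (1.0 - p1) * beta_star ** 2)

    # Event rate 0.5 at the mean, odds ratio 1.5 per SD (log OR ~ 0.405):
    print(round(n_logistic_one_covariate(0.5, 0.405)))   # ~191 subjects
    ```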

  7. Malaria diagnosis from pooled blood samples: comparative analysis of real-time PCR, nested PCR and immunoassay as a platform for the molecular and serological diagnosis of malaria on a large scale

    Directory of Open Access Journals (Sweden)

    Giselle FMC Lima

    2011-09-01

    Malaria diagnosis has traditionally been made using thick blood smears, but more sensitive and faster techniques are required to process large numbers of samples in clinical and epidemiological studies and in blood donor screening. Here, we evaluated molecular and serological tools to build a screening platform for pooled samples aimed at reducing both the time and the cost of these diagnoses. Positive and negative samples were analysed in individual and pooled experiments using real-time polymerase chain reaction (PCR), nested PCR and an immunochromatographic test. For the individual tests, 46/49 samples were positive by real-time PCR, 46/49 were positive by nested PCR and 32/46 were positive by the immunochromatographic test. For the assays performed using pooled samples, 13/15 samples were positive by real-time PCR and nested PCR and 11/15 were positive by the immunochromatographic test. These molecular methods demonstrated sensitivity and specificity for both the individual and pooled samples. Given the advantages of real-time PCR, such as fast processing and a closed system, this method should be the first choice for large-scale diagnosis, with nested PCR used for species differentiation. However, additional field isolates should be tested to confirm the results achieved using cultured parasites, and the serological test should only be adopted as a complementary method for malaria diagnosis.
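
    The economics of pooling can be seen with generic Dorfman arithmetic, shown here purely for illustration (the study's pooling protocol and pool sizes may differ): a pool of k samples is tested once and only positive pools are retested individually.

    ```python
    def tests_per_person(prevalence, k):
        """Expected number of tests per person under Dorfman pooling."""
        p_pool_positive = 1.0 - (1.0 - prevalence) ** k
        return 1.0 / k + p_pool_positive   # one pooled test + retests if positive

    for k in (5, 10, 20):
        print(k, round(tests_per_person(0.01, k), 3))
    # At 1% prevalence, pools of 10 need ~0.2 tests per person -- a ~5x saving.
    ```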

  8. Determination method for 129I in soil samples by MIP-MS

    International Nuclear Information System (INIS)

    Uezu, Yasuhiro; Nakano, Masanao; Fujita, Hiroki; Watanabe, Hitoshi; Maruo, Yoshihiro

    2001-01-01

    The radioactive iodine-129 ( 129 I) is an important radionuclide for environmental assessment because it has a long half-life and accumulates in the human thyroid gland. A new analytical technique using a Microwave Induced Plasma Mass Spectrometer (MIP-MS) was applied to the determination of 129 I in soil samples. In environmental samples, large amounts of matrix elements are present. Therefore, the matrix elements were eliminated by ashing at 1000°C, and the iodine isotopes were trapped on activated charcoal and finally extracted with 10% tetramethylammonium hydroxide (TMAH). The concentrations of 129 I in soil samples determined by neutron activation analysis and by the MIP-MS method were compared, and the results showed excellent agreement. (author)

  9. Evaluation of Inflammatory Markers in a Large Sample of Obstructive Sleep Apnea Patients without Comorbidities

    Directory of Open Access Journals (Sweden)

    Izolde Bouloukaki

    2017-01-01

    Systemic inflammation is important in obstructive sleep apnea (OSA) pathophysiology and its comorbidity. We aimed to assess the levels of inflammatory biomarkers in a large sample of OSA patients and to investigate any correlation between these biomarkers and clinical and polysomnographic (PSG) parameters. This was a cross-sectional study in which 2983 patients who had undergone polysomnography for OSA diagnosis were recruited. Patients with known comorbidities were excluded. Included patients (n=1053) were grouped according to apnea-hypopnea index (AHI) as mild, moderate, and severe. Patients with AHI < 5 served as controls. Demographics, PSG data, and levels of high-sensitivity C-reactive protein (hs-CRP), fibrinogen, erythrocyte sedimentation rate (ESR), and uric acid (UA) were measured and compared between groups. A significant difference was found between groups in hs-CRP, fibrinogen, and UA. All biomarkers were independently associated with OSA severity and gender (p<0.05). Females had increased levels of hs-CRP, fibrinogen, and ESR (p<0.001) compared to men. In contrast, UA levels were higher in men (p<0.001). Our results suggest that inflammatory markers significantly increase in patients with OSA without known comorbidities and correlate with OSA severity. These findings may have important implications regarding OSA diagnosis, monitoring, treatment, and prognosis. This trial is registered with ClinicalTrials.gov number NCT03070769.

  10. Testing of Large-Scale ICV Glasses with Hanford LAW Simulant

    Energy Technology Data Exchange (ETDEWEB)

    Hrma, Pavel R.; Kim, Dong-Sang; Vienna, John D.; Matyas, Josef; Smith, Donald E.; Schweiger, Michael J.; Yeager, John D.

    2005-03-01

    Preliminary glass compositions for immobilizing Hanford low-activity waste (LAW) by the in-container vitrification (ICV) process were initially fabricated at crucible- and engineering-scale, including simulants and actual (radioactive) LAW. Glasses were characterized for vapor hydration test (VHT) and product consistency test (PCT) responses and crystallinity (both quenched and slow-cooled samples). Selected glasses were tested for toxicity characteristic leach procedure (TCLP) responses, viscosity, and electrical conductivity. This testing showed that glasses with a LAW loading of 20 mass% can be made readily and meet all product constraints by a wide margin. Glasses with over 22 mass% Na2O can be made to meet all other product quality and process constraints. Large-scale testing was performed at the AMEC, Geomelt Division facility in Richland. Three tests were conducted using simulated LAW with increasing loadings of 12, 17, and 20 mass% Na2O. Glass samples were taken from the test products in a manner to represent the full expected range of product performance. These samples were characterized for composition, density, crystalline and non-crystalline phase assemblage, and durability using the VHT, PCT, and TCLP tests. The results, presented in this report, show that the AMEC ICV product meets all waste form requirements by a large margin. These results provide strong evidence that the Hanford LAW can be successfully vitrified by the ICV technology and can meet all the constraints related to product quality. The economic feasibility of the ICV technology can be further enhanced by subsequent optimization.

  11. A cross-sectional study of Tritrichomonas foetus infection among healthy cats at shows in Norway

    Directory of Open Access Journals (Sweden)

    Nødtvedt Ane

    2011-06-01

    Background: In recent years, the protozoan Tritrichomonas foetus has been recognised as an important cause of chronic large-bowel diarrhoea in purebred cats in many countries, including Norway. The aim of this cross-sectional study was to determine the proportion of animals with T. foetus infection among clinically healthy cats in Norway and to assess different risk factors for T. foetus infection, such as age, sex, former history of gastrointestinal symptoms and concurrent infections with Giardia duodenalis and Cryptosporidium sp. Methods: The sample population consisted of 52 cats participating in three cat shows in Norway in 2009. Samples were examined for motile T. foetus by microscopy after culturing, for T. foetus DNA by species-specific nested PCR, and for Giardia cysts and Cryptosporidium oocysts by immunofluorescent antibody test (IFAT). Results: By PCR, T. foetus DNA was demonstrated in the faeces of 11 (21%) of the 52 cats tested. DNA sequencing of five positive samples yielded 100% identity with previous isolates of T. foetus from cats. Only one sample was positive for T. foetus by microscopy. By IFAT, four samples were positive for Giardia cysts and one for Cryptosporidium oocysts, none of which was co-infected with T. foetus. No significant associations were found between the presence of T. foetus and the various risk factors examined. Conclusions: T. foetus was found to be a common parasite in clinically healthy cats in Norway.

  12. Metabolic fingerprinting of fresh lymphoma samples used to discriminate between follicular and diffuse large B-cell lymphomas.

    Science.gov (United States)

    Barba, Ignasi; Sanz, Carolina; Barbera, Angels; Tapia, Gustavo; Mate, José-Luis; Garcia-Dorado, David; Ribera, Josep-Maria; Oriol, Albert

    2009-11-01

    To investigate whether proton nuclear magnetic resonance ((1)H NMR) spectroscopy-based metabolic profiling is able to differentiate follicular lymphoma (FL) from diffuse large B-cell lymphoma (DLBCL), and to study which metabolites are responsible for the differences. High-resolution (1)H NMR spectra were obtained from fresh samples of lymph node biopsies collected consecutively at one center (14 FL and 17 DLBCL). Spectra were processed using pattern-recognition methods. Discriminant models were able to differentiate between the two tumor types with 86% sensitivity and 76% specificity; the metabolites that contributed most to the discrimination were a relative increase of alanine in DLBCL and a relative increase of taurine in FL. Metabolic models had a significant but weak correlation with Ki67 expression (r(2)=0.42; p=0.002). We have shown that it is possible to differentiate between FL and DLBCL based on their NMR metabolic profiles. This approach may potentially be applicable as a noninvasive tool for diagnosis and treatment follow-up in the clinical setting using conventional magnetic resonance systems.

  13. Large area gridded ionisation chamber and electrostatic precipitator. Application to low-level alphaspectrometry of environmental air samples

    International Nuclear Information System (INIS)

    Hoetzl, H.; Winkler, R.

    1978-01-01

    A high-resolution, parallel plate Frisch grid ionisation chamber with an effective area of 300 cm² and a large area electrostatic precipitator were developed and applied to direct alpha-particle spectrometry of air dust. The aerosols were deposited on circular tin-plate dishes of 300 cm² by the electrostatic precipitator, which was constructed for continuous operation at an air flow rate of 2 m³/h. The collection efficiency is found to be 0.78 for the natural Rn- and Tn-daughter products. Using an argon-methane mixture (P-10 gas) at atmospheric pressure, the resolution of the detector system is 22 keV FWHM at 5.15 MeV. The integral background is typically 15.7 counts/h between 4 and 6 MeV. After sampling for one week and decay of the short-lived natural activity, the sensitivity of the procedure for long-lived alpha-emitters is about 0.1 fCi/m³, based on 3σ of the background as the detection limit and a 1000 min counting time. (Auth.)

  14. Enantioselective column coupled electrophoresis employing large bore capillaries hyphenated with tandem mass spectrometry for ultra-trace determination of chiral compounds in complex real samples.

    Science.gov (United States)

    Piešťanský, Juraj; Maráková, Katarína; Kovaľ, Marián; Havránek, Emil; Mikuš, Peter

    2015-12-01

    A new multidimensional analytical approach for the ultra-trace determination of target chiral compounds in unpretreated complex real samples was developed in this work. The proposed analytical system provided high orthogonality due to on-line combination of three different methods (separation mechanisms), i.e. (1) isotachophoresis (ITP), (2) chiral capillary zone electrophoresis (chiral CZE), and (3) triple quadrupole mass spectrometry (QqQ MS). The ITP step, performed in a large bore capillary (800 μm), was utilized for the effective sample pretreatment (preconcentration and matrix clean-up) in a large injection volume (1-10 μL) enabling to obtain as low as ca. 80 pg/mL limits of detection for the target enantiomers in urine matrices. In the chiral CZE step, the different chiral selectors (neutral, ionizable, and permanently charged cyclodextrins) and buffer systems were tested in terms of enantioselectivity and influence on the MS detection response. The performance parameters of the optimized ITP - chiral CZE-QqQ MS method were evaluated according to the FDA guidance for bioanalytical method validation. Successful validation and application (enantioselective monitoring of renally eliminated pheniramine and its metabolite in human urine) highlighted great potential of this chiral approach in advanced enantioselective biomedical applications.

  15. Design of a Large-scale Three-dimensional Flexible Arrayed Tactile Sensor

    Directory of Open Access Journals (Sweden)

    Junxiang Ding

    2011-01-01

    This paper proposes a new type of large-scale three-dimensional flexible arrayed tactile sensor based on conductive rubber. It can detect three-dimensional force information on the continuous surface of the sensor, realizing a true skin-like tactile sensor. The widely used method of liquid rubber injection molding (LIMS) is used for "overall injection molding" sample preparation. The structural details of the staggered nodes and a new decoupling algorithm for force analysis are given. Simulation results show that a sensor based on this structure can achieve flexible measurement in large-scale 3-D tactile sensor arrays.

  16. Respondent-Driven Sampling – Testing Assumptions: Sampling with Replacement

    Directory of Open Access Journals (Sweden)

    Barash Vladimir D.

    2016-03-01

    Classical Respondent-Driven Sampling (RDS) estimators are based on a Markov process model in which sampling occurs with replacement. Given that respondents generally cannot be interviewed more than once, this assumption is counterfactual. We join recent work by Gile and Handcock in exploring the implications of the sampling-with-replacement assumption for the bias of RDS estimators. We differ from previous studies in examining a wider range of sampling fractions and in using not only simulations but also formal proofs. One key finding is that RDS estimates are surprisingly stable even in the presence of substantial sampling fractions. Our analyses show that the sampling-with-replacement assumption is a minor contributor to bias for sampling fractions under 40%, and bias is negligible for the 20% or smaller sampling fractions typical of field applications of RDS.

  17. Updated checklist and distribution of large branchiopods (Branchiopoda: Anostraca, Notostraca, Spinicaudata) in Tunisia

    Directory of Open Access Journals (Sweden)

    Federico Marrone

    2016-10-01

    Temporary ponds are the most peculiar and representative water bodies in the arid and semi-arid regions of the world, where they often represent diversity hotspots that greatly contribute to regional biodiversity. Being indissolubly linked to these ecosystems, the so-called "large branchiopods" are unanimously considered flagship taxa of these habitats. Nonetheless, updated and detailed information on large branchiopod faunas is still missing for many countries and regions. Based on an extensive bibliographical review and field sampling, we provide an updated and annotated checklist of large branchiopods in Tunisia, one of the least investigated countries of the Maghreb as far as inland water crustaceans are concerned. We carried out a field survey from 2004 to 2012, collecting 262 crustacean samples from a total of 177 temporary water bodies scattered throughout the country. Large branchiopod crustaceans were observed in 61% of the sampled sites, leading to the identification of fifteen species. Among these, the halophilic anostracan Branchinectella media is reported for the first time for the country; conversely, four of the species reported in the literature were not found during the present survey. Based on literature and novel data, the known large branchiopod fauna of Tunisia now includes 19 species, a noteworthy species richness considering the country's limited area. For each species, the regional distribution is described and an annotated list of references is provided. From a conservation perspective, the particular importance of the temporary ponds of the Medjerda river alluvial plain is further stressed. There, several large branchiopod taxa with different ecological requirements converge and form unique, species-rich assemblages that should be preserved.

  18. Simulating and assessing boson sampling experiments with phase-space representations

    Science.gov (United States)

    Opanchuk, Bogdan; Rosales-Zárate, Laura; Reid, Margaret D.; Drummond, Peter D.

    2018-04-01

    The search for new, application-specific quantum computers designed to outperform any classical computer is driven by the ending of Moore's law and the quantum advantages potentially obtainable. Photonic networks are promising examples, with experimental demonstrations and potential for obtaining a quantum computer to solve problems believed classically impossible. This introduces a challenge: how does one design or understand such photonic networks? One must be able to calculate observables using general methods capable of treating arbitrary inputs, dissipation, and noise. We develop complex phase-space software for simulating these photonic networks, and apply this to boson sampling experiments. Our techniques give sampling errors orders of magnitude lower than experimental correlation measurements for the same number of samples. We show that these techniques remove systematic errors in previous algorithms for estimating correlations, with large improvements in errors in some cases. In addition, we obtain a scalable channel-combination strategy for assessment of boson sampling devices.

  19. Chinese version of Impact of Weight on Quality of Life for Kids: psychometric properties in a large school-based sample.

    Science.gov (United States)

    He, Jinbo; Zhu, Hong; Luo, Xingwei; Cai, Taisheng; Wu, Siyao; Lu, Yao

    2016-06-01

    The Impact of Weight on Quality of Life for Kids (IWQOL-Kids) is the first self-report questionnaire for assessing weight-related quality of life in youth. However, there has been no Chinese version of IWQOL-Kids. The objective of this research was therefore to translate IWQOL-Kids into Mandarin and evaluate its psychometric properties in a large school-based sample. The total sample included 2282 participants aged 11-18 years old, comprising 1703 non-overweight, 386 overweight and 193 obese students. IWQOL-Kids was translated and culturally adapted by following international guidelines for instrument linguistic validation procedures. The psychometric evaluation included internal consistency, test-retest reliability, exploratory factor analysis (EFA), confirmatory factor analysis (CFA), convergent validity and discriminant validity. Cronbach's α for the Chinese version of IWQOL-Kids (IWQOL-Kids-C) was 0.956 and ranged from 0.891 to 0.927 for subscales. IWQOL-Kids-C showed a test-retest coefficient of 0.937 after 2 weeks, ranging from 0.847 to 0.903 for subscales. The original four-factor model was reproduced by EFA after seven iterations, accounting for 69.28% of the total variance. CFA demonstrated that the four-factor model had good fit indices, with comparative fit index = 0.92, normed fit index = 0.91, goodness of fit index = 0.86, root mean square error of approximation = 0.07 and root mean square residual = 0.03. Convergent validity and discriminant validity were demonstrated by higher correlations between similar constructs and lower correlations between dissimilar constructs of IWQOL-Kids-C and PedsQL™ 4.0. Significant differences were found across the body mass index groups, and IWQOL-Kids-C had higher effect sizes than PedsQL™ 4.0 when comparing the non-overweight and obese groups, supporting the sensitivity of IWQOL-Kids-C. IWQOL-Kids-C is a satisfactory, valid and reliable instrument to assess weight-related quality of life for Chinese children and
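
    The internal-consistency figures reported above follow the standard Cronbach's α computation, sketched here for a hypothetical respondents-by-items score matrix:

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """items: (n_respondents, k_items) matrix of item scores."""
        k = items.shape[1]
        item_var_sum = items.var(axis=0, ddof=1).sum()   # sum of item variances
        total_var = items.sum(axis=1).var(ddof=1)        # variance of total score
        return k / (k - 1.0) * (1.0 - item_var_sum / total_var)
    ```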

  20. Note: Design and development of wireless controlled aerosol sampling network for large scale aerosol dispersion experiments

    International Nuclear Information System (INIS)

    Gopalakrishnan, V.; Subramanian, V.; Baskaran, R.; Venkatraman, B.

    2015-01-01

    A wireless-based, custom-built aerosol sampling network is designed, developed, and implemented for environmental aerosol sampling. These aerosol sampling systems were used in a field measurement campaign in which sodium aerosol dispersion experiments were conducted as part of environmental impact studies related to the sodium cooled fast reactor. The sampling network contains 40 aerosol sampling units, each with a custom-built sampling head and wireless control networking designed with a Programmable System on Chip (PSoC™) and Xbee Pro RF modules. The base station control is designed using the graphical programming language LabView. The sampling network is programmed to operate at a preset time, and the running status of the samplers in the network is visualized from the base station. The system is developed in such a way that it can be used for any other environmental sampling network deployed over a wide area and uneven terrain, where manual operation is difficult because simultaneous operation and status logging are required.

  1. Note: Design and development of wireless controlled aerosol sampling network for large scale aerosol dispersion experiments

    Energy Technology Data Exchange (ETDEWEB)

    Gopalakrishnan, V.; Subramanian, V.; Baskaran, R.; Venkatraman, B. [Radiation Impact Assessment Section, Radiological Safety Division, Indira Gandhi Centre for Atomic Research, Kalpakkam 603 102 (India)

    2015-07-15

    A wireless-based, custom-built aerosol sampling network is designed, developed, and implemented for environmental aerosol sampling. These aerosol sampling systems were used in a field measurement campaign in which sodium aerosol dispersion experiments were conducted as part of environmental impact studies related to the sodium cooled fast reactor. The sampling network contains 40 aerosol sampling units, each with a custom-built sampling head and wireless control networking designed with a Programmable System on Chip (PSoC™) and Xbee Pro RF modules. The base station control is designed using the graphical programming language LabView. The sampling network is programmed to operate at a preset time, and the running status of the samplers in the network is visualized from the base station. The system is developed in such a way that it can be used for any other environmental sampling network deployed over a wide area and uneven terrain, where manual operation is difficult because simultaneous operation and status logging are required.

  2. Large number discrimination by mosquitofish.

    Directory of Open Access Journals (Sweden)

    Christian Agrillo

    BACKGROUND: Recent studies have demonstrated that fish display rudimentary numerical abilities similar to those observed in mammals and birds. The mechanisms underlying the discrimination of small quantities (<4) were recently investigated while, to date, no study has examined the discrimination of large numerosities in fish. METHODOLOGY/PRINCIPAL FINDINGS: Subjects were trained to discriminate between two sets of small geometric figures using social reinforcement. In the first experiment mosquitofish were required to discriminate 4 from 8 objects with or without experimental control of the continuous variables that co-vary with number (area, space, density, total luminance). Results showed that fish can use the sole numerical information to compare quantities but that they preferentially use cumulative surface area as a proxy of the number when this information is available. A second experiment investigated the influence of the total number of elements to discriminate large quantities. Fish proved to be able to discriminate up to 100 vs. 200 objects, without showing any significant decrease in accuracy compared with the 4 vs. 8 discrimination. The third experiment investigated the influence of the ratio between the numerosities. Performance was found to decrease when decreasing the numerical distance. Fish were able to discriminate numbers when ratios were 1:2 or 2:3 but not when the ratio was 3:4. The performance of a sample of undergraduate students, tested non-verbally using the same sets of stimuli, largely overlapped that of fish. CONCLUSIONS/SIGNIFICANCE: Fish are able to use pure numerical information when discriminating between quantities larger than 4 units. As observed in human and non-human primates, the numerical system of fish appears to have virtually no upper limit while the numerical ratio has a clear effect on performance. These similarities further reinforce the view of a common origin of non-verbal numerical systems in all

  3. Non-uniform sampling and wide range angular spectrum method

    International Nuclear Information System (INIS)

    Kim, Yong-Hae; Byun, Chun-Won; Oh, Himchan; Lee, JaeWon; Pi, Jae-Eun; Heon Kim, Gi; Lee, Myung-Lae; Ryu, Hojun; Chu, Hye-Yong; Hwang, Chi-Sun

    2014-01-01

    A novel method is proposed for simulating free-space field propagation from a source plane to a destination plane that is applicable to both small and large propagation distances. The angular spectrum method (ASM) is widely used for simulating near-field propagation, but it causes a numerical error when the propagation distance is large because of aliasing due to undersampling. The band-limited ASM satisfies the Nyquist condition on sampling by limiting the bandwidth of the propagation field to avoid aliasing errors, thereby extending the applicable propagation distance of the ASM. However, the band-limited ASM also incurs an error due to the decrease of the effective sampling number in Fourier space when the propagation distance is large. In the proposed wide-range ASM, we use non-uniform sampling in Fourier space to keep the effective sampling number constant even when the propagation distance is large. As a result, the wide-range ASM can produce simulation results with high accuracy for both far- and near-field propagation. For non-paraxial wave propagation, we applied the wide-range ASM to a shifted destination plane as well. (paper)
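
    A baseline, uniformly sampled ASM propagator is sketched below for orientation; this is the standard method whose large-distance aliasing the paper addresses, not the authors' wide-range variant:

    ```python
    import numpy as np

    def asm_propagate(u0, wavelength, dx, z):
        """Propagate a complex field u0 (square grid, pixel pitch dx) by z."""
        n = u0.shape[0]
        k = 2.0 * np.pi / wavelength
        fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies (1/m)
        fxx, fyy = np.meshgrid(fx, fx, indexing="ij")
        kz_sq = k**2 - (2 * np.pi * fxx)**2 - (2 * np.pi * fyy)**2
        kz = np.sqrt(np.maximum(kz_sq, 0.0))
        h = np.exp(1j * kz * z) * (kz_sq > 0)        # drop evanescent components
        return np.fft.ifft2(np.fft.fft2(u0) * h)     # transfer-function filtering
    ```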

  4. Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.

    Science.gov (United States)

    Sztepanacz, Jacqueline L; Blows, Mark W

    2017-07-01

    The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error for genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine if the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix.
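
    The overdispersion itself is easy to reproduce: with a true covariance equal to the identity (every eigenvalue exactly 1), the eigenvalues of a sample covariance matrix spread well away from 1 through sampling error alone. The dimensions below are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 100, 20                                  # 100 observations, 20 traits
    x = rng.standard_normal((n, p))                 # true covariance = identity
    eig = np.linalg.eigvalsh(np.cov(x, rowvar=False))
    print(eig.min(), eig.max())                     # roughly 0.3-0.5 and 1.8-2.1:
                                                    # large values biased up, small down
    ```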

  5. Polymeric ionic liquid-based portable tip microextraction device for on-site sample preparation of water samples.

    Science.gov (United States)

    Chen, Lei; Pei, Junxian; Huang, Xiaojia; Lu, Min

    2018-06-05

    On-site sample preparation is highly desirable because it avoids the transportation of large-volume samples and ensures the accuracy of analytical results. In this work, a portable prototype of a tip microextraction device (TMD) was designed and developed for on-site sample pretreatment. The assembly procedure of the TMD is quite simple. First, a polymeric ionic liquid (PIL)-based adsorbent was prepared in situ in a pipette tip. After that, the tip was connected to a syringe driven by a bidirectional motor. The flow rates in the adsorption and desorption steps were controlled accurately by the motor. To evaluate the practicability of the developed device, the TMD was used for on-site sample preparation of waters and combined with high-performance liquid chromatography with diode array detection to measure trace estrogens in water samples. Under the most favorable conditions, the limits of detection (LODs, S/N = 3) for the target analytes were in the range of 4.9-22 ng/L, with good coefficients of determination. A confirmatory study well evidences that the extraction performance of the TMD is comparable to that of the traditional laboratory solid-phase extraction process, but the proposed TMD is simpler and more convenient. At the same time, the TMD avoids the complicated sampling and transfer steps of large-volume water samples.

  6. Small sample GEE estimation of regression parameters for longitudinal data.

    Science.gov (United States)

    Paul, Sudhir; Zhang, Xuemao

    2014-09-28

    Longitudinal (clustered) response data arise in many biostatistical applications which, in general, cannot be assumed to be independent. The generalized estimating equation (GEE) is a widely used method to estimate marginal regression parameters for correlated responses. The advantage of the GEE is that the estimates of the regression parameters are asymptotically unbiased even if the correlation structure is misspecified, although their small-sample properties are not known. In this paper, two bias-adjusted GEE estimators of the regression parameters in longitudinal data are obtained for the case where the number of subjects is small. One is based on a bias correction, and the other on a bias reduction. Simulations show that the performances of both bias-adjusted methods are similar in terms of bias, efficiency, coverage probability, average coverage length, impact of misspecification of the correlation structure, and impact of cluster size on bias correction. Both methods show superior properties over the GEE estimates for small samples. Further, analysis of data involving a small number of subjects also shows improvement in bias, MSE, standard error, and length of the confidence interval of the estimates by the two bias-adjusted methods over the GEE estimates. For small to moderate sample sizes (N ≤ 50), either of the bias-corrected methods GEEBc and GEEBr can be used. However, GEEBc should be preferred over GEEBr, as the former is computationally easier. For large sample sizes, the GEE method can be used.
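
    For orientation, a standard GEE fit of the kind being bias-adjusted can be run with statsmodels; the GEEBc/GEEBr corrections themselves are not part of statsmodels, and the toy data below are invented:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Toy longitudinal data: 20 subjects, 4 visits each (illustrative only).
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "subject": np.repeat(np.arange(20), 4),
        "time": np.tile(np.arange(4), 20),
    })
    df["response"] = (rng.random(len(df)) < 0.3 + 0.1 * df["time"]).astype(int)

    model = smf.gee("response ~ time", groups="subject", data=df,
                    family=sm.families.Binomial(),
                    cov_struct=sm.cov_struct.Exchangeable())
    print(model.fit().summary())
    ```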

  7. Sampling and Homogenization Strategies Significantly Influence the Detection of Foodborne Pathogens in Meat.

    Science.gov (United States)

    Rohde, Alexander; Hammerl, Jens Andre; Appel, Bernd; Dieckmann, Ralf; Al Dahouk, Sascha

    2015-01-01

    Efficient preparation of food samples, comprising sampling and homogenization, for microbiological testing is an essential, yet largely neglected, component of foodstuff control. Salmonella enterica spiked chicken breasts were used as a surface contamination model whereas salami and meat paste acted as models of inner-matrix contamination. A systematic comparison of different homogenization approaches, namely, stomaching, sonication, and milling by FastPrep-24 or SpeedMill, revealed that for surface contamination a broad range of sample pretreatment steps is applicable and loss of culturability due to the homogenization procedure is marginal. In contrast, for inner-matrix contamination long treatments up to 8 min are required and only FastPrep-24 as a large-volume milling device produced consistently good recovery rates. In addition, sampling of different regions of the spiked sausages showed that pathogens are not necessarily homogenously distributed throughout the entire matrix. Instead, in meat paste the core region contained considerably more pathogens compared to the rim, whereas in the salamis the distribution was more even with an increased concentration within the intermediate region of the sausages. Our results indicate that sampling and homogenization as integral parts of food microbiology and monitoring deserve more attention to further improve food safety.

  8. Extra-large letter spacing improves reading in dyslexia

    Science.gov (United States)

    Zorzi, Marco; Barbiero, Chiara; Facoetti, Andrea; Lonciari, Isabella; Carrozzi, Marco; Montico, Marcella; Bravar, Laura; George, Florence; Pech-Georgel, Catherine; Ziegler, Johannes C.

    2012-01-01

    Although the causes of dyslexia are still debated, all researchers agree that the main challenge is to find ways that allow a child with dyslexia to read more words in less time, because reading more is undisputedly the most efficient intervention for dyslexia. Sophisticated training programs exist, but they typically target the component skills of reading, such as phonological awareness. After the component skills have improved, the main challenge remains (that is, reading deficits must be treated by reading more—a vicious circle for a dyslexic child). Here, we show that a simple manipulation of letter spacing substantially improved text reading performance on the fly (without any training) in a large, unselected sample of Italian and French dyslexic children. Extra-large letter spacing helps reading, because dyslexics are abnormally affected by crowding, a perceptual phenomenon with detrimental effects on letter recognition that is modulated by the spacing between letters. Extra-large letter spacing may help to break the vicious circle by rendering the reading material more easily accessible. PMID:22665803

  9. Assessing pesticide concentrations and fluxes in the stream of a small vineyard catchment - Effect of sampling frequency

    International Nuclear Information System (INIS)

    Rabiet, M.; Margoum, C.; Gouy, V.; Carluer, N.; Coquery, M.

    2010-01-01

    This study reports on the occurrence and behaviour of six pesticides and one metabolite in a small stream draining a vineyard catchment. Base flow and flood events were monitored in order to assess the variability of pesticide concentrations according to the season and to evaluate the effect of sampling frequency on flux estimates. Results showed that dissolved pesticide concentrations displayed a strong temporal and spatial variability. A large mobilisation of pesticides was observed during floods, with total dissolved pesticide fluxes per event ranging from 5.7 × 10⁻³ g/ha to 0.34 g/ha. These results highlight the major role of floods in the transport of pesticides in this small stream; floods contributed more than 89% of the total load of diuron during August 2007. The evaluation of pesticide loads using different sampling strategies and calculation methods showed that grab sampling largely underestimated pesticide concentrations and fluxes transiting through the stream. - This work brings new insights into the fluxes of pesticides in the surface water of a vineyard catchment, notably during flood events.

  10. Assessing pesticide concentrations and fluxes in the stream of a small vineyard catchment - Effect of sampling frequency

    Energy Technology Data Exchange (ETDEWEB)

    Rabiet, M., E-mail: marion.rabiet@unilim.f [Cemagref, UR QELY, 3bis quai Chauveau, CP 220, F-69336 Lyon (France); Margoum, C.; Gouy, V.; Carluer, N.; Coquery, M. [Cemagref, UR QELY, 3bis quai Chauveau, CP 220, F-69336 Lyon (France)

    2010-03-15

    This study reports on the occurrence and behaviour of six pesticides and one metabolite in a small stream draining a vineyard catchment. Base flow and flood events were monitored in order to assess the variability of pesticide concentrations according to the season and to evaluate the effect of sampling frequency on flux estimates. Results showed that dissolved pesticide concentrations displayed a strong temporal and spatial variability. A large mobilisation of pesticides was observed during floods, with total dissolved pesticide fluxes per event ranging from 5.7 × 10⁻³ g/ha to 0.34 g/ha. These results highlight the major role of floods in the transport of pesticides in this small stream; floods contributed more than 89% of the total load of diuron during August 2007. The evaluation of pesticide loads using different sampling strategies and calculation methods showed that grab sampling largely underestimated pesticide concentrations and fluxes transiting through the stream. - This work brings new insights into the fluxes of pesticides in the surface water of a vineyard catchment, notably during flood events.
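
    The underestimation from grab sampling reported in the two records above can be illustrated with a toy load calculation (all numbers synthetic): most of the load moves during brief floods that sparse instantaneous samples usually miss.

    ```python
    import numpy as np

    # Synthetic hourly record for 30 days: discharge (L/s) and pesticide
    # concentration (µg/L), with four short floods carrying most of the load.
    hours = 30 * 24
    q = np.full(hours, 10.0)              # base flow
    c = np.full(hours, 0.05)              # base concentration
    for start in (100, 300, 450, 600):    # flood onsets (hours, synthetic)
        q[start:start + 12] += 200.0      # discharge pulse
        c[start:start + 12] += 2.0        # pesticide flush

    # Reference load from the full hourly record (µg -> g).
    true_load = np.sum(c * q * 3600) / 1e6

    # Weekly grab sampling: instantaneous concentrations scaled by the mean
    # discharge, a common load-estimation shortcut.
    grab = np.arange(0, hours, 7 * 24)
    grab_load = c[grab].mean() * q.mean() * 3600 * hours / 1e6

    print(f"continuous estimate: {true_load:.0f} g")
    print(f"weekly grab estimate: {grab_load:.0f} g")  # grossly underestimated
    ```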

  11. Age distribution of human gene families shows significant roles of both large- and small-scale duplications in vertebrate evolution.

    Science.gov (United States)

    Gu, Xun; Wang, Yufeng; Gu, Jianying

    2002-06-01

    The classical (two-round) hypothesis of vertebrate genome duplication proposes two successive whole-genome duplication(s) (polyploidizations) predating the origin of fishes, a view now being seriously challenged. As the debate largely concerns the relative merits of the 'big-bang mode' theory (large-scale duplication) and the 'continuous mode' theory (constant creation by small-scale duplications), we tested whether a significant proportion of paralogous genes in the contemporary human genome was indeed generated in the early stage of vertebrate evolution. After an extensive search of major databases, we dated 1,739 gene duplication events from the phylogenetic analysis of 749 vertebrate gene families. We found a pattern characterized by two waves (I, II) and an ancient component. Wave I represents a recent gene family expansion by tandem or segmental duplications, whereas wave II, a rapid paralogous gene increase in the early stage of vertebrate evolution, supports the idea of genome duplication(s) (the big-bang mode). Further analysis indicated that large- and small-scale gene duplications both made a significant contribution in the early stage of vertebrate evolution to building the current hierarchy of the human proteome.

  12. OSSOS. VI. Striking Biases in the Detection of Large Semimajor Axis Trans-Neptunian Objects

    Science.gov (United States)

    Shankman, Cory; Kavelaars, J. J.; Bannister, Michele T.; Gladman, Brett J.; Lawler, Samantha M.; Chen, Ying-Tung; Jakubik, Marian; Kaib, Nathan; Alexandersen, Mike; Gwyn, Stephen D. J.; Petit, Jean-Marc; Volk, Kathryn

    2017-08-01

    The accumulating but small set of large semimajor axis trans-Neptunian objects (TNOs) shows an apparent clustering in the orientations of their orbits. This clustering must either be representative of the intrinsic distribution of these TNOs, or else have arisen as a result of observation biases and/or statistically expected variations for such a small set of detected objects. The clustered TNOs were detected across different and independent surveys, which has led to claims that the detections are therefore free of observational bias. This apparent clustering has led to the so-called “Planet 9” hypothesis that a super-Earth currently resides in the distant solar system and causes this clustering. The Outer Solar System Origins Survey (OSSOS) is a large program that ran on the Canada–France–Hawaii Telescope from 2013 to 2017, discovering more than 800 new TNOs. One of the primary design goals of OSSOS was the careful determination of observational biases that would manifest within the detected sample. We demonstrate the striking and non-intuitive biases that exist for the detection of TNOs with large semimajor axes. The eight large semimajor axis OSSOS detections are an independent data set, of comparable size to the conglomerate samples used in previous studies. We conclude that the orbital distribution of the OSSOS sample is consistent with being detected from a uniform underlying angular distribution.

  13. Testing a groundwater sampling tool: Are the samples representative?

    International Nuclear Information System (INIS)

    Kaback, D.S.; Bergren, C.L.; Carlson, C.A.; Carlson, C.L.

    1989-01-01

    A ground water sampling tool, the HydroPunch trademark, was tested at the Department of Energy's Savannah River Site in South Carolina to determine if representative ground water samples could be obtained without installing monitoring wells. Chemical analyses of ground water samples collected with the HydroPunch trademark from various depths within a borehole were compared with chemical analyses of ground water from nearby monitoring wells. The site selected for the test was in the vicinity of a large coal storage pile and a coal pile runoff basin that was constructed to collect the runoff from the coal storage pile. Existing monitoring wells in the area indicate the presence of a ground water contaminant plume that: (1) contains elevated concentrations of trace metals; (2) has an extremely low pH; and (3) contains elevated concentrations of major cations and anions. Ground water samples collected with the HydroPunch trademark provide an excellent estimate of ground water quality at discrete depths. Ground water chemical data collected from various depths using the HydroPunch trademark can be averaged to simulate what a screen zone in a monitoring well would sample. The averaged depth-discrete data compared favorably with the data obtained from the nearby monitoring wells.

  14. Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons.

    Science.gov (United States)

    Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang

    2011-11-01

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle, there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, for both discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.
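
    A minimal sketch of the neural-sampling idea follows, using simple Glauber (Gibbs-like) dynamics over binary units to sample a Boltzmann distribution. Note that the paper's actual contribution is a non-reversible chain consistent with spiking dynamics, which this toy deliberately omits:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Boltzmann distribution p(z) ∝ exp(b·z + z·W·z / 2) over binary units,
    # a common target in "neural sampling" interpretations.
    n = 5
    W = rng.normal(scale=0.5, size=(n, n))
    W = (W + W.T) / 2
    np.fill_diagonal(W, 0.0)
    b = rng.normal(scale=0.5, size=n)

    def sigmoid(u):
        return 1.0 / (1.0 + np.exp(-u))

    # Glauber/Gibbs dynamics: each "neuron" fires (z_k = 1) with probability
    # set by its membrane potential u_k = b_k + sum_j W_kj z_j.
    z = rng.integers(0, 2, size=n).astype(float)
    samples = []
    for sweep in range(5000):
        for k in range(n):
            u = b[k] + W[k] @ z
            z[k] = float(rng.random() < sigmoid(u))
        samples.append(z.copy())

    # Marginal firing probabilities estimated from the chain (burn-in dropped).
    print(np.mean(samples[500:], axis=0))
    ```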

  15. Heterogeneous asymmetric recombinase polymerase amplification (haRPA) for rapid hygiene control of large-volume water samples.

    Science.gov (United States)

    Elsäßer, Dennis; Ho, Johannes; Niessner, Reinhard; Tiehm, Andreas; Seidel, Michael

    2018-04-01

    Hygiene of drinking water is periodically controlled by cultivation and enumeration of indicator bacteria. Rapid and comprehensive measurements of emerging pathogens are of increasing interest to improve drinking water safety. In this study, the feasibility of detecting bacteriophage PhiX174 as a potential indicator for virus contamination in large volumes of water is demonstrated. Three consecutive concentration methods (continuous ultrafiltration, monolithic adsorption filtration, and centrifugal ultrafiltration) were combined to concentrate phages stepwise from 1250 L of drinking water into 1 mL. Heterogeneous asymmetric recombinase polymerase amplification (haRPA) is applied as a rapid detection method. Field measurements were conducted to test the developed system for hygiene online monitoring under realistic conditions. We show that this system allows the detection of artificial contamination with bacteriophage PhiX174 in drinking water pipelines. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. Active Search on Carcasses versus Pitfall Traps: a Comparison of Sampling Methods.

    Science.gov (United States)

    Zanetti, N I; Camina, R; Visciarelli, E C; Centeno, N D

    2016-04-01

    The study of insect succession in cadavers and the classification of arthropods have mostly been done by placing a carcass in a cage, protected from vertebrate scavengers, which is then visited periodically. An alternative is to use specific traps. Few studies on carrion ecology and forensic entomology involving the carcasses of large vertebrates have employed pitfall traps. The aims of this study were to compare both sampling methods (active search on a carcass and pitfall trapping) for each coleopteran family, and to establish whether there is a discrepancy (underestimation and/or overestimation) in the presence of each family by either method. A great discrepancy was found for almost all families, with some of them being more abundant in samples obtained through active search on carcasses and others in samples from traps, whereas two families did not show any bias towards a given sampling method. The fact that families may be underestimated or overestimated by the type of sampling technique highlights the importance of combining both methods, active search on carcasses and pitfall traps, in order to obtain more complete information on decomposition, carrion habitat and cadaveric families or species. Furthermore, a hypothesis is advanced on the reasons why either sampling method shows biases towards certain families. Information is also provided on which sampling technique would be more appropriate for detecting a particular family.

  17. The susceptibility of large river basins to orogenic and climatic drivers

    Science.gov (United States)

    Haedke, Hanna; Wittmann, Hella; von Blanckenburg, Friedhelm

    2017-04-01

    Large rivers are known to buffer pulses in sediment production driven by changes in climate as sediment is transported through lowlands. Our new dataset of in situ cosmogenic nuclide concentrations and chemical compositions of 62 sandy bedload samples from the world's largest rivers integrates over 25% of Earth's terrestrial surface, distributed over a variety of climatic zones across all continents, and represents the millennial-scale denudation rate of the sediment's source area. We show that these denudation rates do not respond to climatic forcing but faithfully record orogenic forcing when analyzed against variables representing orogeny (strain rate, relief, Bouguer anomaly, free-air anomaly), climate (runoff, temperature, precipitation) and basin properties (floodplain response time, drainage area). In contrast to this orogenic forcing of denudation rates, elemental bedload chemistry from the fine-grained portion of the same samples correlates with climate-related variables (precipitation, runoff) and floodplain response times. It is well known from previous compilations of river-gauged sediment loads that short-term basin-integrated sediment export is likewise climatically controlled. The chemical composition of detrital sediment shows a climate control that can originate in the river's source area, but this signal is likely overprinted during transfer through the lowlands because we also find correlation with floodplain response times. At the same time, cosmogenic nuclides robustly preserve the orogenic forcing of the source-area denudation signal through the floodplain buffer. Conversely, previous global compilations of cosmogenic nuclides in small river basins show the preservation of climate drivers in their analysis, but these are buffered in large lowland rivers. Hence, we can confirm the assumption that cosmogenic nuclides in large rivers are poorly susceptible to climate changes, but are at the same time highly suited to detect orogenic forcing.

  18. Development of a short sample test facility for evaluating superconducting wires

    International Nuclear Information System (INIS)

    Singh, M.R.; Kulkarni, D.G.; Sahni, V.C.; Ravikumar, G.; Patel, K.L.

    2002-01-01

    In this paper we describe a short sample test facility we have set up at Bhabha Atomic Research Centre (BARC). This facility has been used to measure critical currents of NbTi/Cu composite superconducting wires by recording V versus I data at 4.2 K. It offers sample currents as large as 1500 A and a transverse magnetic field up to 7.4 T. A power law, V ∼ I^n(H), is fitted to the resistive transition region to estimate the exponent n, which is a measure of the uniformity of the superconducting filaments in composite wires. It is observed that inadequate thermal stabilization of the sample wire results in thermal runaway, which limits the V-I data to ∼2 μV. This in turn affects the reliability of the estimated filament uniformity. To mitigate this problem, we have used a sample holder made of OFHC Cu, which enhances thermal stabilization of the sample. With this sample holder, the results of measurements carried out on wires developed by the Atomic Fuel Division, BARC show a high filament uniformity (n ∼ 58). (author)
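
    The n-value is conventionally extracted by fitting the power law on logarithmic axes, where V ∼ I^n becomes a straight line with slope n. A minimal sketch with hypothetical V-I data (not the facility's measurements):

    ```python
    import numpy as np

    # Hypothetical V-I transition data near the critical current.
    I = np.array([1200, 1250, 1300, 1350, 1400], dtype=float)  # current, A
    V = np.array([0.05, 0.22, 0.80, 2.60, 7.80])               # voltage, µV

    # log V = n * log I + const, so the log-log slope is the n-value.
    n, logk = np.polyfit(np.log(I), np.log(V), 1)
    print(f"n-value ~ {n:.0f}")  # higher n means more uniform filaments
    ```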

  19. Large scale study of tooth enamel

    International Nuclear Information System (INIS)

    Bodart, F.; Deconninck, G.; Martin, M.T.

    Human tooth enamel contains traces of foreign elements. The presence of these elements is related to the history and the environment of the human body and can be considered as the signature of perturbations which occur during the growth of a tooth. A map of the distribution of these traces in a large-scale sample of the population will constitute a reference for further investigations of environmental effects. One hundred eighty tooth samples were first analyzed using PIXE, backscattering and nuclear reaction techniques. The results were analyzed using statistical methods. Correlations between O, F, Na, P, Ca, Mn, Fe, Cu, Zn, Pb and Sr were observed and cluster analysis is in progress. The techniques described in the present work have been developed in order to establish a method for the exploration of very large samples of the Belgian population. (author)

  20. CHARACTERIZATION AND ACTUAL WASTE TEST WITH TANK 5F SAMPLES

    International Nuclear Information System (INIS)

    Fletcher, D.

    2007-01-01

    The initial phase of bulk waste removal operations was recently completed in Tank 5F. Video inspection of the tank indicates several mounds of sludge still remain in the tank. Additionally, a mound of white solids was observed under Riser 5. In support of chemical cleaning and heel removal programs, samples of the sludge and the mound of white solids were obtained from the tank for characterization and testing. A core sample of the sludge and a Super Snapper sample of the white solids were characterized. A supernate dip sample from Tank 7F was also characterized. A portion of the sludge was used in two tank cleaning tests using oxalic acid at 50 °C and 75 °C. The filtered oxalic acid from the tank cleaning tests was subsequently neutralized by addition to a simulated Tank 7F supernate. Solids and liquid samples from the tank cleaning test and neutralization test were characterized. A separate report documents the results of the gas generation from the tank cleaning test using oxalic acid and Tank 5F sludge. The characterization results for the Tank 5F sludge sample (FTF-05-06-55) appear quite good with respect to the tight precision of the sample replicates, good results for the glass standards, and minimal contamination found in the blanks and glass standards. The aqua regia and sodium peroxide fusion data also show good agreement between the two dissolution methods. Iron dominates the sludge composition with other major contributors being uranium, manganese, nickel, sodium, aluminum, and silicon. The low sodium value for the sludge reflects the absence of supernate present in the sample due to the core sampler employed for obtaining the sample. The XRD and CSEM results for the Super Snapper salt sample (i.e., white solids) from Tank 5F (FTF-05-07-1) indicate the material contains hydrated sodium carbonate and bicarbonate salts along with some aluminum hydroxide. These compounds likely precipitated from the supernate in the tank. A solubility test showed the material

  1. Large energy absorption in Ni-Mn-Ga/polymer composites

    International Nuclear Information System (INIS)

    Feuchtwanger, Jorge; Richard, Marc L.; Tang, Yun J.; Berkowitz, Ami E.; O'Handley, Robert C.; Allen, Samuel M.

    2005-01-01

    Ferromagnetic shape memory alloys can respond to a magnetic field or applied stress by the motion of twin boundaries, and hence they show large hysteresis or energy loss. Ni-Mn-Ga particles made by spark erosion have been dispersed and oriented in a polymer matrix to form pseudo 3:1 composites, which are studied under applied stress. Loss ratios have been determined from the stress-strain data. The loss ratios of the composites range from 63% to 67%, compared to only about 17% for the pure, unfilled polymer samples.
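
    A hedged sketch of how a loss ratio can be computed from a stress-strain loop by numerical integration; the loop below is synthetic and merely tuned to reproduce a ratio in the reported range:

    ```python
    import numpy as np

    def trap(y, x):
        """Trapezoidal integral of y(x)."""
        return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

    # Synthetic stress-strain loop (Pa vs. dimensionless strain): the loading
    # branch stores energy; the unloading branch returns only part of it.
    strain = np.linspace(0.0, 0.06, 50)
    stress_load = 40e6 * strain**0.7      # loading branch (synthetic)
    stress_unload = 0.35 * stress_load    # unloading branch (synthetic)

    e_in = trap(stress_load, strain)      # energy input per unit volume
    e_back = trap(stress_unload, strain)  # energy recovered on unloading
    loss_ratio = (e_in - e_back) / e_in   # dissipated fraction
    print(f"loss ratio ~ {loss_ratio:.0%}")  # ~65%, cf. 63-67% in the paper
    ```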

  2. Large-scale assessment of olfactory preferences and learning in Drosophila melanogaster: behavioral and genetic components

    Directory of Open Access Journals (Sweden)

    Elisabetta Versace

    2015-09-01

    In the Evolve and Resequence method (E&R), experimental evolution and genomics are combined to investigate evolutionary dynamics and the genotype-phenotype link. Like other genomic approaches, this method requires many replicates with large population sizes, which imposes severe restrictions on the analysis of behavioral phenotypes. Aiming to use E&R for investigating the evolution of behavior in Drosophila, we have developed a simple and effective method to assess spontaneous olfactory preferences and learning in large samples of fruit flies using a T-maze. We tested this procedure on (a) a large wild-caught population and (b) 11 isofemale lines of Drosophila melanogaster. Compared to previous methods, this procedure reduces the environmental noise and allows for the analysis of large population samples. Consistent with previous results, we show that flies have a preference for orange vs. apple odor. With our procedure, wild-derived flies exhibit olfactory learning in the absence of previous laboratory selection. Furthermore, we find genetic differences in olfactory learning with relatively high heritability. We propose this large-scale method as an effective tool for E&R and genome-wide association studies on olfactory preferences and learning.

  3. VOC contamination in hospital, from stationary sampling of a large panel of compounds, in view of healthcare workers and patients exposure assessment.

    Directory of Open Access Journals (Sweden)

    Vincent Bessonneau

    BACKGROUND: We aimed to assess, for the first time, the nature of the indoor air contamination of hospitals. METHODS AND FINDINGS: More than 40 volatile organic compounds (VOCs), including aliphatic, aromatic and halogenated hydrocarbons, aldehydes, alcohols, ketones, ethers and terpenes, were measured in a teaching hospital in France, from sampling at six sites--reception hall, patient room, nursing care, post-anesthesia care unit, parasitology-mycology laboratory and flexible endoscope disinfection unit--in the morning and in the afternoon, during three consecutive days. Our results showed that the main compounds found in indoor air were alcohols (arithmetic means ± SD: 928±958 µg/m³ and 47.9±52.2 µg/m³ for ethanol and isopropanol, respectively), ethers (75.6±157 µg/m³ for ether) and ketones (22.6±20.6 µg/m³ for acetone). Concentration levels of aromatic and halogenated hydrocarbons, ketones, aldehydes and limonene varied widely between sampling sites, due to building age and the type of products used in the health activities conducted at each site. A high temporal variability was observed in concentrations of alcohols, probably due to the intensive use of alcohol-based hand rubs in all sites. Qualitative analysis of air samples led to the identification of other compounds, including siloxanes (hexamethyldisiloxane, octamethyltrisiloxane, decamethylcyclopentasiloxane), anesthetic gases (sevoflurane, desflurane), aliphatic hydrocarbons (butane), esters (ethyl acetate), terpenes (camphor, α-bisabolol), aldehydes (benzaldehyde) and organic acids (benzoic acid), depending on the site. CONCLUSION: For all compounds, the concentrations measured were lower than concentrations known to be harmful in humans. However, results showed that the indoor air of the sampling locations contains a complex mixture of VOCs. Further multicenter studies are required to compare these results. A full understanding of the exposure of healthcare workers and patients

  4. Large-Scale Optimization for Bayesian Inference in Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Willcox, Karen [MIT; Marzouk, Youssef [MIT

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to accelerate sampling.
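
    A toy illustration of the "reduce then sample" idea: run Metropolis against a cheap surrogate in place of an expensive forward model. Both models below are synthetic stand-ins, not the project's PDE-based solvers:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # "Full" forward model (stand-in for an expensive PDE solve) and a cheap
    # reduced-order surrogate that approximates it over the parameter range.
    def forward_full(theta):
        return np.sin(theta) + 0.1 * theta**2           # pretend this is costly

    def forward_reduced(theta):
        return theta - theta**3 / 6 + 0.1 * theta**2    # truncated series

    data, sigma = 0.4, 0.05

    def log_post(theta, forward):
        return -0.5 * ((data - forward(theta)) / sigma) ** 2

    # Metropolis sampling against the reduced model ("reduce then sample").
    theta, chain = 0.0, []
    for it in range(20000):
        prop = theta + 0.3 * rng.normal()
        accept = (np.log(rng.random()) <
                  log_post(prop, forward_reduced) - log_post(theta, forward_reduced))
        if accept:
            theta = prop
        chain.append(theta)

    print("posterior mean (reduced model):", np.mean(chain[2000:]))
    ```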

  5. Intravascular Large B-Cell Lymphoma Presenting as Interstitial Lung Disease

    Directory of Open Access Journals (Sweden)

    Elham Vali Khojeini

    2014-01-01

    Intravascular large B-cell lymphoma (IVLBL) is a rare subtype of diffuse large B-cell lymphoma that resides in the lumen of blood vessels. Patients typically present with nonspecific findings, particularly bizarre neurologic symptoms, fever, and skin lesions. A woman presented with shortness of breath, and a chest CT scan showed diffuse interstitial thickening and ground glass opacities suggestive of an interstitial lung disease. On physical exam she was noted to have splenomegaly. The patient died, and at autopsy she was found to have an IVLBL in her lungs as well as in nearly all of the organs sampled. Although rare, IVLBL should be included in the differential diagnosis of interstitial lung disease, and this case underscores the importance of the continuation of autopsies.

  6. Screening experiments of ecstasy street samples using near infrared spectroscopy.

    Science.gov (United States)

    Sondermann, N; Kovar, K A

    1999-12-20

    Twelve different sets of confiscated ecstasy samples were analysed applying both near infrared spectroscopy in reflectance mode (1100-2500 nm) and high-performance liquid chromatography (HPLC). The sets showed a large variance in composition. A calibration data set was generated based on the theory of factorial designs. It contained 221 N-methyl-3,4-methylenedioxyamphetamine (MDMA) samples, 167 N-ethyl-3,4-methylenedioxyamphetamine (MDE), 111 amphetamine and 106 samples without a controlled substance, hereafter called placebo samples. From this data set, PLS-1 models were calculated and were successfully applied for validation of various external laboratory test sets. The transferability of these results to confiscated tablets is demonstrated here. It is shown that differentiation into placebo, amphetamine and ecstasy samples is possible. Analysis of intact tablets is practicable. However, more reliable results are obtained from pulverised samples. This is due to ill-defined production procedures. The use of mathematically pretreated spectra improves the prediction quality of all the PLS-1 models studied. It is possible to improve discrimination between MDE and MDMA with the help of a second model based on raw spectra. Alternative strategies are briefly discussed.
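
    As a hedged sketch of PLS-1 discrimination of the kind described (one response variable per model), the following uses scikit-learn on synthetic NIR-like spectra; the actual study built its calibration set from a factorial design, which is not reproduced here:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic NIR-like spectra: 200 samples x 350 wavelengths; class 1
    # ("active substance present") adds a broad absorption band.
    n, p = 200, 350
    X = rng.normal(scale=0.02, size=(n, p)) + np.linspace(0.2, 0.8, p)
    y = rng.integers(0, 2, size=n)
    band = np.exp(-0.5 * ((np.arange(p) - 180) / 15.0) ** 2)
    X[y == 1] += 0.1 * band

    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    # PLS-1: a single response variable (class membership coded 0/1).
    pls = PLSRegression(n_components=5).fit(Xtr, ytr)
    pred = (pls.predict(Xte).ravel() > 0.5).astype(int)
    print("accuracy:", (pred == yte).mean())
    ```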

  7. Sample preparation prior to the LC-MS-based metabolomics/metabonomics of blood-derived samples.

    Science.gov (United States)

    Gika, Helen; Theodoridis, Georgios

    2011-07-01

    Blood represents a very important biological fluid and has been the target of continuous and extensive research for diagnostic purposes and for health and drug monitoring. Recently, metabonomics/metabolomics have emerged as a new and promising 'omics' platform that shows potential in biomarker discovery, especially in areas such as disease diagnosis and the assessment of drug efficacy or toxicity. Blood is collected in various establishments under conditions that are not standardized. Next, the samples are prepared and analyzed using different methodologies or tools. When targeted analysis of key molecules (e.g., a drug or its metabolite[s]) is the aim, enforcement of certain measures or additional analyses may correct and harmonize these discrepancies. In omics fields, which rely on holistic analytical approaches, no such rules or tools are available. As a result, comparison or correlation of results or data fusion becomes impractical. However, it becomes evident that such obstacles should be overcome in the near future to allow for large-scale studies that involve the assaying of samples from hundreds of individuals. In such studies the effects of sample handling and preparation become critical, if months of expert work and expensive instrument time are not to be wasted. The present review aims to cover the different methodologies applied to the pretreatment of blood prior to LC-MS metabolomic/metabonomic studies. The article tries to critically compare the methods and highlight issues that need to be addressed.

  8. Further examination of embedded performance validity indicators for the Conners' Continuous Performance Test and Brief Test of Attention in a large outpatient clinical sample.

    Science.gov (United States)

    Sharland, Michael J; Waring, Stephen C; Johnson, Brian P; Taran, Allise M; Rusin, Travis A; Pattock, Andrew M; Palcher, Jeanette A

    2018-01-01

    Assessing test performance validity is a standard clinical practice and although studies have examined the utility of cognitive/memory measures, few have examined attention measures as indicators of performance validity beyond the Reliable Digit Span. The current study further investigates the classification probability of embedded Performance Validity Tests (PVTs) within the Brief Test of Attention (BTA) and the Conners' Continuous Performance Test (CPT-II), in a large clinical sample. This was a retrospective study of 615 patients consecutively referred for comprehensive outpatient neuropsychological evaluation. Non-credible performance was defined two ways: failure on one or more PVTs and failure on two or more PVTs. Classification probability of the BTA and CPT-II into non-credible groups was assessed. Sensitivity, specificity, positive predictive value, and negative predictive value were derived to identify clinically relevant cut-off scores. When using failure on two or more PVTs as the indicator for non-credible responding compared to failure on one or more PVTs, highest classification probability, or area under the curve (AUC), was achieved by the BTA (AUC = .87 vs. .79). CPT-II Omission, Commission, and Total Errors exhibited higher classification probability as well. Overall, these findings corroborate previous findings, extending them to a large clinical sample. BTA and CPT-II are useful embedded performance validity indicators within a clinical battery but should not be used in isolation without other performance validity indicators.
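
    A minimal sketch of how classification probability (AUC) and a specificity-constrained cut-off are typically derived for an embedded validity indicator; the score distributions below are synthetic, not the study's BTA data:

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    rng = np.random.default_rng(0)

    # Synthetic BTA-like scores: credible performers score higher than the
    # non-credible group (labels: 1 = non-credible responding).
    credible = rng.normal(16, 2.5, size=500).clip(0, 20)
    noncred = rng.normal(11, 3.0, size=100).clip(0, 20)
    scores = np.concatenate([credible, noncred])
    labels = np.concatenate([np.zeros(500), np.ones(100)])

    # AUC; scores are negated so that a higher value means "non-credible".
    print("AUC:", roc_auc_score(labels, -scores))

    # Scan the ROC curve for a cut-off that keeps specificity >= 0.90.
    fpr, tpr, thr = roc_curve(labels, -scores)
    ok = fpr <= 0.10
    print("flag scores <=", -thr[ok][-1], "sensitivity:", tpr[ok][-1])
    ```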

  9. Prediction of the number of 14 MeV neutron elastically scattered from large sample of aluminium using Monte Carlo simulation method

    International Nuclear Information System (INIS)

    Husin Wagiran; Wan Mohd Nasir Wan Kadir

    1997-01-01

    In neutron scattering processes, the effect of multiple scattering is to cause an effective increase in the measured cross-sections, due to an increased probability of neutron scattering interactions in the sample. Analysis of how the effective cross-section varies with thickness is difficult because of complex sample geometries and the variation of the scattering cross-section with energy. The Monte Carlo method is one possible method for treating multiple scattering processes in an extended sample. In this method many approximations have to be made, and accurate microscopic cross-section data are needed at various angles. In the present work, a Monte Carlo simulation programme suitable for a small computer was developed. The programme was capable of predicting the number of neutrons scattered from aluminium samples of various thicknesses at all angles between 0° and 360° in 10° increments. In order to keep the programme simple and capable of being run on a microcomputer in reasonable time, the calculations were done in a two-dimensional coordinate system. The numbers of neutrons predicted by this model show good agreement with previous experimental results.
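
    A minimal two-dimensional sketch of the simulation idea: track neutrons through a slab, sampling exponential free paths and scattering angles, and tally escape directions in 10° bins. The cross-sections are illustrative constants, not the aluminium data used in the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative constants (not the paper's data): macroscopic total and
    # scattering cross-sections of the slab (1/cm) and slab thickness (cm).
    sigma_t, sigma_s, thickness = 0.10, 0.08, 4.0

    def track(n_neutrons=20000):
        counts = np.zeros(36, dtype=int)       # 10-degree angular bins
        for _ in range(n_neutrons):
            x, direction = 0.0, 0.0            # depth (cm), angle (rad)
            while True:
                x += rng.exponential(1.0 / sigma_t) * np.cos(direction)
                if x < 0.0 or x > thickness:   # neutron escaped the slab
                    counts[int(np.degrees(direction) % 360) // 10] += 1
                    break
                if rng.random() > sigma_s / sigma_t:
                    break                      # absorbed inside the slab
                # Isotropic scattering in 2D: a crude stand-in for the real
                # angle-dependent elastic cross-section of aluminium.
                direction = rng.uniform(0.0, 2.0 * np.pi)
        return counts

    print(track())
    ```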

  10. Analysis of submicrogram samples by INAA

    Energy Technology Data Exchange (ETDEWEB)

    Lindstrom, D J [National Aeronautics and Space Administration, Houston, TX (USA). Lyndon B. Johnson Space Center

    1990-12-20

    Procedures have been developed to increase the sensitivity of instrumental neutron activation analysis (INAA) so that cosmic-dust samples weighing only 10⁻⁹-10⁻⁷ g are routinely analyzed for a sizable number of elements. The primary differences from standard techniques are: (1) irradiation of the samples is much more intense, (2) gamma-ray assay of the samples is done using long counting times and large Ge detectors that are operated in an excellent low-background facility, (3) specially prepared glass standards are used, (4) samples are too small to be weighed routinely and concentrations must be obtained indirectly, (5) sample handling is much more difficult, and contamination of small samples with normally insignificant amounts of contaminants is difficult to prevent. In spite of the difficulties, INAA analyses have been done on 15 cosmic-dust particles and a large number of other stratospheric particles. Two-sigma detection limits for some elements are in the range of femtograms (10⁻¹⁵ g), e.g. Co=11, Sc=0.9, Sm=0.2. A particle weighing just 0.2 ng was analyzed, obtaining abundances with relative analytical uncertainties of less than 10% for four elements (Fe, Co, Ni and Sc), which were sufficient to allow identification of the particle as chondritic interplanetary dust. Larger samples allow abundances of twenty or more elements to be obtained. (orig.)

  11. Detailed deposition density maps constructed by large-scale soil sampling for gamma-ray emitting radioactive nuclides from the Fukushima Dai-ichi Nuclear Power Plant accident.

    Science.gov (United States)

    Saito, Kimiaki; Tanihata, Isao; Fujiwara, Mamoru; Saito, Takashi; Shimoura, Susumu; Otsuka, Takaharu; Onda, Yuichi; Hoshi, Masaharu; Ikeuchi, Yoshihiro; Takahashi, Fumiaki; Kinouchi, Nobuyuki; Saegusa, Jun; Seki, Akiyuki; Takemiya, Hiroshi; Shibata, Tokushi

    2015-01-01

    Soil deposition density maps of gamma-ray emitting radioactive nuclides from the Fukushima Dai-ichi Nuclear Power Plant (NPP) accident were constructed on the basis of results from large-scale soil sampling. In total 10,915 soil samples were collected at 2168 locations. Gamma rays emitted from the samples were measured by Ge detectors and analyzed using a reliable unified method. The determined radioactivity was corrected to that of June 14, 2011 by considering the intrinsic decay constant of each nuclide. Finally the deposition maps were created for (134)Cs, (137)Cs, (131)I, (129m)Te and (110m)Ag. The (134)Cs/(137)Cs radioactivity ratio was almost constant at 0.91 regardless of the location of soil sampling. The (131)I/(137)Cs and (129m)Te/(137)Cs radioactivity ratios were relatively high in the regions south of the Fukushima NPP site. Effective doses for 50 y after the accident were evaluated for external and inhalation exposures due to the observed radioactive nuclides. The radiation doses from radioactive cesium were found to be much higher than those from the other radioactive nuclides. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
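
    The decay correction mentioned above is simple exponential arithmetic. A hedged sketch, using standard half-life values (not taken from the paper) to refer a measured activity to the June 14, 2011 reference date:

    ```python
    import numpy as np
    from datetime import date

    # Standard half-lives (days) of the mapped nuclides.
    HALF_LIFE = {"Cs-134": 753.0, "Cs-137": 10976.0, "I-131": 8.02,
                 "Te-129m": 33.6, "Ag-110m": 249.8}

    def decay_correct(activity_bq, nuclide, measured_on,
                      reference=date(2011, 6, 14)):
        """Refer a measured activity back (or forward) to the reference date."""
        lam = np.log(2) / HALF_LIFE[nuclide]   # decay constant, 1/day
        dt = (measured_on - reference).days    # days after the reference date
        return activity_bq * np.exp(lam * dt)

    # A Cs-134 sample measured 90 days after the reference date: the
    # activity on June 14 was higher than the measured 1000 Bq.
    print(decay_correct(1000.0, "Cs-134", date(2011, 9, 12)))  # ~1086 Bq
    ```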

  12. Successful application of FTA Classic Card technology and use of bacteriophage phi29 DNA polymerase for large-scale field sampling and cloning of complete maize streak virus genomes.

    Science.gov (United States)

    Owor, Betty E; Shepherd, Dionne N; Taylor, Nigel J; Edema, Richard; Monjane, Adérito L; Thomson, Jennifer A; Martin, Darren P; Varsani, Arvind

    2007-03-01

    Leaf samples from 155 maize streak virus (MSV)-infected maize plants were collected from 155 farmers' fields in 23 districts in Uganda in May/June 2005 by leaf-pressing infected samples onto FTA Classic Cards. Viral DNA was successfully extracted from cards stored at room temperature for 9 months. The diversity of 127 MSV isolates was analysed by PCR-generated RFLPs. Six representative isolates having different RFLP patterns and causing either severe, moderate or mild disease symptoms, were chosen for amplification from FTA cards by bacteriophage phi29 DNA polymerase using the TempliPhi system. Full-length genomes were inserted into a cloning vector using a unique restriction enzyme site, and sequenced. The 1.3-kb PCR product amplified directly from FTA-eluted DNA and used for RFLP analysis was also cloned and sequenced. Comparison of cloned whole genome sequences with those of the original PCR products indicated that the correct virus genome had been cloned and that no errors were introduced by the phi29 polymerase. This is the first successful large-scale application of FTA card technology to the field, and illustrates the ease with which large numbers of infected samples can be collected and stored for downstream molecular applications such as diversity analysis and cloning of potentially new virus genomes.

  13. The Sex, Age, and Me study: recruitment and sampling for a large mixed-methods study of sexual health and relationships in an older Australian population.

    Science.gov (United States)

    Lyons, Anthony; Heywood, Wendy; Fileborn, Bianca; Minichiello, Victor; Barrett, Catherine; Brown, Graham; Hinchliff, Sharron; Malta, Sue; Crameri, Pauline

    2017-09-01

    Older people are often excluded from large studies of sexual health, as it is assumed that they are not having sex or are reluctant to talk about sensitive topics and are therefore difficult to recruit. We outline the sampling and recruitment strategies from a recent study on sexual health and relationships among older people. Sex, Age and Me was a nationwide Australian study that examined sexual health, relationship patterns, safer-sex practices and STI knowledge of Australians aged 60 years and over. The study used a mixed-methods approach to establish baseline levels of knowledge and to develop deeper insights into older adults' understandings and practices relating to sexual health. Data collection took place in 2015, with 2137 participants completing a quantitative survey and 53 participating in one-on-one semi-structured interviews. As the feasibility of this type of study has been largely untested until now, we provide detailed information on the study's recruitment strategies and methods. We also compare key characteristics of our sample with national estimates to assess its degree of representativeness. This study provides evidence to challenge the assumption that older people will not take part in sexual health-related research and details a novel and successful way to recruit participants in this area.

  14. Identifying temporal bottlenecks for the conservation of large-bodied fishes: Lake Sturgeon (Acipenser fulvescens) show highly restricted movement and habitat use over-winter

    Directory of Open Access Journals (Sweden)

    Donnette Thayer

    2017-04-01

    The relationship between species' size and home range size has been well studied. In practice, home range may provide a good surrogate of the broad spatial coverage needed for species conservation; however, many species show restricted movement during critical life stages, such as breeding and over-wintering. This suggests the existence of either a behavioral or habitat-mediated 'temporal bottleneck,' where restricted or sedentary movement can make populations more susceptible to harm during specific life stages. Here, we study over-winter movement and habitat use of Lake Sturgeon (Acipenser fulvescens), the largest freshwater fish in North America. We monitored over-winter movement of 86 fish using a hydro-acoustic receiver array in the South Saskatchewan River, Canada. Overall, 20 fish remained within our study system throughout the winter. Lake Sturgeon showed strong aggregation and sedentary movement over-winter, demonstrating a temporal bottleneck. Movement was highly restricted during ice-on periods (ranging from 0.9 km/day in November and April to 0.2 km/day in mid-November to mid-March), with Lake Sturgeon seeking deeper, slower pools. We also show that Lake Sturgeon have strong aggregation behavior, where distance to conspecifics decreased (from 575 to 313 m) in preparation for and during ice-on periods. Although the Lake Sturgeon we studied had access to 1100 kilometers of unfragmented riverine habitat, we show that during the over-winter period Lake Sturgeon utilized a single, deep pool (<0.1% of available habitat). The temporal discrepancy between mobile and sedentary behaviors in Lake Sturgeon suggests that adaptive management is needed, with a more localized focus during periods of temporal bottlenecks, even for large-bodied species.

  15. A low-volume cavity ring-down spectrometer for sample-limited applications

    Science.gov (United States)

    Stowasser, C.; Farinas, A. D.; Ware, J.; Wistisen, D. W.; Rella, C.; Wahl, E.; Crosson, E.; Blunier, T.

    2014-08-01

    In atmospheric and environmental sciences, optical spectrometers are used for the measurements of greenhouse gas mole fractions and the isotopic composition of water vapor or greenhouse gases. The large sample cell volumes (tens of milliliters to several liters) in commercially available spectrometers constrain the usefulness of such instruments for applications that are limited in sample size and/or need to track fast variations in the sample stream. In an effort to make spectrometers more suitable for sample-limited applications, we developed a low-volume analyzer capable of measuring mole fractions of methane and carbon monoxide based on a commercial cavity ring-down spectrometer. The instrument has a small sample cell (9.6 ml) and can selectively be operated at a sample cell pressure of 140, 45, or 20 Torr (effective internal volume of 1.8, 0.57, and 0.25 ml). We present the new sample cell design and the flow path configuration, which are optimized for small sample sizes. To quantify the spectrometer's usefulness for sample-limited applications, we determine the renewal rate of sample molecules within the low-volume spectrometer. Furthermore, we show that the performance of the low-volume spectrometer matches the performance of the standard commercial analyzers by investigating linearity, precision, and instrumental drift.
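
    The quoted effective internal volumes follow directly from scaling the geometric cell volume by the ratio of cell pressure to ambient pressure. A short check of that arithmetic, with an assumed (illustrative) sample flow for the renewal-time estimate:

    ```python
    # Effective internal volume: cell volume scaled from cell pressure to
    # standard pressure (760 Torr). Values taken from the abstract above.
    cell_volume_ml = 9.6
    for p_torr in (140, 45, 20):
        v_eff = cell_volume_ml * p_torr / 760.0
        print(f"{p_torr:3d} Torr -> effective volume {v_eff:.2f} ml")

    # Renewal time of the cell contents at an assumed sample flow of
    # 1.5 ml/min (illustrative; not a figure from the paper).
    flow_ml_min = 1.5
    print(f"renewal time at 20 Torr: "
          f"{cell_volume_ml * 20 / 760.0 / flow_ml_min:.2f} min")
    ```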

  16. Problematic Social Media Use: Results from a Large-Scale Nationally Representative Adolescent Sample.

    Science.gov (United States)

    Bányai, Fanni; Zsila, Ágnes; Király, Orsolya; Maraz, Aniko; Elekes, Zsuzsanna; Griffiths, Mark D; Andreassen, Cecilie Schou; Demetrovics, Zsolt

    2017-01-01

    Despite social media use being one of the most popular activities among adolescents, prevalence estimates of (problematic) social media use among teenage samples are lacking in the field. The present study surveyed a nationally representative Hungarian sample comprising 5,961 adolescents as part of the European School Survey Project on Alcohol and Other Drugs (ESPAD). Using the Bergen Social Media Addiction Scale (BSMAS) and based on latent profile analysis, 4.5% of the adolescents belonged to the at-risk group, reporting low self-esteem, a high level of depression symptoms, and elevated social media use. Results also demonstrated that the BSMAS has appropriate psychometric properties. It is concluded that adolescents at risk of problematic social media use should be targeted by school-based prevention and intervention programs.

  17. Major and trace elements in geological samples from Itingussu Basin in Coroa-Grande, RJ

    Energy Technology Data Exchange (ETDEWEB)

    Araripe, Denise R. [Universidade Federal Fluminense, Niteroi, RJ (Brazil). Inst. de Quimica. Dept. de Quimica Analitica; E-mail: drararipe@vm.uff.br; Bellido, Alfredo V.B.; Canesim, Fatima [Universidade Federal Fluminense, Niteroi, RJ (Brazil). Inst. de Quimica. Dept. de Fisico-Quimica; Patchineelam, Sambasiva R.; Machdo, Edimar [Universidade Federal Fluminense, Niteroi, RJ (Brazil). Dept. de Geoquimica; Bellido, Luis F. [Instituto de Engenharia Nuclear IEN, Rio de Janeiro, RJ (Brazil); Vasconcelos, Marina B.A. [Instituto de Pesquisas Energeticas e Nucleares (IPEN), Sao Paulo, SP (Brazil)

    2005-07-01

    The goal of the present work was to characterize soil and mangrove sediment samples from the Itingussu river drainage basin, with a view to investigating its lithological signature. This small drainage ends in an area not yet largely impacted, with respect to the elements studied here, by other sources such as industrial and domestic waste. The results showed some enrichment of U, Th and some light rare earth elements in the Itingussu sediment sample. This represents the leucocratic rock signature, according to the normalization of the data by upper crustal mean values. (author)

  18. Major and trace elements in geological samples from Itingussu Basin in Coroa-Grande, RJ

    International Nuclear Information System (INIS)

    Araripe, Denise R.; Vasconcelos, Marina B.A.

    2005-01-01

    The goal of the present work was to characterize soil and mangrove sediment samples from the Itingussu river drainage basin, with a view to investigating its lithological signature. This small drainage ends in an area not yet largely impacted, with respect to the elements studied here, by other sources such as industrial and domestic waste. The results showed some enrichment of U, Th and some light rare earth elements in the Itingussu sediment sample. This represents the leucocratic rock signature, according to the normalization of the data by upper crustal mean values. (author)

  19. What Types of Pornography Do People Find Arousing and Do They Cluster? Assessing Types and Categories of Pornography in a Large-Scale Online Sample.

    Science.gov (United States)

    Hald, Gert Martin; Štulhofer, Aleksandar

    2016-09-01

    Previous research on exposure to different types of pornography has primarily relied on analyses of millions of search terms and histories or on user exposure patterns within a given time period rather than the self-reported frequency of consumption. Further, previous research has almost exclusively relied on theoretical or ad hoc overarching categorizations of different types of pornography, when investigating patterns of pornography exposure, rather than latent structure analyses of these exposure patterns. In contrast, using a large sample of 18- to 40-year-old heterosexual and nonheterosexual Croatian men and women, this study investigated the self-reported frequency of using 27 different types of pornography and statistically explored their latent structures. The results showed substantial differences in consumption patterns across gender and sexual orientation. However, latent structure analyses of the 27 different types of pornography assessed suggested that although several categories of consumption were gender and sexual orientation specific, common categories across the different types of pornography could be established. Based on this finding, a five-item scale was proposed to indicate the use of nonmainstream (paraphilic) pornographic content, as this type of pornography has often been targeted in previous research. To the best of our knowledge, no similar measurement tool has been proposed before.

  20. Reliability and statistical power analysis of cortical and subcortical FreeSurfer metrics in a large sample of healthy elderly.

    Science.gov (United States)

    Liem, Franziskus; Mérillat, Susan; Bezzola, Ladina; Hirsiger, Sarah; Philipp, Michel; Madhyastha, Tara; Jäncke, Lutz

    2015-03-01

    FreeSurfer is a tool to quantify cortical and subcortical brain anatomy automatically and noninvasively. Previous studies have reported reliability and statistical power analyses in relatively small samples or only selected one aspect of brain anatomy. Here, we investigated reliability and statistical power of cortical thickness, surface area, volume, and the volume of subcortical structures in a large sample (N=189) of healthy elderly subjects (64+ years). Reliability (intraclass correlation coefficient) of cortical and subcortical parameters is generally high (cortical: ICCs>0.87, subcortical: ICCs>0.95). Surface-based smoothing increases reliability of cortical thickness maps, while it decreases reliability of cortical surface area and volume. Nevertheless, statistical power of all measures benefits from smoothing. When aiming to detect a 10% difference between groups, the number of subjects required to test effects with sufficient power over the entire cortex varies between cortical measures (cortical thickness: N=39, surface area: N=21, volume: N=81; 10 mm smoothing, power=0.8, α=0.05). For subcortical regions this number is between 16 and 76 subjects, depending on the region. We also demonstrate the advantage of within-subject designs over between-subject designs. Furthermore, we publicly provide a tool that allows researchers to perform a priori power analysis and sensitivity analysis to help evaluate previously published studies and to design future studies with sufficient statistical power. Copyright © 2014 Elsevier Inc. All rights reserved.
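
    An a priori power analysis of the kind described reduces to choosing an effect size and solving for N. A hedged sketch with statsmodels, where the between-subject standard deviation is an assumed value rather than one reported in the paper:

    ```python
    from statsmodels.stats.power import TTestIndPower

    # Detecting a 10% between-group difference in, e.g., cortical thickness.
    # The effect size depends on the (assumed) between-subject variability.
    mean_thickness_mm = 2.5
    sd_mm = 0.15                    # assumed; not a value from the paper
    effect_size = 0.10 * mean_thickness_mm / sd_mm   # Cohen's d

    n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                              power=0.8, alpha=0.05)
    print(f"~{n_per_group:.0f} subjects per group")
    ```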

  1. Sampling scales define occupancy and underlying occupancy-abundance relationships in animals.

    Science.gov (United States)

    Steenweg, Robin; Hebblewhite, Mark; Whittington, Jesse; Lukacs, Paul; McKelvey, Kevin

    2018-01-01

    Occupancy-abundance (OA) relationships are a foundational ecological phenomenon and field of study, and occupancy models are increasingly used to track population trends and understand ecological interactions. However, these two fields of ecological inquiry remain largely isolated, despite growing appreciation of the importance of integration. For example, using occupancy models to infer trends in abundance is predicated on positive OA relationships. Many occupancy studies collect data that violate geographical closure assumptions due to the choice of sampling scales and application to mobile organisms, which may change how occupancy and abundance are related. Little research, however, has explored how different occupancy sampling designs affect OA relationships. We develop a conceptual framework for understanding how sampling scales affect the definition of occupancy for mobile organisms, which drives OA relationships. We explore how spatial and temporal sampling scales, and the choice of sampling unit (areal vs. point sampling), affect OA relationships. We develop predictions using simulations, and test them using empirical occupancy data from remote cameras on 11 medium-large mammals. Surprisingly, our simulations demonstrate that when using point sampling, OA relationships are unaffected by spatial sampling grain (i.e., cell size). In contrast, when using areal sampling (e.g., species atlas data), OA relationships are affected by spatial grain. Furthermore, OA relationships are also affected by temporal sampling scales, where the curvature of the OA relationship increases with temporal sampling duration. Our empirical results support these predictions, showing that at any given abundance, the spatial grain of point sampling does not affect occupancy estimates, but longer surveys do increase occupancy estimates. For rare species (low occupancy), estimates of occupancy will quickly increase with longer surveys, even while abundance remains constant. Our results
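
    The point-versus-areal contrast can be reproduced in a toy simulation: point sampling estimates the fraction of the landscape covered by home ranges regardless of any grid, while areal occupancy grows with cell size. All parameters below are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy landscape: home-range centers on a 100 x 100 unit square; a site
    # is "occupied" if any home range (radius r) overlaps it.
    n_animals, radius = 50, 3.0
    centers = rng.uniform(0, 100, size=(n_animals, 2))

    def point_occupancy(n_points=2000):
        pts = rng.uniform(0, 100, size=(n_points, 2))
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        return (d.min(axis=1) < radius).mean()

    def areal_occupancy(cell):
        # A grid cell is occupied if any home-range center falls within
        # `radius` of it (approximated by padding each center's cell range).
        m = int(100 / cell)
        occ = np.zeros((m, m), dtype=bool)
        for cx, cy in centers:
            i0 = max(int((cx - radius) // cell), 0)
            i1 = min(int((cx + radius) // cell), m - 1)
            j0 = max(int((cy - radius) // cell), 0)
            j1 = min(int((cy + radius) // cell), m - 1)
            occ[i0:i1 + 1, j0:j1 + 1] = True
        return occ.mean()

    print("point-sample occupancy:", point_occupancy())
    for cell in (5, 10, 25):
        print(f"areal occupancy, {cell}-unit cells:", areal_occupancy(cell))
    ```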

  2. Acquisition and preparation of specimens of rock for large-scale testing

    International Nuclear Information System (INIS)

    Watkins, D.J.

    1981-01-01

    The techniques used for acquisition and preparation of large specimens of rock for laboratory testing depend upon the location of the specimen, the type of rock and the equipment available at the sampling site. Examples are presented to illustrate sampling and preparation techniques used for two large cylindrical samples of granitic material, one pervasively fractured and one containing a single fracture

  3. An inversion-relaxation approach for sampling stationary points of spin model Hamiltonians

    International Nuclear Information System (INIS)

    Hughes, Ciaran; Mehta, Dhagash; Wales, David J.

    2014-01-01

    Sampling the stationary points of a complicated potential energy landscape is a challenging problem. Here, we introduce a sampling method based on relaxation from stationary points of the highest index of the Hessian matrix. We illustrate how this approach can find all the stationary points for potentials or Hamiltonians bounded from above, which includes a large class of important spin models, and we show that it is far more efficient than previous methods. For potentials unbounded from above, the relaxation part of the method is still efficient in finding minima and transition states, which are usually the primary focus of attention for atomistic systems
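
    A toy version of the general recipe (solve ∇H = 0 from many starting points and classify each stationary point by its Hessian index, i.e. the number of negative Hessian eigenvalues), applied here to a simple two-dimensional surface rather than a spin Hamiltonian:

    ```python
    import numpy as np
    from scipy.optimize import root

    # Toy 2D "landscape" standing in for a spin-model Hamiltonian:
    # H(x, y) = x^4 - 2x^2 + y^2.
    def grad(p):
        x, y = p
        return [4.0 * x**3 - 4.0 * x, 2.0 * y]

    def hessian_index(p):
        x, _ = p
        H = np.array([[12.0 * x**2 - 4.0, 0.0], [0.0, 2.0]])
        return int((np.linalg.eigvalsh(H) < 0).sum())  # negative eigenvalues

    rng = np.random.default_rng(0)
    found = {}
    for _ in range(200):                      # many random starting points
        sol = root(grad, rng.uniform(-2, 2, size=2))
        if sol.success:
            found[tuple(np.round(sol.x, 4))] = hessian_index(sol.x)

    for point, index in sorted(found.items()):
        print(f"stationary point {point}, Hessian index {index}")
    ```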

  4. Screen-Space Normal Distribution Function Caching for Consistent Multi-Resolution Rendering of Large Particle Data

    KAUST Repository

    Ibrahim, Mohamed

    2017-08-28

    Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.

  5. Screen-Space Normal Distribution Function Caching for Consistent Multi-Resolution Rendering of Large Particle Data

    KAUST Repository

    Ibrahim, Mohamed; Wickenhauser, Patrick; Rautek, Peter; Reina, Guido; Hadwiger, Markus

    2017-01-01

    Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.
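    The core idea of an S-NDF can be illustrated at the scale of a single pixel: bin the many sub-pixel surface normals once, then answer re-lighting queries by integrating shading over the stored bins instead of revisiting the normals. The binning scheme and Lambertian shading below are our simplifications, not the paper's renderer:

```python
import numpy as np

# One-pixel illustration of a cached normal distribution function (NDF).
rng = np.random.default_rng(1)

# 100k sub-pixel normals clustered around +z (normalized random perturbations).
normals = rng.normal([0.0, 0.0, 3.0], 1.0, size=(100_000, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

# Cache step: histogram the normals over (theta, phi) bins -> the "S-NDF".
theta = np.arccos(np.clip(normals[:, 2], -1, 1))
phi = np.arctan2(normals[:, 1], normals[:, 0])
hist, te, pe = np.histogram2d(theta, phi, bins=[16, 32],
                              range=[[0, np.pi], [-np.pi, np.pi]])
weights = hist / hist.sum()

# Bin-center directions, reused for every new light without re-rendering.
tc = 0.5 * (te[:-1] + te[1:])
pc = 0.5 * (pe[:-1] + pe[1:])
T, P = np.meshgrid(tc, pc, indexing="ij")
dirs = np.stack([np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)], -1)

def relight(light):
    # Pixel intensity under a new light: integrate diffuse shading over the NDF.
    light = np.asarray(light, dtype=float)
    light /= np.linalg.norm(light)
    lambert = np.clip(dirs @ light, 0, None)
    return float((weights * lambert).sum())

print(relight([0, 0, 1]), relight([1, 0, 1]))  # cheap re-lighting queries
```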

  6. Advancing the Use of Passive Sampling in Risk Assessment and Management of Sediments Contaminated with Hydrophobic Organic Chemicals: Results of an International Ex Situ Passive Sampling Interlaboratory Comparison.

    Science.gov (United States)

    Jonker, Michiel T O; van der Heijden, Stephan A; Adelman, Dave; Apell, Jennifer N; Burgess, Robert M; Choi, Yongju; Fernandez, Loretta A; Flavetta, Geanna M; Ghosh, Upal; Gschwend, Philip M; Hale, Sarah E; Jalalizadeh, Mehregan; Khairy, Mohammed; Lampi, Mark A; Lao, Wenjian; Lohmann, Rainer; Lydy, Michael J; Maruya, Keith A; Nutile, Samuel A; Oen, Amy M P; Rakowska, Magdalena I; Reible, Danny; Rusina, Tatsiana P; Smedes, Foppe; Wu, Yanwen

    2018-03-20

    This work presents the results of an international interlaboratory comparison on ex situ passive sampling in sediments. The main objectives were to map the state of the science in passively sampling sediments, identify sources of variability, provide recommendations and practical guidance for standardized passive sampling, and advance the use of passive sampling in regulatory decision making by increasing confidence in the use of the technique. The study was performed by a consortium of 11 laboratories and included experiments with 14 passive sampling formats on 3 sediments for 25 target chemicals (PAHs and PCBs). The resulting overall interlaboratory variability was large (a factor of ∼10), but standardization of methods halved this variability. The remaining variability was primarily due to factors not related to passive sampling itself, i.e., sediment heterogeneity and analytical chemistry. Excluding the latter source of variability, by performing all analyses in one laboratory, showed that passive sampling results can have a high precision and a very low intermethod variability. It is concluded that passive sampling, irrespective of the specific method used, is fit for implementation in risk assessment and management of contaminated sediments, provided that method setup and performance, as well as chemical analyses, are quality-controlled.

  7. What Are We Drinking? Beverages Shown in Adolescents' Favorite Television Shows.

    Science.gov (United States)

    Eisenberg, Marla E; Larson, Nicole I; Gollust, Sarah E; Neumark-Sztainer, Dianne

    2017-05-01

    Media use has been shown to contribute to poor dietary intake; however, little attention has been paid to programming content. The portrayal of health behaviors in television (TV) programming contributes to social norms among viewers, which have been shown to influence adolescent behavior. This study reports on a content analysis of beverages shown in a sample of TV shows popular with a large, diverse group of adolescents, with attention to the types of beverages and differences across shows and characters. Favorite TV shows were assessed in an in-school survey in 2010. Three episodes of each of the top 25 shows were analyzed, using a detailed coding instrument. Beverage incidents (ie, beverage shown or described) were recorded. Beverage types included milk, sugar-sweetened beverages (SSBs), diet beverages, juice, water, alcoholic drinks, and coffee. Characters were coded with regard to gender, age group, race, and weight status. Shows were rated for a youth, general, or adult audience. χ² tests were used to compare the prevalence of each type of beverage across show ratings (youth, general, adult), and to compare characteristics of those involved in each type of beverage incident. Beverage incidents were common (mean=7.4 incidents/episode, range=0 to 25). Alcohol was the most commonly shown (38.8%); milk (5.8%) and juice (5.8%) were least common; 11.0% of incidents included SSBs. Significant differences in all types of beverage were found across characters' age groups. Almost half of young adults' (49.2%) or adults' (42.0%) beverage incidents included alcohol. Beverages are often portrayed on TV shows viewed by adolescents, and common beverages (alcohol, SSBs) may have adverse consequences for health. The portrayal of these beverages likely contributes to social norms regarding their desirability; nutrition and health professionals should talk with youth about TV portrayals to prevent the adoption of unhealthy beverage behaviors. Copyright © 2017 Academy of Nutrition and Dietetics.

  8. What are we drinking? Beverages shown in adolescents’ favorite TV shows

    Science.gov (United States)

    Eisenberg, Marla E.; Larson, Nicole I.; Gollust, Sarah E.; Neumark-Sztainer, Dianne

    2016-01-01

    Background Media use has been shown to contribute to poor dietary intake; however, little attention has been paid to programming content. The portrayal of health behaviors in television (TV) programming contributes to social norms among viewers, which have been shown to influence adolescent behavior. Objective This study reports on a content analysis of beverages shown in a sample of TV shows popular with a large, diverse group of adolescents, with attention to the types of beverages and differences across shows and characters. Design Favorite TV shows were assessed in an in-school survey in 2010. Three episodes of each of the top 25 shows were analyzed using a detailed coding instrument. Key measures Beverage incidents (i.e. beverage shown or described) were recorded. Beverage types included milk, sugar-sweetened beverages (SSB), diet beverages, juice, water, alcoholic drinks and coffee. Characters were coded with regard to gender, age group, race, and weight status. Shows were rated for a youth, general or adult audience. Statistical analyses Chi-square tests were used to compare the prevalence of each type of beverage across show ratings (youth, general, adult), and to compare characteristics of those involved in each type of beverage incident. Results Beverage incidents were common (mean=7.4 incidents/episode, range=0–25). Alcohol was the most commonly shown (38.8%); milk (5.8%) and juice (5.8%) were least common; 11.0% of incidents included SSB. Significant differences in all types of beverage were found across age groups. Almost half of young adults' (49.2%) or adults' (42.0%) beverage incidents included alcohol. Conclusions Beverages are often portrayed on TV shows viewed by adolescents, and common beverages (alcohol, SSB) may have adverse consequences for health. The portrayal of these beverages likely contributes to social norms regarding their desirability; nutrition and health professionals should talk with youth about TV portrayals to prevent the adoption of unhealthy beverage behaviors.

  9. Network and adaptive sampling

    CERN Document Server

    Chaudhuri, Arijit

    2014-01-01

    Combining the two statistical techniques of network sampling and adaptive sampling, this book illustrates the advantages of using them in tandem to effectively capture sparsely located elements in unknown pockets. It shows how network sampling is a reliable guide in capturing inaccessible entities through linked auxiliaries. The text also explores how adaptive sampling is strengthened in information content through subsidiary sampling with devices to mitigate unmanageable expanding sample sizes. Empirical data illustrates the applicability of both methods.

  10. Mind-Body Practice and Body Weight Status in a Large Population-Based Sample of Adults.

    Science.gov (United States)

    Camilleri, Géraldine M; Méjean, Caroline; Bellisle, France; Hercberg, Serge; Péneau, Sandrine

    2016-04-01

    In industrialized countries characterized by a high prevalence of obesity and chronic stress, mind-body practices such as yoga or meditation may facilitate body weight control. However, virtually no data are available to ascertain whether practicing mind-body techniques is associated with weight status. The purpose of this study is to examine the relationship between the practice of mind-body techniques and weight status in a large population-based sample of adults. A total of 61,704 individuals aged ≥18 years participating in the NutriNet-Santé study (2009-2014) were included in this cross-sectional analysis conducted in 2014. Data on mind-body practices were collected, as well as self-reported weight and height. The association between the practice of mind-body techniques and weight status was assessed using multiple linear and multinomial logistic regression models adjusted for sociodemographic, lifestyle, and dietary factors. After adjusting for sociodemographic and lifestyle factors, regular users of mind-body techniques were less likely to be overweight (OR=0.68, 95% CI=0.63, 0.74) or obese (OR=0.55, 95% CI=0.50, 0.61) than never users. In addition, regular users had a lower BMI than never users (-3.19%, 95% CI=-3.71, -2.68). These data provide novel information about an inverse relationship between mind-body practice and weight status. If causal links were demonstrated in further prospective studies, such practice could be fostered in obesity prevention and treatment. Copyright © 2016 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  11. Problematic Social Media Use: Results from a Large-Scale Nationally Representative Adolescent Sample.

    Directory of Open Access Journals (Sweden)

    Fanni Bányai

    Full Text Available Despite social media use being one of the most popular activities among adolescents, prevalence estimates of problematic social media use among teenage samples are lacking in the field. The present study surveyed a nationally representative Hungarian sample comprising 5,961 adolescents as part of the European School Survey Project on Alcohol and Other Drugs (ESPAD). Using the Bergen Social Media Addiction Scale (BSMAS) and based on latent profile analysis, 4.5% of the adolescents belonged to the at-risk group, and reported low self-esteem, high level of depression symptoms, and elevated social media use. Results also demonstrated that the BSMAS has appropriate psychometric properties. It is concluded that adolescents at risk of problematic social media use should be targeted by school-based prevention and intervention programs.

  12. The iPSYCH2012 case-cohort sample

    DEFF Research Database (Denmark)

    Pedersen, C B; Bybjerg-Grauholm, J; Pedersen, M G

    2018-01-01

    The Integrative Psychiatric Research (iPSYCH) consortium has established a large Danish population-based Case-Cohort sample (iPSYCH2012) aimed at unravelling the genetic and environmental architecture of severe mental disorders. The iPSYCH2012 sample is nested within the entire Danish population...

  13. A novel storage system for cryoEM samples.

    Science.gov (United States)

    Scapin, Giovanna; Prosise, Winifred W; Wismer, Michael K; Strickland, Corey

    2017-07-01

    We present here a new CryoEM grid boxes storage system designed to simplify sample labeling, tracking and retrieval. The system is based on the crystal pucks widely used by the X-ray crystallographic community for storage and shipping of crystals. This system is suitable for any cryoEM laboratory, but especially for large facilities that will need accurate tracking of large numbers of samples coming from different sources. Copyright © 2017. Published by Elsevier Inc.

  14. The Lyman alpha reference sample. II. Hubble space telescope imaging results, integrated properties, and trends

    Energy Technology Data Exchange (ETDEWEB)

    Hayes, Matthew; Östlin, Göran; Duval, Florent; Sandberg, Andreas; Guaita, Lucia; Melinder, Jens; Rivera-Thorsen, Thøger [Department of Astronomy, Oskar Klein Centre, Stockholm University, AlbaNova University Centre, SE-106 91 Stockholm (Sweden); Adamo, Angela [Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany); Schaerer, Daniel [Université de Toulouse, UPS-OMP, IRAP, F-31000 Toulouse (France); Verhamme, Anne; Orlitová, Ivana [Geneva Observatory, University of Geneva, 51 Chemin des Maillettes, CH-1290 Versoix (Switzerland); Mas-Hesse, J. Miguel; Otí-Floranes, Héctor [Centro de Astrobiología (CSIC-INTA), Departamento de Astrofísica, P.O. Box 78, E-28691 Villanueva de la Cañada (Spain); Cannon, John M.; Pardy, Stephen [Department of Physics and Astronomy, Macalester College, 1600 Grand Avenue, Saint Paul, MN 55105 (United States); Atek, Hakim [Laboratoire d'Astrophysique, École Polytechnique Fédérale de Lausanne (EPFL), Observatoire, CH-1290 Sauverny (Switzerland); Kunth, Daniel [Institut d'Astrophysique de Paris, UMR 7095, CNRS and UPMC, 98 bis Bd Arago, F-75014 Paris (France); Laursen, Peter [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Maries Vej 30, DK-2100 Copenhagen (Denmark); Herenz, E. Christian, E-mail: matthew@astro.su.se [Leibniz-Institut für Astrophysik (AIP), An der Sternwarte 16, D-14482 Potsdam (Germany)

    2014-02-10

    We report new results regarding the Lyα output of galaxies, derived from the Lyman Alpha Reference Sample, and focused on Hubble Space Telescope imaging. For 14 galaxies we present intensity images in Lyα, Hα, and UV, and maps of Hα/Hβ, Lyα equivalent width (EW), and Lyα/Hα. We present Lyα and UV radial light profiles and show they are well-fitted by Sérsic profiles, but Lyα profiles show indices systematically lower than those of the UV (n ≈ 1-2 instead of ≳ 4). This reveals a general lack of the central concentration in Lyα that is ubiquitous in the UV. Photometric growth curves increase more slowly for Lyα than the far ultraviolet, showing that small apertures may underestimate the EW. For most galaxies, however, flux and EW curves flatten by radii ≈10 kpc, suggesting that if placed at high-z only a few of our galaxies would suffer from large flux losses. We compute global properties of the sample in large apertures, and show total Lyα luminosities to be independent of all other quantities. Normalized Lyα throughput, however, shows significant correlations: escape is found to be higher in galaxies of lower star formation rate, dust content, mass, and nebular quantities that suggest harder ionizing continuum and lower metallicity. Six galaxies would be selected as high-z Lyα emitters, based upon their luminosity and EW. We discuss the results in the context of high-z Lyα and UV samples. A few galaxies have EWs above 50 Å, and one shows f_esc^Lyα of 80%; such objects have not previously been reported at low-z.

  15. The Lyman alpha reference sample. II. Hubble space telescope imaging results, integrated properties, and trends

    International Nuclear Information System (INIS)

    Hayes, Matthew; Östlin, Göran; Duval, Florent; Sandberg, Andreas; Guaita, Lucia; Melinder, Jens; Rivera-Thorsen, Thøger; Adamo, Angela; Schaerer, Daniel; Verhamme, Anne; Orlitová, Ivana; Mas-Hesse, J. Miguel; Otí-Floranes, Héctor; Cannon, John M.; Pardy, Stephen; Atek, Hakim; Kunth, Daniel; Laursen, Peter; Herenz, E. Christian

    2014-01-01

    We report new results regarding the Lyα output of galaxies, derived from the Lyman Alpha Reference Sample, and focused on Hubble Space Telescope imaging. For 14 galaxies we present intensity images in Lyα, Hα, and UV, and maps of Hα/Hβ, Lyα equivalent width (EW), and Lyα/Hα. We present Lyα and UV radial light profiles and show they are well-fitted by Sérsic profiles, but Lyα profiles show indices systematically lower than those of the UV (n ≈ 1-2 instead of ≳ 4). This reveals a general lack of the central concentration in Lyα that is ubiquitous in the UV. Photometric growth curves increase more slowly for Lyα than the far ultraviolet, showing that small apertures may underestimate the EW. For most galaxies, however, flux and EW curves flatten by radii ≈10 kpc, suggesting that if placed at high-z only a few of our galaxies would suffer from large flux losses. We compute global properties of the sample in large apertures, and show total Lyα luminosities to be independent of all other quantities. Normalized Lyα throughput, however, shows significant correlations: escape is found to be higher in galaxies of lower star formation rate, dust content, mass, and nebular quantities that suggest harder ionizing continuum and lower metallicity. Six galaxies would be selected as high-z Lyα emitters, based upon their luminosity and EW. We discuss the results in the context of high-z Lyα and UV samples. A few galaxies have EWs above 50 Å, and one shows f_esc^Lyα of 80%; such objects have not previously been reported at low-z.

  16. Relation of average and highest solvent vapor concentrations in workplaces in small to medium enterprises and large enterprises.

    Science.gov (United States)

    Ukai, Hirohiko; Ohashi, Fumiko; Samoto, Hajime; Fukui, Yoshinari; Okamoto, Satoru; Moriguchi, Jiro; Ezaki, Takafumi; Takada, Shiro; Ikeda, Masayuki

    2006-04-01

    The present study was initiated to examine the relationship between the workplace concentrations and the estimated highest concentrations in solvent workplaces (SWPs), with special reference to enterprise size and types of solvent work. Results of a survey conducted in 1010 SWPs in 156 enterprises were taken as a database. Workplace air was sampled at ≥5 points in each SWP following a grid sampling strategy. An additional air sample was grab-sampled at the site where the worker's exposure was estimated to be highest (estimated highest concentration or EHC). The samples were analyzed for 47 solvents designated by regulation, and solvent concentrations in each sample were summed up by use of the additiveness formula. From the workplace concentrations at ≥5 points, the geometric mean and geometric standard deviation were calculated as the representative workplace concentration (RWC) and the indicator of variation in workplace concentration (VWC). Comparison between RWC and EHC in the total of 1010 SWPs showed that EHC was 1.2 times (in large enterprises with >300 employees) to 1.7 times (in small to medium (SM) enterprises) higher than RWC. Comparing SM enterprises and large enterprises, both RWC and EHC were significantly higher in SM enterprises than in large enterprises. Further comparison by types of solvent work showed that the difference was more marked in printing, surface coating and degreasing/cleaning/wiping SWPs, whereas it was less remarkable in painting SWPs and essentially nil in testing/research laboratories. In conclusion, the present observation, as discussed in reference to previous publications, suggests that RWC, EHC and the ratio of EHC/RWC vary substantially among different types of solvent work as well as enterprise size, and are typically higher in printing SWPs in SM enterprises.

  17. Spherical sampling

    CERN Document Server

    Freeden, Willi; Schreiner, Michael

    2018-01-01

    This book presents, in a consistent and unified overview, results and developments in the field of today's spherical sampling, particularly arising in mathematical geosciences. Although the book often refers to original contributions, the authors have made them accessible to (graduate) students and scientists not only from mathematics but also from geosciences and geoengineering. Building a library of topics in spherical sampling theory, it shows how advances in this theory lead to new discoveries in mathematical, geodetic, geophysical as well as other scientific branches like neuro-medicine. A must-read for everybody working in the area of spherical sampling.

  18. Does developmental timing of exposure to child maltreatment predict memory performance in adulthood? Results from a large, population-based sample.

    Science.gov (United States)

    Dunn, Erin C; Busso, Daniel S; Raffeld, Miriam R; Smoller, Jordan W; Nelson, Charles A; Doyle, Alysa E; Luk, Gigi

    2016-01-01

    Although maltreatment is a known risk factor for multiple adverse outcomes across the lifespan, its effects on cognitive development, especially memory, are poorly understood. Using data from a large, nationally representative sample of young adults (Add Health), we examined the effects of physical and sexual abuse on working and short-term memory in adulthood. We examined the association between exposure to maltreatment as well as its timing of first onset after adjusting for covariates. Of our sample, 16.50% of respondents were exposed to physical abuse and 4.36% to sexual abuse by age 17. An analysis comparing unexposed respondents to those exposed to physical or sexual abuse did not yield any significant differences in adult memory performance. However, two developmental time periods emerged as important for shaping memory following exposure to sexual abuse, but in opposite ways. Relative to non-exposed respondents, those exposed to sexual abuse during early childhood (ages 3-5) had better number recall, and those first exposed during adolescence (ages 14-17) had worse number recall. However, other variables, including socioeconomic status, played a larger role than maltreatment in working and short-term memory. We conclude that a simple examination of "exposed" versus "unexposed" respondents may obscure potentially important within-group differences that are revealed by examining the effects of age at first onset of maltreatment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. A methodology for more efficient tail area sampling with discrete probability distribution

    International Nuclear Information System (INIS)

    Park, Sang Ryeol; Lee, Byung Ho; Kim, Tae Woon

    1988-01-01

    Monte Carlo methods are commonly used to observe the overall distribution and to determine lower or upper bound values in statistical approaches when a direct analytical calculation is unavailable. However, such methods are inefficient when the tail area of a distribution is of interest. A new method entitled 'Two Step Tail Area Sampling' is developed, which uses the assumption of a discrete probability distribution and samples only the tail area without distorting the overall distribution. The method uses a two-step sampling procedure: first, sampling at points separated by large intervals is performed, and second, sampling at points separated by small intervals is performed around check points determined in the first step. Comparison with the Monte Carlo method shows that results obtained from the new method converge to the analytic value faster than the Monte Carlo method for the same number of calculations. The new method is applied to the DNBR (Departure from Nucleate Boiling Ratio) prediction problem in the design of pressurized light water nuclear reactors.
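    A minimal sketch of the two-step idea on a known continuous density (our toy stand-in for the paper's discrete distributions from expensive reactor calculations): a coarse scan first brackets the tail quantile between two check points, and fine sampling is then confined to that bracket:

```python
import numpy as np
from scipy.stats import norm

# Two-step tail sampling sketch (our toy, not the paper's reactor code):
# locate the upper 0.1% point of a distribution by a coarse scan first,
# then fine sampling only inside the bracketing coarse interval.
alpha, lo, hi = 1e-3, -6.0, 6.0
cdf = norm.cdf   # stands in for an expensively tabulated distribution

# Step 1: coarse grid; find neighbouring check points whose tail areas bracket alpha.
coarse = np.linspace(lo, hi, 25)
tails = 1.0 - cdf(coarse)                    # decreasing in x
i = np.searchsorted(-tails, -alpha)          # first index with tail <= alpha
a, b = coarse[i - 1], coarse[i]              # bracketing check points

# Step 2: fine grid between the check points only.
fine = np.linspace(a, b, 2001)
q = fine[np.searchsorted(-(1.0 - cdf(fine)), -alpha)]

print(f"two-step quantile: {q:.4f}  (analytic: {norm.ppf(1 - alpha):.4f})")
```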

  20. Large-scale genomic analysis shows association between homoplastic genetic variation in Mycobacterium tuberculosis genes and meningeal or pulmonary tuberculosis.

    NARCIS (Netherlands)

    Ruesen, Carolien; Chaidir, Lidya; van Laarhoven, Arjan; Dian, Sofiati; Ganiem, Ahmad Rizal; Nebenzahl-Guimaraes, Hanna; Huynen, Martijn A; Alisjahbana, Bachti; Dutilh, Bas E; van Crevel, Reinout

    2018-01-01

    Meningitis is the most severe manifestation of tuberculosis. It is largely unknown why some people develop pulmonary TB (PTB) and others TB meningitis (TBM); we examined if the genetic background of infecting M. tuberculosis strains may be relevant.

  1. Galaxy redshift surveys with sparse sampling

    International Nuclear Information System (INIS)

    Chiang, Chi-Ting; Wullstein, Philipp; Komatsu, Eiichiro; Jee, Inh; Jeong, Donghui; Blanc, Guillermo A.; Ciardullo, Robin; Gronwall, Caryl; Hagen, Alex; Schneider, Donald P.; Drory, Niv; Fabricius, Maximilian; Landriau, Martin; Finkelstein, Steven; Jogee, Shardha; Cooper, Erin Mentuch; Tuttle, Sarah; Gebhardt, Karl; Hill, Gary J.

    2013-01-01

    Survey observations of the three-dimensional locations of galaxies are a powerful approach to measure the distribution of matter in the universe, which can be used to learn about the nature of dark energy, physics of inflation, neutrino masses, etc. A competitive survey, however, requires a large volume (e.g., V_survey ∼ 10 Gpc³) to be covered, and thus tends to be expensive. A 'sparse sampling' method offers a more affordable solution to this problem: within a survey footprint covering a given survey volume, V_survey, we observe only a fraction of the volume. The distribution of observed regions should be chosen such that their separation is smaller than the length scale corresponding to the wavenumber of interest. Then one can recover the power spectrum of galaxies with precision expected for a survey covering a volume of V_survey (rather than the volume of the sum of observed regions) with the number density of galaxies given by the total number of observed galaxies divided by V_survey (rather than the number density of galaxies within an observed region). We find that regularly-spaced sampling yields an unbiased power spectrum with no window function effect, and deviations from regularly-spaced sampling, which are unavoidable in realistic surveys, introduce calculable window function effects and increase the uncertainties of the recovered power spectrum. On the other hand, we show that the two-point correlation function (pair counting) is not affected by sparse sampling. While we discuss the sparse sampling method within the context of the forthcoming Hobby-Eberly Telescope Dark Energy Experiment, the method is general and can be applied to other galaxy surveys.

  2. Comparison of sample preparation procedures on metal(loid) fractionation patterns in lichens.

    Science.gov (United States)

    Kroukamp, E M; Godeto, T W; Forbes, P B C

    2017-08-13

    The effects of different sample preparation strategies and storage on metal(loid) fractionation trends in plant material are largely underresearched. In this study, a bulk sample of the lichen Parmotrema austrosinense (Zahlbr.) Hale was analysed for its total extractable metal(loid) content by ICP-MS, and was determined to be adequately homogeneous. Subsamples were prepared utilising a range of sample preservation techniques and subjected to a modified sequential extraction procedure or to total metal extraction. Both experiments were repeated after 1-month storage at 4 °C. Cryogenic freezing gave the best reproducibility for total extractable elemental concentrations between months, indicating this to be the most suitable method of sample preparation in such studies. The combined extraction efficiencies were >82% for As, Cu, Mn, Pb, Sr and Zn but poor for other elements, where the sample preparation strategies 'no sample preparation' and 'dried in a desiccator' had the best extraction recoveries. Cryogenic freezing procedures had a significant (p < 0.05) effect on fractionation patterns, underscoring the importance of sample cleaning and preservation when species fractionation patterns are of interest. This study also shows that the assumption that species stability can be ensured through cryopreservation and freeze drying techniques needs to be revisited.

  3. Alfven wave coupling in large tokamaks

    International Nuclear Information System (INIS)

    Borg, G.G.; Knight, A.J.; Lister, J.B.; Appert, K.; Vaclavik, J.

    1988-01-01

    Supplementary plasma heating by Alfven waves (AWH) has been extensively studied both theoretically and experimentally for small, low temperature plasmas. However, only a few studies of AWH have been performed for fusion plasmas. In this paper the cylindrical kinetic code ISMENE is used to address problems of AWH in a large tokamak. The results of calculations are presented which show that the antenna loading scales with frequency and vessel dimensions according to ideal MHD theory. A sample scaling of the experimental antenna loading measured in TCA to the loading predicted for a fusion plasma is presented. We discuss whether this loading leads to a realistic antenna design. The choice of a suitable antenna configuration, mode number and operating frequency is presented for NET parameters with a typical operating scenario. (author) 6 figs., 8 refs

  4. Radiocarbon variability of fatty acids in semi-urban aerosol samples

    International Nuclear Information System (INIS)

    Matsumoto, Kohei; Uchida, Masao; Kawamura, Kimitaka; Shibata, Yasuyuki; Morita, Masatoshi

    2004-01-01

    We analyzed radiocarbon and the stable carbon isotope ratio for individual monocarboxylic (fatty) acids in an aerosol sample (QFF 2138) and compared the results with data of the aerosol sample taken in another year. The fatty acid concentration distribution of aerosol sample QFF 2138 showed a bimodal pattern with maxima at C16 and C26. Stable carbon isotope ratios of the fatty acids ranged from -30.8 per mille to -23.0 per mille, which indicates animal and/or marine algal origins for the C16-C19 fatty acids and mainly terrestrial C3 plant origins for the C>20 fatty acids. Δ14C values for fatty acids ranged from -89.7 per mille to +83.5 per mille. Compared with QFF 1969, we found that the Δ14C values of fatty acids exhibited a wide diversity, and Δ14C values for each fatty acid in QFF 2138 were largely different from those of QFF 1969

  5. Quality Control Samples for the Radiological Determination of Tritium in Urine Samples

    International Nuclear Information System (INIS)

    Ost'pezuk, P.; Froning, M.; Laumen, S.; Richert, I.; Hill, P.

    2004-01-01

    The radioactive decay product of tritium is a low-energy beta particle that cannot penetrate the outer dead layer of human skin. Therefore, the main hazard associated with tritium is internal exposure. In addition, due to the relatively long physical half-life and short biological half-life, tritium must be ingested in large amounts to pose a significant health risk. On the other hand, the internal exposure should be kept as low as practical. For incorporation monitoring of professional radiation workers, quality control is of utmost importance. In the Research Centre Juelich GmbH (FZJ), a considerable fraction of monitoring by excretion analysis relates to the isotope tritium. Usually an aliquot of a urine sample is mixed with a liquid scintillator and measured in a liquid scintillation counter. Quality control samples in the form of three kinds of internal reference samples (a blank, a reference sample with low activity and a reference sample with elevated activity) were prepared from mixed, tritium-free urine samples. 1 mL of these samples was pipetted into a liquid scintillation vial. To part of these vials, known amounts of tritium were added. All these samples were stored at 20 degrees. Based on long-term use of all these reference samples, it was possible to construct appropriate control charts with upper and lower alarm limits. Daily use of these reference samples significantly decreases the risk of false results in original urine samples with no significant increase in determination time. (Author) 2 refs

  6. Ultra-High-Throughput Sample Preparation System for Lymphocyte Immunophenotyping Point-of-Care Diagnostics.

    Science.gov (United States)

    Walsh, David I; Murthy, Shashi K; Russom, Aman

    2016-10-01

    Point-of-care (POC) microfluidic devices often lack the integration of common sample preparation steps, such as preconcentration, which can limit their utility in the field. In this technology brief, we describe a system that combines the necessary sample preparation methods to perform sample-to-result analysis of large-volume (20 mL) biopsy model samples with staining of captured cells. Our platform combines centrifugal-paper microfluidic filtration and an analysis system to process large, dilute biological samples. Utilizing commercialization-friendly manufacturing methods and materials, yielding a sample throughput of 20 mL/min, and allowing for on-chip staining and imaging bring together a practical, yet powerful approach to microfluidic diagnostics of large, dilute samples. © 2016 Society for Laboratory Automation and Screening.

  7. UNLABELED SELECTED SAMPLES IN FEATURE EXTRACTION FOR CLASSIFICATION OF HYPERSPECTRAL IMAGES WITH LIMITED TRAINING SAMPLES

    Directory of Open Access Journals (Sweden)

    A. Kianisarkaleh

    2015-12-01

    Full Text Available Feature extraction plays a key role in hyperspectral image classification. Using unlabeled samples, which are often available in practically unlimited quantities, unsupervised and semisupervised feature extraction methods show better performance when only a limited number of training samples exists. This paper illustrates the importance of selecting appropriate unlabeled samples for use in feature extraction methods, and proposes a new method for unlabeled sample selection using spectral and spatial information. The proposed method has four parts: PCA, prior classification, posterior classification and sample selection. As the hyperspectral image passes through these parts, the selected unlabeled samples can be used in arbitrary feature extraction methods. The effectiveness of the proposed unlabeled sample selection in unsupervised and semisupervised feature extraction is demonstrated using two real hyperspectral datasets. Results show that through selecting appropriate unlabeled samples, the proposed method can improve the performance of feature extraction methods and increase classification accuracy.
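    A hedged sketch of the four-part pipeline using generic scikit-learn pieces (the two classifiers and the agreement rule are stand-ins for the paper's spectral-spatial criteria, and the data are synthetic):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for a hyperspectral scene: 500 pixels, 50 bands.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)
labeled = rng.choice(500, size=20, replace=False)   # the few training samples

Z = PCA(n_components=5).fit_transform(X)            # part 1: PCA

knn = KNeighborsClassifier(3).fit(Z[labeled], y[labeled])
prior = knn.predict(Z)                              # part 2: prior classification

nb = GaussianNB().fit(Z[labeled], y[labeled])
posterior = nb.predict(Z)                           # part 3: posterior classification

# part 4: keep unlabeled samples on which both passes agree confidently;
# these then feed an arbitrary (semi)supervised feature extraction method.
confidence = knn.predict_proba(Z).max(axis=1)
mask = (prior == posterior) & (confidence > 0.9)
mask[labeled] = False
print(f"{mask.sum()} unlabeled samples selected for feature extraction")
```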

  8. ARMA modelling of neutron stochastic processes with large measurement noise

    International Nuclear Information System (INIS)

    Zavaljevski, N.; Kostic, Lj.; Pesic, M.

    1994-01-01

    An autoregressive moving average (ARMA) model of the neutron fluctuations with large measurement noise is derived from Langevin stochastic equations and validated using time series data obtained during prompt neutron decay constant measurements at the zero power reactor RB in Vinca. Model parameters are estimated using the maximum likelihood (ML) off-line algorithm and an adaptive pole estimation algorithm based on the recursive prediction error method (RPE). The results show that subcriticality can be determined from real data with high measurement noise using a much shorter statistical sample than in standard methods. (author)
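    The structure being exploited can be seen in a toy example: an AR(1) signal observed through additive white measurement noise is exactly an ARMA(1,1) process, so the AR pole (the analogue of the decay constant) remains recoverable by maximum likelihood even when the noise dominates. The parameter values below are illustrative, and statsmodels stands in for the estimators used in the study:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(42)
n, phi = 20_000, 0.95                  # phi plays the role of the decay constant
x = np.zeros(n)
for t in range(1, n):                  # latent AR(1) dynamics
    x[t] = phi * x[t - 1] + rng.normal()
y = x + 5.0 * rng.normal(size=n)       # large additive measurement noise

fit = ARIMA(y, order=(1, 0, 1)).fit()  # off-line maximum likelihood estimate
print(f"estimated AR pole: {fit.arparams[0]:.4f} (true value {phi})")
```

    Despite the measurement noise dwarfing the signal, the estimated pole stays close to the true value, which is the property that allows a much shorter statistical sample than variance-based standard methods.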

  9. Topology of Large-Scale Structure by Galaxy Type: Hydrodynamic Simulations

    Science.gov (United States)

    Gott, J. Richard, III; Cen, Renyue; Ostriker, Jeremiah P.

    1996-07-01

    The topology of large-scale structure is studied as a function of galaxy type using the genus statistic. In hydrodynamical cosmological cold dark matter simulations, galaxies form on caustic surfaces (Zeldovich pancakes) and then slowly drain onto filaments and clusters. The earliest forming galaxies in the simulations (defined as "ellipticals") are thus seen at the present epoch preferentially in clusters (tending toward a meatball topology), while the latest forming galaxies (defined as "spirals") are seen currently in a spongelike topology. The topology is measured by the genus (number of "doughnut" holes minus number of isolated regions) of the smoothed density-contour surfaces. The measured genus curve for all galaxies as a function of density obeys approximately the theoretical curve expected for random-phase initial conditions, but the early-forming elliptical galaxies show a shift toward a meatball topology relative to the late-forming spirals. Simulations using standard biasing schemes fail to show such an effect. Large observational samples separated by galaxy type could be used to test for this effect.
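    A genus curve can be sketched for a smoothed Gaussian random field using the convention genus = (holes) − (isolated regions) = −χ(excursion set) in 3D; the skimage dependency, grid size and threshold list are our illustrative choices, and boundary effects are ignored:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.measure import euler_number

# Smoothed Gaussian (random-phase) density field, normalized to unit variance.
rng = np.random.default_rng(0)
field = gaussian_filter(rng.normal(size=(64, 64, 64)), sigma=4)
field = (field - field.mean()) / field.std()

for nu in [-1.5, -1.0, 0.0, 1.0, 1.5]:      # density thresholds in sigma units
    excursion = field > nu                  # region above the density contour
    genus = -euler_number(excursion, connectivity=3)
    print(f"nu = {nu:+.1f}  genus = {genus:+d}")
```

    For a random-phase field this sketch produces the characteristic symmetric curve: positive (sponge-like) genus near the median threshold and negative (meatball- or bubble-like) genus at high and low thresholds.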

  10. Analysis of IFR samples at ANL-E

    International Nuclear Information System (INIS)

    Bowers, D.L.; Sabau, C.S.

    1993-01-01

    The Analytical Chemistry Laboratory analyzes a variety of samples submitted by the different research groups within IFR. This talk describes the analytical work on samples generated by the Plutonium Electrorefiner, Large Scale Electrorefiner and Waste Treatment Studies. The majority of these samples contain transuranics and necessitate facilities that safely contain these radioisotopes. Details such as sample receiving, dissolution techniques, chemical separations, instrumentation used, and reporting of results are discussed. The importance of interactions between customer and analytical personnel is also demonstrated.

  11. Contact-free sheet resistance determination of large area graphene layers by an open dielectric loaded microwave cavity

    International Nuclear Information System (INIS)

    Shaforost, O.; Wang, K.; Adabi, M.; Guo, Z.; Hanham, S.; Klein, N.; Goniszewski, S.; Gallop, J.; Hao, L.

    2015-01-01

    A method for contact-free determination of the sheet resistance of large-area and arbitrarily shaped wafers or sheets coated with graphene and other (semi)conducting ultrathin layers is described, which is based on an open dielectric loaded microwave cavity. The sample under test is exposed to the evanescent resonant field outside the cavity. A comparison with a closed cavity configuration revealed that radiation losses have no significant influence on the experimental results. Moreover, the microwave sheet resistance results show good agreement with the dc conductivity determined by four-probe van der Pauw measurements on a set of CVD samples transferred on quartz. As an example of a practical application, correlations between the sheet resistance and deposition conditions for CVD graphene transferred on quartz wafers are described. Our method has high potential as a measurement standard for contact-free sheet resistance measurement and mapping of large area graphene samples.

  12. Psychological Predictors of Seeking Help from Mental Health Practitioners among a Large Sample of Polish Young Adults

    Directory of Open Access Journals (Sweden)

    Lidia Perenc

    2016-10-01

    Full Text Available Although the corresponding literature contains a substantial number of studies on the relationship between psychological factors and attitudes towards seeking professional psychological help, the role of some determinants remains unexplored, especially among Polish young adults. The present study investigated diversity among a large cohort of Polish university students related to attitudes towards help-seeking and the regulative roles of gender, level of university education, health locus of control and sense of coherence. The total sample comprised 1706 participants who completed the following measures: Attitude Toward Seeking Professional Psychological Help Scale-SF, Multidimensional Health Locus of Control Scale, and Orientation to Life Questionnaire (SOC-29). They were recruited from various university faculties and courses by means of random selection. The findings revealed that, among socio-demographic variables, female gender moderately, and graduate-level university study strongly, predicts attitudes towards seeking help. Internal locus of control and all domains of sense of coherence are significantly correlated with the scores related to the help-seeking attitude. Attitudes toward psychological help-seeking are significantly related to female gender, graduate university education, internal health locus of control and sense of coherence. Further research must be performed in Poland in order to validate these results in different age and social groups.

  13. Transport Coefficients from Large Deviation Functions

    OpenAIRE

    Gao, Chloe Ya; Limmer, David T.

    2017-01-01

    We describe a method for computing transport coefficients from the direct evaluation of large deviation functions. This method is general, relying only on equilibrium fluctuations, and is statistically efficient, employing trajectory based importance sampling. Equilibrium fluctuations of molecular currents are characterized by their large deviation functions, which are scaled cumulant generating functions analogous to the free energies. A diffusion Monte Carlo algorithm is used to evaluate the large deviation functions...

  14. Defining And Characterizing Sample Representativeness For DWPF Melter Feed Samples

    Energy Technology Data Exchange (ETDEWEB)

    Shine, E. P.; Poirier, M. R.

    2013-10-29

    statisticians used carefully thought out designs that systematically and economically provided plans for data collection from the DWPF process. Key shared features of the sampling designs used at DWPF and the Gy sampling methodology were the specification of a standard for sample representativeness, an investigation that produced data from the process to study the sampling function, and a decision framework used to assess whether the specification was met based on the data. Without going into detail with regard to the seven errors identified by Pierre Gy, as excellent summaries are readily available such as Pitard [1989] and Smith [2001], SRS engineers understood, for example, that samplers can be biased (Gy's extraction error), and developed plans to mitigate those biases. Experiments that compared installed samplers with more representative samples obtained directly from the tank may not have resulted in systematically partitioning sampling errors into the now well-known error categories of Gy, but did provide overall information on the suitability of sampling systems. Most of the designs in this report are related to the DWPF vessels, not the large SRS Tank Farm tanks. Samples from the DWPF Slurry Mix Evaporator (SME), which contains the feed to the DWPF melter, are characterized using standardized analytical methods with known uncertainty. The analytical error is combined with the established error from sampling and processing in DWPF to determine the melter feed composition. This composition is used with the known uncertainty of the models in the Product Composition Control System (PCCS) to ensure that the wasteform that is produced is comfortably within the acceptable processing and product performance region. Having the advantage of many years of processing that meets the waste glass product acceptance criteria, the DWPF process has provided a considerable amount of data about itself in addition to the data from many special studies. Demonstrating representative

  15. Defining And Characterizing Sample Representativeness For DWPF Melter Feed Samples

    International Nuclear Information System (INIS)

    Shine, E. P.; Poirier, M. R.

    2013-01-01

    statisticians used carefully thought out designs that systematically and economically provided plans for data collection from the DWPF process. Key shared features of the sampling designs used at DWPF and the Gy sampling methodology were the specification of a standard for sample representativeness, an investigation that produced data from the process to study the sampling function, and a decision framework used to assess whether the specification was met based on the data. Without going into detail with regard to the seven errors identified by Pierre Gy, as excellent summaries are readily available such as Pitard [1989] and Smith [2001], SRS engineers understood, for example, that samplers can be biased (Gy's extraction error), and developed plans to mitigate those biases. Experiments that compared installed samplers with more representative samples obtained directly from the tank may not have resulted in systematically partitioning sampling errors into the now well-known error categories of Gy, but did provide overall information on the suitability of sampling systems. Most of the designs in this report are related to the DWPF vessels, not the large SRS Tank Farm tanks. Samples from the DWPF Slurry Mix Evaporator (SME), which contains the feed to the DWPF melter, are characterized using standardized analytical methods with known uncertainty. The analytical error is combined with the established error from sampling and processing in DWPF to determine the melter feed composition. This composition is used with the known uncertainty of the models in the Product Composition Control System (PCCS) to ensure that the wasteform that is produced is comfortably within the acceptable processing and product performance region. Having the advantage of many years of processing that meets the waste glass product acceptance criteria, the DWPF process has provided a considerable amount of data about itself in addition to the data from many special studies. Demonstrating representative sampling

  16. Online Italian fandoms of American TV shows

    Directory of Open Access Journals (Sweden)

    Eleonora Benecchi

    2015-06-01

    Full Text Available The Internet has changed media fandom in two main ways: it helps fans connect with each other despite physical distance, leading to the formation of international fan communities; and it helps fans connect with the creators of the TV show, deepening the relationship between TV producers and international fandoms. To assess whether Italian fan communities active online are indeed part of transnational online communities and whether the Internet has actually altered their relationship with the creators of the original text they are devoted to, qualitative analysis and narrative interviews of 26 Italian fans of American TV shows were conducted to explore the fan-producer relationship. Results indicated that the online Italian fans surveyed preferred to stay local, rather than using geography-leveling online tools. Further, the sampled Italian fans' relationships with the show runners were mediated or even absent.

  17. Calculating p-values and their significances with the Energy Test for large datasets

    Science.gov (United States)

    Barter, W.; Burr, C.; Parkes, C.

    2018-04-01

    The energy test method is a multi-dimensional test of whether two samples are consistent with arising from the same underlying population, through the calculation of a single test statistic (called the T-value). The method has recently been used in particle physics to search for samples that differ due to CP violation. The generalised extreme value function has previously been used to describe the distribution of T-values under the null hypothesis that the two samples are drawn from the same underlying population. We show that, in a simple test case, the distribution is not sufficiently well described by the generalised extreme value function. We present a new method, where the distribution of T-values under the null hypothesis when comparing two large samples can be found by scaling the distribution found when comparing small samples drawn from the same population. This method can then be used to quickly calculate the p-values associated with the results of the test.
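    For small samples, the permutation approach that the proposed scaling method is designed to avoid can still be brute-forced directly. A minimal sketch of one common form of the T-value, with a Gaussian weighting function and a permutation p-value (σ and the permutation count are our choices), follows:

```python
import numpy as np
from scipy.spatial.distance import cdist, pdist

def t_value(a, b, sigma=1.0):
    # One common form of the energy test statistic with Gaussian weighting.
    psi = lambda d: np.exp(-d ** 2 / (2 * sigma ** 2))
    na, nb = len(a), len(b)
    return (psi(pdist(a)).sum() / (na * na)
            + psi(pdist(b)).sum() / (nb * nb)
            - psi(cdist(a, b)).sum() / (na * nb))

def p_value(a, b, n_perm=500, seed=0):
    # Null distribution by permutation: pool, reshuffle, recompute T.
    rng = np.random.default_rng(seed)
    pooled, na = np.vstack([a, b]), len(a)
    t_obs = t_value(a, b)
    t_null = []
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        t_null.append(t_value(pooled[perm[:na]], pooled[perm[na:]]))
    return float(np.mean(np.asarray(t_null) >= t_obs))

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, size=(300, 2))
b = rng.normal(0.3, 1.0, size=(300, 2))   # shifted sample -> small p expected
print(f"T = {t_value(a, b):.5f}, p = {p_value(a, b):.3f}")
```

    The cost of recomputing T for every permutation is what becomes prohibitive for large samples, which is the regime the scaling approach described above targets.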

  18. Shrinkage-based diagonal Hotelling’s tests for high-dimensional small sample size data

    KAUST Repository

    Dong, Kai

    2015-09-16

    DNA sequencing techniques bring novel tools and also statistical challenges to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the “large p, small n” paradigm, the traditional Hotelling’s T² test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling’s test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of p and n for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when n is moderate or large, but it is better when n is small. In addition, we analyze four gene expression data sets and they demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling’s test.

  19. Shrinkage-based diagonal Hotelling’s tests for high-dimensional small sample size data

    KAUST Repository

    Dong, Kai; Pang, Herbert; Tong, Tiejun; Genton, Marc G.

    2015-01-01

    DNA sequencing techniques bring novel tools and also statistical challenges to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the “large p, small n” paradigm, the traditional Hotelling’s T² test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling’s test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of p and n for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when n is moderate or large, but it is better when n is small. In addition, we analyze four gene expression data sets and they demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling’s test.
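    A hedged sketch of a one-sample diagonal Hotelling-type statistic with variance shrinkage; the shrinkage target (the mean gene-wise variance), the fixed intensity lam and the crude normal reference for the null are simplified stand-ins for the estimators and approximate null distributions derived in the paper:

```python
import numpy as np
from scipy import stats

def shrunk_diag_t2(X, mu0, lam=0.3):
    # Diagonal Hotelling statistic: covariance replaced by shrunken variances,
    # which avoids the singularity of the sample covariance when p > n.
    n, p = X.shape
    xbar = X.mean(axis=0)
    s2 = X.var(axis=0, ddof=1)
    s2_shrunk = (1 - lam) * s2 + lam * s2.mean()   # shrink toward common variance
    return n * np.sum((xbar - mu0) ** 2 / s2_shrunk)

rng = np.random.default_rng(0)
p, n = 200, 10                      # the "large p, small n" regime
X = rng.normal(0.0, 1.0, size=(n, p))
t2 = shrunk_diag_t2(X, np.zeros(p))

# Crude normal reference for the null (mean ~ p, variance ~ 2p); illustrative
# only -- the paper derives sharper approximate null distributions for small n.
z = (t2 - p) / np.sqrt(2 * p)
print(f"T2 = {t2:.1f}, z ≈ {z:.2f}, approx p-value = {1 - stats.norm.cdf(z):.3f}")
```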

  20. Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources

    Science.gov (United States)

    Jia, Z.; Zhan, Z.

    2017-12-01

    Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties in existing finite fault inversion algorithms, because they rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess the reliability and uncertainty of the obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate the rupture process of large earthquakes as a series of sub-events of varying location, timing and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain the first-order rupture complexity of large earthquakes robustly. Additionally, the relatively small number of parameters in the inverse problem yields improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and application of the MHS method to real earthquakes show that it can capture the major features of large-earthquake rupture processes and provide information for more detailed rupture history analysis.

  1. Wealth Transfers Among Large Customers from Implementing Real-Time Retail Electricity Pricing

    OpenAIRE

    Borenstein, Severin

    2007-01-01

    Adoption of real-time electricity pricing — retail prices that vary hourly to reflect changing wholesale prices — removes existing cross-subsidies to those customers that consume disproportionately more when wholesale prices are highest. If their losses are substantial, these customers are likely to oppose RTP initiatives unless there is a supplemental program to offset their loss. Using data on a sample of 1142 large industrial and commercial customers in northern California, I show that RTP...

  2. Neutron multicounter detector for investigation of content and spatial distribution of fission materials in large volume samples

    International Nuclear Information System (INIS)

    Swiderska-Kowalczyk, M.; Starosta, W.; Zoltowski, T.

    1998-01-01

    The experimental device is a neutron coincidence well counter. It can be applied for passive assay of fissile - especially plutonium-bearing - materials. It consists of a set of ³He tubes placed inside a polyethylene moderator; outputs from the tubes, first processed by preamplifier/amplifier/discriminator circuits, are then analysed using a neutron correlator connected with a PC, and correlation techniques implemented in software. Such a neutron counter allows for determination of plutonium mass (²⁴⁰Pu effective mass) in nonmultiplying samples having fairly big volume (up to 0.14 m³). For determination of the neutron source distribution inside the sample, heuristic methods based on hierarchical cluster analysis are applied. As input parameters, amplitudes and phases of the two-dimensional Fourier transformation of the count profile matrices for known point-source distributions and for the examined samples are taken. Such matrices are collected by scanning the sample with the detection head. During the clustering process, count profiles for unknown samples are fitted into dendrograms using the 'proximity' criterion of the examined sample profile to the standard sample profiles. The distribution of neutron sources in an examined sample is then evaluated on the basis of comparison with standard source distributions. (author)
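    The Fourier-plus-clustering heuristic can be sketched as follows; the synthetic count profiles, the number of Fourier components kept and the linkage method are our illustrative assumptions:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

def features(profile, k=3):
    # Describe a count-profile matrix by amplitudes and phases of its
    # low-order 2D Fourier components.
    F = np.fft.fft2(profile)[:k, :k]
    return np.concatenate([np.abs(F).ravel(), np.angle(F).ravel()])

grid = np.indices((8, 8))
def point_source(cx, cy):
    # Fake count profile of a single point source scanned on an 8x8 grid.
    return np.exp(-((grid[0] - cx) ** 2 + (grid[1] - cy) ** 2) / 4.0)

standards = {"centre": point_source(4, 4), "corner": point_source(1, 1),
             "edge": point_source(4, 1)}
unknown = point_source(3.6, 3.8)            # should cluster with "centre"

names = list(standards) + ["unknown"]
feats = np.array([features(m) for m in list(standards.values()) + [unknown]])
Z = linkage(feats, method="average")
print(dendrogram(Z, labels=names, no_plot=True)["ivl"])  # leaf order
```

    The unknown sample lands next to the standard whose source distribution it most resembles, which is the 'proximity' criterion used to read off the distribution from the dendrogram.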

  3. Large Scale Survey Data in Career Development Research

    Science.gov (United States)

    Diemer, Matthew A.

    2008-01-01

    Large scale survey datasets have been underutilized but offer numerous advantages for career development scholars, as they contain numerous career development constructs with large and diverse samples that are followed longitudinally. Constructs such as work salience, vocational expectations, educational expectations, work satisfaction, and…

  4. Large scale inference in the Infinite Relational Model: Gibbs sampling is not enough

    DEFF Research Database (Denmark)

    Albers, Kristoffer Jon; Moth, Andreas Leon Aagard; Mørup, Morten

    2013-01-01

    We find that Gibbs sampling can be computationally scaled to handle millions of nodes and billions of links. Investigating the behavior of the Gibbs sampler for different sizes of networks, we find that the mixing ability decreases drastically with the network size, clearly indicating a need for sampling strategies beyond plain Gibbs sampling.

  5. A propidium monoazide–quantitative PCR method for the detection and quantification of viable Enterococcus faecalis in large-volume samples of marine waters

    KAUST Repository

    Salam, Khaled W.; El-Fadel, Mutasem E.; Barbour, Elie K.; Saikaly, Pascal

    2014-01-01

    The development of rapid detection assays of cell viability is essential for monitoring the microbiological quality of water systems. Coupling propidium monoazide with quantitative PCR (PMA-qPCR) has been successfully applied in different studies for the detection and quantification of viable cells in small-volume samples (0.25-1.00 mL), but it has not been evaluated sufficiently in marine environments or in large-volume samples. In this study, we successfully integrated blue light-emitting diodes for photoactivating PMA and membrane filtration into the PMA-qPCR assay for the rapid detection and quantification of viable Enterococcus faecalis cells in 10-mL samples of marine waters. The assay was optimized in phosphate-buffered saline and seawater, reducing the qPCR signal of heat-killed E. faecalis cells by 4 log10 and 3 log10 units, respectively. Results suggest that high total dissolved solid concentration (32 g/L) in seawater can reduce PMA activity. Optimal PMA-qPCR standard curves with a 6-log dynamic range and detection limit of 10² cells/mL were generated for quantifying viable E. faecalis cells in marine waters. The developed assay was compared with the standard membrane filter (MF) method by quantifying viable E. faecalis cells in seawater samples exposed to solar radiation. The results of the developed PMA-qPCR assay did not match those of the standard MF method. This difference in the results reflects the different physiological states of E. faecalis cells in seawater. In conclusion, the developed assay is a rapid (∼5 h) method for the quantification of viable E. faecalis cells in marine recreational waters, which should be further improved and tested in different seawater settings. © 2014 Springer-Verlag Berlin Heidelberg.

  7. Memory-Optimized Software Synthesis from Dataflow Program Graphs with Large Size Data Samples

    Directory of Open Access Journals (Sweden)

    Hyunok Oh

    2003-05-01

    In multimedia and graphics applications, data samples of nonprimitive type require a significant amount of buffer memory. This paper addresses the problem of minimizing the buffer memory requirement for such applications in embedded software synthesis from graphical dataflow programs based on the synchronous dataflow (SDF) model with a given execution order of nodes. We propose a memory minimization technique that separates global memory buffers from local pointer buffers: the global buffers store live data samples and the local buffers store pointers to the global buffer entries. The proposed algorithm reduces memory by 67% for a JPEG encoder and by 40% for an H.263 encoder compared with unshared versions, and by 22% compared with the previous sharing algorithm for the H.263 encoder. Through extensive buffer sharing optimization, we believe that automatic software synthesis from dataflow program graphs achieves code quality comparable to manually optimized code in terms of memory requirement.

  8. Calibration samples for accelerator mass spectrometry

    International Nuclear Information System (INIS)

    Hershberger, R.L.; Flynn, D.S.; Gabbard, F.

    1981-01-01

    Radioactive samples with precisely known numbers of atoms are useful as calibration sources for lifetime measurements using accelerator mass spectrometry. Such samples can be obtained in two ways: either by measuring the production rate as the sample is created or by measuring the decay rate after the sample has been obtained. The latter method requires that a large sample be produced and that the decay constant be accurately known. The former method is a useful and independent alternative, especially when the decay constant is not well known. The facilities at the University of Kentucky for precision measurements of total neutron production cross sections offer a source of such calibration samples. The possibilities, while quite extensive, would be limited to the proton-rich side of the line of stability because of the use of (p,n) and (α,n) reactions for sample production

  9. Illumina MiSeq Phylogenetic Amplicon Sequencing Shows a Large Reduction of an Uncharacterised Succinivibrionaceae and an Increase of the Methanobrevibacter gottschalkii Clade in Feed Restricted Cattle.

    Directory of Open Access Journals (Sweden)

    Matthew Sean McCabe

    Periodic feed restriction is used in cattle production to reduce feed costs. When normal feed levels are resumed, cattle catch up to a normal weight by an acceleration of the normal growth rate, known as compensatory growth, which is not yet fully understood. Illumina MiSeq phylogenetic marker amplicon sequencing of DNA extracted from the rumen contents of 55 bulls showed that restriction of feed (70% concentrate, 30% grass silage) for 125 days, to levels that caused a 60% reduction of growth rate, resulted in a large increase of the relative abundance of the Methanobrevibacter gottschalkii clade (designated OTU-M7) and a large reduction of an uncharacterised Succinivibrionaceae species (designated OTU-S3004). There was a strong negative Spearman correlation (ρ = -0.72, P < 1×10⁻²⁰) between the relative abundances of OTU-S3004 and OTU-M7 in the liquid rumen fraction. There was also a significant increase in the acetate:propionate ratio (A:P) in feed-restricted animals, which showed a negative Spearman correlation (ρ = -0.69, P < 1×10⁻²⁰) with the relative abundance of OTU-S3004 in the rumen liquid fraction but not the solid fraction, and a strong positive Spearman correlation with OTU-M7 in the rumen liquid (ρ = 0.74, P < 1×10⁻²⁰) and solid (ρ = 0.69, P < 1×10⁻²⁰) fractions. Reduced A:P ratios in the rumen are associated with increased feed efficiency and reduced production of methane, which has a global warming potential (GWP, 100 years) of 28. Succinivibrionaceae growth in the rumen was previously suggested to reduce methane emissions, as some members of this family utilise hydrogen, which is otherwise utilised by methanogens for methanogenesis, to generate succinate, which is converted to propionate. Relative abundance of OTU-S3004 showed a positive Spearman correlation with propionate (ρ = 0.41, P < 0.01) but not acetate in the liquid rumen fraction.

  10. Sampling free energy surfaces as slices by combining umbrella sampling and metadynamics.

    Science.gov (United States)

    Awasthi, Shalini; Kapil, Venkat; Nair, Nisanth N

    2016-06-15

    Metadynamics (MTD) is a very powerful technique to sample high-dimensional free energy landscapes, and due to its self-guiding property, the method has been successful in studying complex reactions and conformational changes. MTD sampling is based on filling the free energy basins by biasing potentials, and thus for cases with flat, broad, and unbound free energy wells, the computational time to sample them becomes very large. To alleviate this problem, we combine the standard Umbrella Sampling (US) technique with MTD to sample orthogonal collective variables (CVs) simultaneously. Within this scheme, we construct the equilibrium distribution of CVs from biased distributions obtained from independent MTD simulations with umbrella potentials. Reweighting is carried out by a procedure that combines US reweighting and Tiwary-Parrinello MTD reweighting within the Weighted Histogram Analysis Method (WHAM). The approach is ideal for a controlled sampling of a CV in a MTD simulation, making it computationally efficient in sampling flat, broad, and unbound free energy surfaces. This technique also allows for a distributed sampling of a high-dimensional free energy surface, further increasing the computational efficiency in sampling. We demonstrate the application of this technique in sampling high-dimensional surfaces for various chemical reactions using ab initio and QM/MM hybrid molecular dynamics simulations. Further, to carry out MTD bias reweighting for computing forward reaction barriers in ab initio or QM/MM simulations, we propose a computationally affordable approach that does not require recrossing trajectories. © 2016 Wiley Periodicals, Inc.
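
    The reweighting described above layers US and MTD corrections inside WHAM. As background, here is a minimal sketch of the plain 1D WHAM self-consistency loop for umbrella windows only; it does not implement the Tiwary-Parrinello MTD reweighting, and the array shapes and inputs are illustrative assumptions.

```python
import numpy as np

def wham(hist, bias, n_samples, beta=1.0, tol=1e-7, max_iter=100000):
    """Standard 1D WHAM self-consistency loop.

    hist[i, b]   : counts from umbrella window i in CV bin b
    bias[i, b]   : umbrella bias energy of window i evaluated at bin b
    n_samples[i] : total number of samples in window i
    Returns the unbiased bin probabilities and the free energy per bin.
    """
    f = np.zeros(hist.shape[0])          # window free energies
    total = hist.sum(axis=0)             # pooled counts per bin
    for _ in range(max_iter):
        # denominator: sum_i N_i * exp(beta * (f_i - w_i(b)))
        denom = (n_samples[:, None] * np.exp(beta * (f[:, None] - bias))).sum(axis=0)
        p = total / denom                # unbiased distribution (unnormalised)
        p /= p.sum()
        # update: exp(-beta * f_i) = sum_b exp(-beta * w_i(b)) * p(b)
        f_new = -np.log((np.exp(-beta * bias) * p[None, :]).sum(axis=1)) / beta
        if np.max(np.abs(f_new - f)) < tol:
            f = f_new
            break
        f = f_new
    free_energy = -np.log(np.clip(p, 1e-300, None)) / beta
    return p, free_energy - free_energy.min()
```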

  11. Prevalence of suicidal behaviour and associated factors in a large sample of Chinese adolescents.

    Science.gov (United States)

    Liu, X C; Chen, H; Liu, Z Z; Wang, J Y; Jia, C X

    2017-10-12

    Suicidal behaviour is prevalent among adolescents and is a significant predictor of future suicide attempts (SAs) and suicide death. Data on the prevalence and epidemiological characteristics of suicidal behaviour in Chinese adolescents are limited. This study aimed to examine the prevalence, characteristics and risk factors of suicidal behaviour, including suicidal thought (ST), suicide plan (SP) and SA, in a large sample of Chinese adolescents. This report represents the first-wave data of an ongoing longitudinal study, the Shandong Adolescent Behavior and Health Cohort. Participants included 11 831 adolescent students from three counties of Shandong, China. The mean age of participants was 15.0 years (s.d. = 1.5) and 51% were boys. In November-December 2015, participants completed a structured adolescent health questionnaire covering ST, SP and SA, characteristics of the most recent SA, demographics, substance use, hopelessness, impulsivity and internalising and externalising behavioural problems. The lifetime and last-year prevalence rates were 17.6 and 10.7% for ST in males, 23.5 and 14.7% for ST in females, 8.9 and 2.9% for SP in males, 10.7 and 3.8% for SP in females, 3.4 and 1.3% for SA in males, and 4.6 and 1.8% for SA in females, respectively. The mean age at first SA was 12-13 years. Stabbing/cutting was the most common method of attempting suicide. Approximately 24% of male attempters and 16% of female attempters were medically treated. More than 70% of attempters had taken no preparatory action. Female gender, smoking, drinking, internalising and externalising problems, hopelessness, suicidal history of friends and acquaintances, poor family economic status and poor parental relationship were all significantly associated with increased risk of suicidal behaviour. Suicidal behaviour in Chinese adolescents is prevalent but less so than previously reported in Western peers. While females are more likely to attempt suicide, males are more likely to use lethal methods.

  12. Potential-Decomposition Strategy in Markov Chain Monte Carlo Sampling Algorithms

    International Nuclear Information System (INIS)

    Shangguan Danhua; Bao Jingdong

    2010-01-01

    We introduce the potential-decomposition strategy (PDS), which can be used in Markov chain Monte Carlo sampling algorithms. PDS can be designed to make particles move in a modified potential that favors diffusion in phase space; then, by rejecting some trial samples, the target distributions can be sampled in an unbiased manner. Furthermore, if the accepted trial samples are insufficient, they can be recycled as initial states to form more unbiased samples. This strategy can greatly improve efficiency when the original potential has multiple metastable states separated by large barriers. We apply PDS to the 2D Ising model and a double-well potential model with a large barrier, demonstrating in these two representative examples that convergence is accelerated by orders of magnitude.
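
    A minimal sketch of the general idea abstracted above: proposals are generated by Metropolis moves in a flattened (modified) potential, and a second rejection step removes the bias so the original target is sampled unbiasedly. This is a simplified two-stage Metropolis scheme, not the authors' exact PDS algorithm (the recycling of samples is not implemented); the double-well target and the barrier-scaled modified potential are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0

def U(x):            # target potential: double well with a large barrier
    return 8.0 * (x**2 - 1.0)**2

def U_mod(x):        # modified potential: barrier scaled down to favor diffusion
    return 0.25 * U(x)

def mh_step(x, potential, step=0.5):
    """One Metropolis move that leaves exp(-beta*potential) invariant."""
    xp = x + rng.uniform(-step, step)
    if rng.random() < np.exp(-beta * (potential(xp) - potential(x))):
        return xp
    return x

x, samples = -1.0, []
for _ in range(50000):
    xp = mh_step(x, U_mod)   # propose by moving in the flattened potential
    # second rejection step corrects the bias of the modified potential
    log_acc = -beta * ((U(xp) - U(x)) - (U_mod(xp) - U_mod(x)))
    if np.log(rng.random()) < log_acc:
        x = xp
    samples.append(x)

# with the barrier flattened, both wells are visited; expect roughly 0.5
print("fraction of samples in the right-hand well:",
      np.mean(np.array(samples) > 0.0))
```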

  13. Patient-reported causes of heart failure in a large European sample

    DEFF Research Database (Denmark)

    Timmermans, Ivy; Denollet, Johan; Pedersen, Susanne S.

    2018-01-01

    Background: Patients diagnosed with chronic diseases develop perceptions about their disease and its causes, which may influence health behavior and emotional well-being. This is the first study to examine patient-reported causes and their correlates in patients with heart failure. Methods … ), psychosocial (35%, mainly (work-related) stress), and natural causes (32%, mainly heredity). There were socio-demographic, clinical and psychological group differences between the various categories, and large discrepancies between the prevalence of physical risk factors according to medical records and patient … distress (OR = 1.54, 95% CI = 0.94–2.51, p = 0.09), and behavioral causes and a less threatening view of heart failure (OR = 0.64, 95% CI = 0.40–1.01, p = 0.06). Conclusion: European patients most frequently reported comorbidities, smoking, stress, and heredity as heart failure causes, but their causal …

  14. Estimation of sampling error uncertainties in observed surface air temperature change in China

    Science.gov (United States)

    Hua, Wei; Shen, Samuel S. P.; Weithmann, Alexander; Wang, Huijun

    2017-08-01

    This study examines the sampling error uncertainties in the monthly surface air temperature (SAT) change in China over recent decades, focusing on the uncertainties of gridded data, national averages, and linear trends. Results indicate that large sampling error variances appear in the station-sparse areas of northern and western China, with maximum values exceeding 2.0 K², while small sampling error variances are found in the station-dense areas of southern and eastern China, with most grid values being less than 0.05 K². In general, negative temperature anomalies existed in each month prior to the 1980s, and a warming began thereafter, which accelerated in the early and mid-1990s. An increasing trend in the SAT series was observed for each month of the year, with the largest temperature increase and highest uncertainty of 0.51 ± 0.29 K (10 year)⁻¹ occurring in February and the weakest trend and smallest uncertainty of 0.13 ± 0.07 K (10 year)⁻¹ in August. The sampling error uncertainties in the national average annual mean SAT series are not sufficiently large to alter the conclusion of persistent warming in China. In addition, the sampling error uncertainties in the SAT series show a clear variation compared with other uncertainty estimation methods, which is a plausible reason for the inconsistent variations between our estimate and those of other studies during this period.

  15. Neutron activation analysis of chemical impurities in manipulated samples of omeprazole

    International Nuclear Information System (INIS)

    Sepe, Fernanda Peixoto; Leal, Alexandre Soares; Gomes, Tatiana Cristina Bomfim; Menezes, Maria Angela de Barros Correia; Silva, Maria Aparecida

    2011-01-01

    In this work, samples of Omeprazole (C17H19N3O3S), a widely used drug in the treatment of dyspepsia and peptic ulcer, were acquired from five different pharmacies of manipulation - retail pharmacies which prepare personalized drugs under medical recommendation - in Belo Horizonte, Brazil, and investigated using k0-Neutron Activation Analysis (NAA). The preliminary results showed the presence of elements not foreseen in the original formula. This confirms the potential risk posed by medicines without suitable inspection. (author)

  16. Small-sample-worth perturbation methods

    International Nuclear Information System (INIS)

    1985-01-01

    It has been assumed that the perturbed region, R_p, is large enough so that: (1) even without a great deal of biasing there is a substantial probability that an average source neutron will enter it; and (2) once having entered, the neutron is likely to make several collisions in R_p during its lifetime. Unfortunately, neither assumption is valid for the typical configurations one encounters in small-sample-worth experiments. In such experiments one measures the reactivity change which is induced when a very small void in a critical assembly is filled with a sample of some test material. Only a minute fraction of the fission-source neutrons ever gets into the sample and, of those neutrons that do, most emerge uncollided. Monte Carlo small-sample perturbation computations are described

  17. Variability in metagenomic samples from the Puget Sound: Relationship to temporal and anthropogenic impacts.

    Directory of Open Access Journals (Sweden)

    James C Wallace

    Whole-metagenome sequencing (WMS) has emerged as a powerful tool to assess potential public health risks in marine environments by measuring changes in microbial community structure and function in uncultured bacteria. In addition to monitoring public health risks such as antibiotic resistance determinants, it is essential to measure predictors of microbial variation in order to identify natural versus anthropogenic factors as well as to evaluate the reproducibility of metagenomic measurements. This study expands our previous metagenomic characterization of Puget Sound by sampling new nearshore environments, including the Duwamish River, an EPA Superfund site, and the Hood Canal, an area characterized by highly variable oxygen levels. We also resampled a wastewater treatment plant and nearshore and open ocean sites, introducing a longitudinal component measuring seasonal and locational variations and establishing metagenomic sampling reproducibility. Microbial composition from samples collected in the open sound was highly similar within the same season and location across different years, while nearshore samples revealed multi-fold seasonal variation in microbial composition and diversity. Comparisons with recently sequenced predominant marine bacterial genomes helped provide much greater species-level taxonomic detail compared to our previous study. Antibiotic resistance determinants and pollution and detoxification indicators largely grouped by location, showing minor seasonal differences. Metal resistance, oxidative stress and detoxification systems showed no increase in samples proximal to an EPA Superfund site, indicating a lack of ecosystem adaptation to anthropogenic impacts. Taxonomic analysis of common sewage influent families showed a surprising similarity between wastewater treatment plant and open sound samples, suggesting a low-level but pervasive sewage influent signature in Puget Sound surface waters. Our study shows reproducibility of

  18. Cross-cultural measurement invariance of the General Health Questionnaire-12 in a German and a Colombian population sample.

    Science.gov (United States)

    Romppel, Matthias; Hinz, Andreas; Finck, Carolyn; Young, Jeremy; Brähler, Elmar; Glaesmer, Heide

    2017-12-01

    While the General Health Questionnaire, 12-item version (GHQ-12) has been widely used in cross-cultural comparisons, rigorous tests of the measurement equivalence of different language versions are still lacking. Thus, our study aims at investigating configural, metric and scalar invariance across the German and the Spanish versions of the GHQ-12 in two population samples. The GHQ-12 was applied in two large-scale population-based samples in Germany (N = 1,977) and Colombia (N = 1,500). To investigate measurement equivalence, confirmatory factor analyses were conducted in both samples. In the German sample, mean GHQ-12 total scores were higher than in the Colombian sample. A one-factor model including response bias on the negatively worded items showed superior fit in both the German and the Colombian samples; thus both versions of the GHQ-12 showed configural invariance. Factor loadings and intercepts were not equal across the samples; thus the GHQ-12 showed neither metric nor scalar invariance. As the two versions of the GHQ-12 did not show measurement equivalence, it is not advisable to compare the measures and conclude that mental distress is higher in the German sample, as we do not know whether the differences are attributable to measurement problems or represent a real difference in mental distress. The study underlines the importance of measurement equivalence in cross-cultural comparisons. Copyright © 2017 John Wiley & Sons, Ltd.

  19. An improved ashing procedure for biologic sample

    Energy Technology Data Exchange (ETDEWEB)

    Zongmei, Wu [Zhejiang Province Environmental Radiation Monitoring Centre (China)]

    1992-07-01

    The classical muffle-furnace ashing procedure was modified for biologic samples. In the modified procedure, the door of the muffle furnace is kept open for the duration of the ashing process; ashing is accelerated and the quality of the ashing product is comparable to that of the classical procedure. The modified procedure is suitable for ashing biologic samples in large batches.

  1. Analysis using large-scale ringing data

    Directory of Open Access Journals (Sweden)

    Baillie, S. R.

    2004-06-01

    Synchrony is in fact a much more widespread process, with bird populations across wide areas showing similar trends and fluctuations as a result of common climatic and environmental factors (Paradis et al., 2000). Dispersal may also play an important role in such synchrony, but its role is less well understood. Nigel Yoccoz and Rolf Ims (Yoccoz & Ims, 2004) show how synchrony can be investigated using data at three spatial scales taken from their field studies of the population dynamics of small mammals in North Norway. Small mammal abundance was estimated from trapping data using closed population models and also from total numbers of individuals captured. They use simulated data to show that synchrony, measured by the correlation coefficients between time series, was biased low by up to 30% when sampling variation was ignored. Appropriate analysis of such data will require simultaneous modelling of process and sampling variation, for example through the use of state-space models (Buckland et al., 2004). This view links back nicely to the approaches proposed by Andy Royle (Royle, 2004). Cycles in the abundance of small mammals have major effects on the demography of their predators, as is shown in the paper by Pertti Saurola and Charles Francis (Saurola & Francis, 2004). They report on the design and results of large-scale, long-term studies of owl populations by a network of amateur bird ringers in Finland. They show that breeding success varies with the stage of the microtine cycle. They also show how their data can be used to estimate dispersal over large spatial scales and illustrate the importance of correcting for uneven spatial variation in sampling effort. Further results from this study are reported in a companion paper within the population dynamics session (Francis & Saurola, 2004). Multi-species analyses of population dynamics are developed further in the paper by Romain Julliard (Julliard, 2004). He combines counts from the French Breeding Bird Survey with
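
    To make the attenuation result above concrete: a small simulation, under assumed Gaussian log-abundances and independent additive sampling error, showing how ignoring sampling variation biases the estimated synchrony (correlation) low. The parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
T, rho = 30, 0.8                              # series length, true synchrony

z1, z2 = rng.standard_normal(T), rng.standard_normal(T)
x_true = z1                                   # log-abundance, population 1
y_true = rho * z1 + np.sqrt(1 - rho**2) * z2  # log-abundance, population 2

def mean_observed_corr(noise_sd, n_rep=2000):
    """Average correlation when both series carry additive sampling error."""
    r = np.empty(n_rep)
    for k in range(n_rep):
        x = x_true + noise_sd * rng.standard_normal(T)
        y = y_true + noise_sd * rng.standard_normal(T)
        r[k] = np.corrcoef(x, y)[0, 1]
    return r.mean()

for sd in (0.0, 0.5, 1.0):
    print(f"sampling-error sd={sd:.1f}: mean observed correlation "
          f"= {mean_observed_corr(sd):.2f}")
```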

  2. Method validation to determine total alpha beta emitters in water samples using LSC

    International Nuclear Information System (INIS)

    Al-Masri, M. S.; Nashawati, A.; Al-akel, B.; Saaid, S.

    2006-06-01

    In this work a method was validated for determining gross alpha and beta emitters in water samples using a liquid scintillation counter. 200 ml of water from each sample were evaporated to 20 ml, and 8 ml of the concentrate were mixed with 12 ml of a suitable cocktail and measured on a Wallac Winspectral 1414 liquid scintillation counter. The lower detection limit (LDL) of this method was 0.33 DPM for total alpha emitters and 1.3 DPM for total beta emitters. The reproducibility limit was ±2.32 DPM and ±1.41 DPM for total alpha and beta emitters, respectively, and the repeatability limit was ±2.19 DPM and ±1.11 DPM for total alpha and beta emitters, respectively. The method is easy and fast because of the simple preparation steps and the large number of samples that can be measured at the same time. In addition, many real samples and standard samples were analyzed by the method and showed accurate results, so it was concluded that the method can be used with various water samples. (author)
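
    The abstract does not state how the LDL was derived; a common convention for counting measurements is Currie's detection limit, sketched below. The background rate, count time and efficiency are illustrative assumptions, not the paper's values.

```python
import math

def currie_ldl(background_cpm, count_time_min, efficiency):
    """Currie's detection limit, converted from counts to DPM.

    L_D = 2.71 + 4.65*sqrt(B) counts, with B the expected background
    counts over the counting time; dividing by count time and counting
    efficiency converts the limit to disintegrations per minute.
    """
    b = background_cpm * count_time_min
    ld_counts = 2.71 + 4.65 * math.sqrt(b)
    return ld_counts / (count_time_min * efficiency)

# illustrative inputs only
print(f"LDL = {currie_ldl(background_cpm=1.0, count_time_min=100, efficiency=0.9):.2f} DPM")
```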

  3. Nullspace Sampling with Holonomic Constraints Reveals Molecular Mechanisms of Protein Gαs.

    Directory of Open Access Journals (Sweden)

    Dimitar V Pachov

    2015-07-01

    Proteins perform their function or interact with partners by exchanging between conformational substates on a wide range of spatiotemporal scales. Structurally characterizing these exchanges is challenging, both experimentally and computationally. Large, diffusional motions are often on timescales that are difficult to access with molecular dynamics simulations, especially for large proteins and their complexes. The low frequency modes of normal mode analysis (NMA) report on molecular fluctuations associated with biological activity. However, NMA is limited to a second order expansion about a minimum of the potential energy function, which limits opportunities to observe diffusional motions. By contrast, kino-geometric conformational sampling (KGS) permits large perturbations while maintaining the exact geometry of explicit conformational constraints, such as hydrogen bonds. Here, we extend KGS and show that a conformational ensemble of the α subunit Gαs of the heterotrimeric stimulatory protein Gs exhibits structural features implicated in its activation pathway. Activation of protein Gs by G protein-coupled receptors (GPCRs) is associated with GDP release and large conformational changes of its α-helical domain. Our method reveals a coupled α-helical domain opening motion while, simultaneously, Gαs helix α5 samples an activated conformation. These motions are moderated in the activated state. The motion centers on a dynamic hub near the nucleotide-binding site of Gαs, and radiates to helix α4. We find that comparative NMA-based ensembles underestimate the amplitudes of the motion. Additionally, the ensembles fall short in predicting the accepted direction of the full activation pathway. Taken together, our findings suggest that nullspace sampling with explicit, holonomic constraints yields ensembles that illuminate molecular mechanisms involved in GDP release and protein Gs activation, and further establish conformational coupling between key

  4. Feasibility of self-sampled dried blood spot and saliva samples sent by mail in a population-based study.

    Science.gov (United States)

    Sakhi, Amrit Kaur; Bastani, Nasser Ezzatkhah; Ellingjord-Dale, Merete; Gundersen, Thomas Erik; Blomhoff, Rune; Ursin, Giske

    2015-04-11

    In large epidemiological studies it is often challenging to obtain biological samples. Self-sampling by study participants using dried blood spots (DBS) technique has been suggested to overcome this challenge. DBS is a type of biosampling where blood samples are obtained by a finger-prick lancet, blotted and dried on filter paper. However, the feasibility and efficacy of collecting DBS samples from study participants in large-scale epidemiological studies is not known. The aim of the present study was to test the feasibility and response rate of collecting self-sampled DBS and saliva samples in a population-based study of women above 50 years of age. We determined response proportions, number of phone calls to the study center with questions about sampling, and quality of the DBS. We recruited women through a study conducted within the Norwegian Breast Cancer Screening Program. Invitations, instructions and materials were sent to 4,597 women. The data collection took place over a 3 month period in the spring of 2009. Response proportions for the collection of DBS and saliva samples were 71.0% (3,263) and 70.9% (3,258), respectively. We received 312 phone calls (7% of the 4,597 women) with questions regarding sampling. Of the 3,263 individuals that returned DBS cards, 3,038 (93.1%) had been packaged and shipped according to instructions. A total of 3,032 DBS samples were sufficient for at least one biomarker analysis (i.e. 92.9% of DBS samples received by the laboratory). 2,418 (74.1%) of the DBS cards received by the laboratory were filled with blood according to the instructions (i.e. 10 completely filled spots with up to 7 punches per spot for up to 70 separate analyses). To assess the quality of the samples, we selected and measured two biomarkers (carotenoids and vitamin D). The biomarker levels were consistent with previous reports. Collecting self-sampled DBS and saliva samples through the postal services provides a low cost, effective and feasible

  5. Test in a beam of large-area Micromegas chambers for sampling calorimetry

    CERN Document Server

    Adloff, C.; Dalmaz, A.; Drancourt, C.; Gaglione, R.; Geffroy, N.; Jacquemier, J.; Karyotakis, Y.; Koletsou, I.; Peltier, F.; Samarati, J.; Vouters, G.

    2014-06-11

    Application of Micromegas for sampling calorimetry puts specific constraints on the design and performance of this gaseous detector. In particular, uniform and linear response, low noise and stability against high ionisation density deposits are prerequisites to achieving good energy resolution. A Micromegas-based hadronic calorimeter was proposed for an application at a future linear collider experiment, and three technologically advanced prototypes of 1×1 m² were constructed. Their merits relative to the above-mentioned criteria are discussed on the basis of measurements performed at the CERN SPS test-beam facility.

  6. Chorionic villus sampling and amniocentesis.

    Science.gov (United States)

    Brambati, Bruno; Tului, Lucia

    2005-04-01

    The advantages and disadvantages of common invasive methods for prenatal diagnosis are presented in light of new investigations. Several aspects of first-trimester chorionic villus sampling and mid-trimester amniocentesis remain controversial, especially fetal loss rate, feto-maternal complications, and the extension of both sampling methods to less traditional gestational ages (early amniocentesis, late chorionic villus sampling), all of which complicate genetic counseling. A recent randomized trial involving early amniocentesis and late chorionic villus sampling has confirmed previous studies, leading to the unquestionable conclusion that transabdominal chorionic villus sampling is safer. The old dispute over whether limb reduction defects are caused by chorionic villus sampling gains new vigor, with a paper suggesting that this technique has distinctive teratogenic effects. The large body of experience with maternal and fetal complications following mid-trimester amniocentesis allows a better estimate of risk for comparison with chorionic villus sampling. Transabdominal chorionic villus sampling, which appears to be the gold standard sampling method for genetic investigations between 10 and 15 completed weeks, permits rapid diagnosis in high-risk cases detected by first-trimester screening of aneuploidies. Sampling efficiency and karyotyping reliability are as high as in mid-trimester amniocentesis with fewer complications, provided the operator has the required training, skill and experience.

  7. Application of In-Segment Multiple Sampling in Object-Based Classification

    Directory of Open Access Journals (Sweden)

    Nataša Đurić

    2014-12-01

    When object-based analysis is applied to very high-resolution imagery, pixels within the segments reveal large spectral inhomogeneity; their distribution can be considered complex rather than normal. When normality is violated, classification methods that rely on the assumption of normally distributed data are not as successful or accurate. It is hard to detect normality violations in small samples. The segmentation process produces segments that vary highly in size; samples can be very big or very small. This paper investigates whether the complexity within the segment can be addressed using multiple random sampling of segment pixels and multiple calculations of similarity measures. In order to analyze the effect sampling has on classification results, the test statistics and p-value equations of the non-parametric two-sample Kolmogorov-Smirnov test and the parametric Student's t-test are selected as similarity measures in the classification process. The performance of both classifiers was assessed on a WorldView-2 image for four land cover classes (roads, buildings, grass and trees) and compared to two commonly used object-based classifiers - k-Nearest Neighbor (k-NN) and Support Vector Machine (SVM). Both proposed classifiers showed a slight improvement in the overall classification accuracies and produced more accurate classification maps when compared to the ground truth image.
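
    A minimal sketch of the in-segment multiple-sampling idea with the KS similarity measure, using scipy.stats.ks_2samp on toy one-band reflectance data. The draw counts, sample sizes and class references are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

def segment_similarity(segment_pixels, class_pixels, n_draws=25, sample_size=50):
    """Average KS-test p-value over multiple random in-segment samples;
    higher means the segment looks more like the candidate class."""
    pvals = []
    for _ in range(n_draws):
        draw = rng.choice(segment_pixels,
                          size=min(sample_size, len(segment_pixels)),
                          replace=False)
        pvals.append(ks_2samp(draw, class_pixels).pvalue)
    return float(np.mean(pvals))

# toy one-band reflectances: a 'grass-like' segment vs. two class references
segment = rng.normal(0.35, 0.05, 400)
grass_ref = rng.normal(0.35, 0.05, 1000)
road_ref = rng.normal(0.60, 0.04, 1000)
print("grass score:", segment_similarity(segment, grass_ref))
print("road score :", segment_similarity(segment, road_ref))
```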

  8. Increasing a large petrochemical company efficiency by improvement of decision making process

    OpenAIRE

    Kirin Snežana D.; Nešić Lela G.

    2010-01-01

    The paper shows the results of research conducted in a large petrochemical company, in a state undergoing transition, with the aim of "shedding light" on the decision-making process from the aspect of the personal characteristics of the employees, in order to use the results to improve the decision-making process and increase company efficiency. The research was conducted by a survey, i.e. by filling out a questionnaire specially made for this purpose, in real conditions, during working hours. The sample of...

  9. Large-scale structure after COBE: Peculiar velocities and correlations of cold dark matter halos

    Science.gov (United States)

    Zurek, Wojciech H.; Quinn, Peter J.; Salmon, John K.; Warren, Michael S.

    1994-01-01

    Large N-body simulations on parallel supercomputers allow one to simultaneously investigate large-scale structure and the formation of galactic halos with unprecedented resolution. Our study shows that the masses as well as the spatial distribution of halos on scales of tens of megaparsecs in a cold dark matter (CDM) universe, with the spectrum normalized to the anisotropies detected by the Cosmic Background Explorer (COBE), are compatible with the observations. We also show that the average value of the relative pairwise velocity dispersion σ_v, used as a principal argument against COBE-normalized CDM models, is significantly lower for halos than for individual particles. When the observational methods of extracting σ_v are applied to the redshift catalogs obtained from the numerical experiments, estimates differ significantly between different observation-sized samples and overlap observational estimates obtained following the same procedure.
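
    For readers wanting the estimator in code: a brute-force sketch of the relative pairwise velocity dispersion in a separation bin, on a toy catalogue. The units, bin edges and radial-component definition are assumptions; the paper's actual pipeline is more involved.

```python
import numpy as np

def pairwise_velocity_dispersion(pos, vel, r_min=1.0, r_max=2.0):
    """Dispersion of the radial component of pair velocity differences
    for pairs whose separation falls in [r_min, r_max] (brute force)."""
    v_rad_all = []
    for i in range(len(pos) - 1):
        dr = pos[i + 1:] - pos[i]          # pair separations
        dv = vel[i + 1:] - vel[i]          # pair relative velocities
        r = np.linalg.norm(dr, axis=1)
        mask = (r > r_min) & (r < r_max)
        # component of the relative velocity along the pair separation
        v_rad = np.einsum('ij,ij->i', dv[mask], dr[mask]) / r[mask]
        v_rad_all.append(v_rad)
    return np.concatenate(v_rad_all).std()

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 50.0, (500, 3))   # toy 'halo' positions, Mpc
vel = rng.normal(0.0, 300.0, (500, 3))   # toy velocities, km/s
print(f"sigma_v (1-2 Mpc pairs): {pairwise_velocity_dispersion(pos, vel):.0f} km/s")
```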

  10. Zirconia coated stir bar sorptive extraction combined with large volume sample stacking capillary electrophoresis-indirect ultraviolet detection for the determination of chemical warfare agent degradation products in water samples.

    Science.gov (United States)

    Li, Pingjing; Hu, Bin; Li, Xiaoyong

    2012-07-20

    In this study, a sensitive, selective and reliable analytical method combining zirconia (ZrO₂)-coated stir bar sorptive extraction (SBSE) with large volume sample stacking capillary electrophoresis-indirect ultraviolet detection (LVSS-CE/indirect UV) was developed for the direct analysis of the chemical warfare agent degradation products alkyl alkylphosphonic acids (AAPAs) (including ethyl methylphosphonic acid (EMPA) and pinacolyl methylphosphonate (PMPA)) and methylphosphonic acid (MPA) in environmental waters. The ZrO₂-coated stir bar was prepared by adhering nanometer-sized ZrO₂ particles onto the surface of a stir bar with commercial PDMS sol as the adhesion agent. Due to the high affinity of ZrO₂ for the electronegative phosphonate group, ZrO₂-coated stir bars could selectively extract the strongly polar AAPAs and MPA. After systematically optimizing the extraction conditions of ZrO₂-SBSE, the analytical performance of ZrO₂-SBSE-CE/indirect UV and ZrO₂-SBSE-LVSS-CE/indirect UV was assessed. The limits of detection (LODs, at a signal-to-noise ratio of 3) obtained by ZrO₂-SBSE-CE/indirect UV were 13.4-15.9 μg/L for PMPA, EMPA and MPA. The relative standard deviations (RSDs, n=7, c=200 μg/L) of the corrected peak area for the target analytes were in the range of 6.4-8.8%. Enhancement factors (EFs) in terms of LODs were found to be from 112- to 145-fold. By combining ZrO₂-coated SBSE with LVSS as a dual preconcentration strategy, the EFs were magnified up to 1583-fold, and the LODs of ZrO₂-SBSE-LVSS-CE/indirect UV were 1.4, 1.2 and 3.1 μg/L for PMPA, EMPA, and MPA, respectively. The RSDs (n=7, c=20 μg/L) were found to be in the range of 9.0-11.8%. The developed ZrO₂-SBSE-LVSS-CE/indirect UV method has been successfully applied to the analysis of PMPA, EMPA, and MPA in different environmental water samples, and the recoveries for the spiked water samples were found to be in the range of 93.8-105.3%. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. Sample preparation techniques of biological material for isotope analysis

    International Nuclear Information System (INIS)

    Axmann, H.; Sebastianelli, A.; Arrillaga, J.L.

    1990-01-01

    Sample preparation is an essential step in all isotope-aided experiments, but it is often not given enough attention. Proper sample preparation methods are very important for obtaining reliable and precise analytical data and for further interpretation of results. The size of sample required for chemical analysis is usually very small (10 mg-1500 mg). On the other hand, the amount of harvested plant material from plots in a field experiment is often bulky (several kilograms) and the entire sample is too large for processing. In addition, while approaching maturity many crops show not only differences in physical consistency but also non-uniformity in 15N content among plant parts, requiring plant fractionation or separation into parts (vegetative and reproductive), e.g. shoots and spikes in the case of small grain cereals, shoots and pods in the case of grain legumes, and tops and roots or beets (including crown) in the case of sugar beet. In any case, the ultimate goal of these procedures is to obtain a representative subsample of the material harvested from greenhouse or field experiments for chemical analysis. Before harvesting an isotope-aided experiment, the method of sampling has to be selected. It should be based on the type of information required in relation to the objectives of the research and the availability of resources (staff, sample preparation equipment, analytical facilities, chemicals and supplies, etc.). 10 refs, 3 figs, 3 tabs

  12. Comparing two basic subtypes in OCD across three large community samples: a pure compulsive versus a mixed obsessive-compulsive subtype.

    Science.gov (United States)

    Rodgers, Stephanie; Ajdacic-Gross, Vladeta; Kawohl, Wolfram; Müller, Mario; Rössler, Wulf; Hengartner, Michael P; Castelao, Enrique; Vandeleur, Caroline; Angst, Jules; Preisig, Martin

    2015-12-01

    Due to its heterogeneous phenomenology, obsessive-compulsive disorder (OCD) has been subtyped. However, these subtypes are not mutually exclusive. This study presents an alternative subtyping approach by deriving non-overlapping OCD subtypes. A pure compulsive and a mixed obsessive-compulsive subtype (including subjects manifesting obsessions with/without compulsions) were analyzed with respect to a broad pattern of psychosocial risk factors and comorbid syndromes/diagnoses in three representative Swiss community samples: the Zurich Study (n = 591), the ZInEP sample (n = 1500), and the PsyCoLaus sample (n = 3720). A selection of comorbidities was examined in a pooled database. Odds ratios were derived from logistic regressions and, in the analysis of pooled data, multilevel models. The pure compulsive subtype showed a lower age of onset and was characterized by few associations with psychosocial risk factors. The higher social popularity of the pure compulsive subjects and their families was remarkable. Comorbidities within the pure compulsive subtype were mainly restricted to phobias. In contrast, the mixed obsessive-compulsive subtype had a higher prevalence and was associated with various childhood adversities, more familial burden, and numerous comorbid disorders, including disorders characterized by high impulsivity. The current comparison study across three representative community surveys presented two basic, distinct OCD subtypes associated with differing psychosocial impairment. Such highly specific subtypes offer the opportunity to learn about pathophysiological mechanisms specifically involved in OCD.

  13. The Kinematics of the Permitted C ii λ 6578 Line in a Large Sample of Planetary Nebulae

    Energy Technology Data Exchange (ETDEWEB)

    Richer, Michael G.; Suárez, Genaro; López, José Alberto; García Díaz, María Teresa, E-mail: richer@astrosen.unam.mx, E-mail: gsuarez@astro.unam.mx, E-mail: jal@astrosen.unam.mx, E-mail: tere@astro.unam.mx [Instituto de Astronomía, Universidad Nacional Autónoma de México, Ensenada, Baja California (Mexico)

    2017-03-01

    We present spectroscopic observations of the C ii λ6578 permitted line for 83 lines of sight in 76 planetary nebulae at high spectral resolution, most of them obtained with the Manchester Echelle Spectrograph on the 2.1 m telescope at the Observatorio Astronómico Nacional on the Sierra San Pedro Mártir. We study the kinematics of the C ii λ6578 permitted line with respect to other permitted and collisionally excited lines. Statistically, we find that the kinematics of the C ii λ6578 line are not those expected if this line arises from the recombination of C²⁺ ions or the fluorescence of C⁺ ions in ionization equilibrium in a chemically homogeneous nebular plasma, but instead its kinematics are those appropriate for a volume more internal than expected. The planetary nebulae in this sample have well-defined morphology and are restricted to a limited range in Hα line widths (no large values) compared to their counterparts in the Milky Way bulge; both these features could be interpreted as the result of young nebular shells, an inference that is also supported by nebular modeling. Concerning the long-standing discrepancy between chemical abundances inferred from permitted and collisionally excited emission lines in photoionized nebulae, our results imply that multiple plasma components occur commonly in planetary nebulae.

  14. The relationship between the Five-Factor Model personality traits and peptic ulcer disease in a large population-based adult sample.

    Science.gov (United States)

    Realo, Anu; Teras, Andero; Kööts-Ausmees, Liisi; Esko, Tõnu; Metspalu, Andres; Allik, Jüri

    2015-12-01

    The current study examined the relationship between the Five-Factor Model personality traits and physician-confirmed peptic ulcer disease (PUD) diagnosis in a large population-based adult sample, controlling for the relevant behavioral and sociodemographic factors. Personality traits were assessed by participants themselves and by knowledgeable informants using the NEO Personality Inventory-3 (NEO PI-3). When controlling for age, sex, education, and cigarette smoking, only one of the five NEO PI-3 domain scales - higher Neuroticism - and two facet scales - lower A1: Trust and higher C1: Competence - made a small, yet significant contribution (p …) … personality traits that are associated with the diagnosis of PUD at a particular point in time. Further prospective studies with a longitudinal design and multiple assessments would be needed to fully understand if the FFM personality traits serve as risk factors for the development of PUD. © 2015 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  15. On the phenomenon of large photoluminescence red shift in GaN nanoparticles

    KAUST Repository

    Ben Slimane, Ahmed

    2013-07-01

    We report on the observation of broad photoluminescence wavelength tunability from n-type gallium nitride nanoparticles (GaN NPs) fabricated using the ultraviolet metal-assisted electroless etching method. Transmission and scanning electron microscopy measurements performed on the nanoparticles revealed large size dispersion ranging from 10 to 100 nm. Nanoparticles with broad tunable emission wavelength from 362 to 440 nm have been achieved by exciting the samples using the excitation power-dependent method. We attribute this large wavelength tunability to the localized potential fluctuations present within the GaN matrix and to vacancy-related surface states. Our results show that GaN NPs fabricated using this technique are promising for tunable-color-temperature white light-emitting diode applications. © 2013 Slimane et al.; licensee Springer.

  17. Synthesis and sintering Ni-Zn ferrite obtained for combustion reaction in large scale

    International Nuclear Information System (INIS)

    Vieira, D.A.; Diniz, V.C.S.; Costa, A.C.F.M.; Cornejo, D.R.; Kiminami, R.H.G.A.

    2014-01-01

    This research evaluates the magnetic properties of Ni-Zn ferrite synthesized by combustion reaction on a large scale and sintered at 1250 °C in a resistive furnace. The sample was characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), and magnetic measurements. The results show that the product synthesized on a large scale is a soft magnetic material with a saturation magnetization of 40 emu·g⁻¹ and a coercivity of 0.080 kOe; after sintering, the magnetization increased to 68 emu·g⁻¹ and the coercivity decreased to 0.016 kOe, indicating that the obtained material has promising characteristics for applications in electro-electronic devices. (author)

  18. The defense-responsive genes showing enhanced and repressed expression after pathogen infection in rice (Oryza sativa L.)

    Institute of Scientific and Technical Information of China (English)

    ZHOU; Bin(周斌); PENG; Kaiman(彭开蔓); CHU; Zhaohui(储昭晖); WANG; Shiping(王石平); ZHANG; Qifa(张启发)

    2002-01-01

    Despite a large number of studies on the defense response, the processes involved in the resistance of plants to incompatible pathogens are still largely uncharacterized. The objective of this study was to identify genes involved in the defense response by cDNA array analysis and to gain knowledge about the functions of the genes involved. Approximately 20000 rice cDNA clones were arrayed on nylon filters. RNA samples isolated from different rice lines after infection with incompatible strains or isolates of Xanthomonas oryzae pv. oryzae or Pyricularia grisea, respectively, were used to synthesize cDNA probes for screening the cDNA arrays. A total of 100 differentially expressed unique sequences were identified from 5 pathogen-host combinations. Fifty-three sequences showed enhanced expression and 47 showed repressed expression after pathogen infection. Sequence analysis revealed that most of the 100 sequences had various degrees of homology with database genes encoding or putatively encoding transcription-regulating proteins, translation-regulating proteins, transport proteins, kinases, metabolic enzymes, and proteins involved in other functions. Most of these genes have not previously been reported as being involved in the disease resistance response in rice. The results from cDNA arrays, reverse transcription-polymerase chain reaction, and RNA gel blot analysis suggest that activation or repression of most of these genes might occur commonly in the defense response.

  19. Large-format InGaAs focal plane arrays for SWIR imaging

    Science.gov (United States)

    Hood, Andrew D.; MacDougal, Michael H.; Manzo, Juan; Follman, David; Geske, Jonathan C.

    2012-06-01

    FLIR Electro Optical Components will present our latest developments in large InGaAs focal plane arrays, which are used for low-light-level imaging in the short wavelength infrared (SWIR) regime. FLIR will present imaging from their latest small-pitch (15 μm) focal plane arrays in VGA and High Definition (HD) formats. FLIR will present characterization of the FPA, including dark current measurements, as well as the use of correlated double sampling to reduce read noise. FLIR will show imagery as well as FPA-level characterization data.
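
    A frame-level sketch of the correlated double sampling mentioned above: subtracting a post-reset read from a post-integration read cancels per-pixel offsets and kTC reset noise, which is how CDS lowers effective read noise. All noise magnitudes below are invented for illustration.

```python
import numpy as np

def correlated_double_sample(reset_frame, signal_frame):
    """CDS output: difference of the post-integration and post-reset reads."""
    return signal_frame - reset_frame

rng = np.random.default_rng(7)
shape = (480, 640)                       # VGA-format toy frame
offset = rng.normal(100.0, 5.0, shape)   # per-pixel fixed-pattern offset
ktc = rng.normal(0.0, 3.0, shape)        # reset (kTC) noise, shared by both reads
photo = np.full(shape, 50.0)             # idealized photo-signal
read_sd = 1.0                            # amplifier noise per read

reset = offset + ktc + rng.normal(0.0, read_sd, shape)
signal = offset + ktc + photo + rng.normal(0.0, read_sd, shape)

cds = correlated_double_sample(reset, signal)
print("std before CDS:", signal.std())   # offsets + kTC noise dominate
print("std after CDS :", cds.std())      # ~sqrt(2)*read_sd around the signal
```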

  20. Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ghattas, Omar [The University of Texas at Austin]

    2013-10-15

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.

  1. Air sampling with solid phase microextraction

    Science.gov (United States)

    Martos, Perry Anthony

    There is an increasing need for simple yet accurate air sampling methods. The acceptance of new air sampling methods requires compatibility with conventional chromatographic equipment, and the new methods have to be environmentally friendly and simple to use, yet with equal or better detection limits, accuracy and precision than standard methods. Solid phase microextraction (SPME) satisfies these conditions. Analyte detection limits, accuracy and precision of analysis with SPME are typically better than with conventional air sampling methods. Yet air sampling with SPME requires no pumps or solvents, is re-usable, is extremely simple to use, is completely compatible with current chromatographic equipment, and requires only a small capital investment. The first SPME fiber coating used in this study was poly(dimethylsiloxane) (PDMS), a hydrophobic liquid film, used to sample a large range of airborne hydrocarbons such as benzene and octane. Quantification without an external calibration procedure is possible with this coating. The physical and chemical properties of this coating are well understood and are quite similar to those of the siloxane stationary phase used in capillary columns. The log of the analyte distribution coefficients for PDMS is linearly related to chromatographic retention indices and to the inverse of temperature. Therefore, the actual chromatogram from the analysis of the PDMS air sampler will yield the calibration parameters which are used to quantify unknown airborne analyte concentrations (ppbv to ppmv range). The second fiber coating used in this study was PDMS/divinylbenzene (PDMS/DVB), onto which o-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine (PFBHA) was adsorbed for the on-fiber derivatization of gaseous formaldehyde (ppbv range), with and without external calibration. The oxime formed from the reaction can be detected with conventional gas chromatographic detectors. Typical grab sampling times were as small as 5 seconds
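
    A sketch of the calibration-free quantification idea described above: log K is modeled as linear in the retention index and in 1/T, and the equilibrium relation n = K·Vf·C_air is inverted for the airborne concentration. The coefficients and inputs below are placeholders, not fitted values from the study.

```python
def log10_K(retention_index, temp_K, a=-0.3, b=0.004, c=1000.0):
    """log10 of the PDMS/air distribution coefficient K, modeled as linear
    in the retention index and in 1/T (coefficients are placeholders)."""
    return a + b * retention_index + c / temp_K

def air_concentration(n_extracted_ng, fiber_volume_uL, retention_index, temp_K):
    """C_air from the equilibrium relation n = K * Vf * C_air."""
    K = 10.0 ** log10_K(retention_index, temp_K)
    vf_mL = fiber_volume_uL * 1e-3           # uL -> mL
    return n_extracted_ng / (K * vf_mL)      # ng per mL of air

# toy use: 2 ng extracted on a 0.5 uL PDMS coating at 25 C, retention index 800
print(f"{air_concentration(2.0, 0.5, retention_index=800, temp_K=298.15):.3g} ng/mL")
```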

  2. Large lattice relaxation deep levels in neutron-irradiated GaN

    International Nuclear Information System (INIS)

    Li, S.; Zhang, J.D.; Beling, C.D.; Wang, K.; Wang, R.X.; Gong, M.; Sarkar, C.K.

    2005-01-01

    Deep level transient spectroscopy (DLTS) and deep level optical spectroscopy (DLOS) measurements have been carried out on neutron-irradiated n-type hydride-vapor-phase-epitaxy-grown GaN. A defect center characterized by a DLTS line, labeled N1, is observed at E_C - E_T = 0.17 eV. Another line, labeled N2, at E_C - E_T = 0.23 eV, seems to be induced at the same rate as N1 under irradiation and may be identified with E1. Other defects native to wurtzite GaN, such as the C and E2 lines, appear to be enhanced under neutron irradiation. The DLOS results show that the defects N1 and N2 have large Frank-Condon shifts of 0.64 and 0.67 eV, respectively, and hence large lattice relaxations. The as-grown and neutron-irradiated samples all exhibit the persistent photoconductivity effect commonly seen in GaN that may be attributed to DX centers. The concentration of the DX centers increases significantly with neutron dosage and is helpful in sustaining sample conductivity at low temperatures, thus making possible DLTS measurements on N1 and N2 in the radiation-induced deep-donor-compensated material, which would otherwise be prevented by carrier freeze-out

  3. Primary central nervous system diffuse large B-cell lymphoma shows an activated B-cell-like phenotype with co-expression of C-MYC, BCL-2, and BCL-6.

    Science.gov (United States)

    Li, Xiaomei; Huang, Ying; Bi, Chengfeng; Yuan, Ji; He, Hong; Zhang, Hong; Yu, QiuBo; Fu, Kai; Li, Dan

    2017-06-01

    Diffuse large B-cell lymphoma (DLBCL) is the most common non-Hodgkin lymphoma, and its main prognostic factor is subtype: germinal center B-cell-like (GCB-DLBCL) versus activated B-cell-like (non-GCB-DLBCL). The most common type of primary central nervous system lymphoma is the diffuse large B-cell type, which has a poor prognosis for reasons that remain unclear. This study aims to stratify primary central nervous system diffuse large B-cell lymphoma (PCNS-DLBCL) according to cell of origin (COO), to investigate the expression of multiple proteins (C-MYC, BCL-6, BCL-2, TP53), and further to elucidate why PCNS-DLBCL has a poor clinical outcome. Nineteen cases of primary central nervous system DLBCL were stratified according to the immunostaining algorithms of Hans, Choi and Meyer (Tally), and the expression of C-MYC, BCL-6, BCL-2 and TP53 proteins was investigated. Epstein-Barr virus and Borna disease virus infection were also examined. Among the nineteen cases, most (15-17 cases) were assigned to the activated B-cell-like subtype, with high expression of C-MYC (15 cases, 78.9%), BCL-2 (10 cases, 52.6%) and BCL-6 (15 cases, 78.9%). Two cases were positive for PD-L1, while PD-L2 was not expressed in any case. Two cases were infected with BDV, but none with EBV. In conclusion, most primary central nervous system DLBCLs show activated B-cell-like subtype characteristics and have multiple expression of C-MYC, BCL-2 and BCL-6 proteins; these features might be significant factors for predicting outcome and guiding treatment of PCNS-DLBCLs. Copyright © 2017 Elsevier GmbH. All rights reserved.

  4. Cross-validation of the Dot Counting Test in a large sample of credible and non-credible patients referred for neuropsychological testing.

    Science.gov (United States)

    McCaul, Courtney; Boone, Kyle B; Ermshar, Annette; Cottingham, Maria; Victor, Tara L; Ziegler, Elizabeth; Zeller, Michelle A; Wright, Matthew

    2018-01-18

    To cross-validate the Dot Counting Test in a large neuropsychological sample, Dot Counting Test scores were compared in credible (n = 142) and non-credible (n = 335) neuropsychology referrals. Non-credible patients scored significantly higher than credible patients on all Dot Counting Test scores. While the original E-score cut-off of ≥17 achieved excellent specificity (96.5%), it was associated with mediocre sensitivity (52.8%). However, the cut-off could be substantially lowered to ≥13.80 while still maintaining adequate specificity (≥90%) and raising sensitivity to 70.0%. Examination of non-credible subgroups revealed that Dot Counting Test sensitivity in feigned mild traumatic brain injury (mTBI) was 55.8%, whereas sensitivity was 90.6% in patients with non-credible cognitive dysfunction in the context of claimed psychosis and 81.0% in patients with non-credible cognitive performance in depression or severe TBI. Thus, the Dot Counting Test may have a particular role in the detection of non-credible cognitive symptoms in claimed psychiatric disorders. As an alternative to the E-score, failure on ≥1 cut-off applied to the individual Dot Counting Test scores (≥6.0″ for mean grouped dot counting time, ≥10.0″ for mean ungrouped dot counting time, and ≥4 errors) occurred in 11.3% of the credible sample, while nearly two-thirds (63.6%) of the non-credible sample failed one or more of these cut-offs. An E-score cut-off of ≥13.80, or failure on ≥1 individual score cut-offs, resulted in few false positive identifications in credible patients and achieved high sensitivity (64.0-70.0%), and therefore appears appropriate for use in identifying neurocognitive performance invalidity.
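
    To make the decision rule concrete, here is a minimal sketch of how the two screening criteria reported above could be applied. The composition of the E-score (sum of the two mean times and the error count) is the conventional published formula and is assumed here; the abstract itself does not spell it out.

```python
from dataclasses import dataclass

@dataclass
class DotCountingScores:
    mean_grouped_time_s: float    # mean time per grouped-dot card (seconds)
    mean_ungrouped_time_s: float  # mean time per ungrouped-dot card (seconds)
    errors: int                   # total counting errors

def e_score(s: DotCountingScores) -> float:
    # Conventional composite (assumed): two mean times plus the error count.
    return s.mean_grouped_time_s + s.mean_ungrouped_time_s + s.errors

def flags_non_credible(s: DotCountingScores) -> bool:
    """Apply the cut-offs reported above: E-score >= 13.80, or failure on any
    individual cut-off (grouped >= 6.0 s, ungrouped >= 10.0 s, >= 4 errors)."""
    individual_failure = (s.mean_grouped_time_s >= 6.0
                          or s.mean_ungrouped_time_s >= 10.0
                          or s.errors >= 4)
    return e_score(s) >= 13.80 or individual_failure

# Example: a profile failing only the ungrouped-time cut-off.
print(flags_non_credible(DotCountingScores(3.2, 10.5, 1)))  # True
```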

  5. Effect of simulated sampling disturbance on creep behaviour of rock salt

    Science.gov (United States)

    Guessous, Z.; Gill, D. E.; Ladanyi, B.

    1987-10-01

    This article presents the results of an experimental study of the creep behaviour of rock salt under uniaxial compression as a function of prestrain simulating sampling disturbance. The prestrain was produced by radial compressive loading of the specimens prior to creep testing. The tests were conducted on an artificial salt to avoid excessive scatter in the results. The results obtained from several series of single-stage creep tests show that, in the short term, the creep response of salt is strongly affected by the preloading history of the samples. The nature of this effect depends upon the intensity of the radial compressive preloading, and its magnitude is a function of the creep stress level. The effect, however, decreases with increasing plastic deformation, indicating that large creep strains may eventually lead to a complete loss of preloading memory.
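
    For reference, steady-state creep of rock salt is commonly described by a Norton-type power law (a standard constitutive form given here for context; it is not quoted in the abstract above):

    \[ \dot{\varepsilon}_{ss} = A\,\sigma^{n}\,\exp\!\left(-\frac{Q}{RT}\right), \]

    where \(\sigma\) is the applied stress, \(n\) the stress exponent (typically reported in the range 3-7 for salt), \(Q\) an activation energy, \(R\) the gas constant and \(T\) the absolute temperature. A preloading-induced prestrain of the kind studied above would primarily affect the transient (primary) creep stage rather than this stationary rate, consistent with the short-term effect reported.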

  6. Inference for Local Distributions at High Sampling Frequencies: A Bootstrap Approach

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Varneskov, Rasmus T.

    of "large" jumps. Our locally dependent wild bootstrap (LDWB) accommodate issues related to the stochastic scale and jumps as well as account for a special block-wise dependence structure induced by sampling errors. We show that the LDWB replicates first and second-order limit theory from the usual...... empirical process and the stochastic scale estimate, respectively, as well as an asymptotic bias. Moreover, we design the LDWB sufficiently general to establish asymptotic equivalence between it and and a nonparametric local block bootstrap, also introduced here, up to second-order distribution theory....... Finally, we introduce LDWB-aided Kolmogorov-Smirnov tests for local Gaussianity as well as local von-Mises statistics, with and without bootstrap inference, and establish their asymptotic validity using the second-order distribution theory. The finite sample performance of CLT and LDWB-aided local...

  7. Large-scale grid management

    International Nuclear Information System (INIS)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-01-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (broadband, multi-utility, ...) and (2) bigger units with large networks and more customers. Research by SINTEF Energy Research shows, so far, that approaches to large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.

  8. Neutron activation analysis of chemical impurities in manipulated samples of omeprazole

    Energy Technology Data Exchange (ETDEWEB)

    Sepe, Fernanda Peixoto; Leal, Alexandre Soares; Gomes, Tatiana Cristina Bomfim; Menezes, Maria Angela de Barros Correia; Silva, Maria Aparecida, E-mail: asleal@cdtn.br [Nuclear Technology Development Centre/Brazilian Commission for Nuclear Energy (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2011-07-01

    In this work, samples of omeprazole (C₁₇H₁₉N₃O₃S), a drug largely used in the treatment of dyspepsia and peptic ulcer, were acquired from five different manipulation pharmacies - retail pharmacies which prepare personalized drugs under medical recommendation - in Belo Horizonte, Brazil, and investigated using k₀ Neutron Activation Analysis (NAA). The preliminary results showed the presence of elements not foreseen in the original formula, confirming the potential risk posed by medicines without suitable inspection. (author)
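
    As background on the technique (the standard activation relation underlying NAA in general, not a formula from this study): the activity induced in an isotope by neutron irradiation, measured after a decay interval, is

    \[ A = \frac{m\,N_{A}\,\theta}{M}\,\sigma\,\phi\,\bigl(1 - e^{-\lambda t_{\mathrm{irr}}}\bigr)\,e^{-\lambda t_{\mathrm{d}}}, \]

    where \(m\) is the mass of the element, \(N_A\) Avogadro's number, \(\theta\) the isotopic abundance, \(M\) the atomic mass, \(\sigma\) the (n,γ) cross-section, \(\phi\) the neutron flux, \(\lambda\) the decay constant of the product nuclide, and \(t_{\mathrm{irr}}\), \(t_{\mathrm{d}}\) the irradiation and decay times. The k₀ method avoids absolute values of \(\sigma\) and \(\phi\) by using tabulated k₀ factors relative to a co-irradiated comparator, usually gold.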

  9. Human Milk Shows Immunological Advantages Over Organic Milk Samples For Infants in the Presence of Lipopolysaccharide (LPS) in 3D Energy Maps Using an Organic Nanobiomimetic Memristor/Memcapacitor

    Directory of Open Access Journals (Sweden)

    S-H. DUH

    2016-08-01

    Human milk is well known for its immunological advantages over cow milk for infants, protecting and supporting healthy early childhood cognitive development and preventing chronic diseases. However, little is known about how these immunological advantages are linked to reduced pathological high frequency oscillation (pHFO) in neural synapse net energy outcomes when lipopolysaccharide (LPS) attacks at a clinical concentration range, compared with cow milk, in a 3D energy map. We developed a nanostructured biomimetic memristor/memcapacitor device with the dual function of chronoamperometric (CA) sensing and voltage sensing for the direct quantitative evaluation of the immunological advantages between human milk and organic cow milk for infants over wide LPS concentration ranges: 5.0 pg/mL to 500 ng/mL for the CA method and 50 ng/mL to 1 µg/mL for the voltage method. The limit of detection (LOD) results are 3.73×10⁻¹⁸ g LPS and 1.2×10⁻¹⁶ g LPS in 40 µL milk samples using the 3.11×10⁻⁷ cm³ voltage sensor and the 0.031 cm² CA sensor, respectively, under antibody-free and reagent-free conditions. The 3D energy map results show that cow milk is ten times more prone to E. coli attack, and a positive link was revealed: pHFO formation occurred over the studied LPS concentration range from 50 ng/mL up to 1000 ng/mL, from rapid eye movement (REM) sleep frequency and fast gamma frequency to sharp wave-ripple complex (SPW-R) frequency. No pHFO occurred with the human milk samples at slow wave sleep (SWS), REM and SPW-R frequencies. The microbiota in the human milk samples successfully overcame the endotoxin attack from E. coli bacteria; pHFO occurred only at fast gamma frequency, linked with LPS levels ≥ 500 ng/mL. Organic milk samples show an order of magnitude lower synapse energy density compared with human milk at SWS …

  10. Direct analysis of radionuclides-96 samples simultaneously

    International Nuclear Information System (INIS)

    Kessler, M.J.

    1991-01-01

    Recently, there has been tremendous interest in two areas of concern in nuclear counting and radioactive waste disposal. The first is the reduction of radioactive waste, in particular the reduction in the amount of, or the development of, environmentally safe scintillation cocktails. The second is the development of a simple method for quantitating large numbers of samples (thousands/day) in a short period of time (minutes). These two areas of concern have been addressed with the development of the Matrix 96 direct beta counter. This new instrument is capable of quantitating 96 samples simultaneously in the microplate format (8 x 12, 96 samples) on a solid support without the use of any cocktails or vials, and is non-destructive to the sample. The use of this technique for the following biomedical applications will be discussed in detail: DNA dot blots, cell proliferation (³H-thymidine), receptor binding, chromium cytotoxicity assays, and protein assays. Data from conventional beta and gamma counters will be correlated and compared with the new Matrix 96 direct beta counter. This new technique provides a convenient method of reducing radioactive waste and of quantitating a large number of samples accurately in a short period of time (96 at a time)

  11. Approach-Induced Biases in Human Information Sampling.

    Directory of Open Access Journals (Sweden)

    Laurence T Hunt

    2016-11-01

    Information sampling is often biased towards seeking evidence that confirms one's prior beliefs. Despite such biases being a pervasive feature of human behavior, their underlying causes remain unclear. Many accounts of these biases appeal to limitations of human hypothesis testing and cognition, de facto evoking notions of bounded rationality, but neglect more basic aspects of behavioral control. Here, we investigated a potential role for Pavlovian approach in biasing which information humans choose to sample. We collected a large novel dataset from 32,445 human subjects, making over 3 million decisions, who played a gambling task designed to measure the latent causes and extent of information-sampling biases. We identified three novel approach-related biases, formalized by comparing subject behavior to a dynamic programming model of optimal information gathering. These biases reflected the amount of information sampled ("positive evidence approach"), the selection of which information to sample ("sampling the favorite"), and the interaction between information sampling and subsequent choices ("rejecting unsampled options"). The prevalence of all three biases was related to a Pavlovian approach-avoid parameter quantified within an entirely independent economic decision task. Our large dataset also revealed that individual differences in the amount of information gathered are a stable trait across multiple gameplays and can be related to demographic measures, including age and educational attainment. As well as revealing limitations in cognitive processing, our findings suggest information sampling biases reflect the expression of primitive, yet potentially ecologically adaptive, behavioral repertoires. One such behavior is sampling from options that will eventually be chosen, even when other sources of information are more pertinent for guiding future action.
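
    As a deliberately simplified illustration of the kind of dynamic-programming benchmark referred to above (the task, costs and horizon below are our assumptions, not the authors' design): an agent pays a fixed cost per sample from a coin that is biased either toward heads or toward tails, and backward induction gives the optimal value of sampling again versus committing to a guess.

```python
from functools import lru_cache

# Toy optimal-information-gathering problem: a coin is biased toward heads
# or toward tails (P = 0.7) with equal prior probability; each flip costs
# SAMPLE_COST, and stopping pays the probability the bias guess is correct.
P_BIAS = 0.7
HORIZON = 12
SAMPLE_COST = 0.01

def posterior_heads_biased(heads: int, tails: int) -> float:
    """Posterior probability of the heads-biased hypothesis."""
    like_h = P_BIAS ** heads * (1 - P_BIAS) ** tails
    like_t = (1 - P_BIAS) ** heads * P_BIAS ** tails
    return like_h / (like_h + like_t)

@lru_cache(maxsize=None)
def value(heads: int, tails: int) -> float:
    """Optimal expected payoff via backward induction over sample counts."""
    q = posterior_heads_biased(heads, tails)
    stop = max(q, 1 - q)                      # commit to the likelier bias
    if heads + tails >= HORIZON:
        return stop
    p_next_head = q * P_BIAS + (1 - q) * (1 - P_BIAS)
    cont = (-SAMPLE_COST
            + p_next_head * value(heads + 1, tails)
            + (1 - p_next_head) * value(heads, tails + 1))
    return max(stop, cont)                    # stop vs. sample once more

# With no evidence yet, further sampling is worth its cost:
print(round(value(0, 0), 3))
```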

  12. Super-sample covariance approximations and partial sky coverage

    Science.gov (United States)

    Lacasa, Fabien; Lima, Marcos; Aguena, Michel

    2018-04-01

    Super-sample covariance (SSC) is the dominant source of statistical error on large scale structure (LSS) observables for both current and future galaxy surveys. In this work, we concentrate on the SSC of cluster counts, also known as sample variance, which is particularly useful for the self-calibration of the cluster observable-mass relation; our approach can similarly be applied to other observables, such as galaxy clustering and lensing shear. We first examined the accuracy of two analytical approximations proposed in the literature for the flat sky limit, finding that they are accurate at the 15% and 30-35% level, respectively, for covariances of counts in the same redshift bin. We then developed a harmonic expansion formalism that allows for the prediction of SSC in an arbitrary survey mask geometry, such as the large sky areas of current and future surveys. We show analytically and numerically that this formalism recovers the full sky and flat sky limits present in the literature. We then present an efficient numerical implementation of the formalism, which allows fast and easy runs of covariance predictions when the survey mask is modified. We applied our method to a mask that is broadly similar to the Dark Energy Survey footprint, finding a non-negligible negative cross-z covariance, i.e. redshift bins are anti-correlated. We also examined the case of data removal from holes due to, for example, bright stars, quality cuts, or systematic removals, and find that this does not have noticeable effects on the structure of the SSC matrix, only rescaling its amplitude by the effective survey area. These advances enable analytical covariances of LSS observables to be computed for current and future galaxy surveys, which cover large areas of the sky where the flat sky approximation fails.
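
    Schematically, and in notation assumed here rather than quoted from the article, the SSC of cluster counts in redshift bins i, j takes the separable form

    \[ \mathrm{Cov}_{\mathrm{SSC}}\!\left(N_{i},N_{j}\right) \;\simeq\; \frac{\partial \bar{N}_{i}}{\partial \delta_{b}}\,\frac{\partial \bar{N}_{j}}{\partial \delta_{b}}\;\sigma_{b}^{2}(W), \qquad \sigma_{b}^{2}(W) \;=\; \frac{1}{\Omega_{W}^{2}}\sum_{\ell m}\left|W_{\ell m}\right|^{2}\,C_{\ell}, \]

    where \(\delta_{b}\) is the background matter mode averaged over the survey window \(W\), \(\Omega_{W}\) the window's solid angle, \(W_{\ell m}\) the harmonic coefficients of the mask, and \(C_{\ell}\) the angular power spectrum of the matter density. The harmonic sum is what accommodates an arbitrary mask geometry; the full-sky and flat-sky limits follow from the corresponding limits of \(W_{\ell m}\).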

  13. On Invertible Sampling and Adaptive Security

    DEFF Research Database (Denmark)

    Ishai, Yuval; Kumarasubramanian, Abishek; Orlandi, Claudio

    2011-01-01

    … functionalities was left open. We provide the first convincing evidence that the answer to this question is negative, namely that some (randomized) functionalities cannot be realized with adaptive security. We obtain this result by studying the following related invertible sampling problem: given an efficient sampling algorithm A, obtain another sampling algorithm B such that the output of B is computationally indistinguishable from the output of A, but B can be efficiently inverted (even if A cannot). This invertible sampling problem is independently motivated by other cryptographic applications. We show, under strong but well-studied assumptions, that there exist efficient sampling algorithms A for which invertible sampling as above is impossible. At the same time, we show that a general feasibility result for adaptively secure MPC implies that invertible sampling is possible for every A, thereby …

  14. Small sample whole-genome amplification

    Science.gov (United States)

    Hara, Christine; Nguyen, Christine; Wheeler, Elizabeth; Sorensen, Karen; Arroyo, Erin; Vrankovich, Greg; Christian, Allen

    2005-11-01

    Many challenges arise when trying to amplify and analyze human samples collected in the field due to limitations in sample quantity and contamination of the starting material. Tests such as DNA fingerprinting and mitochondrial typing require a certain sample size and are carried out in large-volume reactions; in cases where insufficient sample is present, whole genome amplification (WGA) can be used. WGA allows very small quantities of DNA to be amplified in a way that enables subsequent DNA-based tests to be performed. A limiting step in WGA is sample preparation. To minimize the necessary sample size, we have developed two modifications of WGA: the first allows for an increase in amplified product from small, nanoscale, purified samples with the use of carrier DNA, while the second is a single-step method for cleaning and amplifying samples all in one column. Conventional DNA cleanup involves binding the DNA to silica, washing away impurities, and then releasing the DNA for subsequent testing. We have eliminated losses associated with incomplete sample release, thereby decreasing the required amount of starting template for DNA testing. Both techniques address the limitations of sample size by providing ample copies of genomic samples. Carrier DNA, included in our WGA reactions, can be used when amplifying samples with the standard purification method, or can be used in conjunction with our single-step DNA purification technique to potentially further decrease the amount of starting sample necessary for future forensic DNA-based assays.

  15. Preliminary level 2 specification for the nested, fixed-depth sampling system

    International Nuclear Information System (INIS)

    BOGER, R.M.

    1999-01-01

    This preliminary Level 2 Component Specification establishes the performance, design, development, and test requirements for the in-tank sampling system which will support the BNFL contract in the final disposal of Hanford's High Level Wastes (HLW) and Low Activity Wastes (LAW). The PHMC will provide LAW tank wastes for final treatment by BNFL from double-shell feed tanks. Concerns about the inability of the baseline "grab" sampling to provide large-volume samples within time constraints have led to the development of a nested, fixed-depth sampling system. This sampling system will provide large-volume, representative samples without the environmental, radiation exposure, and sample volume impacts of the current baseline "grab" sampling method. This preliminary Level 2 Component Specification is not a general specification for tank sampling, but is based on a "record of decision" AGA (HNF-SD-TWR-AGA-001), the System Specification for the Double Shell Tank System (HNF-SD-WM-TRD-007), and the BNFL privatization contract

  16. Attenuation of species abundance distributions by sampling

    Science.gov (United States)

    Shimadzu, Hideyasu; Darnell, Ross

    2015-01-01

    Quantifying biodiversity aspects such as species presence/absence, richness and abundance is an important challenge in answering scientific and resource management questions. In practice, biodiversity can only be assessed from biological material taken by surveys, a difficult task given limited time and resources. A type of random sampling, often called sub-sampling, is a commonly used technique to reduce the amount of time and effort needed to investigate large quantities of biological samples. However, it is not immediately clear how (sub-)sampling affects the estimate of biodiversity aspects from a quantitative perspective. This paper specifies the effect of (sub-)sampling as attenuation of the species abundance distribution (SAD), and articulates how sampling bias is induced in the SAD by random sampling. The framework presented also reveals some confusion in previous theoretical studies. PMID:26064626
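
    To illustrate the attenuation effect in the simplest possible setting (an illustrative model, not the paper's derivation): if each individual is detected independently with probability p, the observed abundance of a species with true abundance n is Binomial(n, p), which thins the whole SAD and removes rare species from the observed sample entirely.

```python
import numpy as np

rng = np.random.default_rng(42)

def observed_sad(true_abundances: np.ndarray, detection_prob: float) -> np.ndarray:
    """Binomial thinning: each individual is sampled independently with
    probability detection_prob, attenuating the abundance distribution."""
    return rng.binomial(true_abundances, detection_prob)

# A synthetic community with a long-tailed abundance pattern (500 species).
true_n = rng.geometric(p=0.02, size=500)
obs_n = observed_sad(true_n, detection_prob=0.1)  # 10% sub-sample

print("true richness:", (true_n > 0).sum())
print("observed richness:", (obs_n > 0).sum())    # rare species vanish
```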

  17. A modular approach to creating large engineered cartilage surfaces.

    Science.gov (United States)

    Ford, Audrey C; Chui, Wan Fung; Zeng, Anne Y; Nandy, Aditya; Liebenberg, Ellen; Carraro, Carlo; Kazakia, Galateia; Alliston, Tamara; O'Connell, Grace D

    2018-01-23

    Native articular cartilage has limited capacity to repair itself from focal defects or osteoarthritis. Tissue engineering has provided a promising biological treatment strategy that is currently being evaluated in clinical trials. However, translating these techniques into large engineered tissues remains a significant challenge. In this study, we present a method for developing large-scale engineered cartilage surfaces through modular fabrication. Modular Engineered Tissue Surfaces (METS) uses the well-known but largely under-utilized self-adhesion properties of de novo tissue to create large scaffolds with nutrient channels. Compressive mechanical properties were evaluated throughout METS specimens, and the tensile mechanical strength of the bonds between attached constructs was evaluated over time. Raman spectroscopy, biochemical assays, and histology were performed to investigate matrix distribution. Results showed that by Day 14, stable connections had formed between the constructs in the METS samples. By Day 21, bonds were robust enough to form a rigid sheet and continued to increase in size and strength over time. Compressive mechanical properties and glycosaminoglycan (GAG) content of METS and individual constructs increased significantly over time. The METS technique builds on established tissue engineering accomplishments in developing constructs with GAG composition and compressive properties approaching native cartilage. This study demonstrated that modular fabrication is a viable technique for creating large-scale engineered cartilage, which can be broadly applied to many tissue engineering applications and construct geometries. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Pinus pinaster seedlings and their fungal symbionts show high plasticity in phosphorus acquisition in acidic soils.

    Science.gov (United States)

    Ali, M A; Louche, J; Legname, E; Duchemin, M; Plassard, C

    2009-12-01

    Young seedlings of maritime pine (Pinus pinaster Soland. in Ait.) were grown in rhizoboxes using intact spodosol soil samples from the Landes of Gascogne in southwest France, presenting a large variation in phosphorus (P) availability. Soils were collected from a 93-year-old unfertilized stand and from a 13-year-old P. pinaster stand given regular annual fertilization with either P alone or P and nitrogen (N). After 6 months of culture under controlled conditions, different morphotypes of ectomycorrhiza (ECM) were used for measurements of acid phosphatase activity and for molecular identification of fungal species by amplification of the ITS region. Total biomass and N and P contents were measured in roots and shoots of the plants. Bicarbonate- and NaOH-available inorganic P (Pi), organic P (Po) and ergosterol concentrations were measured in bulk and rhizosphere soil. The results showed that bulk soil from the 93-year-old forest stand presented the highest Po levels, and also relatively higher bicarbonate-extractable Pi levels than the 13-year-old unfertilized stand. Fertilizers significantly increased the concentrations of inorganic P fractions in bulk soil. Ergosterol contents in rhizosphere soil were increased by fertilizer application. The dominant fungal species was Rhizopogon luteolus, forming 66.6% of the analysed ECM tips. Acid phosphatase activity was highly variable and varied inversely with bicarbonate-extractable Pi levels in the rhizosphere soil. Total P and total N in plants were linearly correlated with total plant biomass, but the slope was steep only between total P and biomass in fertilized soil samples. In spite of high phosphatase activity in ECM tips, P availability remained limiting in soil samples from unfertilized stands. Nevertheless, young P. pinaster seedlings showed high plasticity for biomass production at low P availability in soils.

  19. Prevalence of DSM-IV and DSM-5 Alcohol, Cocaine, Opioid, and Cannabis Use Disorders in a Largely Substance Dependent Sample

    Science.gov (United States)

    Peer, Kyle; Rennert, Lior; Lynch, Kevin G.; Farrer, Lindsay; Gelernter, Joel; Kranzler, Henry R.

    2012-01-01

    BACKGROUND: The fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) will soon replace the DSM-IV, which has existed for nearly two decades. The changes in diagnostic criteria have important implications for research and for the clinical care of individuals with Substance Use Disorders (SUDs). METHODS: We used the Semi-Structured Assessment for Drug Dependence and Alcoholism to evaluate the lifetime presence of DSM-IV abuse and dependence diagnoses and DSM-5 mild, moderate, or severe SUDs for alcohol, cocaine, opioids, and cannabis in a sample of 7,543 individuals recruited to participate in genetic studies of substance dependence. RESULTS: Switching between diagnostic systems consistently resulted in a modestly greater prevalence of DSM-5 SUDs, based largely on the assignment of DSM-5 diagnoses to DSM-IV "diagnostic orphans" (i.e., individuals meeting one or two criteria for dependence and none for abuse, and thus not receiving a DSM-IV SUD diagnosis). The vast majority of these diagnostic switches were attributable to the requirement that only two of 11 criteria be met for a DSM-5 SUD diagnosis. We found evidence to support the omission from DSM-5 of the legal criterion due to its limited diagnostic utility. The addition of craving as a criterion in DSM-5 did not substantially affect the likelihood of an SUD diagnosis. CONCLUSION: The greatest advantage of DSM-5 appears to be its ability to capture diagnostic orphans. In this sample, changes reflected in DSM-5 had a minimal impact on the prevalence of SUD diagnoses. PMID:22884164

  20. Fabrication of large Ti–6Al–4V structures by direct laser deposition

    Energy Technology Data Exchange (ETDEWEB)

    Qiu, Chunlei; Ravi, G.A. [School of Metallurgy and Materials, University of Birmingham, Edgbaston, Birmingham B15 2TT (United Kingdom); Dance, Chris; Ranson, Andrew; Dilworth, Steve [Integrated Operations, Manufacturing & Materials Engineering Department, BAE Systems Ltd (United Kingdom); Attallah, Moataz M., E-mail: m.m.attallah@bham.ac.uk [School of Metallurgy and Materials, University of Birmingham, Edgbaston, Birmingham B15 2TT (United Kingdom)

    2015-04-25

    Highlights: • High laser power and a reasonably low powder feed rate are key to low porosity. • Scaling-up of samples requires smaller Z steps to achieve geometrical integrity. • HIPing effectively closed pores, changed the microstructure and improved ductility. • Optimised processing conditions plus HIPing led to good-quality Ti-64 structures. • HIPing helps recover the shape of unclamped large structures from distortion. - Abstract: Ti–6Al–4V samples have been prepared by direct laser deposition (DLD) using varied processing conditions. Some of the as-fabricated samples were stress-relieved or hot isostatically pressed (HIPed). The microstructures of all the samples were characterised using optical microscopy (OM), scanning electron microscopy (SEM) and X-ray diffraction (XRD), and the tensile properties were assessed. It was found that a high laser power together with a reasonably low powder feed rate was essential for achieving minimum porosity. The build height and geometrical integrity of the samples were sensitive to the specified laser-nozzle step along the build-height direction (Z step): too large a Z step usually led to a build height smaller than specified (under build), and too small a Z step to excessive building (excess build). In particular, scaling-up of samples requires a smaller Z step to obtain the specified build height and geometry. The as-fabricated microstructure was characterised by columnar grains together with a martensitic needle structure and a small fraction of β phase. This generally led to high tensile strengths but low elongations. The vertically machined samples showed even lower elongation than horizontally machined ones due to the presence of large lack-of-fusion pores at interlayer interfaces. HIPing effectively closed pores and fully transformed the martensite into lamellar α + β phases, which considerably improved ductility but caused a slight reduction in strength. With optimisation of processing conditions