WorldWideScience

Sample records for large sample numbers

  1. Radioimmunoassay of h-TSH - methodological suggestions for dealing with medium to large numbers of samples

    International Nuclear Information System (INIS)

    Mahlstedt, J.

    1977-01-01

    The article deals with practical aspects of establishing a TSH-RIA for patients, with particular regard to predetermined quality criteria. Methodological suggestions are made for medium to large numbers of samples with the target of reducing monotonous precision working steps by means of simple aids. The quality criteria required are well met, while the test procedure is well adapted to the rhythm of work and may be carried out without loss of precision even with large numbers of samples. (orig.)

  2. Monitoring a large number of pesticides and transformation products in water samples from Spain and Italy.

    Science.gov (United States)

    Rousis, Nikolaos I; Bade, Richard; Bijlsma, Lubertus; Zuccato, Ettore; Sancho, Juan V; Hernandez, Felix; Castiglioni, Sara

    2017-07-01

    Assessing the presence of pesticides in environmental waters is particularly challenging because of the huge number of substances used, which may end up in the environment. Furthermore, the occurrence of pesticide transformation products (TPs) and/or metabolites makes this task even harder. Most studies dealing with the determination of pesticides in water include only a small number of analytes and in many cases no TPs. The present study applied a screening method for the determination of a large number of pesticides and TPs in wastewater (WW) and surface water (SW) from Spain and Italy. Liquid chromatography coupled to high-resolution mass spectrometry (HRMS) was used to screen a database of 450 pesticides and TPs. Detection and identification were based on specific criteria, i.e. mass accuracy, fragmentation, and comparison of retention times when reference standards were available, or a retention time prediction model when standards were not available. Seventeen pesticides and TPs from different classes (fungicides, herbicides and insecticides) were found in WW in Italy and Spain, and twelve in SW. Generally, in both countries more compounds were detected in effluent WW than in influent WW, and in SW than in WW. This might be due to the analytical sensitivity in the different matrices, but also to the presence of multiple sources of pollution. HRMS proved a good screening tool to determine a large number of substances in water and identify some priority compounds for further quantitative analysis.
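
    As an illustration of the mass-accuracy criterion used in such wide-scope screening, the sketch below matches measured m/z values against a small suspect list within a ppm tolerance. It is a minimal sketch, not the authors' workflow; the compound masses, the 5 ppm window, and all names are illustrative assumptions.

    ```python
    # Sketch of the mass-accuracy step in suspect screening (assumed 5 ppm window).
    # The suspect database entries and tolerance are illustrative, not the study's values.

    def ppm_error(measured_mz: float, theoretical_mz: float) -> float:
        """Signed mass error in parts per million."""
        return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

    # Hypothetical suspect database: name -> theoretical [M+H]+ m/z
    suspects = {
        "atrazine": 216.1010,
        "terbuthylazine": 230.1167,
        "imidacloprid": 256.0595,
    }

    def screen(peaks, tolerance_ppm=5.0):
        """Return (peak m/z, suspect name, ppm error) for peaks matching within tolerance."""
        hits = []
        for mz in peaks:
            for name, theo in suspects.items():
                err = ppm_error(mz, theo)
                if abs(err) <= tolerance_ppm:
                    hits.append((mz, name, err))
        return hits

    print(screen([216.1004, 250.9000, 230.1172]))
    ```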

  3. Automated flow cytometric analysis across large numbers of samples and cell types.

    Science.gov (United States)

    Chen, Xiaoyi; Hasan, Milena; Libri, Valentina; Urrutia, Alejandra; Beitz, Benoît; Rouilly, Vincent; Duffy, Darragh; Patin, Étienne; Chalmond, Bernard; Rogge, Lars; Quintana-Murci, Lluis; Albert, Matthew L; Schwikowski, Benno

    2015-04-01

    Multi-parametric flow cytometry is a key technology for characterization of immune cell phenotypes. However, robust high-dimensional post-analytic strategies for automated data analysis in large numbers of donors are still lacking. Here, we report a computational pipeline, called FlowGM, which minimizes operator input, is insensitive to compensation settings, and can be adapted to different analytic panels. A Gaussian Mixture Model (GMM)-based approach was utilized for initial clustering, with the number of clusters determined using Bayesian Information Criterion. Meta-clustering in a reference donor permitted automated identification of 24 cell types across four panels. Cluster labels were integrated into FCS files, thus permitting comparisons to manual gating. Cell numbers and coefficient of variation (CV) were similar between FlowGM and conventional gating for lymphocyte populations, but notably FlowGM provided improved discrimination of "hard-to-gate" monocyte and dendritic cell (DC) subsets. FlowGM thus provides rapid high-dimensional analysis of cell phenotypes and is amenable to cohort studies.
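
    The clustering step described above can be sketched with an off-the-shelf Gaussian Mixture Model and BIC-based selection of the number of components. This is a minimal sketch in the spirit of FlowGM, not the published pipeline; the synthetic three-marker data and the range of candidate cluster counts are assumptions.

    ```python
    # Minimal sketch: GMM clustering with the number of clusters chosen by BIC.
    # Synthetic data stand in for compensated flow cytometry intensities.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Two synthetic "populations" in a 3-marker space
    data = np.vstack([
        rng.normal([0, 0, 0], 0.5, size=(500, 3)),
        rng.normal([3, 3, 0], 0.5, size=(300, 3)),
    ])

    models = [GaussianMixture(n_components=k, random_state=0).fit(data)
              for k in range(1, 6)]
    bics = [m.bic(data) for m in models]
    best = models[int(np.argmin(bics))]   # lowest BIC wins
    labels = best.predict(data)
    print("chosen number of clusters:", best.n_components)
    ```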

  4. Product-selective blot: a technique for measuring enzyme activities in large numbers of samples and in native electrophoresis gels

    International Nuclear Information System (INIS)

    Thompson, G.A.; Davies, H.M.; McDonald, N.

    1985-01-01

    A method termed product-selective blotting has been developed for screening large numbers of samples for enzyme activity. The technique is particularly well suited to detection of enzymes in native electrophoresis gels. The principle of the method was demonstrated by blotting samples from glutaminase or glutamate synthase reactions into an agarose gel embedded with ion-exchange resin under conditions favoring binding of product (glutamate) over substrates and other substances in the reaction mixture. After washes to remove these unbound substances, the product was measured using either fluorometric staining or radiometric techniques. Glutaminase activity in native electrophoresis gels was visualized by a related procedure in which substrates and products from reactions run in the electrophoresis gel were blotted directly into a resin-containing image gel. Considering the selective-binding materials available for use in the image gel, along with the possible detection systems, this method has potentially broad application.

  5. Does Decision Quality (Always) Increase with the Size of Information Samples? Some Vicissitudes in Applying the Law of Large Numbers

    Science.gov (United States)

    Fiedler, Klaus; Kareev, Yaakov

    2006-01-01

    Adaptive decision making requires that contingencies between decision options and their relative assets be assessed accurately and quickly. The present research addresses the challenging notion that contingencies may be more visible from small than from large samples of observations. An algorithmic account for such a seemingly paradoxical effect…
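
    The seemingly paradoxical point, that contingencies can look stronger in small samples, is easy to reproduce numerically. The toy simulation below (all parameters invented for illustration) shows that the probability of observing an exaggerated contrast between two options shrinks as the sample grows.

    ```python
    # Toy simulation: the observed contrast between two options is more often
    # extreme in small samples, even though the true contrast is modest (0.2).
    import numpy as np

    rng = np.random.default_rng(1)
    true_p = 0.6  # P(success | option A); option B has 0.4

    def observed_contrast(n):
        a = rng.binomial(n, true_p) / n        # observed success rate, option A
        b = rng.binomial(n, 1 - true_p) / n    # observed success rate, option B
        return a - b

    for n in (5, 20, 100):
        draws = np.array([observed_contrast(n) for _ in range(10_000)])
        print(f"n={n:3d}  P(observed contrast > 0.5) = {np.mean(draws > 0.5):.3f}")
    ```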

  6. Method for the radioimmunoassay of large numbers of samples using quantitative autoradiography of multiple-well plates

    International Nuclear Information System (INIS)

    Luner, S.J.

    1978-01-01

    A double antibody assay for thyroxine using ¹²⁵I as label was carried out on 10-μl samples in Microtiter V-plates. After an additional centrifugation to compact the precipitates, the plates were placed in contact with x-ray film overnight and the spots were scanned. In the 20 to 160 ng/ml range the average coefficient of variation for thyroxine concentration determined on the basis of film spot optical density was 11 percent, compared to 4.8 percent obtained using a standard gamma counter. Eliminating the need for each sample to spend on the order of 1 min in a crystal well detector makes the method convenient for large-scale applications involving more than 3000 samples per day.

  7. Large number discrimination by mosquitofish.

    Directory of Open Access Journals (Sweden)

    Christian Agrillo

    BACKGROUND: Recent studies have demonstrated that fish display rudimentary numerical abilities similar to those observed in mammals and birds. The mechanisms underlying the discrimination of small quantities (<4) were recently investigated while, to date, no study has examined the discrimination of large numerosities in fish. METHODOLOGY/PRINCIPAL FINDINGS: Subjects were trained to discriminate between two sets of small geometric figures using social reinforcement. In the first experiment mosquitofish were required to discriminate 4 from 8 objects with or without experimental control of the continuous variables that co-vary with number (area, space, density, total luminance). Results showed that fish can use the sole numerical information to compare quantities but that they preferentially use cumulative surface area as a proxy of the number when this information is available. A second experiment investigated the influence of the total number of elements to discriminate large quantities. Fish proved to be able to discriminate up to 100 vs. 200 objects, without showing any significant decrease in accuracy compared with the 4 vs. 8 discrimination. The third experiment investigated the influence of the ratio between the numerosities. Performance was found to decrease when decreasing the numerical distance. Fish were able to discriminate numbers when ratios were 1:2 or 2:3 but not when the ratio was 3:4. The performance of a sample of undergraduate students, tested non-verbally using the same sets of stimuli, largely overlapped that of fish. CONCLUSIONS/SIGNIFICANCE: Fish are able to use pure numerical information when discriminating between quantities larger than 4 units. As observed in human and non-human primates, the numerical system of fish appears to have virtually no upper limit, while the numerical ratio has a clear effect on performance. These similarities further reinforce the view of a common origin of non-verbal numerical systems in all vertebrates.

  8. Prediction of the number of 14 MeV neutron elastically scattered from large sample of aluminium using Monte Carlo simulation method

    International Nuclear Information System (INIS)

    Husin Wagiran; Wan Mohd Nasir Wan Kadir

    1997-01-01

    In neutron scattering processes, the effect of multiple scattering is to cause an effective increase in the measured cross-sections, due to the increased probability of neutron scattering interactions in the sample. Analysis of how the effective cross-section varies with thickness is very complicated because of complicated sample geometries and the variation of the scattering cross-section with energy. The Monte Carlo method is one possible method for treating multiple scattering processes in an extended sample. In this method many approximations have to be made, and accurate data on microscopic cross-sections are needed at various angles. In the present work, a Monte Carlo simulation programme suitable for a small computer was developed. The programme was capable of predicting the number of neutrons scattered from aluminium samples of various thicknesses at all possible angles between 0° and 360° in 10° increments. In order to keep the programme simple and capable of being run on a microcomputer in reasonable time, the calculations were done in a two-dimensional coordinate system. The number of neutrons predicted by this model shows good agreement with previous experimental results.
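
    A two-dimensional toy version of such a Monte Carlo treatment is sketched below. It assumes isotropic re-scattering, an arbitrary mean free path, and a simple slab geometry, none of which are taken from the paper; the point is only to show how multiple scattering and exit-angle tallies are simulated.

    ```python
    # 2D toy Monte Carlo of multiple elastic scattering in a slab.
    # Isotropic scattering and the mean free path are simplifying assumptions.
    import math, random

    random.seed(42)
    THICKNESS = 2.0      # slab thickness (arbitrary mean-free-path units)
    MFP = 1.0            # mean free path between collisions
    N = 100_000          # incident neutrons
    bins = [0] * 36      # exit-angle histogram, 10-degree bins

    for _ in range(N):
        x, y, theta = 0.0, 0.0, 0.0     # enter at x=0 travelling along +x
        while True:
            step = random.expovariate(1.0 / MFP)   # free path ~ exponential
            x += step * math.cos(theta)
            y += step * math.sin(theta)
            if x < 0.0 or x > THICKNESS:            # escaped the slab: tally and stop
                bins[int(math.degrees(theta) % 360) // 10] += 1
                break
            theta = random.uniform(0.0, 2.0 * math.pi)  # isotropic re-scatter

    print("forward (0-10 deg):", bins[0], " backscatter (170-180 deg):", bins[17])
    ```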

  9. The use of mass spectrometry for analysing metabolite biomarkers in epidemiology: methodological and statistical considerations for application to large numbers of biological samples.

    Science.gov (United States)

    Lind, Mads V; Savolainen, Otto I; Ross, Alastair B

    2016-08-01

    Data quality is critical for epidemiology, and as scientific understanding expands, the range of data available for epidemiological studies and the types of tools used for measurement have also expanded. It is essential for the epidemiologist to have a grasp of the issues involved with different measurement tools. One tool that is increasingly being used for measuring biomarkers in epidemiological cohorts is mass spectrometry (MS), because of the high specificity and sensitivity of MS-based methods and the expanding range of biomarkers that can be measured. Further, the ability of MS to quantify many biomarkers simultaneously is advantageous compared to single-biomarker methods. However, as with all methods used to measure biomarkers, there are a number of pitfalls to consider which may have an impact on results when used in epidemiology. In this review we discuss the use of MS for biomarker analyses, focusing on metabolites and their application and potential issues related to large-scale epidemiology studies, the use of MS "omics" approaches for biomarker discovery, and how MS-based results can be used for increasing the biological knowledge gained from epidemiological studies. Better understanding of the possibilities and possible problems related to MS-based measurements will help the epidemiologist in their discussions with analytical chemists and lead to the use of the most appropriate statistical tools for these data.

  10. The large sample size fallacy.

    Science.gov (United States)

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature.
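
    The fallacy is easy to demonstrate numerically: with a large enough sample, a trivially small effect produces an extreme p-value. The sketch below uses invented data (one million observations per group, a true mean difference of 0.02 standard deviations) to make the point.

    ```python
    # Numerical illustration of the large-sample-size fallacy: with n large enough,
    # a trivially small effect (Cohen's d ~ 0.02) is "statistically significant".
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    n = 1_000_000
    a = rng.normal(0.00, 1.0, n)
    b = rng.normal(0.02, 1.0, n)           # tiny true difference

    t, p = stats.ttest_ind(a, b)
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    d = (b.mean() - a.mean()) / pooled_sd  # observed effect size
    print(f"p = {p:.2e}  (significant), Cohen's d = {d:.3f}  (trivial)")
    ```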

  11. Forecasting distribution of numbers of large fires

    Science.gov (United States)

    Haiganoush K. Preisler; Jeff Eidenshink; Stephen Howard; Robert E. Burgan

    2015-01-01

    Systems to estimate forest fire potential commonly utilize one or more indexes that relate to expected fire behavior; however they indicate neither the chance that a large fire will occur, nor the expected number of large fires. That is, they do not quantify the probabilistic nature of fire danger. In this work we use large fire occurrence information from the...

  12. Thermal convection for large Prandtl numbers

    NARCIS (Netherlands)

    Grossmann, Siegfried; Lohse, Detlef

    2001-01-01

    The Rayleigh-Bénard theory by Grossmann and Lohse [J. Fluid Mech. 407, 27 (2000)] is extended towards very large Prandtl numbers Pr. The Nusselt number Nu is found here to be independent of Pr. However, for fixed Rayleigh numbers Ra a maximum in the Nu(Pr) dependence is predicted. We moreover offer …

  13. Evaluation of PCR procedures for detecting and quantifying Leishmania donovani DNA in large numbers of dried human blood samples from a visceral leishmaniasis focus in Northern Ethiopia.

    Science.gov (United States)

    Abbasi, Ibrahim; Aramin, Samar; Hailu, Asrat; Shiferaw, Welelta; Kassahun, Aysheshm; Belay, Shewaye; Jaffe, Charles; Warburg, Alon

    2013-03-27

    Visceral Leishmaniasis (VL) is a disseminated protozoan infection caused by Leishmania donovani parasites which affects almost half a million persons annually. Most of these are from the Indian sub-continent, East Africa and Brazil. Our study was designed to elucidate the role of symptomatic and asymptomatic Leishmania donovani infected persons in the epidemiology of VL in Northern Ethiopia. The efficacy of quantitative real-time kinetoplast DNA PCR (qRT-kDNA PCR) for detecting Leishmania donovani in dried-blood samples was assessed in volunteers living in an endemic focus. Of 4,757 samples, 680 (14.3%) were found positive for Leishmania kDNA, but most of those (69%) had less than 10 parasites/ml of blood. Samples were re-tested using identical protocols and only 59.3% of the samples with 10 parasites/ml or less were qRT-kDNA PCR positive the second time. Furthermore, 10.8% of the PCR-negative samples were positive in the second test. Most samples with higher parasitemias remained positive upon re-examination (55/59 = 93%). We also compared three different methods for DNA preparation. Phenol-chloroform was more efficient than sodium hydroxide or potassium acetate. DNA sequencing of ITS1 PCR products showed that 20/22 samples were Leishmania donovani while two had ITS1 sequences homologous to Leishmania major. Although qRT-kDNA PCR is a highly sensitive test, the dependability of low positives remains questionable. It is crucial to correlate PCR parasitemia with infectivity to sand flies. While optimal sensitivity is achieved by targeting kDNA, it is important to validate the causative species of VL by DNA sequencing.

  14. Genetic Characterization of Echinococcus granulosus from a Large Number of Formalin-Fixed, Paraffin-Embedded Tissue Samples of Human Isolates in Iran

    Science.gov (United States)

    Rostami, Sima; Torbaghan, Shams Shariat; Dabiri, Shahriar; Babaei, Zahra; Mohammadi, Mohammad Ali; Sharbatkhori, Mitra; Harandi, Majid Fasihi

    2015-01-01

    Cystic echinococcosis (CE), caused by the larval stage of Echinococcus granulosus, presents an important medical and veterinary problem globally, including that in Iran. Different genotypes of E. granulosus have been reported from human isolates worldwide. This study identifies the genotype of the parasite responsible for human hydatidosis in three provinces of Iran using formalin-fixed paraffin-embedded tissue samples. In this study, 200 formalin-fixed paraffin-embedded tissue samples from human CE cases were collected from Alborz, Tehran, and Kerman provinces. Polymerase chain reaction amplification and sequencing of the partial mitochondrial cytochrome c oxidase subunit 1 gene were performed for genetic characterization of the samples. Phylogenetic analysis of the isolates from this study and reference sequences of different genotypes was done using a maximum likelihood method. In total, 54.4%, 0.8%, 1%, and 40.8% of the samples were identified as the G1, G2, G3, and G6 genotypes, respectively. The findings of the current study confirm the G1 genotype (sheep strain) to be the most prevalent genotype involved in human CE cases in Iran and indicates the high prevalence of the G6 genotype with a high infectivity for humans. Furthermore, this study illustrates the first documented human CE case in Iran infected with the G2 genotype. PMID:25535316

  15. Large number discrimination in newborn fish.

    Directory of Open Access Journals (Sweden)

    Laura Piffer

    Quantitative abilities have been reported in a wide range of species, including fish. Recent studies have shown that adult guppies (Poecilia reticulata) can spontaneously select the larger number of conspecifics. In particular, the evidence collected in the literature suggests the existence of two distinct systems of number representation: a precise system up to 4 units, and an approximate system for larger numbers. Spontaneous numerical abilities, however, seem to be limited to 4 units at birth and it is currently unclear whether or not the large number system is absent during the first days of life. In the present study, we investigated whether newborn guppies can be trained to discriminate between large quantities. Subjects were required to discriminate between groups of dots with a 0.50 ratio (e.g., 7 vs. 14) in order to obtain a food reward. To dissociate the roles of number and continuous quantities that co-vary with numerical information (such as cumulative surface area, space and density), three different experiments were set up: in Exp. 1 number and continuous quantities were simultaneously available; in Exp. 2 we controlled for continuous quantities and only numerical information was available; in Exp. 3 numerical information was made irrelevant and only continuous quantities were available. Subjects successfully solved the tasks in Exp. 1 and 2, providing the first evidence of large number discrimination in newborn fish. No discrimination was found in Exp. 3, meaning that number acuity is better than spatial acuity. A comparison with the onset of numerical abilities observed in shoal-choice tests suggests that training procedures can promote the development of numerical abilities in guppies.

  16. Getting DNA copy numbers without control samples

    Directory of Open Access Journals (Sweden)

    Ortiz-Estevez Maria

    2012-08-01

    Background: The selection of the reference to scale the data in a copy number analysis has paramount importance to achieve accurate estimates. Usually this reference is generated using control samples included in the study. However, these control samples are not always available and in these cases, an artificial reference must be created. A proper generation of this signal is crucial in terms of both noise and bias. We propose NSA (Normality Search Algorithm), a scaling method that works with and without control samples. It is based on the assumption that genomic regions enriched in SNPs with identical copy numbers in both alleles are likely to be normal. These normal regions are predicted for each sample individually and used to calculate the final reference signal. NSA can be applied to any CN data regardless of the microarray technology and preprocessing method. It also finds an optimal weighting of the samples minimizing possible batch effects. Results: Five human datasets (a subset of HapMap samples, Glioblastoma Multiforme (GBM), Ovarian, Prostate and Lung Cancer experiments) have been analyzed. It is shown that using only tumoral samples, NSA is able to remove the bias in the copy number estimation, to reduce the noise and therefore, to increase the ability to detect copy number aberrations (CNAs). These improvements allow NSA to also detect recurrent aberrations more accurately than other state of the art methods. Conclusions: NSA provides a robust and accurate reference for scaling probe signals data to CN values without the need of control samples. It minimizes the problems of bias, noise and batch effects in the estimation of CNs. Therefore, the NSA scaling approach helps to better detect recurrent CNAs than current methods. The automatic selection of references makes it useful to perform bulk analysis of many GEO or ArrayExpress experiments without the need of developing a parser to find the normal samples or possible batches within the data.

  17. Getting DNA copy numbers without control samples.

    Science.gov (United States)

    Ortiz-Estevez, Maria; Aramburu, Ander; Rubio, Angel

    2012-08-16

    The selection of the reference to scale the data in a copy number analysis has paramount importance to achieve accurate estimates. Usually this reference is generated using control samples included in the study. However, these control samples are not always available and in these cases, an artificial reference must be created. A proper generation of this signal is crucial in terms of both noise and bias. We propose NSA (Normality Search Algorithm), a scaling method that works with and without control samples. It is based on the assumption that genomic regions enriched in SNPs with identical copy numbers in both alleles are likely to be normal. These normal regions are predicted for each sample individually and used to calculate the final reference signal. NSA can be applied to any CN data regardless of the microarray technology and preprocessing method. It also finds an optimal weighting of the samples minimizing possible batch effects. Five human datasets (a subset of HapMap samples, Glioblastoma Multiforme (GBM), Ovarian, Prostate and Lung Cancer experiments) have been analyzed. It is shown that using only tumoral samples, NSA is able to remove the bias in the copy number estimation, to reduce the noise and therefore, to increase the ability to detect copy number aberrations (CNAs). These improvements allow NSA to also detect recurrent aberrations more accurately than other state of the art methods. NSA provides a robust and accurate reference for scaling probe signals data to CN values without the need of control samples. It minimizes the problems of bias, noise and batch effects in the estimation of CNs. Therefore, the NSA scaling approach helps to better detect recurrent CNAs than current methods. The automatic selection of references makes it useful to perform bulk analysis of many GEO or ArrayExpress experiments without the need of developing a parser to find the normal samples or possible batches within the data. The method is available in the open-source R package …

  18. Large numbers hypothesis. II - Electromagnetic radiation

    Science.gov (United States)

    Adams, P. J.

    1983-01-01

    This paper develops the theory of electromagnetic radiation in the units covariant formalism incorporating Dirac's large numbers hypothesis (LNH). A direct field-to-particle technique is used to obtain the photon propagation equation, which explicitly involves the photon replication rate. This replication rate is fixed uniquely by requiring that the form of a free-photon distribution function be preserved, as required by the 2.7 K cosmic radiation. One finds that with this particular photon replication rate the units covariant formalism developed in Paper I actually predicts that the ratio of photon number to proton number in the universe varies as t^{1/4}, precisely in accord with LNH. The cosmological red-shift law is also derived and it is shown to differ considerably from the standard form νR = const.

  19. Number of core samples: Mean concentrations and confidence intervals

    International Nuclear Information System (INIS)

    Jensen, L.; Cromar, R.D.; Wilmarth, S.R.; Heasler, P.G.

    1995-01-01

    This document provides estimates of how well the mean concentrations of analytes are known as a function of the number of core samples, composite samples, and replicate analyses. The estimates are based upon core composite data from nine recently sampled single-shell tanks. The results can be used when determining the number of core samples needed to "characterize" the waste from similar single-shell tanks. A standard way of expressing uncertainty in the estimate of a mean is with a 95% confidence interval (CI). The authors investigate how the width of a 95% CI on the mean concentration decreases as the number of observations increases. Specifically, the tables and figures show how the relative half-width (RHW) of a 95% CI decreases as the number of core samples increases. The RHW of a CI is a unit-less measure of uncertainty. The general conclusions are as follows: (1) the RHW decreases dramatically as the number of core samples is increased; the decrease is much smaller when the number of composite samples or the number of replicate analyses is increased; (2) if the mean concentration of an analyte needs to be estimated with a small RHW, then a large number of core samples is required. The estimated numbers of core samples given in the tables and figures were determined by specifying different sizes of the RHW. Four nominal sizes were examined: 10%, 25%, 50%, and 100% of the observed mean concentration. For a majority of analytes the number of core samples required to achieve an accuracy within 10% of the mean concentration is extremely large. In many cases, however, two or three core samples are sufficient to achieve a RHW of approximately 50 to 100%. Because many of the analytes in the data have small concentrations, this level of accuracy may be satisfactory for some applications.
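
    A sketch of the RHW calculation implied above follows, assuming the usual t-based interval: RHW = t(0.975, n-1) · (s/√n) / x̄. The concentration values below are simulated stand-ins, not tank data.

    ```python
    # Relative half-width of a 95% CI on the mean, as a function of sample count.
    # The simulated "analyte concentrations" are illustrative only.
    import numpy as np
    from scipy import stats

    def relative_half_width(samples):
        n = len(samples)
        mean, s = np.mean(samples), np.std(samples, ddof=1)
        t = stats.t.ppf(0.975, df=n - 1)       # two-sided 95% critical value
        return t * s / np.sqrt(n) / mean

    rng = np.random.default_rng(3)
    true_cv = 0.5                              # analyte with 50% relative spread
    for n in (2, 3, 5, 10, 30):
        x = rng.normal(100.0, 100.0 * true_cv, size=n)
        print(f"n={n:2d}  RHW = {relative_half_width(x):.2f}")
    ```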

  20. Forecasting distribution of numbers of large fires

    Science.gov (United States)

    Eidenshink, Jeffery C.; Preisler, Haiganoush K.; Howard, Stephen; Burgan, Robert E.

    2014-01-01

    Systems to estimate forest fire potential commonly utilize one or more indexes that relate to expected fire behavior; however they indicate neither the chance that a large fire will occur, nor the expected number of large fires. That is, they do not quantify the probabilistic nature of fire danger. In this work we use large fire occurrence information from the Monitoring Trends in Burn Severity project, and satellite and surface observations of fuel conditions in the form of the Fire Potential Index, to estimate two aspects of fire danger: 1) the probability that a 1 acre ignition will result in a 100+ acre fire, and 2) the probabilities of having at least 1, 2, 3, or 4 large fires within a Predictive Services Area in the forthcoming week. These statistical processes are the main thrust of the paper and are used to produce two daily national forecasts that are available from the U.S. Geological Survey, Earth Resources Observation and Science Center and via the Wildland Fire Assessment System. A validation study of our forecasts for the 2013 fire season demonstrated good agreement between observed and forecasted values.
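
    For illustration only: if the weekly number of large fires in an area is approximated as Poisson with mean lam (an assumption; the paper instead builds these probabilities from the Fire Potential Index and historical large-fire occurrence), the probabilities of at least 1, 2, 3, or 4 large fires follow directly.

    ```python
    # P(at least k large fires) under an assumed Poisson(lam) weekly count.
    from math import exp, factorial

    def prob_at_least(k: int, lam: float) -> float:
        """P(N >= k) for N ~ Poisson(lam), via the complement of the lower tail."""
        return 1.0 - sum(exp(-lam) * lam**i / factorial(i) for i in range(k))

    lam = 0.8  # hypothetical expected number of large fires this week
    for k in (1, 2, 3, 4):
        print(f"P(>= {k} large fires) = {prob_at_least(k, lam):.3f}")
    ```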

  1. Modified large number theory with constant G

    International Nuclear Information System (INIS)

    Recami, E.

    1983-01-01

    The inspiring "numerology" uncovered by Dirac, Eddington, Weyl, et al. can be explained and derived when it is slightly modified so as to connect the "gravitational world" (cosmos) with the "strong world" (hadron), rather than with the electromagnetic one. The aim of this note is to show the following. In the present approach to the "Large Number Theory", cosmos and hadrons are considered to be (finite) similar systems, so that the ratio R̄/r̄ of the typical cosmos length R̄ to the typical hadron length r̄ is constant in time (for instance, if both cosmos and hadrons undergo an expansion/contraction cycle, according to the "cyclical big-bang" hypothesis, then R̄ and r̄ can be chosen to be the maximum radii, or the average radii). As a consequence, the gravitational constant G turns out to be independent of time. The present note is based on work done in collaboration with P. Caldirola, G. D. Maccarrone, and M. Pavsic.

  2. Hierarchies in Quantum Gravity: Large Numbers, Small Numbers, and Axions

    Science.gov (United States)

    Stout, John Eldon

    Our knowledge of the physical world is mediated by relatively simple, effective descriptions of complex processes. By their very nature, these effective theories obscure any phenomena outside their finite range of validity, discarding information crucial to understanding the full, quantum gravitational theory. However, we may gain enormous insight into the full theory by understanding how effective theories with extreme characteristics--for example, those which realize large-field inflation or have disparate hierarchies of scales--can be naturally realized in consistent theories of quantum gravity. The work in this dissertation focuses on understanding the quantum gravitational constraints on these "extreme" theories in well-controlled corners of string theory. Axion monodromy provides one mechanism for realizing large-field inflation in quantum gravity. These models spontaneously break an axion's discrete shift symmetry and, assuming that the corrections induced by this breaking remain small throughout the excursion, create a long, quasi-flat direction in field space. This weakly-broken shift symmetry has been used to construct a dynamical solution to the Higgs hierarchy problem, dubbed the "relaxion." We study this relaxion mechanism and show that--without major modifications--it cannot be naturally embedded within string theory. In particular, we find corrections to the relaxion potential--due to the ten-dimensional backreaction of monodromy charge--that conflict with naive notions of technical naturalness and render the mechanism ineffective. The super-Planckian field displacements necessary for large-field inflation may also be realized via the collective motion of many aligned axions. However, it is not clear that string theory provides the structures necessary for this to occur. We search for these structures by explicitly constructing the leading order potential for C4 axions and computing the maximum possible field displacement in all compactifications of …

  3. Sampling Large Graphs for Anticipatory Analytics

    Science.gov (United States)

    2015-05-15

    Random area sampling [8] is a "snowball" sampling method in which a set of random seed vertices are selected and areas … systems, greater human-in-the-loop involvement, or through complex algorithms. We are investigating the use of sampling to mitigate these challenges.

  4. New feature for an old large number

    International Nuclear Information System (INIS)

    Novello, M.; Oliveira, L.R.A.

    1986-01-01

    A new context for the appearance of the Eddington number (10^39), which is due to the examination of elastic scattering of scalar particles (πK → πK) non-minimally coupled to gravity, is presented. (author)

  5. Large sample neutron activation analysis of a reference inhomogeneous sample

    International Nuclear Information System (INIS)

    Vasilopoulou, T.; Athens National Technical University, Athens; Tzika, F.; Stamatelatos, I.E.; Koster-Ammerlaan, M.J.J.

    2011-01-01

    A benchmark experiment was performed for Neutron Activation Analysis (NAA) of a large inhomogeneous sample. The reference sample was developed in-house and consisted of a SiO2 matrix and an Al-Zn alloy 'inhomogeneity' body. Monte Carlo simulations were employed to derive appropriate correction factors for neutron self-shielding during irradiation as well as self-attenuation of gamma rays and sample geometry during counting. The large sample neutron activation analysis (LSNAA) results were compared against reference values and the trueness of the technique was evaluated. An agreement within ±10% was observed between LSNAA and reference elemental mass values, for all matrix and inhomogeneity elements except samarium, provided that the inhomogeneity body was fully simulated. However, in cases where the inhomogeneity was treated as unknown, the results showed a reasonable agreement for most matrix elements, while large discrepancies were observed for the inhomogeneity elements. This study provided a quantification of the uncertainties associated with inhomogeneity in large sample analysis and contributed to the identification of the needs for future development of LSNAA facilities for analysis of inhomogeneous samples. (author)

  6. 21 CFR 203.38 - Sample lot or control numbers; labeling of sample units.

    Science.gov (United States)

    2010-04-01

    (a) Lot or control number required on drug sample labeling and sample … identifying lot or control number that will permit the tracking of the distribution of each drug sample unit …

  7. Analysis of large soil samples for actinides

    Science.gov (United States)

    Maxwell, Sherrod L., III [Aiken, SC]

    2009-03-24

    A method of analyzing relatively large soil samples for actinides by employing a separation process that includes cerium fluoride precipitation for removing the soil matrix and precipitating plutonium, americium, and curium with cerium and hydrofluoric acid, followed by separation of these actinides using chromatography cartridges.

  8. Large Sample Neutron Activation Analysis of Heterogeneous Samples

    International Nuclear Information System (INIS)

    Stamatelatos, I.E.; Vasilopoulou, T.; Tzika, F.

    2018-01-01

    A Large Sample Neutron Activation Analysis (LSNAA) technique was developed for non-destructive analysis of heterogeneous bulk samples. The technique incorporated collimated scanning and combined experimental measurements and Monte Carlo simulations for the identification of inhomogeneities in large volume samples and the correction of their effect on the interpretation of gamma-spectrometry data. Corrections were applied for the effect of neutron self-shielding, gamma-ray attenuation, geometrical factor and heterogeneous activity distribution within the sample. A benchmark experiment was performed to investigate the effect of heterogeneity on the accuracy of LSNAA. Moreover, a ceramic vase was analyzed as a whole, demonstrating the feasibility of the technique. The LSNAA results were compared against results obtained by INAA and a satisfactory agreement between the two methods was observed. This study showed that LSNAA is a technique capable of performing accurate non-destructive, multi-elemental compositional analysis of heterogeneous objects. It also revealed the great potential of the technique for the analysis of precious objects and artefacts that need to be preserved intact and cannot be damaged for sampling purposes. (author)

  9. Large sample NAA facility and methodology development

    International Nuclear Information System (INIS)

    Roth, C.; Gugiu, D.; Barbos, D.; Datcu, A.; Aioanei, L.; Dobrea, D.; Taroiu, I. E.; Bucsa, A.; Ghinescu, A.

    2013-01-01

    A Large Sample Neutron Activation Analysis (LSNAA) facility has been developed at the TRIGA Annular Core Pulsed Reactor (ACPR) operated by the Institute for Nuclear Research in Pitesti, Romania. The central irradiation cavity of the ACPR core can accommodate a large irradiation device. The ACPR neutron flux characteristics are well known and spectrum adjustment techniques have been successfully applied to enhance the thermal component of the neutron flux in the central irradiation cavity. An analysis methodology was developed by using the MCNP code in order to estimate counting efficiency and correction factors for the major perturbing phenomena. Test experiments, comparisons with classical instrumental neutron activation analysis (INAA) methods and an international inter-comparison exercise have been performed to validate the new methodology. (authors)

  10. Gibbs sampling on large lattice with GMRF

    Science.gov (United States)

    Marcotte, Denis; Allard, Denis

    2018-02-01

    Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions naturally occur when associating latent Gaussian fields to category fields obtained by discrete simulation methods like multipoint, sequential indicator simulation and object-based simulation. The latent Gaussians are often used in data assimilation and history matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfying, as it can diverge and it does not reproduce exactly the desired covariance. A better approach is to use Gaussian Markov Random Fields (GMRF), which enable the conditional distributions to be computed at any point without having to compute and invert the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be efficiently computed by convolution and applied with an acceptance/rejection method in the truncated case. We study empirically the speed of convergence, the effect of the choice of boundary conditions, of the correlation range and of GMRF smoothness. We show that the convergence is slower in the Gaussian case on the torus than for the finite case studied in the literature. However, in the truncated Gaussian case, we show that short scale correlation is quickly restored and the conditioning categories at each lattice point imprint the long scale correlation. Hence our approach enables Gibbs sampling to be realistically applied on large 2D or 3D lattices with the desired GMRF covariance.
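
    The idea of updating whole coding sets at once can be sketched with a red-black (checkerboard) Gibbs sweep for a simple first-order CAR-type GMRF on a torus, where same-colour sites share no neighbours. This is an illustrative sketch, not the paper's scheme: the truncation/acceptance-rejection step is omitted, and alpha, sigma, and the grid size are invented.

    ```python
    # Checkerboard Gibbs sampling for a first-order CAR-type GMRF on a torus.
    # Same-colour sites are conditionally independent, so each colour is
    # updated simultaneously; full conditional is N(alpha * neighbour mean, sigma^2).
    import numpy as np

    rng = np.random.default_rng(0)
    L, alpha, sigma = 64, 0.95, 1.0
    x = rng.normal(size=(L, L))

    ii, jj = np.indices((L, L))
    colour = (ii + jj) % 2                       # 0 = "red", 1 = "black"

    def neighbour_mean(x):
        # Average of the 4 nearest neighbours with periodic (torus) boundaries
        return (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                np.roll(x, 1, 1) + np.roll(x, -1, 1)) / 4.0

    for sweep in range(200):
        for c in (0, 1):
            mu = alpha * neighbour_mean(x)       # full-conditional means
            prop = rng.normal(mu, sigma, size=(L, L))
            x = np.where(colour == c, prop, x)   # update one colour at a time

    print("lattice mean:", x.mean(), " lattice sd:", x.std())
    ```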

  11. Thermocapillary Bubble Migration: Thermal Boundary Layers for Large Marangoni Numbers

    Science.gov (United States)

    Balasubramaniam, R.; Subramanian, R. S.

    1996-01-01

    The migration of an isolated gas bubble in an immiscible liquid possessing a temperature gradient is analyzed in the absence of gravity. The driving force for the bubble motion is the shear stress at the interface which is a consequence of the temperature dependence of the surface tension. The analysis is performed under conditions for which the Marangoni number is large, i.e. energy is transferred predominantly by convection. Velocity fields in the limit of both small and large Reynolds numbers are used. The thermal problem is treated by standard boundary layer theory. The outer temperature field is obtained in the vicinity of the bubble. A similarity solution is obtained for the inner temperature field. For both small and large Reynolds numbers, the asymptotic values of the scaled migration velocity of the bubble in the limit of large Marangoni numbers are calculated. The results show that the migration velocity has the same scaling for both low and large Reynolds numbers, but with a different coefficient. Higher order thermal boundary layers are analyzed for the large Reynolds number flow field and the higher order corrections to the migration velocity are obtained. Results are also presented for the momentum boundary layer and the thermal wake behind the bubble, for large Reynolds number conditions.

  12. A course in mathematical statistics and large sample theory

    CERN Document Server

    Bhattacharya, Rabi; Patrangenaru, Victor

    2016-01-01

    This graduate-level textbook is primarily aimed at graduate students of statistics, mathematics, science, and engineering who have had an undergraduate course in statistics, an upper division course in analysis, and some acquaintance with measure theoretic probability. It provides a rigorous presentation of the core of mathematical statistics. Part I of this book constitutes a one-semester course on basic parametric mathematical statistics. Part II deals with the large sample theory of statistics — parametric and nonparametric, and its contents may be covered in one semester as well. Part III provides brief accounts of a number of topics of current interest for practitioners and other disciplines whose work involves statistical methods. Features: large sample theory with many worked examples, numerical calculations, and simulations to illustrate theory; appendices providing ready access to a number of standard results, with many proofs; solutions to a number of selected exercises from Part I; Part II exercises with …

  13. On a strong law of large numbers for monotone measures

    Czech Academy of Sciences Publication Activity Database

    Agahi, H.; Mohammadpour, A.; Mesiar, Radko; Ouyang, Y.

    2013-01-01

    Roč. 83, č. 4 (2013), s. 1213-1218 ISSN 0167-7152 R&D Projects: GA ČR GAP402/11/0378 Institutional support: RVO:67985556 Keywords : capacity * Choquet integral * strong law of large numbers Subject RIV: BA - General Mathematics Impact factor: 0.531, year: 2013 http://library.utia.cas.cz/separaty/2013/E/mesiar-on a strong law of large numbers for monotone measures.pdf

  14. A Chain Perspective on Large-scale Number Systems

    NARCIS (Netherlands)

    Grijpink, J.H.A.M.

    2012-01-01

    As large-scale number systems gain significance in social and economic life (electronic communication, remote electronic authentication), the correct functioning and the integrity of public number systems take on crucial importance. They are needed to uniquely indicate people, objects or phenomena …

  15. Sampling large random knots in a confined space

    International Nuclear Information System (INIS)

    Arsuaga, J; Blackstone, T; Diao, Y; Hinson, K; Karadayi, E; Saito, M

    2007-01-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.

  16. Sampling large random knots in a confined space

    Science.gov (United States)

    Arsuaga, J.; Blackstone, T.; Diao, Y.; Hinson, K.; Karadayi, E.; Saito, M.

    2007-09-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.

  17. Sampling large random knots in a confined space

    Energy Technology Data Exchange (ETDEWEB)

    Arsuaga, J [Department of Mathematics, San Francisco State University, 1600 Holloway Ave, San Francisco, CA 94132 (United States); Blackstone, T [Department of Computer Science, San Francisco State University, 1600 Holloway Ave., San Francisco, CA 94132 (United States); Diao, Y [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Hinson, K [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Karadayi, E [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States); Saito, M [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States)

    2007-09-28

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.
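
    The O(n^2) growth in the average number of crossings is easy to probe empirically. The sketch below generates uniform random polygons (independent uniform vertices in the unit cube joined in order) and counts proper crossings of non-adjacent edges in the xy-projection; the trial counts and polygon sizes are arbitrary choices.

    ```python
    # Uniform random polygons: count projected crossings of non-adjacent edges.
    import numpy as np

    rng = np.random.default_rng(5)

    def segments_cross(p1, p2, q1, q2):
        """Proper intersection test for 2D segments via orientation signs."""
        def orient(a, b, c):
            return np.sign((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))
        return (orient(p1, p2, q1) != orient(p1, p2, q2) and
                orient(q1, q2, p1) != orient(q1, q2, p2))

    def mean_crossings(n, trials=20):
        counts = []
        for _ in range(trials):
            pts = rng.uniform(size=(n, 3))[:, :2]   # xy-projection of the polygon
            edges = [(pts[i], pts[(i+1) % n]) for i in range(n)]
            c = sum(segments_cross(*edges[i], *edges[j])
                    for i in range(n) for j in range(i+2, n)
                    if not (i == 0 and j == n-1))   # skip adjacent edge pairs
            counts.append(c)
        return np.mean(counts)

    for n in (10, 20, 40):
        print(n, mean_crossings(n))
    ```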

  18. The large numbers hypothesis and a relativistic theory of gravitation

    International Nuclear Information System (INIS)

    Lau, Y.K.; Prokhovnik, S.J.

    1986-01-01

    A way to reconcile Dirac's large numbers hypothesis and Einstein's theory of gravitation was recently suggested by Lau (1985). It is characterized by the conjecture of a time-dependent cosmological term and gravitational term in Einstein's field equations. Motivated by this conjecture and the large numbers hypothesis, we formulate here a scalar-tensor theory in terms of an action principle. The cosmological term is required to be spatially dependent as well as time dependent in general. The theory developed is applied to a cosmological model compatible with the large numbers hypothesis. The time-dependent forms of the cosmological term and the scalar potential are then deduced. A possible explanation of the smallness of the cosmological term is also given and the possible significance of the scalar field is speculated upon.

  19. Fatal crashes involving large numbers of vehicles and weather.

    Science.gov (United States)

    Wang, Ying; Liang, Liming; Evans, Leonard

    2017-12-01

    Adverse weather has been recognized as a significant threat to traffic safety. However, relationships between fatal crashes involving large numbers of vehicles and weather are rarely studied, owing to the low occurrence of crashes involving large numbers of vehicles. By using all 1,513,792 fatal crashes in the Fatality Analysis Reporting System (FARS) data, 1975-2014, we successfully described these relationships. We found: (a) fatal crashes involving more than 35 vehicles are most likely to occur in snow or fog; (b) fatal crashes in rain are three times as likely to involve 10 or more vehicles as fatal crashes in good weather; (c) fatal crashes in snow [or fog] are 24 times [35 times] as likely to involve 10 or more vehicles as fatal crashes in good weather. If the example had used 20 vehicles, the risk ratios would be 6 for rain, 158 for snow, and 171 for fog. To reduce the risk of involvement in fatal crashes with large numbers of vehicles, drivers should slow down more than they currently do under adverse weather conditions. Driver deaths per fatal crash increase slowly with increasing numbers of involved vehicles when it is snowing or raining, but more steeply when clear or foggy. We conclude that in order to reduce the risk of involvement in crashes involving large numbers of vehicles, drivers must reduce speed in fog, and in snow or rain, reduce speed by even more than they already do.

  20. On Independence for Capacities with Law of Large Numbers

    OpenAIRE

    Huang, Weihuan

    2017-01-01

    This paper introduces new notions of Fubini independence and Exponential independence of random variables under capacities to fit Ellsberg's model, and establishes the relationships between Fubini independence, Exponential independence, MacCheroni and Marinacci's independence and Peng's independence. As an application, we give a weak law of large numbers for capacities under Exponential independence.

  1. Teaching Multiplication of Large Positive Whole Numbers Using ...

    African Journals Online (AJOL)

    This study investigated the teaching of multiplication of large positive whole numbers using the grating method and the effect of this method on students' performance in junior secondary schools. The study was conducted in Obio Akpor Local Government Area of Rivers State. It was quasi-experimental. Two research ...

  2. Lovelock inflation and the number of large dimensions

    CERN Document Server

    Ferrer, Francesc

    2007-01-01

    We discuss an inflationary scenario based on Lovelock terms. These higher order curvature terms can lead to inflation when there are more than three spatial dimensions. Inflation will end if the extra dimensions are stabilised, so that at most three dimensions are free to expand. This relates graceful exit to the number of large dimensions.

  3. LOGISTICS OF ECOLOGICAL SAMPLING ON LARGE RIVERS

    Science.gov (United States)

    The objectives of this document are to provide an overview of the logistical problems associated with the ecological sampling of boatable rivers and to suggest solutions to those problems. It is intended to be used as a resource for individuals preparing to collect biological data…

  4. Importance sampling large deviations in nonequilibrium steady states. I

    Science.gov (United States)

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T.

    2018-03-01

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.

  5. Importance sampling large deviations in nonequilibrium steady states. I.

    Science.gov (United States)

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T

    2018-03-28

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
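
    The sampling problem the authors address can be seen in a naive, non-guided estimator of the scaled cumulant generating function λ(k) = (1/T) log E[exp(k A_T)] for a simple ±1 random walker, where A_T is the time-integrated displacement. All parameters below are illustrative; the collapsing effective sample size at larger k is exactly the exponential degeneracy that guiding functions are meant to cure.

    ```python
    # Naive estimator of the scaled cumulant generating function for a +/-1
    # random walker; exact value is log(cosh(k)). The exponential average is
    # dominated by rare trajectories, so the estimate degrades as k grows.
    import numpy as np

    rng = np.random.default_rng(11)
    T, n_traj = 200, 5_000

    def naive_scgf(k):
        steps = rng.choice([-1.0, 1.0], size=(n_traj, T))
        A = steps.sum(axis=1)                         # time-integrated observable
        weights = np.exp(k * A)
        ess = weights.sum()**2 / (weights**2).sum()   # effective sample size
        return np.log(weights.mean()) / T, ess

    for k in (0.05, 0.2, 0.5):
        lam, ess = naive_scgf(k)
        exact = np.log(np.cosh(k))
        print(f"k={k:4.2f}  lambda~{lam:.4f}  (exact {exact:.4f})  ESS={ess:.0f}")
    ```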

  6. A large number of stepping motor network construction by PLC

    Science.gov (United States)

    Mei, Lin; Zhang, Kai; Hongqiang, Guo

    2017-11-01

    In a flexible automated line the equipment is complex and the control modes are flexible; realizing orderly information interaction among a large number of stepping and servo motors therefore becomes a difficult control problem. Based on an existing flexible production line, this paper makes a comparative study of its network strategy. After this research, an Ethernet + PROFIBUS communication configuration based on PROFINET IO and PROFIBUS is proposed, which can effectively improve the data interaction efficiency of the equipment and provide stable data interaction.

  7. Fluid Mechanics of Aquatic Locomotion at Large Reynolds Numbers

    OpenAIRE

    Govardhan, RN; Arakeri, JH

    2011-01-01

    There exists a huge range of fish species, besides other aquatic organisms like squids and salps, that locomote in water at large Reynolds numbers, a regime of flow where inertial forces dominate viscous forces. In the present review, we discuss the fluid mechanics governing the locomotion of such organisms. Most fishes propel themselves by periodic undulatory motions of the body and tail, and the typical classification of their swimming modes is based on the fraction of their body...
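
    For orientation, the Reynolds number compares inertial to viscous forces; the numerical values below are illustrative and not taken from the review:

        \mathrm{Re} = \frac{UL}{\nu}, \qquad
        U \approx 1~\mathrm{m\,s^{-1}},\; L \approx 0.3~\mathrm{m},\; \nu \approx 10^{-6}~\mathrm{m^{2}\,s^{-1}}
        \;\Rightarrow\; \mathrm{Re} \approx 3 \times 10^{5}.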

  7. Rotating thermal convection at very large Rayleigh numbers

    Science.gov (United States)

    Weiss, Stephan; van Gils, Dennis; Ahlers, Guenter; Bodenschatz, Eberhard

    2016-11-01

    The large-scale thermal convection systems in geo- and astrophysics are usually influenced by Coriolis forces caused by the rotation of their celestial bodies. To better understand the influence of rotation on the convective flow field and the heat transport at these conditions, we study Rayleigh-Bénard convection, using pressurized sulfur hexafluoride (SF₆) at up to 19 bars in a cylinder of diameter D = 1.12 m and a height of L = 2.24 m. The gas is heated from below and cooled from above and the convection cell sits on a rotating table inside a large pressure vessel (the "Uboot of Göttingen"). With this setup Rayleigh numbers of up to Ra = 10¹⁵ can be reached, while Ekman numbers as low as Ek = 10⁻⁸ are possible. The Prandtl number in these experiments is kept constant at Pr = 0.8. We report on heat flux measurements (expressed by the Nusselt number Nu) as well as measurements from more than 150 temperature probes inside the flow. We thank the Deutsche Forschungsgemeinschaft (DFG) for financial support through SFB963: "Astrophysical Flow Instabilities and Turbulence". The work of GA was supported in part by the US National Science Foundation through Grant DMR11-58514.
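
    For reference, the dimensionless groups quoted above are defined (with one common convention for the Ekman number) as

        \mathrm{Ra} = \frac{g \alpha\, \Delta T\, L^{3}}{\nu \kappa}, \qquad
        \mathrm{Ek} = \frac{\nu}{2 \Omega L^{2}}, \qquad
        \mathrm{Pr} = \frac{\nu}{\kappa}, \qquad
        \mathrm{Nu} = \frac{q L}{\lambda\, \Delta T},

    where g is gravity, α the thermal expansion coefficient, ΔT the applied temperature difference, L the cell height, ν the kinematic viscosity, κ the thermal diffusivity, Ω the rotation rate, q the heat flux and λ the thermal conductivity.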

  8. Lepton number violation in theories with a large number of standard model copies

    International Nuclear Information System (INIS)

    Kovalenko, Sergey; Schmidt, Ivan; Paes, Heinrich

    2011-01-01

    We examine lepton number violation (LNV) in theories with a saturated black hole bound on a large number of species. Such theories have been advocated recently as a possible solution to the hierarchy problem and an explanation of the smallness of neutrino masses. On the other hand, the violation of lepton number can be a potential phenomenological problem of this N-copy extension of the standard model, since, due to the low quantum gravity scale, black holes may induce TeV-scale LNV operators generating unacceptably large rates of LNV processes. We show, however, that this issue can be avoided by introducing a spontaneously broken U(1)_(B-L). Then, due to the existence of a specific compensation mechanism between contributions of different Majorana neutrino states, LNV processes in the standard model copy become extremely suppressed with rates far beyond experimental reach.

  9. Improving CASINO performance for models with large number of electrons

    International Nuclear Information System (INIS)

    Anton, L.; Alfe, D.; Hood, R.Q.; Tanqueray, D.

    2009-01-01

    Quantum Monte Carlo calculations have at their core algorithms based on statistical ensembles of multidimensional random walkers which are straightforward to use on parallel computers. Nevertheless some computations have reached the limit of the memory resources for models with more than 1000 electrons because of the need to store a large amount of electronic orbitals related data. Moreover, for systems with a large number of electrons, it is interesting to study whether the evolution of one configuration of random walkers can be done faster in parallel. We present a comparative study of two ways to solve these problems: (1) distributed orbital data done with MPI or Unix inter-process communication tools, (2) second-level parallelism for configuration computation.
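
    A minimal sketch of option (1), distributing the orbital data across processes (mpi4py assumed; the array sizes, names and single-point basis evaluation are invented for illustration, and this is not CASINO's implementation):

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n_orbitals, n_basis = 1024, 4096
        lo = rank * n_orbitals // size                 # this rank's block of orbitals
        hi = (rank + 1) * n_orbitals // size
        coeffs = np.random.default_rng(rank).normal(size=(hi - lo, n_basis))

        phi = np.ones(n_basis)                         # basis functions evaluated at one point
        orbitals = np.zeros(n_orbitals)
        orbitals[lo:hi] = coeffs @ phi                 # each rank fills only its own block
        comm.Allreduce(MPI.IN_PLACE, orbitals, op=MPI.SUM)   # assemble the full vector
        if rank == 0:
            print(orbitals[:4])

    Each node then stores only 1/size of the orbital coefficients, at the price of one collective communication per evaluation.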

  10. [Dual process in large number estimation under uncertainty].

    Science.gov (United States)

    Matsumuro, Miki; Miwa, Kazuhisa; Terai, Hitoshi; Yamada, Kento

    2016-08-01

    According to dual process theory, there are two systems in the mind: an intuitive and automatic System 1 and a logical and effortful System 2. While many previous studies about number estimation have focused on simple heuristics and automatic processes, the deliberative System 2 process has not been sufficiently studied. This study focused on the System 2 process for large number estimation. First, we described an estimation process based on participants’ verbal reports. The task, corresponding to the problem-solving process, consisted of creating subgoals, retrieving values, and applying operations. Second, we investigated the influence of such a deliberative System 2 process on intuitive estimation by System 1, using anchoring effects. The results of the experiment showed that the System 2 process could mitigate anchoring effects.

  11. The large number hypothesis and Einstein's theory of gravitation

    International Nuclear Information System (INIS)

    Yun-Kau Lau

    1985-01-01

    In an attempt to reconcile the large number hypothesis (LNH) with Einstein's theory of gravitation, a tentative generalization of Einstein's field equations with time-dependent cosmological and gravitational constants is proposed. A cosmological model consistent with the LNH is deduced. The coupling formula of the cosmological constant with matter is found, and as a consequence, the time-dependent formulae of the cosmological constant and the mean matter density of the Universe at the present epoch are then found. Einstein's theory of gravitation, whether with a zero or nonzero cosmological constant, becomes a limiting case of the new generalized field equations after the early epoch

  12. Combining large number of weak biomarkers based on AUC.

    Science.gov (United States)

    Yan, Li; Tian, Lili; Liu, Song

    2015-12-20

    Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. Copyright © 2015 John Wiley & Sons, Ltd.
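
    A minimal sketch of the underlying optimization for two synthetic weak markers (a brute-force search, not the paper's pairwise method); since only the direction of the weight vector affects the AUC, a two-marker combination can be parameterized by a single angle:

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(1)
        n = 500
        y = rng.integers(0, 2, size=n)                    # case/control labels
        X = rng.normal(size=(n, 2)) + 0.3 * y[:, None]    # two weak biomarkers

        best = max(
            ((t, roc_auc_score(y, np.cos(t) * X[:, 0] + np.sin(t) * X[:, 1]))
             for t in np.linspace(0, np.pi, 181)),
            key=lambda pair: pair[1],
        )
        print(f"best angle {best[0]:.2f} rad, AUC {best[1]:.3f}")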

  13. Quasi-isodynamic configuration with large number of periods

    International Nuclear Information System (INIS)

    Shafranov, V.D.; Isaev, M.Yu.; Mikhailov, M.I.; Subbotin, A.A.; Cooper, W.A.; Kalyuzhnyj, V.N.; Kasilov, S.V.; Nemov, V.V.; Kernbichler, W.; Nuehrenberg, C.; Nuehrenberg, J.; Zille, R.

    2005-01-01

    It has been previously reported that quasi-isodynamic (qi) stellarators with poloidal direction of the contours of B on the magnetic surface can exhibit very good fast-particle collisionless confinement. In addition, approaching the quasi-isodynamicity condition leads to diminished neoclassical transport and small bootstrap current. The calculations of local-mode stability show that there is a tendency toward an increasing beta limit with increasing number of periods. The consideration of quasi-helically symmetric systems has demonstrated that with increasing aspect ratio (and number of periods) the optimized configuration approaches the straight symmetric counterpart, for which the optimal parameters and highest beta values were found by optimization of the boundary magnetic surface cross-section. The qi systems considered here with zero net toroidal current do not have a symmetric analogue in the limit of large aspect ratio and finite rotational transform. Thus, it is not clear whether some invariant structure of the configuration period exists in the limit of negligible toroidal effect and what the best possible parameters for it are. In the present paper the results of an optimization of a configuration with N = 12 periods are presented. Such properties as fast-particle confinement, effective ripple, structural factor of bootstrap current and MHD stability are considered. It is shown that the MHD stability limit here is higher than in configurations with smaller numbers of periods considered earlier. Nevertheless, the toroidal effect in this configuration is still significant, so that a simple increase of the number of periods and proportional growth of aspect ratio do not conserve the favourable neoclassical transport and ideal local-mode stability properties. (author)

  14. Automatic trajectory measurement of large numbers of crowded objects

    Science.gov (United States)

    Li, Hui; Liu, Ye; Chen, Yan Qiu

    2013-06-01

    Complex motion patterns of natural systems, such as fish schools, bird flocks, and cell groups, have attracted great attention from scientists for years. Trajectory measurement of individuals is vital for quantitative and high-throughput study of their collective behaviors. However, such data are rare mainly due to the challenges of detection and tracking of large numbers of objects with similar visual features and frequent occlusions. We present an automatic and effective framework to measure trajectories of large numbers of crowded oval-shaped objects, such as fish and cells. We first use a novel dual ellipse locator to detect the coarse position of each individual and then propose a variance minimization active contour method to obtain the optimal segmentation results. For tracking, the cost matrix of assignment between consecutive frames is trainable via a random forest classifier with many spatial, texture, and shape features. The optimal trajectories are found for the whole image sequence by solving two linear assignment problems. We evaluate the proposed method on many challenging data sets.
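
    A minimal sketch of the final matching step, assuming plain centroid distances as costs (the paper instead learns the cost matrix with a random forest over spatial, texture and shape features):

        import numpy as np
        from scipy.optimize import linear_sum_assignment
        from scipy.spatial.distance import cdist

        rng = np.random.default_rng(2)
        prev = rng.uniform(0, 100, size=(20, 2))               # detections in frame t
        curr = prev + rng.normal(scale=1.0, size=prev.shape)   # frame t+1, small motion
        perm = rng.permutation(len(curr))                      # unknown correspondence
        curr = curr[perm]

        cost = cdist(prev, curr)                     # pairwise distances as assignment costs
        rows, cols = linear_sum_assignment(cost)     # optimal one-to-one matching
        print(np.array_equal(cols, np.argsort(perm)))   # True: the shuffle is recovered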

  15. Optimal sampling designs for large-scale fishery sample surveys in Greece

    Directory of Open Access Journals (Sweden)

    G. BAZIGOS

    2007-12-01

    The paper deals with the optimization of the following three large-scale sample surveys: the biological sample survey of commercial landings (BSCL), the experimental fishing sample survey (EFSS), and the commercial landings and effort sample survey (CLES).

  16. Sample-path large deviations in credit risk

    NARCIS (Netherlands)

    Leijdekker, V.J.G.; Mandjes, M.R.H.; Spreij, P.J.C.

    2011-01-01

    The event of large losses plays an important role in credit risk. As these large losses are typically rare, and portfolios usually consist of a large number of positions, large deviation theory is the natural tool to analyze the tail asymptotics of the probabilities involved. We first derive a

  17. Forecasting the Number of Soil Samples Required to Reduce Remediation Cost Uncertainty

    OpenAIRE

    Demougeot-Renard, Hélène; de Fouquet, Chantal; Renard, Philippe

    2008-01-01

    Sampling scheme design is an important step in the management of polluted sites. It largely controls the accuracy of remediation cost estimates. In practice, however, sampling is seldom designed to comply with a given level of remediation cost uncertainty. In this paper, we present a new technique that allows one to estimate the number of samples that should be taken at a given stage of investigation to reach a forecasted level of accuracy. The uncertainty is expressed both in terms of vol...

  18. The large numbers hypothesis and the Einstein theory of gravitation

    International Nuclear Information System (INIS)

    Dirac, P.A.M.

    1979-01-01

    A study of the relations between large dimensionless numbers leads to the belief that G, expressed in atomic units, varies with the epoch while the Einstein theory requires G to be constant. These two requirements can be reconciled by supposing that the Einstein theory applies with a metric that differs from the atomic metric. The theory can be developed with conservation of mass by supposing that the continual increase in the mass of the observable universe arises from a continual slowing down of the velocity of recession of the galaxies. This leads to a model of the Universe that was first proposed by Einstein and de Sitter (the E.S. model). The observations of the microwave radiation fit in with this model. The static Schwarzschild metric has to be modified to fit in with the E.S. model for large r. The modification is worked out, and also the motion of planets with the new metric. It is found that there is a difference between ephemeris time and atomic time, and also that there should be an inward spiralling of the planets, referred to atomic units, superposed on the motion given by ordinary gravitational theory. These are effects that can be checked by observation, but there is no conclusive evidence up to the present. (author)

  19. A Characterization of Hypergraphs with Large Domination Number

    Directory of Open Access Journals (Sweden)

    Henning Michael A.

    2016-05-01

    Let H = (V, E) be a hypergraph with vertex set V and edge set E. A dominating set in H is a subset of vertices D ⊆ V such that for every vertex v ∈ V \ D there exists an edge e ∈ E for which v ∈ e and e ∩ D ≠ ∅. The domination number γ(H) is the minimum cardinality of a dominating set in H. It is known [Cs. Bujtás, M.A. Henning and Zs. Tuza, Transversals and domination in uniform hypergraphs, European J. Combin. 33 (2012) 62-71] that for k ≥ 5, if H is a hypergraph of order n and size m with all edges of size at least k and with no isolated vertex, then γ(H) ≤ (n + ⌊(k − 3)/2⌋m)/⌊3(k − 1)/2⌋. In this paper, we apply a recent result of the authors on hypergraphs with large transversal number [M.A. Henning and C. Löwenstein, A characterization of hypergraphs that achieve equality in the Chvátal-McDiarmid Theorem, Discrete Math. 323 (2014) 69-75] to characterize the hypergraphs achieving equality in this bound.

  1. Particle creation and Dirac's large number hypothesis; and Reply

    International Nuclear Information System (INIS)

    Canuto, V.; Adams, P.J.; Hsieh, S.H.; Tsiang, E.; Steigman, G.

    1976-01-01

    The claim made by Steigman (Nature; 261:479 (1976)), that the creation of matter as postulated by Dirac (Proc. R. Soc.; A338:439 (1974)) is unnecessary, is here shown to be incorrect. Steigman's claim that Dirac's Large Number Hypothesis (LNH) does not require particle creation is wrong because he has assumed what he was seeking to prove, namely that rho does not contain matter creation. Steigman's claim that Dirac's LNH leads to nonsensical results in the very early Universe is superficially correct, but this only supports Dirac's contention that the LNH may not be valid in the very early Universe. In a reply Steigman points out that in Dirac's original cosmology R ≈ t^(1/3), and using this model the results and conclusions of the present author's paper do apply, but using a variation chosen by Canuto et al (T approximately t) Dirac's LNH cannot apply. Additionally it is observed that a cosmological theory which only predicts the present epoch is of questionable value. (U.K.)

  2. A modified large number theory with constant G

    Science.gov (United States)

    Recami, Erasmo

    1983-03-01

    The inspiring “numerology” uncovered by Dirac, Eddington, Weyl, et al. can be explained and derived when it is slightly modified so as to connect the “gravitational world” (cosmos) with the “strong world” (hadron), rather than with the electromagnetic one. The aim of this note is to show the following. In the present approach to the “Large Number Theory,” cosmos and hadrons are considered to be (finite) similar systems, so that the ratio R̄/r̄ of the cosmos typical length R̄ to the hadron typical length r̄ is constant in time (for instance, if both cosmos and hadrons undergo an expansion/contraction cycle, according to the “cyclical big-bang” hypothesis, then R̄ and r̄ can be chosen to be the maximum radii, or the average radii). As a consequence, the gravitational constant G turns out to be independent of time. The present note is based on work done in collaboration with P. Caldirola, G. D. Maccarrone, and M. Pavšič.

  3. Waardenburg syndrome: Novel mutations in a large Brazilian sample.

    Science.gov (United States)

    Bocángel, Magnolia Astrid Pretell; Melo, Uirá Souto; Alves, Leandro Ucela; Pardono, Eliete; Lourenço, Naila Cristina Vilaça; Marcolino, Humberto Vicente Cezar; Otto, Paulo Alberto; Mingroni-Netto, Regina Célia

    2018-06-01

    This paper deals with the molecular investigation of Waardenburg syndrome (WS) in a sample of 49 clinically diagnosed probands (most from southeastern Brazil), 24 of them having the type 1 (WS1) variant (10 familial and 14 isolated cases) and 25 being affected by the type 2 (WS2) variant (five familial and 20 isolated cases). Sequential Sanger sequencing of all coding exons of PAX3, MITF, EDN3, EDNRB, SOX10 and SNAI2 genes, followed by CNV detection by MLPA of PAX3, MITF and SOX10 genes in selected cases revealed many novel pathogenic variants. Molecular screening, performed in all patients, revealed 19 causative variants (19/49 = 38.8%), six of them being large whole-exon deletions detected by MLPA, seven (four missense and three nonsense substitutions) resulting from single nucleotide substitutions (SNV), and six representing small indels. A pair of dizygotic affected female twins presented the c.430delC variant in SOX10, but the mutation, imputed to gonadal mosaicism, was not found in their unaffected parents. At least 10 novel causative mutations, described in this paper, were found in this Brazilian sample. Copy-number-variation detected by MLPA identified the causative mutation in 12.2% of our cases, corresponding to 31.6% of all causative mutations. In the majority of cases, the deletions were sporadic, since they were not present in the parents of isolated cases. Our results, as a whole, reinforce the fact that the screening of copy-number-variants by MLPA is a powerful tool to identify the molecular cause in WS patients. Copyright © 2018 Elsevier Masson SAS. All rights reserved.

  4. The large lungs of elite swimmers: an increased alveolar number?

    Science.gov (United States)

    Armour, J; Donnelly, P M; Bye, P T

    1993-02-01

    In order to obtain further insight into the mechanisms relating to the large lung volumes of swimmers, tests of mechanical lung function, including lung distensibility (K) and elastic recoil, pulmonary diffusion capacity, and respiratory mouth pressures, together with anthropometric data (height, weight, body surface area, chest width, depth and surface area), were compared in eight elite male swimmers, eight elite male long distance athletes and eight control subjects. The differences in training profiles of each group were also examined. There was no significant difference in height between the subjects, but the swimmers were younger than both the runners and controls, and both the swimmers and controls were heavier than the runners. Of all the training variables, only the mean total distance in kilometers covered per week was significantly greater in the runners. Whether based on: (a) adolescent predicted values; or (b) adult male predicted values, swimmers had significantly increased total lung capacity ((a) 145 +/- 22%, (mean +/- SD) (b) 128 +/- 15%); vital capacity ((a) 146 +/- 24%, (b) 124 +/- 15%); and inspiratory capacity ((a) 155 +/- 33%, (b) 138 +/- 29%), but this was not found in the other two groups. Swimmers also had the largest chest surface area and chest width. Forced expiratory volume in one second (FEV1) was largest in the swimmers ((b) 122 +/- 17%) and FEV1 as a percentage of forced vital capacity (FEV1/FVC)% was similar for the three groups. Pulmonary diffusing capacity (DLCO) was also highest in the swimmers (117 +/- 18%). All of the other indices of lung function, including pulmonary distensibility (K), elastic recoil and diffusion coefficient (KCO), were similar. These findings suggest that swimmers may have achieved greater lung volumes than either runners or control subjects, not because of greater inspiratory muscle strength, or differences in height, fat free mass, alveolar distensibility, age at start of training or sternal length or

  5. A NICE approach to managing large numbers of desktop PC's

    International Nuclear Information System (INIS)

    Foster, David

    1996-01-01

    The problems of managing desktop systems are far from resolved as we deploy increasing numbers of systems: PCs, Macintoshes and UN*X workstations. This paper will concentrate on the solution adopted at CERN for the management of the rapidly increasing number of desktop PCs in use in all parts of the laboratory. (author)

  6. The Ramsey numbers of large cycles versus small wheels

    NARCIS (Netherlands)

    Surahmat,; Baskoro, E.T.; Broersma, H.J.

    2004-01-01

    For two given graphs G and H, the Ramsey number R(G;H) is the smallest positive integer N such that for every graph F of order N the following holds: either F contains G as a subgraph or the complement of F contains H as a subgraph. In this paper, we determine the Ramsey number R(Cn;Wm) for m = 4

  7. Teaching multiplication of large positive whole numbers using ...

    African Journals Online (AJOL)

    KEY WORDS: Grating Method, History of Mathematics, Long Multiplication. ... The Wolfram mathworld (n.d.) opined that the ... A further simple random sampling was carried out to select an intact class of 40 students from each of the sampled ...

  8. Boll weevil: experimental sterilization of large numbers by fractionated irradiation

    International Nuclear Information System (INIS)

    Haynes, J.W.; Wright, J.E.; Davich, T.B.; Roberson, J.; Griffin, J.G.; Darden, E.

    1978-01-01

    Boll weevils, Anthonomus grandis grandis Boheman, 9 days after egg implantation in the larval diet were transported from the Boll Weevil Research Laboratory, Mississippi State, MS, to the Comparative Animal Research Laboratory, Oak Ridge, TN, and irradiated with 6.9 krad (test 1) or 7.2 krad (test 2) of ⁶⁰Co gamma rays delivered in 25 equal doses over 100 h. In test 1, from 600 individual pairs of T (treated) males x N (normal) females, only 114 eggs hatched from a sample of 950 eggs, and 47 adults emerged from a sample of 1042 eggs. Also, from 600 pairs of T females x N males, 6 eggs hatched of a sample of 6 eggs and 12 adults emerged from a sample of 20 eggs. In test 2, from 700 individual pairs of T males x N females, 54 eggs hatched from a sample of 1510, and 10 adults emerged from a sample of 1703 eggs. Also, in T females x N males matings, 1 egg hatched of a sample of 3, and no adults emerged from a sample of 4. Transportation and handling in the 2nd test reduced adult emergence an average of 49%. Thus the 2 replicates in test 2 resulted in 3.4 × 10⁵ and 4.3 × 10⁵ irradiated weevils emerging/day for 7 days. Bacterial contamination of weevils was low

  9. Large scale sample management and data analysis via MIRACLE

    DEFF Research Database (Denmark)

    Block, Ines; List, Markus; Pedersen, Marlene Lemvig

    Reverse-phase protein arrays (RPPAs) allow sensitive quantification of relative protein abundance in thousands of samples in parallel. In the past years the technology advanced based on improved methods and protocols concerning sample preparation and printing, antibody selection, optimization of staining conditions and mode of signal analysis. However, sample management and data analysis still pose challenges because of the high number of samples, sample dilutions, customized array patterns, and the various programs necessary for array construction and data processing. We developed a comprehensive and user-friendly web application called MIRACLE (MIcroarray R-based Analysis of Complex Lysate Experiments), which bridges the gap between sample management and array analysis by conveniently keeping track of the sample information from lysate preparation, through array construction and signal...

  10. Turbulent flows at very large Reynolds numbers: new lessons learned

    International Nuclear Information System (INIS)

    Barenblatt, G I; Prostokishin, V M; Chorin, A J

    2014-01-01

    The universal (Reynolds-number-independent) von Kármán–Prandtl logarithmic law for the velocity distribution in the basic intermediate region of a turbulent shear flow is generally considered to be one of the fundamental laws of engineering science and is taught universally in fluid mechanics and hydraulics courses. We show here that this law is based on an assumption that cannot be considered to be correct and which does not correspond to experiment. Nor is Landau's derivation of this law quite correct. In this paper, an alternative scaling law explicitly incorporating the influence of the Reynolds number is discussed, as is the corresponding drag law. The study uses the concept of intermediate asymptotics and that of incomplete similarity in the similarity parameter. Yakov Borisovich Zeldovich played an outstanding role in the development of these ideas. This work is a tribute to his glowing memory. (100th anniversary of the birth of Ya. B. Zeldovich)
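
    For reference, the universal log law in question and one commonly quoted form of the Reynolds-number-dependent power law advocated by these authors are (the power-law constants should be treated as indicative):

        \frac{u}{u_*} = \frac{1}{\kappa} \ln y^{+} + B \quad (\kappa \approx 0.41,\; B \approx 5.0),
        \qquad
        \frac{u}{u_*} = \left( \frac{\ln \mathrm{Re}}{\sqrt{3}} + \frac{5}{2} \right) (y^{+})^{\,3/(2 \ln \mathrm{Re})}.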

  11. Chaotic scattering: the supersymmetry method for large number of channels

    International Nuclear Information System (INIS)

    Lehmann, N.; Saher, D.; Sokolov, V.V.; Sommers, H.J.

    1995-01-01

    We investigate a model of chaotic resonance scattering based on the random matrix approach. The hermitian part of the effective hamiltonian of resonance states is taken from the GOE whereas the amplitudes of coupling to decay channels are considered both random or fixed. A new version of the supersymmetry method is worked out to determine analytically the distribution of poles of the S-matrix in the complex energy plane as well as the mean value and two-point correlation function of its elements when the number of channels scales with the number of resonance states. Analytical formulae are compared with numerical simulations. All results obtained coincide in both models provided that the ratio m of the numbers of channels and resonances is small enough and remain qualitatively similar for larger values of m. The relation between the pole distribution and the fluctuations in scattering is discussed. It is shown in particular that the clouds of poles of the S-matrix in the complex energy plane are separated from the real axis by a finite gap Γ g which determines the correlation length in the scattering fluctuations and leads to the exponential asymptotics of the decay law of a complicated intermediate state. ((orig.))
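
    A minimal numerical sketch of the pole picture (parameters assumed; the effective-Hamiltonian form H_eff = H - (i/2) V V^T is standard in this literature): the complex eigenvalues of H_eff sit below the real axis, and for a finite ratio of channels to resonances a gap can open between the pole cloud and the axis.

        import numpy as np

        rng = np.random.default_rng(3)
        N, M, gamma = 400, 40, 0.5                   # resonances, channels, coupling
        A = rng.normal(size=(N, N))
        H = (A + A.T) / np.sqrt(2 * N)               # GOE, spectrum on roughly [-2, 2]
        V = np.sqrt(gamma / N) * rng.normal(size=(N, M))
        poles = np.linalg.eigvals(H - 0.5j * (V @ V.T))
        print("largest Im(pole):", poles.imag.max()) # strictly below zero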

  12. Gentile statistics with a large maximum occupation number

    International Nuclear Information System (INIS)

    Dai Wusheng; Xie Mi

    2004-01-01

    In Gentile statistics the maximum occupation number can take on unrestricted integers: 1 < n < N. When the fugacity z > 1 the Bose-Einstein case is not recovered from Gentile statistics as n goes to N. Attention is also concentrated on the contribution of the ground state, which was ignored in related literature. The thermodynamic behavior of a ν-dimensional Gentile ideal gas of particles with dispersion E = p^s/2m, where ν and s are arbitrary, is analyzed in detail. Moreover, we provide an alternative derivation of the partition function for Gentile statistics.

  13. Exploring Technostress: Results of a Large Sample Factor Analysis

    OpenAIRE

    Jonušauskas, Steponas; Raišienė, Agota Giedrė

    2016-01-01

    With reference to the results of a large sample factor analysis, the article aims to propose the frame examining technostress in a population. The survey and principal component analysis of the sample consisting of 1013 individuals who use ICT in their everyday work was implemented in the research. 13 factors combine 68 questions and explain 59.13 per cent of the answers dispersion. Based on the factor analysis, questionnaire was reframed and prepared to reasonably analyze the respondents’ an...

  14. The numbers game in wildlife conservation: changeability and framing of large mammal numbers in Zimbabwe

    NARCIS (Netherlands)

    Gandiwa, E.

    2013-01-01

    Wildlife conservation in terrestrial ecosystems requires an understanding of processes influencing population sizes. Top-down and bottom-up processes are important in large herbivore population dynamics, with strength of these processes varying spatially and temporally. However, up until

  15. It's a Girl! Random Numbers, Simulations, and the Law of Large Numbers

    Science.gov (United States)

    Goodwin, Chris; Ortiz, Enrique

    2015-01-01

    Modeling using mathematics and making inferences about mathematical situations are becoming more prevalent in most fields of study. Descriptive statistics cannot be used to generalize about a population or make predictions of what can occur. Instead, inference must be used. Simulation and sampling are essential in building a foundation for…

  16. A spinner magnetometer for large Apollo lunar samples

    Science.gov (United States)

    Uehara, M.; Gattacceca, J.; Quesnel, Y.; Lepaulard, C.; Lima, E. A.; Manfredi, M.; Rochette, P.

    2017-10-01

    We developed a spinner magnetometer to measure the natural remanent magnetization of large Apollo lunar rocks in the storage vault of the Lunar Sample Laboratory Facility (LSLF) of NASA. The magnetometer mainly consists of a commercially available three-axial fluxgate sensor and a hand-rotating sample table with an optical encoder recording the rotation angles. The distance between the sample and the sensor is adjustable according to the sample size and magnetization intensity. The sensor and the sample are placed in a two-layer mu-metal shield to measure the sample natural remanent magnetization. The magnetic signals are acquired together with the rotation angle to obtain stacking of the measured signals over multiple revolutions. The developed magnetometer has a sensitivity of 5 × 10⁻⁷ Am² at the standard sensor-to-sample distance of 15 cm. This sensitivity is sufficient to measure the natural remanent magnetization of almost all the lunar basalt and breccia samples with mass above 10 g in the LSLF vault.

  17. Multivariate statistics high-dimensional and large-sample approximations

    CERN Document Server

    Fujikoshi, Yasunori; Shimizu, Ryoichi

    2010-01-01

    A comprehensive examination of high-dimensional analysis of multivariate methods and their real-world applications Multivariate Statistics: High-Dimensional and Large-Sample Approximations is the first book of its kind to explore how classical multivariate methods can be revised and used in place of conventional statistical tools. Written by prominent researchers in the field, the book focuses on high-dimensional and large-scale approximations and details the many basic multivariate methods used to achieve high levels of accuracy. The authors begin with a fundamental presentation of the basic

  18. Exploring Technostress: Results of a Large Sample Factor Analysis

    Directory of Open Access Journals (Sweden)

    Steponas Jonušauskas

    2016-06-01

    With reference to the results of a large sample factor analysis, the article aims to propose a frame for examining technostress in a population. The survey and principal component analysis of a sample consisting of 1013 individuals who use ICT in their everyday work was implemented in the research. 13 factors combine 68 questions and explain 59.13 per cent of the answer dispersion. Based on the factor analysis, the questionnaire was reframed and prepared to reasonably analyze the respondents’ answers, revealing technostress causes and consequences as well as technostress prevalence in the population in a statistically validated pattern. Key elements of technostress based on the factor analysis can serve for the construction of technostress measurement scales in further research.

  1. Sampling of charged liquid radwaste stored in large tanks

    International Nuclear Information System (INIS)

    Tchemitcheff, E.; Domage, M.; Bernard-Bruls, X.

    1995-01-01

    The final safe disposal of radwaste, in France and elsewhere, entails, for liquid effluents, their conversion to a stable solid form, hence implying their conditioning. The production of conditioned waste with the requisite quality, traceability of the characteristics of the packages produced, and safe operation of the conditioning processes, implies at least the accurate knowledge of the chemical and radiochemical properties of the effluents concerned. The problem in sampling the normally charged effluents is aggravated for effluents that have been stored for several years in very large tanks, without stirring and retrieval systems. In 1992, SGN was asked by Cogema to study the retrieval and conditioning of LL/ML chemical sludge and spent ion-exchange resins produced in the operation of the UP2 400 plant at La Hague, and stored temporarily in rectangular silos and tanks. The sampling aspect was crucial for validating the inventories, identifying the problems liable to arise in the aging of the effluents, dimensioning the retrieval systems and checking the transferability and compatibility with the downstream conditioning process. Two innovative self-contained systems were developed and built for sampling operations, positioned above the tanks concerned. Both systems have been operated in active conditions and have proved totally satisfactory for taking representative samples. Today SGN can propose industrially proven overall solutions, adaptable to the various constraints of many spent fuel cycle operators

  2. CO2 isotope analyses using large air samples collected on intercontinental flights by the CARIBIC Boeing 767

    NARCIS (Netherlands)

    Assonov, S.S.; Brenninkmeijer, C.A.M.; Koeppel, C.; Röckmann, T.

    2009-01-01

    Analytical details for ¹³C and ¹⁸O isotope analyses of atmospheric CO2 in large air samples are given. The large air samples of nominally 300 L were collected during the passenger aircraft-based atmospheric chemistry research project CARIBIC and analyzed for a large number of trace gases and

  3. Support, shape and number of replicate samples for tree foliage analysis

    NARCIS (Netherlands)

    Luyssaert, Sebastiaan; Mertens, Jan; Raitio, Hannu

    Many fundamental features of a sampling program are determined by the heterogeneity of the object under study and the settings for the error (α), the power (β), the effect size (ES), the number of replicate samples, and sample support, which is a feature that is often overlooked. The number of

  4. Matrix Sampling of Items in Large-Scale Assessments

    Directory of Open Access Journals (Sweden)

    Ruth A. Childs

    2003-07-01

    Matrix sampling of items, that is, division of a set of items into different versions of a test form, is used by several large-scale testing programs. Like other test designs, matrixed designs have both advantages and disadvantages. For example, testing time per student is less than if each student received all the items, but the comparability of student scores may decrease. Also, curriculum coverage is maintained, but reporting of scores becomes more complex. In this paper, matrixed designs are compared with more traditional designs in nine categories of costs: development costs, materials costs, administration costs, educational costs, scoring costs, reliability costs, comparability costs, validity costs, and reporting costs. In choosing among test designs, a testing program should examine the costs in light of its mandate(s), the content of the tests, and the financial resources available, among other considerations.

  5. Gene coexpression measures in large heterogeneous samples using count statistics.

    Science.gov (United States)

    Wang, Y X Rachel; Waterman, Michael S; Huang, Haiyan

    2014-11-18

    With the advent of high-throughput technologies making large-scale gene expression data readily available, developing appropriate computational tools to process these data and distill insights into systems biology has been an important part of the "big data" challenge. Gene coexpression is one of the earliest techniques developed that is still widely in use for functional annotation, pathway analysis, and, most importantly, the reconstruction of gene regulatory networks, based on gene expression data. However, most coexpression measures do not specifically account for local features in expression profiles. For example, it is very likely that the patterns of gene association may change or only exist in a subset of the samples, especially when the samples are pooled from a range of experiments. We propose two new gene coexpression statistics based on counting local patterns of gene expression ranks to take into account the potentially diverse nature of gene interactions. In particular, one of our statistics is designed for time-course data with local dependence structures, such as time series coupled over a subregion of the time domain. We provide asymptotic analysis of their distributions and power, and evaluate their performance against a wide range of existing coexpression measures on simulated and real data. Our new statistics are fast to compute, robust against outliers, and show comparable and often better general performance.
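
    A minimal sketch of the flavor of such count statistics (an illustrative local rank-matching count on synthetic profiles, not the authors' exact statistics): windows in which two profiles order their values identically are counted, so shared local patterns are detected even when the overall correlation is weak.

        import numpy as np
        from scipy.stats import rankdata

        def local_rank_matches(x, y, w=4):
            # number of length-w windows where x and y rank their entries identically
            return sum(
                np.array_equal(rankdata(x[i:i + w]), rankdata(y[i:i + w]))
                for i in range(len(x) - w + 1)
            )

        rng = np.random.default_rng(5)
        t = np.linspace(0, 4 * np.pi, 60)
        x = np.sin(t) + 0.3 * rng.normal(size=t.size)   # two profiles coupled in shape
        y = np.sin(t) + 0.3 * rng.normal(size=t.size)
        z = rng.normal(size=t.size)                     # an unrelated profile
        print(local_rank_matches(x, y), local_rank_matches(x, z))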

  6. The Application Law of Large Numbers That Predicts The Amount of Actual Loss in Insurance of Life

    Science.gov (United States)

    Tinungki, Georgina Maria

    2018-03-01

    The law of large numbers is a statistical concept that uses the average number of events or risks in a sample or population to make predictions; the larger the population over which the average is calculated, the more accurate the predictions. In the field of insurance, the law of large numbers is used to predict the risk of loss or claims of participants so that the premium can be calculated appropriately. For example, if on average one of every 100 insurance participants files an accident claim, then the premiums of 100 participants should be able to provide the sum assured for at least one accident claim. The more insurance participants enter the calculation, the more precise the prediction of claims and the calculation of the premium. Life insurance, as a tool for spreading risk, can only work if a life insurance company is able to bear the same risk in large numbers; here the law of large numbers applies. It states that as the amount of exposure to losses increases, the predicted loss comes closer to the actual loss, allowing the number of losses to be predicted better.
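
    A minimal simulation of the point (illustrative numbers, not from the paper): with claim probability p = 0.01 and a fixed payout per claim, the relative gap between the predicted and the actual total loss shrinks as the portfolio grows, roughly like 1/sqrt(n p).

        import numpy as np

        rng = np.random.default_rng(4)
        p, sum_assured = 0.01, 100_000          # claim probability, payout per claim

        for n in (100, 10_000, 1_000_000):
            claims = rng.binomial(1, p, size=n).sum()
            actual = claims * sum_assured
            predicted = n * p * sum_assured     # expected loss = fair premium pool
            print(n, abs(actual - predicted) / predicted)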

  7. Sampling strategy for a large scale indoor radiation survey - a pilot project

    International Nuclear Information System (INIS)

    Strand, T.; Stranden, E.

    1986-01-01

    Optimisation of a stratified random sampling strategy for large scale indoor radiation surveys is discussed. It is based on the results from a small scale pilot project where variances in dose rates within different categories of houses were assessed. By selecting a predetermined precision level for the mean dose rate in a given region, the number of measurements needed can be optimised. The results of a pilot project in Norway are presented together with the development of the final sampling strategy for a planned large scale survey. (author)
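
    The optimisation rests on the standard sample-size formula for estimating a mean to a given precision (stated here in its simple-random-sampling form as an illustration, with σ the dose-rate standard deviation within a category of houses, d the tolerated half-width of the confidence interval and z the normal quantile for the chosen confidence level):

        n \;\ge\; \left( \frac{z_{1-\alpha/2}\, \sigma}{d} \right)^{2}.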

  8. Sampling strategies in antimicrobial resistance monitoring: evaluating how precision and sensitivity vary with the number of animals sampled per farm.

    Directory of Open Access Journals (Sweden)

    Takehisa Yamamoto

    Because antimicrobial resistance in food-producing animals is a major public health concern, many countries have implemented antimicrobial monitoring systems at a national level. When designing a sampling scheme for antimicrobial resistance monitoring, it is necessary to consider both cost effectiveness and statistical plausibility. In this study, we examined how sampling scheme precision and sensitivity can vary with the number of animals sampled from each farm, while keeping the overall sample size constant to avoid additional sampling costs. Five sampling strategies were investigated. These employed 1, 2, 3, 4 or 6 animal samples per farm, with a total of 12 animals sampled in each strategy. A total of 1,500 Escherichia coli isolates from 300 fattening pigs on 30 farms were tested for resistance against 12 antimicrobials. The performance of each sampling strategy was evaluated by bootstrap resampling from the observational data. In the bootstrapping procedure, farms, animals, and isolates were selected randomly with replacement, and a total of 10,000 replications were conducted. For each antimicrobial, we observed that the standard deviation and 2.5-97.5 percentile interval of resistance prevalence were smallest in the sampling strategy that employed 1 animal per farm. The proportion of bootstrap samples that included at least 1 isolate with resistance was also evaluated as an indicator of the sensitivity of the sampling strategy to previously unidentified antimicrobial resistance. The proportion was greatest with 1 sample per farm and decreased with larger samples per farm. We concluded that when the total number of samples is pre-specified, the most precise and sensitive sampling strategy involves collecting 1 sample per farm.
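
    A minimal sketch of the resampling scheme described (toy data; the farm-level clustering and herd sizes are assumed): with the total of 12 isolates held fixed, spreading them over more farms, i.e. 1 animal per farm, gives the smallest bootstrap standard deviation of the prevalence estimate.

        import numpy as np

        rng = np.random.default_rng(6)
        n_farms, n_animals, n_isolates = 30, 10, 5
        farm_p = rng.beta(0.5, 5, size=n_farms)          # farm-level resistance risk
        data = rng.binomial(1, farm_p[:, None, None],
                            size=(n_farms, n_animals, n_isolates))

        def one_replicate(per_farm, total=12):
            farms = rng.integers(0, n_farms, size=total // per_farm)
            vals = []
            for f in farms:                              # resample animals and isolates
                a = rng.integers(0, n_animals, size=per_farm)
                i = rng.integers(0, n_isolates, size=per_farm)
                vals.extend(data[f, a, i])
            return np.mean(vals)

        for per_farm in (1, 2, 6):
            reps = [one_replicate(per_farm) for _ in range(2000)]
            print(per_farm, round(float(np.std(reps)), 4))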

  9. Large sample hydrology in NZ: Spatial organisation in process diagnostics

    Science.gov (United States)

    McMillan, H. K.; Woods, R. A.; Clark, M. P.

    2013-12-01

    A key question in hydrology is how to predict the dominant runoff generation processes in any given catchment. This knowledge is vital for a range of applications in forecasting hydrological response and related processes such as nutrient and sediment transport. A step towards this goal is to map dominant processes in locations where data is available. In this presentation, we use data from 900 flow gauging stations and 680 rain gauges in New Zealand, to assess hydrological processes. These catchments range in character from rolling pasture, to alluvial plains, to temperate rainforest, to volcanic areas. By taking advantage of so many flow regimes, we harness the benefits of large-sample and comparative hydrology to study patterns and spatial organisation in runoff processes, and their relationship to physical catchment characteristics. The approach we use to assess hydrological processes is based on the concept of diagnostic signatures. Diagnostic signatures in hydrology are targeted analyses of measured data which allow us to investigate specific aspects of catchment response. We apply signatures which target the water balance, the flood response and the recession behaviour. We explore the organisation, similarity and diversity in hydrological processes across the New Zealand landscape, and how these patterns change with scale. We discuss our findings in the context of the strong hydro-climatic gradients in New Zealand, and consider the implications for hydrological model building on a national scale.

  10. Cosmological implications of a large complete quasar sample.

    Science.gov (United States)

    Segal, I E; Nicoll, J F

    1998-04-28

    Objective and reproducible determinations of the probabilistic significance levels of the deviations between theoretical cosmological prediction and direct model-independent observation are made for the Large Bright Quasar Sample [Foltz, C., Chaffee, F. H., Hewett, P. C., MacAlpine, G. M., Turnshek, D. A., et al. (1987) Astron. J. 94, 1423-1460]. The Expanding Universe model as represented by the Friedman-Lemaitre cosmology with parameters q₀ = 0, Λ = 0, denoted as C1, and chronometric cosmology (no relevant adjustable parameters), denoted as C2, are the cosmologies considered. The mean and the dispersion of the apparent magnitudes and the slope of the apparent magnitude-redshift relation are the directly observed statistics predicted. The C1 predictions of these cosmology-independent quantities are deviant by as much as 11σ from direct observation; none of the C2 predictions deviate by >2σ. The C1 deviations may be reconciled with theory by the hypothesis of quasar "evolution," which, however, appears incapable of being substantiated through direct observation. The excellent quantitative agreement of the C1 deviations with those predicted by C2 without adjustable parameters for the results of analysis predicated on C1 indicates that the evolution hypothesis may well be a theoretical artifact.

  11. Strong Law of Large Numbers for Hidden Markov Chains Indexed by an Infinite Tree with Uniformly Bounded Degrees

    Directory of Open Access Journals (Sweden)

    Huilin Huang

    2014-01-01

    We study strong limit theorems for hidden Markov chain fields indexed by an infinite tree with uniformly bounded degrees. We mainly establish the strong law of large numbers for hidden Markov chain fields indexed by an infinite tree with uniformly bounded degrees and give the strong limit law of the conditional sample entropy rate.

  12. A review of methods for sampling large airborne particles and associated radioactivity

    International Nuclear Information System (INIS)

    Garland, J.A.; Nicholson, K.W.

    1990-01-01

    Radioactive particles, tens of μm or more in diameter, are unlikely to be emitted directly from nuclear facilities with exhaust gas cleansing systems, but may arise in the case of an accident or where resuspension from contaminated surfaces is significant. Such particles may dominate deposition and, according to some workers, may contribute to inhalation doses. Quantitative sampling of large airborne particles is difficult because of their inertia and large sedimentation velocities. The literature describes conditions for unbiased sampling and the magnitude of sampling errors for idealised sampling inlets in steady winds. However, few air samplers for outdoor use have been assessed for adequacy of sampling. Many size selective sampling methods are found in the literature but few are suitable at the low concentrations that are often encountered in the environment. A number of approaches for unbiased sampling of large particles have been found in the literature. Some are identified as meriting further study, for application in the measurement of airborne radioactivity. (author)

  13. Characterization of General TCP Traffic under a Large Number of Flows Regime

    National Research Council Canada - National Science Library

    Tinnakornsrisuphap, Peerapol; La, Richard J; Makowski, Armand M

    2002-01-01

    .... Accurate traffic modeling of a large number of short-lived TCP flows is extremely difficult due to the interaction between session, transport, and network layers, and the explosion of the size...

  14. A fast learning method for large scale and multi-class samples of SVM

    Science.gov (United States)

    Fan, Yu; Guo, Huiming

    2017-06-01

    A fast learning method for multi-class SVM (Support Vector Machine) classification based on a binary tree is presented to address the low learning efficiency of SVMs when processing large-scale multi-class samples. This paper adopts a bottom-up method to set up the binary tree hierarchy; according to the achieved hierarchy, a sub-classifier learns from the corresponding samples of each node. During learning, several class clusters are generated after the first clustering of the training samples. First, central points are extracted from those class clusters which contain only one type of samples. For those which contain two types of samples, cluster numbers for their positive and negative samples are set according to their degree of mixture, and a secondary clustering is undertaken, after which central points are extracted from the resulting sub-class clusters. Sub-classifiers are obtained by learning from the reduced samples formed by the integration of the extracted central points. Simulation experiments show that this fast learning method, based on multi-level clustering, can guarantee higher classification accuracy, greatly reduce the number of samples, and effectively improve learning efficiency.
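
    A minimal sketch of the sample-reduction step alone, using scikit-learn on synthetic two-class data (cluster counts are assumed, and the full method additionally builds the bottom-up binary tree of sub-classifiers):

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.datasets import make_classification
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=20_000, n_features=10, random_state=0)

        centers, labels = [], []
        for cls in np.unique(y):
            km = KMeans(n_clusters=50, n_init=10, random_state=0).fit(X[y == cls])
            centers.append(km.cluster_centers_)          # class replaced by its centers
            labels.append(np.full(50, cls))

        clf = SVC(kernel="rbf").fit(np.vstack(centers), np.concatenate(labels))
        print("accuracy on the full set:", clf.score(X, y))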

  15. Sample preparation for large-scale bioanalytical studies based on liquid chromatographic techniques.

    Science.gov (United States)

    Medvedovici, Andrei; Bacalum, Elena; David, Victor

    2018-01-01

    The quality of the analytical data obtained in large-scale and long-term bioanalytical studies based on liquid chromatography depends on a number of experimental factors, including the choice of sample preparation method. This review discusses this tedious part of bioanalytical studies, applied to large-scale samples and using liquid chromatography coupled with different detector types as the core analytical technique. The main sample preparation methods included in this paper are protein precipitation, liquid-liquid extraction, solid-phase extraction, derivatization and their versions. They are discussed in terms of analytical performance, fields of application, advantages and disadvantages. The cited literature covers mainly the analytical achievements during the last decade, although several previous papers became more valuable in time and they are included in this review. Copyright © 2017 John Wiley & Sons, Ltd.

  16. Calculating Confidence, Uncertainty, and Numbers of Samples When Using Statistical Sampling Approaches to Characterize and Clear Contaminated Areas

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.; Amidan, Brett G.

    2013-04-27

    This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating the
    • number of samples required to achieve a specified confidence in characterization and clearance decisions
    • confidence in making characterization and clearance decisions for a specified number of samples
    for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, which is commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are 1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and 2) combined judgment and random (CJR) sampling during the clearance phase. Typically if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1. qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account
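
    For the simplest clearance case, the confidence C that a decision area is clean when all n statistical samples are negative can be related to the assumed fraction p of contaminated locations and the false negative rate; a simplified version of this kind of formula (not the report's exact expressions) is

        C = 1 - \bigl( 1 - p\,(1 - \mathrm{FNR}) \bigr)^{n}
        \quad \Longrightarrow \quad
        n = \frac{\ln(1 - C)}{\ln\bigl( 1 - p\,(1 - \mathrm{FNR}) \bigr)}.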

  17. Crowdsourcing for large-scale mosquito (Diptera: Culicidae) sampling

    Science.gov (United States)

    Sampling a cosmopolitan mosquito (Diptera: Culicidae) species throughout its range is logistically challenging and extremely resource intensive. Mosquito control programmes and regional networks operate at the local level and often conduct sampling activities across much of North America. A method f...

  18. A large-scale cryoelectronic system for biological sample banking

    Science.gov (United States)

    Shirley, Stephen G.; Durst, Christopher H. P.; Fuchs, Christian C.; Zimmermann, Heiko; Ihmig, Frank R.

    2009-11-01

    We describe a polymorphic electronic infrastructure for managing biological samples stored over liquid nitrogen. As part of this system we have developed new cryocontainers and carrier plates attached to Flash memory chips to have a redundant and portable set of data at each sample. Our experimental investigations show that basic Flash operation and endurance is adequate for the application down to liquid nitrogen temperatures. This identification technology can provide the best sample identification, documentation and tracking that brings added value to each sample. The first application of the system is in a worldwide collaborative research towards the production of an AIDS vaccine. The functionality and versatility of the system can lead to an essential optimization of sample and data exchange for global clinical studies.

  19. Similarities between 2D and 3D convection for large Prandtl number

    Indian Academy of Sciences (India)

    2016-06-18

    (RBC), we perform a comparative study of the spectra and fluxes of energy and entropy, and the scaling of large-scale quantities for large and infinite Prandtl numbers in two (2D) and three (3D) dimensions. We observe close ...

  20. Very Large Data Volumes Analysis of Collaborative Systems with Finite Number of States

    Science.gov (United States)

    Ivan, Ion; Ciurea, Cristian; Pavel, Sorin

    2010-01-01

    The collaborative system with finite number of states is defined. A very large database is structured. Operations on large databases are identified. Repetitive procedures for collaborative systems operations are derived. The efficiency of such procedures is analyzed. (Contains 6 tables, 5 footnotes and 3 figures.)

  1. Evidence for Knowledge of the Syntax of Large Numbers in Preschoolers

    Science.gov (United States)

    Barrouillet, Pierre; Thevenot, Catherine; Fayol, Michel

    2010-01-01

    The aim of this study was to provide evidence for knowledge of the syntax governing the verbal form of large numbers in preschoolers long before they are able to count up to these numbers. We reasoned that if such knowledge exists, it should facilitate the maintenance in short-term memory of lists of lexical primitives that constitute a number…

  2. Generating Random Samples of a Given Size Using Social Security Numbers.

    Science.gov (United States)

    Erickson, Richard C.; Brauchle, Paul E.

    1984-01-01

    The purposes of this article are (1) to present a method by which social security numbers may be used to draw cluster samples of a predetermined size and (2) to describe procedures used to validate this method of drawing random samples. (JOW)

  3. Associations between sociodemographic, sampling and health factors and various salivary cortisol indicators in a large sample without psychopathology

    NARCIS (Netherlands)

    Vreeburg, Sophie A.; Kruijtzer, Boudewijn P.; van Pelt, Johannes; van Dyck, Richard; DeRijk, Roel H.; Hoogendijk, Witte J. G.; Smit, Johannes H.; Zitman, Frans G.; Penninx, Brenda

    Background: Cortisol levels are increasingly often assessed in large-scale psychosomatic research. Although determinants of different salivary cortisol indicators have been described, they have not yet been systematically studied within the same study with a large sample size. Sociodemographic,

  4. Heritability of psoriasis in a large twin sample

    DEFF Research Database (Denmark)

    Lønnberg, Ann Sophie; Skov, Liselotte; Skytthe, A

    2013-01-01

    AIM: To study the concordance of psoriasis in a population-based twin sample. METHODS: Data on psoriasis in 10,725 twin pairs, 20-71 years of age, from the Danish Twin Registry was collected via a questionnaire survey. The concordance and heritability of psoriasis were estimated. RESULTS: In total...

  5. Numerical and analytical approaches to an advection-diffusion problem at small Reynolds number and large Péclet number

    Science.gov (United States)

    Fuller, Nathaniel J.; Licata, Nicholas A.

    2018-05-01

    Obtaining a detailed understanding of the physical interactions between a cell and its environment often requires information about the flow of fluid surrounding the cell. Cells must be able to effectively absorb and discard material in order to survive. Strategies for nutrient acquisition and toxin disposal, which have been evolutionarily selected for their efficacy, should reflect knowledge of the physics underlying this mass transport problem. Motivated by these considerations, in this paper we discuss the results from an undergraduate research project on the advection-diffusion equation at small Reynolds number and large Péclet number. In particular, we consider the problem of mass transport for a Stokesian spherical swimmer. We approach the problem numerically and analytically through a rescaling of the concentration boundary layer. A biophysically motivated first-passage problem for the absorption of material by the swimming cell demonstrates quantitative agreement between the numerical and analytical approaches. We conclude by discussing the connections between our results and the design of smart toxin disposal systems.
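    In standard notation (ours, not necessarily the authors'), the problem and the boundary-layer rescaling the abstract refers to take the schematic form:

      \[
        \mathrm{Pe}\,(\mathbf{u}\cdot\nabla)c \;=\; \nabla^{2}c,
        \qquad \mathrm{Pe}=\frac{Ua}{D},
      \]
      % For Pe >> 1 the concentration varies only in a thin layer near the
      % swimmer; rescaling the radial coordinate as r = a(1 + Pe^{-q} rho)
      % balances advection against diffusion at leading order, with the
      % exponent q depending on the surface velocity profile (e.g. q = 1/3
      % for a rigid no-slip sphere).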

  6. Random sampling of elementary flux modes in large-scale metabolic networks.

    Science.gov (United States)

    Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel

    2012-09-15

    The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well distributed sample, that is representative of the complete set of EMs, should be suitable to most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. dmachado@deb.uminho.pt Supplementary data are available at Bioinformatics online.
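    The filtering step described above, in which every new candidate mode has the same probability of being kept, might look like the following sketch (the names and the cap parameter are ours; the published algorithm embeds this step inside the canonical-basis iteration):

      import random

      def filter_candidates(candidates, max_keep, rng=random):
          # Unbiased subsampling: each candidate mode is kept with the same
          # probability, capping the growth of the working set at each
          # iteration without biasing the final sample of elementary modes.
          if len(candidates) <= max_keep:
              return list(candidates)
          return rng.sample(candidates, max_keep)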

  7. Effects of the number of people on efficient capture and sample collection: A lion case study

    Directory of Open Access Journals (Sweden)

    Sam M. Ferreira

    2013-05-01

    Certain carnivore research projects and approaches depend on successful capture of individuals of interest. The number of people present at a capture site may determine success of a capture. In this study 36 lion capture cases in the Kruger National Park were used to evaluate whether the number of people present at a capture site influenced lion response rates and whether the number of people at a sampling site influenced the time it took to process the collected samples. The analyses suggest that when nine or fewer people were present, lions appeared faster at a call-up locality compared with when there were more than nine people. The number of people, however, did not influence the time it took to process the lions. It is proposed that efficient lion capturing should spatially separate capture and processing sites and minimise the number of people at a capture site.

  8. Effects of the number of people on efficient capture and sample collection: a lion case study.

    Science.gov (United States)

    Ferreira, Sam M; Maruping, Nkabeng T; Schoultz, Darius; Smit, Travis R

    2013-05-24

    Certain carnivore research projects and approaches depend on successful capture of individuals of interest. The number of people present at a capture site may determine success of a capture. In this study 36 lion capture cases in the Kruger National Park were used to evaluate whether the number of people present at a capture site influenced lion response rates and whether the number of people at a sampling site influenced the time it took to process the collected samples. The analyses suggest that when nine or fewer people were present, lions appeared faster at a call-up locality compared with when there were more than nine people. The number of people, however, did not influence the time it took to process the lions. It is proposed that efficient lion capturing should spatially separate capture and processing sites and minimise the number of people at a capture site.

  9. Factors associated with number of duodenal samples obtained in suspected celiac disease.

    Science.gov (United States)

    Shamban, Leonid; Sorser, Serge; Naydin, Stan; Lebwohl, Benjamin; Shukr, Mousa; Wiemann, Charlotte; Yevsyukov, Daniel; Piper, Michael H; Warren, Bradley; Green, Peter H R

    2017-12-01

    Many people with celiac disease are undiagnosed and there is evidence that insufficient duodenal samples may contribute to underdiagnosis. The aims of this study were to investigate whether more samples lead to a greater likelihood of a diagnosis of celiac disease and to elucidate factors that influence the number of samples collected. We identified patients from two community hospitals who were undergoing duodenal biopsy for indications (as identified by International Classification of Diseases code) compatible with possible celiac disease. Three cohorts were evaluated: no celiac disease (NCD, normal villi), celiac disease (villous atrophy, Marsh score 3), and possible celiac disease (PCD, Marsh score below 3). Patients with celiac disease had a median of 4 specimens collected. The percentage of patients diagnosed with celiac disease with one sample was 0.3 % compared with 12.8 % of those with six samples (P = 0.001). Patient factors that positively correlated with the number of samples collected were endoscopic features, demographic details, and indication (P = 0.001). Endoscopist factors that positively correlated with the number of samples collected were absence of a trainee, pediatric gastroenterologist, and outpatient setting. The likelihood of diagnosing celiac disease significantly increased with six samples. Multiple factors influenced whether adequate biopsies were taken. Adherence to guidelines may increase the diagnosis rate of celiac disease.

  10. Support, shape and number of replicate samples for tree foliage analysis.

    Science.gov (United States)

    Luyssaert, Sebastiaan; Mertens, Jan; Raitio, Hannu

    2003-06-01

    Many fundamental features of a sampling program are determined by the heterogeneity of the object under study and the settings for the error (alpha), the power (beta), the effect size (ES), the number of replicate samples, and sample support, which is a feature that is often overlooked. The number of replicates, alpha, beta, ES, and sample support are interconnected. The effect of the sample support and its shape on the required number of replicate samples was investigated by means of a resampling method. The method was applied to a simulated distribution of Cd in the crown of a Salix fragilis L. tree. Increasing the dimensions of the sample support results in a decrease in the variance of the element concentration under study. Analysis of the variance is often the foundation of statistical tests, therefore, valid statistical testing requires the use of a fixed sample support during the experiment. This requirement might be difficult to meet in time-series analyses and long-term monitoring programs. Sample supports have their largest dimension in the direction with the largest heterogeneity, i.e. the direction representing the crown height, and this will give more accurate results than supports with other shapes. Taking the relationships between the sample support and the variance of the element concentrations in tree crowns into account provides guidelines for sampling efficiency in terms of precision and costs. In terms of time, the optimal support to test whether the average Cd concentration of the crown exceeds a threshold value is 0.405 m3 (alpha = 0.05, beta = 0.20, ES = 1.0 mg kg(-1) dry mass). The average weight of this support is 23 g dry mass, and 11 replicate samples need to be taken. It should be noted that in this case the optimal support applies to Cd under conditions similar to those of the simulation, but not necessarily all the examinations for this tree species, element, and hypothesis test.
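    The alpha/beta/ES trade-off reported above follows the usual power calculation for testing a mean against a threshold; as a sketch (normal approximation; sigma here is an assumed standard deviation for the chosen sample support, not a value from the paper):

      import math
      from statistics import NormalDist

      def replicates_needed(sigma, effect_size, alpha=0.05, beta=0.20):
          # One-sided test of a mean against a threshold: number of
          # replicates needed to detect effect_size with power 1 - beta.
          z_a = NormalDist().inv_cdf(1.0 - alpha)
          z_b = NormalDist().inv_cdf(1.0 - beta)
          return math.ceil(((z_a + z_b) * sigma / effect_size) ** 2)

      # Larger sample supports reduce sigma and hence the replicate count.
      print(replicates_needed(sigma=1.3, effect_size=1.0))  # 11 replicates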

  11. [Effects of sampling plot number on tree species distribution prediction under climate change].

    Science.gov (United States)

    Liang, Yu; He, Hong-Shi; Wu, Zhi-Wei; Li, Xiao-Na; Luo, Xu

    2013-05-01

    Based on the neutral landscapes under different degrees of landscape fragmentation, this paper studied the effects of sampling plot number on the prediction of tree species distribution at landscape scale under climate change. The tree species distribution was predicted by the coupled modeling approach which linked an ecosystem process model with a forest landscape model, and three contingent scenarios and one reference scenario of sampling plot numbers were assumed. The differences between the three scenarios and the reference scenario under different degrees of landscape fragmentation were tested. The results indicated that the effects of sampling plot number on the prediction of tree species distribution depended on the tree species life history attributes. For the generalist species, the prediction of their distribution at landscape scale needed more plots. Except for the extreme specialist, landscape fragmentation degree also affected the effects of sampling plot number on the prediction. With the increase of simulation period, the effects of sampling plot number on the prediction of tree species distribution at landscape scale could be changed. For generalist species, more plots are needed for the long-term simulation.

  12. Genetic Influences on Pulmonary Function: A Large Sample Twin Study

    DEFF Research Database (Denmark)

    Ingebrigtsen, Truls S; Thomsen, Simon F; van der Sluis, Sophie

    2011-01-01

    Heritability of forced expiratory volume in one second (FEV(1)), forced vital capacity (FVC), and peak expiratory flow (PEF) has not been previously addressed in large twin studies. We evaluated the genetic contribution to individual differences observed in FEV(1), FVC, and PEF using data from the largest population-based twin study on spirometry. Specially trained lay interviewers with previous experience in spirometric measurements tested 4,314 Danish twins (individuals), 46-68 years of age, in their homes using a hand-held spirometer, and their flow-volume curves were evaluated. Modern variance...

  13. Scanning tunneling spectroscopy under large current flow through the sample.

    Science.gov (United States)

    Maldonado, A; Guillamón, I; Suderow, H; Vieira, S

    2011-07-01

    We describe a method to make scanning tunneling microscopy/spectroscopy imaging at very low temperatures while driving a constant electric current up to some tens of mA through the sample. It gives a new local probe, which we term current driven scanning tunneling microscopy/spectroscopy. We show spectroscopic and topographic measurements under the application of a current in superconducting Al and NbSe(2) at 100 mK. Perspective of applications of this local imaging method includes local vortex motion experiments, and Doppler shift local density of states studies.

  14. Secret Sharing Schemes with a large number of players from Toric Varieties

    DEFF Research Database (Denmark)

    Hansen, Johan P.

    A general theory for constructing linear secret sharing schemes over a finite field $\Fq$ from toric varieties is introduced. The number of players can be as large as $(q-1)^r-1$ for $r\geq 1$. We present general methods for obtaining the reconstruction and privacy thresholds as well as conditions for multiplication on the associated secret sharing schemes. In particular we apply the method on certain toric surfaces. The main results are ideal linear secret sharing schemes where the number of players can be as large as $(q-1)^2-1$. We determine bounds for the reconstruction and privacy thresholds...

  15. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number.

    Science.gov (United States)

    Klewicki, J C; Chini, G P; Gibson, J F

    2017-03-13

    Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier-Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  16. A Simple Sampling Method for Estimating the Accuracy of Large Scale Record Linkage Projects.

    Science.gov (United States)

    Boyd, James H; Guiver, Tenniel; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Anderson, Phil; Dickinson, Teresa

    2016-05-17

    Record linkage techniques allow different data collections to be brought together to provide a wider picture of the health status of individuals. Ensuring high linkage quality is important to guarantee the quality and integrity of research. Current methods for measuring linkage quality typically focus on precision (the proportion of accepted links that are correct matches), given the difficulty of measuring the proportion of false negatives. The aim of this work is to introduce and evaluate a sampling based method to estimate both precision and recall following record linkage. In the sampling based method, record-pairs from each threshold (including those below the identified cut-off for acceptance) are sampled and clerically reviewed. These results are then applied to the entire set of record-pairs, providing estimates of false positives and false negatives. This method was evaluated on a synthetically generated dataset, where the true match status (which records belonged to the same person) was known. The sampled estimates of linkage quality were relatively close to actual linkage quality metrics calculated for the whole synthetic dataset. The precision and recall measures for seven reviewers were very consistent, with little variation in the clerical assessment results (overall agreement using the Fleiss Kappa statistic was 0.601). This method presents as a possible means of accurately estimating matching quality and refining linkages in population level linkage studies. The sampling approach is especially important for large project linkages where the number of record pairs produced may be very large, often running into millions.
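    A minimal sketch of the stratified estimate described above (our data layout; the study's clerical-review workflow is more involved): record-pairs are binned by linkage score, a sample from each bin is reviewed, and the review rates are scaled back up to estimate true and false positives and negatives.

      def estimate_precision_recall(strata):
          # Each stratum: total pairs in a score band, whether the band is
          # above the acceptance cut-off, and clerical review results.
          tp = fp = fn = 0.0
          for s in strata:
              match_rate = s["n_true"] / s["n_sampled"]
              matches = match_rate * s["n_pairs"]
              if s["accepted"]:
                  tp += matches
                  fp += s["n_pairs"] - matches
              else:
                  fn += matches          # true matches below the cut-off
          return tp / (tp + fp), tp / (tp + fn)

      strata = [
          {"n_pairs": 9000, "accepted": True,  "n_sampled": 100, "n_true": 98},
          {"n_pairs": 2000, "accepted": False, "n_sampled": 100, "n_true": 15},
      ]
      print(estimate_precision_recall(strata))  # (precision, recall)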

  17. Influence of sampling interval and number of projections on the quality of SR-XFMT reconstruction

    International Nuclear Information System (INIS)

    Deng Biao; Yu Xiaohan; Xu Hongjie

    2007-01-01

    Synchrotron Radiation based X-ray Fluorescent Microtomography (SR-XFMT) is a nondestructive technique for detecting elemental composition and distribution inside a specimen with high spatial resolution and sensitivity. In this paper, computer simulation of SR-XFMT experiment is performed. The influence of the sampling interval and the number of projections on the quality of SR-XFMT image reconstruction is analyzed. It is found that the sampling interval has greater effect on the quality of reconstruction than the number of projections. (authors)

  18. Obstructions to the realization of distance graphs with large chromatic numbers on spheres of small radii

    Energy Technology Data Exchange (ETDEWEB)

    Kupavskii, A B; Raigorodskii, A M [M. V. Lomonosov Moscow State University, Faculty of Mechanics and Mathematics, Moscow (Russian Federation)

    2013-10-31

    We investigate in detail some properties of distance graphs constructed on the integer lattice. Such graphs find wide applications in problems of combinatorial geometry, in particular, such graphs were employed to answer Borsuk's question in the negative and to obtain exponential estimates for the chromatic number of the space. This work is devoted to the study of the number of cliques and the chromatic number of such graphs under certain conditions. Constructions of sequences of distance graphs are given, in which the graphs have unit length edges and contain a large number of triangles that lie on a sphere of radius 1/√3 (which is the minimum possible). At the same time, the chromatic numbers of the graphs depend exponentially on their dimension. The results of this work strengthen and generalize some of the results obtained in a series of papers devoted to related issues. Bibliography: 29 titles.

  19. Scalability on LHS (Latin Hypercube Sampling) samples for use in uncertainty analysis of large numerical models

    International Nuclear Information System (INIS)

    Baron, Jorge H.; Nunez Mac Leod, J.E.

    2000-01-01

    The present paper deals with the utilization of advanced sampling statistical methods to perform uncertainty and sensitivity analysis on numerical models. Such models may represent physical phenomena, logical structures (such as boolean expressions) or other systems, and various of their intrinsic parameters and/or input variables are usually treated as random variables simultaneously. In the present paper a simple method to scale-up Latin Hypercube Sampling (LHS) samples is presented, starting with a small sample and duplicating its size at each step, making it possible to re-use the numerical model results already obtained with the smaller sample. The method does not distort the statistical properties of the random variables and does not add any bias to the samples. The result is that a significant reduction in numerical model running time can be achieved (by re-using the previously run samples), while keeping all the advantages of LHS, until an acceptable representation level is achieved in the output variables. (author)
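    One way to implement the doubling step (a sketch under our own assumptions; the paper's exact scheme may differ) is to split each 1/n stratum into two halves, place a new point in the half the old point does not occupy, and shuffle the new coordinates across points so the new runs are not correlated with the old ones:

      import numpy as np

      def double_lhs(points, rng=None):
          # points: (n, d) LHS sample in [0, 1)^d. Returns a (2n, d) LHS
          # that reuses the original n points (and hence their model runs).
          rng = rng or np.random.default_rng()
          n, d = points.shape
          new = np.empty_like(points)
          half = 1.0 / (2 * n)
          for j in range(d):
              col = np.empty(n)
              for i, x in enumerate(np.sort(points[:, j])):
                  lo = i / n            # i-th old point sits in [lo, lo + 2*half)
                  if x < lo + half:     # old point in the lower half-stratum
                      col[i] = lo + half + rng.random() * half
                  else:                 # old point in the upper half-stratum
                      col[i] = lo + rng.random() * half
              rng.shuffle(col)          # decorrelate new points across dimensions
              new[:, j] = col
          return np.vstack([points, new])

    The combined 2n points again form a Latin hypercube, so the first n model evaluations are reused unchanged.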

  20. ON AN EXPONENTIAL INEQUALITY AND A STRONG LAW OF LARGE NUMBERS FOR MONOTONE MEASURES

    Czech Academy of Sciences Publication Activity Database

    Agahi, H.; Mesiar, Radko

    2014-01-01

    Roč. 50, č. 5 (2014), s. 804-813 ISSN 0023-5954 Institutional support: RVO:67985556 Keywords : Choquet expectation * a strong law of large numbers * exponential inequality * monotone probability Subject RIV: BA - General Mathematics Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2014/E/mesiar-0438052.pdf

  1. Strong Laws of Large Numbers for Arrays of Rowwise NA and LNQD Random Variables

    Directory of Open Access Journals (Sweden)

    Jiangfeng Wang

    2011-01-01

    Some strong laws of large numbers and strong convergence properties for arrays of rowwise negatively associated and linearly negative quadrant dependent random variables are obtained. The results obtained not only generalize the result of Hu and Taylor to negatively associated and linearly negative quadrant dependent random variables, but also improve it.

  2. The lore of large numbers: some historical background to the anthropic principle

    International Nuclear Information System (INIS)

    Barrow, J.D.

    1981-01-01

    A description is given of how the study of numerological coincidences in physics and cosmology led first to the Large Numbers Hypothesis of Dirac and then to the suggestion of the Anthropic Principle in a variety of forms. The early history of 'coincidences' is discussed together with the work of Weyl, Eddington and Dirac. (author)

  3. The three-large-primes variant of the number field sieve

    NARCIS (Netherlands)

    S.H. Cavallar

    2002-01-01

    The Number Field Sieve (NFS) is the asymptotically fastest known factoring algorithm for large integers. This method was proposed by John Pollard in 1988. Since then several variants have been implemented with the objective of improving the siever, which is the most time consuming part of

  4. SECRET SHARING SCHEMES WITH STRONG MULTIPLICATION AND A LARGE NUMBER OF PLAYERS FROM TORIC VARIETIES

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    2017-01-01

    This article considers Massey's construction for building linear secret sharing schemes from toric varieties over a finite field $\Fq$ with $q$ elements. The number of players can be as large as $(q-1)^r-1$ for $r\geq 1$. The schemes have strong multiplication; such schemes can be utilized in ...

  5. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number

    Science.gov (United States)

    Klewicki, J. C.; Chini, G. P.; Gibson, J. F.

    2017-01-01

    Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier–Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167585

  6. Optimal number of coarse-grained sites in different components of large biomolecular complexes.

    Science.gov (United States)

    Sinitskiy, Anton V; Saunders, Marissa G; Voth, Gregory A

    2012-07-26

    The computational study of large biomolecular complexes (molecular machines, cytoskeletal filaments, etc.) is a formidable challenge facing computational biophysics and biology. To achieve biologically relevant length and time scales, coarse-grained (CG) models of such complexes usually must be built and employed. One of the important early stages in this approach is to determine an optimal number of CG sites in different constituents of a complex. This work presents a systematic approach to this problem. First, a universal scaling law is derived and numerically corroborated for the intensity of the intrasite (intradomain) thermal fluctuations as a function of the number of CG sites. Second, this result is used for derivation of the criterion for the optimal number of CG sites in different parts of a large multibiomolecule complex. In the zeroth-order approximation, this approach validates the empirical rule of taking one CG site per fixed number of atoms or residues in each biomolecule, previously widely used for smaller systems (e.g., individual biomolecules). The first-order corrections to this rule are derived and numerically checked by the case studies of the Escherichia coli ribosome and Arp2/3 actin filament junction. In different ribosomal proteins, the optimal number of amino acids per CG site is shown to differ by a factor of 3.5, and an even wider spread may exist in other large biomolecular complexes. Therefore, the method proposed in this paper is valuable for the optimal construction of CG models of such complexes.

  7. Calculation of large Reynolds number two-dimensional flow using discrete vortices with random walk

    International Nuclear Information System (INIS)

    Milinazzo, F.; Saffman, P.G.

    1977-01-01

    The numerical calculation of two-dimensional rotational flow at large Reynolds number is considered. The method of replacing a continuous distribution of vorticity by a finite number, N, of discrete vortices is examined, where the vortices move under their mutually induced velocities plus a random component to simulate effects of viscosity. The accuracy of the method is studied by comparison with the exact solution for the decay of a circular vortex. It is found, and analytical arguments are produced in support, that the quantitative error is significant unless N is large compared with a characteristic Reynolds number. The mutually induced velocities are calculated by both direct summation and by the "cloud in cell" technique. The latter method is found to produce comparable error and to be much faster
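    As a sketch of the scheme described (our notation; desingularization and the cloud-in-cell variant are omitted), each step advects the vortices with the directly summed induced velocity and then adds a Gaussian random walk whose per-coordinate variance 2*nu*dt models viscous diffusion:

      import numpy as np

      def step_vortices(z, gamma, nu, dt, rng=None):
          # z: complex vortex positions; gamma: circulations.
          rng = rng or np.random.default_rng()
          dz = z[:, None] - z[None, :]
          np.fill_diagonal(dz, 1.0)            # placeholder, masked below
          # 2D Biot-Savart by direct summation:
          # u - i v at z_j is sum_k gamma_k / (2*pi*i*(z_j - z_k))
          terms = gamma[None, :] / (2j * np.pi * dz)
          np.fill_diagonal(terms, 0.0)         # no self-induction
          vel = terms.sum(axis=1).conjugate()  # u + i v at each vortex
          walk = rng.normal(0.0, np.sqrt(2.0 * nu * dt), (z.size, 2))
          return z + vel * dt + walk[:, 0] + 1j * walk[:, 1]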

  8. Operability test report for rotary mode core sampling system number 3

    International Nuclear Information System (INIS)

    Corbett, J.E.

    1996-01-01

    This report documents the successful completion of operability testing for the Rotary Mode Core Sampling (RMCS) system #3. The report includes the test procedure (WHC-SD-WM-OTP-174), exception resolutions, data sheets, and a test report summary

  9. A model for estimating the minimum number of offspring to sample in studies of reproductive success.

    Science.gov (United States)

    Anderson, Joseph H; Ward, Eric J; Carlson, Stephanie M

    2011-01-01

    Molecular parentage permits studies of selection and evolution in fecund species with cryptic mating systems, such as fish, amphibians, and insects. However, there exists no method for estimating the number of offspring that must be assigned parentage to achieve robust estimates of reproductive success when only a fraction of offspring can be sampled. We constructed a 2-stage model that first estimated the mean (μ) and variance (v) in reproductive success from published studies on salmonid fishes and then sampled offspring from reproductive success distributions simulated from the μ and v estimates. Results provided strong support for modeling salmonid reproductive success via the negative binomial distribution and suggested that few offspring samples are needed to reject the null hypothesis of uniform offspring production. However, the sampled reproductive success distributions deviated significantly (χ² goodness-of-fit test p value < 0.05) from the true reproductive success distribution at rates often >0.05 and as high as 0.24, even when hundreds of offspring were assigned parentage. In general, reproductive success patterns were less accurate when offspring were sampled from cohorts with larger numbers of parents and greater variance in reproductive success. Our model can be reparameterized with data from other species and will aid researchers in planning reproductive success studies by providing explicit sampling targets required to accurately assess reproductive success.
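    The two-stage scheme can be sketched as follows (parameter values are illustrative, not the paper's estimates): the negative binomial with mean μ and variance v maps to numpy's (n, p) parameterization via p = μ/v and n = μ²/(v − μ), and partial sampling of offspring is a draw without replacement from the pooled progeny.

      import numpy as np

      def simulate_study(mu, var, n_parents, n_assigned, rng=None):
          # Stage 1: per-parent offspring counts from a negative binomial
          # with mean mu and variance var (requires var > mu).
          rng = rng or np.random.default_rng()
          p = mu / var
          n = mu * mu / (var - mu)
          counts = rng.negative_binomial(n, p, size=n_parents)
          # Stage 2: assign parentage to a subsample of offspring, drawn
          # without replacement from the pooled progeny.
          sampled = rng.multivariate_hypergeometric(counts, n_assigned)
          return counts, sampled

      counts, sampled = simulate_study(mu=4.0, var=20.0, n_parents=50, n_assigned=60)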

  10. Large area synchrotron X-ray fluorescence mapping of biological samples

    International Nuclear Information System (INIS)

    Kempson, I.; Thierry, B.; Smith, E.; Gao, M.; De Jonge, M.

    2014-01-01

    Large area mapping of inorganic material in biological samples has suffered severely from prohibitively long acquisition times. With the advent of new detector technology we can now generate statistically relevant information for studying cell populations, inter-variability and bioinorganic chemistry in large specimen. We have been implementing ultrafast synchrotron-based XRF mapping afforded by the MAIA detector for large area mapping of biological material. For example, a 2.5 million pixel map can be acquired in 3 hours, compared to a typical synchrotron XRF set-up needing over 1 month of uninterrupted beamtime. Of particular focus to us is the fate of metals and nanoparticles in cells, 3D tissue models and animal tissues. The large area scanning has for the first time provided statistically significant information on sufficiently large numbers of cells to provide data on intercellular variability in uptake of nanoparticles. Techniques such as flow cytometry generally require analysis of thousands of cells for statistically meaningful comparison, due to the large degree of variability. Large area XRF now gives comparable information in a quantifiable manner. Furthermore, we can now image localised deposition of nanoparticles in tissues that would be highly improbable to 'find' by typical XRF imaging. In addition, the ultra fast nature also makes it viable to conduct 3D XRF tomography over large dimensions. This technology avails new opportunities in biomonitoring and understanding metal and nanoparticle fate ex-vivo. Following from this is extension to molecular imaging through specific anti-body targeted nanoparticles to label specific tissues and monitor cellular process or biological consequence

  11. Break down of the law of large numbers in Josephson junction series arrays

    International Nuclear Information System (INIS)

    Dominguez, D.; Cerdeira, H.A.

    1995-01-01

    We study underdamped Josephson junction series arrays that are globally coupled through a resistive shunting load and driven by an rf bias current. We find that they can be an experimental realization of many phenomena currently studied in globally coupled logistic maps. We find coherent, ordered, partially ordered and turbulent phases in the IV characteristics of the array. The ordered phase corresponds to giant Shapiro steps. In the turbulent phase there is a saturation of the broad band noise for a large number of junctions. This corresponds to a break down of the law of large numbers as seen in globally coupled maps. Coexisting with this, we find an emergence of novel pseudo-steps in the IV characteristics. This effect can be experimentally distinguished from the true Shapiro steps, which do not have broad band noise emission. (author). 21 refs, 5 figs

  12. Breakdown of the law of large numbers in Josephson junction series arrays

    International Nuclear Information System (INIS)

    Dominguez, D.; Cerdeira, H.A.

    1994-01-01

    We study underdamped Josephson junction series arrays that are globally coupled through a resistive shunting load and driven by an rf bias current. We find that they can be an experimental realization of many phenomena currently studied in globally coupled logistic maps. We find coherent, ordered, partially ordered and turbulent phases in the IV characteristics of the array. The ordered phase corresponds to giant Shapiro steps. In the turbulent phase there is a saturation of the broad band noise for a large number of junctions. This corresponds to a break down of the law of large numbers as seen in the globally coupled maps. Coexisting with this, we find an emergence of novel pseudo-steps in the IV characteristics. This effect can be experimentally distinguished from the Shapiro steps, which do not have broad band noise emission. (author). 21 refs, 5 figs

  13. The holographic dual of a Riemann problem in a large number of dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Herzog, Christopher P.; Spillane, Michael [C.N. Yang Institute for Theoretical Physics, Department of Physics and Astronomy,Stony Brook University, Stony Brook, NY 11794 (United States); Yarom, Amos [Department of Physics, Technion,Haifa 32000 (Israel)

    2016-08-22

    We study properties of a non equilibrium steady state generated when two heat baths are initially in contact with one another. The dynamics of the system we study are governed by holographic duality in a large number of dimensions. We discuss the “phase diagram” associated with the steady state, the dual, dynamical, black hole description of this problem, and its relation to the fluid/gravity correspondence.

  14. Phases of a stack of membranes in a large number of dimensions of configuration space

    Science.gov (United States)

    Borelli, M. E.; Kleinert, H.

    2001-05-01

    The phase diagram of a stack of tensionless membranes with nonlinear curvature energy and vertical harmonic interaction is calculated exactly in a large number of dimensions of configuration space. At low temperatures, the system forms a lamellar phase with spontaneously broken translational symmetry in the vertical direction. At a critical temperature, the stack disorders vertically in a meltinglike transition. The critical temperature is determined as a function of the interlayer separation l.

  15. Early stage animal hoarders: are these owners of large numbers of adequately cared for cats?

    OpenAIRE

    Ramos, D.; da Cruz, N. O.; Ellis, Sarah; Hernandez, J. A. E.; Reche-Junior, A.

    2013-01-01

    Animal hoarding is a spectrum-based condition in which hoarders are often reported to have had normal and appropriate pet-keeping habits in childhood and early adulthood. Historically, research has focused largely on well established clinical animal hoarders with little work targeted towards the onset and development of animal hoarding. This study investigated whether a Brazilian population of owners of what might typically be considered an excessive number (20 or more) of cats were more like...

  16. Field sampling, preparation procedure and plutonium analyses of large freshwater samples

    International Nuclear Information System (INIS)

    Straelberg, E.; Bjerk, T.O.; Oestmo, K.; Brittain, J.E.

    2002-01-01

    This work is part of an investigation of the mobility of plutonium in freshwater systems containing humic substances. A well-defined bog-stream system located in the catchment area of a subalpine lake, Oevre Heimdalsvatn, Norway, is being studied. During the summer of 1999, six water samples were collected from the tributary stream Lektorbekken and the lake itself. However, the analyses showed that the plutonium concentration was below the detection limit in all the samples. Therefore renewed sampling at the same sites was carried out in August 2000. The results so far are in agreement with previous analyses from the Heimdalen area. However, 100 times higher concentrations are found in the lowlands in the eastern part of Norway. The reason for this is not understood, but may be caused by differences in the concentrations of humic substances and/or the fact that the mountain areas are covered with snow for a longer period of time every year. (LN)

  17. A comment on "bats killed in large numbers at United States wind energy facilities"

    Science.gov (United States)

    Huso, Manuela M.P.; Dalthorp, Dan

    2014-01-01

    Widespread reports of bat fatalities caused by wind turbines have raised concerns about the impacts of wind power development. Reliable estimates of the total number killed and the potential effects on populations are needed, but it is crucial that they be based on sound data. In a recent BioScience article, Hayes (2013) estimated that over 600,000 bats were killed at wind turbines in the United States in 2012. The scientific errors in the analysis are numerous, with the two most serious being that the included sites constituted a convenience sample, not a representative sample, and that the individual site estimates are derived from such different methodologies that they are inherently not comparable. This estimate is almost certainly inaccurate, but whether the actual number is much smaller, much larger, or about the same is uncertain. An accurate estimate of total bat fatality is not currently possible, given the shortcomings of the available data.

  18. Loss of locality in gravitational correlators with a large number of insertions

    Science.gov (United States)

    Ghosh, Sudip; Raju, Suvrat

    2017-09-01

    We review lessons from the AdS/CFT correspondence that indicate that the emergence of locality in quantum gravity is contingent upon considering observables with a small number of insertions. Correlation functions, where the number of insertions scales with a power of the central charge of the CFT, are sensitive to nonlocal effects in the bulk theory, which arise from a combination of the effects of the bulk Gauss law and a breakdown of perturbation theory. To examine whether a similar effect occurs in flat space, we consider the scattering of massless particles in the bosonic string and the superstring in the limit where the number of external particles, n, becomes very large. We use estimates of the volume of the Weil-Petersson moduli space of punctured Riemann surfaces to argue that string amplitudes grow factorially in this limit. We verify this factorial behavior through an extensive numerical analysis of string amplitudes at large n. Our numerical calculations rely on the observation that, in the large n limit, the string scattering amplitude localizes on the Gross-Mende saddle points, even though individual particle energies are small. This factorial growth implies the breakdown of string perturbation theory for n ~ (M_pl/E)^(d-2) in d dimensions, where E is the typical individual particle energy. We explore the implications of this breakdown for the black hole information paradox. We show that the loss of locality suggested by this breakdown is precisely sufficient to resolve the cloning and strong subadditivity paradoxes.

  19. A methodology for the synthesis of heat exchanger networks having large numbers of uncertain parameters

    International Nuclear Information System (INIS)

    Novak Pintarič, Zorka; Kravanja, Zdravko

    2015-01-01

    This paper presents a robust computational methodology for the synthesis and design of flexible HENs (Heat Exchanger Networks) having large numbers of uncertain parameters. This methodology combines several heuristic methods which progressively lead to a flexible HEN design at a specific level of confidence. During the first step, a HEN topology is generated under nominal conditions, followed by determination of the points critical for flexibility. A significantly reduced multi-scenario model for flexible HEN design is formulated at the nominal point with the flexibility constraints at the critical points. The optimal design obtained is tested by stochastic Monte Carlo optimization and the flexibility index through solving one-scenario problems within a loop. The presented methodology is novel in the enormous reduction it achieves in the number of scenarios and in computational effort; despite several simplifications, the capability of designing flexible HENs with large numbers of uncertain parameters, which are typical throughout industry, is not compromised. An illustrative case study is presented for flexible HEN synthesis comprising 42 uncertain parameters. - Highlights: • Methodology for HEN (Heat Exchanger Network) design under uncertainty is presented. • The main benefit is solving HENs having large numbers of uncertain parameters. • A drastically reduced multi-scenario HEN design problem is formulated through several steps. • Flexibility of HEN is guaranteed at a specific level of confidence.

  20. Determining the number of samples required for decisions concerning remedial actions at hazardous waste sites

    International Nuclear Information System (INIS)

    Skiles, J.L.; Redfearn, A.; White, R.K.

    1991-01-01

    An important consideration for every risk analyst is how many field samples should be taken so that scientifically defensible decisions concerning the need for remediation of a hazardous waste site can be made. Since any plausible remedial action alternative must, at a minimum, satisfy the condition of protectiveness of human and environmental health, we propose a risk-based approach for determining the number of samples to take during remedial investigations rather than using more traditional approaches such as considering background levels of contamination or federal or state cleanup standards

  1. A full picture of large lepton number asymmetries of the Universe

    Energy Technology Data Exchange (ETDEWEB)

    Barenboim, Gabriela [Departament de Física Teòrica and IFIC, Universitat de València-CSIC, C/ Dr. Moliner, 50, Burjassot, E-46100 Spain (Spain); Park, Wan-Il, E-mail: Gabriela.Barenboim@uv.es, E-mail: wipark@jbnu.ac.kr [Department of Science Education (Physics), Chonbuk National University, 567 Baekje-daero, Jeonju, 561-756 (Korea, Republic of)

    2017-04-01

    A large lepton number asymmetry of O(0.1−1) in the present Universe might not only be allowed but also necessary for consistency among cosmological data. We show that, if a sizeable lepton number asymmetry were produced before the electroweak phase transition, the requirement of not producing too much baryon number asymmetry through sphaleron processes forces the high-scale lepton number asymmetry to be larger than about 0.3. Therefore a mild entropy release causing O(10-100) suppression of pre-existing particle density should take place when the background temperature of the Universe is around T = O(10^−2-10^2) GeV for a large but experimentally consistent asymmetry to be present today. We also show that such a mild entropy production can be obtained by the late-time decays of the saxion, constraining the parameters of the Peccei-Quinn sector such as the mass and the vacuum expectation value of the saxion field to be m_φ ≳ O(10) TeV and φ_0 ≳ O(10^14) GeV, respectively.

  2. Software engineering the mixed model for genome-wide association studies on large samples.

    Science.gov (United States)

    Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J

    2009-11-01

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.
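    The mixed model underlying these GWAS analyses has the standard form below (our notation, not tied to any particular package):

      \[
        \mathbf{y} \;=\; X\boldsymbol{\beta} + Z\mathbf{u} + \mathbf{e},
        \qquad
        \mathbf{u}\sim\mathcal{N}\!\left(\mathbf{0},\,K\sigma_{u}^{2}\right),
        \qquad
        \mathbf{e}\sim\mathcal{N}\!\left(\mathbf{0},\,I\sigma_{e}^{2}\right),
      \]
      % y: phenotypes; X: fixed effects, including the tested marker;
      % K: kinship matrix capturing relatedness and population structure.
      % The variance components sigma_u^2 and sigma_e^2 are estimated,
      % e.g. by REML, which dominates the computational cost at large N.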

  3. Estimating the numbers of malaria infections in blood samples using high-resolution genotyping data.

    Directory of Open Access Journals (Sweden)

    Amanda Ross

    People living in endemic areas often harbour several malaria infections at once. High-resolution genotyping can distinguish between infections by detecting the presence of different alleles at a polymorphic locus. However the number of infections may not be accurately counted since parasites from multiple infections may carry the same allele. We use simulation to determine the circumstances under which the number of observed genotypes is likely to be substantially less than the number of infections present and investigate the performance of two methods for estimating the numbers of infections from high-resolution genotyping data. The simulations suggest that the problem is not substantial in most datasets: the disparity between the mean numbers of infections and of observed genotypes was small when there were 20 or more alleles, 20 or more blood samples, a mean number of infections of 6 or less and where the frequency of the most common allele was no greater than 20%. The issue of multiple infections carrying the same allele is unlikely to be a major component of the errors in PCR-based genotyping. Simulations also showed that, with heterogeneity in allele frequencies, the observed frequencies are not a good approximation of the true allele frequencies. The first method that we proposed to estimate the numbers of infections assumes that they are a good approximation and hence did poorly in the presence of heterogeneity. In contrast, the second method by Li et al estimates both the numbers of infections and the true allele frequencies simultaneously and produced accurate estimates of the mean number of infections.
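    A minimal sketch of the kind of simulation described (the allele frequencies below are invented to match the stated conditions): each infection carries an allele drawn from the population frequencies, and only the distinct alleles are observed.

      import numpy as np

      def observed_genotypes(n_infections, allele_freqs, rng=None):
          # Infections sharing an allele are indistinguishable, so the
          # number of distinct alleles can undercount the infections.
          rng = rng or np.random.default_rng()
          alleles = rng.choice(len(allele_freqs), size=n_infections, p=allele_freqs)
          return np.unique(alleles).size

      freqs = np.array([0.2] + [0.8 / 39] * 39)  # 40 alleles, commonest at 20%
      trials = [observed_genotypes(6, freqs) for _ in range(10_000)]
      print(np.mean(trials))                     # slightly below 6, as expected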

  4. High quality copy number and genotype data from FFPE samples using Molecular Inversion Probe (MIP) microarrays

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yuker; Carlton, Victoria E.H.; Karlin-Neumann, George; Sapolsky, Ronald; Zhang, Li; Moorhead, Martin; Wang, Zhigang C.; Richardson, Andrea L.; Warren, Robert; Walther, Axel; Bondy, Melissa; Sahin, Aysegul; Krahe, Ralf; Tuna, Musaffe; Thompson, Patricia A.; Spellman, Paul T.; Gray, Joe W.; Mills, Gordon B.; Faham, Malek

    2009-02-24

    A major challenge facing DNA copy number (CN) studies of tumors is that most banked samples with extensive clinical follow-up information are Formalin-Fixed Paraffin Embedded (FFPE). DNA from FFPE samples generally underperforms or suffers high failure rates compared to fresh frozen samples because of DNA degradation and cross-linking during FFPE fixation and processing. As FFPE protocols may vary widely between labs and samples may be stored for decades at room temperature, an ideal FFPE CN technology should work on diverse sample sets. Molecular Inversion Probe (MIP) technology has been applied successfully to obtain high quality CN and genotype data from cell line and frozen tumor DNA. Since the MIP probes require only a small (~40 bp) target binding site, we reasoned they may be well suited to assess degraded FFPE DNA. We assessed CN with a MIP panel of 50,000 markers in 93 FFPE tumor samples from 7 diverse collections. For 38 FFPE samples from three collections we were also able to assess CN in matched fresh frozen tumor tissue. Using an input of 37 ng genomic DNA, we generated high quality CN data with MIP technology in 88% of FFPE samples from seven diverse collections. When matched fresh frozen tissue was available, the performance of FFPE DNA was comparable to that of DNA obtained from matched frozen tumor (genotype concordance averaged 99.9%), with only a modest loss in performance in FFPE. MIP technology can be used to generate high quality CN and genotype data in FFPE as well as fresh frozen samples.

  5. Statistical characterization of a large geochemical database and effect of sample size

    Science.gov (United States)

    Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.

    2005-01-01

    smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests, and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histogram, stem-and-leaf, and probability plots are recommended for rough judgement of probability distribution if needed. © 2005 Elsevier Ltd. All rights reserved.

  6. CHRONICITY OF DEPRESSION AND MOLECULAR MARKERS IN A LARGE SAMPLE OF HAN CHINESE WOMEN.

    Science.gov (United States)

    Edwards, Alexis C; Aggen, Steven H; Cai, Na; Bigdeli, Tim B; Peterson, Roseann E; Docherty, Anna R; Webb, Bradley T; Bacanu, Silviu-Alin; Flint, Jonathan; Kendler, Kenneth S

    2016-04-25

    Major depressive disorder (MDD) has been associated with changes in mean telomere length and mitochondrial DNA (mtDNA) copy number. This study investigates if clinical features of MDD differentially impact these molecular markers. Data from a large, clinically ascertained sample of Han Chinese women with recurrent MDD were used to examine whether symptom presentation, severity, and comorbidity were related to salivary telomere length and/or mtDNA copy number (maximum N = 5,284 for both molecular and phenotypic data). Structural equation modeling revealed that duration of longest episode was positively associated with mtDNA copy number, while earlier age of onset of most severe episode and a history of dysthymia were associated with shorter telomeres. Other factors, such as symptom presentation, family history of depression, and other comorbid internalizing disorders, were not associated with these molecular markers. Chronicity of depressive symptoms is related to more pronounced telomere shortening and increased mtDNA copy number among individuals with a history of recurrent MDD. As these molecular markers have previously been implicated in physiological aging and morbidity, individuals who experience prolonged depressive symptoms are potentially at greater risk of adverse medical outcomes. © 2016 Wiley Periodicals, Inc.

  7. Sampling Number Effects in 2D and Range Imaging of Range-gated Acquisition

    International Nuclear Information System (INIS)

    Kwon, Seong-Ouk; Park, Seung-Kyu; Baik, Sung-Hoon; Cho, Jai-Wan; Jeong, Kyung-Min

    2015-01-01

    In this paper, we analyzed the effect of the number of sampled images on making a 2D image and a range image from acquired RGI images, using an RGI vision system. As the results show, 2D image quality did not depend much on the number of sampled images but rather on how well efficient RGI images were extracted. The number of RGI images was, however, important for making a range image, because range image quality was proportional to the number of RGI images. Image acquisition in a monitoring area of the nuclear industry is an important function for safety inspection and preparing appropriate control plans. To overcome the non-visualization problem caused by airborne obstacle particles, vision systems should have extra functions, such as active illumination lighting through disturbance airborne particles. One of these powerful active vision systems is a range-gated imaging system. A vision system based on range-gated imaging can acquire image data in raining or smoking environments. Range-gated imaging (RGI) is a direct active visualization technique using a highly sensitive image sensor and a high-intensity illuminant. Currently, the range-gated imaging technique providing 2D and 3D images is one of the emerging active vision technologies. The range-gated imaging system gets vision information by summing time-sliced vision images. In the RGI system, a high-intensity illuminant illuminates for an ultra-short time and a highly sensitive image sensor is gated by an ultra-short exposure time to capture only the illumination light. Here, the illuminant illuminates objects by flashing strong light through airborne disturbance particles. Thus, in contrast to passive conventional vision systems, the RGI active vision technology is robust in low-visibility environments.

  8. Sampling Number Effects in 2D and Range Imaging of Range-gated Acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Seong-Ouk; Park, Seung-Kyu; Baik, Sung-Hoon; Cho, Jai-Wan; Jeong, Kyung-Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    In this paper, we analyzed the effect of the number of sampled images on making a 2D image and a range image from acquired RGI images, using an RGI vision system. As the results show, 2D image quality did not depend much on the number of sampled images but rather on how well efficient RGI images were extracted. The number of RGI images was, however, important for making a range image, because range image quality was proportional to the number of RGI images. Image acquisition in a monitoring area of the nuclear industry is an important function for safety inspection and preparing appropriate control plans. To overcome the non-visualization problem caused by airborne obstacle particles, vision systems should have extra functions, such as active illumination lighting through disturbance airborne particles. One of these powerful active vision systems is a range-gated imaging system. A vision system based on range-gated imaging can acquire image data in raining or smoking environments. Range-gated imaging (RGI) is a direct active visualization technique using a highly sensitive image sensor and a high-intensity illuminant. Currently, the range-gated imaging technique providing 2D and 3D images is one of the emerging active vision technologies. The range-gated imaging system gets vision information by summing time-sliced vision images. In the RGI system, a high-intensity illuminant illuminates for an ultra-short time and a highly sensitive image sensor is gated by an ultra-short exposure time to capture only the illumination light. Here, the illuminant illuminates objects by flashing strong light through airborne disturbance particles. Thus, in contrast to passive conventional vision systems, the RGI active vision technology is robust in low-visibility environments.
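    A schematic of the reconstruction step (array shapes and function name are ours): the 2D image is the sum over the gated slices, while the range image takes, per pixel, the range of the gate with the peak response, which is why its quality grows with the number of RGI images.

      import numpy as np

      def reconstruct(rgi_stack, gate_ranges):
          # rgi_stack: (n_gates, H, W) gated intensity slices.
          # gate_ranges: (n_gates,) range associated with each gate.
          image_2d = rgi_stack.sum(axis=0)                            # 2D image
          range_map = np.asarray(gate_ranges)[rgi_stack.argmax(axis=0)]  # range image
          return image_2d, range_map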

  9. Impact factors for Reggeon-gluon transition in N=4 SYM with large number of colours

    Energy Technology Data Exchange (ETDEWEB)

    Fadin, V.S., E-mail: fadin@inp.nsk.su [Budker Institute of Nuclear Physics of SD RAS, 630090 Novosibirsk (Russian Federation); Novosibirsk State University, 630090 Novosibirsk (Russian Federation); Fiore, R., E-mail: roberto.fiore@cs.infn.it [Dipartimento di Fisica, Università della Calabria, and Istituto Nazionale di Fisica Nucleare, Gruppo collegato di Cosenza, Arcavacata di Rende, I-87036 Cosenza (Italy)

    2014-06-27

    We calculate impact factors for the Reggeon-gluon transition in N=4 supersymmetric Yang-Mills theory at a large number of colours N_c. In the next-to-leading order, impact factors are not uniquely defined and must accord with the BFKL kernels and energy scales. We obtain the impact factor corresponding to the kernel and the energy evolution parameter that is invariant under Möbius transformations in momentum space, and show that it is also Möbius invariant up to terms taken into account in the BDS ansatz.

  10. Do neutron stars disprove multiplicative creation in Dirac's large number hypothesis

    International Nuclear Information System (INIS)

    Qadir, A.; Mufti, A.A.

    1980-07-01

    Dirac's cosmology, based on his large number hypothesis, took the gravitational coupling to be decreasing with time and matter to be created as the square of time. Since the effects predicted by Dirac's theory are very small, it is difficult to find a "clean" test for it. Here we show that the observed radiation from pulsars is inconsistent with Dirac's multiplicative creation model, in which the matter created is proportional to the density of matter already present. Of course, this discussion makes no comment on the "additive creation" model, or on the revised version of Dirac's theory. (author)

  11. Law of large numbers and central limit theorem for randomly forced PDE's

    CERN Document Server

    Shirikyan, A

    2004-01-01

    We consider a class of dissipative PDE's perturbed by an external random force. Under the condition that the distribution of perturbation is sufficiently non-degenerate, a strong law of large numbers (SLLN) and a central limit theorem (CLT) for solutions are established and the corresponding rates of convergence are estimated. It is also shown that the estimates obtained are close to being optimal. The proofs are based on the property of exponential mixing for the problem in question and some abstract SLLN and CLT for mixing-type Markov processes.

  12. On the Convergence and Law of Large Numbers for the Non-Euclidean Lp -Means

    Directory of Open Access Journals (Sweden)

    George Livadiotis

    2017-05-01

    This paper describes and proves two important theorems that compose the Law of Large Numbers for the non-Euclidean L_p-means, known to be true for the Euclidean L_2-means: let the L_p-mean estimator be the specific functional that estimates the L_p-mean of N independent and identically distributed random variables; then, (i) the expectation value of the L_p-mean estimator equals the mean of the distributions of the random variables; and (ii) in the limit N → ∞ the L_p-mean estimator also converges to the mean of the distributions.
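
    The convergence in theorem (ii) is easy to see numerically. Below is a minimal sketch assuming the common definition of the L_p-mean estimator as the minimizer of the sum of p-th powers of absolute deviations; the paper's exact functional may differ in detail.

        import numpy as np
        from scipy.optimize import minimize_scalar

        def lp_mean(x, p):
            # L_p-mean estimator taken here as the minimizer of sum |x_i - m|^p
            # (a common definition; the paper's exact functional may differ).
            res = minimize_scalar(lambda m: np.sum(np.abs(x - m) ** p),
                                  bounds=(x.min(), x.max()), method="bounded")
            return res.x

        rng = np.random.default_rng(1)
        for n in (10, 100, 1000, 10000):
            x = rng.normal(2.0, 1.0, size=n)      # distribution mean is 2.0
            print(n, lp_mean(x, 1.5))             # approaches 2.0 as n grows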

  13. Superposition of elliptic functions as solutions for a large number of nonlinear equations

    International Nuclear Information System (INIS)

    Khare, Avinash; Saxena, Avadh

    2014-01-01

    For a large number of nonlinear equations, both discrete and continuum, we demonstrate a kind of linear superposition. We show that whenever a nonlinear equation admits solutions in terms of both Jacobi elliptic functions cn(x, m) and dn(x, m) with modulus m, then it also admits solutions in terms of their sum as well as difference. We have checked this in the case of several nonlinear equations such as the nonlinear Schrödinger equation, MKdV, a mixed KdV-MKdV system, a mixed quadratic-cubic nonlinear Schrödinger equation, the Ablowitz-Ladik equation, the saturable nonlinear Schrödinger equation, λϕ^4, the discrete MKdV as well as for several coupled field equations. Further, for a large number of nonlinear equations, we show that whenever a nonlinear equation admits a periodic solution in terms of dn^2(x, m), it also admits solutions in terms of dn^2(x, m) ± √m cn(x, m) dn(x, m), even though cn(x, m) dn(x, m) is not a solution of these nonlinear equations. Finally, we also obtain superposed solutions of various forms for several coupled nonlinear equations.

  14. Law of Large Numbers: the Theory, Applications and Technology-based Education.

    Science.gov (United States)

    Dinov, Ivo D; Christou, Nicolas; Gould, Robert

    2009-03-01

    Modern approaches for technology-based blended education utilize a variety of recently developed novel pedagogical, computational and network resources. Such attempts employ technology to deliver integrated, dynamically-linked, interactive-content and heterogeneous learning environments, which may improve student comprehension and information retention. In this paper, we describe one such innovative effort of using technological tools to expose students in probability and statistics courses to the theory, practice and usability of the Law of Large Numbers (LLN). We base our approach on integrating pedagogical instruments with the computational libraries developed by the Statistics Online Computational Resource (www.SOCR.ucla.edu). To achieve this merger we designed a new interactive Java applet and a corresponding demonstration activity that illustrate the concept and the applications of the LLN. The LLN applet and activity have common goals - to provide graphical representation of the LLN principle, build lasting student intuition and present the common misconceptions about the law of large numbers. Both the SOCR LLN applet and activity are freely available online to the community to test, validate and extend (Applet: http://socr.ucla.edu/htmls/exp/Coin_Toss_LLN_Experiment.html, and Activity: http://wiki.stat.ucla.edu/socr/index.php/SOCR_EduMaterials_Activities_LLN).

  15. Wall modeled large eddy simulations of complex high Reynolds number flows with synthetic inlet turbulence

    International Nuclear Information System (INIS)

    Patil, Sunil; Tafti, Danesh

    2012-01-01

    Highlights: large eddy simulation; wall layer modeling; synthetic inlet turbulence; swirl flows. Abstract: Large eddy simulations of complex high Reynolds number flows are carried out with the near wall region being modeled with a zonal two layer model. A novel formulation for solving the turbulent boundary layer equation for the effective tangential velocity in a generalized co-ordinate system is presented and applied in the near wall zonal treatment. This formulation reduces the computational time in the inner layer significantly compared to the conventional two layer formulations present in the literature and is most suitable for complex geometries involving body fitted structured and unstructured meshes. The cost effectiveness and accuracy of the proposed wall model, used with the synthetic eddy method (SEM) to generate inlet turbulence, is investigated in turbulent channel flow, flow over a backward facing step, and confined swirling flows at moderately high Reynolds numbers. Predictions are compared with available DNS, experimental LDV data, as well as wall resolved LES. In all cases, there is at least an order of magnitude reduction in computational cost with no significant loss in prediction accuracy.

  16. Utilizing the International GeoSample Number Concept during ICDP Expedition COSC

    Science.gov (United States)

    Conze, Ronald; Lorenz, Henning; Ulbricht, Damian; Gorgas, Thomas; Elger, Kirsten

    2016-04-01

    The concept of the International GeoSample Number (IGSN) was introduced to uniquely identify and register geo-related sample material and make it retrievable via electronic media (e.g., SESAR - http://www.geosamples.org/igsnabout). The general aim of the IGSN concept is to improve access to stored sample material worldwide, to enable the exact identification of a sample, its origin and provenance, and to allow the exact and complete citation of acquired samples throughout the literature. The ICDP expedition COSC (Collisional Orogeny in the Scandinavian Caledonides, http://cosc.icdp-online.org) was the first in ICDP's history to assign and register IGSNs during an ongoing drilling campaign. ICDP drilling expeditions commonly use the Drilling Information System DIS (http://doi.org/10.2204/iodp.sd.4.07.2007) for the inventory of recovered sample material. During COSC, IGSNs were assigned to every drill hole, core run, core section, and sample taken from core material. The original IGSN specification has been extended to achieve the required uniqueness of IGSNs with our offline procedure. The ICDP name space indicator and the expedition ID (5054) form an extended prefix (ICDP5054). For every type of sample material, an encoded sequence of characters follows; this sequence is derived from the DIS naming convention, which is unique from the beginning. Thereby every ICDP expedition has an unlimited name space for IGSN assignments. This direct derivation of IGSNs from the DIS database context ensures the distinct parent-child hierarchy of the IGSNs among each other. In the case of COSC, this method of inventory-keeping for all drill cores was applied routinely, using the ExpeditionDIS during field work and the subsequent sampling party. After completion of the field campaign, all sample material was transferred to the "Nationales Bohrkernlager" in Berlin-Spandau, Germany, and the corresponding data were subsequently imported into the CurationDIS used at the core storage facility.
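
    The prefixing scheme can be illustrated with a toy helper. Only the ICDP namespace and the expedition ID 5054 come from the text above; the DIS-derived sample-code format used here is a hypothetical stand-in.

        def make_igsn(namespace, expedition_id, dis_code):
            # Extended prefix (e.g. "ICDP" + "5054" = "ICDP5054") followed by a
            # DIS-derived sequence that is already unique within the expedition.
            return "{}{}{}".format(namespace, expedition_id, dis_code)

        # Hole A, core run 12, section 2, sample 5 (the encoding is made up).
        print(make_igsn("ICDP", 5054, "A12S2X5"))   # -> ICDP5054A12S2X5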

  17. Conformal window in QCD for large numbers of colors and flavors

    International Nuclear Information System (INIS)

    Zhitnitsky, Ariel R.

    2014-01-01

    We conjecture that the phase transition in QCD at a large number of colors N ≫ 1 is triggered by a drastic change in the instanton density. As a result, all physical observables also experience a sharp modification in their θ behavior. This conjecture is motivated by the holographic model of QCD, where the confinement-deconfinement phase transition indeed happens precisely at the temperature T = T_c where the θ-dependence of the vacuum energy experiences a sudden change in behavior: from N^2 cos(θ/N) at T < T_c to cos θ exp(−N) at T > T_c. This conjecture is also supported by recent lattice studies. We employ this conjecture to study a possible phase transition as a function of κ ≡ N_f/N from the confinement to the conformal phase in the Veneziano limit N_f ∼ N, when the numbers of flavors and colors are large but the ratio κ is finite. Technically, we consider an operator which gets its expectation value solely from non-perturbative instanton effects. When κ exceeds some critical value, κ > κ_c, the integral over instanton size is dominated by small-size instantons, making the instanton computations reliable, with the expected exp(−N) behavior. However, when κ < κ_c, the integral over instanton size is dominated by large-size instantons, and the instanton expansion breaks down. This regime with κ < κ_c corresponds to the confinement phase. We also compute the variation of the critical κ_c(T, μ) when the temperature and chemical potential T, μ ≪ Λ_QCD slightly vary. We also discuss the scaling (x_i − x_j)^(−γ_det) in the conformal phase.

  18. Vicious random walkers in the limit of a large number of walkers

    International Nuclear Information System (INIS)

    Forrester, P.J.

    1989-01-01

    The vicious random walker problem on a line is studied in the limit of a large number of walkers. The multidimensional integral representing the probability that the p walkers will survive a time t (denoted P_t^(p)) is shown to be analogous to the partition function of a particular one-component Coulomb gas. By assuming the existence of the thermodynamic limit for the Coulomb gas, one can deduce asymptotic formulas for P_t^(p) in the large-p, large-t limit. A straightforward analysis gives rigorous asymptotic formulas for the probability that after a time t the walkers are in their initial configuration (this event is termed a reunion). Consequently, asymptotic formulas for the conditional probability of a reunion, given that all walkers survive, are derived. Also, an asymptotic formula for the conditional probability density that any walker will arrive at a particular point in time t, given that all p walkers survive, is calculated in the limit t ≫ p.

  19. Sample preparation method for ICP-MS measurement of 99Tc in a large amount of environmental samples

    International Nuclear Information System (INIS)

    Kondo, M.; Seki, R.

    2002-01-01

    Sample preparation for the measurement of 99Tc in large amounts of soil and water samples by ICP-MS has been developed using 95mTc as a yield tracer. This method is based on the conventional method for small soil samples using incineration, acid digestion, extraction chromatography (TEVA resin) and ICP-MS measurement. A preliminary concentration of Tc by co-precipitation with ferric oxide has been introduced. Matrix materials in large samples were removed more thoroughly than with the previous method, while keeping a high recovery of Tc. The recovery of Tc was 70-80% for 100 g soil samples and 60-70% for 500 g of soil and 500 L of water samples. The detection limit of this method was evaluated as 0.054 mBq/kg in 500 g soil and 0.032 μBq/L in 500 L water. The determined value of 99Tc in IAEA-375 (a soil sample collected near the Chernobyl Nuclear Reactor) was 0.25 ± 0.02 Bq/kg. (author)

  20. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Sample selection by random number... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square... area created in accordance with paragraph (a) of this section, select two random numbers: one each for...
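
    In the spirit of the rule's title, here is a sketch of drawing one random number per axis to pick a sampling location on a square grid. This is an illustration only, not the regulatory procedure itself.

        import random

        def select_grid_point(n_x, n_y, seed=None):
            # One random number per axis picks a cell on an n_x-by-n_y grid.
            rng = random.Random(seed)
            return rng.randrange(n_x), rng.randrange(n_y)

        print(select_grid_point(10, 10, seed=42))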

  1. 105-DR Large sodium fire facility soil sampling data evaluation report

    International Nuclear Information System (INIS)

    Adler, J.G.

    1996-01-01

    This report evaluates the soil sampling activities, soil sample analysis, and soil sample data associated with the closure activities at the 105-DR Large Sodium Fire Facility. The evaluation compares these activities to the regulatory requirements for meeting clean closure. The report concludes that there is no soil contamination from the waste treatment activities

  2. Properties of sound attenuation around a two-dimensional underwater vehicle with a large cavitation number

    International Nuclear Information System (INIS)

    Ye Peng-Cheng; Pan Guang

    2015-01-01

    Due to the high speed of underwater vehicles, cavitation is inevitably generated, and the sound signal is attenuated when it traverses the cavity region around the underwater vehicle. Linear wave propagation is studied to obtain the influence of the bubbly liquid on acoustic wave propagation in the cavity region. The sound attenuation coefficient and the sound speed formula of the bubbly liquid are presented. Based on the sound attenuation coefficients for various vapor volume fractions, the attenuation of sound intensity is calculated under large cavitation number conditions. The result shows that the sound intensity attenuation is fairly small under these conditions; consequently, the intensity attenuation can be neglected in engineering practice. (paper)

  3. Random number generators for large-scale parallel Monte Carlo simulations on FPGA

    Science.gov (United States)

    Lin, Y.; Wang, F.; Liu, B.

    2018-05-01

    Through parallelization, field programmable gate arrays (FPGAs) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGAs present both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four RNGs used in previous FPGA-based MC studies together with newly proposed FPGA implementations of two well-known high-quality RNGs that are suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (Parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
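
    As background on the recurrence behind the winning generator, here is a plain-software sketch of an additive lagged Fibonacci generator with the classic lags (24, 55); the paper's parallel FPGA variant and its chosen lags are not reproduced here.

        class AdditiveLFG:
            # x[n] = (x[n-s] + x[n-r]) mod 2**m, with lags (s, r) = (24, 55).
            def __init__(self, seed=12345, s=24, r=55, m=32):
                self.s, self.r, self.mask = s, r, (1 << m) - 1
                state = seed & self.mask
                self.buf = []
                for _ in range(r):                  # seed the lag table with an LCG
                    state = (1664525 * state + 1013904223) & self.mask
                    self.buf.append(state)
                self.i = 0                          # index of the oldest entry, x[n-r]

            def next(self):
                j = (self.i - self.s) % self.r      # index holding x[n-s]
                v = (self.buf[j] + self.buf[self.i]) & self.mask
                self.buf[self.i] = v                # overwrite x[n-r] with the new x[n]
                self.i = (self.i + 1) % self.r
                return v

        g = AdditiveLFG()
        print([g.next() for _ in range(5)])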

  4. Large-Eddy Simulation of a High Reynolds Number Flow Around a Cylinder Including Aeroacoustic Predictions

    Science.gov (United States)

    Spyropoulos, Evangelos T.; Holmes, Bayard S.

    1997-01-01

    The dynamic subgrid-scale model is employed in large-eddy simulations of flow over a cylinder at a Reynolds number, based on the diameter of the cylinder, of 90,000. The Centric SPECTRUM(trademark) finite element solver is used for the analysis. The far field sound pressure is calculated from Lighthill-Curle's equation using the computed fluctuating pressure at the surface of the cylinder. The sound pressure level at a location 35 diameters away from the cylinder and at an angle of 90 deg with respect to the wake's downstream axis was found to have a peak value of approximately 110 dB. Slightly smaller peak values were predicted at the 60 deg and 120 deg locations. A grid refinement study suggests that the dynamic model demands mesh refinement beyond that used here.

  5. System for high-voltage control detectors with large number photomultipliers

    International Nuclear Information System (INIS)

    Donskov, S.V.; Kachanov, V.A.; Mikhajlov, Yu.V.

    1985-01-01

    A simple and inexpensive on-line system for high-voltage control, designed for detectors with a large number of photomultipliers, has been developed and manufactured. It was developed for the GAMC-type hodoscopic electromagnetic calorimeters, comprising up to 4 thousand photomultipliers. High-voltage variation is performed by a high-speed potentiometer rotated by a microengine. Block diagrams of the computer control electronics are presented. The high-voltage control system has been used for five years in the IHEP and CERN accelerator experiments. The operating experience has shown that it is quite simple and convenient in operation. With about 6 thousand controlled channels in the two experiments, no potentiometer or microengine failures were observed.

  6. Chaotic advection at large Péclet number: Electromagnetically driven experiments, numerical simulations, and theoretical predictions

    International Nuclear Information System (INIS)

    Figueroa, Aldo; Meunier, Patrice; Villermaux, Emmanuel; Cuevas, Sergio; Ramos, Eduardo

    2014-01-01

    We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM), which was recently introduced (P. Meunier and E. Villermaux, "The diffusive strip method for scalar mixing in two-dimensions," J. Fluid Mech. 662, 134-172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows prediction of the PDFs of scalar concentration in agreement with numerical and experimental results. This model also indicates that the PDFs of scalar concentration are asymptotically close to log-normal at late stages, except for the large concentration levels, which correspond to low stretching factors.

  7. Decision process in MCDM with large number of criteria and heterogeneous risk preferences

    Directory of Open Access Journals (Sweden)

    Jian Liu

    A new decision process is proposed to address the challenge posed by a large number of criteria in multi-criteria decision making (MCDM) problems and by decision makers with heterogeneous risk preferences. First, from the perspective of objective data, the effective criteria are extracted based on the similarity relations between criterion values, and the criteria are weighted, respectively. Second, the corresponding types of theoretical models of risk-preference expectations are built, based on the possibility and similarity between criterion values, to solve the problem of different interval numbers having the same expectation. Then, the risk preferences (risk-seeking, risk-neutral and risk-averse) are embedded in the decision process. Later, the optimal decision object is selected according to the risk preferences of the decision makers based on the corresponding theoretical model. Finally, a new algorithm for an information aggregation model is proposed, based on fairness maximization of decision results for group decisions, considering the coexistence of decision makers with heterogeneous risk preferences. A verification of the scientific rationality of this new method is given through the analysis of a real case. Keywords: Heterogeneous, Risk preferences, Fairness, Decision process, Group decision

  8. New approaches to phylogenetic tree search and their application to large numbers of protein alignments.

    Science.gov (United States)

    Whelan, Simon

    2007-10-01

    Phylogenetic tree estimation plays a critical role in a wide variety of molecular studies, including molecular systematics, phylogenetics, and comparative genomics. Finding the optimal tree relating a set of sequences using score-based (optimality criterion) methods, such as maximum likelihood and maximum parsimony, may require all possible trees to be considered, which is not feasible even for modest numbers of sequences. In practice, trees are estimated using heuristics that represent a trade-off between topological accuracy and speed. I present a series of novel algorithms suitable for score-based phylogenetic tree reconstruction that demonstrably improve the accuracy of tree estimates while maintaining high computational speeds. The heuristics function by allowing the efficient exploration of large numbers of trees through novel hill-climbing and resampling strategies. These heuristics, and other computational approximations, are implemented for maximum likelihood estimation of trees in the program Leaphy, and its performance is compared to other popular phylogenetic programs. Trees are estimated from 4059 different protein alignments using a selection of phylogenetic programs and the likelihoods of the tree estimates are compared. Trees estimated using Leaphy are found to have equal to or better likelihoods than trees estimated using other phylogenetic programs in 4004 (98.6%) families and provide a unique best tree that no other program found in 1102 (27.1%) families. The improvement is particularly marked for larger families (80 to 100 sequences), where Leaphy finds a unique best tree in 81.7% of families.

  9. Large Sample Neutron Activation Analysis: A Challenge in Cultural Heritage Studies

    International Nuclear Information System (INIS)

    Stamatelatos, I.E.; Tzika, F.

    2007-01-01

    Large sample neutron activation analysis complements and significantly extends the analytical tools available for cultural heritage and authentication studies, providing unique applications of non-destructive, multi-element analysis of materials that are too precious to damage for sampling purposes, representative sampling of heterogeneous materials, or even analysis of whole objects. In this work, correction factors for neutron self-shielding, gamma-ray attenuation and the volume distribution of the activity in large-volume samples composed of iron and ceramic material were derived. Moreover, the effect of inhomogeneity on the accuracy of the technique was examined.
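
    For gamma-ray attenuation in a thick sample, a textbook slab approximation gives a feel for the size of the correction. The coefficients below are illustrative, and this is not the correction model derived in the paper.

        import math

        def slab_self_attenuation(mu, t):
            # Average self-attenuation of a uniformly active slab of thickness t
            # viewed face-on: (1 - exp(-mu*t)) / (mu*t); a textbook approximation.
            x = mu * t
            return (1.0 - math.exp(-x)) / x if x > 0 else 1.0

        f = slab_self_attenuation(mu=0.15, t=5.0)   # illustrative: 0.15 1/cm, 5 cm
        print(f, 1.0 / f)   # measured ~ f * true rate; 1/f is the correction factor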

  10. Determining the number of samples required for decisions concerning remedial actions at hazardous waste sites

    International Nuclear Information System (INIS)

    Skiles, J.L.; Redfearn, A.; White, R.K.

    1991-01-01

    The process of collecting, analyzing, and assessing the data needed to make decisions concerning the cleanup of hazardous waste sites is quite complex and often very expensive, owing to the many elements that must be considered during remedial investigations. The decision maker must have sufficient data to determine the potential risks to human health and the environment and to verify compliance with regulatory requirements, given the availability of resources allocated for a site and the time constraints specified for the completion of the decision-making process. It is desirable to simplify the remedial investigation procedure as much as possible to conserve both time and resources while, simultaneously, minimizing the probability of error associated with each decision to be made. With this in mind, it is necessary to have a practical and statistically valid technique for estimating the number of on-site samples required to "guarantee" that the correct decisions are made with a specified precision and confidence level. Here, we examine existing methodologies and then develop our own approach for determining a statistically defensible sample size based on specific guidelines that have been established for the risk assessment process.
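
    As generic background for such sample-size estimates, the classical normal-theory formula n >= (z·σ/E)^2 links the number of samples to the desired precision E and confidence level; the site-specific approach developed in the paper is more involved than this sketch.

        import math
        from scipy.stats import norm

        def n_required(sigma, max_error, confidence=0.95):
            # Classical normal-theory formula n >= (z * sigma / E)**2 for
            # estimating a mean to within +/-E at the given confidence level.
            z = norm.ppf(0.5 + confidence / 2.0)
            return math.ceil((z * sigma / max_error) ** 2)

        # Contaminant sd ~ 12 mg/kg, tolerated error +/-5 mg/kg, 95% confidence:
        print(n_required(12.0, 5.0))   # -> 23 samples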

  11. CRISPR transcript processing: a mechanism for generating a large number of small interfering RNAs

    Directory of Open Access Journals (Sweden)

    Djordjevic Marko

    2012-07-01

    Background: CRISPR/Cas (Clustered Regularly Interspaced Short Palindromic Repeats/CRISPR-associated sequences) is a recently discovered prokaryotic defense system against foreign DNA, including viruses and plasmids. The CRISPR cassette is transcribed as a continuous transcript (pre-crRNA), which is processed by Cas proteins into small RNA molecules (crRNAs) that are responsible for defense against invading viruses. Experiments in E. coli report that overexpression of cas genes generates a large number of crRNAs from only a few pre-crRNAs. Results: We here develop a minimal model of CRISPR processing, which we parameterize based on available experimental data. From the model, we show that the system can generate a large amount of crRNAs based on only a small decrease in the amount of pre-crRNAs. The relationship between the decrease of pre-crRNAs and the increase of crRNAs corresponds to strong linear amplification. Interestingly, this strong amplification crucially depends on fast non-specific degradation of pre-crRNA by an unidentified nuclease. We show that overexpression of cas genes above a certain level does not result in a further increase of crRNA, but that this saturation can be relieved if the rate of CRISPR transcription is increased. We furthermore show that a small increase of the CRISPR transcription rate can substantially decrease the extent of cas gene activation necessary to achieve a desired amount of crRNA. Conclusions: The simple mathematical model developed here is able to explain existing experimental observations on CRISPR transcript processing in Escherichia coli. The model shows that a competition between specific pre-crRNA processing and non-specific degradation determines the steady-state levels of crRNA and is responsible for strong linear amplification of crRNAs when cas genes are overexpressed. The model further shows how the disappearance of only a few pre-crRNA molecules normally present in the cell can lead to a large (two
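
    The competition the authors describe can be caricatured with a two-species steady-state model; the rate constants below are illustrative, not the parameterization fitted in the paper.

        def steady_state(transcription, k_proc, k_deg, yield_per_pre, d_cr):
            # Minimal model:  d(pre)/dt = transcription - (k_proc + k_deg) * pre
            #                 d(cr)/dt  = yield_per_pre * k_proc * pre - d_cr * cr
            pre = transcription / (k_proc + k_deg)
            cr = yield_per_pre * k_proc * pre / d_cr
            return pre, cr

        # With fast non-specific degradation (k_deg >> k_proc), raising k_proc
        # (cas overexpression) lowers pre-crRNA only modestly while raising
        # crRNA strongly, i.e. the linear amplification described above.
        for k_proc in (0.1, 1.0, 10.0):
            print(k_proc, steady_state(1.0, k_proc, 10.0, 6.0, 0.05))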

  12. Uncertainty budget in internal monostandard NAA for small and large size samples analysis

    International Nuclear Information System (INIS)

    Dasari, K.B.; Acharya, R.

    2014-01-01

    Evaluation of the total uncertainty budget on a determined concentration value is important under a quality assurance programme. Concentration calculation in NAA is carried out by relative NAA or by the k0-based internal monostandard NAA (IM-NAA) method. The IM-NAA method has been used for small and large sample analysis of clay potteries. An attempt was made to identify the uncertainty components in IM-NAA, and the uncertainty budget for La in both small and large size samples has been evaluated and compared. (author)

  13. Sample-based Attribute Selective AnDE for Large Data

    DEFF Research Database (Denmark)

    Chen, Shenglei; Martinez, Ana; Webb, Geoffrey

    2017-01-01

    More and more applications have come with large data sets in the past decade. However, existing algorithms cannot guarantee to scale well on large data. Averaged n-Dependence Estimators (AnDE) allows for flexible learning from out-of-core data, by varying the value of n (number of super parents). Henc

  14. Detection of large numbers of novel sequences in the metatranscriptomes of complex marine microbial communities.

    Science.gov (United States)

    Gilbert, Jack A; Field, Dawn; Huang, Ying; Edwards, Rob; Li, Weizhong; Gilna, Paul; Joint, Ian

    2008-08-22

    Sequencing the expressed genetic information of an ecosystem (metatranscriptome) can provide information about the response of organisms to varying environmental conditions. Until recently, metatranscriptomics has been limited to microarray technology and random cloning methodologies. The application of high-throughput sequencing technology is now enabling access to both known and previously unknown transcripts in natural communities. We present a study of a complex marine metatranscriptome obtained from random whole-community mRNA using the GS-FLX Pyrosequencing technology. Eight samples, four DNA and four mRNA, were processed from two time points in a controlled coastal ocean mesocosm study (Bergen, Norway) involving an induced phytoplankton bloom producing a total of 323,161,989 base pairs. Our study confirms the finding of the first published metatranscriptomic studies of marine and soil environments that metatranscriptomics targets highly expressed sequences which are frequently novel. Our alternative methodology increases the range of experimental options available for conducting such studies and is characterized by an exceptional enrichment of mRNA (99.92%) versus ribosomal RNA. Analysis of corresponding metagenomes confirms much higher levels of assembly in the metatranscriptomic samples and a far higher yield of large gene families with >100 members, approximately 91% of which were novel. This study provides further evidence that metatranscriptomic studies of natural microbial communities are not only feasible, but when paired with metagenomic data sets, offer an unprecedented opportunity to explore both structure and function of microbial communities--if we can overcome the challenges of elucidating the functions of so many never-seen-before gene families.

  15. Space Situational Awareness of Large Numbers of Payloads From a Single Deployment

    Science.gov (United States)

    Segerman, A.; Byers, J.; Emmert, J.; Nicholas, A.

    2014-09-01

    The nearly simultaneous deployment of a large number of payloads from a single vehicle presents a new challenge for space object catalog maintenance and space situational awareness (SSA). Following two cubesat deployments last November, it took five weeks to catalog the resulting 64 orbits. The upcoming Kicksat mission will present an even greater SSA challenge, with its deployment of 128 chip-sized picosats. Although all of these deployments are in short-lived orbits, future deployments will inevitably occur at higher altitudes, with a longer term threat of collision with active spacecraft. With such deployments, individual scientific payload operators require rapid precise knowledge of their satellites' locations. Following the first November launch, the cataloguing did not initially associate a payload with each orbit, leaving this to the satellite operators. For short duration missions, the time required to identify an experiment's specific orbit may easily be a large fraction of the spacecraft's lifetime. For a Kicksat-type deployment, present tracking cannot collect enough observations to catalog each small object. The current approach is to treat the chip cloud as a single catalog object. However, the cloud dissipates into multiple subclouds and, ultimately, tiny groups of untrackable chips. One response to this challenge may be to mandate installation of a transponder on each spacecraft. Directional transponder transmission detections could be used as angle observations for orbit cataloguing. Of course, such an approach would only be employable with cooperative spacecraft. In other cases, a probabilistic association approach may be useful, with the goal being to establish the probability of an element being at a given point in space. This would permit more reliable assessment of the probability of collision of active spacecraft with any cloud element. This paper surveys the cataloguing challenges presented by large scale deployments of small spacecraft

  16. The development of neutron activation, sample transportation and γ-ray counting routine system for numbers of geological samples

    International Nuclear Information System (INIS)

    Shibata Shin-nosuke; Tanaka, Tsuyoshi; Minami, Masayo

    2001-01-01

    A new gamma-ray counting and data processing system for non-destructive neutron activation analysis has been set up at the Radioisotope Center of Nagoya University. The system carries out gamma-ray counting, sample changing and data processing automatically, and frees the operator from some of the complicated operations in INAA. In this study, we have arranged a simple analytical procedure that makes practical work easier than before. The concrete workflow is described, from the preparation of powdered rock samples to gamma-ray counting and data processing with the new INAA system. Analyses of two Geological Survey of Japan rock reference samples, JB-1a and JG-1a, were then carried out in order to evaluate the speed and accuracy the new analytical procedure offers for geological materials. Two United States Geological Survey reference samples, BCR-1 and G-2, were used as the standards. Twenty-two elements were analyzed for JB-1a and 25 elements for JG-1a; the uncertainties are <5% for Na, Sc, Fe, Co, La, Ce, Sm, Eu, Yb, Lu, Hf, Ta and Th, and <10% for Cr, Zn, Cs, Ba, Nd, Tb and U. This system will enable us to analyze more than 1500 geological samples per year. (author)

  17. Droplet Breakup in Asymmetric T-Junctions at Intermediate to Large Capillary Numbers

    Science.gov (United States)

    Sadr, Reza; Cheng, Way Lee

    2017-11-01

    Splitting a parent droplet into multiple daughter droplets of desired sizes is often required to enhance production and investigational efficiency in microfluidic devices. This can be done in an active or a passive mode, depending on whether an external power source is used. In this study, three-dimensional simulations were performed using the Volume-of-Fluid (VOF) method to analyze droplet splitting in asymmetric T-junctions with different outlet lengths. The parent droplet is divided into two uneven portions; in theory, the volumetric ratio of the daughter droplets depends on the length ratio of the outlet branches. The study identified various breakup modes, such as primary, transition, bubble and non-breakup, under various flow conditions and configurations of the T-junctions. In addition, an analysis of the primary breakup regimes was conducted to study the breakup mechanisms. The results show that the way a droplet splits in an asymmetric T-junction differs from the process in a symmetric T-junction. A model for the asymmetric breakup criteria at intermediate or large Capillary numbers is presented. The proposed model is an expanded version of a theoretically derived model for symmetric droplet breakup under similar flow conditions.

  18. Growth of equilibrium structures built from a large number of distinct component types.

    Science.gov (United States)

    Hedges, Lester O; Mannige, Ranjan V; Whitelam, Stephen

    2014-09-14

    We use simple analytic arguments and lattice-based computer simulations to study the growth of structures made from a large number of distinct component types. Components possess 'designed' interactions, chosen to stabilize an equilibrium target structure in which each component type has a defined spatial position, as well as 'undesigned' interactions that allow components to bind in a compositionally-disordered way. We find that high-fidelity growth of the equilibrium target structure can happen in the presence of substantial attractive undesigned interactions, as long as the energy scale of the set of designed interactions is chosen appropriately. This observation may help explain why equilibrium DNA 'brick' structures self-assemble even if undesigned interactions are not suppressed [Ke et al. Science, 338, 1177, (2012)]. We also find that high-fidelity growth of the target structure is most probable when designed interactions are drawn from a distribution that is as narrow as possible. We use this result to suggest how to choose complementary DNA sequences in order to maximize the fidelity of multicomponent self-assembly mediated by DNA. We also comment on the prospect of growing macroscopic structures in this manner.

  19. Source of vacuum electromagnetic zero-point energy and Dirac's large numbers hypothesis

    International Nuclear Information System (INIS)

    Simaciu, I.; Dumitrescu, G.

    1993-01-01

    Stochastic electrodynamics states that the zero-point fluctuation of the vacuum (ZPF) is an electromagnetic zero-point radiation with spectral density ρ(ω) = ℏω^3/(2π^2 c^3). Protons, free electrons and atoms are sources of this radiation; each of them absorbs and emits energy by interacting with the ZPF. At equilibrium, the ZPF radiation is scattered by dipoles, with scattered radiation spectral density ρ(ω, r) = ρ(ω)·c·σ(ω)/(4πr^2). The dipole-radiation spectral density of the Universe is ρ = ∫_0^R n ρ(ω, r) 4πr^2 dr; if the scattering cross-section equals the Thomson cross-section σ_T, then ρ ≈ ρ(ω)·σ_T·R·n. Moreover, if ρ = ρ(ω), then σ_T R n = 1. With R = GM/c^2 and σ_T ≅ (e^2/(m_e c^2))^2 ∝ r_e^2, the relation σ_T R n = 1 is equivalent to R/r_e = e^2/(G m_p m_e), i.e. the cosmological coincidence discussed in the context of Dirac's large-numbers hypothesis. (Author)

  20. Absolute activity determinations on large volume geological samples independent of self-absorption effects

    International Nuclear Information System (INIS)

    Wilson, W.E.

    1980-01-01

    This paper describes a method for measuring the absolute activity of large volume samples by γ-spectroscopy independent of self-absorption effects using Ge detectors. The method yields accurate matrix independent results at the expense of replicative counting of the unknown sample. (orig./HP)

  1. 40 CFR 761.283 - Determination of the number of samples to collect and sample collection locations.

    Science.gov (United States)

    2010-07-01

    ...) MANUFACTURING, PROCESSING, DISTRIBUTION IN COMMERCE, AND USE PROHIBITIONS Sampling To Verify Completion of Self... cleanup verification conducted in accordance with § 761.61(a)(6), follow the procedures in paragraph (b... verification conducted in accordance with § 761.61(a)(6), follow the procedures in this section for locating...

  2. Utilization of AHWR critical facility for research and development work on large sample NAA

    International Nuclear Information System (INIS)

    Acharya, R.; Dasari, K.B.; Pujari, P.K.; Swain, K.K.; Reddy, A.V.R.; Verma, S.K.; De, S.K.

    2014-01-01

    The graphite reflector position of the AHWR critical facility (CF) was utilized for the analysis of large (g-kg scale) samples using internal monostandard neutron activation analysis (IM-NAA). The reactor position was characterized by the cadmium ratio method using an In monitor for the total flux and the sub-cadmium to epithermal flux ratio (f). Large sample neutron activation analysis (LSNAA) work was carried out for samples of stainless steel, ancient and new clay potteries, and dross. Large as well as non-standard geometry samples (1 g - 0.5 kg) were irradiated. The radioactive assay was carried out using high-resolution gamma-ray spectrometry. Concentration ratios obtained by IM-NAA were used for a provenance study of 30 clay potteries obtained from excavated Buddhist sites of AP, India. Concentrations of Au and Ag were determined in three large, not-so-homogeneous samples of dross. An X-Z rotary scanning unit has been installed for counting large and not-so-homogeneous samples. (author)

  3. On the choice of the number of samples in laser Doppler anemometry signal processing

    Science.gov (United States)

    Dios, Federico; Comeron, Adolfo; Garcia-Vizcaino, David

    2001-05-01

    The minimum number of samples that must be taken from a sinusoidal signal affected by white Gaussian noise, in order to find its frequency with a predetermined maximum error, is derived. This analysis is of interest in evaluating the performance of velocity-measurement systems based on the Doppler effect. Specifically, in laser Doppler anemometry (LDA) it is usual to receive bursts with a poor signal-to-noise ratio, yet high accuracy is required for the measurement. In recent years special attention has been paid to the problem of monitoring the temporal evolution of turbulent flows. In this kind of situation, averaging or filtering the data sequences cannot be allowed: in a rapidly changing environment each one of the measurements should rather be performed within a maximum permissible error, and the bursts strongly affected by noise removed. The method for velocity extraction considered here is spectral analysis through the squared discrete Fourier transform, or periodogram, of the received bursts. This paper has two parts. In the first, an approximate expression for the error committed in LDA is derived and discussed. In the second, a mathematical formalism for the exact calculation of the error as a function of the signal-to-noise ratio is obtained, and some universal curves for the expected error are provided. The results presented here appear to represent a fundamental limitation on the accuracy of LDA measurements, yet, to our knowledge, they have not been reported in the literature so far.
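
    A sketch of the velocity-extraction step described above: the burst frequency is taken from the peak of the periodogram, and the attainable accuracy is tied to the number of samples through the bin width fs/n. The burst parameters below are illustrative, not the paper's settings.

        import numpy as np

        rng = np.random.default_rng(7)
        fs, n = 40e6, 256                     # sampling rate (Hz), samples per burst
        f_true = 3.2e6                        # "Doppler" frequency to recover
        t = np.arange(n) / fs
        x = np.sin(2 * np.pi * f_true * t) + rng.normal(0.0, 1.0, size=n)

        spec = np.abs(np.fft.rfft(x)) ** 2    # periodogram of the noisy burst
        freqs = np.fft.rfftfreq(n, d=1 / fs)
        f_est = freqs[1 + np.argmax(spec[1:])]        # skip the DC bin
        print(f_est, abs(f_est - f_true), fs / n)     # error vs. bin width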

  4. Validation Of Intermediate Large Sample Analysis (With Sizes Up to 100 G) and Associated Facility Improvement

    International Nuclear Information System (INIS)

    Bode, P.; Koster-Ammerlaan, M.J.J.

    2018-01-01

    Pragmatic rather than physical correction factors for neutron and gamma-ray shielding were studied for samples of intermediate size, i.e. up to the 10-100 gram range. It was found that for most biological and geological materials, the neutron self-shielding is less than 5% and the gamma-ray self-attenuation can easily be estimated. A trueness control material of 1 kg size was made from left-overs of materials used in laboratory intercomparisons. A design study for a large-sample pool-side facility handling plate-type volumes had to be stopped because of a reduction in the human resources available for this CRP. The large sample NAA facilities were made available to guest scientists from Greece and Brazil. The laboratory for neutron activation analysis participated in the world's first laboratory intercomparison utilizing large samples. (author)

  5. 105-DR Large Sodium Fire Facility decontamination, sampling, and analysis plan

    International Nuclear Information System (INIS)

    Knaus, Z.C.

    1995-01-01

    This is the decontamination, sampling, and analysis plan for the closure activities at the 105-DR Large Sodium Fire Facility at Hanford Reservation. This document supports the 105-DR Large Sodium Fire Facility Closure Plan, DOE-RL-90-25. The 105-DR LSFF, which operated from about 1972 to 1986, was a research laboratory that occupied the former ventilation supply room on the southwest side of the 105-DR Reactor facility in the 100-D Area of the Hanford Site. The LSFF was established to investigate fire fighting and safety associated with alkali metal fires in the liquid metal fast breeder reactor facilities. The decontamination, sampling, and analysis plan identifies the decontamination procedures, sampling locations, any special handling requirements, quality control samples, required chemical analysis, and data validation needed to meet the requirements of the 105-DR Large Sodium Fire Facility Closure Plan in compliance with the Resource Conservation and Recovery Act

  6. On the chromatic number of triangle-free graphs of large minimum degree

    DEFF Research Database (Denmark)

    Thomassen, Carsten

    2002-01-01

    We prove that, for each fixed real number c > 1/3, the triangle-free graphs of minimum degree at least cn (where n is the number of vertices) have bounded chromatic number. This problem was raised by Erdos and Simonovits in 1973, who pointed out that there is no such result for c < 1/3.

  7. Evaluation of environmental sampling methods for detection of Salmonella enterica in a large animal veterinary hospital.

    Science.gov (United States)

    Goeman, Valerie R; Tinkler, Stacy H; Hammac, G Kenitra; Ruple, Audrey

    2018-04-01

    Environmental surveillance for Salmonella enterica can be used for early detection of contamination; thus routine sampling is an integral component of infection control programs in hospital environments. At the Purdue University Veterinary Teaching Hospital (PUVTH), the technique regularly employed in the large animal hospital for sample collection uses sterile gauze sponges for environmental sampling, which has proven labor-intensive and time-consuming. Alternative sampling methods use Swiffer brand electrostatic wipes for environmental sample collection, which are reportedly effective and efficient. It was hypothesized that use of Swiffer wipes for sample collection would be more efficient and less costly than the use of gauze sponges. A head-to-head comparison between the 2 sampling methods was conducted in the PUVTH large animal hospital and relative agreement, cost-effectiveness, and sampling efficiency were compared. There was fair agreement in culture results between the 2 sampling methods, but Swiffer wipes required less time and less physical effort to collect samples and were more cost-effective.

  8. An open-flow pulse ionization chamber for alpha spectrometry of large-area samples

    International Nuclear Information System (INIS)

    Johansson, L.; Roos, B.; Samuelsson, C.

    1992-01-01

    The presented open-flow pulse ionization chamber was developed to make alpha spectrometry on large-area surfaces easy. One side of the chamber is left open, where the sample is to be placed. The sample acts as a chamber wall and thereby defines the detector volume. The sample area can be as large as 400 cm^2. To prevent air from entering the volume, there is a constant gas flow through the detector, coming in at the bottom of the chamber and leaking out at the sides of the sample. The method results in good energy resolution and has considerable applicability in retrospective radon research. The alpha spectra obtained in the retrospective measurements originate from 210Po, built up in the sample from the radon daughters recoiled into the glass surface. (au)

  9. On the chromatic number of pentagon-free graphs of large minimum degree

    DEFF Research Database (Denmark)

    Thomassen, Carsten

    2007-01-01

    We prove that, for each fixed real number c > 0, the pentagon-free graphs of minimum degree at least cn (where n is the number of vertices) have bounded chromatic number. This problem was raised by Erdős and Simonovits in 1973. A similar result holds for any other fixed odd cycle, except the tria...

  10. Evaluation of Inflammatory Markers in a Large Sample of Obstructive Sleep Apnea Patients without Comorbidities

    Directory of Open Access Journals (Sweden)

    Izolde Bouloukaki

    2017-01-01

    Systemic inflammation is important in obstructive sleep apnea (OSA) pathophysiology and its comorbidity. We aimed to assess the levels of inflammatory biomarkers in a large sample of OSA patients and to investigate any correlation between these biomarkers and clinical and polysomnographic (PSG) parameters. This was a cross-sectional study in which 2983 patients who had undergone polysomnography for OSA diagnosis were recruited. Patients with known comorbidities were excluded. Included patients (n=1053) were grouped according to the apnea-hypopnea index (AHI) as mild, moderate, and severe. Patients with AHI < 5 served as controls. Demographics, PSG data, and levels of high-sensitivity C-reactive protein (hs-CRP), fibrinogen, erythrocyte sedimentation rate (ESR), and uric acid (UA) were measured and compared between groups. A significant difference was found between groups in hs-CRP, fibrinogen, and UA. All biomarkers were independently associated with OSA severity and gender (p<0.05). Females had increased levels of hs-CRP, fibrinogen, and ESR (p<0.001) compared to men. In contrast, UA levels were higher in men (p<0.001). Our results suggest that inflammatory markers increase significantly in patients with OSA without known comorbidities and correlate with OSA severity. These findings may have important implications regarding OSA diagnosis, monitoring, treatment, and prognosis. This trial is registered with ClinicalTrials.gov number NCT03070769.

  11. Exploration of large, rare copy number variants associated with psychiatric and neurodevelopmental disorders in individuals with anorexia nervosa.

    Science.gov (United States)

    Yilmaz, Zeynep; Szatkiewicz, Jin P; Crowley, James J; Ancalade, NaEshia; Brandys, Marek K; van Elburg, Annemarie; de Kovel, Carolien G F; Adan, Roger A H; Hinney, Anke; Hebebrand, Johannes; Gratacos, Monica; Fernandez-Aranda, Fernando; Escaramis, Georgia; Gonzalez, Juan R; Estivill, Xavier; Zeggini, Eleftheria; Sullivan, Patrick F; Bulik, Cynthia M

    2017-08-01

    Anorexia nervosa (AN) is a serious and heritable psychiatric disorder. To date, studies of copy number variants (CNVs) have been limited and inconclusive because of small sample sizes. We conducted a case-only genome-wide CNV survey in 1983 female AN cases included in the Genetic Consortium for Anorexia Nervosa. Following stringent quality control procedures, we investigated whether pathogenic CNVs in regions previously implicated in psychiatric and neurodevelopmental disorders were present in AN cases. We observed two instances of the well-established pathogenic CNVs in AN cases. In addition, one case had a deletion in the 13q12 region, overlapping with a deletion reported previously in two AN cases. As a secondary aim, we also examined our sample for CNVs over 1 Mbp in size. Out of the 40 instances of such large CNVs that were not implicated previously for AN or neuropsychiatric phenotypes, two of them contained genes with previous neuropsychiatric associations, and only five of them had no associated reports in public CNV databases. Although ours is the largest study of its kind in AN, larger datasets are needed to comprehensively assess the role of CNVs in the etiology of AN.

  12. On the Behavior of ECN/RED Gateways Under a Large Number of TCP Flows: Limit Theorems

    National Research Council Canada - National Science Library

    Tinnakornsrisuphap, Peerapol; Makowski, Armand M

    2005-01-01

    .... As the number of competing flows becomes large, the asymptotic queue behavior at the gateway can be described by a simple recursion and the throughput behavior of individual TCP flows becomes asymptotically independent...

  13. Relationship of fish indices with sampling effort and land use change in a large Mediterranean river.

    Science.gov (United States)

    Almeida, David; Alcaraz-Hernández, Juan Diego; Merciai, Roberto; Benejam, Lluís; García-Berthou, Emili

    2017-12-15

    Fish are invaluable ecological indicators in freshwater ecosystems but have been less used for ecological assessments in large Mediterranean rivers. We evaluated the effects of sampling effort (transect length) on fish metrics, such as species richness and two fish indices (the new European Fish Index EFI+ and a regional index, IBICAT2b), in the mainstem of a large Mediterranean river. For this purpose, we sampled by boat electrofishing five sites each with 10 consecutive transects corresponding to a total length of 20 times the river width (European standard required by the Water Framework Directive) and we also analysed the effect of sampling area on previous surveys. Species accumulation curves and richness extrapolation estimates in general suggested that species richness was reasonably estimated with transect lengths of 10 times the river width or less. The EFI+ index was significantly affected by sampling area, both for our samplings and previous data. Surprisingly, EFI+ values in general decreased with increasing sampling area, despite the higher observed richness, likely because the expected values of metrics were higher. By contrast, the regional fish index was not dependent on sampling area, likely because it does not use a predictive model. Both fish indices, but particularly the EFI+, decreased with less forest cover percentage, even within the smaller disturbance gradient in the river type studied (mainstem of a large Mediterranean river, where environmental pressures are more general). Although the two fish-based indices are very different in terms of their development, methodology, and metrics used, they were significantly correlated and provided a similar assessment of ecological status. Our results reinforce the importance of standardization of sampling methods for bioassessment and suggest that predictive models that use sampling area as a predictor might be more affected by differences in sampling effort than simpler biotic indices. Copyright

  14. Rapid separation method for 237Np and Pu isotopes in large soil samples

    Energy Technology Data Exchange (ETDEWEB)

    Maxwell, Sherrod L., E-mail: sherrod.maxwell@srs.go [Savannah River Nuclear Solutions, LLC, Building 735-B, Aiken, SC 29808 (United States); Culligan, Brian K.; Noyes, Gary W. [Savannah River Nuclear Solutions, LLC, Building 735-B, Aiken, SC 29808 (United States)

    2011-07-15

    A new rapid method for the determination of 237Np and Pu isotopes in soil and sediment samples has been developed at the Savannah River Site Environmental Lab (Aiken, SC, USA) that can be used for large soil samples. The new soil method utilizes an acid leaching method, iron/titanium hydroxide precipitation, a lanthanum fluoride soil matrix removal step, and a rapid column separation process with TEVA Resin. The large soil matrix is removed easily and rapidly using these two simple precipitations with high chemical recoveries and effective removal of interferences. Vacuum box technology and rapid flow rates are used to reduce analytical time.

  15. The problem of large samples. An activation analysis study of electronic waste material

    International Nuclear Information System (INIS)

    Segebade, C.; Goerner, W.; Bode, P.

    2007-01-01

    Large-volume instrumental photon activation analysis (IPAA) was used for the investigation of shredded electronic waste material. Sample masses from 1 to 150 grams were analyzed to obtain an estimate of the minimum sample size to be taken to achieve a representativeness of the results that is satisfactory for a defined investigation task. Furthermore, the influence of irradiation and measurement parameters upon the quality of the analytical results was studied. Finally, the analytical data obtained from IPAA and from instrumental neutron activation analysis (INAA), both carried out in a large-volume mode, were compared; only part of the values were found to be in satisfactory agreement. (author)

  16. Spatio-temporal foreshock activity during stick-slip experiments of large rock samples

    Science.gov (United States)

    Tsujimura, Y.; Kawakata, H.; Fukuyama, E.; Yamashita, F.; Xu, S.; Mizoguchi, K.; Takizawa, S.; Hirano, S.

    2016-12-01

    Foreshock activity has sometimes been reported for large earthquakes and has been roughly classified into the following two classes. For shallow intraplate earthquakes, foreshocks occurred in the vicinity of the mainshock hypocenter (e.g., Doi and Kawakata, 2012; 2013), while for interplate subduction earthquakes, foreshock hypocenters migrated toward the mainshock hypocenter (Kato et al., 2012; Yagi et al., 2014). To understand how foreshocks occur, it is useful to investigate the spatio-temporal activity of foreshocks in laboratory experiments under controlled conditions. We have conducted stick-slip experiments using a large-scale biaxial friction apparatus at NIED in Japan (e.g., Fukuyama et al., 2014). Our previous results showed that stick-slip events repeatedly occurred in a run, but only the later events were preceded by foreshocks. Kawakata et al. (2014) inferred that the gouge generated during the run was an important key for foreshock occurrence. In this study, after some runs were performed to generate fault gouge on the interface, we carried out stick-slip experiments on large rock samples whose interface (fault plane) is 1.5 meters long and 0.5 meters wide, and investigated the spatio-temporal activity of foreshocks. We detected foreshocks from the waveform records of a 3D array of piezo-electric sensors. Our new results showed that more than three foreshocks (typically about twenty) occurred during each stick-slip event, in contrast to the few foreshocks observed during previous experiments without pre-existing gouge. Next, we estimated the hypocenter locations of the stick-slip events and found that they were located near the end opposite to the loading point. In addition, we observed a migration of foreshock hypocenters toward the hypocenter of each stick-slip event. This suggests that the foreshock activity observed in our current experiments was similar to that for the interplate earthquakes in terms of the

  17. Procedure for plutonium analysis of large (100g) soil and sediment samples

    International Nuclear Information System (INIS)

    Meadows, J.W.T.; Schweiger, J.S.; Mendoza, B.; Stone, R.

    1975-01-01

    A method for the complete dissolution of large soil or sediment samples is described. This method is in routine usage at Lawrence Livermore Laboratory for the analysis of fall-out levels of Pu in soils and sediments. Intercomparison with partial dissolution (leach) techniques shows the complete dissolution method to be superior for the determination of plutonium in a wide variety of environmental samples. (author)

  18. Numerical analysis of jet impingement heat transfer at high jet Reynolds number and large temperature difference

    DEFF Research Database (Denmark)

    Jensen, Michael Vincent; Walther, Jens Honore

    2013-01-01

    … was investigated at a jet Reynolds number of 1.66 × 10⁵ and a temperature difference between jet inlet and wall of 1600 K. The focus was on the convective heat transfer contribution, as thermal radiation was not included in the investigation. A considerable influence of the turbulence intensity at the jet inlet … to about 100% were observed. Furthermore, the variation in stagnation point heat transfer was examined for jet Reynolds numbers in the range from 1.10 × 10⁵ to 6.64 × 10⁵. Based on the investigations, a correlation is suggested between the stagnation point Nusselt number, the jet Reynolds number, and the turbulence intensity at the jet inlet for impinging jet flows at high jet Reynolds numbers. Copyright © 2013 Taylor and Francis Group, LLC.

  19. Multilevel systematic sampling to estimate total fruit number for yield forecasts

    DEFF Research Database (Denmark)

    Wulfsohn, Dvora-Laio; Zamora, Felipe Aravena; Tellez, Camilla Potin

    2012-01-01

    … procedure for unbiased estimation of fruit number for yield forecasts. In the spring of 2009 we estimated the total number of fruit in several rows of each of 14 commercial fruit orchards growing apple (11 groves), kiwifruit (two groves), and table grapes (one grove) in central Chile. Survey times were 10…
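
    A single-level version of such a systematic estimator is easy to sketch: count fruit on every k-th tree from a random start and scale by k. This is a hedged illustration of the general idea, not the paper's multilevel design; all names and numbers are invented:

```python
import random

def systematic_estimate(count_tree, n_trees, k, rng=random.Random(1)):
    """Estimate total fruit: count every k-th tree from a random start,
    then scale by the sampling interval k (unbiased for a random start
    when n_trees is a multiple of k)."""
    start = rng.randrange(k)
    sampled = [count_tree(i) for i in range(start, n_trees, k)]
    return k * sum(sampled)

# Hypothetical orchard of 1000 trees with 100-200 fruit per tree.
rng = random.Random(42)
true_counts = [rng.randint(100, 200) for _ in range(1000)]
estimate = systematic_estimate(lambda i: true_counts[i], len(true_counts), k=20)
print(estimate, sum(true_counts))  # estimate vs. true total
```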

  20. Arbitrarily large numbers of kink internal modes in inhomogeneous sine-Gordon equations

    Energy Technology Data Exchange (ETDEWEB)

    González, J.A., E-mail: jalbertgonz@yahoo.es [Department of Physics, Florida International University, Miami, FL 33199 (United States); Department of Natural Sciences, Miami Dade College, 627 SW 27th Ave., Miami, FL 33135 (United States); Bellorín, A., E-mail: alberto.bellorin@ucv.ve [Escuela de Física, Facultad de Ciencias, Universidad Central de Venezuela, Apartado Postal 47586, Caracas 1041-A (Venezuela, Bolivarian Republic of); García-Ñustes, M.A., E-mail: monica.garcia@pucv.cl [Instituto de Física, Pontificia Universidad Católica de Valparaíso, Casilla 4059 (Chile); Guerrero, L.E., E-mail: lguerre@usb.ve [Departamento de Física, Universidad Simón Bolívar, Apartado Postal 89000, Caracas 1080-A (Venezuela, Bolivarian Republic of); Jiménez, S., E-mail: s.jimenez@upm.es [Departamento de Matemática Aplicada a las TT.II., E.T.S.I. Telecomunicación, Universidad Politécnica de Madrid, 28040-Madrid (Spain); Vázquez, L., E-mail: lvazquez@fdi.ucm.es [Departamento de Matemática Aplicada, Facultad de Informática, Universidad Complutense de Madrid, 28040-Madrid (Spain)

    2017-06-28

    We prove analytically the existence of an infinite number of internal (shape) modes of sine-Gordon solitons in the presence of some inhomogeneous long-range forces, provided some conditions are satisfied. - Highlights: • We have found exact kink solutions to the perturbed sine-Gordon equation. • We have been able to study analytically the kink stability problem. • A kink equilibrated by an exponentially-localized perturbation has a finite number of oscillation modes. • A sufficiently broad equilibrating perturbation supports an infinite number of soliton internal modes.
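
    For orientation (a generic statement of the setting, with F(x) standing in for the inhomogeneous long-range force; the paper's exact forcing terms are in the text): a perturbed sine-Gordon equation and the eigenproblem governing kink internal modes read

\[
u_{tt} - u_{xx} + \sin u = F(x), \qquad u = u_K(x) + \psi(x)\,e^{i\omega t},
\]
\[
-\psi''(x) + \cos\!\big(u_K(x)\big)\,\psi(x) = \omega^2\,\psi(x),
\]

    where $u_K$ is the equilibrated kink; discrete eigenvalues $\omega^2$ below the continuum band are the internal (shape) modes, whose number the paper shows can be made infinite by a sufficiently broad equilibrating perturbation.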

  1. Large-eddy simulation of flow over a grooved cylinder up to transcritical Reynolds numbers

    KAUST Repository

    Cheng, W.

    2017-11-27

    We report wall-resolved large-eddy simulation (LES) of flow over a grooved cylinder up to the transcritical regime. The stretched-vortex subgrid-scale model is embedded in a general fourth-order finite-difference code discretization on a curvilinear mesh. In the present study grooves are equally distributed around the circumference of the cylinder, each of sinusoidal shape with a given height, invariant in the spanwise direction. Based on two parameters, the groove height (scaled by the cylinder diameter) and the Reynolds number Re = U∞D/ν, where U∞ is the free-stream velocity, D the diameter of the cylinder and ν the kinematic viscosity, two main sets of simulations are described. The first set varies the scaled groove height while fixing the Reynolds number. We study the flow deviation from the smooth-cylinder case, with emphasis on several important statistics such as the length of the mean-flow recirculation bubble, the pressure coefficient, the skin-friction coefficient and the non-dimensional pressure gradient parameter. It is found that, with increasing groove height at fixed Reynolds number, some properties of the mean flow behave somewhat similarly to changes in the smooth-cylinder flow when the Reynolds number is increased. This includes a shrinking recirculation bubble and a nearly constant minimum pressure coefficient. In contrast, while the non-dimensional pressure gradient parameter remains nearly constant for the front part of the smooth-cylinder flow, it shows an oscillatory variation for the grooved-cylinder case. The second main set of LES varies the Reynolds number at fixed groove height. It is found that this range spans the subcritical and supercritical regimes and reaches the beginning of the transcritical flow regime. Mean-flow properties are diagnosed and compared with available experimental data, including the pressure coefficient and the drag coefficient. The timewise variation of the lift and drag coefficients is also studied to elucidate the transition among the three regimes. Instantaneous images of the surface skin-friction vector field and of the three-dimensional Q-criterion field are utilized to further understand the dynamics of the near-surface flow.

  2. Imaging a Large Sample with Selective Plane Illumination Microscopy Based on Multiple Fluorescent Microsphere Tracking

    Science.gov (United States)

    Ryu, Inkeon; Kim, Daekeun

    2018-04-01

    A typical selective plane illumination microscopy (SPIM) image size is basically limited by the field of view, which is a characteristic of the objective lens. If an image larger than the imaging area of the sample is to be obtained, image stitching, which combines step-scanned images into a single panoramic image, is required. However, accurately registering the step-scanned images is very difficult because the SPIM system uses a customized sample mount where uncertainties for the translational and the rotational motions exist. In this paper, an image registration technique based on multiple fluorescent microsphere tracking is proposed, which quantifies the constellations of, and measures the distances between, at least two fluorescent microspheres embedded in the sample. Image stitching results are demonstrated for optically cleared large tissue with various staining methods. Compensation for the effect of the sample rotation that occurs during the translational motion in the sample mount is also discussed.
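
    Estimating the tile-to-tile transform from matched microsphere positions is a least-squares rigid-fit problem. A minimal sketch via the Kabsch algorithm (the bead coordinates are invented; the paper's registration may differ in detail):

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares rotation R and translation t with R @ A[i] + t ~= B[i]
    (Kabsch algorithm), from matched bead centroids in two image tiles."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (A.shape[1] - 1) + [d])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cB - R @ cA
    return R, t

# Hypothetical bead positions in two overlapping 2D tiles.
A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
theta = np.deg2rad(3.0)  # small unknown rotation of the mount
Rtrue = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
B = A @ Rtrue.T + np.array([5.0, -1.0])
R, t = rigid_transform(A, B)
print(np.allclose(R @ A.T + t[:, None], B.T))  # True
```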

  3. On the accuracy of protein determination in large biological samples by prompt gamma neutron activation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kasviki, K. [Institute of Nuclear Technology and Radiation Protection, NCSR ' Demokritos' , Aghia Paraskevi, Attikis 15310 (Greece); Medical Physics Laboratory, Medical School, University of Ioannina, Ioannina 45110 (Greece); Stamatelatos, I.E. [Institute of Nuclear Technology and Radiation Protection, NCSR ' Demokritos' , Aghia Paraskevi, Attikis 15310 (Greece)], E-mail: ion@ipta.demokritos.gr; Yannakopoulou, E. [Institute of Physical Chemistry, NCSR ' Demokritos' , Aghia Paraskevi, Attikis 15310 (Greece); Papadopoulou, P. [Institute of Technology of Agricultural Products, NAGREF, Lycovrissi, Attikis 14123 (Greece); Kalef-Ezra, J. [Medical Physics Laboratory, Medical School, University of Ioannina, Ioannina 45110 (Greece)

    2007-10-15

    A prompt gamma neutron activation analysis (PGNAA) facility has been developed for the determination of nitrogen, and thus total protein, in large volume biological samples or the whole body of small animals. In the present work, the accuracy of nitrogen determination by PGNAA in phantoms of known composition, as well as in four raw ground meat samples of about 1 kg mass, was examined. Dumas combustion and Kjeldahl techniques were also used for the assessment of nitrogen concentration in the meat samples. No statistically significant differences were found between the concentrations assessed by the three techniques. The results of this work demonstrate the applicability of PGNAA for the assessment of total protein in biological samples of 0.25-1.5 kg mass, such as a meat sample or the body of a small animal, even in vivo, with an equivalent radiation dose of about 40 mSv.

  4. On the accuracy of protein determination in large biological samples by prompt gamma neutron activation analysis

    International Nuclear Information System (INIS)

    Kasviki, K.; Stamatelatos, I.E.; Yannakopoulou, E.; Papadopoulou, P.; Kalef-Ezra, J.

    2007-01-01

    A prompt gamma neutron activation analysis (PGNAA) facility has been developed for the determination of nitrogen, and thus total protein, in large volume biological samples or the whole body of small animals. In the present work, the accuracy of nitrogen determination by PGNAA in phantoms of known composition, as well as in four raw ground meat samples of about 1 kg mass, was examined. Dumas combustion and Kjeldahl techniques were also used for the assessment of nitrogen concentration in the meat samples. No statistically significant differences were found between the concentrations assessed by the three techniques. The results of this work demonstrate the applicability of PGNAA for the assessment of total protein in biological samples of 0.25-1.5 kg mass, such as a meat sample or the body of a small animal, even in vivo, with an equivalent radiation dose of about 40 mSv.

  5. Determination of 129I in large soil samples after alkaline wet disintegration

    International Nuclear Information System (INIS)

    Bunzl, K.; Kracke, W.

    1992-01-01

    Large soil samples (up to 500 g) can conveniently be disintegrated by hydrogen peroxide in a utility tank under alkaline conditions in order to subsequently determine 129I by neutron activation analysis. Interfering elements such as Br are removed before neutron irradiation to reduce the radiation exposure of the personnel. The precision of the method is …; the results agreed with those obtained for 129I by the combustion method. (orig.)

  6. 17 CFR Appendix B to Part 420 - Sample Large Position Report

    Science.gov (United States)

    2010-04-01

    17 CFR Appendix B to Part 420: Sample Large Position Report (Commodity and Securities Exchanges; Department of the Treasury regulations under …). The sample form includes dollar entries for positions held, including amounts held as collateral for financial derivatives and other securities transactions, together with a total and memorandum entries.

  7. Fast sampling from a Hidden Markov Model posterior for large data

    DEFF Research Database (Denmark)

    Bonnevie, Rasmus; Hansen, Lars Kai

    2014-01-01

    Hidden Markov Models are of interest in a broad set of applications including modern data driven systems involving very large data sets. However, approximate inference methods based on Bayesian averaging are precluded in such applications as each sampling step requires a full sweep over the data...
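
    For orientation, the exact sampler whose cost the abstract alludes to is forward-filter backward-sampling (FFBS): the forward pass is the full O(TK²) sweep over the data required for each posterior sample. A minimal sketch with invented toy inputs (not the paper's approximate method):

```python
import numpy as np

def ffbs(obs_lik, T_mat, pi0, rng=np.random.default_rng(0)):
    """Exact forward-filter backward-sample of an HMM state path.
    obs_lik[t, k] = p(y_t | z_t = k)."""
    T, K = obs_lik.shape
    alpha = np.zeros((T, K))
    alpha[0] = pi0 * obs_lik[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):                       # full sweep over the data
        alpha[t] = (alpha[t - 1] @ T_mat) * obs_lik[t]
        alpha[t] /= alpha[t].sum()
    z = np.zeros(T, dtype=int)
    z[-1] = rng.choice(K, p=alpha[-1])
    for t in range(T - 2, -1, -1):              # backward sampling pass
        w = alpha[t] * T_mat[:, z[t + 1]]
        z[t] = rng.choice(K, p=w / w.sum())
    return z

# Tiny demo: 2 hidden states, 100 observations with random likelihoods.
rng = np.random.default_rng(1)
T_mat = np.array([[0.95, 0.05], [0.10, 0.90]])
obs_lik = rng.random((100, 2)) + 0.1
print(ffbs(obs_lik, T_mat, np.array([0.5, 0.5]))[:10])
```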

  8. Investigating sex differences in psychological predictors of snack intake among a large representative sample

    NARCIS (Netherlands)

    Adriaanse, M.A.; Evers, C.; Verhoeven, A.A.C.; de Ridder, D.T.D.

    It is often assumed that there are substantial sex differences in eating behaviour (e.g. women are more likely to be dieters or emotional eaters than men). The present study investigates this assumption in a large representative community sample while incorporating a comprehensive set of

  9. Software engineering the mixed model for genome-wide association studies on large samples

    Science.gov (United States)

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample siz...

  10. Is Business Failure Due to Lack of Effort? Empirical Evidence from a Large Administrative Sample

    NARCIS (Netherlands)

    Ejrnaes, M.; Hochguertel, S.

    2013-01-01

    Does insurance provision reduce entrepreneurs' effort to avoid business failure? We exploit unique features of the voluntary Danish unemployment insurance (UI) scheme, that is available to the self-employed. Using a large sample of self-employed individuals, we estimate the causal effect of

  11. Determinants of salivary evening alpha-amylase in a large sample free of psychopathology

    NARCIS (Netherlands)

    Veen, Gerthe; Giltay, Erik J.; Vreeburg, Sophie A.; Licht, Carmilla M. M.; Cobbaert, Christa M.; Zitman, Frans G.; Penninx, Brenda W. J. H.

    Objective: Recently, salivary alpha-amylase (sAA) has been proposed as a suitable index for sympathetic activity and dysregulation of the autonomic nervous system (ANS). Although determinants of sAA have been described, they have not been studied within the same study with a large sample size

  12. Psychometric Properties of the Penn State Worry Questionnaire for Children in a Large Clinical Sample

    Science.gov (United States)

    Pestle, Sarah L.; Chorpita, Bruce F.; Schiffman, Jason

    2008-01-01

    The Penn State Worry Questionnaire for Children (PSWQ-C; Chorpita, Tracey, Brown, Collica, & Barlow, 1997) is a 14-item self-report measure of worry in children and adolescents. Although the PSWQ-C has demonstrated favorable psychometric properties in small clinical and large community samples, this study represents the first psychometric…

  13. Feasibility studies on large sample neutron activation analysis using a low power research reactor

    International Nuclear Information System (INIS)

    Gyampo, O.

    2008-06-01

    Instrumental neutron activation analysis (INAA) using the Ghana Research Reactor-1 (GHARR-1) can be directly applied to samples with masses in grams. Sample weights were in the range of 0.5 g to 5 g; the representativeness of the sample is thus improved, as well as the sensitivity. Irradiation of the samples was done using the low power research reactor. The correction for neutron self-shielding within the sample is determined from measurement of the neutron flux depression just outside the sample. Correction for gamma-ray self-attenuation in the sample was performed via linear attenuation coefficients derived from transmission measurements. Quantitative and qualitative analysis of the data was done using gamma-ray spectrometry (HPGe detector). The results of this study on the possibilities of large sample NAA using a miniature neutron source reactor (MNSR) show clearly that the Ghana Research Reactor-1 (GHARR-1) at the National Nuclear Research Institute (NNRI) can be used for analyses of samples of up to 5 g using the pneumatic transfer systems.
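
    The transmission-based self-attenuation correction follows from I = I₀·exp(−μt). A minimal sketch assuming a uniform slab sample measured face-on (function names and numbers are illustrative assumptions, not the study's code):

```python
import math

def mu_from_transmission(I, I0, thickness_cm):
    """Linear attenuation coefficient from a transmission measurement
    through the sample: I = I0 * exp(-mu * t)."""
    return -math.log(I / I0) / thickness_cm

def slab_self_attenuation(mu, thickness_cm):
    """Average attenuation factor for gammas emitted uniformly in a slab
    viewed face-on: F = (1 - exp(-mu t)) / (mu t)."""
    x = mu * thickness_cm
    return (1.0 - math.exp(-x)) / x

# Hypothetical numbers: 60% transmission through a 2 cm thick sample.
mu = mu_from_transmission(I=0.60, I0=1.0, thickness_cm=2.0)
F = slab_self_attenuation(mu, 2.0)
corrected_counts = 5000 / F  # measured full-energy peak counts divided by F
print(round(mu, 3), round(F, 3), round(corrected_counts))
```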

  14. Large Eddy Simulation of an SD7003 Airfoil: Effects of Reynolds number and Subgrid-scale modeling

    DEFF Research Database (Denmark)

    Sarlak Chivaee, Hamid

    2017-01-01

    This paper presents results of a series of numerical simulations in order to study the aerodynamic characteristics of the low Reynolds number Selig-Donovan airfoil, SD7003. The Large Eddy Simulation (LES) technique is used for all computations at chord-based Reynolds numbers of 10,000, 24,000 and 60,000. … the Reynolds number, and the effect is visible even at a relatively low chord-Reynolds number of 60,000. Among the tested models, the dynamic Smagorinsky gives the poorest predictions of the flow, with overprediction of lift and a larger separation on the airfoil's suction side. Among various models, the implicit…

  15. A large-scale survey of genetic copy number variations among Han Chinese residing in Taiwan

    Directory of Open Access Journals (Sweden)

    Wu Jer-Yuarn

    2008-12-01

    Abstract. Background: Copy number variations (CNVs) have recently been recognized as important structural variations in the human genome. CNVs can affect gene expression and thus may contribute to phenotypic differences. The copy number inferring tool (CNIT) is an effective hidden Markov model-based algorithm for estimating allele-specific copy number and predicting chromosomal alterations from single nucleotide polymorphism microarrays. The CNIT algorithm, which was constructed using data from 270 HapMap multi-ethnic individuals, was applied to identify CNVs from 300 unrelated Han Chinese individuals in Taiwan. Results: Using stringent selection criteria, 230 regions with variable copy numbers were identified in the Han Chinese population; 133 (57.83%) had been reported previously, and 64 displayed a CNV allele frequency greater than 1%. The average size of the CNV regions was 322 kb (ranging from 1.48 kb to 5.68 Mb), covering a total of 2.47% of the human genome. A total of 196 of the CNV regions were simple deletions and 27 were simple amplifications. There were 449 genes and 5 microRNAs within these CNV regions; some of these genes are known to be associated with diseases. Conclusion: The identified CNVs are characteristic of the Han Chinese population and should be considered when genetic studies are conducted. The CNV distribution in the human genome is still poorly characterized, and there is much diversity among different ethnic populations.

  16. Fast concentration of dissolved forms of cesium radioisotopes from large seawater samples

    International Nuclear Information System (INIS)

    Jan Kamenik; Henrieta Dulaiova; Ferdinand Sebesta; Kamila St'astna; Czech Technical University, Prague

    2013-01-01

    The method developed for cesium concentration from large freshwater samples was tested and adapted for the analysis of cesium radionuclides in seawater. Concentration of dissolved forms of cesium in large seawater samples (about 100 L) was performed using the composite absorbers AMP-PAN and KNiFC-PAN, with ammonium molybdophosphate and potassium-nickel hexacyanoferrate(II) as the active components, respectively, and polyacrylonitrile as the binding polymer. A specially designed chromatography column with a bed volume (BV) of 25 mL allowed fast flow rates of seawater (up to 1,200 BV h⁻¹). The recovery yields were determined by ICP-MS analysis of stable cesium added to the seawater sample. Both absorbers proved usable for cesium concentration from large seawater samples. The KNiFC-PAN material was slightly more effective in concentrating cesium from acidified seawater (recovery yield around 93% at 700 BV h⁻¹), and it showed similar efficiency for natural seawater. The activity concentrations of 137Cs determined in seawater from the central Pacific Ocean were 1.5 ± 0.1 and 1.4 ± 0.1 Bq m⁻³ for an offshore (January 2012) and a coastal (February 2012) locality, respectively; 134Cs activities were below the detection limit (… Bq m⁻³). (author)

  17. Development of Large Sample Neutron Activation Technique for New Applications in Thailand

    International Nuclear Information System (INIS)

    Laoharojanaphand, S.; Tippayakul, C.; Wonglee, S.; Channuie, J.

    2018-01-01

    The development of Large Sample Neutron Activation Analysis (LSNAA) in Thailand is presented in this paper. The technique was first developed with rice as the test sample. The Thai Research Reactor-1/Modification 1 (TRR-1/M1) was used as the neutron source. The first step was to select and characterize an appropriate irradiation facility for the research. An out-core irradiation facility (A4 position) was attempted first. The results obtained with the A4 facility were then used as guides for the subsequent experiments with the thermal column facility. The characterization of the thermal column was performed with Cu wire to determine the spatial flux distribution with and without the rice sample. The flux depression without the rice sample was observed to be less than 30%, while the flux depression with the rice sample increased to as much as 60%. Flux monitors internal to the rice sample were used to determine the average flux over the sample. The gamma self-shielding effect during gamma measurement was corrected using Monte Carlo simulation: the ratio between the efficiencies of the volume source and the point source for each energy point was calculated with the MCNPX code. The research team adopted the k0-NAA methodology to calculate the element concentrations. The k0-NAA program, developed by the IAEA, was set up to simulate the conditions of the irradiation and measurement facilities used in this research. The element concentrations in the bulk rice sample were then calculated taking into account the flux depression and gamma efficiency corrections. At the moment, the results still show large discrepancies with the reference values; more research on the validation will be performed to identify the sources of error. Moreover, this LSNAA technique was introduced for the activation analysis of the IAEA archaeological mock-up. The results are provided in this report. (author)
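
    The two corrections described, flux depression inside the sample and the volume-to-point efficiency ratio from Monte Carlo, enter the final concentration multiplicatively. A minimal sketch with hypothetical numbers (not the k0-NAA program's actual interface):

```python
def corrected_concentration(c_raw, flux_nominal, flux_in_sample,
                            eff_ratio_volume_to_point):
    """Apply flux-depression and geometry-efficiency corrections to a raw
    concentration computed assuming nominal flux and a point source."""
    flux_correction = flux_nominal / flux_in_sample      # >1 if flux is depressed
    eff_correction = 1.0 / eff_ratio_volume_to_point     # >1 if volume eff. < point eff.
    return c_raw * flux_correction * eff_correction

# Hypothetical: 40% flux depression inside the rice, and a volume/point
# efficiency ratio of 0.85 at the peak energy (from a Monte Carlo model).
print(corrected_concentration(1.20, 1.0, 0.60, 0.85))  # corrected value, e.g. mg/kg
```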

  18. Q-factorial Gorenstein toric Fano varieties with large Picard number

    DEFF Research Database (Denmark)

    Nill, Benjamin; Øbro, Mikkel

    2010-01-01

    In dimension $d$, ${\boldsymbol Q}$-factorial Gorenstein toric Fano varieties with Picard number $\rho_X$ correspond to simplicial reflexive polytopes with $\rho_X + d$ vertices. Casagrande showed that any $d$-dimensional simplicial reflexive polytope has at most $3d$ and $3d-1$ vertices if $d$ is even and odd, respectively. Moreover, for $d$ even there is up to unimodular equivalence only one such polytope with $3d$ vertices, corresponding to the product of $d/2$ copies of a del Pezzo surface of degree six. In this paper we completely classify all $d$-dimensional simplicial reflexive polytopes having $3d-1$ vertices, corresponding to $d$-dimensional ${\boldsymbol Q}$-factorial Gorenstein toric Fano varieties with Picard number $2d-1$. For $d$ even, there exist three such varieties, with two being singular, while for $d > 1$ odd there exist precisely two, both being nonsingular toric fiber…

  19. Large scale Direct Numerical Simulation of premixed turbulent jet flames at high Reynolds number

    Science.gov (United States)

    Attili, Antonio; Luca, Stefano; Lo Schiavo, Ermanno; Bisetti, Fabrizio; Creta, Francesco

    2016-11-01

    A set of direct numerical simulations of turbulent premixed jet flames at different Reynolds and Karlovitz numbers is presented. The simulations feature finite-rate chemistry with 16 species and 73 reactions and up to 22 billion grid points. The jet consists of a methane/air mixture with equivalence ratio ϕ = 0.7 and temperature varying between 500 and 800 K. The temperature and species concentrations in the coflow correspond to the equilibrium state of the burnt mixture. All the simulations are performed at 4 atm. The flame length, normalized by the jet width, decreases significantly as the Reynolds number increases. This is consistent with an increase of the turbulent flame speed due to the increased integral scale of turbulence. This behavior is typical of flames in the thin-reaction-zone regime, which are affected by turbulent transport in the preheat layer. The fractal dimension and topology of the flame surface, statistics of temperature gradients, and the flame structure are investigated, and the dependence of these quantities on the Reynolds number is assessed.

  20. Efficient high speed communications over electrical powerlines for a large number of users

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J.; Tripathi, K.; Latchman, H.A. [Florida Univ., Gainesville, FL (United States). Dept. of Electrical and Computer Engineering

    2007-07-01

    Affordable broadband Internet communication is currently available for residential use via cable modems and other forms of digital subscriber lines (DSL). Powerline communication (PLC) systems were never considered seriously for communications due to their low speed and high development cost. However, due to technological advances, PLC is now spreading to local area networks and broadband-over-powerline systems. This paper presented a newly proposed modification to the standard HomePlug 1.0 MAC protocol to make it a constant contention-window-based scheme. HomePlug 1.0 was developed based on orthogonal frequency division multiplexing (OFDM) and carrier sense multiple access with collision avoidance (CSMA/CA). It is currently the most commonly used technology for power line communications, supporting a transmission rate of up to 14 Mbps on the power line. However, the throughput performance of the original scheme degrades significantly as the number of users increases. For that reason, a constant contention-window-based medium access control protocol for HomePlug 1.0 was proposed under the assumption that the number of active stations is known. An analytical framework based on Markov chains was developed in order to model this modified protocol under saturation conditions. Modeling results accurately matched the actual performance of the system. This paper revealed that the performance can be improved significantly if the variables are parameterized in terms of the number of active stations. 15 refs., 1 tab., 6 figs.
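
    The saturation analysis for a constant contention window has a compact closed form in the classic Bianchi-style approximation. The sketch below is that generic textbook model, not the paper's HomePlug-specific Markov chain, and all timing constants are invented:

```python
def saturation_throughput(n, W, payload=8000.0, slot=1.0,
                          t_succ=120.0, t_coll=120.0):
    """Bianchi-style saturation throughput for a constant contention window W:
    each station transmits in a generic slot with probability tau = 2/(W+1)."""
    tau = 2.0 / (W + 1.0)
    p_tr = 1.0 - (1.0 - tau) ** n                   # some station transmits
    p_s = n * tau * (1.0 - tau) ** (n - 1) / p_tr   # exactly one does (success)
    mean_slot = (1 - p_tr) * slot + p_tr * (p_s * t_succ + (1 - p_s) * t_coll)
    return p_tr * p_s * payload / mean_slot

# Tuning W to the number of active stations n keeps throughput high as n grows.
for n in (5, 20, 50):
    best_W = max(range(8, 1025, 8), key=lambda W: saturation_throughput(n, W))
    print(n, best_W, round(saturation_throughput(n, best_W), 1))
```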

  1. Detailed Measurements of Rayleigh-Taylor Mixing at Large and Small Atwood Numbers

    International Nuclear Information System (INIS)

    Malcolm J. Andrews, Ph.D.

    2004-01-01

    This project has two major tasks: Task 1. The construction of a new air/helium facility to collect detailed measurements of Rayleigh-Taylor (RT) mixing at high Atwood number, and the distribution of these data to LLNL, LANL, and Alliance members for code validation and design purposes. Task 2. The collection of initial condition data from the new Air/Helium facility, for use with validation of RT simulation codes at LLNL and LANL. Also, studies of multi-layer mixing with the existing water channel facility. Over the last twelve (12) months there has been excellent progress, detailed in this report, with both tasks. As of December 10, 2004, the air/helium facility is now complete and extensive testing and validation of diagnostics has been performed. Currently experiments with air/helium up to Atwood numbers of 0.25 (the maximum is 0.75, but the highest Reynolds numbers are at 0.25) are being performed. The progress matches the project plan, as does the budget, and we expect this to continue for 2005. With interest expressed from LLNL we have continued with initial condition studies using the water channel. This work has also progressed well, with one of the graduate Research Assistants (Mr. Nick Mueschke) visiting LLNL the past two summers to work with Dr. O. Schilling. Several journal papers are in preparation that describe the work. Two MSc.'s have been completed (Mr. Nick Mueschke, and Mr. Wayne Kraft, 12/1/03). Nick and Wayne are both pursuing Ph.D.s' funded by this DOE Alliances project. Presently three (3) Ph.D. graduate Research Assistants are supported on the project, and two (2) undergraduate Research Assistants. During the year two (2) journal papers and two (2) conference papers have been published, ten (10) presentations made at conferences, and three (3) invited presentations

  2. Mapping Ad Hoc Communications Network of a Large Number Fixed-Wing UAV Swarm

    Science.gov (United States)

    2017-03-01

    … shows like "Agents of S.H.I.E.L.D.". Inspiration can come from the imaginative minds of people or from the world around us. Swarms have demonstrated a … high degree of success. Bees, ants, termites, and naked mole rats maintain large groups that distribute tasks among individuals in order to achieve … the application layer and not the transport layer. Real-world vehicle-to-vehicle packet delivery rates for the 50-UAV swarm event were described in…

  3. Analyzing the Large Number of Variables in Biomedical and Satellite Imagery

    CERN Document Server

    Good, Phillip I

    2011-01-01

    This book grew out of an online interactive course offered through statcourse.com, and it soon became apparent to the author that the course was too limited in terms of time and length in light of the broad backgrounds of the enrolled students. The statisticians who took the course needed to be brought up to speed both on the biological context and on the specialized statistical methods needed to handle large arrays. Biologists and physicians, even though fully knowledgeable concerning the procedures used to generate microarrays, EEGs, or MRIs, needed a full introduction to the resampling methods…

  4. Linear optics and projective measurements alone suffice to create large-photon-number path entanglement

    International Nuclear Information System (INIS)

    Lee, Hwang; Kok, Pieter; Dowling, Jonathan P.; Cerf, Nicolas J.

    2002-01-01

    We propose a method for preparing maximal path entanglement with a definite photon number N, larger than two, using projective measurements. In contrast with the previously known schemes, our method uses only linear optics. Specifically, we exhibit a way of generating four-photon, path-entangled states of the form $|4,0\rangle + |0,4\rangle$, using only four beam splitters and two detectors. These states are of major interest as a resource for quantum interferometric sensors as well as for optical quantum lithography and quantum holography.
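
    For context, a standard property of such states (not specific to this paper): an N-photon path-entangled (N00N) state accumulates phase N times faster than a single photon,

\[
\tfrac{1}{\sqrt{2}}\big(|N,0\rangle + |0,N\rangle\big)
\;\longrightarrow\;
\tfrac{1}{\sqrt{2}}\big(|N,0\rangle + e^{iN\varphi}\,|0,N\rangle\big),
\]

    so the interference fringes vary as $\cos(N\varphi)$ and the phase sensitivity reaches the Heisenberg limit $\Delta\varphi \sim 1/N$, versus the shot-noise limit $1/\sqrt{N}$ for $N$ independent photons. This is what makes the $|4,0\rangle + |0,4\rangle$ state useful for lithography at a quarter of the single-photon fringe spacing.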

  5. Laboratory Study of Magnetorotational Instability and Hydrodynamic Stability at Large Reynolds Numbers

    Science.gov (United States)

    Ji, H.; Burin, M.; Schartman, E.; Goodman, J.; Liu, W.

    2006-01-01

    Two plausible mechanisms have been proposed to explain rapid angular momentum transport during accretion processes in astrophysical disks: nonlinear hydrodynamic instabilities and magnetorotational instability (MRI). A laboratory experiment in a short Taylor-Couette flow geometry has been constructed in Princeton to study both mechanisms, with novel features for better controls of the boundary-driven secondary flows (Ekman circulation). Initial results on hydrodynamic stability have shown negligible angular momentum transport in Keplerian-like flows with Reynolds numbers approaching one million, casting strong doubt on the viability of nonlinear hydrodynamic instability as a source for accretion disk turbulence.

  6. Tracing the trajectory of skill learning with a very large sample of online game players.

    Science.gov (United States)

    Stafford, Tom; Dewar, Michael

    2014-02-01

    In the present study, we analyzed data from a very large sample (N = 854,064) of players of an online game involving rapid perception, decision making, and motor responding. Use of game data allowed us to connect, for the first time, rich details of training history with measures of performance from participants engaged for a sustained amount of time in effortful practice. We showed that lawful relations exist between practice amount and subsequent performance, and between practice spacing and subsequent performance. Our methodology allowed an in situ confirmation of results long established in the experimental literature on skill acquisition. Additionally, we showed that greater initial variation in performance is linked to higher subsequent performance, a result we link to the exploration/exploitation trade-off from the computational framework of reinforcement learning. We discuss the benefits and opportunities of behavioral data sets with very large sample sizes and suggest that this approach could be particularly fecund for studies of skill acquisition.

  7. Operability test report for core sample truck number one flammable gas modifications

    International Nuclear Information System (INIS)

    Akers, J.C.

    1997-01-01

    This report consists of the original, completed test procedure used for the operability testing of the flammable gas modifications to Push Mode Core Sample Truck No. 1, together with exceptions, resolutions, comments, and test results. Prior to the acceptance/operability test, the truck No. 1 operations procedure (TO-080-503) was revised to be more consistent with the other core sample truck procedures and to include operational steps/instructions for the SR weather cover pressurization system. A draft copy of the operations procedure was used to perform the Operability Test Procedure (OTP). A Document Acceptance Review Form is included with this report (last page) indicating the draft status of the operations procedure during the OTP. During the OTP, 11 test exceptions were encountered; four of these were determined to affect the acceptance criteria listed in the OTP, Section 4.7, ACCEPTANCE CRITERIA.

  8. Dam risk reduction study for a number of large tailings dams in Ontario

    Energy Technology Data Exchange (ETDEWEB)

    Verma, N. [AMEC Earth and Environmental Ltd., Mississauga, ON (Canada); Small, A. [AMEC Earth and Environmental Ltd., Fredericton, NB (Canada); Martin, T. [AMEC Earth and Environmental, Burnaby, BC (Canada); Cacciotti, D. [AMEC Earth and Environmental Ltd., Sudbury, ON (Canada); Ross, T. [Vale Inco Ltd., Sudbury, ON (Canada)

    2009-07-01

    This paper discussed a risk reduction study conducted for 10 large tailings dams located at a central tailings facility in Ontario. Located near large industrial and urban developments, the tailings dams were built using an upstream method of construction that did not involve beach compaction or the provision of under-drainage. The study provided a historical background for the dams and presented results from investigations and instrumentation data. The methods used to develop the dam configurations were discussed, and the remedial and risk assessment measures used on the dams were reviewed. The aim of the study was to address the key sources of risk, which included the presence of high pore pressures and hydraulic gradients; the potential for liquefaction; slope instability; and the potential for overtopping. A borehole investigation was conducted, and piezocone probes were used to obtain continuous data and determine soil and groundwater conditions. The study identified the lower portions of the dam slopes as the main concern: erosion gullies could lead to larger-scale failures, and elevated pore pressures could lead to the risk of seepage breakouts. Remedial measures are now being carried out to ensure slope stability. 6 refs., 1 tab., 6 figs.

  9. EUPAN enables pan-genome studies of a large number of eukaryotic genomes.

    Science.gov (United States)

    Hu, Zhiqiang; Sun, Chen; Lu, Kuang-Chen; Chu, Xixia; Zhao, Yue; Lu, Jinyuan; Shi, Jianxin; Wei, Chaochun

    2017-08-01

    Pan-genome analyses are routinely carried out for bacteria to interpret within-species gene presence/absence variations (PAVs). However, pan-genome analyses are rare for eukaryotes due to the large sizes and higher complexities of their genomes. Here we propose EUPAN, a eukaryotic pan-genome analysis toolkit enabling automatic large-scale eukaryotic pan-genome analyses and detection of gene PAVs at a relatively low sequencing depth. In previous studies, we demonstrated the effectiveness and high accuracy of EUPAN in the pan-genome analysis of 453 rice genomes, in which we also revealed widespread gene PAVs among individual rice genomes. Moreover, EUPAN can be directly applied to current re-sequencing projects primarily focusing on single nucleotide polymorphisms. EUPAN is implemented in Perl, R and C++. It is supported under Linux and preferred for a computer cluster with the LSF or SLURM job scheduling system. EUPAN, together with its standard operating procedure (SOP), is freely available for non-commercial use (CC BY-NC 4.0) at http://cgm.sjtu.edu.cn/eupan/index.html. ccwei@sjtu.edu.cn or jianxin.shi@sjtu.edu.cn. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
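
    A coverage-based gene presence/absence call of the general kind such pipelines make can be sketched in a few lines (the threshold, gene names, and function are illustrative assumptions, not EUPAN's actual implementation):

```python
def presence_absence(gene_coverage, breadth_threshold=0.95):
    """Call a gene 'present' in a genome when a sufficient fraction of its
    coding region is covered by mapped reads (coverage breadth)."""
    return {gene: breadth >= breadth_threshold
            for gene, breadth in gene_coverage.items()}

# Hypothetical coverage breadths for three genes in one rice accession.
calls = presence_absence({"geneA": 0.99, "geneB": 0.40, "geneC": 0.96})
print(calls)  # {'geneA': True, 'geneB': False, 'geneC': True}
```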

  10. Number of deaths due to lung diseases: How large is the problem?

    International Nuclear Information System (INIS)

    Wagener, D.K.

    1990-01-01

    The importance of lung disease as an indicator of environmentally induced adverse health effects has been recognized by inclusion among the Health Objectives for the Nation. The 1990 Health Objectives for the Nation (US Department of Health and Human Services, 1986) include an objective that there should be virtually no new cases among newly exposed workers for four preventable occupational lung diseases: asbestosis, byssinosis, silicosis, and coal workers' pneumoconiosis. This brief communication describes two types of cause-of-death statistics, underlying cause and multiple cause, and demonstrates the differences between the two statistics using lung disease deaths among adult men. The choice of statistic has a large impact on estimated lung disease mortality rates. It may also have a large effect on the estimated mortality rates for other chronic diseases thought to be environmentally mediated. Issues of comorbidity and the way causes of death are reported become important in the interpretation of these statistics. The choice of which statistic to use when comparing data from a study population with national statistics may therefore greatly affect the interpretation of the study findings.
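
    The distinction is easy to state in code: an underlying-cause count uses the single assigned underlying cause per death, while a multiple-cause count tallies every death on which the disease is mentioned anywhere. A minimal sketch with invented records:

```python
from collections import Counter

# Each death record: the single underlying cause plus all causes mentioned.
deaths = [
    {"underlying": "asbestosis",    "all": ["asbestosis", "lung cancer"]},
    {"underlying": "heart disease", "all": ["heart disease", "silicosis"]},
    {"underlying": "silicosis",     "all": ["silicosis"]},
]

underlying = Counter(d["underlying"] for d in deaths)
multiple = Counter(c for d in deaths for c in set(d["all"]))

# Silicosis: 1 death as underlying cause, but 2 deaths mention it anywhere,
# so multiple-cause counts can be much larger for chronic lung diseases.
print(underlying["silicosis"], multiple["silicosis"])
```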

  11. Formation of free round jets with long laminar regions at large Reynolds numbers

    Science.gov (United States)

    Zayko, Julia; Teplovodskii, Sergey; Chicherina, Anastasia; Vedeneev, Vasily; Reshmin, Alexander

    2018-04-01

    The paper describes a new, simple method for the formation of free round jets with long laminar regions using a jet-forming device of ~1.5 jet diameters in size. Submerged jets of 0.12 m diameter at Reynolds numbers of 2,000-12,560 are studied experimentally. It is shown that for the optimal regime the laminar region length reaches 5.5 diameters at a Reynolds number of ~10,000, which is not achievable with other methods of laminar jet formation. To explain the existence of the optimal regime, a steady flow calculation in the forming unit and a stability analysis of the outcoming jet velocity profiles are conducted. The shortening of the laminar regions, compared with the optimal regime, is explained by the higher incoming turbulence level at lower velocities and by the increase of perturbation growth rates at larger velocities. The initial laminar regions of free jets can be used for organising air curtains for the protection of objects in medicine and technologies by creating an air field with desired properties not mixed with ambient air. Free jets with long laminar regions can also be used for detailed studies of perturbation growth and transition to turbulence in round jets.

  12. Relationship between accuracy and number of samples on statistical quantity and contour map of environmental gamma-ray dose rate. Example of random sampling

    International Nuclear Information System (INIS)

    Matsuda, Hideharu; Minato, Susumu

    2002-01-01

    The accuracy of statistical quantities such as the mean value and the contour map obtained by measurement of the environmental gamma-ray dose rate was evaluated by random sampling of 5 different model distribution maps generated with the mean slope, -1.3, of power spectra calculated from actually measured values. The values were derived from 58 natural gamma dose rate data sets reported worldwide, with means ranging over 10-100 Gy/h and areas over 10⁻³-10⁷ km². The accuracy of the mean value was found to be around ±7% even for 60 or 80 samplings (the most frequent numbers), and the standard deviation was estimated with an accuracy of less than 1/4-1/3 of the mean. The correlation coefficient of the frequency distribution was found to be 0.860 or more for 200-400 samplings (the most frequent number), but that of the contour map was 0.502-0.770. (K.H.)
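
    The random-sampling evaluation can be reproduced in outline by drawing repeated random subsets of a simulated dose-rate map and recording the relative error of the sample mean. The field below is a generic stand-in (a lognormal surrogate), not the paper's spectrally synthesized maps, so the printed numbers are only illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical dose-rate field on a grid (arbitrary units).
field = rng.lognormal(mean=3.0, sigma=0.4, size=(200, 200))

def mean_error_pct(field, n_samples, n_trials=2000):
    """Mean relative error (%) of the sample mean under random sampling."""
    flat, true_mean = field.ravel(), field.mean()
    idx = rng.integers(0, flat.size, size=(n_trials, n_samples))
    estimates = flat[idx].mean(axis=1)
    return 100.0 * np.abs(estimates - true_mean).mean() / true_mean

for n in (20, 60, 80, 200):
    print(n, round(mean_error_pct(field, n), 2))  # error shrinks like 1/sqrt(n)
```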

  13. A Tool for Determining the Number of Contributors: Interpreting Complex, Compromised Low-Template Dna Samples

    Science.gov (United States)

    2017-09-28

    Catherine Grgicak, Ph.D. RPPR Final Report as of 17-Oct-2017. Agreement Number: W911NF-14-C… … to degrade into increasingly smaller fragments over time. The mechanisms inducing DNA damage can include strand breakage, formation of pyrimidine … in this example $\lfloor (7.8\times 10^{-4})(48\times 10^{3})/6.3 \rfloor = \lfloor 5.94 \rfloor = 5$. Note that 48 µL stems from the knowledge that typically 2 of 50 µL of the extract is

  14. Application of Evolution Strategies to the Design of Tracking Filters with a Large Number of Specifications

    Directory of Open Access Journals (Sweden)

    Jesús García Herrero

    2003-07-01

    This paper describes the application of evolution strategies to the design of interacting multiple model (IMM) tracking filters in order to fulfill a large table of performance specifications. These specifications define the desired filter performance in a thorough set of selected test scenarios, for different figures of merit and input conditions, imposing hundreds of performance goals. The design problem is stated as a numeric search in the filter parameter space to attain all specifications, or at least to minimize, as a compromise, the excess over some specifications as much as possible, applying global optimization techniques from the field of evolutionary computation. In addition, a new methodology is proposed to integrate the specifications into a fitness function able to effectively guide the search to suitable solutions. The method has been applied to the design of an IMM tracker for a real-world civil air traffic control application: the accomplishment of specifications defined for the future European ARTAS system.
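
    One standard way to fold many goals into a single fitness, together with a generic (mu, lambda) evolution strategy loop that minimizes it, can be sketched as follows. This is a hedged illustration; the paper's actual fitness construction and tracker evaluation are more elaborate, and the toy evaluation function below is invented:

```python
import random

def aggregate_fitness(params, goals, evaluate):
    """Sum of relative excesses over the performance goals; zero once all
    specifications are met. `evaluate` returns one value per specification."""
    values = evaluate(params)
    return sum(max(0.0, (v - g) / abs(g)) for v, g in zip(values, goals))

def evolve(goals, evaluate, dim, mu=5, lam=20, sigma=0.3, gens=100,
           rng=random.Random(0)):
    """Minimal (mu, lambda) evolution strategy over the parameter space."""
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(lam)]
    for _ in range(gens):
        pop.sort(key=lambda p: aggregate_fitness(p, goals, evaluate))
        parents = pop[:mu]                      # comma selection: keep best mu
        pop = [[x + rng.gauss(0, sigma) for x in rng.choice(parents)]
               for _ in range(lam)]
        sigma *= 0.99                           # simple step-size decay
    return min(pop, key=lambda p: aggregate_fitness(p, goals, evaluate))

# Toy stand-in for the tracker evaluation: two "error" metrics, goals of 1.0.
best = evolve([1.0, 1.0],
              lambda p: [p[0] ** 2 + 1.0, (p[1] - 0.5) ** 2 + 0.9],
              dim=2)
print([round(x, 2) for x in best])
```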

  15. Jet Impingement Heat Transfer at High Reynolds Numbers and Large Density Variations

    DEFF Research Database (Denmark)

    Jensen, Michael Vincent; Walther, Jens Honore

    2010-01-01

    Jet impingement heat transfer from a round gas jet to a flat wall has been investigated numerically in a configuration with H/D = 2, where H is the distance from the jet inlet to the wall and D is the jet diameter. The jet Reynolds number was 361,000 and the density ratio across the wall boundary layer was 3.3, due to a substantial temperature difference of 1600 K between jet and wall. Results are presented which indicate very high heat flux levels, and it is demonstrated that the jet inlet turbulence intensity significantly influences the heat transfer results, especially in the stagnation region. The results also show a noticeable difference in the heat transfer predictions when applying different turbulence models. Furthermore, calculations were performed to study the effect of applying temperature-dependent thermophysical properties versus constant properties and the effect of calculating the gas…

  16. On the strong law of large numbers for $\\varphi$-subgaussian random variables

    OpenAIRE

    Zajkowski, Krzysztof

    2016-01-01

    For $p\\ge 1$ let $\\varphi_p(x)=x^2/2$ if $|x|\\le 1$ and $\\varphi_p(x)=1/p|x|^p-1/p+1/2$ if $|x|>1$. For a random variable $\\xi$ let $\\tau_{\\varphi_p}(\\xi)$ denote $\\inf\\{a\\ge 0:\\;\\forall_{\\lambda\\in\\mathbb{R}}\\; \\ln\\mathbb{E}\\exp(\\lambda\\xi)\\le\\varphi_p(a\\lambda)\\}$; $\\tau_{\\varphi_p}$ is a norm in a space $Sub_{\\varphi_p}=\\{\\xi:\\;\\tau_{\\varphi_p}(\\xi)1$) there exist positive constants $c$ and $\\alpha$ such that for every natural number $n$ the following inequality $\\tau_{\\varphi_p}(\\sum_{i=1...

  17. Large boson number IBM calculations and their relationship to the Bohr model

    International Nuclear Information System (INIS)

    Thiamova, G.; Rowe, D.J.

    2009-01-01

    Recently, the SO(5) Clebsch-Gordan (CG) coefficients up to seniority v_max = 40 were computed in floating point arithmetic (T.A. Welsh, unpublished (2008)) and, in exact arithmetic, as square roots of rational numbers (M.A. Caprio et al., to be published in Comput. Phys. Commun.). It is shown in this paper that extending the QQQ model calculations set up in the work by D.J. Rowe and G. Thiamova (Nucl. Phys. A 760, 59 (2005)) to N = v_max = 40 is sufficient to obtain IBM results converged to the Bohr contraction limit. This will be done by comparing some important matrix elements in both models, by looking at the seniority decomposition of low-lying states, and by examining the behavior of the energy and B(E2) transition strength ratios with increasing seniority. (orig.)

  18. A comparison of three approaches to compute the effective Reynolds number of the implicit large-eddy simulations

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Ye [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Thornber, Ben [The Univ. of Sydney, Sydney, NSW (Australia)

    2016-04-12

    Here, implicit large-eddy simulation (ILES) has been utilized as an effective approach for calculating many complex flows at high Reynolds numbers. Richtmyer-Meshkov instability (RMI) induced flow can be viewed as homogeneous decaying turbulence (HDT) after the passage of the shock. In this article, a critical evaluation of three methods for estimating the effective Reynolds number and the effective kinematic viscosity is undertaken utilizing high-resolution ILES data. Effective Reynolds numbers based on the vorticity and dissipation rate, or on the integral and inner-viscous length scales, are found to be the most self-consistent when compared to the expected phenomenology and wind tunnel experiments.
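
    One common estimator of this family (a sketch of the general idea, not necessarily the paper's exact definitions) backs an effective viscosity out of the isotropic relation ε = ν⟨ω²⟩ and then forms a Reynolds number from it. All inputs below are invented diagnostics:

```python
def effective_reynolds(u_rms, L_int, dissipation, enstrophy):
    """nu_eff from the isotropic relation eps = nu * <omega^2>, then
    Re_eff = u' * L / nu_eff (vorticity/dissipation-based estimator)."""
    nu_eff = dissipation / enstrophy
    return u_rms * L_int / nu_eff

# Hypothetical ILES diagnostics in nondimensional units.
print(round(effective_reynolds(u_rms=0.1, L_int=1.0,
                               dissipation=2e-4, enstrophy=4.0)))  # ~2000
```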

  19. Development of digital gamma-activation autoradiography for analysis of samples of large area

    International Nuclear Information System (INIS)

    Kolotov, V.P.; Grozdov, D.S.; Dogadkin, N.N.; Korobkov, V.I.

    2011-01-01

    Gamma-activation autoradiography is a prospective method for the screening detection of inclusions of precious metals in geochemical samples. Its characteristics allow analysis of thin sections of large size (tens of cm²), which favourably distinguishes it among the other methods for local analysis. At the same time, the activating field of the accelerator bremsstrahlung displays a sharp intensity decrease with distance along the axis. A method for activation dose "equalization" during irradiation of large thin sections has been developed. The method is based on the usage of a hardware-software system comprising a device for moving the sample during the irradiation, a program for computer modelling of the acquired activating dose for the chosen kinematics of the sample movement, and a program for pixel-by-pixel correction of the autoradiographic images. For the detection of inclusions of precious metals, a method for analysis of the acquired dose dynamics during sample decay has been developed. It is based on software processing, pixel by pixel, of a time series of coaxial autoradiographic images and generation of secondary meta-images allowing interpretation regarding the presence of interesting inclusions based on half-lives. The method was tested on copper-nickel polymetallic ores. The developed solutions considerably expand the possible applications of digital gamma-activation autoradiography. (orig.)
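
    The half-life interpretation of a pixel-wise time series reduces to a per-pixel exponential fit. A minimal sketch using a log-linear least-squares fit (array shapes, times, and values are invented; the paper's meta-image generation may differ):

```python
import numpy as np

def halflife_map(stack, times):
    """Per-pixel half-life from a time series of registered intensity images,
    via the log-linear model ln A(t) = ln A0 - lambda * t."""
    T, H, W = stack.shape
    y = np.log(np.clip(stack, 1e-12, None)).reshape(T, -1)
    t = np.asarray(times, float)
    design = np.vstack([np.ones_like(t), -t]).T     # columns: ln A0, lambda
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    lam = coef[1].reshape(H, W)
    return np.log(2.0) / np.clip(lam, 1e-12, None)  # half-life image

# Hypothetical 3-frame series of a 4x4 region decaying with T1/2 = 6 h.
times = [0.0, 4.0, 8.0]
activity = 50.0 * np.exp(-np.log(2) / 6.0 * np.asarray(times))
stack = np.tile(activity[:, None, None], (1, 4, 4))
print(np.round(halflife_map(stack, times)[0, 0], 2))  # ~6.0
```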

  20. A self-sampling method to obtain large volumes of undiluted cervicovaginal secretions.

    Science.gov (United States)

    Boskey, Elizabeth R; Moench, Thomas R; Hees, Paul S; Cone, Richard A

    2003-02-01

    Studies of vaginal physiology and pathophysiology sometimes require larger volumes of undiluted cervicovaginal secretions than can be obtained by current methods. A convenient method for self-sampling these secretions outside a clinical setting can facilitate such studies of reproductive health. The goal was to develop a vaginal self-sampling method for collecting large volumes of undiluted cervicovaginal secretions. A menstrual collection device (the Instead cup) was inserted briefly into the vagina to collect secretions that were then retrieved from the cup by centrifugation in a 50-ml conical tube. All 16 women asked to perform this procedure found it feasible and acceptable. Among 27 samples, an average of 0.5 g of secretions (range, 0.1-1.5 g) was collected. This is a rapid and convenient self-sampling method for obtaining relatively large volumes of undiluted cervicovaginal secretions. It should prove suitable for a wide range of assays, including those involving sexually transmitted diseases, microbicides, vaginal physiology, immunology, and pathophysiology.

  1. Development of digital gamma-activation autoradiography for analysis of samples of large area

    Energy Technology Data Exchange (ETDEWEB)

    Kolotov, V.P.; Grozdov, D.S.; Dogadkin, N.N.; Korobkov, V.I. [Russian Academy of Sciences, Moscow (Russian Federation). Vernadsky Inst. of Geochemistry and Analytical Chemistry

    2011-07-01

    Gamma-activation autoradiography is a prospective method for the screening detection of inclusions of precious metals in geochemical samples. Its characteristics allow analysis of thin sections of large size (tens of cm²), which favourably distinguishes it among the other methods for local analysis. At the same time, the activating field of the accelerator bremsstrahlung displays a sharp intensity decrease with distance along the axis. A method for activation dose "equalization" during irradiation of large thin sections has been developed. The method is based on the usage of a hardware-software system comprising a device for moving the sample during the irradiation, a program for computer modelling of the acquired activating dose for the chosen kinematics of the sample movement, and a program for pixel-by-pixel correction of the autoradiographic images. For the detection of inclusions of precious metals, a method for analysis of the acquired dose dynamics during sample decay has been developed. It is based on software processing, pixel by pixel, of a time series of coaxial autoradiographic images and generation of secondary meta-images allowing interpretation regarding the presence of interesting inclusions based on half-lives. The method was tested on copper-nickel polymetallic ores. The developed solutions considerably expand the possible applications of digital gamma-activation autoradiography. (orig.)

  2. Greater number of group identifications is associated with healthier behaviour: Evidence from a Scottish community sample.

    Science.gov (United States)

    Sani, Fabio; Madhok, Vishnu; Norbury, Michael; Dugard, Pat; Wakefield, Juliet R H

    2015-09-01

    This paper investigates the interplay between group identification (i.e., the extent to which one has a sense of belonging to a social group, coupled with a sense of commonality with in-group members) and four types of health behaviour, namely physical exercise, smoking, drinking, and diet. Specifically, we propose a positive relationship between one's number of group identifications and healthy behaviour. This study is based on the Scottish portion of the data obtained for Wave 1 of the two-wave cross-national Health in Groups project. In total, 1,824 patients from five Scottish general practitioner (GP) surgeries completed the Wave 1 questionnaire in their homes. Participants completed measures of group identification, group contact, health behaviours, and demographic variables. Results demonstrate that the greater the number of social groups with which one identifies, the healthier one's behaviour on any of the four health dimensions considered. We believe our results are due to the fact that group identification will generally (1) enhance one's sense of meaning in life, thereby leading one to take more care of oneself, (2) increase one's sense of responsibility towards other in-group members, thereby enhancing one's motivation to be healthy in order to fulfil those responsibilities, and (3) increase compliance with healthy group behavioural norms. Taken together, these processes amply overcompensate for the fact that some groups with which people may identify can actually prescribe unhealthy behaviours. © 2014 The British Psychological Society.

  3. The importance of plot size and the number of sampling seasons on capturing macrofungal species richness.

    Science.gov (United States)

    Li, Huili; Ostermann, Anne; Karunarathna, Samantha C; Xu, Jianchu; Hyde, Kevin D; Mortimer, Peter E

    2018-07-01

    The species-area relationship is an important factor in the study of species diversity, conservation biology, and landscape ecology. A deeper understanding of this relationship is necessary in order to provide recommendations on how to improve the quality of data collection on macrofungal diversity in different land use systems in future studies; this requires a systematic assessment of methodological parameters, in particular optimal plot sizes. The species-area relationships of macrofungi in tropical and temperate climatic zones and four different land use systems were investigated by determining the macrofungal species richness in plot sizes ranging from 100 m² to 10,000 m² over two sampling seasons. We found that the effect of plot size on recorded species richness differed significantly between land use systems, with the exception of monoculture systems. For both climate zones, the land use system needs to be considered when determining optimal plot size. Using an optimal plot size was more important than temporal replication (over two sampling seasons) in accurately recording species richness. Copyright © 2018 British Mycological Society. Published by Elsevier Ltd. All rights reserved.

  4. Slepian simulation of distributions of plastic displacements of earthquake excited shear frames with a large number of stories

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Ditlevsen, Ove

    2005-01-01

    The object of study is a stationary Gaussian white noise excited plane multistory shear frame with a large number of rigid traverses. All the traverse-connecting columns have finite symmetrical yield limits except the columns in one or more of the bottom floors. The columns behave linearly elasti...

  5. Introduction to the spectral distribution method. Application example to the subspaces with a large number of quasi particles

    International Nuclear Information System (INIS)

    Arvieu, R.

    The assumptions and principles of the spectral distribution method are reviewed. The object of the method is to deduce information on nuclear spectra by constructing a frequency function that has the same first few moments as the exact frequency function, these moments being calculated exactly. The method is applied to subspaces containing a large number of quasiparticles. [fr]
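
    A minimal numerical illustration of the moment-matching idea, assuming a toy random-matrix "Hamiltonian" rather than a realistic shell-model space: the first two spectral moments are computed from traces alone (no diagonalization needed), and the level density is approximated by a Gaussian with those moments.

        import numpy as np

        rng = np.random.default_rng(0)
        H = rng.standard_normal((200, 200))
        H = (H + H.T) / 2                      # symmetric toy "Hamiltonian"

        d = H.shape[0]
        m1 = np.trace(H) / d                   # first moment  <H>
        m2 = np.trace(H @ H) / d               # second moment <H^2>
        sigma = np.sqrt(m2 - m1 ** 2)

        def gaussian_density(E):
            """Frequency function sharing the exact first two moments."""
            return np.exp(-0.5 * ((E - m1) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

        # Compare against the exact spectrum (only possible for this toy case):
        E = np.linalg.eigvalsh(H)
        hist, edges = np.histogram(E, bins=30, density=True)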

  6. The challenge for genetic epidemiologists: how to analyze large numbers of SNPs in relation to complex diseases

    NARCIS (Netherlands)

    Heidema, A.G.; Boer, J.M.A.; Nagelkerke, N.; Mariman, E.C.M.; A, van der D.L.; Feskens, E.J.M.

    2006-01-01

    Genetic epidemiologists have taken the challenge to identify genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods

  7. Hydrodynamic interaction on large-Reynolds-number aligned bubbles: Drag effects

    International Nuclear Information System (INIS)

    Ramirez-Munoz, J.; Salinas-Rodriguez, E.; Soria, A.; Gama-Goicochea, A.

    2011-01-01

    Highlights: → The hydrodynamic interaction of a pair of aligned equal-sized bubbles is analyzed. → The leading bubble wake decreases the drag on the trailing bubble. → A new semi-analytical model for the trailing bubble's drag is presented. → The equilibrium distance between bubbles is predicted. - Abstract: The hydrodynamic interaction of two equal-sized spherical gas bubbles rising along a vertical line with a Reynolds number (Re) between 50 and 200 is analyzed. An approach to estimate the trailing bubble drag based on the search for a proper reference fluid velocity is proposed. Our main result is a new, simple semi-analytical model for the trailing bubble drag. Additionally, the equilibrium separation distance between bubbles is predicted. The proposed models agree quantitatively with reported data for 50 ≤ Re ≤ 200, down to small distances between bubbles. The relative average error for the trailing bubble drag, Er, is found to be in the range 1.1 ≤ Er ≤ 1.7, i.e., of the same order as the analytical predictions in the literature.

  8. Hydrodynamic interaction on large-Reynolds-number aligned bubbles: Drag effects

    Energy Technology Data Exchange (ETDEWEB)

    Ramirez-Munoz, J., E-mail: jrm@correo.azc.uam.mx [Departamento de Energia, Universidad Autonoma Metropolitana-Azcapotzalco, Av. San Pablo 180, Col. Reynosa Tamaulipas, 02200 Mexico D.F. (Mexico); Centro de Investigacion en Polimeros, Marcos Achar Lobaton No. 2, Tepexpan, 55885 Acolman, Edo. de Mexico (Mexico); Salinas-Rodriguez, E.; Soria, A. [Departamento de IPH, Universidad Autonoma Metropolitana-Iztapalapa, San Rafael Atlixco 186, Col. Vicentina, Iztapalapa, 09340 Mexico D.F. (Mexico); Gama-Goicochea, A. [Centro de Investigacion en Polimeros, Marcos Achar Lobaton No. 2, Tepexpan, 55885 Acolman, Edo. de Mexico (Mexico)

    2011-07-15

    Highlights: → The hydrodynamic interaction of a pair of aligned equal-sized bubbles is analyzed. → The leading bubble wake decreases the drag on the trailing bubble. → A new semi-analytical model for the trailing bubble's drag is presented. → The equilibrium distance between bubbles is predicted. - Abstract: The hydrodynamic interaction of two equal-sized spherical gas bubbles rising along a vertical line with a Reynolds number (Re) between 50 and 200 is analyzed. An approach to estimate the trailing bubble drag based on the search for a proper reference fluid velocity is proposed. Our main result is a new, simple semi-analytical model for the trailing bubble drag. Additionally, the equilibrium separation distance between bubbles is predicted. The proposed models agree quantitatively with reported data for 50 ≤ Re ≤ 200, down to small distances between bubbles. The relative average error for the trailing bubble drag, Er, is found to be in the range 1.1 ≤ Er ≤ 1.7, i.e., of the same order as the analytical predictions in the literature.

  9. KITSCH AND THE SUSTAINABLE DEVELOPMENT OF REGIONS THAT HAVE A LARGE NUMBER OF RELIGIOUS SETTLEMENTS

    Directory of Open Access Journals (Sweden)

    ENEA CONSTANTA

    2016-06-01

    We live in a world of contemporary kitsch, a world that merges the authentic and the false, in which good taste frequently meets bad taste. The phenomenon is found everywhere: in art, in cheap literature, in media productions, in shows, in street dialogue, in homes, in politics; in other words, in everyday life. Kitsch has entered tourism directly and can be identified in all forms of tourism worldwide, but especially in religious tourism and pilgrimage, which have enjoyed unexpected success in recent years. This paper analyses the progressive growth of religious tourist traffic and the capacity of religious-tourism destinations to remain competitive in spite of their problems, to attract visitors and retain their loyalty, to remain unique in cultural terms, and to stay in permanent balance with their environment, given that kitsch has invaded the religious environment and mixes disgracefully and dangerously with authentic spirituality. The way commerce, and more precisely its kitsch components, affects this environment is examined from the perspective of the religious tourism offer, on the basis of a survey of the major monastic ensembles of northern Oltenia. The research objectives were, on the one hand, the contributions and effects of the high number of visitors on regions that hold religious sites and, on the other hand, the weight and effects of the commercial activity, whether genuine or kitsch, carried out in or near the monastic establishments of those regions. The study covered the northern region of Oltenia, where tourism demand is predominantly oriented towards religious tourism.

  10. Secondary organic aerosol formation from a large number of reactive man-made organic compounds

    Energy Technology Data Exchange (ETDEWEB)

    Derwent, Richard G., E-mail: r.derwent@btopenworld.com [rdscientific, Newbury, Berkshire (United Kingdom); Jenkin, Michael E. [Atmospheric Chemistry Services, Okehampton, Devon (United Kingdom); Utembe, Steven R.; Shallcross, Dudley E. [School of Chemistry, University of Bristol, Bristol (United Kingdom); Murrells, Tim P.; Passant, Neil R. [AEA Environment and Energy, Harwell International Business Centre, Oxon (United Kingdom)

    2010-07-15

    A photochemical trajectory model has been used to examine the relative propensities of a wide variety of volatile organic compounds (VOCs) emitted by human activities to form secondary organic aerosol (SOA) under one set of highly idealised conditions representing northwest Europe. This study applied a detailed speciated VOC emission inventory and the Master Chemical Mechanism version 3.1 (MCM v3.1) gas phase chemistry, coupled with an optimised representation of gas-aerosol absorptive partitioning of 365 oxygenated chemical reaction product species. In all, SOA formation was estimated from the atmospheric oxidation of 113 emitted VOCs. A number of aromatic compounds, together with some alkanes and terpenes, showed significant propensities to form SOA. When these propensities were folded into a detailed speciated emission inventory, 15 organic compounds together accounted for 97% of the SOA formation potential of UK man made VOC emissions and 30 emission source categories accounted for 87% of this potential. After road transport and the chemical industry, SOA formation was dominated by the solvents sector which accounted for 28% of the SOA formation potential.

  11. Normal zone detectors for a large number of inductively coupled coils

    International Nuclear Information System (INIS)

    Owen, E.W.; Shimer, D.W.

    1983-01-01

    In order to protect a set of inductively coupled superconducting magnets, it is necessary to locate and measure normal zone voltages that are small compared with the mutual and self-induced voltages. The method described in this paper uses two sets of voltage measurements to locate and measure one or more normal zones in any number of coupled coils. One set of voltages is the outputs of bridges that balance out the self-induced voltages. The other set of voltages can be the voltages across the coils, although alternatives are possible. The two sets of equations form a single combined set of equations. Each normal zone location or combination of normal zones has a set of these combined equations associated with it. It is demonstrated that the normal zone can be located and the correct set chosen, allowing determination of the size of the normal zone. Only a few operations take place in a working detector: multiplication of a constant, addition, and simple decision-making. In many cases the detector for each coil, although weakly linked to the other detectors, can be considered to be independent

  12. The Love of Large Numbers: A Popularity Bias in Consumer Choice.

    Science.gov (United States)

    Powell, Derek; Yu, Jingqi; DeWolf, Melissa; Holyoak, Keith J

    2017-10-01

    Social learning, the ability to learn from observing the decisions of other people and the outcomes of those decisions, is fundamental to human evolutionary and cultural success. The Internet now provides social evidence on an unprecedented scale. However, properly utilizing this evidence requires a capacity for statistical inference. We examined how people's interpretation of online review scores is influenced by the number of reviews, a potential indicator both of an item's popularity and of the precision of the average review score. Our task was designed to pit statistical information against social information. We modeled the behavior of an "intuitive statistician" using empirical prior information from millions of reviews posted on Amazon.com and then compared the model's predictions with the behavior of experimental participants. Under certain conditions, people preferred a product with more reviews to one with fewer reviews even though the statistical model indicated that the latter was likely to be of higher quality than the former. Overall, participants' judgments suggested that they failed to make meaningful statistical inferences.
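
    The "intuitive statistician" can be sketched with a standard normal-normal shrinkage model. The prior values below are hypothetical placeholders, not the empirical Amazon prior the authors used.

        import numpy as np

        def posterior_mean_quality(avg_score, n_reviews, prior_mean, prior_var, noise_var):
            """Posterior mean of item quality under a normal-normal model.

            Each review is quality + noise (variance noise_var); quality has a
            prior N(prior_mean, prior_var) estimated from a large review corpus.
            """
            precision = n_reviews / noise_var + 1.0 / prior_var
            return (n_reviews * avg_score / noise_var + prior_mean / prior_var) / precision

        # Example with invented numbers: an item rated 4.5 by 5 reviewers ends up
        # ranked below one rated 4.2 by 500 reviewers once both are shrunk to the prior.
        few = posterior_mean_quality(4.5, 5, prior_mean=3.5, prior_var=0.25, noise_var=1.0)
        many = posterior_mean_quality(4.2, 500, prior_mean=3.5, prior_var=0.25, noise_var=1.0)
        print(few, many)   # ~4.06 vs ~4.19: the many-review item wins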

  13. Normal zone detectors for a large number of inductively coupled coils. Revision 1

    International Nuclear Information System (INIS)

    Owen, E.W.; Shimer, D.W.

    1983-01-01

    In order to protect a set of inductively coupled superconducting magnets, it is necessary to locate and measure normal zone voltages that are small compared with the mutual and self-induced voltages. The method described in this paper uses two sets of voltage measurements to locate and measure one or more normal zones in any number of coupled coils. One set of voltages is the outputs of bridges that balance out the self-induced voltages. The other set of voltages can be the voltages across the coils, although alternatives are possible. The two sets of equations form a single combined set of equations. Each normal zone location or combination of normal zones has a set of these combined equations associated with it. It is demonstrated that the normal zone can be located and the correct set chosen, allowing determination of the size of the normal zone. Only a few operations take place in a working detector: multiplication of a constant, addition, and simple decision-making. In many cases the detector for each coil, although weakly linked to the other detectors, can be considered to be independent. The effect on accuracy of changes in the system parameters is discussed

  14. Normal zone detectors for a large number of inductively coupled coils

    International Nuclear Information System (INIS)

    Owen, E.W.; Shimer, D.W.

    1983-01-01

    In order to protect a set of inductively coupled superconducting magnets, it is necessary to locate and measure normal zone voltages that are small compared with the mutual and self-induced voltages. The method described in this report uses two sets of voltage measurements to locate and measure one or more normal zones in any number of coupled coils. One set of voltages is the outputs of bridges that balance out the self-induced voltages. The other set of voltages can be the voltages across the coils, although alternatives are possible. The two sets of equations form a single combined set of equations. Each normal zone location or combination of normal zones has a set of these combined equations associated with it. It is demonstrated that the normal zone can be located and the correct set chosen, allowing determination of the size of the normal zone. Only a few operations take place in a working detector: multiplication of a constant, addition, and simple decision-making. In many cases the detector for each coil, although weakly linked to the other detectors, can be considered to be independent. An example of the detector design is given for four coils with realistic parameters. The effect on accuracy of changes in the system parameters is discussed
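
    A simplified sketch of the detection idea, not the bridge circuitry itself: if the inductance matrix is known, the inductive part of each coil voltage can be predicted from the measured current derivatives, and whatever remains is attributed to a resistive normal zone. All names and thresholds here are illustrative assumptions.

        import numpy as np

        def normal_zone_voltages(v_coil, di_dt, M):
            """Estimate resistive (normal-zone) voltages in coupled coils.

            v_coil: (n,) measured coil voltages
            di_dt:  (n,) measured current derivatives
            M:      (n, n) inductance matrix (self on diagonal, mutual off-diagonal)
            The inductive part of each coil voltage is M @ di_dt; the residual
            plays the role of the bridge output that survives the balancing.
            """
            return v_coil - M @ di_dt

        # A zone is declared in coil k when the residual exceeds a threshold:
        # zones = np.abs(normal_zone_voltages(v, didt, M)) > v_threshold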

  15. Heat-capacity analysis of a large number of A15-type compounds

    International Nuclear Information System (INIS)

    Junod, A.; Jarlborg, T.; Muller, J.

    1983-01-01

    We analyze the low- and medium-temperature specific heat of 25 samples based on eleven A15 binary compounds, with T_c's ranging from less than 0.015 to 18 K. Experimentally determined "moments" of the phonon spectra (ω̄, ω̄₂, θ_log) are included in the analysis. Values are tabulated for T_c, ⟨ω²⟩, η, ⟨I²⟩, N_bs(E_F), Mω̄₂², H_c(0), and 2Δ(0)/k_B·T_c. We note the following: (i) The Debye temperature is generally a bad estimate of θ_log. (ii) λ is governed mainly by the "electronic parameter" η; λ = 0.175η (η in eV/Å²) ± 0.2 for all A15 compounds studied. (iii) η is proportional to the density of states at the Fermi level, and this density of states agrees well with the band-structure calculations of Jarlborg for Nb-based compounds. In V-based compounds, the observed poor correlation may reflect the presence of spin fluctuations. (iv) The values of the reduced gap 2Δ(0)/k_B·T_c range from 3.4 to 4.9 and are correlated with T_c/θ_log

  16. Specific Antibodies Reacting with SV40 Large T Antigen Mimotopes in Serum Samples of Healthy Subjects.

    Directory of Open Access Journals (Sweden)

    Mauro Tognon

    Simian Virus 40, experimentally assayed in vitro in different animal and human cells and in vivo in rodents, was classified as a small DNA tumor virus. In previous studies, many groups identified Simian Virus 40 sequences in healthy individuals and cancer patients using PCR techniques, whereas others failed to detect the viral sequences in human specimens. These conflicting results prompted us to develop a novel indirect ELISA with synthetic peptides mimicking Simian Virus 40 capsid viral protein antigens, named mimotopes. This immunologic assay allowed us to investigate the presence of serum antibodies against Simian Virus 40 and to verify whether Simian Virus 40 is circulating in humans. In this investigation, two mimotopes from the Simian Virus 40 large T antigen, the viral replication protein and oncoprotein, were employed to assay human serum antibodies for specific reactions. This indirect ELISA was used to test a new collection of serum samples from healthy subjects and revealed that serum antibodies against Simian Virus 40 large T antigen mimotopes are detectable, at low titer, in healthy subjects aged 18-65 years. The overall prevalence of reactivity with the two Simian Virus 40 large T antigen peptides was 20%. This new ELISA with two mimotopes of the early viral regions is able to detect Simian Virus 40 large T antigen antibody responses in a specific manner.

  17. Exploration of large, rare copy number variants associated with psychiatric and neurodevelopmental disorders in individuals with anorexia nervosa

    NARCIS (Netherlands)

    Yilmaz, Zeynep; Szatkiewicz, Jin P; Crowley, James J; Ancalade, NaEshia; Brandys, Marek K; van Elburg, Annemarie; de Kovel, Carolien G F; Adan, Roger A H; Hinney, Anke; Hebebrand, Johannes; Gratacos, Monica; Fernandez-Aranda, Fernando; Escaramis, Georgia; Gonzalez, Juan R; Estivill, Xavier; Zeggini, Eleftheria; Sullivan, Patrick F; Bulik, Cynthia M; Genetic Consortium for Anorexia Nervosa, Wellcome Trust Case Control Consortium 3

    Anorexia nervosa (AN) is a serious and heritable psychiatric disorder. To date, studies of copy number variants (CNVs) have been limited and inconclusive because of small sample sizes. We conducted a case-only genome-wide CNV survey in 1,983 female AN cases included in the Genetic Consortium for

  18. Beating the numbers through strategic intervention materials (SIMs): Innovative science teaching for large classes

    Science.gov (United States)

    Alboruto, Venus M.

    2017-05-01

    The study aimed to find out the effectiveness of using Strategic Intervention Materials (SIMs) as an innovative teaching practice in managing large Grade Eight Science classes to raise the performance of the students in terms of science process skills development and mastery of science concepts. Utilizing an experimental research design with two purposefully chosen groups of participants, a significant difference was found between the performance of the experimental and control groups based on actual class observation and written tests of science process skills, with a p-value of 0.0360 in favor of the experimental class. Further, results of written pre-tests and post-tests on science concepts showed that the experimental group, with a mean of 24.325 (SD = 3.82), performed better than the control group, with a mean of 20.58 (SD = 4.94), with a p-value of 0.00039. Therefore, the use of SIMs significantly contributed to the mastery of science concepts and the development of science process skills. Based on the findings, the following recommendations are offered: 1. grade eight science teachers should use or adopt the SIMs used in this study to improve their students' performance; 2. training-workshops on developing SIMs must be conducted to help teachers develop SIMs for use in their classes; 3. school administrators must allocate funds for the development and reproduction of SIMs to be used by the students in their schools; and 4. every division should have a repository of SIMs for easy access by teachers across the division.

  19. A novel SNP analysis method to detect copy number alterations with an unbiased reference signal directly from tumor samples

    Directory of Open Access Journals (Sweden)

    LaFramboise William A

    2011-01-01

    Background: Genomic instability in cancer leads to abnormal genome copy number alterations (CNA) as a mechanism underlying tumorigenesis. Using microarrays and other technologies, tumor CNA are detected by comparing tumor sample copy number (CN) to normal reference sample CN. While advances in microarray technology have improved detection of copy number alterations, the increase in the number of measured signals, noise from array probes, variations in signal-to-noise ratio across batches, and disparity across laboratories lead to significant limitations for the accurate identification of CNA regions when comparing tumor and normal samples. Methods: To address these limitations, we designed a novel "Virtual Normal" (VN) algorithm, which allows construction of an unbiased reference signal directly from test samples within an experiment, using any publicly available normal reference set as a baseline, thus eliminating the need for an in-lab normal reference set. Results: The algorithm was tested using an optimal, paired tumor/normal data set as well as previously uncharacterized pediatric malignant gliomas for which a normal reference set was not available. Using Affymetrix 250K Sty microarrays, we demonstrated improved signal-to-noise ratio and detected significant copy number alterations using the VN algorithm that were validated by independent PCR analysis of the target CNA regions. Conclusions: We developed and validated an algorithm that provides a virtual normal reference signal directly from tumor samples and minimizes noise in the derivation of the raw CN signal. The algorithm reduces the variability of assays performed across different reagent and array batches, methods of sample preservation, multiple personnel, and different laboratories. This approach may be valuable when matched normal samples are unavailable or when the paired normal specimens have been subjected to variations in methods of preservation.
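
    The core of the idea can be sketched in a few lines. This is a deliberately simplified stand-in for the published VN algorithm; probe-level preprocessing and segmentation are omitted, and the function name is hypothetical.

        import numpy as np

        def copy_number_log_ratios(probe_intensities):
            """Derive CN log-ratios against a 'virtual normal' reference.

            probe_intensities: (n_samples, n_probes) raw intensities, one batch.
            Instead of a matched normal per tumor, the reference for each probe
            is the median across all samples in the experiment, which cancels
            probe and batch effects shared by the cohort.
            """
            virtual_normal = np.median(probe_intensities, axis=0)
            return np.log2(probe_intensities / virtual_normal)

        # Amplifications appear as log-ratios > 0, deletions as < 0; a segmentation
        # step (e.g. circular binary segmentation) would normally follow.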

  20. Tracking of large-scale structures in turbulent channel with direct numerical simulation of low Prandtl number passive scalar

    Science.gov (United States)

    Tiselj, Iztok

    2014-12-01

    Channel flow DNS (Direct Numerical Simulation) at friction Reynolds number 180 and with passive scalars of Prandtl numbers 1 and 0.01 was performed in various computational domains. The "normal" size domain was ˜2300 wall units long and ˜750 wall units wide, the size taken from the similar DNS of Moser et al. The "large" computational domain, which is supposed to be sufficient to describe the largest structures of the turbulent flow, was 3 times longer and 3 times wider than the "normal" domain. The "very large" domain was 6 times longer and 6 times wider than the "normal" domain. All simulations were performed with the same spatial and temporal resolution. Comparison of the standard and large computational domains shows that the velocity field statistics (mean velocity, root-mean-square (RMS) fluctuations, and turbulent Reynolds stresses) agree to within 1%-2%. Similar agreement is observed for Pr = 1 temperature fields and also for the mean temperature profiles at Pr = 0.01. These differences can be attributed to the statistical uncertainties of the DNS. However, second-order moments, i.e., RMS temperature fluctuations, of the standard and large computational domains at Pr = 0.01 show significant differences of up to 20%. Stronger temperature fluctuations in the "large" and "very large" domains confirm the existence of the large-scale structures. Their influence is more or less invisible in the main velocity field statistics or in the statistics of the temperature fields at Prandtl numbers around 1. However, these structures play a visible role in the temperature fluctuations at low Prandtl number, where high temperature diffusivity effectively smears the small-scale structures in the thermal field and enhances the relative contribution of large scales. These large thermal structures represent a kind of echo of the large-scale velocity structures: the highest temperature-velocity correlations are not observed between the instantaneous temperatures and

  1. What caused a large number of fatalities in the Tohoku earthquake?

    Science.gov (United States)

    Ando, M.; Ishida, M.; Nishikawa, Y.; Mizuki, C.; Hayashi, Y.

    2012-04-01

    The Mw9.0 earthquake caused 20,000 deaths and missing persons in northeastern Japan. In the 115 years prior to this event, three historical tsunamis struck the region, one of which, a "tsunami earthquake", resulted in a death toll of 22,000. Since then, numerous breakwaters have been constructed along the northeastern coast, tsunami evacuation drills have been carried out, and hazard maps have been distributed to residents in numerous communities. However, despite these constructions and preparedness efforts, the March 11 Tohoku earthquake caused numerous fatalities. The strong shaking lasted three minutes or longer, so all residents recognized it as the strongest and longest earthquake they had ever experienced. The tsunami inundated an enormous area of about 560 km² across 35 cities along the coast of northeast Japan. To find out the reasons behind the high number of fatalities due to the March 11 tsunami, we interviewed 150 tsunami survivors at public evacuation shelters in 7 cities, mainly in Iwate prefecture, in mid-April and early June 2011. Interviews lasted about 30 minutes or longer and focused on the survivors' evacuation behaviors and those they had observed. On the basis of the interviews, we found that residents' decisions not to evacuate immediately were partly due to or influenced by earthquake science results. Below are some of the factors that affected residents' decisions. 1. Earthquake hazard assessments turned out to be incorrect: expected earthquake magnitudes and resultant hazards in northeastern Japan assessed and publicized by the government were significantly smaller than the actual Tohoku earthquake. 2. Many residents did not receive accurate tsunami warnings: the first tsunami warnings quoted wave heights far smaller than those that actually arrived. 3. Previous frequent warnings with overestimated tsunami heights had influenced residents' behavior. 4. Many local residents above 55 years old experienced

  2. Timoides agassizii Bigelow, 1904, little-known hydromedusa (Cnidaria), appears briefly in large numbers off Oman, March 2011, with additional notes about species of the genus Timoides.

    Science.gov (United States)

    Purushothaman, Jasmine; Kharusi, Lubna Al; Mills, Claudia E; Ghielani, Hamed; Marzouki, Mohammad Al

    2013-12-11

    A bloom of the hydromedusan jellyfish Timoides agassizii occurred in February 2011 off the coast of Sohar, Al Batinah, Sultanate of Oman, in the Gulf of Oman. This species was first observed in 1902, in great numbers, off Haddummati Atoll in the Maldive Islands in the Indian Ocean, and has rarely been seen since. It appeared briefly in large numbers off Oman in 2011, and subsequent examination of our 2009 zooplankton samples from Sohar revealed that it was also present in low numbers (two collected) in one sample in 2009; these are the first records in the Indian Ocean north of the Maldives. Medusae collected off Oman were almost identical to those recorded previously from the Maldive Islands, Papua New Guinea, the Marshall Islands, Guam, the South China Sea, and Okinawa. T. agassizii is a species that likely lives for several months. It was present in our plankton samples together with large numbers of the oceanic siphonophore Physalia physalis during only a single month's samples, suggesting that the temporary bloom off Oman was likely due to the arrival of mature, open-ocean medusae into nearshore waters. We see no evidence that T. agassizii has established a new population along Oman since, had it done so, it would likely have been present in more than one sampling period. We are unable to deduce further details of the life cycle of this species from blooms of many mature individuals nearshore about a century apart. Examination of a single damaged T. agassizii medusa from Guam calls into question the existence of its congener, T. latistyla, known only from a single specimen.

  3. Presence and significant determinants of cognitive impairment in a large sample of patients with multiple sclerosis.

    Directory of Open Access Journals (Sweden)

    Martina Borghi

    OBJECTIVES: To investigate the presence and nature of cognitive impairment in a large sample of patients with Multiple Sclerosis (MS), and to identify clinical and demographic determinants of cognitive impairment in MS. METHODS: 303 patients with MS and 279 healthy controls were administered the Brief Repeatable Battery of Neuropsychological tests (BRB-N); measures of pre-morbid verbal competence and neuropsychiatric measures were also administered. RESULTS: Patients and healthy controls were matched for age, gender, education, and pre-morbid verbal Intelligence Quotient. 108/303 patients (35.6%) presented with cognitive impairment. In the overall group of participants, the significant predictors of the most sensitive BRB-N scores were: presence of MS, age, education, and vocabulary. The significant predictors when considering MS patients only were: course of MS, age, education, vocabulary, and depression. Using logistic regression analyses, significant determinants of the presence of cognitive impairment in relapsing-remitting MS patients were: duration of illness (OR = 1.053, 95% CI = 1.010-1.097, p = 0.015), Expanded Disability Status Scale score (OR = 1.247, 95% CI = 1.024-1.517, p = 0.028), and vocabulary (OR = 0.960, 95% CI = 0.936-0.984, p = 0.001), while in the smaller group of progressive MS patients these predictors did not play a significant role in determining the cognitive outcome. CONCLUSIONS: Our results corroborate the evidence on the presence and nature of cognitive impairment in a large sample of patients with MS. Furthermore, our findings identify, for the first time, significant clinical and demographic determinants of cognitive impairment in a large sample of MS patients. Implications for further research and clinical practice are discussed.

  4. Superwind Outflows in Seyfert Galaxies? : Large-Scale Radio Maps of an Edge-On Sample

    Science.gov (United States)

    Colbert, E.; Gallimore, J.; Baum, S.; O'Dea, C.

    1995-03-01

    Large-scale galactic winds (superwinds) are commonly found flowing out of the nuclear region of ultraluminous infrared and powerful starburst galaxies. Stellar winds and supernovae from the nuclear starburst provide the energy to drive these superwinds. The outflowing gas escapes along the rotation axis, sweeping up and shock-heating clouds in the halo, which produces optical line emission, radio synchrotron emission, and X-rays. These features can most easily be studied in edge-on systems, so that the wind emission is not confused by that from the disk. We have begun a systematic search for superwind outflows in Seyfert galaxies. In an earlier optical emission-line survey, we found extended minor axis emission and/or double-peaked emission line profiles in ≳30% of the sample objects. We present here large-scale (6 cm VLA C-configuration) radio maps of 11 edge-on Seyfert galaxies, selected (without bias) from a distance-limited sample of 23 edge-on Seyferts. These data have been used to estimate the frequency of occurrence of superwinds. Preliminary results indicate that four (36%) of the 11 objects observed and six (26%) of the 23 objects in the distance-limited sample have extended radio emission oriented perpendicular to the galaxy disk. This emission may be produced by a galactic wind blowing out of the disk. Two (NGC 2992 and NGC 5506) of the nine objects for which we have both radio and optical data show good evidence for a galactic wind in both datasets. We suggest that galactic winds occur in ≳30% of all Seyferts. A goal of this work is to find a diagnostic that can be used to distinguish between large-scale outflows that are driven by starbursts and those that are driven by an AGN. The presence of starburst-driven superwinds in Seyferts, if established, would have important implications for the connection between starburst galaxies and AGN.

  5. Analysis of reflection-peak wavelengths of sampled fiber Bragg gratings with large chirp.

    Science.gov (United States)

    Zou, Xihua; Pan, Wei; Luo, Bin

    2008-09-10

    The reflection-peak wavelengths (RPWs) in the spectra of sampled fiber Bragg gratings with large chirp (SFBGs-LC) are theoretically investigated. Such RPWs are divided into two parts: the RPWs of equivalent uniform SFBGs (U-SFBGs) and the wavelength shift caused by the large chirp in the grating period (CGP). We propose a quasi-equivalent transform to deal with the CGP: the CGP is transformed into quasi-equivalent phase shifts so that the Fourier transform of the refractive index modulation can be derived directly. Then, for both the direct and the inverse Talbot effect, the wavelength shift is obtained from this Fourier transform. Finally, the RPWs of SFBGs-LC are obtained by combining the wavelength shift with the RPWs of the equivalent U-SFBGs. Several simulations numerically confirm the predicted RPWs of SFBGs-LC.

  6. Sampling large landscapes with small-scale stratification-User's Manual

    Science.gov (United States)

    Bart, Jonathan

    2011-01-01

    This manual explains procedures for partitioning a large landscape into plots, assigning the plots to strata, and selecting plots in each stratum to be surveyed. These steps are referred to as the "sampling large landscapes (SLL) process." We assume that users of the manual have a moderate knowledge of ArcGIS and Microsoft® Excel. The manual is written for a single user, but in many cases some steps will be carried out by a biologist designing the survey and some steps will be carried out by a quantitative assistant. Thus, the manual essentially may be passed back and forth between these users. The SLL process primarily has been used to survey birds, and we refer to birds as subjects of the counts. The process, however, could be used to count any objects.
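
    The plot-selection step of the SLL process amounts to stratified random sampling, which can also be reproduced outside ArcGIS/Excel. A minimal sketch with a hypothetical plot table (the stratum names and counts are invented):

        import pandas as pd

        # Hypothetical plot table produced by the GIS partitioning steps:
        plots = pd.DataFrame({
            "plot_id": range(1, 13),
            "stratum": ["wetland"] * 4 + ["grassland"] * 5 + ["forest"] * 3,
        })

        # Select a fixed number of plots to survey within each stratum.
        selected = (
            plots.groupby("stratum", group_keys=False)
                 .sample(n=2, random_state=42)
        )
        print(selected)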

  7. Gaussian vs. Bessel light-sheets: performance analysis in live large sample imaging

    Science.gov (United States)

    Reidt, Sascha L.; Correia, Ricardo B. C.; Donnachie, Mark; Weijer, Cornelis J.; MacDonald, Michael P.

    2017-08-01

    Lightsheet fluorescence microscopy (LSFM) has rapidly progressed in the past decade from an emerging technology into an established methodology. This progress has largely been driven by its suitability to developmental biology, where it is able to give excellent spatial-temporal resolution over relatively large fields of view with good contrast and low phototoxicity. In many respects it is superseding confocal microscopy. However, it is no magic bullet and still struggles to image deeply in more highly scattering samples. Many solutions to this challenge have been presented, including Airy and Bessel illumination, 2-photon operation and deconvolution techniques. In this work, we show a comparison between a simple but effective Gaussian beam illumination and Bessel illumination for imaging in chicken embryos. Whilst Bessel illumination is shown to be of benefit when a greater depth of field is required, it is not possible to see any benefits for imaging into the highly scattering tissue of the chick embryo.

  8. Large sample neutron activation analysis: establishment at CDTN/CNEN, Brazil

    Energy Technology Data Exchange (ETDEWEB)

    Menezes, Maria Angela de B.C., E-mail: menezes@cdtn.b [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil); Jacimovic, Radojko, E-mail: radojko.jacimovic@ijs.s [Jozef Stefan Institute, Ljubljana (Slovenia). Dept. of Environmental Sciences. Group for Radiochemistry and Radioecology

    2011-07-01

    In order to improve the application of the neutron activation technique at CDTN/CNEN, large sample instrumental neutron activation analysis (LS-INAA) is being established under the IAEA BRA 14798 and FAPEMIG APQ-01259-09 projects. This procedure usually requires special facilities for the activation as well as for the detection. However, the TRIGA Mark I IPR-R1 reactor at CDTN/CNEN has not been adapted for such irradiation, and the usual gamma spectrometry has been carried out. To start the establishment of the LS-INAA, a 5 g sample of the IAEA/Soil 7 reference material was analyzed by the k₀-standardized method. This paper addresses the detector efficiency over the volume source using KayWin v2.23 and ANGLE V3.0 software. (author)

  9. A study of diabetes mellitus within a large sample of Australian twins

    DEFF Research Database (Denmark)

    Condon, Julianne; Shaw, Joanne E; Luciano, Michelle

    2008-01-01

    Twin studies of diabetes mellitus can help elucidate genetic and environmental factors in etiology and can provide valuable biological samples for testing functional hypotheses, for example using expression and methylation studies of discordant pairs. We searched the volunteer Australian Twin Registry (19,387 pairs) for twins with diabetes using disease checklists from nine different surveys conducted from 1980-2000. After follow-up questionnaires to the twins and their doctors to confirm diagnoses, we eventually identified 46 pairs where one or both had type 1 diabetes (T1D), 113 pairs with type 2 diabetes (T2D), 41 female pairs with gestational diabetes (GD), 5 pairs with impaired glucose tolerance (IGT) and one pair with MODY. Heritabilities of T1D, T2D and GD were all high, but our samples did not have the power to detect effects of shared environment unless they were very large...

  10. Sampling of finite elements for sparse recovery in large scale 3D electrical impedance tomography

    International Nuclear Information System (INIS)

    Javaherian, Ashkan; Moeller, Knut; Soleimani, Manuchehr

    2015-01-01

    This study proposes a method to improve the performance of sparse recovery inverse solvers in 3D electrical impedance tomography (3D EIT), especially when the volume under study contains small-sized inclusions, e.g. 3D imaging of breast tumours. Initially, a quadratic regularized inverse solver is applied in a fast manner, with a stopping threshold much greater than the optimum. Assuming a fixed level of sparsity for the conductivity field, finite elements are then sampled by applying a compressive sensing (CS) algorithm to the rough, blurred estimate previously made by the quadratic solver. Finally, a sparse inverse solver is applied solely to the sampled finite elements, with the solution of the CS step as its initial guess. The results show the great potential of the proposed CS-based sparse recovery in improving the accuracy of sparse solutions to large-size 3D EIT. (paper)
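
    A schematic sketch of the three-step pipeline under the stated fixed-sparsity assumption. This is a toy stand-in: the actual CS selection and sparse solver are more elaborate, and the function name and parameters are assumptions.

        import numpy as np

        def two_stage_sparse_recovery(J, v, k, alpha=1e-2):
            """Rough Tikhonov estimate, then a re-solve restricted to k elements.

            J: (m, n) sensitivity (Jacobian) matrix, v: (m,) measured voltages,
            k: assumed number of active finite elements (fixed sparsity level).
            """
            n = J.shape[1]
            # Stage 1: quadratic (Tikhonov-regularized) solve, kept deliberately rough.
            rough = np.linalg.solve(J.T @ J + alpha * np.eye(n), J.T @ v)
            # Stage 2: keep the finite elements with the largest rough magnitudes
            # (standing in for the CS selection), then re-solve on that support.
            support = np.argsort(np.abs(rough))[-k:]
            xs, *_ = np.linalg.lstsq(J[:, support], v, rcond=None)
            x = np.zeros(n)
            x[support] = xs
            return x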

  11. Psychometric Evaluation of the Thought–Action Fusion Scale in a Large Clinical Sample

    Science.gov (United States)

    Meyer, Joseph F.; Brown, Timothy A.

    2015-01-01

    This study examined the psychometric properties of the 19-item Thought–Action Fusion (TAF) Scale, a measure of maladaptive cognitive intrusions, in a large clinical sample (N = 700). An exploratory factor analysis (n = 300) yielded two interpretable factors: TAF Moral (TAF-M) and TAF Likelihood (TAF-L). A confirmatory bifactor analysis was conducted on the second portion of the sample (n = 400) to account for possible sources of item covariance using a general TAF factor (subsuming TAF-M) alongside the TAF-L domain-specific factor. The bifactor model provided an acceptable fit to the sample data. Results indicated that global TAF was more strongly associated with a measure of obsessive-compulsiveness than measures of general worry and depression, and the TAF-L dimension was more strongly related to obsessive-compulsiveness than depression. Overall, results support the bifactor structure of the TAF in a clinical sample and its close relationship to its neighboring obsessive-compulsiveness construct. PMID:22315482

  12. Psychometric evaluation of the thought-action fusion scale in a large clinical sample.

    Science.gov (United States)

    Meyer, Joseph F; Brown, Timothy A

    2013-12-01

    This study examined the psychometric properties of the 19-item Thought-Action Fusion (TAF) Scale, a measure of maladaptive cognitive intrusions, in a large clinical sample (N = 700). An exploratory factor analysis (n = 300) yielded two interpretable factors: TAF Moral (TAF-M) and TAF Likelihood (TAF-L). A confirmatory bifactor analysis was conducted on the second portion of the sample (n = 400) to account for possible sources of item covariance using a general TAF factor (subsuming TAF-M) alongside the TAF-L domain-specific factor. The bifactor model provided an acceptable fit to the sample data. Results indicated that global TAF was more strongly associated with a measure of obsessive-compulsiveness than measures of general worry and depression, and the TAF-L dimension was more strongly related to obsessive-compulsiveness than depression. Overall, results support the bifactor structure of the TAF in a clinical sample and its close relationship to its neighboring obsessive-compulsiveness construct.

  13. A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids

    Energy Technology Data Exchange (ETDEWEB)

    Berres, Anne Sabine [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Adhinarayanan, Vignesh [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Turton, Terece [Univ. of Texas, Austin, TX (United States); Feng, Wu [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Rogers, David Honegger [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-05-12

    Large simulation data requires a lot of time and computational resources to compute, store, analyze, visualize, and run user studies on. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance energy consumption and the cognitive value of visualizations of the resulting data. This requires us to consider the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline using regular sampling as an example. The reasons for this choice are two-fold: using a simple example reduces unnecessary complexity, as we know what to expect from the results, and it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test we ran, and we conducted user studies on Amazon Mechanical Turk (AMT) for a range of different results we produced through sampling.
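
    For unstructured simulation output, the regular-sampling step can be sketched as nearest-neighbour resampling onto a uniform grid. The grid resolution and interpolation method below are illustrative assumptions, not the paper's settings.

        import numpy as np
        from scipy.interpolate import griddata

        def regular_sample(points, values, shape=(64, 64, 64)):
            """Resample an unstructured point cloud onto a regular grid.

            points: (n, 3) node coordinates, values: (n,) field values.
            The resulting regular grid is far cheaper to store, move, and
            visualize than the original unstructured data.
            """
            lo, hi = points.min(axis=0), points.max(axis=0)
            axes = [np.linspace(lo[i], hi[i], shape[i]) for i in range(3)]
            X, Y, Z = np.meshgrid(*axes, indexing="ij")
            return griddata(points, values, (X, Y, Z), method="nearest")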

  14. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    Science.gov (United States)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
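
    For orientation, a standard r-RESPA multiple time step update, which the resonance-free isokinetic integrators referenced here build upon, looks as follows. This is a generic sketch, not the published algorithms; force functions and parameters are placeholders.

        import numpy as np

        def respa_step(x, v, masses, f_fast, f_slow, dt_outer, n_inner):
            """One r-RESPA multiple time step update (velocity Verlet form).

            Slow (expensive) forces are applied with the large step dt_outer,
            fast forces with dt_outer / n_inner. Resonance between the two
            levels is what limits dt_outer in this plain scheme.
            """
            v = v + 0.5 * dt_outer * f_slow(x) / masses      # slow half-kick
            dt = dt_outer / n_inner
            for _ in range(n_inner):                         # fast sub-steps
                v = v + 0.5 * dt * f_fast(x) / masses
                x = x + dt * v
                v = v + 0.5 * dt * f_fast(x) / masses
            v = v + 0.5 * dt_outer * f_slow(x) / masses      # slow half-kick
            return x, v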

  15. The challenge for genetic epidemiologists: how to analyze large numbers of SNPs in relation to complex diseases.

    Science.gov (United States)

    Heidema, A Geert; Boer, Jolanda M A; Nagelkerke, Nico; Mariman, Edwin C M; van der A, Daphne L; Feskens, Edith J M

    2006-04-21

    Genetic epidemiologists have taken the challenge to identify genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods have been developed for analyzing the relation between large numbers of genetic and environmental predictors and disease or disease-related variables in genetic association studies. In this commentary we discuss logistic regression analysis, neural networks, including the parameter decreasing method (PDM) and genetic programming optimized neural networks (GPNN), and several non-parametric methods, which include the set association approach, the combinatorial partitioning method (CPM), the restricted partitioning method (RPM), the multifactor dimensionality reduction (MDR) method and the random forests approach. The relative strengths and weaknesses of these methods are highlighted. Logistic regression and neural networks can handle only a limited number of predictor variables, depending on the number of observations in the dataset. Therefore, they are less useful than the non-parametric methods for association studies with large numbers of predictor variables. GPNN, on the other hand, may be a useful approach for selecting and modelling important predictors, but its ability to select the important effects in the presence of large numbers of predictors needs to be examined. Both the set association approach and the random forests approach are able to handle a large number of predictors and are useful in reducing these predictors to a subset with an important contribution to disease. The combinatorial methods give more insight into combination patterns for sets of genetic and/or environmental predictor variables that may be related to the outcome variable. As the non-parametric methods have different strengths and weaknesses we conclude that to approach genetic association
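
    As an illustration of the random forests approach mentioned above, the following sketch ranks synthetic SNPs by importance. The data, effect sizes, and parameters are hypothetical, chosen only to make the example self-contained.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        n_subjects, n_snps = 500, 1000
        X = rng.integers(0, 3, size=(n_subjects, n_snps))   # genotypes coded 0/1/2

        # Synthetic disease status driven by two interacting SNPs (invented).
        risk = 0.8 * X[:, 10] + 0.6 * X[:, 42] + 0.4 * X[:, 10] * X[:, 42]
        y = (risk + rng.normal(0, 1, n_subjects) > 2.0).astype(int)

        forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
        top = np.argsort(forest.feature_importances_)[::-1][:10]
        print("top candidate SNPs:", top)   # SNPs 10 and 42 should rank high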

  16. Modelling high Reynolds number wall-turbulence interactions in laboratory experiments using large-scale free-stream turbulence.

    Science.gov (United States)

    Dogan, Eda; Hearst, R Jason; Ganapathisubramani, Bharathram

    2017-03-13

    A turbulent boundary layer subjected to free-stream turbulence is investigated in order to ascertain the scale interactions that dominate the near-wall region. The results are discussed in relation to a canonical high Reynolds number turbulent boundary layer because previous studies have reported considerable similarities between these two flows. Measurements were acquired simultaneously from four hot wires mounted to a rake which was traversed through the boundary layer. Particular focus is given to two main features of both canonical high Reynolds number boundary layers and boundary layers subjected to free-stream turbulence: (i) the footprint of the large scales in the logarithmic region on the near-wall small scales, specifically the modulating interaction between these scales, and (ii) the phase difference in amplitude modulation. The potential for a turbulent boundary layer subjected to free-stream turbulence to 'simulate' high Reynolds number wall-turbulence interactions is discussed. The results of this study have encouraging implications for future investigations of the fundamental scale interactions that take place in high Reynolds number flows as it demonstrates that these can be achieved at typical laboratory scales. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  17. Recreating Raven's: software for systematically generating large numbers of Raven-like matrix problems with normed properties.

    Science.gov (United States)

    Matzen, Laura E; Benz, Zachary O; Dixon, Kevin R; Posey, Jamie; Kroger, James K; Speed, Ann E

    2010-05-01

    Raven's Progressive Matrices is a widely used test for assessing intelligence and reasoning ability (Raven, Court, & Raven, 1998). Since the test is nonverbal, it can be applied to many different populations and has been used all over the world (Court & Raven, 1995). However, relatively few matrices are in the sets developed by Raven, which limits their use in experiments requiring large numbers of stimuli. For the present study, we analyzed the types of relations that appear in Raven's original Standard Progressive Matrices (SPMs) and created a software tool that can combine the same types of relations according to parameters chosen by the experimenter, to produce very large numbers of matrix problems with specific properties. We then conducted a norming study in which the matrices we generated were compared with the actual SPMs. This study showed that the generated matrices both covered and expanded on the range of problem difficulties provided by the SPMs.
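
    In the spirit of the described generator, a matrix problem can be produced by sampling a relation per attribute and applying it across each row. The attributes and relation types below are hypothetical simplifications, not the authors' software.

        import random

        ATTRS = {"sides": (3, 7), "count": (1, 4), "shade": (0, 3)}
        RELATIONS = ("constant", "increment")

        def make_problem(rng=random):
            """Generate one 3x3 matrix problem; difficulty grows with the
            number of attributes governed by non-constant relations."""
            rules = {a: rng.choice(RELATIONS) for a in ATTRS}
            matrix = []
            for _ in range(3):  # three rows
                start = {a: rng.randrange(lo, hi) for a, (lo, hi) in ATTRS.items()}
                row = [{a: start[a] + (col if rules[a] == "increment" else 0)
                        for a in ATTRS} for col in range(3)]
                matrix.append(row)
            return matrix, matrix[2][2], rules  # problem, correct answer, ground truth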

  18. A hard-to-read font reduces the framing effect in a large sample.

    Science.gov (United States)

    Korn, Christoph W; Ries, Juliane; Schalk, Lennart; Oganian, Yulia; Saalbach, Henrik

    2018-04-01

    How can apparent decision biases, such as the framing effect, be reduced? Intriguing findings in recent years indicate that foreign language settings reduce framing effects, which has been explained in terms of deeper cognitive processing. Because hard-to-read fonts have been argued to trigger deeper cognitive processing, so-called cognitive disfluency, we tested whether hard-to-read fonts reduce framing effects. We found no reliable evidence for an effect of hard-to-read fonts on four framing scenarios in a laboratory study (final N = 158) and an online study (N = 271). However, in a preregistered online study with a rather large sample (N = 732), a hard-to-read font reduced the framing effect in the classic "Asian disease" scenario (in a one-sided test). This suggests that hard-to-read fonts can modulate decision biases, albeit with rather small effect sizes. Overall, our findings stress the importance of large samples for the reliability and replicability of modulations of decision biases.

  19. In-situ high resolution particle sampling by large time sequence inertial spectrometry

    International Nuclear Information System (INIS)

    Prodi, V.; Belosi, F.

    1990-09-01

    In situ sampling is always preferred, when possible, because of the artifacts that can arise when the aerosol has to flow through long sampling lines. On the other hand, the magnitude of possible losses can be calculated with confidence only when the size distribution can be measured with sufficient precision and the losses are not too large. This makes it desirable to sample directly in the vicinity of the aerosol source or containment. High temperature sampling devices with detailed aerodynamic separation are extremely useful for this purpose. Several measurements are possible with the inertial spectrometer (INSPEC), but not with cascade impactors or cyclones. INSPEC (INertial SPECtrometer) has been conceived to measure the size distribution of aerosols by separating the particles while airborne according to their size and collecting them on a filter. It consists of a channel of rectangular cross-section with a 90 degree bend. Clean air is drawn through the channel, with a thin aerosol sheath injected close to the inner wall. Due to the bend, the particles are separated according to their size, leaving the original streamline by a distance which is a function of particle inertia and resistance, i.e. of aerodynamic diameter. The filter collects all the particles of the same aerodynamic size at the same distance from the inlet, in a continuous distribution. INSPEC particle separation at high temperature (up to 800 °C) has been tested with zirconia particles as calibration aerosols. The feasibility study was concerned with resolution and time-sequence sampling capabilities at high temperature (700 °C)

  20. Strong Law of Large Numbers for Countable Markov Chains Indexed by an Infinite Tree with Uniformly Bounded Degree

    Directory of Open Access Journals (Sweden)

    Bao Wang

    2014-01-01

    We study the strong law of large numbers for the frequencies of occurrence of states and ordered couples of states for countable Markov chains indexed by an infinite tree with uniformly bounded degree, which extends the corresponding results for countable Markov chains indexed by a Cayley tree and generalizes the related results for finite Markov chains indexed by a uniformly bounded tree.

  1. TRAN-STAT: statistics for environmental studies, Number 22. Comparison of soil-sampling techniques for plutonium at Rocky Flats

    International Nuclear Information System (INIS)

    Gilbert, R.O.; Bernhardt, D.E.; Hahn, P.B.

    1983-01-01

    A summary of a field soil sampling study conducted around the Rocky Flats, Colorado plant in May 1977 is presented. Several different soil sampling techniques that had been used in the area were applied at four different sites. One objective was to compare the average 239-240Pu concentration values obtained by the various soil sampling techniques used. There was also interest in determining whether there are differences in the reproducibility of the various techniques and how the techniques compared with the proposed EPA technique of sampling to 1 cm depth. Statistically significant differences in average concentrations between the techniques were found. The differences could be largely related to the differences in sampling depth, the primary physical variable distinguishing the techniques. The reproducibility of the techniques was evaluated by comparing coefficients of variation. Differences between coefficients of variation were not statistically significant. Average (median) coefficients ranged from 21 to 42 percent for the five sampling techniques. A laboratory study indicated that various sample treatment and particle sizing techniques could increase the concentration of plutonium in the less-than-10-micrometer size fraction by up to a factor of about 4 compared to the 2 mm size fraction
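
    The reproducibility comparison described here can be illustrated with coefficients of variation and a nonparametric test. The concentrations below are invented placeholders, not the Rocky Flats data; the technique names are hypothetical.

        import numpy as np
        from scipy import stats

        # Hypothetical replicate Pu-239/240 concentrations (arbitrary units)
        # from three sampling techniques at one site.
        techniques = {
            "1 cm depth":  np.array([0.92, 1.10, 0.85, 1.30, 0.95]),
            "5 cm depth":  np.array([0.40, 0.55, 0.35, 0.60, 0.48]),
            "ring method": np.array([0.70, 0.66, 0.90, 0.52, 0.75]),
        }

        for name, x in techniques.items():
            cv = 100 * x.std(ddof=1) / x.mean()   # coefficient of variation, %
            print(f"{name}: mean={x.mean():.2f}, CV={cv:.0f}%")

        # Nonparametric test for a difference in concentration between techniques:
        h, p = stats.kruskal(*techniques.values())
        print(f"Kruskal-Wallis H={h:.2f}, p={p:.3f}")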

  2. A new fractionator principle with varying sampling fractions: exemplified by estimation of synapse number using electron microscopy

    DEFF Research Database (Denmark)

    Witgen, Brent Marvin; Grady, M. Sean; Nyengaard, Jens Randel

    2006-01-01

    The quantification of ultrastructure has been permanently improved by the application of new stereological principles. Both precision and efficiency have been enhanced. Here we report for the first time a fractionator method that can be applied at the electron microscopy level. This new design...... the total object number using section sampling fractions based on the average thickness of sections of variable thicknesses. As an alternative, this approach estimates the correct particle section sampling probability based on an estimator of the Horvitz-Thompson type, resulting in a theoretically more...

  3. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System

    KAUST Repository

    Makki, Behrooz; Svensson, Tommy; Eriksson, Thomas; Alouini, Mohamed-Slim

    2015-01-01

    In this paper, we investigate the performance of point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of the transmit/receive antennas which are required to satisfy different outage probability constraints. We study the effect of the spatial correlation between the antennas on the system performance. Also, the required number of antennas is obtained for different fading conditions. Our results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 2015 IEEE.

  4. On the Required Number of Antennas in a Point-to-Point Large-but-Finite MIMO System

    KAUST Repository

    Makki, Behrooz

    2015-11-12

    In this paper, we investigate the performance of point-to-point multiple-input-multiple-output (MIMO) systems in the presence of a large but finite number of antennas at the transmitters and/or receivers. Considering the cases with and without hybrid automatic repeat request (HARQ) feedback, we determine the minimum numbers of transmit/receive antennas which are required to satisfy different outage probability constraints. We study the effect of the spatial correlation between the antennas on the system performance. Also, the required number of antennas is obtained for different fading conditions. Our results show that different outage requirements can be satisfied with relatively few transmit/receive antennas. © 2015 IEEE.

  5. Direct and large eddy simulation of turbulent heat transfer at very low Prandtl number: Application to lead–bismuth flows

    International Nuclear Information System (INIS)

    Bricteux, L.; Duponcheel, M.; Winckelmans, G.; Tiselj, I.; Bartosiewicz, Y.

    2012-01-01

    Highlights: ► We perform direct and hybrid large eddy simulations of high-Reynolds, low-Prandtl turbulent wall-bounded flows with heat transfer. ► We use state-of-the-art numerical methods with low energy dissipation and low dispersion. ► We use recent multiscale subgrid-scale models. ► Important results concerning the establishment of a near-wall modeling strategy in RANS are provided. ► The turbulent Prandtl number predicted by our simulations differs from that proposed by some correlations in the literature. - Abstract: This paper deals with the issue of modeling convective turbulent heat transfer of a liquid metal with a Prandtl number down to 0.01, which is the order of magnitude of lead–bismuth eutectic in a liquid metal reactor. This work presents a DNS (direct numerical simulation) and a LES (large eddy simulation) of a channel flow at two different Reynolds numbers, and the results are analyzed in the frame of best practice guidelines for RANS (Reynolds averaged Navier–Stokes) computations used in industrial applications. They primarily show that the turbulent Prandtl number concept should be used with care and that even recently proposed correlations may not be sufficient.
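
    For context, the turbulent Prandtl number is the ratio of eddy viscosity to eddy thermal diffusivity, Pr_t = ν_t/α_t. The sketch below evaluates one frequently quoted literature correlation (attributed to Kays; shown here as an assumption, not this paper's result) at lead-bismuth-like conditions, illustrating how far Pr_t can drift from the 0.85-0.9 often assumed for ordinary fluids:

    ```python
    # Kays correlation: Pr_t ~ 0.85 + 0.7/Pe_t, with Pe_t = Pr * (nu_t/nu).
    # Exactly the kind of correlation the authors caution against at low Pr.
    def prandtl_turbulent_kays(pr_molecular, nu_t_over_nu):
        pe_t = pr_molecular * nu_t_over_nu   # turbulent Peclet number
        return 0.85 + 0.7 / pe_t

    # lead-bismuth eutectic, Pr ~ 0.01, at a point where nu_t/nu = 30 (assumed):
    print(prandtl_turbulent_kays(0.01, 30.0))  # ~3.18, far above 0.85
    ```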

  6. Neurocognitive impairment in a large sample of homeless adults with mental illness.

    Science.gov (United States)

    Stergiopoulos, V; Cusi, A; Bekele, T; Skosireva, A; Latimer, E; Schütz, C; Fernando, I; Rourke, S B

    2015-04-01

    This study examines neurocognitive functioning in a large, well-characterized sample of homeless adults with mental illness and assesses demographic and clinical factors associated with neurocognitive performance. A total of 1500 homeless adults with mental illness enrolled in the At Home/Chez Soi study completed neuropsychological measures assessing speed of information processing, memory, and executive functioning. Sociodemographic and clinical data were also collected. Linear regression analyses were conducted to examine factors associated with neurocognitive performance. Approximately half of our sample met criteria for psychosis, major depressive disorder, and alcohol or substance use disorder, and nearly half had experienced severe traumatic brain injury. Overall, 72% of participants demonstrated cognitive impairment, including deficits in processing speed (48%), verbal learning (71%) and recall (67%), and executive functioning (38%). The overall statistical model explained 19.8% of the variance in the neurocognitive summary score, with reduced neurocognitive performance associated with older age, lower education, a first language other than English or French, Black or Other ethnicity, and the presence of psychosis. Homeless adults with mental illness experience impairment in multiple neuropsychological domains. Much of the variance in our sample's cognitive performance remains unexplained, highlighting the need for further research into the mechanisms underlying cognitive impairment in this population. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  7. Sampling data summary for the ninth run of the Large Slurry Fed Melter

    International Nuclear Information System (INIS)

    Sabatino, D.M.

    1983-01-01

    The ninth experimental run of the Large Slurry Fed Melter (LSFM) was completed June 27, 1983, after 63 days of continuous operation. During the run, the various melter and off-gas streams were sampled and analyzed to determine melter material balances and to characterize off-gas emissions. Sampling methods and preliminary results were reported earlier. The emphasis here is on the chemical analyses of the off-gas entrainment, deposits, and scrubber liquid. The significant sampling results from the run are summarized below: Flushing the Frit 165 with Frit 131 without bubbler agitation required 3 to 4.5 melter volumes. The off-gas cesium concentration during feeding was on the order of 36 to 56 μg Cs/scf. The cesium concentration in the melter plenum (based on air in-leakage only) was on the order of 110 to 210 μg Cs/scf. Using <1 micron as the cut point for semivolatile material, 60% of the chloride, 35% of the sodium and less than 5% of the manganese and iron in the entrainment are present as semivolatiles. A material balance on the scrubber tank solids shows good agreement with entrainment data. An overall cesium balance using LSFM-9 data and the DWPF production rate indicates an emission of 0.11 mCi/yr of cesium from the DWPF off-gas. This is a factor of 27 less than the maximum allowable 3 mCi/yr.

  8. Prevalence and correlates of problematic smartphone use in a large random sample of Chinese undergraduates.

    Science.gov (United States)

    Long, Jiang; Liu, Tie-Qiao; Liao, Yan-Hui; Qi, Chang; He, Hao-Yu; Chen, Shu-Bao; Billieux, Joël

    2016-11-17

    Smartphones are becoming a daily necessity for most undergraduates in Mainland China. Because the present scenario of problematic smartphone use (PSU) is largely unexplored, in the current study we aimed to estimate the prevalence of PSU and to screen suitable predictors of PSU among Chinese undergraduates in the framework of stress-coping theory. A sample of 1062 undergraduate smartphone users was recruited by means of a stratified cluster random sampling strategy between April and May 2015. The Problematic Cellular Phone Use Questionnaire was used to identify PSU. We evaluated five candidate risk factors for PSU using logistic regression analysis while controlling for demographic characteristics and specific features of smartphone use. The prevalence of PSU among Chinese undergraduates was estimated to be 21.3%. The risk factors for PSU were majoring in the humanities, high monthly income from the family (≥1500 RMB), serious emotional symptoms, high perceived stress, and perfectionism-related factors (high doubts about actions, high parental expectations). PSU among undergraduates appears to be ubiquitous and thus constitutes a public health issue in Mainland China. Although further longitudinal studies are required to test whether PSU is a transient phenomenon or a chronic and progressive condition, our study successfully identified socio-demographic and psychological risk factors for PSU. These results, obtained from a random and thus representative sample of undergraduates, open up new avenues for prevention and regulation policies.
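
    The risk-factor screening described here is a standard logistic regression with odds ratios. A self-contained sketch on simulated placeholder data (variable names and effect sizes are invented, not the study's dataset):

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 1062                                  # same size as the study's sample
    humanities = rng.integers(0, 2, n)        # binary predictor (0/1)
    stress = rng.normal(0.0, 1.0, n)          # continuous predictor (z-score)
    X = sm.add_constant(np.column_stack([humanities, stress]))

    # simulate outcomes from an assumed logistic model, then refit it
    logit_p = -1.5 + 0.5 * humanities + 0.6 * stress
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(float)

    fit = sm.Logit(y, X).fit(disp=0)
    print(np.exp(fit.params[1:]))             # odds ratios for the predictors
    ```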

  9. Evaluation of single and two-stage adaptive sampling designs for estimation of density and abundance of freshwater mussels in a large river

    Science.gov (United States)

    Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.

    2011-01-01

    Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.
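
    The design comparison rests on simulating clustered populations and recording the CV of the density estimate and the species detection probability for a fixed sample size. A toy version of that simulation, with all population parameters invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_quadrats, sample_size, reps = 1000, 50, 2000

    # negative binomial counts mimic spatial clustering (small k = more clustered);
    # p = k/(k + mu) gives mean density mu = 2 mussels per quadrat here
    population = rng.negative_binomial(n=0.3, p=0.3 / (0.3 + 2.0), size=n_quadrats)

    estimates, detections = [], 0
    for _ in range(reps):
        s = rng.choice(population, size=sample_size, replace=False)
        estimates.append(s.mean())
        detections += (s > 0).any()

    est = np.array(estimates)
    print(f"true density {population.mean():.2f}, "
          f"CV {100 * est.std(ddof=1) / est.mean():.1f}%, "
          f"detection {detections / reps:.3f}")
    ```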

  10. Characterisation of large zooplankton sampled with two different gears during midwinter in Rijpfjorden, Svalbard

    Directory of Open Access Journals (Sweden)

    Błachowiak-Samołyk Katarzyna

    2017-12-01

    During a midwinter cruise north of 80°N to Rijpfjorden, Svalbard, the composition and vertical distribution of the zooplankton community were studied using two different samplers: (1) a vertically hauled multiple plankton sampler (MPS; mouth area 0.25 m², mesh size 200 μm) and (2) a horizontally towed Methot Isaacs Kidd trawl (MIK; mouth area 3.14 m², mesh size 1500 μm). Our results revealed substantially higher species diversity (49 taxa) than if a single sampler (MPS: 38 taxa, MIK: 28) had been used. The youngest stage present (CIII) of Calanus spp. (including C. finmarchicus and C. glacialis) was sampled exclusively by the MPS, and the frequency of CIV copepodites in MPS samples was double that in MIK samples. In contrast, catches of the CV-CVI copepodites of Calanus spp. were substantially higher in the MIK samples (3-fold and 5-fold higher for adult males and females, respectively). The MIK sampling clearly showed that the highest abundances of all three Thysanoessa spp. were in the upper layers, although there was a tendency for the larger-sized euphausiids to occur deeper. Consistent patterns in the vertical distributions of the large zooplankters (e.g. ctenophores, euphausiids) collected by the MPS and MIK samplers provided more complete data on their abundances and sizes than would have been obtained by a single net. Possible mechanisms contributing to the observed patterns of distribution, e.g. high abundances of both Calanus spp. and their predators (ctenophores and chaetognaths) in the upper water layers during midwinter, are discussed.

  11. Reliability and statistical power analysis of cortical and subcortical FreeSurfer metrics in a large sample of healthy elderly.

    Science.gov (United States)

    Liem, Franziskus; Mérillat, Susan; Bezzola, Ladina; Hirsiger, Sarah; Philipp, Michel; Madhyastha, Tara; Jäncke, Lutz

    2015-03-01

    FreeSurfer is a tool to quantify cortical and subcortical brain anatomy automatically and noninvasively. Previous studies have reported reliability and statistical power analyses in relatively small samples or only selected one aspect of brain anatomy. Here, we investigated the reliability and statistical power of cortical thickness, surface area, volume, and the volume of subcortical structures in a large sample (N=189) of healthy elderly subjects (64+ years). Reliability (intraclass correlation coefficient) of cortical and subcortical parameters is generally high (cortical: ICCs>0.87, subcortical: ICCs>0.95). Surface-based smoothing increases reliability of cortical thickness maps, while it decreases reliability of cortical surface area and volume. Nevertheless, statistical power of all measures benefits from smoothing. When aiming to detect a 10% difference between groups, the number of subjects required to test effects with sufficient power over the entire cortex varies between cortical measures (cortical thickness: N=39, surface area: N=21, volume: N=81; 10 mm smoothing, power=0.8, α=0.05). For subcortical regions this number is between 16 and 76 subjects, depending on the region. We also demonstrate the advantage of within-subject designs over between-subject designs. Furthermore, we publicly provide a tool that allows researchers to perform a priori power analyses and sensitivity analyses to help evaluate previously published studies and to design future studies with sufficient statistical power. Copyright © 2014 Elsevier Inc. All rights reserved.
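
    An a priori power analysis of the kind the authors provide a tool for can be sketched with standard libraries. Below, the number of subjects per group needed to detect a 10% between-group difference with a two-sample t-test; the regional mean and SD are placeholders, not values from the paper:

    ```python
    from statsmodels.stats.power import TTestIndPower

    mean, sd = 2.5, 0.35                 # e.g. cortical thickness in mm (assumed)
    effect_size = (0.10 * mean) / sd     # Cohen's d for a 10% difference

    n = TTestIndPower().solve_power(effect_size=effect_size, power=0.8, alpha=0.05)
    print(f"d = {effect_size:.2f}, n per group = {n:.0f}")  # d=0.71 -> ~32/group
    ```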

  12. Development and validation of InnoQuant™, a sensitive human DNA quantitation and degradation assessment method for forensic samples using high copy number mobile elements Alu and SVA.

    Science.gov (United States)

    Pineda, Gina M; Montgomery, Anne H; Thompson, Robyn; Indest, Brooke; Carroll, Marion; Sinha, Sudhir K

    2014-11-01

    There is a constant need in forensic casework laboratories for an improved way to increase the first-pass success rate of forensic samples. Recent advances in mini-STR analysis, SNP, and Alu marker systems have now made it possible to analyze highly compromised samples, yet few tools are available that can simultaneously provide an assessment of quantity, inhibition, and degradation in a sample prior to genotyping. Currently there are several different approaches for fluorescence-based quantification assays which provide a measure of quantity and inhibition. However, a system which can also assess the extent of degradation in a forensic sample is a useful tool for DNA analysts. Possessing this information prior to genotyping allows an analyst to make more informed downstream decisions for the successful typing of a forensic sample without unnecessarily consuming DNA extract. Real-time PCR provides a reliable method for determining the amount and quality of amplifiable DNA in a biological sample. Alu elements are Short Interspersed Elements (SINEs), approximately 300 bp insertions which are distributed throughout the human genome in large copy number. The use of an internal primer to amplify a segment of an Alu element allows for human specificity as well as high sensitivity when compared to a single-copy target. The advantage of an Alu system is the presence of a large number (>1000) of fixed insertions in every human genome, which minimizes the individual-specific variation possible when using a multi-copy target quantification system. This study utilizes two independent retrotransposon genomic targets to obtain quantification of an 80 bp "short" DNA fragment and a 207 bp "long" DNA fragment in a degraded DNA sample in the multiplex system InnoQuant™. The ratio of the two quantitation values provides a "Degradation Index", a qualitative measure of a sample's extent of degradation. The Degradation Index was found to be predictive of the observed loss...
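
    The core of the assay's qualitative readout is a simple ratio of the two quantitation values. A minimal sketch; the function name, units and interpretation threshold are illustrative, not the kit's API:

    ```python
    def degradation_index(qty_short_80bp, qty_long_207bp):
        """DI = [short target]/[long target]; intact DNA gives DI close to 1,
        values well above 1 indicate degradation."""
        return qty_short_80bp / qty_long_207bp

    di = degradation_index(qty_short_80bp=1.20, qty_long_207bp=0.15)  # ng/uL (assumed)
    print(f"Degradation Index = {di:.1f}")  # 8.0 -> severely degraded sample
    ```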

  13. Ultrasensitive multiplex optical quantification of bacteria in large samples of biofluids

    Science.gov (United States)

    Pazos-Perez, Nicolas; Pazos, Elena; Catala, Carme; Mir-Simon, Bernat; Gómez-de Pedro, Sara; Sagales, Juan; Villanueva, Carlos; Vila, Jordi; Soriano, Alex; García de Abajo, F. Javier; Alvarez-Puebla, Ramon A.

    2016-01-01

    Efficient treatment of bacterial infections requires the fast and accurate recognition of pathogens, with concentrations as low as one per milliliter in the case of septicemia. Detecting and quantifying bacteria at such low concentrations is challenging and typically demands cultures of large samples of blood (∼1 milliliter) extending over 24–72 hours. This delay seriously compromises the health of patients. Here we demonstrate a fast microorganism optical detection system for the exhaustive identification and quantification of pathogens in volumes of biofluids with clinical relevance (∼1 milliliter) in minutes. We drive each type of bacteria to accumulate antibody-functionalized SERS-labelled silver nanoparticles. Particle aggregation on the bacteria membranes renders dense arrays of inter-particle gaps in which the Raman signal is exponentially amplified by several orders of magnitude relative to the dispersed particles. This enables multiplex identification of the microorganisms through molecule-specific spectral fingerprints. PMID:27364357

  14. A Survey for Spectroscopic Binaries in a Large Sample of G Dwarfs

    Science.gov (United States)

    Udry, S.; Mayor, M.; Latham, D. W.; Stefanik, R. P.; Torres, G.; Mazeh, T.; Goldberg, D.; Andersen, J.; Nordstrom, B.

    For more than 5 years now, the radial velocities of a large sample of G dwarfs (3,347 stars) have been monitored in order to obtain an unequaled set of orbital parameters for solar-type stars (~400 orbits, up to now). This survey provides a considerable improvement on the classical systematic study by Duquennoy and Mayor (1991; DM91). The observational part of the survey has been carried out as a collaboration between the Geneva Observatory, using the two CORAVEL spectrometers for the southern sky, and the CfA, at the Oak Ridge and Whipple Observatories, for the northern sky. As a first glance at these new results, we address in this contribution a special aspect of the orbital eccentricity distribution, namely the disappearance of the void observed in DM91 for quasi-circular orbits with periods larger than 10 days.

  15. Comprehensive metabolic characterization of serum osteocalcin action in a large non-diabetic sample.

    Directory of Open Access Journals (Sweden)

    Lukas Entenmann

    Recent research has suggested a metabolic implication of osteocalcin (OCN) in e.g. insulin sensitivity or steroid production. We used an untargeted metabolomics approach, analyzing plasma and urine samples of 931 participants using mass spectrometry, to reveal further metabolic actions of OCN. Several detected relations between OCN and metabolites were strongly linked to renal function; however, a number of associations remained significant after adjustment for renal function. Intermediates of proline catabolism were associated with OCN, reflecting its implication in bone metabolism. The association with kynurenine points towards a pro-inflammatory state with increasing OCN. Inverse relations with intermediates of branched-chain amino acid metabolism suggest a link to energy metabolism. Finally, urinary surrogate markers of smoking highlight its adverse effect on OCN metabolism. In conclusion, the present study provides a read-out of the metabolic actions of OCN. However, most of the associations were weak, arguing for a limited role of OCN in whole-body metabolism.

  16. Comprehensive metabolic characterization of serum osteocalcin action in a large non-diabetic sample.

    Science.gov (United States)

    Entenmann, Lukas; Pietzner, Maik; Artati, Anna; Hannemann, Anke; Henning, Ann-Kristin; Kastenmüller, Gabi; Völzke, Henry; Nauck, Matthias; Adamski, Jerzy; Wallaschofski, Henri; Friedrich, Nele

    2017-01-01

    Recent research has suggested a metabolic implication of osteocalcin (OCN) in e.g. insulin sensitivity or steroid production. We used an untargeted metabolomics approach, analyzing plasma and urine samples of 931 participants using mass spectrometry, to reveal further metabolic actions of OCN. Several detected relations between OCN and metabolites were strongly linked to renal function; however, a number of associations remained significant after adjustment for renal function. Intermediates of proline catabolism were associated with OCN, reflecting its implication in bone metabolism. The association with kynurenine points towards a pro-inflammatory state with increasing OCN. Inverse relations with intermediates of branched-chain amino acid metabolism suggest a link to energy metabolism. Finally, urinary surrogate markers of smoking highlight its adverse effect on OCN metabolism. In conclusion, the present study provides a read-out of the metabolic actions of OCN. However, most of the associations were weak, arguing for a limited role of OCN in whole-body metabolism.

  17. Automated, feature-based image alignment for high-resolution imaging mass spectrometry of large biological samples

    NARCIS (Netherlands)

    Broersen, A.; Liere, van R.; Altelaar, A.F.M.; Heeren, R.M.A.; McDonnell, L.A.

    2008-01-01

    High-resolution imaging mass spectrometry of large biological samples is the goal of several research groups. In mosaic imaging, the most common method, the large sample is divided into a mosaic of small areas that are then analyzed at high resolution. Here we present an automated alignment...

  18. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels

    Science.gov (United States)

    Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.

    2013-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10⁶). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.
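
    The bias/variance/MSE comparison can be made concrete: with repeated noisy reconstructions of a known phantom, the per-voxel mean squared error decomposes into squared bias plus variance. A toy sketch on synthetic data (not the VIP simulations):

    ```python
    import numpy as np

    def image_metrics(recons, truth):
        """recons: (n_realizations, ...) stack of reconstructions of one phantom."""
        mean_img = recons.mean(axis=0)
        bias2 = (mean_img - truth) ** 2
        var = recons.var(axis=0)
        mse = bias2 + var                  # per-voxel bias-variance decomposition
        return bias2.mean(), var.mean(), mse.mean()

    # toy example: 50 noisy 'reconstructions' of a flat phantom
    truth = np.ones((32, 32))
    recons = truth + 0.05 + 0.1 * np.random.default_rng(2).standard_normal((50, 32, 32))
    print(image_metrics(recons, truth))    # bias^2 ~ 0.0025, var ~ 0.01
    ```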

  19. A Multilayer Secure Biomedical Data Management System for Remotely Managing a Very Large Number of Diverse Personal Healthcare Devices

    Directory of Open Access Journals (Sweden)

    KeeHyun Park

    2015-01-01

    In this paper, a multilayer secure biomedical data management system for managing a very large number of diverse personal health devices (PHDs) is proposed. The system has the following characteristics: it supports international standard communication protocols to achieve interoperability; it is integrated in the sense that both a PHD communication system and a remote PHD management system work together as a single system; and it provides user/message authentication processes to securely transmit biomedical data measured by PHDs, based on the concept of a biomedical signature. Some experiments, including a stress test, have been conducted to show that the system proposed and constructed in this study performs very well even when a very large number of PHDs are used. For the stress test, up to 1,200 threads are made to represent the same number of PHD agents. The loss ratio of the ISO/IEEE 11073 messages in the normal system is as high as 14% when 1,200 PHD agents are connected. In contrast, no message loss occurs in the multilayered system proposed in this study, which demonstrates the superiority of the multilayered system over the normal system under heavy traffic.

  20. Toward Rapid Unattended X-ray Tomography of Large Planar Samples at 50-nm Resolution

    International Nuclear Information System (INIS)

    Rudati, J.; Tkachuk, A.; Gelb, J.; Hsu, G.; Feng, Y.; Pastrick, R.; Lyon, A.; Trapp, D.; Beetz, T.; Chen, S.; Hornberger, B.; Seshadri, S.; Kamath, S.; Zeng, X.; Feser, M.; Yun, W.; Pianetta, P.; Andrews, J.; Brennan, S.; Chu, Y. S.

    2009-01-01

    X-ray tomography at sub-50 nm resolution of small areas (~15 μm × 15 μm) is routinely performed with both laboratory and synchrotron sources. Optics and detectors for laboratory systems have been optimized to approach the theoretical efficiency limit. Limited by the availability of relatively low-brightness laboratory X-ray sources, exposure times for 3-D data sets at 50 nm resolution are still many hours, up to a full day. For bright synchrotron sources, however, the use of these optimized imaging systems results in extremely short exposure times, approaching live-camera speeds at the Advanced Photon Source at Argonne National Laboratory near Chicago in the US. These speeds make it possible to acquire a full tomographic dataset at 50 nm resolution in less than a minute of true X-ray exposure time. However, limits in the control and positioning system lead to large overhead that currently results in typical exposure times of ~15 min. We present our work on the reduction and elimination of system overhead and toward complete automation of the data acquisition process. The enhancements underway are primarily to boost the scanning rate, sample positioning speed, and illumination homogeneity to performance levels necessary for unattended tomography of large areas (many mm² in size). We present first results on this ongoing project.

  1. Pattern transfer on large samples using a sub-aperture reactive ion beam

    Energy Technology Data Exchange (ETDEWEB)

    Miessler, Andre; Mill, Agnes; Gerlach, Juergen W.; Arnold, Thomas [Leibniz-Institut fuer Oberflaechenmodifizierung (IOM), Permoserstrasse 15, D-04318 Leipzig (Germany)

    2011-07-01

    In comparison to pure Ar ion beam sputtering, Reactive Ion Beam Etching (RIBE) offers the main advantage of increased selectivity for different kinds of materials, owing to chemical contributions during material removal. RIBE is therefore an excellent candidate for pattern transfer applications. The goal of the present study is to apply a sub-aperture reactive ion beam for pattern transfer on large fused silica samples. In this context, the etching behavior in the ion beam periphery plays a decisive role. Using CF₄ as the reactive gas, XPS measurements of the modified surface expose impurities such as Ni, Fe and Cr, which belong to chemically eroded material from the plasma pot, as well as an accumulation of carbon (up to 40 atomic percent) in the beam periphery. The substitution of CF₄ by NF₃ as the reactive gas brings several benefits: more stable ion beam conditions, a reduction of the beam size down to a diameter of 5 mm, and a reduced amount of Ni, Fe and Cr contamination. However, the formation of a silicon nitride layer hampers the chemical contribution to the etching process. These side effects influence the transfer of trench structures into quartz by changing the selectivity, due to the altered chemical reaction of the modified resist layer. With this in mind, we investigate the pattern transfer on large fused silica plates using NF₃ sub-aperture RIBE.

  2. Detecting superior face recognition skills in a large sample of young British adults

    Directory of Open Access Journals (Sweden)

    Anna Katarzyna Bobak

    2016-09-01

    The Cambridge Face Memory Test Long Form (CFMT+) and Cambridge Face Perception Test (CFPT) are typically used to assess the face processing ability of individuals who believe they have superior face recognition skills. Previous large-scale studies have presented norms for the CFPT but not the CFMT+. However, previous research has also highlighted the necessity of establishing country-specific norms for these tests, indicating that norming data are required for both tests using young British adults. The current study addressed this issue in 254 British participants. In addition to providing the first norms for performance on the CFMT+ in any large sample, we also report the first UK-specific cut-off for superior face recognition on the CFPT. Further analyses identified a small advantage for females on both tests, and only small associations between objective face recognition skills and self-report measures. A secondary aim of the study was to examine the relationship between trait or social anxiety and face processing ability, and no associations were noted. The implications of these findings for the classification of super-recognisers are discussed.
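
    A common convention in this literature for flagging superior performance (stated here as a general assumption, not necessarily this paper's exact procedure) is a cut-off two standard deviations above the normative mean:

    ```python
    # hypothetical norms; plug in the published mean/SD for the test of interest
    def superiority_cutoff(mean_score, sd_score, k=2.0):
        """Score at or above mean + k*SD flags 'superior' performance."""
        return mean_score + k * sd_score

    print(superiority_cutoff(mean_score=70.0, sd_score=12.0))  # 94.0 (toy values)
    ```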

  3. Characteristic Performance Evaluation of a new SAGe Well Detector for Small and Large Sample Geometries

    International Nuclear Information System (INIS)

    Adekola, A.S.; Colaresi, J.; Douwen, J.; Jaederstroem, H.; Mueller, W.F.; Yocum, K.M.; Carmichael, K.

    2015-01-01

    ...concentrations compared to Traditional Well detectors. The SAGe Well detectors are compatible with Marinelli beakers and compete very well with semi-planar and coaxial detectors for large samples in many applications. (authors)

  4. Characteristic Performance Evaluation of a new SAGe Well Detector for Small and Large Sample Geometries

    Energy Technology Data Exchange (ETDEWEB)

    Adekola, A.S.; Colaresi, J.; Douwen, J.; Jaederstroem, H.; Mueller, W.F.; Yocum, K.M.; Carmichael, K. [Canberra Industries Inc., 800 Research Parkway, Meriden, CT 06450 (United States)

    2015-07-01

    ...concentrations compared to Traditional Well detectors. The SAGe Well detectors are compatible with Marinelli beakers and compete very well with semi-planar and coaxial detectors for large samples in many applications. (authors)

  5. Sampling based uncertainty analysis of 10% hot leg break LOCA in large scale test facility

    International Nuclear Information System (INIS)

    Sengupta, Samiran; Kraina, V.; Dubey, S. K.; Rao, R. S.; Gupta, S. K.

    2010-01-01

    A sampling-based uncertainty analysis was carried out to quantify uncertainty in the predictions of the best-estimate code RELAP5/MOD3.2 for a thermal hydraulic test (10% hot leg break LOCA) performed in the Large Scale Test Facility (LSTF) as part of an IAEA coordinated research project. The nodalisation of the test facility was qualified at both the steady state and transient level by systematically applying the procedures of the uncertainty methodology based on accuracy extrapolation (UMAE); uncertainty analysis was carried out using the Latin hypercube sampling (LHS) method to evaluate the uncertainty of ten input parameters. Sixteen output parameters were selected for uncertainty evaluation, and the uncertainty band between the 5th and 95th percentiles of the output parameters was evaluated. It was observed that the uncertainty band for the primary pressure during two-phase blowdown is larger than that of the remaining period. Similarly, a larger uncertainty band is observed for the accumulator injection flow during the reflood phase. Importance analysis was also carried out and standardized rank regression coefficients were computed to quantify the effect of each individual input parameter on the output parameters. It was observed that the break discharge coefficient is the most important uncertain parameter for the prediction of all the primary side parameters, and that the steam generator (SG) relief pressure setting is the most important parameter in predicting the SG secondary pressure.
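
    A compact sketch of that workflow: Latin hypercube sampling of the inputs, a 5th-95th percentile band on the output, and standardized rank regression coefficients (SRRC) for importance ranking. A stand-in function replaces the thermal-hydraulic code, and the bounds and model are invented:

    ```python
    import numpy as np
    from scipy.stats import qmc, rankdata

    seed, n_runs = 7, 200
    sampler = qmc.LatinHypercube(d=2, seed=seed)
    u = sampler.random(n_runs)
    # scale to physical ranges: break discharge coefficient, relief setpoint (MPa)
    x = qmc.scale(u, l_bounds=[0.6, 7.0], u_bounds=[1.1, 8.5])

    def model(x):  # placeholder where a RELAP5 run would go
        noise = np.random.default_rng(seed).standard_normal(len(x))
        return 15.0 - 4.0 * x[:, 0] + 0.8 * x[:, 1] + 0.3 * noise

    y = model(x)
    print("5th-95th percentile band:", np.percentile(y, [5, 95]))

    # SRRC: regress standardized rank-transformed output on rank-transformed inputs
    xr = np.column_stack([rankdata(x[:, j]) for j in range(x.shape[1])])
    yr = rankdata(y)
    xs = (xr - xr.mean(0)) / xr.std(0)
    ys = (yr - yr.mean()) / yr.std()
    srrc, *_ = np.linalg.lstsq(xs, ys, rcond=None)
    print("SRRC:", srrc)   # larger |value| = more influential input
    ```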

  6. Prevalence of learned grapheme-color pairings in a large online sample of synesthetes.

    Directory of Open Access Journals (Sweden)

    Nathan Witthoft

    In this paper we estimate the minimum prevalence of grapheme-color synesthetes with letter-color matches learned from an external stimulus, by analyzing a large sample of English-speaking grapheme-color synesthetes. We find that at least 6% (400/6588 participants) of the total sample learned many of their matches from a widely available colored-letter toy. Among those born in the decade after the toy began to be manufactured, the proportion of synesthetes with learned letter-color pairings approaches 15% for some 5-year periods. Among those born 5 years or more before it was manufactured, none have colors learned from the toy. Analysis of the letter-color matching data suggests the only difference between synesthetes with matches to the toy and those without is exposure to the stimulus. These data indicate that learning of letter-color pairings from external contingencies can occur in a substantial fraction of synesthetes, and are consistent with the hypothesis that grapheme-color synesthesia is a kind of conditioned mental imagery.

  7. Large magnitude gridded ionization chamber for impurity identification in alpha emitting radioactive samples

    International Nuclear Information System (INIS)

    Santos, R.N. dos.

    1992-01-01

    This paper describes a large gridded ionization chamber with high resolution used in the identification of α-emitting radioactive samples. The chamber and its electrodes are described in terms of their geometry and dimensions, and the best results are listed. Several α-emitting radioactive samples were used with a gas mixture of 90% argon plus 10% methane. We obtained α-energy spectra with a resolution of around 22.14 keV, in agreement with the best results available in the literature. The α-energy spectrum of ²³³U was obtained using this ionization chamber; the values found matched the adjustment curve of the chamber very well. Many additional measurements using different detector configurations were successfully performed to confirm the experimental results, leading to the identification of some elements of the ²³³U radioactive series. These results show the possibility of using this chamber for measurements of low-activity α contamination. (author)

  8. Large contribution of human papillomavirus in vaginal neoplastic lesions: a worldwide study in 597 samples.

    Science.gov (United States)

    Alemany, L; Saunier, M; Tinoco, L; Quirós, B; Alvarado-Cabrero, I; Alejo, M; Joura, E A; Maldonado, P; Klaustermeier, J; Salmerón, J; Bergeron, C; Petry, K U; Guimerà, N; Clavero, O; Murillo, R; Clavel, C; Wain, V; Geraets, D T; Jach, R; Cross, P; Carrilho, C; Molina, C; Shin, H R; Mandys, V; Nowakowski, A M; Vidal, A; Lombardi, L; Kitchener, H; Sica, A R; Magaña-León, C; Pawlita, M; Quint, W; Bravo, I G; Muñoz, N; de Sanjosé, S; Bosch, F X

    2014-11-01

    This work describes the human papillomavirus (HPV) prevalence and HPV type distribution in a large series of vaginal intraepithelial neoplasia (VAIN) grades 2/3 and vaginal cancers worldwide. We analysed 189 VAIN 2/3 and 408 invasive vaginal cancer cases collected from 31 countries from 1986 to 2011. After histopathological evaluation of sectioned formalin-fixed paraffin-embedded samples, HPV DNA detection and typing was performed using the SPF-10/DNA enzyme immunoassay (DEIA)/LiPA25 system (version 1). A subset of 146 vaginal cancers was tested for p16(INK4a) expression, a cellular surrogate marker for HPV transformation. Prevalence ratios were estimated using multivariate Poisson regression with robust variance. HPV DNA was detected in 74% (95% confidence interval (CI): 70-78%) of invasive cancers and in 96% (95% CI: 92-98%) of VAIN 2/3. Among cancers, the highest detection rates were observed in the warty-basaloid subtype of squamous cell carcinomas, and at younger ages. Concerning the type-specific distribution, HPV16 was the most frequently detected type in both precancerous and cancerous lesions (59%). p16(INK4a) overexpression was found in 87% of HPV DNA positive vaginal cancer cases. HPV was identified in a large proportion of invasive vaginal cancers and in almost all VAIN 2/3. HPV16 was the most common type detected. A large reduction in the burden of vaginal neoplastic lesions is expected among vaccinated cohorts. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Investigating sex differences in psychological predictors of snack intake among a large representative sample.

    Science.gov (United States)

    Adriaanse, Marieke A; Evers, Catharine; Verhoeven, Aukje A C; de Ridder, Denise T D

    2016-03-01

    It is often assumed that there are substantial sex differences in eating behaviour (e.g. women are more likely to be dieters or emotional eaters than men). The present study investigates this assumption in a large representative community sample while incorporating a comprehensive set of psychological eating-related variables. A community sample was employed to: (i) determine sex differences in (un)healthy snack consumption and psychological eating-related variables (e.g. emotional eating, intention to eat healthily); (ii) examine whether sex predicts energy intake from (un)healthy snacks over and above psychological variables; and (iii) investigate the relationship between psychological variables and snack intake for men and women separately. Snack consumption was assessed with a 7 d snack diary; the psychological eating-related variables with questionnaires. Participants were members of an Internet survey panel that is based on a true probability sample of households in the Netherlands. Men and women (n = 1292; 45% male), with a mean age of 51·23 (sd 16·78) years and a mean BMI of 25·62 (sd 4·75) kg/m². Results revealed that women consumed more healthy and fewer unhealthy snacks than men, and they scored higher than men on emotional and restrained eating. Women also more often reported appearance- and health-related concerns about their eating behaviour, but men and women did not differ with regard to external eating or their intentions to eat more healthily. The relationships between psychological eating-related variables and snack intake were similar for men and women, indicating that snack intake is predicted by the same variables for men and women. It is concluded that some small sex differences in psychological eating-related variables exist, but based on the present data there is no need for interventions aimed at promoting healthy eating to target different predictors according to sex.

  10. Assessing the Validity of Single-item Life Satisfaction Measures: Results from Three Large Samples

    Science.gov (United States)

    Cheung, Felix; Lucas, Richard E.

    2014-01-01

    Purpose: The present paper assessed the validity of single-item life satisfaction measures by comparing single-item measures to the Satisfaction with Life Scale (SWLS) - a more psychometrically established measure. Methods: Two large samples from Washington (N=13,064) and Oregon (N=2,277) recruited by the Behavioral Risk Factor Surveillance System (BRFSS) and a representative German sample (N=1,312) recruited by the German Socio-Economic Panel (GSOEP) were included in the present analyses. Single-item life satisfaction measures and the SWLS were correlated with theoretically relevant variables, such as demographics, subjective health, domain satisfaction, and affect. The correlations between the two life satisfaction measures and these variables were examined to assess the construct validity of single-item life satisfaction measures. Results: Consistent across three samples, single-item life satisfaction measures demonstrated a substantial degree of criterion validity with the SWLS (zero-order r = 0.62–0.64; disattenuated r = 0.78–0.80). Patterns of statistical significance for correlations with theoretically relevant variables were the same across single-item measures and the SWLS. Single-item measures did not produce systematically different correlations compared to the SWLS (average difference = 0.001–0.005). The average absolute difference in the magnitudes of the correlations produced by single-item measures and the SWLS was very small (average absolute difference = 0.015–0.042). Conclusions: Single-item life satisfaction measures performed very similarly to the multiple-item SWLS. Social scientists would get virtually identical answers to substantive questions regardless of which measure they use. PMID:24890827

  11. Assessing the validity of single-item life satisfaction measures: results from three large samples.

    Science.gov (United States)

    Cheung, Felix; Lucas, Richard E

    2014-12-01

    The present paper assessed the validity of single-item life satisfaction measures by comparing single-item measures to the Satisfaction with Life Scale (SWLS) - a more psychometrically established measure. Two large samples from Washington (N = 13,064) and Oregon (N = 2,277) recruited by the Behavioral Risk Factor Surveillance System and a representative German sample (N = 1,312) recruited by the German Socio-Economic Panel were included in the present analyses. Single-item life satisfaction measures and the SWLS were correlated with theoretically relevant variables, such as demographics, subjective health, domain satisfaction, and affect. The correlations between the two life satisfaction measures and these variables were examined to assess the construct validity of single-item life satisfaction measures. Consistent across three samples, single-item life satisfaction measures demonstrated a substantial degree of criterion validity with the SWLS (zero-order r = 0.62-0.64; disattenuated r = 0.78-0.80). Patterns of statistical significance for correlations with theoretically relevant variables were the same across single-item measures and the SWLS. Single-item measures did not produce systematically different correlations compared to the SWLS (average difference = 0.001-0.005). The average absolute difference in the magnitudes of the correlations produced by single-item measures and the SWLS was very small (average absolute difference = 0.015-0.042). Single-item life satisfaction measures performed very similarly to the multiple-item SWLS. Social scientists would get virtually identical answers to substantive questions regardless of which measure they use.
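
    The disattenuated correlations quoted in both versions of this record follow the classical correction for attenuation, r_true = r_xy / sqrt(rel_x * rel_y), where rel_x and rel_y are the reliabilities of the two measures. A one-function sketch; the reliability values below are assumptions chosen to reproduce the reported range:

    ```python
    def disattenuate(r_xy, rel_x, rel_y):
        """Classical correction for attenuation due to measurement error."""
        return r_xy / (rel_x * rel_y) ** 0.5

    # observed r = 0.63 with assumed reliabilities 0.70 (single item) and
    # 0.90 (SWLS) reproduces the ~0.79 disattenuated value reported:
    print(round(disattenuate(0.63, 0.70, 0.90), 2))  # 0.79
    ```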

  12. Economic and Humanistic Burden of Osteoarthritis: A Systematic Review of Large Sample Studies.

    Science.gov (United States)

    Xie, Feng; Kovic, Bruno; Jin, Xuejing; He, Xiaoning; Wang, Mengxiao; Silvestre, Camila

    2016-11-01

    Osteoarthritis (OA) consumes a significant amount of healthcare resources and impairs the health-related quality of life (HRQoL) of patients. Previous reviews have consistently found substantial variations in the costs of OA across studies and countries. The comparability between studies was poor, which limited the detection of true differences between studies. The objective was to review large-sample studies measuring the economic and/or humanistic burden of OA published since May 2006. We searched the MEDLINE and EMBASE databases using comprehensive search strategies to identify studies reporting the economic burden and HRQoL of OA. We included large-sample studies if they had a sample size ≥1000 and measured the cost and/or HRQoL of OA. Reviewers worked independently and in duplicate, performing a cross-check between groups to verify agreement. Within- and between-group consolidation was performed to resolve discrepancies, with outstanding discrepancies being resolved by an arbitrator. The Kappa statistic was reported to assess the agreement between the reviewers. All costs were adjusted in their original currency to year 2015 using published inflation rates for the country where the study was conducted, and then converted to 2015 US dollars. A total of 651 articles were screened by title and abstract, 94 were reviewed in full text, and 28 were included in the final review. The Kappa value was 0.794. Twenty studies reported direct costs and nine reported indirect costs. The total annual average direct costs varied from US$1442 to US$21,335, both in the USA. The annual average indirect costs ranged from US$238 to US$29,935. Twelve studies measured HRQoL using various instruments. The Short Form 12 version 2 scores ranged from 35.0 to 51.3 for the physical component, and from 43.5 to 55.0 for the mental component. Health utilities varied from 0.30 for severe OA to 0.77 for mild OA. Per-patient OA costs are considerable and a patient's quality of life remains poor. Variations in...

  13. Association between time perspective and organic food consumption in a large sample of adults.

    Science.gov (United States)

    Bénard, Marc; Baudry, Julia; Méjean, Caroline; Lairon, Denis; Giudici, Kelly Virecoulon; Etilé, Fabrice; Reach, Gérard; Hercberg, Serge; Kesse-Guyot, Emmanuelle; Péneau, Sandrine

    2018-01-05

    Organic food intake has risen in many countries during the past decades. Even though the motivations associated with such choices have been studied, the psychological traits preceding these motivations have rarely been explored. Consideration of future consequences (CFC) represents the extent to which individuals consider future versus immediate consequences of their current behaviors. Consequently, a future-oriented personality may be an important characteristic of organic food consumers. The objective was to analyze the association between CFC and organic food consumption in a large sample of the adult general population. In 2014, a sample of 27,634 participants from the NutriNet-Santé cohort study completed the CFC questionnaire and an Organic Food Frequency questionnaire. For each food group (17 groups), non-organic food consumers were compared to organic food consumers across quartiles of the CFC using multiple logistic regressions. Moreover, adjusted mean proportions of organic food intake out of total food intake were compared between quartiles of the CFC. Analyses were adjusted for socio-demographic, lifestyle and dietary characteristics. Participants with higher CFC were more likely to consume organic food (OR quartile 4 (Q4) vs. Q1 = 1.88, 95% CI: 1.62, 2.20). Overall, future-oriented participants were more likely to consume 14 food groups. The strongest associations were observed for starchy refined foods (OR = 1.78, 95% CI: 1.63, 1.94), and fruits and vegetables (OR = 1.74, 95% CI: 1.58, 1.92). The contribution of organic food intake out of total food intake was 33% higher in Q4 compared to Q1. More precisely, the contribution of organic food consumed was higher in Q4 for 16 food groups. The highest relative differences between Q4 and Q1 were observed for starchy refined foods (22%) and non-alcoholic beverages (21%). Seafood was the only food group without a significant difference. This study provides information on the personality of...

  14. Slepian simulation of distributions of plastic displacements of earthquake excited shear frames with a large number of stories

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Ditlevsen, Ove

    2005-01-01

    The object of study is a stationary Gaussian white noise excited plane multistory shear frame with a large number of rigid traverses. All the traverse-connecting columns have finite symmetrical yield limits except the columns in one or more of the bottom floors. The columns behave linearly elastically within the yield limits and ideally plastically outside these, without accumulating eigenstresses. Within the elastic domain the frame is modeled as a linearly damped oscillator. The white noise excitation acts on the mass of the first floor, making the movement of the elastic bottom floors simulate a ground...

  15. Atomic Number Dependence of Hadron Production at Large Transverse Momentum in 300 GeV Proton--Nucleus Collisions

    Science.gov (United States)

    Cronin, J. W.; Frisch, H. J.; Shochet, M. J.; Boymond, J. P.; Mermod, R.; Piroue, P. A.; Sumner, R. L.

    1974-07-15

    In an experiment at the Fermi National Accelerator Laboratory we have compared the production of large transverse momentum hadrons from targets of W, Ti, and Be bombarded by 300 GeV protons. The hadron yields were measured at 90 degrees in the proton-nucleon c.m. system with a magnetic spectrometer equipped with two Cerenkov counters and a hadron calorimeter. The production cross-sections have a dependence on the atomic number A that grows with p⊥, eventually leveling off proportional to A^1.1.

  16. Summary of experience from a large number of construction inspections; Wind power plant projects; Erfarenhetsaaterfoering fraan entreprenadbesiktningar

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Bertil; Holmberg, Rikard

    2010-08-15

    This report presents a summary of experience from a large number of construction inspections of wind power projects. The working method is based on the collection of construction experience in the form of questionnaires, supplemented by a number of in-depth interviews to understand in more detail what is perceived to be a problem and whether there were suggestions for improvements. The results in this report are based on inspection protocols from 174 wind turbines, which corresponds to about one-third of the power plants built in the time period. In total, the questionnaires included 4683 inspection remarks as well as about one hundred free-text comments. 52 of the 174 inspected power stations were rejected, corresponding to 30%. It has not been possible to identify any overrepresented type of remark as a main cause of rejection; rejection is usually based on a total number of remarks that is too large. The average number of remarks per power plant is 27, and most power stations have between 20 and 35 remarks. The most common remarks concern shortcomings in marking and documentation. These are easily corrected and may be regarded as less serious. There are, however, a number of remarks that are recurrent and quite serious, mainly regarding the gearbox, education and lightning protection. Usually these are also easily corrected, but the consequences if left uncorrected can be very large: either a shortened life of expensive components, e.g. oil problems in gearboxes, or an increased probability of serious accidents, e.g. a maladjusted lightning protection system. The report also presents comparisons between power stations of various construction periods, sizes, suppliers, geographies and topographies. The general conclusion is that the differences are small. The results of the evaluation of the questionnaires correspond well with the results of the in-depth interviews with clients. The problem that clients agreed upon as the greatest is the lack...

  17. The Effect of Sample Size and Data Numbering on Precision of Calibration Model to predict Soil Properties

    Directory of Open Access Journals (Sweden)

    H Mohamadi Monavar

    2017-10-01

    Introduction: Precision agriculture (PA) is a technology that measures and manages within-field variability, such as the physical and chemical properties of soil. The nondestructive and rapid VIS-NIR technology has detected significant correlations between reflectance spectra and the physical and chemical properties of soil. On the other hand, quantitative prediction of soil factors such as nitrogen, carbon, cation exchange capacity and clay content is very important in precision farming. The emphasis of this paper is on comparing different techniques for choosing calibration samples, such as random selection, selection based on chemical data, and selection based on PCA. Since increasing the number of samples is usually time-consuming and costly, the best sampling approach among the available methods was identified for building the calibration models. In addition, the effect of sample size on the accuracy of the calibration and validation models was analyzed. Materials and Methods: Two hundred and ten soil samples were collected from a cultivated farm located in Avarzaman in Hamedan province, Iran. The crop rotation was mostly potato and wheat. Samples were collected from a depth of 20 cm, passed through a 2 mm sieve and air-dried at room temperature. Chemical analysis was performed in the soil science laboratory of the Faculty of Agricultural Engineering, Bu-Ali Sina University, Hamadan, Iran. Two spectrometers (AvaSpec-ULS 2048 UV-VIS and FT-NIR 100N) were used to measure the spectral bands covering the UV-VIS and NIR regions (220-2200 nm). Each soil sample was uniformly tiled in a petri dish and scanned 20 times. The pre-processing methods of multivariate scatter correction (MSC) and baseline correction (BC) were then applied to the raw signals using Unscrambler software. The samples were divided into two groups: one group of 105 samples for calibration and the second group for validation. Each time, 15 samples were selected randomly and the accuracy of...

  18. Thermal neutron self-shielding correction factors for large sample instrumental neutron activation analysis using the MCNP code

    International Nuclear Information System (INIS)

    Tzika, F.; Stamatelatos, I.E.

    2004-01-01

    Thermal neutron self-shielding within large samples was studied using the Monte Carlo neutron transport code MCNP. The code enabled three-dimensional modeling of the actual source and geometry configuration, including the reactor core, graphite pile and sample. Neutron flux self-shielding correction factors derived for a set of materials of interest for large sample neutron activation analysis are presented and evaluated. The simulations were experimentally verified by measurements performed using activation foils. The results of this study can be applied to determine the neutron self-shielding factors of unknown samples from the thermal neutron fluxes measured at the surface of the sample.
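
    As a first-order illustration of what such a correction factor represents (the paper itself derives the factors from full 3-D MCNP transport, not from this formula), a common analytic approximation for a purely absorbing slab is f = (1 - exp(-Σt))/(Σt), with Σ the macroscopic absorption cross-section and t the slab thickness:

    ```python
    import math

    def slab_self_shielding(sigma_macroscopic_cm1, thickness_cm):
        """Volume-averaged flux depression for a purely absorbing slab."""
        tau = sigma_macroscopic_cm1 * thickness_cm   # optical thickness
        return (1.0 - math.exp(-tau)) / tau if tau > 0 else 1.0

    # e.g. a 3 cm-thick sample with Sigma = 0.1 cm^-1 (hypothetical values):
    print(f"f = {slab_self_shielding(0.1, 3.0):.3f}")  # ~0.864 -> 14% depression
    ```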

  19. On the Use of Educational Numbers: Comparative Constructions of Hierarchies by Means of Large-Scale Assessments

    Directory of Open Access Journals (Sweden)

    Daniel Pettersson

    2016-01-01

    ...later the growing importance of transnational agencies and international, regional and national assessments. How to reference this article: Pettersson, D., Popkewitz, T. S., & Lindblad, S. (2016). On the Use of Educational Numbers: Comparative Constructions of Hierarchies by Means of Large-Scale Assessments. Espacio, Tiempo y Educación, 3(1), 177-202. doi: http://dx.doi.org/10.14516/ete.2016.003.001.10

  20. Study of the relationship between peaks scattering Rayleigh to Compton ratio and effective atomic number in biological samples

    International Nuclear Information System (INIS)

    Pereira, Marcelo O.; Conti, Claudio de Carvalho; Anjos, Marcelino J.; Lopes, Ricardo T.

    2011-01-01

    The aim of this work was to develop a new method to correct for radiation absorption (via the mass attenuation coefficient curve) at low energy in compound samples (H3BO3, Na2CO3, CaCO3, Al2O3, K2SO4 and MgO), using radiation produced by an Am-241 gamma-ray source (59.54 keV); the method was also applied to certified biological samples of milk powder, hay powder and bovine liver (NIST 1577b). In addition, six methods of effective atomic number determination described in the literature were used together with the Rayleigh to Compton scattering ratio (R/C) in order to calculate the mass attenuation coefficient. The results obtained by the proposed method were compared with those obtained using the transmission method. The experimental results were in good agreement with the transmission values, suggesting that the method for correcting radiation absorption presented in this paper is adequate for biological samples. (author)
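
    A typical way such a Rayleigh-to-Compton calibration is applied in practice (a sketch under assumed data, not the paper's measured values): fit a power-law calibration curve of R/C against known effective atomic numbers for reference compounds, then invert it for an unknown sample:

    ```python
    import numpy as np

    z_eff_refs = np.array([7.4, 10.8, 12.0, 14.6, 16.4])       # known Z_eff values
    rc_refs    = np.array([0.021, 0.055, 0.072, 0.118, 0.155])  # measured R/C (invented)

    # fit R/C ~ a * Z_eff^b in log-log space
    b, ln_a = np.polyfit(np.log(z_eff_refs), np.log(rc_refs), 1)

    def z_eff_from_rc(rc):
        """Invert the fitted calibration curve for an unknown sample."""
        return float(np.exp((np.log(rc) - ln_a) / b))

    print(f"Z_eff ~ {z_eff_from_rc(0.09):.1f}")  # unknown sample with R/C = 0.09
    ```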

  1. Molecular characterization of Leptospira sp by multilocus variable number tandem repeat analysis (MLVA from clinical samples: a case report

    Directory of Open Access Journals (Sweden)

    Hélène Pailhoriès

    2015-08-01

    Leptospirosis is a zoonotic infection for which diagnosis is difficult, and it has appeared as a globally emerging infectious disease over recent years. Genotype determination often requires a Leptospira strain obtained by culture, which is a long and tedious technique. A method based on multilocus variable number tandem repeat analysis (MLVA) to determine the genotype of Leptospira interrogans, performed directly on blood or urine samples, is proposed. This method was applied to a fatal case of leptospirosis for which the geographical origin of infection was unknown. This technique allows a genotype to be obtained for L. interrogans even when cultures remain negative.

  2. A Genome-Wide Association Study in Large White and Landrace Pig Populations for Number Piglets Born Alive

    Science.gov (United States)

    Bergfelder-Drüing, Sarah; Grosse-Brinkhaus, Christine; Lind, Bianca; Erbe, Malena; Schellander, Karl; Simianer, Henner; Tholen, Ernst

    2015-01-01

    The number of piglets born alive (NBA) per litter is one of the most important traits in pig breeding due to its influence on production efficiency. It is difficult to improve NBA because the heritability of the trait is low and it is governed by a high number of loci with low to moderate effects. To clarify the biological and genetic background of NBA, genome-wide association studies (GWAS) were performed using 4,012 Large White and Landrace pigs from herdbook and commercial breeding companies in Germany (3), Austria (1) and Switzerland (1). The animals were genotyped with the Illumina PorcineSNP60 BeadChip. Because of population stratifications within and between breeds, clusters were formed using the genetic distances between the populations. Five clusters for each breed were formed and analysed by GWAS approaches. In total, 17 different significant markers affecting NBA were found in regions with known effects on female reproduction. No overlapping significant chromosome areas or QTL between the Large White and Landrace breeds were detected. PMID:25781935
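
    For readers unfamiliar with the basic machinery, a single-marker GWAS scan under an additive genotype model reduces to one regression per SNP plus a multiple-testing correction. The sketch below uses simulated data and omits the stratification adjustment (the clustering step) that the study applies:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n_animals, n_snps = 1000, 5000
      genotypes = rng.integers(0, 3, size=(n_animals, n_snps)).astype(float)
      # One causal SNP (index 42) with a small additive effect on NBA.
      nba = 12 + 0.4 * genotypes[:, 42] + rng.normal(0, 2, n_animals)

      pvals = np.empty(n_snps)
      for j in range(n_snps):
          X = sm.add_constant(genotypes[:, j])       # intercept + dosage (0/1/2)
          pvals[j] = sm.OLS(nba, X).fit().pvalues[1]

      threshold = 0.05 / n_snps                      # Bonferroni correction
      print("significant SNPs:", np.flatnonzero(pvals < threshold))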

  3. A genome-wide association study in large white and landrace pig populations for number piglets born alive.

    Directory of Open Access Journals (Sweden)

    Sarah Bergfelder-Drüing

    The number of piglets born alive (NBA) per litter is one of the most important traits in pig breeding due to its influence on production efficiency. It is difficult to improve NBA because the heritability of the trait is low and it is governed by a high number of loci with low to moderate effects. To clarify the biological and genetic background of NBA, genome-wide association studies (GWAS) were performed using 4,012 Large White and Landrace pigs from herdbook and commercial breeding companies in Germany (3), Austria (1) and Switzerland (1). The animals were genotyped with the Illumina PorcineSNP60 BeadChip. Because of population stratifications within and between breeds, clusters were formed using the genetic distances between the populations. Five clusters for each breed were formed and analysed by GWAS approaches. In total, 17 different significant markers affecting NBA were found in regions with known effects on female reproduction. No overlapping significant chromosome areas or QTL between the Large White and Landrace breeds were detected.

  4. Ice nucleating particles from a large-scale sampling network: insight into geographic and temporal variability

    Science.gov (United States)

    Schrod, Jann; Weber, Daniel; Thomson, Erik S.; Pöhlker, Christopher; Saturno, Jorge; Artaxo, Paulo; Curtius, Joachim; Bingemer, Heinz

    2017-04-01

    The number concentration of ice nucleating particles (INP) is an important, yet under-quantified, atmospheric parameter. The temporal and geographic extent of observations worldwide remains relatively small, with many regions of the world (even whole continents and oceans) almost completely unrepresented by observational data. Measurements at pristine sites are particularly rare, but all the more valuable, because such observations are necessary to estimate the pre-industrial baseline of aerosol- and cloud-related parameters that are needed to better understand the climate system and forecast future scenarios. As a partner in BACCHUS, we began in September 2014 to operate an INP measurement network of four sampling stations with a global geographic distribution. The stations are located at unique sites reaching from the Arctic to the equator: the Amazonian Tall Tower Observatory (ATTO) in Brazil, the Observatoire Volcanologique et Sismologique on the island of Martinique in the Caribbean Sea, the Zeppelin Observatory at Svalbard in the Norwegian Arctic, and the Taunus Observatory near Frankfurt, Germany. Since 2014, samples have been collected regularly by electrostatic precipitation of aerosol particles onto silicon substrates. The INP on the substrates are activated and analyzed in the isothermal static diffusion chamber FRIDGE at temperatures between -20°C and -30°C and relative humidity with respect to ice from 115% to 135%. Here we present data from the years 2015 and 2016 from this novel INP network and from selected campaign-based measurements at remote sites, including the Mt. Kenya GAW station. Acknowledgements: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) project BACCHUS under grant agreement No. 603445 and the Deutsche Forschungsgemeinschaft (DFG) under the Research Unit FOR 1525 (INUIT).

  5. Factors associated with self-reported number of teeth in a large national cohort of Thai adults

    Directory of Open Access Journals (Sweden)

    Yiengprugsawan Vasoontara

    2011-11-01

    Background: Oral health in later life results from an individual's lifelong accumulation of experiences at the personal, community and societal levels. There is little information relating oral health outcomes to risk factors in Asian middle-income settings such as Thailand today. Methods: Data derived from a cohort of 87,134 adults enrolled at Sukhothai Thammathirat Open University who completed self-administered questionnaires in 2005. Cohort members were aged between 15 and 87 years and resided throughout Thailand. This is a large study of self-reported number of teeth among Thai adults. Bivariate and multivariate logistic regressions were used to analyse factors associated with self-reported number of teeth. Results: After adjusting for covariates, being female (OR = 1.28), older age (OR = 10.6), having low income (OR = 1.45), having lower education (OR = 1.33), and being a lifetime urban resident (OR = 1.37) were statistically associated with having fewer teeth. Conclusions: This study addresses the gap in knowledge on factors associated with self-reported number of teeth. The promotion of healthy childhoods and adult lifestyles is an important public health intervention to increase tooth retention in middle and older age.
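
    Odds ratios of the kind quoted above come from a logistic regression of the binary outcome on the covariates, with OR = exp(coefficient). A minimal sketch on simulated data (the coefficients are chosen only to echo two of the reported ORs; none of this is the cohort's data):

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(6)
      n = 5000
      female = rng.binomial(1, 0.55, n)
      older  = rng.binomial(1, 0.30, n)
      logit  = -2.0 + 0.25 * female + 2.36 * older   # true ORs ~1.28 and ~10.6
      fewer_teeth = rng.binomial(1, 1 / (1 + np.exp(-logit)))

      X = sm.add_constant(np.column_stack([female, older]))
      fit = sm.Logit(fewer_teeth, X).fit(disp=0)
      print("ORs:", np.round(np.exp(fit.params[1:]), 2))  # [female, older age]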

  6. Study of a large rapid ashing apparatus and a rapid dry ashing method for biological samples and its application

    International Nuclear Information System (INIS)

    Jin Meisun; Wang Benli; Liu Wencang

    1988-04-01

    A large rapid dry-ashing apparatus and a rapid ashing method for biological samples are described. The apparatus consists of a specially made ashing furnace, a gas supply system and a temperature-programming control cabinet. Ashing experiments with the apparatus showed the following advantages: (1) high ashing speed and savings in electric energy; (2) the apparatus can ash a large number of samples at a time; (3) the ashed sample is pure white (or spotless), loose and easily soluble, with little residual char; (4) fresh samples can also be ashed directly. The apparatus is suitable for ashing large quantities of environmental samples containing trace elements at low levels of radioactivity, as well as medical, food and agricultural research samples.

  7. Replicability of time-varying connectivity patterns in large resting state fMRI samples.

    Science.gov (United States)

    Abrol, Anees; Damaraju, Eswar; Miller, Robyn L; Stephen, Julia M; Claus, Eric D; Mayer, Andrew R; Calhoun, Vince D

    2017-12-01

    The past few years have seen an emergence of approaches that leverage temporal changes in whole-brain patterns of functional connectivity (the chronnectome). In this chronnectome study, we investigate the replicability of the human brain's inter-regional coupling dynamics during rest by evaluating two different dynamic functional network connectivity (dFNC) analysis frameworks using 7 500 functional magnetic resonance imaging (fMRI) datasets. To quantify the extent to which the emergent functional connectivity (FC) patterns are reproducible, we characterize the temporal dynamics by deriving several summary measures across multiple large, independent age-matched samples. Reproducibility was demonstrated through the existence of basic connectivity patterns (FC states) amidst an ensemble of inter-regional connections. Furthermore, application of the methods to conservatively configured (statistically stationary, linear and Gaussian) surrogate datasets revealed that some of the studied state summary measures were indeed statistically significant and also suggested that this class of null model did not explain the fMRI data fully. This extensive testing of reproducibility of similarity statistics also suggests that the estimated FC states are robust against variation in data quality, analysis, grouping, and decomposition methods. We conclude that future investigations probing the functional and neurophysiological relevance of time-varying connectivity assume critical importance. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
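
    One widely used dFNC pipeline of the kind evaluated here computes functional connectivity in sliding windows and clusters the windowed connectivity matrices into recurring FC states. A toy sketch with random data standing in for component time courses (the window length and state count are arbitrary choices, not the paper's settings):

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(1)
      n_time, n_regions, win, step = 400, 10, 44, 2
      ts = rng.normal(size=(n_time, n_regions))  # stand-in for BOLD time courses

      windows = []
      for start in range(0, n_time - win + 1, step):
          c = np.corrcoef(ts[start:start + win].T)   # windowed FC matrix
          iu = np.triu_indices(n_regions, k=1)
          windows.append(c[iu])                      # vectorize upper triangle
      windows = np.array(windows)

      k = 5  # number of FC states (chosen a priori here)
      labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit(windows).labels_
      # Summary measures such as state occupancy follow from the label sequence:
      print("fraction of windows per state:",
            np.bincount(labels, minlength=k) / len(labels))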

  8. Using Co-Occurrence to Evaluate Belief Coherence in a Large Non Clinical Sample

    Science.gov (United States)

    Pechey, Rachel; Halligan, Peter

    2012-01-01

    Much of the recent neuropsychological literature on false beliefs (delusions) has tended to focus on individual or single beliefs, with few studies actually investigating the relationship or co-occurrence between different types of co-existing beliefs. Quine and Ullian proposed the hypothesis that our beliefs form an interconnected web in which the beliefs that make up that system must somehow “cohere” with one another and avoid cognitive dissonance. As such, beliefs are unlikely to be encapsulated (i.e., exist in isolation from other beliefs). The aim of this preliminary study was to empirically evaluate the probability of belief co-occurrence as one indicator of coherence in a large sample of subjects involving three different thematic sets of beliefs (delusion-like, paranormal & religious, and societal/cultural). Results showed that the degree of belief co-endorsement between beliefs within thematic groupings was greater than random occurrence, lending support to Quine and Ullian’s coherentist account. Some associations, however, were relatively weak, providing for well-established examples of cognitive dissonance. PMID:23155383

  9. Explaining health care expenditure variation: large-sample evidence using linked survey and health administrative data.

    Science.gov (United States)

    Ellis, Randall P; Fiebig, Denzil G; Johar, Meliyanni; Jones, Glenn; Savage, Elizabeth

    2013-09-01

    Explaining individual, regional, and provider variation in health care spending is of enormous value to policymakers but is often hampered by the lack of individual-level detail in universal public health systems, because budgeted spending is often not attributable to specific individuals. Even rarer is self-reported survey information that helps explain this variation in large samples. In this paper, we link a cross-sectional survey of 267,188 Australians aged 45 and over to a panel dataset of annual healthcare costs calculated from several years of hospital, medical and pharmaceutical records. We use these data to distinguish between cost variations due to health shocks and those that are intrinsic (fixed) to an individual over three years. We find that high fixed expenditures are positively associated with age, especially older males, poor health, obesity, smoking, cancer, stroke and heart conditions. Being foreign born, speaking a foreign language at home and low income are more strongly associated with higher time-varying expenditures, suggesting greater exposure to adverse health shocks. Copyright © 2013 John Wiley & Sons, Ltd.
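
    The fixed versus time-varying split can be illustrated with a simple panel decomposition: the person-level mean over the observation window is the fixed component, and yearly deviations from it are the shocks. A toy sketch on simulated costs (not the linked Australian data; the distribution and covariate step are placeholders):

      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(2)
      n, years = 500, 3
      df = pd.DataFrame({
          "person": np.repeat(np.arange(n), years),
          "year": np.tile(np.arange(years), n),
          "cost": rng.gamma(2.0, 800.0, n * years),  # skewed annual costs
      })

      df["fixed"] = df.groupby("person")["cost"].transform("mean")
      df["shock"] = df["cost"] - df["fixed"]  # time-varying part (health shocks)

      # Covariates (age, smoking, income, ...) would then be regressed
      # separately on the fixed and time-varying components.
      print(df.head())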

  10. Using co-occurrence to evaluate belief coherence in a large non clinical sample.

    Directory of Open Access Journals (Sweden)

    Rachel Pechey

    Much of the recent neuropsychological literature on false beliefs (delusions) has tended to focus on individual or single beliefs, with few studies actually investigating the relationship or co-occurrence between different types of co-existing beliefs. Quine and Ullian proposed the hypothesis that our beliefs form an interconnected web in which the beliefs that make up that system must somehow "cohere" with one another and avoid cognitive dissonance. As such, beliefs are unlikely to be encapsulated (i.e., exist in isolation from other beliefs). The aim of this preliminary study was to empirically evaluate the probability of belief co-occurrence as one indicator of coherence in a large sample of subjects involving three different thematic sets of beliefs (delusion-like, paranormal & religious, and societal/cultural). Results showed that the degree of belief co-endorsement between beliefs within thematic groupings was greater than random occurrence, lending support to Quine and Ullian's coherentist account. Some associations, however, were relatively weak, providing for well-established examples of cognitive dissonance.

  11. BROAD ABSORPTION LINE DISAPPEARANCE ON MULTI-YEAR TIMESCALES IN A LARGE QUASAR SAMPLE

    Energy Technology Data Exchange (ETDEWEB)

    Filiz Ak, N.; Brandt, W. N.; Schneider, D. P. [Department of Astronomy and Astrophysics, Pennsylvania State University, University Park, PA 16802 (United States); Hall, P. B. [Department of Physics and Astronomy, York University, 4700 Keele St., Toronto, Ontario M3J 1P3 (Canada); Anderson, S. F.; Gibson, R. R. [Astronomy Department, University of Washington, Seattle, WA 98195 (United States); Lundgren, B. F. [Department of Physics, Yale University, New Haven, CT 06511 (United States); Myers, A. D. [Department of Physics and Astronomy, University of Wyoming, Laramie, WY 82071 (United States); Petitjean, P. [Institut d'Astrophysique de Paris, Universite Paris 6, F-75014, Paris (France); Ross, Nicholas P. [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 92420 (United States); Shen, Yue [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, MS-51, Cambridge, MA 02138 (United States); York, D. G. [Department of Astronomy and Astrophysics, and Enrico Fermi Institute, University of Chicago, 5640 S. Ellis Ave., Chicago, IL 60637 (United States); Bizyaev, D.; Brinkmann, J.; Malanushenko, E.; Oravetz, D. J.; Pan, K.; Simmons, A. E. [Apache Point Observatory, P.O. Box 59, Sunspot, NM 88349-0059 (United States); Weaver, B. A., E-mail: nfilizak@astro.psu.edu [Center for Cosmology and Particle Physics, New York University, New York, NY 10003 (United States)

    2012-10-01

    We present 21 examples of C IV broad absorption line (BAL) trough disappearance in 19 quasars selected from systematic multi-epoch observations of 582 bright BAL quasars (1.9 < z < 4.5) by the Sloan Digital Sky Survey-I/II (SDSS-I/II) and SDSS-III. The observations span 1.1-3.9 yr rest-frame timescales, longer than have been sampled in many previous BAL variability studies. On these timescales, ≈2.3% of C IV BAL troughs disappear and ≈3.3% of BAL quasars show a disappearing trough. These observed frequencies suggest that many C IV BAL absorbers spend on average at most a century along our line of sight to their quasar. Ten of the 19 BAL quasars showing C IV BAL disappearance have apparently transformed from BAL to non-BAL quasars; these are the first reported examples of such transformations. The BAL troughs that disappear tend to be those with small-to-moderate equivalent widths, relatively shallow depths, and high outflow velocities. Other non-disappearing C IV BALs in those nine objects having multiple troughs tend to weaken when one of them disappears, indicating a connection between the disappearing and non-disappearing troughs, even for velocity separations as large as 10,000-15,000 km s⁻¹. We discuss possible origins of this connection including disk-wind rotation and changes in shielding gas.

  12. Quantitative Examination of a Large Sample of Supra-Arcade Downflows in Eruptive Solar Flares

    Science.gov (United States)

    Savage, Sabrina L.; McKenzie, David E.

    2011-01-01

    Sunward-flowing voids above post-coronal mass ejection flare arcades were first discovered using the soft X-ray telescope aboard Yohkoh and have since been observed with TRACE (extreme ultraviolet (EUV)), SOHO/LASCO (white light), SOHO/SUMER (EUV spectra), and Hinode/XRT (soft X-rays). Supra-arcade downflow (SAD) observations suggest that they are the cross-sections of thin flux tubes retracting from a reconnection site high in the corona. Supra-arcade downflowing loops (SADLs) have also been observed under similar circumstances and are theorized to be SADs viewed from a perpendicular angle. Although previous studies have focused on dark flows because they are easier to detect and complementary spectral data analysis reveals their magnetic nature, the signal intensity of the flows actually ranges from dark to bright. This implies that newly reconnected coronal loops can contain a range of hot plasma density. Previous studies have presented detailed SAD observations for a small number of flares. In this paper, we present a substantial catalog of SAD and SADL flares. We have applied semiautomatic detection software to several of these events to detect and track individual downflows, thereby providing statistically significant samples of parameters such as velocity, acceleration, area, magnetic flux, shrinkage energy, and reconnection rate. We discuss these measurements (particularly the unexpected result of the speeds being an order of magnitude slower than the assumed Alfvén speed), how they were obtained, and their potential impact on reconnection models.

  13. Statistical process control charts for attribute data involving very large sample sizes: a review of problems and solutions.

    Science.gov (United States)

    Mohammed, Mohammed A; Panesar, Jagdeep S; Laney, David B; Wilson, Richard

    2013-04-01

    The use of statistical process control (SPC) charts in healthcare is increasing. The primary purpose of SPC is to distinguish between common-cause variation, which is attributable to the underlying process, and special-cause variation, which is extrinsic to the underlying process. This is important because improvement under common-cause variation requires action on the process, whereas special-cause variation merits an investigation to first find the cause. Nonetheless, when dealing with attribute or count data (e.g., number of emergency admissions) involving very large sample sizes, traditional SPC charts often produce tight control limits with most of the data points appearing outside the control limits. This can give a false impression of common and special-cause variation, and potentially misguide the user into taking the wrong actions. Given the growing availability of large datasets from routinely collected databases in healthcare, there is a need to present a review of this problem (which arises because traditional attribute charts only consider within-subgroup variation) and its solutions (which consider within- and between-subgroup variation), which involve the use of the well-established measurements chart and the more recently developed attribute charts based on Laney's innovative approach. We close by making some suggestions for practice.
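
    Laney's approach rescales the usual p-chart limits by sigma_z, the short-term variation of the standardized proportions, so that very large subgroup sizes no longer force unrealistically tight limits. A minimal sketch with invented counts:

      import numpy as np

      counts = np.array([520, 610, 580, 700, 655, 590, 640, 710, 600, 630])
      sizes  = np.array([9800, 11200, 10100, 12500, 11800, 10400, 11000, 12900, 10600, 11100])

      p = counts / sizes
      p_bar = counts.sum() / sizes.sum()
      sigma_p = np.sqrt(p_bar * (1 - p_bar) / sizes)  # binomial sigma per subgroup

      z = (p - p_bar) / sigma_p                       # standardized proportions
      sigma_z = np.mean(np.abs(np.diff(z))) / 1.128   # moving-range estimate (d2 for n=2)

      ucl = p_bar + 3 * sigma_p * sigma_z             # Laney p'-chart limits
      lcl = p_bar - 3 * sigma_p * sigma_z
      print("sigma_z =", round(sigma_z, 2))
      print("points outside limits:", np.flatnonzero((p > ucl) | (p < lcl)))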

  14. Thinking about dying and trying and intending to die: results on suicidal behavior from a large Web-based sample.

    Science.gov (United States)

    de Araújo, Rafael M F; Mazzochi, Leonardo; Lara, Diogo R; Ottoni, Gustavo L

    2015-03-01

    Suicide is an important worldwide public health problem. The aim of this study was to characterize risk factors of suicidal behavior using a large Web-based sample. The data were collected by the Brazilian Internet Study on Temperament and Psychopathology (BRAINSTEP) from November 2010 to July 2011. Suicidal behavior was assessed by an instrument based on the Suicidal Behaviors Questionnaire. The final sample consisted of 48,569 volunteers (25.9% men) with a mean ± SD age of 30.7 ± 10.1 years. More than 60% of the sample reported having had at least a passing thought of killing themselves, and 6.8% of subjects had previously attempted suicide (64% unplanned). The demographic features with the highest risk of attempting suicide were female gender (OR = 1.82, 95% CI = 1.65 to 2.00); elementary school as highest education level completed (OR = 2.84, 95% CI = 2.48 to 3.25); being unable to work (OR = 5.32, 95% CI = 4.15 to 6.81); having no religion (OR = 2.08, 95% CI = 1.90 to 2.29); and, only for female participants, being married (OR = 1.19, 95% CI = 1.08 to 1.32) or divorced (OR = 1.66, 95% CI = 1.41 to 1.96). A family history of a suicide attempt and of a completed suicide showed the same increment in the risk of suicidal behavior. The higher the number of suicide attempts, the higher was the real intention to die (P < .05). Those who really wanted to die reported more emptiness/loneliness (OR = 1.58, 95% CI = 1.35 to 1.85), disconnection (OR = 1.54, 95% CI = 1.30 to 1.81), and hopelessness (OR = 1.74, 95% CI = 1.49 to 2.03), but their methods were not different from the methods of those who did not mean to die. This large Web survey confirmed results from previous studies on suicidal behavior and pointed out the relevance of the number of previous suicide attempts and of a positive family history, even for a noncompleted suicide, as important risk factors. © Copyright 2015 Physicians Postgraduate Press, Inc.

  15. A large-capacity sample-changer for automated gamma-ray spectroscopy

    International Nuclear Information System (INIS)

    Andeweg, A.H.

    1980-01-01

    An automatic sample-changer has been developed at the National Institute for Metallurgy for use in gamma-ray spectroscopy with a lithium-drifted germanium detector. The sample-changer features remote storage, which prevents cross-talk and reduces background. It has a capacity for 200 samples and a sample container that takes liquid or solid samples. The rotation and vibration of samples during counting ensure that powdered samples are compacted, and improve the precision and reproducibility of the counting geometry.

  16. Post-traumatic stress syndrome in a large sample of older adults: determinants and quality of life.

    Science.gov (United States)

    Lamoureux-Lamarche, Catherine; Vasiliadis, Helen-Maria; Préville, Michel; Berbiche, Djamal

    2016-01-01

    The aims of this study were to assess, in a sample of older adults consulting in primary care practices, the determinants of post-traumatic stress syndrome (PTSS) and its association with quality of life. Data came from a large sample of 1765 community-dwelling older adults who were waiting to receive health services in primary care clinics in the province of Quebec. PTSS was measured with the PTSS scale. Socio-demographic and clinical characteristics were used as potential determinants of PTSS. Quality of life was measured with the EuroQol-5D-3L (EQ-5D-3L) EQ-Visual Analog Scale and the Satisfaction With Your Life Scale. Multivariate logistic and linear regression models were used to study the presence of PTSS and different measures of health-related quality of life and quality of life as a function of the study variables. The six-month prevalence of PTSS was 11.0%. PTSS was associated with age, marital status, the number of chronic disorders and the presence of an anxiety disorder. PTSS was also associated with the EQ-5D-3L and the Satisfaction With Your Life Scale. PTSS is prevalent in patients consulting in primary care practices. Primary care physicians should be aware that PTSS is also associated with a decrease in quality of life, which can further negatively impact health status.

  17. Double sampling with multiple imputation to answer large sample meta-research questions: Introduction and illustration by evaluating adherence to two simple CONSORT guidelines

    Directory of Open Access Journals (Sweden)

    Patrice L. Capers

    2015-03-01

    BACKGROUND: Meta-research can involve manual retrieval and evaluation of research, which is resource intensive. The creation of high-throughput methods (e.g., search heuristics, crowdsourcing) has improved the feasibility of large meta-research questions, but possibly at the cost of accuracy. OBJECTIVE: To evaluate the use of double sampling combined with multiple imputation (DS+MI) to address meta-research questions, using as an example the adherence of PubMed entries to two simple Consolidated Standards of Reporting Trials (CONSORT) guidelines for titles and abstracts. METHODS: For the DS large sample, we retrieved all PubMed entries satisfying the filters: RCT; human; abstract available; and English language (n = 322,107). For the DS subsample, we randomly sampled 500 entries from the large sample. The large sample was evaluated with a lower-rigor, higher-throughput (RLOTHI) method using search heuristics, while the subsample was evaluated using a higher-rigor, lower-throughput (RHITLO) human rating method. Multiple imputation of the missing-completely-at-random RHITLO data for the large sample was informed by: RHITLO data from the subsample; RLOTHI data from the large sample; whether a study was an RCT; and country and year of publication. RESULTS: The RHITLO and RLOTHI methods in the subsample largely agreed (phi coefficients: title = 1.00, abstract = 0.92). Compliance with abstract and title criteria has increased over time, with non-US countries improving more rapidly. DS+MI logistic regression estimates were more precise than subsample estimates (e.g., 95% CI for change in title and abstract compliance by year: subsample RHITLO 1.050-1.174 vs. DS+MI 1.082-1.151). As evidence of improved accuracy, DS+MI coefficient estimates were closer to RHITLO than the large sample RLOTHI. CONCLUSIONS: Our results support our hypothesis that DS+MI results in improved precision and accuracy. This method is flexible and may provide a practical way to examine large corpora of …
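
    The DS+MI logic can be sketched end-to-end on simulated data: a noisy high-throughput rating exists for every record, gold-standard ratings only for a random subsample, and a model fitted on the subsample imputes the missing gold labels several times before pooling. Variable names and error rates below are ours, not the paper's:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(3)
      N, n_sub, M = 20000, 500, 20
      truth = rng.binomial(1, 0.6, N)                           # unobserved compliance
      rlothi = np.where(rng.random(N) < 0.9, truth, 1 - truth)  # noisy heuristic rating
      sub = rng.choice(N, n_sub, replace=False)                 # double-sampled subset

      # Imputation model: gold label given the cheap rating, fitted on the subsample.
      model = LogisticRegression().fit(rlothi[sub].reshape(-1, 1), truth[sub])
      p_hat = model.predict_proba(rlothi.reshape(-1, 1))[:, 1]

      estimates = []
      for _ in range(M):
          imputed = truth.astype(float).copy()
          mask = np.ones(N, dtype=bool)
          mask[sub] = False                               # gold labels missing here
          imputed[mask] = rng.binomial(1, p_hat[mask])    # stochastic imputation
          estimates.append(imputed.mean())

      print(f"pooled compliance estimate: {np.mean(estimates):.3f} "
            f"(true {truth.mean():.3f})")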

  18. Analysis of a large number of clinical studies for breast cancer radiotherapy: estimation of radiobiological parameters for treatment planning

    International Nuclear Information System (INIS)

    Guerrero, M; Li, X Allen

    2003-01-01

    Numerous studies of early-stage breast cancer treated with breast conserving surgery (BCS) and radiotherapy (RT) have been published in recent years. Both external beam radiotherapy (EBRT) and/or brachytherapy (BT) with different fractionation schemes are currently used. Present RT practice is largely based on empirical experience and lacks a reliable modelling tool to compare different RT modalities or to design new treatment strategies. The purpose of this work is to derive a plausible set of radiobiological parameters that can be used for RT treatment planning. The derivation is based on existing clinical data and is consistent with the analysis of a large number of published clinical studies on early-stage breast cancer. A large number of published clinical studies on the treatment of early breast cancer with BCS plus RT (including whole breast EBRT with or without a boost to the tumour bed, whole breast EBRT alone, brachytherapy alone) and RT alone are compiled and analysed. The linear quadratic (LQ) model is used in the analysis. Three of these clinical studies are selected to derive a plausible set of LQ parameters. The potential doubling time is set a priori in the derivation according to in vitro measurements from the literature. The impact of considering lower or higher T_pot is investigated. The effects of inhomogeneous dose distributions are considered using clinically representative dose volume histograms. The derived LQ parameters are used to compare a large number of clinical studies using different regimes (e.g., RT modality and/or different fractionation schemes with different prescribed doses) in order to validate their applicability. The values of the equivalent uniform dose (EUD) and biologically effective dose (BED) are used as a common metric to compare the biological effectiveness of each treatment regime. We have obtained a plausible set of radiobiological parameters for breast cancer. This set of parameters is consistent with in vitro …
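
    The BED metric used for such comparisons follows directly from the LQ model: BED = nd(1 + d/(alpha/beta)), minus a repopulation term ln(2)(T - T_k)/(alpha * T_pot) once the overall treatment time exceeds the kick-off time. The sketch below uses generic placeholder parameters, not the set derived in the paper:

      import numpy as np

      def bed(n_fx, d_gy, alpha_beta, alpha=0.3, t_days=None, t_kick=21.0, t_pot=14.0):
          """BED in Gy; repopulation correction applied only if overall time is given."""
          b = n_fx * d_gy * (1.0 + d_gy / alpha_beta)
          if t_days is not None and t_days > t_kick:
              b -= np.log(2.0) * (t_days - t_kick) / (alpha * t_pot)
          return b

      # Compare conventional whole-breast EBRT with a hypofractionated scheme
      # (illustrative only), assuming alpha/beta = 4 Gy for breast tumour:
      print("50 Gy / 25 fx:", round(bed(25, 2.0, 4.0, t_days=33), 1), "Gy")
      print("42.5 Gy / 16 fx:", round(bed(16, 2.656, 4.0, t_days=22), 1), "Gy")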

  19. Small genomes and large seeds: chromosome numbers, genome size and seed mass in diploid Aesculus species (Sapindaceae).

    Science.gov (United States)

    Krahulcová, Anna; Trávnícek, Pavel; Krahulec, František; Rejmánek, Marcel

    2017-04-01

    Aesculus L. (horse chestnut, buckeye) is a genus of 12-19 extant woody species native to the temperate Northern Hemisphere. This genus is known for unusually large seeds among angiosperms. While chromosome counts are available for many Aesculus species, only one has had its genome size measured. The aim of this study is to provide more genome size data and analyse the relationship between genome size and seed mass in this genus. Chromosome numbers in root tip cuttings were confirmed for four species and reported for the first time for three additional species. Flow cytometric measurements of 2C nuclear DNA values were conducted on eight species, and mean seed mass values were estimated for the same taxa. The same chromosome number, 2n = 40, was determined in all investigated taxa. Original measurements of 2C values for seven Aesculus species (eight taxa), added to just one reliable datum for A. hippocastanum, confirmed the notion that the genome size in this genus with relatively large seeds is surprisingly low, ranging from 0.955 pg 2C⁻¹ in A. parviflora to 1.275 pg 2C⁻¹ in A. glabra var. glabra. The chromosome number of 2n = 40 seems to be conclusively the universal 2n number for non-hybrid species in this genus. Aesculus genome sizes are relatively small, not only within its own family, Sapindaceae, but also within woody angiosperms. The genome sizes seem to be distinct and non-overlapping among the four major Aesculus clades. These results provide extra support for the most recent reconstruction of Aesculus phylogeny. The correlation between the 2C values and seed masses in examined Aesculus species is slightly negative and not significant. However, when the four major clades are treated separately, there is a consistent positive association between larger genome size and larger seed mass within individual lineages. © The Author 2017. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved.

  20. Evaluation of two sweeping methods for estimating the number of immature Aedes aegypti (Diptera: Culicidae) in large containers

    Directory of Open Access Journals (Sweden)

    Margareth Regina Dibo

    2013-07-01

    Introduction: Here, we evaluated sweeping methods used to estimate the number of immature Aedes aegypti in large containers. Methods: III/IV instars and pupae at a 9:1 ratio were placed in three types of containers, each one with three different water levels. Two sweeping methods were tested: water-surface sweeping and five-sweep netting. The data were analyzed using linear regression. Results: The five-sweep netting technique was more suitable for drums and water tanks, while the water-surface sweeping method provided the best results for swimming pools. Conclusions: Both sweeping methods are useful tools in epidemiological surveillance programs for the control of Aedes aegypti.
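
    The linear-regression step amounts to calibrating sweep catches against known stocked totals, so that field counts can be converted into abundance estimates. A minimal sketch with invented numbers:

      import numpy as np

      stocked   = np.array([100, 200, 300, 400, 500, 600])  # known totals per container
      per_sweep = np.array([9, 21, 28, 43, 49, 62])         # mean catch per sweep

      slope, intercept = np.polyfit(per_sweep, stocked, 1)  # least-squares line
      print(f"estimated total = {intercept:.1f} + {slope:.1f} * (catch per sweep)")
      print("field sample with 35 per sweep ->",
            round(intercept + slope * 35), "immatures")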

  1. Production of large number of water-cooled excitation coils with improved techniques for multipole magnets of INDUS -2

    International Nuclear Information System (INIS)

    Karmarkar, M.G.; Sreeramulu, K.; Kulshreshta, P.K.

    2003-01-01

    Accelerator multipole magnets are characterized by high field gradients and are powered with relatively high-current excitation coils. Due to space limitations in the magnet core/poles, a compact coil geometry is also necessary. The coils are made of several insulated turns of hollow copper conductor. The high current densities in these coils require cooling with low-conductivity water. Additionally, during operation they are subjected to thermal fatigue stresses. A large number of coils (Qty: 650) having different geometries were required for all multipole magnets, such as quadrupoles (QP) and sextupoles (SP). Improved techniques for winding, insulation and epoxy consolidation were developed in-house at the M D Lab, and all coils have been successfully made. The improved technology and production techniques adopted for the magnet coils, and their inspection, are briefly discussed in this paper. (author)

  2. Large numbers of cold positronium atoms created in laser-selected Rydberg states using resonant charge exchange

    CERN Document Server

    McConnell, R; Kolthammer, WS; Richerme, P; Müllers, A; Walz, J; Grzonka, D; Zielinski, M; Fitzakerley, D; George, MC; Hessels, EA; Storry, CH; Weel, M

    2016-01-01

    Lasers are used to control the production of highly excited positronium atoms (Ps*). The laser light excites Cs atoms to Rydberg states that have a large cross section for resonant charge-exchange collisions with cold trapped positrons. For each trial with 30 million trapped positrons, more than 700 000 of the created Ps* have trajectories near the axis of the apparatus, and are detected using Stark ionization. This number of Ps* is 500 times higher than realized in an earlier proof-of-principle demonstration (2004 Phys. Lett. B 597 257). A second charge exchange of these near-axis Ps* with trapped antiprotons could be used to produce cold antihydrogen, and this antihydrogen production is expected to be increased by a similar factor.

  3. A Theory of Evolving Natural Constants Based on the Unification of General Theory of Relativity and Dirac's Large Number Hypothesis

    International Nuclear Information System (INIS)

    Peng Huanwu

    2005-01-01

    Taking Dirac's large number hypothesis as true, we have shown [Commun. Theor. Phys. (Beijing, China) 42 (2004) 703] the inconsistency of applying Einstein's theory of general relativity with fixed gravitation constant G to cosmology, and a modified theory for varying G is found, which reduces to Einstein's theory outside the gravitating body for phenomena of short duration in small distances, thereby agreeing with all the crucial tests formerly supporting Einstein's theory. The modified theory, when applied to the usual homogeneous cosmological model, gives rise to a variable cosmological tensor term determined by the derivatives of G, in place of the cosmological constant term usually introduced ad hoc. Without any free parameter the theoretical Hubble's relation obtained from the modified theory seems not in contradiction to observations, as Dr. Wang's preliminary analysis of the recent data indicates [Commun. Theor. Phys. (Beijing, China) 42 (2004) 703]. As a complement to Commun. Theor. Phys. (Beijing, China) 42 (2004) 703 we shall study in this paper the modification of electromagnetism due to Dirac's large number hypothesis in more detail to show that the approximation of geometric optics still leads to null geodesics for the path of light, and that the general relation between the luminosity distance and the proper geometric distance is still valid in our theory as in Einstein's theory, and give the equations for the homogeneous cosmological model involving matter plus electromagnetic radiation. Finally we consider the impact of the modification to quantum mechanics and statistical mechanics, and arrive at a systematic theory of evolving natural constants including Planck's ħ as well as Boltzmann's k_B by finding out their cosmologically combined counterparts with factors of appropriate powers of G that may remain truly constant to cosmologically long time.

  4. A very large number of GABAergic neurons are activated in the tuberal hypothalamus during paradoxical (REM) sleep hypersomnia.

    Directory of Open Access Journals (Sweden)

    Emilie Sapin

    We recently discovered, using Fos immunostaining, that the tuberal and mammillary hypothalamus contain a massive population of neurons specifically activated during paradoxical sleep (PS) hypersomnia. We further showed that some of the activated neurons of the tuberal hypothalamus express the melanin-concentrating hormone (MCH) neuropeptide and that icv injection of MCH induces a strong increase in PS quantity. However, the chemical nature of the majority of the neurons activated during PS had not been characterized. To determine whether these neurons are GABAergic, we combined in situ hybridization of GAD67 mRNA with immunohistochemical detection of Fos in control, PS-deprived and PS-hypersomniac rats. We found that 74% of the very large population of Fos-labeled neurons located in the tuberal hypothalamus after PS hypersomnia were GAD-positive. We further demonstrated, combining MCH immunohistochemistry and GAD67 in situ hybridization, that 85% of the MCH neurons were also GAD-positive. Finally, based on the number of Fos-ir/GAD+, Fos-ir/MCH+, and GAD+/MCH+ double-labeled neurons counted from three sets of double-staining, we uncovered that around 80% of the large number of Fos-ir/GAD+ neurons located in the tuberal hypothalamus after PS hypersomnia do not contain MCH. Based on these and previous results, we propose that the non-MCH Fos/GABAergic neuronal population could be involved in PS induction and maintenance, while the Fos/MCH/GABAergic neurons could be involved in the homeostatic regulation of PS. Further investigations will be needed to corroborate this original hypothesis.

  5. Association between genetic variation in a region on chromosome 11 and schizophrenia in large samples from Europe

    DEFF Research Database (Denmark)

    Rietschel, M; Mattheisen, M; Degenhardt, F

    2012-01-01

    …the recruitment of very large samples of patients and controls (that is, tens of thousands), or large, potentially more homogeneous samples that have been recruited from confined geographical areas using identical diagnostic criteria. Applying the latter strategy, we performed a genome-wide association study (GWAS) … between emotion regulation and cognition that is structurally and functionally abnormal in SCZ and bipolar disorder. Molecular Psychiatry advance online publication, 12 July 2011; doi:10.1038/mp.2011.80.

  6. Religion and the Unmaking of Prejudice toward Muslims: Evidence from a Large National Sample

    Science.gov (United States)

    Shaver, John H.; Troughton, Geoffrey; Sibley, Chris G.; Bulbulia, Joseph A.

    2016-01-01

    In the West, anti-Muslim sentiments are widespread. It has been theorized that inter-religious tensions fuel anti-Muslim prejudice, yet previous attempts to isolate sectarian motives have been inconclusive. Factors contributing to ambiguous results are: (1) failures to assess and adjust for multi-level denomination effects; (2) inattention to demographic covariates; (3) inadequate methods for comparing anti-Muslim prejudice relative to other minority group prejudices; and (4) ad hoc theories for the mechanisms that underpin prejudice and tolerance. Here we investigate anti-Muslim prejudice using a large national sample of non-Muslim New Zealanders (N = 13,955) who responded to the 2013 New Zealand Attitudes and Values Study. We address previous shortcomings by: (1) building Bayesian multivariate, multi-level regression models with denominations modeled as random effects; (2) including high-resolution demographic information that adjusts for factors known to influence prejudice; (3) simultaneously evaluating the relative strength of anti-Muslim prejudice by comparing it to anti-Arab prejudice and anti-immigrant prejudice within the same statistical model; and (4) testing predictions derived from the Evolutionary Lag Theory of religious prejudice and tolerance. This theory predicts that in countries such as New Zealand, with historically low levels of conflict, religion will tend to increase tolerance generally, and extend to minority religious groups. Results show that anti-Muslim and anti-Arab sentiments are confounded, widespread, and substantially higher than anti-immigrant sentiments. In support of the theory, the intensity of religious commitments was associated with a general increase in tolerance toward minority groups, including a poorly tolerated religious minority group: Muslims. Results clarify religion’s power to enhance tolerance in peaceful societies that are nevertheless afflicted by prejudice. PMID:26959976

  7. Personality traits and eating habits in a large sample of Estonians.

    Science.gov (United States)

    Mõttus, René; Realo, Anu; Allik, Jüri; Deary, Ian J; Esko, Tõnu; Metspalu, Andres

    2012-11-01

    Diet has health consequences, which makes knowing the psychological correlates of dietary habits important. Associations between dietary habits and personality traits were examined in a large sample of Estonians (N = 1,691) aged between 18 and 89 years. Dietary habits were measured using 11 items, which grouped into two factors reflecting (a) health aware and (b) traditional dietary patterns. The health aware diet factor was defined by eating more cereal and dairy products, fish, vegetables and fruits. The traditional diet factor was defined by eating more potatoes, meat and meat products, and bread. Personality was assessed by participants themselves and by people who knew them well. The questionnaire used was the NEO Personality Inventory-3, which measures the Five-Factor Model personality broad traits of Neuroticism, Extraversion, Openness, Agreeableness, and Conscientiousness, along with six facets for each trait. Gender, age and educational level were controlled for. Higher scores on the health aware diet factor were associated with lower Neuroticism, and higher Extraversion, Openness and Conscientiousness (effect sizes were modest: r = 0.11 to 0.17 in self-ratings, and r = 0.08 to 0.11 in informant-ratings, ps < 0.01 or lower). Higher scores on the traditional diet factor were related to lower levels of Openness (r = -0.14 and -0.13, p < 0.001, self- and informant-ratings, respectively). Endorsement of healthy and avoidance of traditional dietary items are associated with people's personality trait levels, especially higher Openness. The results may inform dietary interventions with respect to possible barriers to diet change.

  8. Religion and the Unmaking of Prejudice toward Muslims: Evidence from a Large National Sample.

    Science.gov (United States)

    Shaver, John H; Troughton, Geoffrey; Sibley, Chris G; Bulbulia, Joseph A

    2016-01-01

    In the West, anti-Muslim sentiments are widespread. It has been theorized that inter-religious tensions fuel anti-Muslim prejudice, yet previous attempts to isolate sectarian motives have been inconclusive. Factors contributing to ambiguous results are: (1) failures to assess and adjust for multi-level denomination effects; (2) inattention to demographic covariates; (3) inadequate methods for comparing anti-Muslim prejudice relative to other minority group prejudices; and (4) ad hoc theories for the mechanisms that underpin prejudice and tolerance. Here we investigate anti-Muslim prejudice using a large national sample of non-Muslim New Zealanders (N = 13,955) who responded to the 2013 New Zealand Attitudes and Values Study. We address previous shortcomings by: (1) building Bayesian multivariate, multi-level regression models with denominations modeled as random effects; (2) including high-resolution demographic information that adjusts for factors known to influence prejudice; (3) simultaneously evaluating the relative strength of anti-Muslim prejudice by comparing it to anti-Arab prejudice and anti-immigrant prejudice within the same statistical model; and (4) testing predictions derived from the Evolutionary Lag Theory of religious prejudice and tolerance. This theory predicts that in countries such as New Zealand, with historically low levels of conflict, religion will tend to increase tolerance generally, and extend to minority religious groups. Results show that anti-Muslim and anti-Arab sentiments are confounded, widespread, and substantially higher than anti-immigrant sentiments. In support of the theory, the intensity of religious commitments was associated with a general increase in tolerance toward minority groups, including a poorly tolerated religious minority group: Muslims. Results clarify religion's power to enhance tolerance in peaceful societies that are nevertheless afflicted by prejudice.

  9. Relationships between anhedonia, suicidal ideation and suicide attempts in a large sample of physicians.

    Directory of Open Access Journals (Sweden)

    Gwenolé Loas

    The relationships between anhedonia and suicidal ideation or suicide attempts were explored in a large sample of physicians using the interpersonal psychological theory of suicide. We tested two hypotheses: firstly, that there is a significant relationship between anhedonia and suicidality and, secondly, that anhedonia could mediate the relationships between suicidal ideation or suicide attempts and thwarted belongingness or perceived burdensomeness. In a cross-sectional study, 557 physicians filled out several questionnaires measuring suicide risk and depression, using the abridged version of the Beck Depression Inventory (BDI-13), as well as demographic and job-related information. Ratings of anhedonia, perceived burdensomeness and thwarted belongingness were then extracted from the BDI-13 and the other questionnaires. Significant relationships were found between anhedonia and suicidal ideation or suicide attempts, even when significant variables or covariates were taken into account and, in particular, depressive symptoms. Mediation analyses showed significant partial or complete mediations, where anhedonia mediated the relationships between suicidal ideation (lifetime or recent) and perceived burdensomeness or thwarted belongingness. For suicide attempts, complete mediation was found only between anhedonia and thwarted belongingness. When the different components of anhedonia were taken into account, dissatisfaction, not the loss of interest or work inhibition, had significant relationships with suicidal ideation, whereas work inhibition had significant relationships with suicide attempts. Anhedonia and its component of dissatisfaction could be a risk factor for suicidal ideation and could mediate the relationship between suicidal ideation and perceived burdensomeness or thwarted belongingness in physicians. Dissatisfaction, in particular in the workplace, may be explored as a strong predictor of suicidal ideation in physicians.

  10. The suicidality continuum in a large sample of obsessive-compulsive disorder (OCD) patients.

    Science.gov (United States)

    Velloso, P; Piccinato, C; Ferrão, Y; Aliende Perin, E; Cesar, R; Fontenelle, L; Hounie, A G; do Rosário, M C

    2016-10-01

    Obsessive-compulsive disorder (OCD) has a chronic course leading to a huge impact on the patient's functioning. Suicidal thoughts and attempts are much more frequent in OCD subjects than once thought. The aim was to empirically investigate whether suicidal phenomena can be analyzed as a suicidality severity continuum, and their association with obsessive-compulsive (OC) symptom dimensions and quality of life (QoL), in a large OCD sample. Cross-sectional study with 548 patients diagnosed with OCD according to the DSM-IV criteria, interviewed in the Brazilian OCD Consortium (C-TOC) sites. Patients were evaluated by OCD experts using standardized instruments including: the Yale-Brown Obsessive-Compulsive Scale (YBOCS); the Dimensional Yale-Brown Obsessive-Compulsive Scale (DYBOCS); the Beck Depression and Anxiety Inventories; the Structured Clinical Interview for DSM-IV (SCID); and the SF-36 QoL Health Survey. There were extremely high correlations between all the suicidal phenomena. OCD patients with suicidality had significantly lower QoL; higher severity in the "sexual/religious", "aggression" and "symmetry/ordering" OC symptom dimensions; higher BDI and BAI scores; and a higher frequency of suicide attempts in a family member. In the regression analysis, the factors that most impacted suicidality were the severity of the sexual dimension, the SF-36 QoL Mental Health domain, the severity of depressive symptoms and a relative with a history of attempted suicide. Suicidality can be analyzed as a severity continuum, and patients should be carefully monitored once they present with suicidal ideation. Lower QoL scores, higher scores on the sexual dimension and a family history of suicide attempts should be considered risk factors for suicidality among OCD patients. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  11. Relationships between anhedonia, suicidal ideation and suicide attempts in a large sample of physicians

    Science.gov (United States)

    Lefebvre, Guillaume; Rotsaert, Marianne; Englert, Yvon

    2018-01-01

    Background: The relationships between anhedonia and suicidal ideation or suicide attempts were explored in a large sample of physicians using the interpersonal psychological theory of suicide. We tested two hypotheses: firstly, that there is a significant relationship between anhedonia and suicidality and, secondly, that anhedonia could mediate the relationships between suicidal ideation or suicide attempts and thwarted belongingness or perceived burdensomeness. Methods: In a cross-sectional study, 557 physicians filled out several questionnaires measuring suicide risk, depression, using the abridged version of the Beck Depression Inventory (BDI-13), and demographic and job-related information. Ratings of anhedonia, perceived burdensomeness and thwarted belongingness were then extracted from the BDI-13 and the other questionnaires. Results: Significant relationships were found between anhedonia and suicidal ideation or suicide attempts, even when significant variables or covariates were taken into account and, in particular, depressive symptoms. Mediation analyses showed significant partial or complete mediations, where anhedonia mediated the relationships between suicidal ideation (lifetime or recent) and perceived burdensomeness or thwarted belongingness. For suicide attempts, complete mediation was found only between anhedonia and thwarted belongingness. When the different components of anhedonia were taken into account, dissatisfaction—not the loss of interest or work inhibition—had significant relationships with suicidal ideation, whereas work inhibition had significant relationships with suicide attempts. Conclusions: Anhedonia and its component of dissatisfaction could be a risk factor for suicidal ideation and could mediate the relationship between suicidal ideation and perceived burdensomeness or thwarted belongingness in physicians. Dissatisfaction, in particular in the workplace, may be explored as a strong predictor of suicidal ideation in physicians.

  12. BROAD ABSORPTION LINE VARIABILITY ON MULTI-YEAR TIMESCALES IN A LARGE QUASAR SAMPLE

    Energy Technology Data Exchange (ETDEWEB)

    Filiz Ak, N.; Brandt, W. N.; Schneider, D. P. [Department of Astronomy and Astrophysics, Pennsylvania State University, University Park, PA 16802 (United States); Hall, P. B. [Department of Physics and Astronomy, York University, 4700 Keele St., Toronto, Ontario, M3J 1P3 (Canada); Anderson, S. F. [Astronomy Department, University of Washington, Seattle, WA 98195 (United States); Hamann, F. [Department of Astronomy, University of Florida, Gainesville, FL 32611-2055 (United States); Lundgren, B. F. [Department of Astronomy, University of Wisconsin, Madison, WI 53706 (United States); Myers, Adam D. [Department of Physics and Astronomy, University of Wyoming, Laramie, WY 82071 (United States); Pâris, I. [Departamento de Astronomía, Universidad de Chile, Casilla 36-D, Santiago (Chile); Petitjean, P. [Universite Paris 6, Institut d'Astrophysique de Paris, 75014, Paris (France); Ross, Nicholas P. [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 92420 (United States); Shen, Yue [Harvard-Smithsonian Center for Astrophysics, 60 Garden St., MS-51, Cambridge, MA 02138 (United States); York, Don, E-mail: nfilizak@astro.psu.edu [The University of Chicago, Department of Astronomy and Astrophysics, Chicago, IL 60637 (United States)

    2013-11-10

    We present a detailed investigation of the variability of 428 C IV and 235 Si IV broad absorption line (BAL) troughs identified in multi-epoch observations of 291 quasars by the Sloan Digital Sky Survey-I/II/III. These observations primarily sample rest-frame timescales of 1-3.7 yr over which significant rearrangement of the BAL wind is expected. We derive a number of observational results on, e.g., the frequency of BAL variability, the velocity range over which BAL variability occurs, the primary observed form of BAL-trough variability, the dependence of BAL variability upon timescale, the frequency of BAL strengthening versus weakening, correlations between BAL variability and BAL-trough profiles, relations between C IV and Si IV BAL variability, coordinated multi-trough variability, and BAL variations as a function of quasar properties. We assess implications of these observational results for quasar winds. Our results support models where most BAL absorption is formed within an order-of-magnitude of the wind-launching radius, although a significant minority of BAL troughs may arise on larger scales. We estimate an average lifetime for a BAL trough along our line-of-sight of a few thousand years. BAL disappearance and emergence events appear to be extremes of general BAL variability, rather than being qualitatively distinct phenomena. We derive the parameters of a random-walk model for BAL EW variability, finding that this model can acceptably describe some key aspects of EW variability. The coordinated trough variability of BAL quasars with multiple troughs suggests that changes in 'shielding gas' may play a significant role in driving general BAL variability.
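
    A random-walk description of EW variability of the kind fitted here is easy to simulate: each trough's equivalent width takes a Gaussian step whose variance grows linearly with the rest-frame time between epochs, truncated at zero. The step amplitude below is an arbitrary placeholder, not the paper's fitted parameter:

      import numpy as np

      rng = np.random.default_rng(4)

      def evolve_ew(ew0, dt_yr, sigma_per_sqrt_yr=2.0, n_troughs=10000):
          """One random-walk step of duration dt_yr for an ensemble of troughs."""
          steps = rng.normal(0.0, sigma_per_sqrt_yr * np.sqrt(dt_yr), n_troughs)
          return np.clip(ew0 + steps, 0.0, None)  # EW cannot go negative

      ew1 = evolve_ew(ew0=8.0, dt_yr=2.5)  # EW in Angstroms after 2.5 yr rest-frame
      print("fraction of troughs that 'disappear' (EW -> 0):", np.mean(ew1 == 0.0))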

  13. Note: Design and development of wireless controlled aerosol sampling network for large scale aerosol dispersion experiments

    International Nuclear Information System (INIS)

    Gopalakrishnan, V.; Subramanian, V.; Baskaran, R.; Venkatraman, B.

    2015-01-01

    A wireless-based, custom-built aerosol sampling network is designed, developed, and implemented for environmental aerosol sampling. These aerosol sampling systems are used in a field measurement campaign in which sodium aerosol dispersion experiments have been conducted as a part of environmental impact studies related to the sodium-cooled fast reactor. The sampling network contains 40 aerosol sampling units, each containing a custom-built sampling head and wireless control networking designed with Programmable System on Chip (PSoC™) and XBee Pro RF modules. The base station control is designed using the graphical programming language LabVIEW. The sampling network is programmed to operate at a preset time, and the running status of the samplers in the network is visualized from the base station. The system is developed in such a way that it can be used for any other environmental sampling system deployed in a wide area and uneven terrain, where manual operation is difficult due to the requirement of simultaneous operation and status logging.

  14. Note: Design and development of wireless controlled aerosol sampling network for large scale aerosol dispersion experiments

    Energy Technology Data Exchange (ETDEWEB)

    Gopalakrishnan, V.; Subramanian, V.; Baskaran, R.; Venkatraman, B. [Radiation Impact Assessment Section, Radiological Safety Division, Indira Gandhi Centre for Atomic Research, Kalpakkam 603 102 (India)

    2015-07-15

    A wireless-based, custom-built aerosol sampling network is designed, developed, and implemented for environmental aerosol sampling. These aerosol sampling systems are used in a field measurement campaign in which sodium aerosol dispersion experiments have been conducted as a part of environmental impact studies related to the sodium-cooled fast reactor. The sampling network contains 40 aerosol sampling units, each containing a custom-built sampling head and wireless control networking designed with Programmable System on Chip (PSoC™) and XBee Pro RF modules. The base station control is designed using the graphical programming language LabVIEW. The sampling network is programmed to operate at a preset time, and the running status of the samplers in the network is visualized from the base station. The system is developed in such a way that it can be used for any other environmental sampling system deployed in a wide area and uneven terrain, where manual operation is difficult due to the requirement of simultaneous operation and status logging.

  15. DISCOVERY OF A LARGE NUMBER OF CANDIDATE PROTOCLUSTERS TRACED BY ∼15 Mpc-SCALE GALAXY OVERDENSITIES IN COSMOS

    International Nuclear Information System (INIS)

    Chiang, Yi-Kuan; Gebhardt, Karl; Overzier, Roderik

    2014-01-01

    To demonstrate the feasibility of studying the epoch of massive galaxy cluster formation in a more systematic manner using current and future galaxy surveys, we report the discovery of a large sample of protocluster candidates in the 1.62 deg² COSMOS/UltraVISTA field, traced by optically/infrared selected galaxies using photometric redshifts. By comparing properly smoothed three-dimensional galaxy density maps of the observations and a set of matched simulations incorporating the dominant observational effects (galaxy selection and photometric redshift uncertainties), we first confirm that the observed ∼15 comoving Mpc-scale galaxy clustering is consistent with ΛCDM models. Further using the relation between high-z overdensity and present-day cluster mass calibrated in these matched simulations, we find 36 candidate structures at 1.6 < z < 3.1, showing overdensities consistent with the progenitors of M_z=0 ∼ 10^15 M_☉ clusters. Taking into account the significant upward scattering of lower-mass structures, the probabilities for the candidates to have at least M_z=0 ∼ 10^14 M_☉ are ∼70%. For each structure, about 15%–40% of photometric galaxy candidates are expected to be true protocluster members that will merge into a cluster-scale halo by z = 0. With photometric redshifts alone, we successfully rediscover two spectroscopically confirmed structures in this field, suggesting that our algorithm is robust. This work generates a large sample of uniformly selected protocluster candidates, providing rich targets for spectroscopic follow-up and subsequent studies of cluster formation. Meanwhile, it demonstrates the potential for probing early cluster formation with upcoming redshift surveys such as the Hobby-Eberly Telescope Dark Energy Experiment and the Subaru Prime Focus Spectrograph survey.
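    The core measurement can be sketched as follows (a toy version, not the authors' pipeline; grid size, cell scale, smoothing width, and the detection cut are all assumptions): build a smoothed 3-D galaxy density map on a comoving grid and flag cells whose ∼15 Mpc-scale overdensity exceeds a significance threshold.

```python
# Toy protocluster search on a mock galaxy-count grid.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
cell = 5.0                                                   # Mpc per grid cell (assumed)
counts = rng.poisson(2.0, size=(60, 60, 60)).astype(float)   # mock galaxy counts
counts[25:28, 30:33, 40:43] += 8.0                           # injected overdense structure

smoothed = gaussian_filter(counts, sigma=15.0 / cell)        # ~15 Mpc smoothing kernel
delta = smoothed / smoothed.mean() - 1.0                     # galaxy overdensity field

cut = 4.0 * delta.std()                                      # illustrative detection cut
hits = np.argwhere(delta > cut)
print(f"{len(hits)} cells above delta = {cut:.3f}; peak delta = {delta.max():.3f}")
```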

  16. Eosinophils may play regionally disparate roles in influencing IgA(+) plasma cell numbers during large and small intestinal inflammation.

    Science.gov (United States)

    Forman, Ruth; Bramhall, Michael; Logunova, Larisa; Svensson-Frej, Marcus; Cruickshank, Sheena M; Else, Kathryn J

    2016-05-31

    Eosinophils are innate immune cells present in the intestine during steady-state conditions. An intestinal eosinophilia is a hallmark of many infections, and an accumulation of eosinophils is also observed in the intestine during inflammatory disorders. Classically, the function of eosinophils has been associated with tissue destruction, due to the release of cytotoxic granule contents. However, recent evidence has demonstrated that the eosinophil plays a more diverse role in the immune system than previously acknowledged, including shaping adaptive immune responses and providing plasma cell survival factors during the steady state. Importantly, it is known that there are regional differences in the underlying immunology of the small and large intestine, but whether there are corresponding differences for the intestinal eosinophil in the steady state or during inflammation is not known. Our data demonstrate that there are fewer IgA(+) plasma cells in the small intestine of eosinophil-deficient ΔdblGATA-1 mice compared to eosinophil-sufficient wild-type mice, with the difference becoming significant post-infection with Toxoplasma gondii. Remarkably, and in complete contrast, the absence of eosinophils in the inflamed large intestine does not impact IgA(+) cell numbers during the steady state, and is associated with a significant increase in IgA(+) cells post-infection with Trichuris muris compared to wild-type mice. Thus, the intestinal eosinophil appears to be less important in sustaining the IgA(+) cell pool in the large intestine than in the small intestine, and in fact our data suggest eosinophils play an inhibitory role there. The dichotomy in the influence of the eosinophil over small and large intestinal IgA(+) cells did not depend on differences in plasma cell growth factors, recruitment potential or proliferation within the different regions of the gastrointestinal tract (GIT). We demonstrate for the first time that there are regional differences in the requirement of eosinophils for the maintenance of the intestinal IgA(+) plasma cell pool.

  17. Investigating the Variability in Cumulus Cloud Number as a Function of Subdomain Size and Organization using large-domain LES

    Science.gov (United States)

    Neggers, R.

    2017-12-01

    Recent advances in supercomputing have introduced a "grey zone" in the representation of cumulus convection in general circulation models, in which this process is partially resolved. Cumulus parameterizations need to be made scale-aware and scale-adaptive to be able to deal with this situation conceptually and practically. A potential way forward is offered by schemes formulated in terms of discretized Cloud Size Densities, or CSDs. Advantages include i) the introduction of scale-awareness at the foundation of the scheme, and ii) the possibility of applying size-filtering of parameterized convective transport and clouds. The CSD is a new variable that requires closure; this concerns its shape and its range, but also variability in cloud number that can appear due to i) subsampling effects and ii) organization in a cloud field. The goal of this study is to gain insight by means of sub-domain analyses of various large-domain LES realizations of cumulus cloud populations. For a series of three-dimensional snapshots, each with a different degree of organization, the cloud size distribution is calculated in all subdomains, for a range of subdomain sizes. The standard deviation of the number of clouds of a certain size is found to decrease with the subdomain size, following a power-law scaling corresponding to an inverse-linear dependence. Cloud number variability also increases with cloud size; this reflects that subsampling affects the largest clouds first, due to their typically larger neighbor spacing. Rewriting this dependence in terms of two dimensionless groups, by dividing by cloud number and cloud size respectively, yields a data collapse. Organization in the cloud field is found to act on top of this primary dependence, by enhancing the cloud number variability at the smaller sizes. This behavior reflects that small clouds start to "live" on top of larger structures such as cold pools, favoring or inhibiting their formation (as illustrated by the attached figure of cloud masks).
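    The subdomain analysis described above can be illustrated with a toy cloud field (random positions standing in for LES cloud masks; domain and subdomain sizes are assumptions): counting clouds in subdomains of varying size shows the Poisson-like decrease of relative cloud-number variability as the subdomain grows, on top of which organization would add extra variance at small sizes.

```python
# Toy subdomain analysis of cloud-number variability.
import numpy as np

rng = np.random.default_rng(2)
L = 1024                                    # domain edge length (grid points)
xy = rng.integers(0, L, size=(20000, 2))    # random (unorganized) cloud positions

for sub in (32, 64, 128, 256):
    nbin = L // sub
    counts, _, _ = np.histogram2d(xy[:, 0], xy[:, 1],
                                  bins=[nbin, nbin], range=[[0, L], [0, L]])
    # For an unorganized (Poisson) field, std(N)/mean(N) ~ 1/sqrt(mean N):
    # relative cloud-number variability shrinks as the subdomain grows.
    print(f"subdomain {sub:4d}: mean N = {counts.mean():7.1f}, "
          f"std/mean = {counts.std() / counts.mean():.3f}")
```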

  18. The necessity of and policy suggestions for implementing a limited number of large scale, fully integrated CCS demonstrations in China

    International Nuclear Information System (INIS)

    Li Zheng; Zhang Dongjie; Ma Linwei; West, Logan; Ni Weidou

    2011-01-01

    CCS is seen as an important and strategic technology option for China to reduce its CO₂ emissions, and has received tremendous attention both around the world and in China. Scholars are divided on the role CCS should play, making the future of CCS in China highly uncertain. This paper presents the overall circumstances for CCS development in China, including the threats and opportunities for large-scale deployment of CCS, the initial barriers and advantages that China currently possesses, as well as the current progress of CCS demonstration in China. The paper proposes the implementation of a limited number of large-scale, fully integrated CCS demonstration projects and explains the potential benefits that could be garnered. The problems with China's current CCS demonstration work are analyzed, and some targeted policies are proposed based on those observations. These policy suggestions can effectively solve these problems, help China realize the benefits of CCS demonstration sooner, and contribute substantially to China's CO₂ reduction mission. - Highlights: → We analyze the overall circumstances for CCS development in China in detail. → China can garner multiple benefits by conducting several large, integrated CCS demos. → We present the current progress in CCS demonstration in China in detail. → Some problems exist with China's current CCS demonstration work. → Some focused policies are suggested to improve CCS demonstration in China.

  19. Fluctuations of nuclear cross sections in the region of strong overlapping resonances and at large number of open channels

    International Nuclear Information System (INIS)

    Kun, S.Yu.

    1985-01-01

    On the basis of the symmetrized Simonius representation of the S matrix, statistical properties of its fluctuating component in the presence of direct reactions are investigated. The case is considered where the resonance levels are strongly overlapping and there are many open channels, assuming that the compound-nucleus cross sections coupling different channels are equal. It is shown that, using the averaged unitarity condition on the real energy axis, one can eliminate both resonance-resonance and channel-channel correlations from the partial transition amplitudes. As a result, we derive the basic points of the Ericson fluctuation theory of nuclear cross sections, independently of the relation between the resonance overlapping and the number of open channels, and the validity of the Hauser-Feshbach model is established. If the number of open channels is large, the time of uniform population of compound-nucleus configurations, for an open excited nuclear system, is much smaller than the Poincaré time. The lifetime of the compound nucleus is discussed.

  20. Law of large numbers for the SIR model with random vertex weights on Erdős-Rényi graph

    Science.gov (United States)

    Xue, Xiaofeng

    2017-11-01

    In this paper we are concerned with the SIR model with random vertex weights on the Erdős-Rényi graph G(n, p). The Erdős-Rényi graph G(n, p) is generated from the complete graph C_n with n vertices by independently deleting each edge with probability (1 - p). We assign i.i.d. copies of a positive random variable ρ to each vertex as the vertex weights. For the SIR model, each vertex is in one of three states: 'susceptible', 'infective' and 'removed'. An infective vertex infects a given susceptible neighbor at a rate proportional to the product of the weights of these two vertices. An infective vertex becomes removed at a constant rate. A removed vertex will never be infected again. We assume that at t = 0 there are no removed vertices and the number of infective vertices follows a Bernoulli distribution B(n, θ). Our main result is a law of large numbers for the model. We give two deterministic functions H_S(ψ_t), H_V(ψ_t) for t ≥ 0 and show that for any t ≥ 0, H_S(ψ_t) is the limit proportion of susceptible vertices and H_V(ψ_t) is the limit of the mean capability of an infective vertex to infect a given susceptible neighbor at moment t as n grows to infinity.
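    A minimal simulation sketch of this model (a discrete-time approximation of the continuous-time dynamics; all parameter values, and the exponential weight distribution, are illustrative assumptions): at large n the S/I/R proportions should concentrate around deterministic limits, as the law of large numbers asserts.

```python
# Toy SIR dynamics with random vertex weights on an Erdos-Renyi graph G(n, p).
import numpy as np

rng = np.random.default_rng(3)
n, p, lam, theta, dt = 2000, 0.005, 0.4, 0.02, 0.01

upper = np.triu(rng.random((n, n)) < p, 1)      # keep each edge of C_n with prob p
adj = upper | upper.T                           # symmetric adjacency, no self-loops
rho = rng.exponential(1.0, n)                   # i.i.d. positive vertex weights rho
state = np.where(rng.random(n) < theta, 1, 0)   # 1 = infective, 0 = susceptible

for _ in range(int(20 / dt)):
    inf, sus = state == 1, state == 0
    # infection rate on vertex j: lam * rho_j * sum of rho_i over infective neighbors i
    pressure = lam * rho * (adj[:, inf] @ rho[inf])
    new_inf = sus & (rng.random(n) < 1.0 - np.exp(-pressure * dt))
    removed = inf & (rng.random(n) < 1.0 - np.exp(-dt))   # removal at unit rate
    state[new_inf] = 1
    state[removed] = 2                                    # 2 = removed
    if not (state == 1).any():
        break

print("final fractions S, I, R:", [round(float((state == k).mean()), 3) for k in (0, 1, 2)])
```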

  1. Explaining the large numbers by a hierarchy of ''universes'': a unified theory of strong and gravitational interactions

    International Nuclear Information System (INIS)

    Caldirola, P.; Recami, E.

    1978-01-01

    By assuming covariance of physical laws under (discrete) dilatations, strong and gravitational interactions have been described in a unified way. In terms of the (additional, discrete) ''dilatational'' degree of freedom, our cosmos as well as hadrons can be considered as different states of the same system, or rather as similar systems. Moreover, a discrete hierarchy can be defined of ''universes'' which are governed by force fields with strengths inversely proportional to the ''universe'' radii. Inside each ''universe'' an equivalence principle holds, so that its characteristic field can be geometrized there. It is thus easy to derive a whole ''numerology'', i.e. relations among numbers analogous to the so-called Weyl-Eddington-Dirac ''large numbers''. For instance, the ''Planck mass'' happens to be nothing but the (average) magnitude of the strong charge of the hadron quarks. However, our ''numerology'' connects the (gravitational) macrocosmos with the (strong) microcosmos, rather than with the electromagnetic ones (as, e.g., in Dirac's version). Einstein-type scaled equations (with ''cosmological'' term) are suggested for the hadron interior, which, incidentally, yield a (classical) quark confinement in a very natural way and are compatible with ''asymptotic freedom''. Finally, within a ''bi-scale'' theory, further equations are proposed that provide a priori a classical field theory of strong interactions (between different hadrons). The relevant sections are 5.2, 7 and 8. (author)

  2. Modification of the large-scale features of high Reynolds number wall turbulence by passive surface obtrusions

    Energy Technology Data Exchange (ETDEWEB)

    Monty, J.P.; Lien, K.; Chong, M.S. [University of Melbourne, Department of Mechanical Engineering, Parkville, VIC (Australia); Allen, J.J. [New Mexico State University, Department of Mechanical Engineering, Las Cruces, NM (United States)

    2011-12-15

    A high Reynolds number boundary-layer wind-tunnel facility at New Mexico State University was fitted with a regularly distributed braille surface. The surface was such that braille dots were closely packed in the streamwise direction and sparsely spaced in the spanwise direction. This novel surface had an unexpected influence on the flow: the energy of the very large-scale features of wall turbulence (approximately six times the boundary-layer thickness in length) became significantly attenuated, even into the logarithmic region. To the authors' knowledge, this is the first experimental study to report a modification of 'superstructures' in a rough-wall turbulent boundary layer. The result gives rise to the possibility that flow control through very small, passive surface roughness may be possible at high Reynolds numbers, without the prohibitive drag penalty anticipated heretofore. Evidence was also found for the uninhibited existence of the near-wall cycle, well known to smooth-wall-turbulence researchers, in the spanwise space between roughness elements. (orig.)

  3. Development and application of an optogenetic platform for controlling and imaging a large number of individual neurons

    Science.gov (United States)

    Mohammed, Ali Ibrahim Ali

    The understanding and treatment of brain disorders, as well as the development of intelligent machines, are hampered by the lack of knowledge of how the brain fundamentally functions. Over the past century, we have learned much about how individual neurons and neural networks behave; however, new tools are critically needed to interrogate how neural networks give rise to complex brain processes and disease conditions. Recent innovations in molecular techniques, such as optogenetics, have enabled neuroscientists to excite, inhibit and record defined neurons with unprecedented precision. The impressive sensitivity of currently available optogenetic sensors and actuators now makes it possible to analyze a large number of individual neurons in the brains of behaving animals. To promote the use of these optogenetic tools, this thesis integrates cutting-edge optogenetic molecular sensors, which are ultrasensitive for imaging neuronal activity, with a custom wide-field optical microscope to analyze a large number of individual neurons in living brains. Wide-field microscopy provides a large field of view and a spatial resolution approaching the Abbe diffraction limit of the fluorescence microscope. To demonstrate the advantages of this optical platform, we imaged a deep brain structure, the hippocampus, and tracked hundreds of neurons over time while a mouse performed a memory task, to investigate how those individual neurons related to behavior. In addition, we tested our optical platform in investigating transient neural network changes upon mechanical perturbation related to blast injuries. In this experiment, all blast-exposed mice showed a consistent change in the neural network: a small portion of neurons showed a sustained calcium increase for an extended period of time, whereas the majority lost their activity. Finally, using an optogenetic silencer to control selected motor cortex neurons, we examined their contributions to the network pathology of basal ganglia related to

  4. Deciphering the Correlation between Breast Tumor Samples and Cell Lines by Integrating Copy Number Changes and Gene Expression Profiles

    Directory of Open Access Journals (Sweden)

    Yi Sun

    2015-01-01

    Full Text Available Breast cancer is one of the most common cancers, with a high incidence rate and a high mortality rate worldwide. Although different breast cancer cell lines are widely used in laboratory investigations, evidence accumulated over the past decades has indicated that genomic differences exist between cancer cell lines and tissue samples. The abundant molecular profiles of cancer cell lines and tumor samples deposited in the Cancer Cell Line Encyclopedia and The Cancer Genome Atlas now allow a systematic comparison of breast cancer cell lines with breast tumors. We depicted the genomic characteristics of primary breast tumors based on copy number variation and gene expression profiles, and compared the breast cancer cell lines to different subgroups of breast tumors. We identified that some of the breast cancer cell lines show high correlation with the tumor group that agrees with previous knowledge, while a large proportion do not, including the widely used MCF7, MDA-MB-231, and T-47D. We present a computational framework to identify cell lines that most closely resemble a certain tumor group for breast tumor studies. Our investigation presents a useful guide to bridge the gap between cell lines and tumors and helps to select the most suitable cell line models for personalized cancer studies.

  5. Noise, sampling, and the number of projections in cone-beam CT with a flat-panel detector

    International Nuclear Information System (INIS)

    Zhao, Z.; Gang, G. J.; Siewerdsen, J. H.

    2014-01-01

    Purpose: To investigate the effect of the number of projection views on image noise in cone-beam CT (CBCT) with a flat-panel detector. Methods: This fairly fundamental consideration in CBCT system design and operation was addressed experimentally (using a phantom presenting a uniform medium as well as statistically motivated “clutter”) and theoretically (using a cascaded systems model describing CBCT noise) to elucidate the contributing factors of quantum noise (σ_Q), electronic noise (σ_E), and view aliasing (σ_view). Analysis included investigation of the noise, noise-power spectrum, and modulation transfer function as a function of the number of projections (N_proj), dose (D_tot), and voxel size (b_vox). Results: The results reveal a nonmonotonic relationship between image noise and N_proj at fixed total dose: for the CBCT system considered, noise decreased with increasing N_proj due to reduction of view sampling effects in the low-N_proj regime, whereas at higher N_proj noise increased due to increased electronic noise. View sampling effects were shown to depend on the heterogeneity of the object in a direct analytical relationship to power-law anatomical clutter of the form κ/f^β, and a general model of individual noise components (σ_Q, σ_E, and σ_view) demonstrated agreement with measurements over a broad range in N_proj, D_tot, and b_vox. Conclusions: The work elucidates fairly basic elements of CBCT noise in a manner that demonstrates the role of distinct noise components (viz., quantum, electronic, and view sampling noise). For configurations fairly typical of CBCT with a flat-panel detector (FPD), the analysis reveals a “sweet spot” (i.e., minimum noise) in the range N_proj ∼ 250–350, nearly an order of magnitude lower in N_proj than typical of multidetector CT, owing to the relatively high electronic noise in FPDs. The analysis explicitly relates view aliasing and quantum noise in a manner that includes aspects of the object (“clutter”) and imaging chain.
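    The nonmonotonic behavior can be reproduced with a toy decomposition (constants invented to place the minimum near the reported range, not fitted to the paper's cascaded-systems model): at fixed total dose, electronic noise accumulates per projection while view aliasing diminishes with more views, so their sum has a minimum in N_proj.

```python
# Toy noise budget vs. number of projections at fixed total dose.
import numpy as np

n_proj = np.arange(50, 1001, 10).astype(float)

var_quantum = np.full_like(n_proj, 1.0)   # quantum noise: set by D_tot, ~flat in N_proj
var_electronic = 0.001 * n_proj           # electronic noise accumulates per projection
var_view = 90.0 / n_proj                  # view aliasing diminishes with more views

var_total = var_quantum + var_electronic + var_view
print(f"toy sweet spot at N_proj = {n_proj[np.argmin(var_total)]:.0f}")  # ~300 here
```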

  6. Size and shape characteristics of drumlins, derived from a large sample, and associated scaling laws

    Science.gov (United States)

    Clark, Chris D.; Hughes, Anna L. C.; Greenwood, Sarah L.; Spagnolo, Matteo; Ng, Felix S. L.

    2009-04-01

    Ice sheets flowing across a sedimentary bed usually produce a landscape of blister-like landforms streamlined in the direction of the ice flow, with each bump of the order of 10² to 10³ m in length and 10¹ m in relief. Such landforms, known as drumlins, have mystified investigators for over a hundred years. A satisfactory explanation for their formation, and thus an appreciation of their glaciological significance, has remained elusive. A recent advance has been in numerical modelling of the land-forming process. In anticipation of future modelling endeavours, this paper is motivated by the requirement for robust data on drumlin size and shape for model testing. From a systematic programme of drumlin mapping from digital elevation models and satellite images of Britain and Ireland, we used a geographic information system to compile a range of statistics on length L, width W, and elongation ratio E (where E = L/W) for a large sample. Mean L is found to be 629 m (n = 58,983), mean W is 209 m, and mean E is 2.9 (n = 37,043). Most drumlins are between 250 and 1000 metres in length; between 120 and 300 metres in width; and between 1.7 and 4.1 times as long as they are wide. Analysis of such data and plots of drumlin width against length reveals some new insights. All frequency distributions are unimodal, from which we infer that the geomorphological label of 'drumlin' is fair in that this is a true single population of landforms, rather than an amalgam of different landform types. Drumlin size shows a clear minimum bound of around 100 m (horizontal). Drumlins may be generated at many scales with this as the minimum, or this value may be an indication of the fundamental scale of bump generation ('proto-drumlins') prior to them growing and elongating. A relationship between drumlin width and length is found (with r² = 0.48), approximately W = 7·L^(1/2) when measured in metres. A surprising and sharply-defined line bounds the data cloud plotted in E–W space.
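    The reported scaling law can be illustrated with a quick fit on synthetic data (the lognormal parameters below are assumptions chosen to mimic the reported means, not the mapped drumlins): regressing log W on log L recovers the exponent and prefactor of a W = a·L^b relationship.

```python
# Log-log fit of a synthetic width-length scaling, W ~ 7 * L**0.5.
import numpy as np

rng = np.random.default_rng(4)
L = rng.lognormal(mean=np.log(629.0), sigma=0.45, size=37000)   # lengths (m)
W = 7.0 * np.sqrt(L) * rng.lognormal(mean=0.0, sigma=0.25, size=L.size)

slope, intercept = np.polyfit(np.log(L), np.log(W), 1)
print(f"fit: W = {np.exp(intercept):.1f} * L^{slope:.2f}")      # expect ~7 * L^0.50
```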

  7. Prevalence of suicidal behaviour and associated factors in a large sample of Chinese adolescents.

    Science.gov (United States)

    Liu, X C; Chen, H; Liu, Z Z; Wang, J Y; Jia, C X

    2017-10-12

    Suicidal behaviour is prevalent among adolescents and is a significant predictor of future suicide attempts (SAs) and suicide death. Data on the prevalence and epidemiological characteristics of suicidal behaviour in Chinese adolescents are limited. This study aimed to examine the prevalence, characteristics and risk factors of suicidal behaviour, including suicidal thought (ST), suicide plan (SP) and SA, in a large sample of Chinese adolescents. This report represents the first-wave data of an ongoing longitudinal study, the Shandong Adolescent Behavior and Health Cohort. Participants included 11 831 adolescent students from three counties of Shandong, China. The mean age of participants was 15.0 (s.d. = 1.5) and 51% were boys. In November-December 2015, participants completed a structured adolescent health questionnaire covering ST, SP and SA, characteristics of the most recent SA, demographics, substance use, hopelessness, impulsivity and internalising and externalising behavioural problems. The lifetime and last-year prevalence rates were 17.6 and 10.7% for ST in males, 23.5 and 14.7% for ST in females, 8.9 and 2.9% for SP in males, 10.7 and 3.8% for SP in females, 3.4 and 1.3% for SA in males, and 4.6 and 1.8% for SA in females, respectively. The mean age of first SA was 12-13 years. Stabbing/cutting was the most common method of attempting suicide. Approximately 24% of male attempters and 16% of female attempters were medically treated. More than 70% of attempters had taken no preparatory action. Female gender, smoking, drinking, internalising and externalising problems, hopelessness, suicidal history of friends and acquaintances, poor family economic status and poor parental relationship were all significantly associated with increased risk of suicidal behaviour. Suicidal behaviour in Chinese adolescents is prevalent, but less so than previously reported in Western peers. While females are more likely to attempt suicide, males are more likely to use lethal methods.

  8. Individual differences influence two-digit number processing, but not their analog magnitude processing: a large-scale online study.

    Science.gov (United States)

    Huber, Stefan; Nuerk, Hans-Christoph; Reips, Ulf-Dietrich; Soltanlou, Mojtaba

    2017-12-23

    Symbolic magnitude comparison is one of the most well-studied cognitive processes in research on numerical cognition. However, while the cognitive mechanisms of symbolic magnitude processing have been intensively studied, previous studies have paid less attention to the individual differences influencing symbolic magnitude comparison. Employing a two-digit number comparison task in an online setting, we replicated previous effects, including the distance effect, the unit-decade compatibility effect, and the effect of cognitive control on the adaptation to filler items, in a large-scale study of 452 adults. Additionally, we observed that the most influential individual differences were participants' first language, time spent playing computer games and gender, followed by reported alcohol consumption, age and mathematical ability. Participants who used a first language with a left-to-right reading/writing direction were faster than those who read and wrote in the right-to-left direction. Reported playing time for computer games was correlated with faster reaction times. Female participants showed slower reaction times and a larger unit-decade compatibility effect than male participants. Participants who reported never consuming alcohol showed overall slower response times than others. Older participants were slower, but more accurate. Finally, higher grades in mathematics were associated with faster reaction times. We conclude that typical experiments on numerical cognition that employ a keyboard as an input device can also be run in an online setting. Moreover, while individual differences have no influence on domain-specific magnitude processing (apart from age, which increases the decade distance effect), they generally influence performance on a two-digit number comparison task.

  9. Nutritional status and dental caries in a large sample of 4- and 5 ...

    African Journals Online (AJOL)

    Background. Evidence from studies involving small samples of children in Africa, India and South America suggests a higher dental caries rate in malnourished children. A comparison was done to evaluate wasting and stunting and their association with dental caries in four samples of South African children. Design.

  10. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data. Sampling provides an up-to-date treatment

  11. Psychological Predictors of Seeking Help from Mental Health Practitioners among a Large Sample of Polish Young Adults

    Directory of Open Access Journals (Sweden)

    Lidia Perenc

    2016-10-01

    Full Text Available Although the corresponding literature contains a substantial number of studies on the relationship between psychological factors and attitudes towards seeking professional psychological help, the role of some determinants remains unexplored, especially among Polish young adults. The present study investigated, in a large cohort of Polish university students, attitudes towards help-seeking and the regulative roles of gender, level of university education, health locus of control and sense of coherence. The total sample comprised 1706 participants who completed the following measures: Attitude Toward Seeking Professional Psychological Help Scale-SF, Multidimensional Health Locus of Control Scale, and Orientation to Life Questionnaire (SOC-29). They were recruited from various university faculties and courses by means of random selection. The findings revealed that, among socio-demographic variables, female gender moderately, and graduate university education strongly, predict attitudes towards seeking help. Internal locus of control and all domains of sense of coherence are significantly correlated with the scores related to the help-seeking attitude. Attitudes toward psychological help-seeking are significantly related to female gender, graduate university education, internal health locus of control and sense of coherence. Further research must be performed in Poland in order to validate these results in different age and social groups.

  12. Computational domain length and Reynolds number effects on large-scale coherent motions in turbulent pipe flow

    Science.gov (United States)

    Feldmann, Daniel; Bauer, Christian; Wagner, Claus

    2018-03-01

    We present results from direct numerical simulations (DNS) of turbulent pipe flow at shear Reynolds numbers up to Reτ = 1500 using different computational domains with lengths up to ?. The objectives are to analyse the effect of the finite size of the periodic pipe domain on large flow structures as a function of Reτ, and to assess a minimum ? required for relevant turbulent scales to be captured and a minimum Reτ for very large-scale motions (VLSM) to be analysed. Analysing one-point statistics revealed that the mean velocity profile is invariant for ?. The wall-normal location at which deviations occur in shorter domains changes strongly with increasing Reτ, from the near-wall region to the outer layer, where VLSM are believed to live. The root mean square velocity profiles exhibit domain-length dependencies for pipes shorter than 14R and 7R, depending on Reτ. For all Reτ, the higher-order statistical moments show only weak dependencies, and only for the shortest domain considered here. However, the analysis of one- and two-dimensional pre-multiplied energy spectra revealed that even for larger ?, not all physically relevant scales are fully captured, even though the aforementioned statistics are in good agreement with the literature. We found ? to be sufficiently large to capture VLSM-relevant turbulent scales in the considered range of Reτ, based on our definition of an integral energy threshold of 10%. The requirement to capture at least 1/10 of the global maximum energy level is justified by a 14% increase of the streamwise turbulence intensity in the outer region between Reτ = 720 and 1500, which can be related to VLSM-relevant length scales. Based on this scaling anomaly, we found Reτ ⪆ 1500 to be a necessary minimum requirement to investigate VLSM-related effects in pipe flow, even though the streamwise energy spectra do not yet indicate sufficient scale separation between the most energetic and the very long motions.

  13. The application of the central limit theorem and the law of large numbers to facial soft tissue depths: T-Table robustness and trends since 2008.

    Science.gov (United States)

    Stephan, Carl N

    2014-03-01

    By pooling independent study means (x̄), the T-Tables use the central limit theorem and law of large numbers to average out study-specific sampling bias and instrument errors and, in turn, triangulate upon human population means (μ). Since their first publication in 2008, new data from >2660 adults have been collected (c. 30% of the original sample), making a review of the T-Table's robustness timely. Updated grand means show that the new data have negligible impact on the previously published statistics: maximum change = 1.7 mm (at gonion), and ≤1 mm at 93% of all landmarks measured. This confirms the utility of the 2008 T-Table as a proxy for soft tissue depth population means and, together with updated sample sizes (8851 individuals at pogonion), earmarks the 2013 T-Table as the premier mean facial soft tissue depth standard for craniofacial identification casework. The utility of the T-Table, in comparison with shorths and 75-shormaxes, is also discussed. © 2013 American Academy of Forensic Sciences.
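    The pooling principle behind the T-Tables can be shown with a small worked example (study means and sample sizes invented for illustration, not taken from the T-Tables): a sample-size-weighted grand mean lets study-specific errors average out as the pooled n grows.

```python
# Sample-size-weighted pooling of independent study means for one landmark.
import numpy as np

# (study mean in mm, study sample size) -- illustrative values only
studies = np.array([[10.2, 50], [11.1, 120], [9.8, 75], [10.6, 300]])
means, ns = studies[:, 0], studies[:, 1]

grand_mean = np.average(means, weights=ns)   # weighted pooled (grand) mean
print(f"pooled n = {ns.sum():.0f}, grand mean = {grand_mean:.2f} mm")
```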

  14. Catering for large numbers of tourists: the McDonaldization of casual dining in Kruger National Park

    Directory of Open Access Journals (Sweden)

    Ferreira Sanette L.A.

    2016-09-01

    Full Text Available Since 2002 Kruger National Park (KNP) has been subject to a commercialisation strategy. Regarding income generation, SANParks (1) sees KNP as the goose that lays the golden eggs. As part of SANParks' commercialisation strategy, and in response to providing services that are efficient, predictable and calculable for a large number of tourists, SANParks has allowed well-known branded restaurants to be established in certain rest camps in KNP. This innovation has raised a range of different concerns and opinions among the public. This paper investigates the what and the where of casual dining experiences in KNP; describes how the catering services have evolved over the last 70 years; and evaluates current visitor perceptions of the introduction of franchised restaurants in the park. The main research instrument was a questionnaire survey. Survey findings confirmed that restaurant managers, park managers and visitors recognise franchised restaurants as positive contributors to the unique KNP experience. Park managers appraised the franchised restaurants as mechanisms for funding conservation.

  15. Meta-ethnography 25 years on: challenges and insights for synthesising a large number of qualitative studies

    Science.gov (United States)

    2014-01-01

    Studies that systematically search for and synthesise qualitative research are becoming more evident in health care, and they can make an important contribution to patient care. Our team was funded to complete a meta-ethnography of patients’ experience of chronic musculoskeletal pain. It has been 25 years since Noblit and Hare published their core text on meta-ethnography, and the current health research environment brings additional challenges to researchers aiming to synthesise qualitative research. Noblit and Hare propose seven stages of meta-ethnography which take the researcher from formulating a research idea to expressing the findings. These stages are not discrete but form part of an iterative research process. We aimed to build on the methods of Noblit and Hare and explore the challenges of including a large number of qualitative studies into a qualitative systematic review. These challenges hinge upon epistemological and practical issues to be considered alongside expectations about what determines high quality research. This paper describes our method and explores these challenges. Central to our method was the process of collaborative interpretation of concepts and the decision to exclude original material where we could not decipher a concept. We use excerpts from our research team’s reflexive statements to illustrate the development of our methods. PMID:24951054

  16. Meta-ethnography 25 years on: challenges and insights for synthesising a large number of qualitative studies.

    Science.gov (United States)

    Toye, Francine; Seers, Kate; Allcock, Nick; Briggs, Michelle; Carr, Eloise; Barker, Karen

    2014-06-21

    Studies that systematically search for and synthesise qualitative research are becoming more evident in health care, and they can make an important contribution to patient care. Our team was funded to complete a meta-ethnography of patients' experience of chronic musculoskeletal pain. It has been 25 years since Noblit and Hare published their core text on meta-ethnography, and the current health research environment brings additional challenges to researchers aiming to synthesise qualitative research. Noblit and Hare propose seven stages of meta-ethnography which take the researcher from formulating a research idea to expressing the findings. These stages are not discrete but form part of an iterative research process. We aimed to build on the methods of Noblit and Hare and explore the challenges of including a large number of qualitative studies into a qualitative systematic review. These challenges hinge upon epistemological and practical issues to be considered alongside expectations about what determines high quality research. This paper describes our method and explores these challenges. Central to our method was the process of collaborative interpretation of concepts and the decision to exclude original material where we could not decipher a concept. We use excerpts from our research team's reflexive statements to illustrate the development of our methods.

  17. Attenuation of contaminant plumes in homogeneous aquifers: Sensitivity to source function at moderate to large Peclet numbers

    International Nuclear Information System (INIS)

    Selander, W.N.; Lane, F.E.; Rowat, J.H.

    1995-05-01

    A groundwater mass transfer calculation is an essential part of the performance assessment for radioactive waste disposal facilities. AECL's IRUS (Intrusion Resistant Underground Structure) facility, which is designed for the near-surface disposal of low-level radioactive waste (LLRW), is to be situated in the sandy overburden at AECL's Chalk River Laboratories. Flow in the sandy aquifers at the proposed IRUS site is relatively homogeneous and advection-dominated (large Peclet numbers). Mass transfer along the mean direction of flow from the IRUS site may be described using the one-dimensional advection-dispersion equation, for which a Green's function representation of the downstream radionuclide flux is convenient. This report shows that in advection-dominated aquifers, dispersive attenuation of initial contaminant releases depends principally on two time scales: the source duration and the pulse breakthrough time. Numerical investigation further shows that the maximum downstream flux or concentration depends on these time scales in a simple characteristic way that is minimally sensitive to the shape of the initial source pulse. (author). 11 refs., 2 tabs., 3 figs.
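    The Green's-function representation mentioned above has a standard closed form for the 1-D advection-dispersion equation; the sketch below (v, D, source duration and observation point are all illustrative values, not site parameters) superposes it over a finite source duration to obtain a downstream breakthrough curve.

```python
# 1-D advection-dispersion: unit-release Green's function and a finite-duration source.
import numpy as np

def green(x, t, v=1.0, D=0.05):
    """Unit-release Green's function of c_t + v*c_x = D*c_xx (1-D, infinite domain)."""
    return np.exp(-(x - v * t) ** 2 / (4.0 * D * t)) / np.sqrt(4.0 * np.pi * D * t)

x_obs, T_src, dt = 10.0, 5.0, 0.05       # observation point, source duration, step
t = np.linspace(0.1, 30.0, 600)
taus = np.arange(dt / 2.0, T_src, dt)    # release times spread over the source pulse
c = sum(green(x_obs, np.clip(t - tau, 1e-9, None)) for tau in taus) * dt / T_src

# Here Pe = v * x_obs / D = 200, i.e. the advection-dominated regime.
print(f"peak concentration {c.max():.4f} at t = {t[c.argmax()]:.2f}")
```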

  18. Large Eddy Simulation study of the development of finite-channel lock-release currents at high Grashof numbers

    Science.gov (United States)

    Ooi, Seng-Keat

    2005-11-01

    Lock-exchange gravity current flows produced by the instantaneous release of a heavy fluid are investigated using well-resolved 3-D large eddy simulations (LES) at Grashof numbers up to 8×10⁹. It is found that the 3-D simulations correctly predict a constant front velocity over the initial slumping phase and a front-speed decrease proportional to t^(-1/3) (the time t is measured from the release) over the inviscid phase, in agreement with theory. The evolution of the current in the simulations is found to be similar to that observed experimentally by Hacker et al. (1996). The effect of the dynamic LES model on the solutions is discussed. The energy budget of the current is discussed and the contribution of the turbulent dissipation to the total dissipation is analyzed. The limitations of less expensive 2-D simulations are discussed; in particular, their failure to correctly predict the spatio-temporal distributions of the bed shear stresses, which is important in determining the amount of sediment the gravity current can entrain when it advances over a loose bed.

  19. Predicting the required number of training samples. [for remotely sensed image data based on covariance matrix estimate quality criterion of normal distribution

    Science.gov (United States)

    Kalayeh, H. M.; Landgrebe, D. A.

    1983-01-01

    A criterion which measures the quality of the estimate of the covariance matrix of a multivariate normal distribution is developed. Based on this criterion, the necessary number of training samples is predicted. Experimental results, which are used as a guide for determining the number of training samples, are included. Previously announced in STAR as N82-28109.
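    The underlying effect can be sketched numerically (a simple relative Frobenius error stands in for the paper's criterion, and the dimensionality and ground-truth covariance are assumptions): covariance estimates tighten as the number of training samples grows relative to the dimension.

```python
# Covariance estimation quality vs. number of training samples (toy criterion).
import numpy as np

rng = np.random.default_rng(5)
p = 10
A = rng.normal(size=(p, p))
true_cov = A @ A.T + p * np.eye(p)      # a well-conditioned ground-truth covariance

for n in (15, 50, 200, 1000):
    x = rng.multivariate_normal(np.zeros(p), true_cov, size=n)
    est = np.cov(x, rowvar=False)       # sample covariance from n training samples
    err = np.linalg.norm(est - true_cov) / np.linalg.norm(true_cov)
    print(f"n = {n:5d}: relative Frobenius error = {err:.3f}")
```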

  20. Application of Conventional and K0-Based Internal Monostandard NAA Using Reactor Neutrons for Compositional Analysis of Large Samples

    International Nuclear Information System (INIS)

    Reddy, A.V.R.; Acharya, R.; Swain, K. K.; Pujari, P.K.

    2018-01-01

    Large sample neutron activation analysis (LSNAA) work was carried out on samples of coal, uranium ore, stainless steel, ancient and new clay potteries, dross, and a clay pottery replica from Peru, using low-flux, highly thermalized irradiation sites. Large as well as non-standard geometry samples (1 g – 0.5 kg) were irradiated using the thermal column (TC) facility of the Apsara reactor and the graphite reflector position of the critical facility (CF) at Bhabha Atomic Research Centre, Mumbai. Small (10 – 500 mg) samples were also irradiated at the core position of the Apsara reactor, the pneumatic carrier facility (PCF) of the Dhruva reactor, and the pneumatic fast transfer facility (PFTS) of the KAMINI reactor. Irradiation positions were characterized using an indium flux monitor for the TC and CF, whereas multi-monitors were used at the other positions. Radioactive assay was carried out using high-resolution gamma-ray spectrometry. The k0-based internal monostandard NAA (IM-NAA) method was used to determine elemental concentration ratios with respect to Na in coal and uranium ore samples, Sc in pottery samples and Fe in stainless steel. In situ relative detection efficiency for each irradiated sample was obtained using γ rays of the activation products in the required energy range. Representative sample sizes were arrived at for coal and uranium ore from plots of La/Na ratios as a function of sample mass. For the stainless steel sample of SS 304L, absolute concentrations were calculated from concentration ratios by a mass-balance approach, since all the major elements (Fe, Cr, Ni and Mn) were amenable to NAA. Concentration ratios obtained by IM-NAA were used for a provenance study of 30 clay potteries obtained from excavated Buddhist sites of AP, India. The La to Ce concentration ratios were used for preliminary grouping, and concentration ratios of 15 elements with respect to Sc were used in statistical cluster analysis for confirmation of the grouping. Concentrations of Au and Ag were determined in not so

  1. Elemental mapping of large samples by external ion beam analysis with sub-millimeter resolution and its applications

    Science.gov (United States)

    Silva, T. F.; Rodrigues, C. L.; Added, N.; Rizzutto, M. A.; Tabacniks, M. H.; Mangiarotti, A.; Curado, J. F.; Aguirre, F. R.; Aguero, N. F.; Allegro, P. R. P.; Campos, P. H. O. V.; Restrepo, J. M.; Trindade, G. F.; Antonio, M. R.; Assis, R. F.; Leite, A. R.

    2018-05-01

    The elemental mapping of large areas using ion beam techniques is a desired capability for several scientific communities, involved in topics ranging from geoscience to cultural heritage. Usually, the constraints for large-area mapping are not met in the micro- and nano-probe setups implemented all over the world. A novel setup for mapping large-sized samples in an external beam was recently built at the University of São Paulo, employing a broad MeV-proton probe of sub-millimeter dimension coupled to a high-precision, large-range XYZ robotic stage (60 cm range on all axes and precision of 5 μm ensured by optical sensors). An important issue in large-area mapping is how to deal with the irregularities of the sample's surface, which may introduce artifacts in the images due to variation of the measuring conditions. In our setup, we implemented an automatic system based on machine vision to correct the position of the sample to compensate for its surface irregularities. As an additional benefit, a 3D digital reconstruction of the scanned surface can also be obtained. Using this new and unique setup, we have produced large-area elemental maps of ceramics, stones, fossils, and other sorts of samples.

  2. Estimating parameters of neutral communities: From one single large to several small samples

    NARCIS (Netherlands)

    Munoz, Francois; Couteron, Pierre; Ramesh, B. R.; Etienne, Rampal S.

    2007-01-01

    The neutral theory of S. P. Hubbell postulates a two-scale hierarchical framework consisting of a metacommunity following the speciation–drift equilibrium characterized by the "biodiversity number" theta, and local communities following the migration–drift equilibrium characterized by the

  3. Large-scale prospective T cell function assays in shipped, unfrozen blood samples

    DEFF Research Database (Denmark)

    Hadley, David; Cheung, Roy K; Becker, Dorothy J

    2014-01-01

    , for measuring core T cell functions. The Trial to Reduce Insulin-dependent diabetes mellitus in the Genetically at Risk (TRIGR) type 1 diabetes prevention trial used consecutive measurements of T cell proliferative responses in prospectively collected fresh heparinized blood samples shipped by courier within...... cell immunocompetence. We have found that the vast majority of the samples were viable up to 3 days from the blood draw, yet meaningful responses were found in a proportion of those with longer travel times. Furthermore, the shipping time of uncooled samples significantly decreased both the viabilities...... North America. In this article, we report on the quality control implications of this simple and pragmatic shipping practice and the interpretation of positive- and negative-control analytes in our assay. We used polyclonal and postvaccination responses in 4,919 samples to analyze the development of T...

  4. Human blood RNA stabilization in samples collected and transported for a large biobank

    Science.gov (United States)

    2012-01-01

    Background The Norwegian Mother and Child Cohort Study (MoBa) is a nation-wide population-based pregnancy cohort initiated in 1999, comprising more than 108,000 pregnancies recruited between 1999 and 2008. In this study we evaluated the feasibility of integrating RNA analyses into existing MoBa protocols. We compared two different blood RNA collection tube systems, the PAXgene™ Blood RNA system and the Tempus™ Blood RNA system, and assessed the effects of suboptimal blood volumes in collection tubes and of transportation of blood samples by standard mail. Endpoints to characterize the samples were RNA quality and yield, and the RNA transcript stability of selected genes. Findings High-quality RNA could be extracted from blood samples stabilized with both PAXgene and Tempus tubes. The RNA yields obtained from the blood samples collected in Tempus tubes were consistently higher than those from PAXgene tubes. Higher RNA yields were obtained from cord blood (3–4 times) than from adult blood with both types of tubes. Transportation of samples by standard mail had moderate effects on RNA quality and RNA transcript stability; the overall RNA quality of the transported samples was high. Some unexplained changes in gene expression were noted, which seemed to correlate with suboptimal blood volumes collected in the tubes. Temperature variations during transportation may also be of some importance. Conclusions Our results strongly suggest that special collection tubes are necessary for RNA stabilization and should be used when establishing new biobanks. We also show that the 50,000 samples collected in the MoBa biobank provide RNA of high quality and in sufficient amounts to allow gene expression analyses for studying the association of disease with altered patterns of gene expression. PMID:22988904

  5. Email-Based Informed Consent: Innovative Method for Reaching Large Numbers of Subjects for Data Mining Research

    Science.gov (United States)

    Lee, Lesley R.; Mason, Sara S.; Babiak-Vazquez, Adriana; Ray, Stacie L.; Van Baalen, Mary

    2015-01-01

    Since the 2010 NASA authorization to make the Life Sciences Data Archive (LSDA) and Lifetime Surveillance of Astronaut Health (LSAH) data archives more accessible to the research and operational communities, demand for data has greatly increased. Correspondingly, both the number and scope of requests have increased, from 142 requests fulfilled in 2011 to 224 in 2014, with some datasets comprising up to 1 million data points. To meet the demand, the LSAH and LSDA Repositories project was launched, which allows active and retired astronauts to authorize full, partial, or no access to their data for research without individual, study-specific informed consent. A one-on-one personal informed consent briefing is required to fully communicate the implications of the several tiers of consent. Due to the need for personal contact to conduct Repositories consent meetings, the rate of consenting has not kept up with demand for individualized, possibly attributable data. As a result, other methods had to be implemented to allow the release of large datasets, such as the release of only de-identified data. However, the compilation of large, de-identified datasets places a significant resource burden on LSAH and LSDA and may result in diminished scientific usefulness of the dataset. As a result, LSAH and LSDA worked with the JSC Institutional Review Board Chair, Astronaut Office physicians, and NASA Office of General Counsel personnel to develop a "Remote Consenting" process for retrospective data mining studies. This is particularly useful since the majority of the astronaut cohort is retired from the agency and living outside the Houston area. Originally planned as a method to send informed consent briefing slides and consent forms only by mail, Remote Consenting has evolved into a means to accept crewmember decisions on individual studies via their method of choice: email or paper copy by mail. To date, 100 emails have been sent to request participation in eight HRP

  6. Examining gray matter structure associated with academic performance in a large sample of Chinese high school students

    OpenAIRE

    Song Wang; Ming Zhou; Taolin Chen; Xun Yang; Guangxiang Chen; Meiyun Wang; Qiyong Gong

    2017-01-01

    Achievement in school is crucial for students to be able to pursue successful careers and lead happy lives in the future. Although many psychological attributes have been found to be associated with academic performance, the neural substrates of academic performance remain largely unknown. Here, we investigated the relationship between brain structure and academic performance in a large sample of high school students via structural magnetic resonance imaging (S-MRI) using voxel-based morphome...

  7. Large-volume injection of sample diluents not miscible with the mobile phase as an alternative approach in sample preparation for bioanalysis: an application for fenspiride bioequivalence.

    Science.gov (United States)

    Medvedovici, Andrei; Udrescu, Stefan; Albu, Florin; Tache, Florentin; David, Victor

    2011-09-01

    Liquid-liquid extraction of target compounds from biological matrices, followed by the injection of a large volume from the organic layer into a chromatographic column operated under reversed-phase (RP) conditions, successfully combines the selectivity and straightforward character of the procedure with enhanced sensitivity, compared with the usual approach involving solvent evaporation and residue re-dissolution. Large-volume injection of samples in diluents that are not miscible with the mobile phase was recently introduced in chromatographic practice. The risk of random errors produced during the manipulation of samples is also substantially reduced. A bioanalytical method designed for the bioequivalence of fenspiride-containing pharmaceutical formulations was based on a sample preparation procedure involving extraction of the target analyte and the internal standard (trimetazidine) from alkalinized plasma samples into 1-octanol. A volume of 75 µl from the octanol layer was directly injected onto a Zorbax SB C18 Rapid Resolution column, 50 mm length × 4.6 mm internal diameter × 1.8 µm particle size, with the RP separation carried out under gradient elution conditions. Detection was performed by positive-mode ESI and MS/MS. Aspects related to method development and validation are discussed. The bioanalytical method was successfully applied to assess the bioequivalence of a modified-release pharmaceutical formulation containing 80 mg fenspiride hydrochloride in two different studies, carried out as single-dose administration under fasting and fed conditions (four arms) and as multiple-dose administration, respectively. The quality attributes assigned to the bioanalytical method, as resulting from its application to the bioequivalence studies, are highlighted and fully demonstrate that sample preparation based on large-volume injection of immiscible diluents has increased potential for application in bioanalysis.

  8. Social class and (un)ethical behavior: A framework, with evidence from a large population sample

    NARCIS (Netherlands)

    Trautmann, S.T.; van de Kuilen, G.; Zeckhauser, R.J.

    2013-01-01

    Differences in ethical behavior between members of the upper and lower classes have been at the center of civic debates in recent years. In this article, we present a framework for understanding how class affects ethical standards and behaviors. We apply the framework using data from a large Dutch

  9. Consistent associations between measures of psychological stress and CMV antibody levels in a large occupational sample

    NARCIS (Netherlands)

    Rector, J.L.; Dowd, J.B.; Loerbroks, A.; Burns, V.E.; Moss, P.A.; Jarczok, M.N.; Stalder, T.; Hoffman, K.; Fischer, J.E.; Bosch, J.A.

    2014-01-01

    Cytomegalovirus (CMV) is a herpes virus that has been implicated in biological aging and impaired health. Evidence, largely accrued from small-scale studies involving select populations, suggests that stress may promote non-clinical reactivation of this virus. However, absent is evidence from larger

  10. Estimating parameters of neutral communities: From one single large to several small samples

    NARCIS (Netherlands)

    Munoz, F.; Couteron, P.; Ramesh, B.R.; Etienne, R.S.

    2007-01-01

    The neutral theory of S. P. Hubbell postulates a two-scale hierarchical framework consisting of a metacommunity following the speciation–drift equilibrium characterized by the "biodiversity number" θ, and local communities following the migration–drift equilibrium characterized by the "migration

  11. Small on the Left, Large on the Right: Numbers Orient Visual Attention onto Space in Preverbal Infants

    Science.gov (United States)

    Bulf, Hermann; de Hevia, Maria Dolores; Macchi Cassia, Viola

    2016-01-01

    Numbers are represented as ordered magnitudes along a spatially oriented number line. While culture and formal education modulate the direction of this number-space mapping, it is a matter of debate whether its emergence is entirely driven by cultural experience. By registering 8-9-month-old infants' eye movements, this study shows that numerical…

  12. An individual urinary proteome analysis in normal human beings to define the minimal sample number to represent the normal urinary proteome

    Directory of Open Access Journals (Sweden)

    Liu Xuejiao

    2012-11-01

    Full Text Available Abstract Background The urinary proteome has been widely used for biomarker discovery. A urinary proteome database from normal humans can provide a background for discovery proteomics and candidate proteins/peptides for targeted proteomics. Therefore, it is necessary to define the minimum number of individuals required for sampling to represent the normal urinary proteome. Methods In this study, inter-individual and inter-gender variations of the urinary proteome were taken into consideration to achieve a representative database. An individual analysis was performed on overnight urine samples from 20 normal volunteers (10 males and 10 females) by 1D LC-MS/MS. To obtain a representative result for each sample, a replicate 1D LC-MS/MS analysis was performed. The minimal sample number was estimated by statistical analysis. Results For qualitative analysis, less than 5% of new proteins/peptides were identified in a male/female normal group upon adding a new sample once the sample number exceeded nine. In addition, in a normal group, the percentage of newly identified proteins/peptides was less than 5% upon adding a new sample when the sample number reached 10. Furthermore, a statistical analysis indicated that urinary proteomes from normal males and females showed different patterns. For quantitative analysis, the variation of protein abundance was assessed by spectral counting and western blotting methods, and the minimal sample number for quantitative proteomic analysis was then identified. Conclusions For qualitative analysis, when considering inter-individual and inter-gender variations, the minimum sample number is 10, with a balanced number of males and females, in order to obtain a representative normal human urinary proteome. For quantitative analysis, the minimal sample number is much greater than that for qualitative analysis and depends on the experimental methods used for quantification.
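    The saturation criterion described in the Results can be sketched as follows (mock protein identifications with an assumed abundance distribution, not the study's data): add samples one at a time and stop when the fraction of newly identified proteins falls below 5% of the cumulative total.

```python
# Toy saturation analysis for the minimal representative sample number.
import numpy as np

rng = np.random.default_rng(6)
universe = np.arange(3000)            # hypothetical protein universe
w = 1.0 / (1.0 + universe)            # few abundant, many rare proteins (assumed)
w /= w.sum()

seen = set()
for k in range(1, 21):
    ids = set(rng.choice(universe, size=800, replace=False, p=w).tolist())
    new = len(ids - seen)
    seen |= ids
    pct_new = 100.0 * new / len(seen)  # % newly identified after adding sample k
    print(f"sample {k:2d}: cumulative = {len(seen):4d}, new = {pct_new:5.1f}%")
    if k > 1 and pct_new < 5.0:
        print(f"saturation (<5% new) reached at {k} samples")
        break
```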

  13. Water pollution screening by large-volume injection of aqueous samples and application to GC/MS analysis of a river Elbe sample

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, S.; Efer, J.; Engewald, W. [Leipzig Univ. (Germany). Inst. fuer Analytische Chemie

    1997-03-01

    The large-volume sampling of aqueous samples in a programmed temperature vaporizer (PTV) injector was used successfully for the target and non-target analysis of real samples. In this still rarely applied method, e.g., 1 mL of the water sample to be analyzed is slowly injected directly into the PTV. The vaporized water is eliminated through the split vent. The analytes are concentrated onto an adsorbent inside the insert and subsequently thermally desorbed. The capability of the method is demonstrated using a sample from the river Elbe. Coupled with a mass-selective detector in SIM mode (target analysis), the method allows the determination of pollutants at concentrations down to 0.01 µg/L. Furthermore, PTV enrichment is an effective and time-saving method for non-target analysis in SCAN mode. In a sample from the river Elbe over 20 compounds were identified. (orig.) With 3 figs., 2 tabs.

  14. Large-sample neutron activation analysis in mass balance and nutritional studies

    NARCIS (Netherlands)

    van de Wiel, A.; Blaauw, Menno

    2018-01-01

    Low concentrations of elements in food can be measured with various techniques, mostly in small samples (mg). These techniques provide only reliable data when the element is distributed homogeneously in the material to be analysed either naturally or after a homogenisation procedure. When this is

  15. Large scale inference in the Infinite Relational Model: Gibbs sampling is not enough

    DEFF Research Database (Denmark)

    Albers, Kristoffer Jon; Moth, Andreas Leon Aagard; Mørup, Morten

    2013-01-01

    We find that Gibbs sampling can be computationally scaled to handle millions of nodes and billions of links. Investigating the behavior of the Gibbs sampler for different sizes of networks, we find that the mixing ability decreases drastically with the network size, clearly indicating a need...

  16. Predictive Value of Callous-Unemotional Traits in a Large Community Sample

    Science.gov (United States)

    Moran, Paul; Rowe, Richard; Flach, Clare; Briskman, Jacqueline; Ford, Tamsin; Maughan, Barbara; Scott, Stephen; Goodman, Robert

    2009-01-01

    Objective: Callous-unemotional (CU) traits in children and adolescents are increasingly recognized as a distinctive dimension of prognostic importance in clinical samples. Nevertheless, comparatively little is known about the longitudinal effects of these personality traits on the mental health of young people from the general population. Using a…

  17. Evaluating hypotheses in geolocation on a very large sample of Twitter

    DEFF Research Database (Denmark)

    Salehi, Bahar; Søgaard, Anders

    2017-01-01

    Recent work in geolocation has made several hypotheses about what linguistic markers are relevant to detect where people write from. In this paper, we examine six hypotheses against a corpus consisting of all geo-tagged tweets from the US, or whose geo-tags could be inferred, in a 19% sample of Twitter...

  18. CASP10-BCL::Fold efficiently samples topologies of large proteins.

    Science.gov (United States)

    Heinze, Sten; Putnam, Daniel K; Fischer, Axel W; Kohlmann, Tim; Weiner, Brian E; Meiler, Jens

    2015-03-01

    During CASP10 in summer 2012, we tested BCL::Fold for prediction of free modeling (FM) and template-based modeling (TBM) targets. BCL::Fold assembles the tertiary structure of a protein from predicted secondary structure elements (SSEs), omitting more flexible loop regions early on. This approach enables the sampling of conformational space for larger proteins with more complex topologies. In preparation for CASP11, we analyzed the quality of CASP10 models throughout the prediction pipeline to understand BCL::Fold's ability to sample the native topology, identify native-like models by scoring and/or clustering approaches, and our ability to add loop regions and side chains to initial SSE-only models. The standout observation is that BCL::Fold sampled topologies with a GDT_TS score > 33% for 12 of 18 and with a topology score > 0.8 for 11 of 18 test cases de novo. Despite the sampling success of BCL::Fold, significant challenges still exist in the clustering and loop generation stages of the pipeline. The clustering approach employed for model selection often failed to identify the most native-like assembly of SSEs for further refinement and submission. It was also observed that for some β-strand proteins model refinement failed, as β-strands were not properly aligned to form hydrogen bonds, removing otherwise accurate models from the pool. Further, BCL::Fold frequently samples non-natural topologies that require loop regions to pass through the center of the protein. © 2015 Wiley Periodicals, Inc.

  19. Comparing rapid methods for detecting Listeria in seafood and environmental samples using the most probable number (MPN) technique.

    Science.gov (United States)

    Cruz, Cristina D; Win, Jessicah K; Chantarachoti, Jiraporn; Mutukumira, Anthony N; Fletcher, Graham C

    2012-02-15

    The standard Bacteriological Analytical Manual (BAM) protocol for detecting Listeria in food and on environmental surfaces takes about 96 h. Some studies indicate that rapid methods, which produce results within 48 h, may be as sensitive and accurate as the culture protocol. As they only give presence/absence results, it can be difficult to compare the accuracy of results generated. We used the Most Probable Number (MPN) technique to evaluate the performance and detection limits of six rapid kits for detecting Listeria in seafood and on an environmental surface compared with the standard protocol. Three seafood products and an environmental surface were inoculated with similar known cell concentrations of Listeria and analyzed according to the manufacturers' instructions. The MPN was estimated using the MPN-BAM spreadsheet. For the seafood products no differences were observed among the rapid kits and efficiency was similar to the BAM method. On the environmental surface the BAM protocol had a higher recovery rate (sensitivity) than any of the rapid kits tested. Clearview™, Reveal®, TECRA® and VIDAS® LDUO detected the cells but only at high concentrations (>10² CFU/10 cm²). Two kits (VIP™ and Petrifilm™) failed to detect 10⁴ CFU/10 cm². The MPN method was a useful tool for comparing the results generated by these presence/absence test kits. There remains a need to develop a rapid and sensitive method for detecting Listeria in environmental samples that performs as well as the BAM protocol, since none of the rapid tests used in this study achieved a satisfactory result. Copyright © 2011 Elsevier B.V. All rights reserved.
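
    The MPN estimate itself can be computed by maximum likelihood rather than by spreadsheet lookup. A hedged Python sketch (not the MPN-BAM spreadsheet), assuming tubes are independent and a tube is positive with probability 1 − exp(−λV):

        import numpy as np
        from scipy.optimize import brentq

        def mpn(volumes, n_tubes, n_positive):
            """Maximum-likelihood MPN (organisms per unit volume) for a dilution
            series: volumes = inoculum per tube at each dilution; n_tubes and
            n_positive = tubes inoculated and tubes positive at each dilution."""
            v = np.asarray(volumes, float)
            n = np.asarray(n_tubes, float)
            g = np.asarray(n_positive, float)
            if g.sum() == 0:
                return 0.0                     # no positives: estimate is zero
            if np.all(g == n):
                raise ValueError("all tubes positive: MPN unbounded, dilute further")

            def score(lam):
                # derivative of the log-likelihood, P(tube positive) = 1 - exp(-lam*v)
                return np.sum(g * v * np.exp(-lam * v) / (1.0 - np.exp(-lam * v))
                              - (n - g) * v)

            return brentq(score, 1e-9, 1e9)    # score is decreasing; bracket the root

        # Example: 3 dilutions (10, 1, 0.1 mL), 5 tubes each, 4/2/0 positive
        print(mpn([10, 1, 0.1], [5, 5, 5], [4, 2, 0]))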

  20. Accelerating inference for diffusions observed with measurement error and large sample sizes using approximate Bayesian computation

    DEFF Research Database (Denmark)

    Picchini, Umberto; Forman, Julie Lyng

    2016-01-01

    In recent years, dynamical modelling has been provided with a range of breakthrough methods to perform exact Bayesian inference. However, it is often computationally unfeasible to apply exact statistical methodologies in the context of large data sets and complex models. This paper considers a nonlinear stochastic differential equation model observed with correlated measurement errors and an application to protein folding modelling. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for model parameters within reasonable time constraints. A simulation study is conducted to compare our strategy with exact Bayesian inference, the latter being two orders of magnitude slower than ABC-MCMC for the considered set-up. Finally, the ABC algorithm is applied to a large protein data set. The suggested methodology is fairly general...
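
    To make the ABC-MCMC idea concrete, here is a minimal, generic Python sketch (not the authors' algorithm; the simulator, summary statistics, tolerance eps, and Gaussian random-walk proposal are all placeholder assumptions):

        import numpy as np

        def abc_mcmc(y_obs, simulate, summarize, prior_logpdf, theta0,
                     n_iter=10000, eps=1.0, prop_sd=0.1, seed=0):
            """Minimal ABC-MCMC: accept a proposal only if the distance between
            simulated and observed summary statistics falls below eps, with a
            Metropolis correction from the prior (symmetric proposal assumed).
            simulate(theta, rng) and summarize(y) are problem-specific stand-ins."""
            rng = np.random.default_rng(seed)
            s_obs = summarize(y_obs)
            theta = np.atleast_1d(np.asarray(theta0, float))
            chain = np.empty((n_iter, theta.size))
            for t in range(n_iter):
                prop = theta + prop_sd * rng.standard_normal(theta.size)
                s_sim = summarize(simulate(prop, rng))
                dist = np.linalg.norm(s_sim - s_obs)
                log_alpha = prior_logpdf(prop) - prior_logpdf(theta)
                if dist < eps and np.log(rng.uniform()) < log_alpha:
                    theta = prop
                chain[t] = theta
            return chain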

  1. Sample preparation and analysis of large ²³⁸PuO₂ and ThO₂ spheres

    International Nuclear Information System (INIS)

    Wise, R.L.; Selle, J.E.

    1975-01-01

    A program was initiated to determine the density gradient across a large spherical ²³⁸PuO₂ sample produced by vacuum hot pressing. Due to the high thermal output of the ceramic, a thin section was necessary to prevent overheating of the plastic mount. Techniques were developed for cross sectioning, mounting, grinding, and polishing of the sample. The polished samples were then analyzed on a quantitative image analyzer to determine the density as a function of location across the sphere. The techniques for indexing, analyzing, and reducing the data are described. Typical results obtained on a ThO₂ simulant sphere are given.

  2. Sampling design in large-scale vegetation studies: Do not sacrifice ecological thinking to statistical purism!

    Czech Academy of Sciences Publication Activity Database

    Roleček, J.; Chytrý, M.; Hájek, Michal; Lvončík, S.; Tichý, L.

    2007-01-01

    Roč. 42, - (2007), s. 199-208 ISSN 1211-9520 R&D Projects: GA AV ČR IAA6163303; GA ČR(CZ) GA206/05/0020 Grant - others:GA AV ČR(CZ) KJB601630504 Institutional research plan: CEZ:AV0Z60050516 Keywords : Ecological methodology * Large-scale vegetation patterns * Macroecology Subject RIV: EF - Botanics Impact factor: 1.133, year: 2007

  3. Problematic Social Media Use: Results from a Large-Scale Nationally Representative Adolescent Sample.

    Directory of Open Access Journals (Sweden)

    Fanni Bányai

    Full Text Available Despite social media use being one of the most popular activities among adolescents, prevalence estimates among teenage samples of social media (problematic) use are lacking in the field. The present study surveyed a nationally representative Hungarian sample comprising 5,961 adolescents as part of the European School Survey Project on Alcohol and Other Drugs (ESPAD). Using the Bergen Social Media Addiction Scale (BSMAS) and based on latent profile analysis, 4.5% of the adolescents belonged to the at-risk group, and reported low self-esteem, high level of depression symptoms, and elevated social media use. Results also demonstrated that the BSMAS has appropriate psychometric properties. It is concluded that adolescents at-risk of problematic social media use should be targeted by school-based prevention and intervention programs.

  4. Memory-Optimized Software Synthesis from Dataflow Program Graphs with Large Size Data Samples

    Directory of Open Access Journals (Sweden)

    Hyunok Oh

    2003-05-01

    Full Text Available In multimedia and graphics applications, data samples of nonprimitive type require a significant amount of buffer memory. This paper addresses the problem of minimizing the buffer memory requirement for such applications in embedded software synthesis from graphical dataflow programs based on the synchronous dataflow (SDF) model with a given execution order of nodes. We propose a memory minimization technique that separates global memory buffers from local pointer buffers: the global buffers store live data samples and the local buffers store the pointers to the global buffer entries. The proposed algorithm reduces memory by 67% for a JPEG encoder and by 40% for an H.263 encoder compared with unshared versions, and by 22% compared with the previous sharing algorithm for the H.263 encoder. Through extensive buffer sharing optimization, we believe that automatic software synthesis from dataflow program graphs achieves code quality comparable to manually optimized code in terms of memory requirement.
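
    A toy Python sketch of the lifetime-based sharing idea (not the paper's algorithm, which operates on SDF schedules): samples whose live intervals do not overlap can be mapped onto the same global buffer, with the local buffers reduced to indices into it. The greedy first-fit policy below is an illustrative assumption.

        def assign_shared_buffers(lifetimes, sizes):
            """Each data sample has a live interval (first_write, last_read) and a
            size; samples whose intervals do not overlap may share one global
            buffer. Returns sample -> buffer assignment and total buffer bytes."""
            order = sorted(range(len(lifetimes)), key=lambda i: lifetimes[i][0])
            buffers = []            # list of (capacity, time at which buffer frees up)
            assignment = {}
            for i in order:
                start, end = lifetimes[i]
                # reuse the first already-freed buffer that is large enough
                for b, (cap, free_at) in enumerate(buffers):
                    if free_at <= start and cap >= sizes[i]:
                        buffers[b] = (cap, end)
                        assignment[i] = b
                        break
                else:
                    buffers.append((sizes[i], end))
                    assignment[i] = len(buffers) - 1
            total = sum(cap for cap, _ in buffers)
            return assignment, total

        # Samples 0 and 2 have disjoint lifetimes, so they share a buffer
        assignment, total = assign_shared_buffers([(0, 2), (1, 3), (3, 5)], [64, 32, 64])
        print(assignment, total)   # {0: 0, 1: 1, 2: 0} with total 96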

  5. Problematic Social Media Use: Results from a Large-Scale Nationally Representative Adolescent Sample.

    Science.gov (United States)

    Bányai, Fanni; Zsila, Ágnes; Király, Orsolya; Maraz, Aniko; Elekes, Zsuzsanna; Griffiths, Mark D; Andreassen, Cecilie Schou; Demetrovics, Zsolt

    2017-01-01

    Despite social media use being one of the most popular activities among adolescents, prevalence estimates among teenage samples of social media (problematic) use are lacking in the field. The present study surveyed a nationally representative Hungarian sample comprising 5,961 adolescents as part of the European School Survey Project on Alcohol and Other Drugs (ESPAD). Using the Bergen Social Media Addiction Scale (BSMAS) and based on latent profile analysis, 4.5% of the adolescents belonged to the at-risk group, and reported low self-esteem, high level of depression symptoms, and elevated social media use. Results also demonstrated that BSMAS has appropriate psychometric properties. It is concluded that adolescents at-risk of problematic social media use should be targeted by school-based prevention and intervention programs.

  6. Sampling methods and non-destructive examination techniques for large radioactive waste packages

    International Nuclear Information System (INIS)

    Green, T.H.; Smith, D.L.; Burgoyne, K.E.; Maxwell, D.J.; Norris, G.H.; Billington, D.M.; Pipe, R.G.; Smith, J.E.; Inman, C.M.

    1992-01-01

    Progress is reported on work undertaken to evaluate quality checking methods for radioactive wastes. A sampling rig was designed, fabricated and used to develop techniques for the destructive sampling of cemented simulant waste using remotely operated equipment. An engineered system for the containment of cooling water was designed and manufactured and successfully demonstrated with the drum and coring equipment mounted in both vertical and horizontal orientations. The preferred in-cell orientation was found to be with the drum and coring machinery mounted in a horizontal position. Small powdered samples can be taken from cemented homogeneous waste cores using a hollow drill/vacuum suction technique, with the preferred subsampling technique being to discard the outer 10 mm layer to obtain a representative sample of the cement core. Cement blends can be dissolved using fusion techniques and the resulting solutions are stable to gelling for periods in excess of one year. Although hydrochloric acid and nitric acid are promising solvents for dissolution of cement blends, the resultant solutions tend to form silicic acid gels. An estimate of the beta-emitter content of cemented waste packages can be obtained by a combination of non-destructive and destructive techniques. The errors will probably be in excess of ±60% at the 95% confidence level. Real-time X-ray video-imaging techniques have been used to analyse drums of uncompressed, hand-compressed, in-drum compacted and high-force compacted (i.e. supercompacted) simulant waste. The results have confirmed the applicability of this technique for NDT of low-level waste. 8 refs., 12 figs., 3 tabs

  7. Calibration of UFBC counters and their performance in the assay of large mass plutonium samples

    International Nuclear Information System (INIS)

    Verrecchia, G.P.D.; Smith, B.G.R.; Cranston, R.

    1991-01-01

    This paper reports on the cross-calibration of four Universal Fast Breeder reactor assembly coincidence (UFBC) counters using multi-can containers of plutonium oxide powders with masses between 2 and 12 kg of plutonium, and a parametric study on the sensitivity of the detector response to the positioning or removal and substitution of the material with empty cans. The paper also reports on the performance of the UFBC for routine measurements on large-mass, multi-can containers of plutonium oxide powders and compares the results to experience previously obtained in the measurement of fast reactor type fuel assemblies in the mass range 2 to 16 kg of plutonium.

  8. Maternal bereavement and childhood asthma-analyses in two large samples of Swedish children.

    Directory of Open Access Journals (Sweden)

    Fang Fang

    Full Text Available Prenatal factors such as prenatal psychological stress might influence the development of childhood asthma. We assessed the association between maternal bereavement shortly before and during pregnancy, as a proxy for prenatal stress, and the risk of childhood asthma in the offspring, based on two samples of children aged 1-4 (n = 426,334) and 7-12 (n = 493,813) years assembled from the Swedish Medical Birth Register. Exposure was maternal bereavement of a close relative from one year before pregnancy to childbirth. An asthma event was defined by a hospital contact for asthma or at least two dispensings of inhaled corticosteroids or montelukast. In the younger sample we calculated hazard ratios (HRs) of a first-ever asthma event using Cox models, and in the older sample odds ratios (ORs) of an asthma attack during 12 months using logistic regression. Compared to unexposed boys, exposed boys seemed to have a weakly higher risk of a first-ever asthma event at 1-4 years (HR: 1.09; 95% confidence interval [CI]: 0.98, 1.22) as well as an asthma attack during 12 months at 7-12 years (OR: 1.10; 95% CI: 0.96, 1.24). No association was suggested for girls. Boys exposed during the second trimester had a significantly higher risk of an asthma event at 1-4 years (HR: 1.55; 95% CI: 1.19, 2.02) and an asthma attack at 7-12 years if the bereavement was the loss of an older child (OR: 1.58; 95% CI: 1.11, 2.25). The associations tended to be stronger if the bereavement was due to a traumatic death compared to natural death, but the difference was not statistically significant. Our results showed some evidence for a positive association between prenatal stress and childhood asthma among boys but not girls.

  9. Test in a beam of large-area Micromegas chambers for sampling calorimetry

    CERN Document Server

    Adloff, C.; Dalmaz, A.; Drancourt, C.; Gaglione, R.; Geffroy, N.; Jacquemier, J.; Karyotakis, Y.; Koletsou, I.; Peltier, F.; Samarati, J.; Vouters, G.

    2014-06-11

    Application of Micromegas for sampling calorimetry puts specific constraints on the design and performance of this gaseous detector. In particular, uniform and linear response, low noise and stability against high ionisation density deposits are prerequisites to achieving good energy resolution. A Micromegas-based hadronic calorimeter was proposed for an application at a future linear collider experiment and three technologically advanced prototypes of 1×1 m² were constructed. Their merits relative to the above-mentioned criteria are discussed on the basis of measurements performed at the CERN SPS test-beam facility.

  10. Smoking and intention to quit among a large sample of black sexual and gender minorities.

    Science.gov (United States)

    Jordan, Jenna N; Everett, Kevin D; Ge, Bin; McElroy, Jane A

    2015-01-01

    The purpose of this study is to more completely quantify smoking and intention to quit from a sample of sexual and gender minority (SGM) Black individuals (N = 639) through analysis of data collected at Pride festivals and online. Frequencies described demographic characteristics; chi-square analyses were used to compare tobacco-related variables. Black SGM smokers were more likely to be trying to quit smoking than White SGM smokers. However, Black SGM individuals were less likely than White SGM individuals to become former smokers. The results of this study indicate that smoking behaviors may be heavily influenced by race after accounting for SGM status.

  11. Methods of pre-concentration of radionuclides from large volume samples

    International Nuclear Information System (INIS)

    Olahova, K.; Matel, L.; Rosskopfova, O.

    2006-01-01

    The development of radioanalytical methods for low-level radionuclides in environmental samples is presented. In particular, emphasis is placed on the introduction of extraction chromatography as a tool for improving the quality of results as well as reducing the analysis time. However, the advantageous application of extraction chromatography often depends on the effective use of suitable preconcentration techniques, such as co-precipitation, to reduce the amount of matrix components which accompany the analytes of interest. On-going investigations in this field relevant to the determination of environmental levels of actinides and ⁹⁰Sr are discussed. (authors)

  12. Reinforced dynamics for enhanced sampling in large atomic and molecular systems

    Science.gov (United States)

    Zhang, Linfeng; Wang, Han; E, Weinan

    2018-03-01

    A new approach for efficiently exploring the configuration space and computing the free energy of large atomic and molecular systems is proposed, motivated by an analogy with reinforcement learning. There are two major components in this new approach. Like metadynamics, it allows for an efficient exploration of the configuration space by adding an adaptively computed biasing potential to the original dynamics. Like deep reinforcement learning, this biasing potential is trained on the fly using deep neural networks, with data collected judiciously from the exploration and an uncertainty indicator from the neural network model playing the role of the reward function. Parameterization using neural networks makes it feasible to handle cases with a large set of collective variables. This has the potential advantage that selecting precisely the right set of collective variables has now become less critical for capturing the structural transformations of the system. The method is illustrated by studying the full-atom explicit solvent models of alanine dipeptide and tripeptide, as well as the system of a polyalanine-10 molecule with 20 collective variables.

  13. Retrieval of very large numbers of items in the Web of Science: an exercise to develop accurate search strategies

    NARCIS (Netherlands)

    Arencibia-Jorge, R.; Leydesdorff, L.; Chinchilla-Rodríguez, Z.; Rousseau, R.; Paris, S.W.

    2009-01-01

    The Web of Science interface counts at most 100,000 retrieved items from a single query. If the query results in a dataset containing more than 100,000 items, the number of retrieved items is indicated only as >100,000. The problem studied here is how to find the exact number of items in a query that...
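
    One standard way to recover an exact count from a capped interface is to partition the query, for instance by publication year, until every slice falls below the cap, then sum the slices. A hedged Python sketch; the `count` callable and the `PY=` year-range syntax are stand-ins for whatever the interface actually exposes, not a documented API:

        def exact_count(query, count, year_lo=1900, year_hi=2009, cap=100_000):
            """Recursively split the publication-year range until each slice
            returns an exact figure below the cap, then sum. `count(q)` is a
            hypothetical helper returning the hit count, or None when the
            interface only reports ">100,000"."""
            q = f"{query} AND PY=({year_lo}-{year_hi})"
            n = count(q)
            if n is not None and n < cap:
                return n
            if year_lo == year_hi:
                raise ValueError(f"single year {year_lo} still exceeds the cap; "
                                 "split on another field (e.g., document type)")
            mid = (year_lo + year_hi) // 2
            return (exact_count(query, count, year_lo, mid, cap)
                    + exact_count(query, count, mid + 1, year_hi, cap))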

  14. Characteristics of Beverage Consumption Habits among a Large Sample of French Adults: Associations with Total Water and Energy Intakes

    Directory of Open Access Journals (Sweden)

    Fabien Szabo de Edelenyi

    2016-10-01

    Full Text Available Background: Adequate hydration is a key factor for correct functioning of both cognitive and physical processes. In France, public health recommendations about adequate total water intake (TWI) only state that fluid intake should be sufficient, with particular attention paid to hydration for seniors, especially during heatwave periods. The objective of this study was to calculate the total amount of water coming from food and beverages and to analyse characteristics of consumption in participants from a large French national cohort. Methods: TWI, as well as the contribution of food and beverages to TWI, was assessed among 94,939 adult participants in the Nutrinet-Santé cohort (78% women, mean age 42.9 (SE 0.04)) using three 24-h dietary records at baseline. Statistical differences in water intakes across age groups, seasons and days of the week were assessed. Results: The mean TWI was 2.3 L (SE 4.7) for men and 2.1 L (SE 2.4) for women. A majority of the sample did comply with the European Food Safety Authority (EFSA) adequate intake recommendation, especially women. Mean total energy intake (EI) was 1884 kcal/day (SE 1.5) (2250 kcal/day (SE 3.6) for men and 1783 kcal/day (SE 1.5) for women). The contribution to the total EI from beverages was 8.3%. Water was the most consumed beverage, followed by hot beverages. The variety score, defined as the number of different categories of beverages consumed during the three 24-h records out of a maximum of 8, was positively correlated with TWI (r = 0.4) and with EI (r = 0.2), suggesting that beverage variety is an indicator of higher consumption of food and drinks. We found differences in beverage consumption and water intakes according to age and seasonality. Conclusions: The present study gives an overview of the water intake characteristics in a large population of French adults. TWI was found to be globally in line with public health recommendations.

  15. Algorithm for computing significance levels using the Kolmogorov-Smirnov statistic and valid for both large and small samples

    Energy Technology Data Exchange (ETDEWEB)

    Kurtz, S.E.; Fields, D.E.

    1983-10-01

    The KSTEST code presented here is designed to perform the Kolmogorov-Smirnov one-sample test. The code may be used as a stand-alone program or the principal subroutines may be excerpted and used to service other programs. The Kolmogorov-Smirnov one-sample test is a nonparametric goodness-of-fit test. A number of codes to perform this test are in existence, but they suffer from the inability to provide meaningful results in the case of small sample sizes (number of values less than or equal to 80). The KSTEST code overcomes this inadequacy by using two distinct algorithms. If the sample size is greater than 80, an asymptotic series developed by Smirnov is evaluated. If the sample size is 80 or less, a table of values generated by Birnbaum is referenced. Valid results can be obtained from KSTEST when the sample contains from 3 to 300 data points. The program was developed on a Digital Equipment Corporation PDP-10 computer using the FORTRAN-10 language. The code size is approximately 450 card images and the typical CPU execution time is 0.19 s.
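
    A compact modern restatement of the two-regime logic in Python (a sketch, not the FORTRAN-10 code): evaluate Smirnov's asymptotic series when the sample size exceeds the cutoff, and otherwise fall back on the exact distribution, which here plays the role of Birnbaum's table.

        import numpy as np
        from scipy.stats import kstwo   # exact small-sample K-S distribution

        def ks_pvalue(d, n, asymptotic_cutoff=80):
            """Significance level for the one-sample two-sided K-S statistic d
            with sample size n. For n > cutoff, use Smirnov's asymptotic series
            P(sqrt(n)*D > lam) ~ 2 * sum_k (-1)^(k-1) exp(-2 k^2 lam^2)."""
            if n > asymptotic_cutoff:
                lam = np.sqrt(n) * d
                k = np.arange(1, 101)
                p = 2.0 * np.sum((-1.0) ** (k - 1) * np.exp(-2.0 * k**2 * lam**2))
                return float(min(1.0, max(0.0, p)))
            return float(kstwo.sf(d, n))

        print(ks_pvalue(0.10, 200))   # large-sample branch
        print(ks_pvalue(0.30, 20))    # exact small-sample branch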

  16. Patient-reported causes of heart failure in a large European sample

    DEFF Research Database (Denmark)

    Timmermans, Ivy; Denollet, Johan; Pedersen, Susanne S.

    2018-01-01

    Background: Patients diagnosed with chronic diseases develop perceptions about their disease and its causes, which may influence health behavior and emotional well-being. This is the first study to examine patient-reported causes and their correlates in patients with heart failure. Methods… Results: (…), psychosocial (35%, mainly (work-related) stress), and natural causes (32%, mainly heredity). There were socio-demographic, clinical and psychological group differences between the various categories, and large discrepancies between the prevalence of physical risk factors according to medical records and patient… distress (OR = 1.54, 95% CI = 0.94–2.51, p = 0.09), and behavioral causes and a less threatening view of heart failure (OR = 0.64, 95% CI = 0.40–1.01, p = 0.06). Conclusion: European patients most frequently reported comorbidities, smoking, stress, and heredity as heart failure causes, but their causal…

  17. A non-destructive technique for assigning effective atomic number to scientific samples by scattering of 59.54 keV gamma photons

    International Nuclear Information System (INIS)

    Singh, M.P.; Sharma, Amandeep; Singh, Bhajan; Sandhu, B.S.

    2010-01-01

    The objective of the present experiment, employing scattering of 59.54 keV gamma photons, is to assign an effective atomic number (Z_eff) to scientific samples (rare earths) of known composition. An HPGe semiconductor detector, placed at 90° to the incident beam, detects gamma photons scattered from the sample under investigation. The experiment is performed on various elements with atomic numbers satisfying 6 ≤ Z ≤ 82, for 59.54 keV incident photons. The intensity ratio of Rayleigh to Compton scattered peaks, corrected for the photo-peak efficiency of the gamma detector and the absorption of photons in the sample and air, is plotted as a function of atomic number and constitutes a best-fit curve. From this fit curve, the respective effective atomic numbers of the rare-earth samples are determined. The agreement of the measured values of Z_eff with theoretical calculations is quite satisfactory.
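
    A sketch of the assignment step: calibrate the corrected Rayleigh/Compton ratio against elements of known Z, fit a smooth curve, and invert it for an unknown sample. The calibration numbers and the power-law form below are placeholders, not values from the paper.

        import numpy as np

        # Placeholder calibration: corrected Rayleigh/Compton ratios for known-Z
        # elements at 59.54 keV (illustrative, monotonically increasing with Z)
        z_cal = np.array([6, 13, 26, 29, 50, 82], float)
        ratio_cal = np.array([0.01, 0.05, 0.21, 0.27, 0.80, 2.10], float)

        # The ratio rises smoothly with Z; fit ratio = a * Z**b in log-log space
        b, log_a = np.polyfit(np.log(z_cal), np.log(ratio_cal), 1)

        def z_eff(ratio_sample):
            """Invert the fitted curve to assign an effective atomic number."""
            return float(np.exp((np.log(ratio_sample) - log_a) / b))

        print(round(z_eff(0.5), 1))   # Z_eff interpolated for a measured ratio of 0.5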

  18. Insights into a spatially embedded social network from a large-scale snowball sample

    Science.gov (United States)

    Illenberger, J.; Kowald, M.; Axhausen, K. W.; Nagel, K.

    2011-12-01

    Much research has been conducted to obtain insights into the basic laws governing human travel behaviour. While the traditional travel survey has been for a long time the main source of travel data, recent approaches to use GPS data, mobile phone data, or the circulation of bank notes as a proxy for human travel behaviour are promising. The present study proposes a further source of such proxy-data: the social network. We collect data using an innovative snowball sampling technique to obtain details on the structure of a leisure-contacts network. We analyse the network with respect to its topology, the individuals' characteristics, and its spatial structure. We further show that a multiplication of the functions describing the spatial distribution of leisure contacts and the frequency of physical contacts results in a trip distribution that is consistent with data from the Swiss travel survey.

  19. The presentation and preliminary validation of KIWEST using a large sample of Norwegian university staff.

    Science.gov (United States)

    Innstrand, Siw Tone; Christensen, Marit; Undebakke, Kirsti Godal; Svarva, Kyrre

    2015-12-01

    The aim of the present paper is to present and validate the Knowledge-Intensive Work Environment Survey Target (KIWEST), a questionnaire developed for assessing psychosocial factors among people in knowledge-intensive work environments. The construct validity and reliability of the measurement model were tested on a representative sample of 3066 academic and administrative staff working at one of the largest universities in Norway. Confirmatory factor analysis provided initial support for the convergent validity and internal consistency of the 30-construct KIWEST measurement model. However, discriminant validity tests indicated that some of the constructs might overlap to some degree. Overall, the KIWEST measure showed promising psychometric properties as a psychosocial work environment measure. © 2015 the Nordic Societies of Public Health.

  20. Strategies and equipment for sampling suspended sediment and associated toxic chemicals in large rivers - with emphasis on the Mississippi River

    Science.gov (United States)

    Meade, R.H.; Stevens, H.H.

    1990-01-01

    A Lagrangian strategy for sampling large rivers, which was developed and tested in the Orinoco and Amazon Rivers of South America during the early 1980s, is now being applied to the study of toxic chemicals in the Mississippi River. A series of 15-20 cross-sections of the Mississippi mainstem and its principal tributaries is sampled by boat in downstream sequence, beginning upriver of St. Louis and concluding downriver of New Orleans 3 weeks later. The timing of the downstream sampling sequence approximates the travel time of the river water. Samples at each cross-section are discharge-weighted to provide concentrations of dissolved and suspended constituents that are converted to fluxes. Water-sediment mixtures are collected from 10-40 equally spaced points across the river width by sequential depth integration at a uniform vertical transit rate. Essential equipment includes (i) a hydraulic winch, for sensitive control of vertical transit rates, and (ii) a collapsible-bag sampler, which allows integrated samples to be collected at all depths in the river. A section is usually sampled in 4-8 h, for a total sample recovery of 100-120 l. Sampled concentrations of suspended silt and clay are reproducible within 3%.
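
    The discharge-weighting step reduces to a simple computation once the verticals have been sampled. A hedged Python sketch with illustrative numbers (not Mississippi data):

        import numpy as np

        def discharge_weighted(concs, discharges):
            """Discharge-weighted mean concentration and constituent flux for one
            cross-section sampled at several verticals. concs in mg/L (= g/m^3),
            discharges in m^3/s (the share of river discharge represented by
            each vertical)."""
            c = np.asarray(concs, float)
            q = np.asarray(discharges, float)
            flux_g_per_s = np.sum(c * q)          # g/m^3 * m^3/s = g/s
            mean_conc = flux_g_per_s / q.sum()    # discharge-weighted mean, mg/L
            return mean_conc, flux_g_per_s

        # Five equally spaced verticals across the section (illustrative numbers)
        print(discharge_weighted([210, 230, 250, 240, 220], [900, 1600, 2100, 1700, 1000]))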

  1. STUDY OF HOME DEMONSTRATION UNITS IN A SAMPLE OF 27 COUNTIES IN NEW YORK STATE, NUMBER 3.

    Science.gov (United States)

    ALEXANDER, FRANK D.; HARSHAW, JEAN

    AN EXPLORATORY STUDY EXAMINED CHARACTERISTICS OF 1,128 HOME DEMONSTRATION UNITS TO SUGGEST HYPOTHESES AND SCOPE FOR A MORE INTENSIVE STUDY OF A SMALL SAMPLE OF UNITS, AND TO PROVIDE GUIDANCE IN SAMPLING. DATA WERE OBTAINED FROM A SPECIALLY DESIGNED MEMBERSHIP CARD USED IN 1962. UNIT SIZE AVERAGED 23.6 MEMBERS BUT THE RANGE WAS FAIRLY GREAT. A NEED…

  2. The Brief Negative Symptom Scale (BNSS): Independent validation in a large sample of Italian patients with schizophrenia.

    Science.gov (United States)

    Mucci, A; Galderisi, S; Merlotti, E; Rossi, A; Rocca, P; Bucci, P; Piegari, G; Chieffi, M; Vignapiano, A; Maj, M

    2015-07-01

    The Brief Negative Symptom Scale (BNSS) was developed to address the main limitations of the existing scales for the assessment of negative symptoms of schizophrenia. The initial validation of the scale by the group involved in its development demonstrated good convergent and discriminant validity, and a factor structure confirming the two domains of negative symptoms (reduced emotional/verbal expression and anhedonia/asociality/avolition). However, only relatively small samples of patients with schizophrenia were investigated. Further independent validation in large clinical samples might be instrumental to the broad diffusion of the scale in clinical research. The present study aimed to examine the BNSS inter-rater reliability, convergent/discriminant validity and factor structure in a large Italian sample of outpatients with schizophrenia. Our results confirmed the excellent inter-rater reliability of the BNSS (the intraclass correlation coefficient ranged from 0.81 to 0.98 for individual items and was 0.98 for the total score). The convergent validity measures had r values from 0.62 to 0.77, while the divergent validity measures had r values from 0.20 to 0.28 in the main sample (n=912) and in a subsample without clinically significant levels of depression and extrapyramidal symptoms (n=496). The BNSS factor structure was supported in both groups. The study confirms that the BNSS is a promising measure for quantifying negative symptoms of schizophrenia in large multicenter clinical studies. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  3. "Best Practices in Using Large, Complex Samples: The Importance of Using Appropriate Weights and Design Effect Compensation"

    Directory of Open Access Journals (Sweden)

    Jason W. Osborne

    2011-09-01

    Full Text Available Large surveys often use probability sampling in order to obtain representative samples, and these data sets are valuable tools for researchers in all areas of science. Yet many researchers are not formally prepared to appropriately utilize these resources. Indeed, users of one popular dataset were generally found not to have modeled the analyses to take account of the complex sample (Johnson & Elliott, 1998), even when publishing in highly-regarded journals. It is well known that failure to appropriately model the complex sample can substantially bias the results of the analysis. Examples presented in this paper highlight the risk of errors of inference and mis-estimation of parameters from failure to analyze these data sets appropriately.
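
    As a concrete illustration of why ignoring the design misleads, the sketch below computes a probability-weighted mean together with Kish's approximate design effect; the data, weights, and the 1 + cv² approximation are illustrative assumptions, and production analyses should use survey software that models the strata and clusters directly.

        import numpy as np

        def weighted_mean_with_deff(y, w):
            """Probability-weighted mean plus Kish's approximate design effect,
            deff = 1 + cv(w)^2, and the resulting effective sample size. A rough
            check only; full variance estimation needs the strata/cluster design."""
            y, w = np.asarray(y, float), np.asarray(w, float)
            mean = np.sum(w * y) / np.sum(w)
            deff = 1.0 + np.var(w) / np.mean(w) ** 2   # 1 + squared CV of weights
            n_eff = len(w) / deff                      # unequal weights shrink n
            return mean, deff, n_eff

        rng = np.random.default_rng(1)
        y = rng.normal(50, 10, 1000)
        w = rng.uniform(0.2, 3.0, 1000)                # unequal selection weights
        print(weighted_mean_with_deff(y, w))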

  4. Large-scale Samples Irradiation Facility at the IBR-2 Reactor in Dubna

    CERN Document Server

    Cheplakov, A P; Golubyh, S M; Kaskanov, G Ya; Kulagin, E N; Kukhtin, V V; Luschikov, V I; Shabalin, E P; León-Florián, E; Leroy, C

    1998-01-01

    The irradiation facility at beam line no. 3 of the IBR-2 reactor of the Frank Laboratory for Neutron Physics is described. The facility is aimed at irradiation studies of various objects with areas up to 800 cm², both at cryogenic and ambient temperatures. The energy spectra of neutrons are reconstructed by the method of threshold detector activation. The neutron fluence and γ dose rates are measured by means of alanine and thermoluminescent dosimeters. The boron carbide and lead filters or (n/γ) converter provide beams with different ratios of doses induced by neutrons and photons. For the lead filter, the flux of fast neutrons with energy above 0.1 MeV is 1.4 × 10¹⁰ n cm⁻² s⁻¹ and the neutron dose is about 96% of the total radiation dose. For the (n/γ) converter, the γ dose rate is ~500 Gy h⁻¹, which is about 85% of the total dose. The radiation hardness tests of GaAs electronics and materials for the ATLAS detector to be put into operation at the Large Hadron Collider…

  5. Predicting violence and recidivism in a large sample of males on probation or parole.

    Science.gov (United States)

    Prell, Lettie; Vitacco, Michael J; Zavodny, Denis

    This study evaluated the utility of items and scales from the Iowa Violence and Victimization Instrument in a sample of 1,961 males from the state of Iowa who were on probation or released from prison to parole supervision. This is the first study to examine the potential of the Iowa Violence and Victimization Instrument to predict criminal offenses. The males were followed for 30 months immediately following their admission to probation or parole. AUC analyses indicated fair to good predictive power for the Iowa Violence and Victimization Instrument for charges of violence and victimization, but only chance-level predictive power for drug offenses. Notably, both scales of the instrument performed equally well at the 30-month follow-up. Items on the Iowa Violence and Victimization Instrument not only predicted violence, but are also straightforward to score. Violence management strategies are discussed as they relate to the current findings, including the potential to expand the measure to other jurisdictions and populations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. An empirical investigation of incompleteness in a large clinical sample of obsessive compulsive disorder.

    Science.gov (United States)

    Sibrava, Nicholas J; Boisseau, Christina L; Eisen, Jane L; Mancebo, Maria C; Rasmussen, Steven A

    2016-08-01

    Obsessive Compulsive Disorder (OCD) is a disorder with heterogeneous clinical presentations. To advance our understanding of this heterogeneity, we investigated the prevalence and clinical features associated with incompleteness (INC), a putative underlying core feature of OCD. We predicted INC would be prominent in individuals with OCD and associated with greater severity and impairment. We examined the impact of INC in 307 adults with primary OCD. Participants with clinically significant INC (22.8% of the sample) had significantly greater OCD severity, greater rates of comorbidity, poorer ratings of functioning, lower quality of life, and higher rates of unemployment and disability. Participants with clinically significant INC were also more likely to be diagnosed with OCPD and to endorse symmetry/exactness obsessions and ordering/arranging compulsions than those who reported low INC. Our findings provide evidence that INC is associated with greater severity, comorbidity, and impairment, highlighting the need for improved assessment and treatment of INC in OCD. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Analysis of Three Compounds in Flos Farfarae by Capillary Electrophoresis with Large-Volume Sample Stacking

    Directory of Open Access Journals (Sweden)

    Hai-xia Yu

    2017-01-01

    Full Text Available The aim of this study was to develop a method combining online concentration and high-efficiency capillary electrophoresis separation to analyze and detect three compounds (rutin, hyperoside, and chlorogenic acid) in Flos Farfarae. In order to obtain good resolution and enrichment, several parameters such as the choice of running buffer, pH and concentration of the running buffer, organic modifier, temperature, and separation voltage were all investigated. The optimized conditions were as follows: a buffer of 40 mM NaH2PO4-40 mM borax-30% v/v methanol (pH 9.0); sample hydrodynamic injection of up to 4 s at 0.5 psi; 20 kV applied voltage. A diode-array detector was used, and the detection wavelength was 364 nm. Based on peak area, higher levels of selective and sensitive improvements in analysis were observed, and about 14-, 26-, and 5-fold enrichments of rutin, hyperoside, and chlorogenic acid were achieved, respectively. This method was successfully applied to determine the three compounds in Flos Farfarae. The linear ranges of peak response versus concentration were 20 to 400 µg/mL, 16.5 to 330 µg/mL, and 25 to 500 µg/mL, respectively. The regression coefficients were 0.9998, 0.9999, and 0.9991, respectively.

  8. Adverse Childhood Environment: Relationship With Sexual Risk Behaviors and Marital Status in a Large American Sample.

    Science.gov (United States)

    Anderson, Kermyt G

    2017-01-01

    A substantial theoretical and empirical literature suggests that stressful events in childhood influence the timing and patterning of subsequent sexual and reproductive behaviors. Stressful childhood environments have been predicted to produce a life history strategy in which adults are oriented more toward short-term mating behaviors and less toward behaviors consistent with longevity. This article tests the hypothesis that adverse childhood environment will predict adult outcomes in two areas: risky sexual behavior (engagement in sexual risk behavior or having taken an HIV test) and marital status (currently married vs. never married, divorced, or a member of an unmarried couple). Data come from the Behavioral Risk Factor Surveillance System. The sample contains 17,530 men and 23,978 women aged 18-54 years living in 13 U.S. states plus the District of Columbia. Adverse childhood environment is assessed through 11 retrospective measures of childhood environment, including having grown up with someone who was depressed or mentally ill, who was an alcoholic, who used or abused drugs, or who served time in prison; whether one's parents divorced in childhood; and two scales measuring childhood exposure to violence and to sexual trauma. The results indicate that adverse childhood environment is associated with increased likelihood of engaging in sexual risk behaviors or taking an HIV test, and increased likelihood of being in an unmarried couple or divorced/separated, for both men and women. The predictions are supported by the data, lending further support to the hypothesis that childhood environments influence adult reproductive strategy.

  9. A Principal Component Analysis of Galaxy Properties from a Large, Gas-Selected Sample

    Directory of Open Access Journals (Sweden)

    Yu-Yen Chang

    2012-01-01

    …concluded that this is in conflict with the ΛCDM model. Considering the importance of the issue, we reinvestigate the problem using principal component analysis on a fivefold larger sample and additional near-infrared data. We use databases from the Arecibo Legacy Fast ALFA (Arecibo L-band Feed Array) Survey for the gas properties, the Sloan Digital Sky Survey for the optical properties, and the Two Micron All Sky Survey for the near-infrared properties. We confirm that the parameters are indeed correlated, where a single physical parameter can explain 83% of the variations. When color (g−i) is included, the first component still dominates but a second principal component develops. In addition, the near-infrared color (i−J) shows an obvious second principal component that might provide evidence of complex old star formation. Based on our data, we suggest that it is premature to pronounce the failure of the ΛCDM model, and this motivates more theoretical work.
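
    A minimal sketch of the variance-decomposition step (illustrative only; the toy data below stand in for the survey measurements): standardize the property matrix and read off the fraction of variance carried by each principal component, mirroring the paper's finding that a single component carries 83%.

        import numpy as np

        def pca_variance_explained(X):
            """Standardize the galaxy-property matrix (rows = galaxies, columns =
            properties) and return the fraction of total variance carried by each
            principal component, in descending order."""
            Z = (X - X.mean(axis=0)) / X.std(axis=0)
            cov = np.cov(Z, rowvar=False)
            eigvals = np.linalg.eigvalsh(cov)[::-1]   # eigenvalues, descending
            return eigvals / eigvals.sum()

        # Toy data: 500 "galaxies", 5 correlated properties driven by one latent factor
        rng = np.random.default_rng(0)
        latent = rng.normal(size=(500, 1))
        X = latent @ rng.uniform(0.5, 1.5, (1, 5)) + 0.3 * rng.normal(size=(500, 5))
        print(pca_variance_explained(X))    # first component dominates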

  10. Efficient inference of population size histories and locus-specific mutation rates from large-sample genomic variation data.

    Science.gov (United States)

    Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S

    2015-02-01

    With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.
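
    The constant-population-size case makes the Poisson-likelihood idea concrete. Below is a hedged Python toy, not the authors' method (which fits piecewise-exponential histories with exact gradients from automatic differentiation): for constant size, the expected frequency spectrum is ξ_i = θ/i, and the Poisson maximum-likelihood estimate of θ has a closed form (Watterson's estimator).

        import numpy as np

        def fit_theta(sfs):
            """Poisson ML fit of theta for a constant-size population, where the
            expected spectrum is xi_i = theta / i for i = 1..n-1. Maximizing
            sum_i [ s_i * log(theta/i) - theta/i ] gives theta = S / H_{n-1}."""
            sfs = np.asarray(sfs, float)           # sfs[i-1] = sites at frequency i
            i = np.arange(1, len(sfs) + 1)
            return sfs.sum() / np.sum(1.0 / i)     # segregating sites / harmonic number

        # SFS simulated from the model itself (theta = 40, n = 21 haplotypes)
        rng = np.random.default_rng(2)
        freqs = np.arange(1, 21)
        print(fit_theta(rng.poisson(40.0 / freqs)))   # recovers roughly 40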

  11. Does higher education hone cognitive functioning and learning efficacy? Findings from a large and diverse sample

    Science.gov (United States)

    Guerra-Carrillo, Belén; Katovich, Kiefer

    2017-01-01

    Attending school is a multifaceted experience. Students are not only exposed to new knowledge but are also immersed in a structured environment in which they need to respond flexibly in accordance with changing task goals, keep relevant information in mind, and constantly tackle novel problems. To quantify the cumulative effect of this experience, we examined retrospectively and prospectively, the relationships between educational attainment and both cognitive performance and learning. We analyzed data from 196,388 subscribers to an online cognitive training program. These subscribers, ages 15–60, had completed eight behavioral assessments of executive functioning and reasoning at least once. Controlling for multiple demographic and engagement variables, we found that higher levels of education predicted better performance across the full age range, and modulated performance in some cognitive domains more than others (e.g., reasoning vs. processing speed). Differences were moderate for Bachelor’s degree vs. High School (d = 0.51), and large between Ph.D. vs. Some High School (d = 0.80). Further, the ages of peak cognitive performance for each educational category closely followed the typical range of ages at graduation. This result is consistent with a cumulative effect of recent educational experiences, as well as a decrement in performance as completion of schooling becomes more distant. To begin to characterize the directionality of the relationship between educational attainment and cognitive performance, we conducted a prospective longitudinal analysis. For a subset of 69,202 subscribers who had completed 100 days of cognitive training, we tested whether the degree of novel learning was associated with their level of education. Higher educational attainment predicted bigger gains, but the differences were small (d = 0.04–0.37). Altogether, these results point to the long-lasting trace of an effect of prior cognitive challenges but suggest that new

  12. Does higher education hone cognitive functioning and learning efficacy? Findings from a large and diverse sample.

    Science.gov (United States)

    Guerra-Carrillo, Belén; Katovich, Kiefer; Bunge, Silvia A

    2017-01-01

    Attending school is a multifaceted experience. Students are not only exposed to new knowledge but are also immersed in a structured environment in which they need to respond flexibly in accordance with changing task goals, keep relevant information in mind, and constantly tackle novel problems. To quantify the cumulative effect of this experience, we examined retrospectively and prospectively, the relationships between educational attainment and both cognitive performance and learning. We analyzed data from 196,388 subscribers to an online cognitive training program. These subscribers, ages 15-60, had completed eight behavioral assessments of executive functioning and reasoning at least once. Controlling for multiple demographic and engagement variables, we found that higher levels of education predicted better performance across the full age range, and modulated performance in some cognitive domains more than others (e.g., reasoning vs. processing speed). Differences were moderate for Bachelor's degree vs. High School (d = 0.51), and large between Ph.D. vs. Some High School (d = 0.80). Further, the ages of peak cognitive performance for each educational category closely followed the typical range of ages at graduation. This result is consistent with a cumulative effect of recent educational experiences, as well as a decrement in performance as completion of schooling becomes more distant. To begin to characterize the directionality of the relationship between educational attainment and cognitive performance, we conducted a prospective longitudinal analysis. For a subset of 69,202 subscribers who had completed 100 days of cognitive training, we tested whether the degree of novel learning was associated with their level of education. Higher educational attainment predicted bigger gains, but the differences were small (d = 0.04-0.37). Altogether, these results point to the long-lasting trace of an effect of prior cognitive challenges but suggest that new learning

  13. Rayleigh- and Prandtl-number dependence of the large-scale flow-structure in weakly-rotating turbulent thermal convection

    Science.gov (United States)

    Weiss, Stephan; Wei, Ping; Ahlers, Guenter

    2015-11-01

    Turbulent thermal convection under rotation shows a remarkable variety of different flow states. The Nusselt number (Nu) at slow rotation rates (expressed as the dimensionless inverse Rossby number 1/Ro), for example, is not a monotonic function of 1/Ro. Different 1/Ro ranges can be observed with different slopes ∂Nu/∂(1/Ro). Some of these ranges are connected by sharp transitions where ∂Nu/∂(1/Ro) changes discontinuously. We investigate different regimes in cylindrical samples of aspect ratio Γ = 1 by measuring temperatures at the sidewall of the sample for various Prandtl numbers in the range Pr ≥ 3. Supported by the Deutsche Forschungsgemeinschaft.

  14. Racialized risk environments in a large sample of people who inject drugs in the United States.

    Science.gov (United States)

    Cooper, Hannah L F; Linton, Sabriya; Kelley, Mary E; Ross, Zev; Wolfe, Mary E; Chen, Yen-Tyng; Zlotorzynska, Maria; Hunter-Jones, Josalin; Friedman, Samuel R; Des Jarlais, Don; Semaan, Salaam; Tempalski, Barbara; DiNenno, Elizabeth; Broz, Dita; Wejnert, Cyprian; Paz-Bailey, Gabriela

    2016-01-01

    Substantial racial/ethnic disparities exist in HIV infection among people who inject drugs (PWID) in many countries. To strengthen efforts to understand the causes of disparities in HIV-related outcomes and eliminate them, we expand the "Risk Environment Model" to encompass the construct "racialized risk environments," and investigate whether PWID risk environments in the United States are racialized. Specifically, we investigate whether black and Latino PWID are more likely than white PWID to live in places that create vulnerability to adverse HIV-related outcomes. As part of the Centers for Disease Control and Prevention's National HIV Behavioral Surveillance, 9170 PWID were sampled from 19 metropolitan statistical areas (MSAs) in 2009. Self-reported data were used to ascertain PWID race/ethnicity. Using Census data and other administrative sources, we characterized features of PWID risk environments at four geographic scales (i.e., ZIP codes, counties, MSAs, and states). Means for each feature of the risk environment were computed for each racial/ethnic group of PWID, and were compared across racial/ethnic groups. Almost universally across measures, black PWID were more likely than white PWID to live in environments associated with vulnerability to adverse HIV-related outcomes. Compared to white PWID, black PWID lived in ZIP codes with higher poverty rates and worse spatial access to substance abuse treatment and in counties with higher violent crime rates. Black PWID were less likely to live in states with laws facilitating sterile syringe access (e.g., laws permitting over-the-counter syringe sales). Latino/white differences in risk environments emerged at the MSA level (e.g., Latino PWID lived in MSAs with higher drug-related arrest rates). PWID risk environments in the US are racialized. Future research should explore the implications of this racialization for racial/ethnic disparities in HIV-related outcomes, using appropriate methods. Copyright © 2015

  15. Core belief content examined in a large sample of patients using online cognitive behaviour therapy.

    Science.gov (United States)

    Millings, Abigail; Carnelley, Katherine B

    2015-11-01

    Computerised cognitive behavioural therapy provides a unique opportunity to collect and analyse data regarding the idiosyncratic content of people's core beliefs about the self, others and the world. 'Beating the Blues' users recorded a core belief derived through the downward arrow technique. Core beliefs from 1813 mental health patients were coded into 10 categories. The most common were global self-evaluation, attachment, and competence. Women were more likely, and men were less likely (than chance), to provide an attachment-related core belief; and men were more likely, and women less likely, to provide a self-competence-related core belief. This may be linked to gender differences in sources of self-esteem. Those who were suffering from anxiety were more likely to provide power- and control-themed core beliefs and less likely to provide attachment core beliefs than chance. Finally, those who had thoughts of suicide in the preceding week reported less competence themed core beliefs and more global self-evaluation (e.g., 'I am useless') core beliefs than chance. Concurrent symptom level was not available. The sample was not nationally representative, and featured programme completers only. Men and women may focus on different core beliefs in the context of CBT. Those suffering anxiety may need a therapeutic focus on power and control. A complete rejection of the self (not just within one domain, such as competence) may be linked to thoughts of suicide. Future research should examine how individual differences and symptom severity influence core beliefs. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  16. Methodology for Quantitative Analysis of Large Liquid Samples with Prompt Gamma Neutron Activation Analysis using Am-Be Source

    International Nuclear Information System (INIS)

    Idiri, Z.; Mazrou, H.; Beddek, S.; Amokrane, A.

    2009-01-01

    An optimized set-up for prompt gamma neutron activation analysis (PGNAA) with an Am-Be source is described and used for large liquid sample analysis. A methodology for quantitative analysis is proposed: it consists of normalizing the prompt gamma count rates with thermal neutron flux measurements carried out with a He-3 detector and gamma attenuation factors calculated using MCNP-5. Both relative and absolute methods are considered. This methodology is then applied to the determination of cadmium in industrial phosphoric acid. The same sample is then analyzed by the inductively coupled plasma (ICP) method. Our results are in good agreement with those obtained with the ICP method.
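
    A sketch of the relative method described above (illustrative only, not the paper's exact expression): normalize each prompt-gamma count rate by the measured thermal flux and the MCNP-computed attenuation factor, then refer the sample to a standard of known concentration. All numbers below are hypothetical.

        def concentration_relative(rate_sample, rate_std, conc_std,
                                   flux_sample, flux_std,
                                   attn_sample, attn_std):
            """Relative PGNAA quantification: count rates normalized by thermal
            flux and gamma attenuation, referred to a standard of known
            concentration."""
            norm_sample = rate_sample / (flux_sample * attn_sample)
            norm_std = rate_std / (flux_std * attn_std)
            return conc_std * norm_sample / norm_std

        # Hypothetical inputs: Cd line rates (cps), He-3 flux-monitor readings,
        # and attenuation factors from MCNP-5
        print(concentration_relative(rate_sample=12.4, rate_std=25.1, conc_std=10.0,
                                     flux_sample=0.95, flux_std=1.00,
                                     attn_sample=0.82, attn_std=0.88))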

  17. Experimental observation of pulsating instability under acoustic field in downward-propagating flames at large Lewis number

    KAUST Repository

    Yoon, Sung Hwan

    2017-10-12

    According to previous theory, pulsating propagation in a premixed flame only appears when the reduced Lewis number, β(Le−1), is larger than a critical value (Sivashinsky criterion: 4(1 + √3) ≈ 11), where β represents the Zel'dovich number (for general premixed flames, β ≈ 10), which requires a Lewis number Le > 2.1. However, few experimental observations have been reported because the critical reduced Lewis number for the onset of pulsating instability is beyond what can be reached in experiments. Furthermore, the coupling with the unavoidable hydrodynamic instability limits the observation of pure pulsating instabilities in flames. Here, we describe a novel method to observe the pulsating instability. We utilize a thermoacoustic field caused by interaction between heat release and acoustic pressure fluctuations of the downward-propagating premixed flames in a tube to enhance conductive heat loss at the tube wall and radiative heat loss at the open end of the tube due to extended flame residence time by diminished flame surface area, i.e., a flat flame. The thermoacoustic field allowed pure observation of the pulsating motion since the primary acoustic force suppressed the intrinsic hydrodynamic instability resulting from thermal expansion. By employing this method, we have provided new experimental observations of the pulsating instability for premixed flames. The Lewis number (i.e., Le ≈ 1.86) was less than the critical value suggested previously.
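
    Spelled out, the criterion quoted above reproduces the Le > 2.1 threshold for β ≈ 10 (a worked restatement of the text, in LaTeX):

        \beta\,(\mathrm{Le}-1) > 4\bigl(1+\sqrt{3}\bigr) \approx 10.93
        \quad\Longrightarrow\quad
        \mathrm{Le} > 1 + \frac{4\bigl(1+\sqrt{3}\bigr)}{\beta}
                    \approx 1 + \frac{10.93}{10} \approx 2.09 .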

  18. Prevalence of overweight and obesity in a large clinical sample of children with autism.

    Science.gov (United States)

    Broder-Fingert, Sarabeth; Brazauskas, Karissa; Lindgren, Kristen; Iannuzzi, Dorothea; Van Cleave, Jeanne

    2014-01-01

    Overweight and obesity are major pediatric public health problems in the United States; however, limited data exist on the prevalence and correlates of overnutrition in children with autism. Through a large integrated health care system's patient database, we identified 6672 children ages 2 to 20 years with an assigned ICD-9 code of autism (299.0), Asperger syndrome (299.8), and control subjects from 2008 to 2011 who had at least 1 weight and height recorded in the same visit. We calculated age-adjusted, sex-adjusted body mass index and classified children as overweight (body mass index 85th to 95th percentile) or obese (≥ 95th percentile). We used multinomial logistic regression to compare the odds of overweight and obesity between groups. We then used logistic regression to evaluate factors associated with overweight and obesity in children with autism, including demographic and clinical characteristics. Compared to control subjects, children with autism and Asperger syndrome had significantly higher odds of overweight (odds ratio, 95% confidence interval: autism 2.24, 1.74-2.88; Asperger syndrome 1.49, 1.12-1.97) and obesity (autism 4.83, 3.85-6.06; Asperger syndrome 5.69, 4.50-7.21). Among children with autism, we found a higher odds of obesity in older children (aged 12-15 years 1.87, 1.33-2.63; aged 16-20 years 1.94, 1.39-2.71) compared to children aged 6 to 11 years. We also found higher odds of overweight and obesity in those with public insurance (overweight 1.54, 1.25-1.89; obese 1.16, 1.02-1.40) and with co-occurring sleep disorder (obese 1.23, 1.00-1.53). Children with autism and Asperger syndrome had significantly higher odds of overweight and obesity than control subjects. Older age, public insurance, and co-occurring sleep disorder were associated with overweight or obesity in this population. Copyright © 2014 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.
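
    The study's comparisons use multinomial logistic regression with covariate adjustment; as a simplified, hypothetical illustration of the underlying odds-ratio arithmetic, an unadjusted 2x2 odds ratio with a Wald 95% confidence interval can be computed as follows (the counts are invented, not the study's data):

      import math

      def odds_ratio_ci(a, b, c, d, z=1.96):
          """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
          a/b = cases/non-cases in group 1, c/d = cases/non-cases in group 2."""
          or_ = (a * d) / (b * c)
          se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
          lo = math.exp(math.log(or_) - z * se)
          hi = math.exp(math.log(or_) + z * se)
          return or_, lo, hi

      # Hypothetical counts: obese vs. non-obese, autism group vs. controls.
      or_, lo, hi = odds_ratio_ci(320, 880, 150, 1990)
      print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")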

  19. The Pemberton Happiness Index: Validation of the Universal Portuguese version in a large Brazilian sample.

    Science.gov (United States)

    Paiva, Bianca Sakamoto Ribeiro; de Camargos, Mayara Goulart; Demarzo, Marcelo Marcos Piva; Hervás, Gonzalo; Vázquez, Carmelo; Paiva, Carlos Eduardo

    2016-09-01

    The Pemberton Happiness Index (PHI) is a recently developed integrative measure of well-being that includes components of hedonic, eudaimonic, social, and experienced well-being. The PHI has been validated in several languages, but not in Portuguese. Our aim was to cross-culturally adapt the Universal Portuguese version of the PHI and to assess its psychometric properties in a sample of the Brazilian population using online surveys. An expert committee evaluated 2 versions of the PHI previously translated into Portuguese by the original authors, using a standardized form for assessment of semantic/idiomatic, cultural, and conceptual equivalence. Pretesting was conducted employing cognitive debriefing methods. The expert committee then evaluated all documents and agreed on a final Universal Portuguese PHI version. For the evaluation of the psychometric properties, the data were collected using online surveys in a cross-sectional study. The study population included healthcare professionals and users of the social network site Facebook from several Brazilian geographic areas. In addition to the PHI, participants completed the Satisfaction with Life Scale (SWLS), Diener and Emmons' Positive and Negative Experience Scale (PNES), Psychological Well-being Scale (PWS), and the Subjective Happiness Scale (SHS). Internal consistency, convergent validity, known-group validity, and test-retest reliability were evaluated. Satisfaction with the previous day was correlated with the 10 items assessing experienced well-being using the Cramer V test. Additionally, a cut-off value of the PHI to identify a "happy individual" was defined using receiver-operating characteristic (ROC) curve methodology. Data from 1035 Brazilian participants were analyzed (health professionals = 180; Facebook users = 855). Regarding reliability results, the internal consistency (Cronbach alpha = 0.890 and 0.914) and test-retest (intraclass correlation coefficient = 0.814) were both considered
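
    A common way to define such a cut-off is the Youden index on the ROC curve; a minimal sketch on synthetic scores (the study's actual criterion variable and decision rule may differ):

      import numpy as np
      from sklearn.metrics import roc_curve

      # Synthetic data: 1 = "happy individual" by an external criterion.
      y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 1])
      phi    = np.array([4.2, 5.0, 7.1, 8.3, 5.5, 6.9, 9.0, 4.8, 7.8, 8.8])

      fpr, tpr, thresholds = roc_curve(y_true, phi)
      cutoff = thresholds[np.argmax(tpr - fpr)]  # Youden's J = sens + spec - 1
      print(f"PHI cut-off: {cutoff}")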

  20. Similar brain activation during false belief tasks in a large sample of adults with and without autism.

    Science.gov (United States)

    Dufour, Nicholas; Redcay, Elizabeth; Young, Liane; Mavros, Penelope L; Moran, Joseph M; Triantafyllou, Christina; Gabrieli, John D E; Saxe, Rebecca

    2013-01-01

    Reading about another person's beliefs engages 'Theory of Mind' processes and elicits highly reliable brain activation across individuals and experimental paradigms. Using functional magnetic resonance imaging, we examined activation during a story task designed to elicit Theory of Mind processing in a very large sample of neurotypical (N = 462) individuals, and a group of high-functioning individuals with autism spectrum disorders (N = 31), using both region-of-interest and whole-brain analyses. This large sample allowed us to investigate group differences in brain activation to Theory of Mind tasks with unusually high sensitivity. There were no differences between neurotypical participants and those diagnosed with autism spectrum disorder. These results imply that the social cognitive impairments typical of autism spectrum disorder can occur without measurable changes in the size, location or response magnitude of activity during explicit Theory of Mind tasks administered to adults.

  1. Similar brain activation during false belief tasks in a large sample of adults with and without autism.

    Directory of Open Access Journals (Sweden)

    Nicholas Dufour

    Full Text Available Reading about another person's beliefs engages 'Theory of Mind' processes and elicits highly reliable brain activation across individuals and experimental paradigms. Using functional magnetic resonance imaging, we examined activation during a story task designed to elicit Theory of Mind processing in a very large sample of neurotypical (N = 462) individuals, and a group of high-functioning individuals with autism spectrum disorders (N = 31), using both region-of-interest and whole-brain analyses. This large sample allowed us to investigate group differences in brain activation to Theory of Mind tasks with unusually high sensitivity. There were no differences between neurotypical participants and those diagnosed with autism spectrum disorder. These results imply that the social cognitive impairments typical of autism spectrum disorder can occur without measurable changes in the size, location or response magnitude of activity during explicit Theory of Mind tasks administered to adults.

  2. Engineering task plan for upgrades to the leveling jacks on core sample trucks number 3 and 4; TOPICAL

    International Nuclear Information System (INIS)

    KOSTELNIK, A.J.

    1999-01-01

    Characterizing the waste in underground storage tanks at the Hanford Site is accomplished by obtaining a representative core sample for analysis. Core sampling is one of the numerous techniques that have been developed for use given the environmental and field conditions at the Hanford Site. Core sampling is currently accomplished using either Push Mode Core Sample Truck No. 1 or Rotary Mode Core Sample Trucks No. 2, 3 or 4. Past analysis (WHC 1994) has indicated that the Core Sample Truck (CST) leveling jacks are structurally inadequate when lateral loads are applied. WHC 1994 identifies many areas where failure could occur. All these failures are based on exceeding the allowable stresses listed in the American Institute of Steel Construction (AISC) code. The mode of failure is for the outrigger attachments to the truck frame to fail, resulting in dropping of the CST and possible overturning (Ref. Ziada and Hundal, 1996). Out-of-level deployment of the truck can exceed the code-allowable stresses in the structure. Calculations have been performed to establish limits for maintaining the truck level when lifting. The calculations and the associated limits are included in Appendix A. The need for future operations of the CSTs is limited; sampling is expected to be complete in FY-2001. Since there is limited time at risk for continued use of the CSTs with the leveling controls without correcting the structural problems, there are several design changes that could give incremental improvements to the operational safety of the CSTs with limited impact on available operating time. The improvements focus on making the truck easier to control during lifting and leveling. Not all of the tasks identified in this ETP need to be performed; each task alone can improve safety. This engineering task plan is the management plan document for implementing the necessary additional structural analysis. Any additional changes to meet requirements of standing orders shall require a

  3. The Limits and Possibilities of International Large-Scale Assessments. Education Policy Brief. Volume 9, Number 2, Spring 2011

    Science.gov (United States)

    Rutkowski, David J.; Prusinski, Ellen L.

    2011-01-01

    The staff of the Center for Evaluation & Education Policy (CEEP) at Indiana University is often asked about how international large-scale assessments influence U.S. educational policy. This policy brief is designed to provide answers to some of the most frequently asked questions encountered by CEEP researchers concerning the three most popular…

  4. Investigation into impacts of large numbers of visitors on the collection environment at Our Lord in the Attic

    NARCIS (Netherlands)

    Maekawa, S.; Ankersmit, Bart; Neuhaus, E.; Schellen, H.L.; Beltran, V.; Boersma, F.; Padfield, T.; Borchersen, K.

    2007-01-01

    Our Lord in the Attic is a historic house museum located in the historic center of Amsterdam, The Netherlands. It is a typical 17th century Dutch canal house, with a hidden Church in the attic. The Church was used regularly until 1887 when the house became a museum. The annual total number of

  5. A Few Large Roads or Many Small Ones? How to Accommodate Growth in Vehicle Numbers to Minimise Impacts on Wildlife

    Science.gov (United States)

    Rhodes, Jonathan R.; Lunney, Daniel; Callaghan, John; McAlpine, Clive A.

    2014-01-01

    Roads and vehicular traffic are among the most pervasive of threats to biodiversity because they fragment habitat, increase mortality and open up new areas for the exploitation of natural resources. However, the number of vehicles on roads is increasing rapidly and this is likely to continue into the future, putting increased pressure on wildlife populations. Consequently, a major challenge is the planning of road networks to accommodate increased numbers of vehicles, while minimising impacts on wildlife. Nonetheless, we currently have few principles for guiding decisions on road network planning to reduce impacts on wildlife in real landscapes. We addressed this issue by developing an approach for quantifying the impact on wildlife mortality of two alternative mechanisms for accommodating growth in vehicle numbers: (1) increasing the number of roads, and (2) increasing traffic volumes on existing roads. We applied this approach to a koala (Phascolarctos cinereus) population in eastern Australia and quantified the relative impact of each strategy on mortality. We show that, in most cases, accommodating growth in traffic through increases in volumes on existing roads has a lower impact than building new roads. An exception is where the existing road network has very low road density, but very high traffic volumes on each road. These findings have important implications for how we design road networks to reduce their impacts on biodiversity. PMID:24646891

  6. A few large roads or many small ones? How to accommodate growth in vehicle numbers to minimise impacts on wildlife.

    Directory of Open Access Journals (Sweden)

    Jonathan R Rhodes

    Full Text Available Roads and vehicular traffic are among the most pervasive of threats to biodiversity because they fragment habitat, increase mortality and open up new areas for the exploitation of natural resources. However, the number of vehicles on roads is increasing rapidly and this is likely to continue into the future, putting increased pressure on wildlife populations. Consequently, a major challenge is the planning of road networks to accommodate increased numbers of vehicles, while minimising impacts on wildlife. Nonetheless, we currently have few principles for guiding decisions on road network planning to reduce impacts on wildlife in real landscapes. We addressed this issue by developing an approach for quantifying the impact on wildlife mortality of two alternative mechanisms for accommodating growth in vehicle numbers: (1) increasing the number of roads, and (2) increasing traffic volumes on existing roads. We applied this approach to a koala (Phascolarctos cinereus) population in eastern Australia and quantified the relative impact of each strategy on mortality. We show that, in most cases, accommodating growth in traffic through increases in volumes on existing roads has a lower impact than building new roads. An exception is where the existing road network has very low road density, but very high traffic volumes on each road. These findings have important implications for how we design road networks to reduce their impacts on biodiversity.

  7. A few large roads or many small ones? How to accommodate growth in vehicle numbers to minimise impacts on wildlife.

    Science.gov (United States)

    Rhodes, Jonathan R; Lunney, Daniel; Callaghan, John; McAlpine, Clive A

    2014-01-01

    Roads and vehicular traffic are among the most pervasive of threats to biodiversity because they fragment habitat, increase mortality and open up new areas for the exploitation of natural resources. However, the number of vehicles on roads is increasing rapidly and this is likely to continue into the future, putting increased pressure on wildlife populations. Consequently, a major challenge is the planning of road networks to accommodate increased numbers of vehicles, while minimising impacts on wildlife. Nonetheless, we currently have few principles for guiding decisions on road network planning to reduce impacts on wildlife in real landscapes. We addressed this issue by developing an approach for quantifying the impact on wildlife mortality of two alternative mechanisms for accommodating growth in vehicle numbers: (1) increasing the number of roads, and (2) increasing traffic volumes on existing roads. We applied this approach to a koala (Phascolarctos cinereus) population in eastern Australia and quantified the relative impact of each strategy on mortality. We show that, in most cases, accommodating growth in traffic through increases in volumes on existing roads has a lower impact than building new roads. An exception is where the existing road network has very low road density, but very high traffic volumes on each road. These findings have important implications for how we design road networks to reduce their impacts on biodiversity.

  8. Small genomes and large seeds: chromosome numbers, genome size and seed mass in diploid Aesculus species (Sapindaceae)

    Czech Academy of Sciences Publication Activity Database

    Krahulcová, Anna; Trávníček, Pavel; Krahulec, František; Rejmánek, M.

    2017-01-01

    Vol. 119, No. 6 (2017), pp. 957-964 ISSN 0305-7364 Institutional support: RVO:67985939 Keywords: Aesculus * chromosome number * genome size * phylogeny * seed mass Subject RIV: EF - Botanics OECD field: Plant sciences, botany Impact factor: 4.041, year: 2016

  9. CORRELATION ANALYSIS OF A LARGE SAMPLE OF NARROW-LINE SEYFERT 1 GALAXIES: LINKING CENTRAL ENGINE AND HOST PROPERTIES

    International Nuclear Information System (INIS)

    Xu Dawei; Komossa, S.; Wang Jing; Yuan Weimin; Zhou Hongyan; Lu Honglin; Li Cheng; Grupe, Dirk

    2012-01-01

    We present a statistical study of a large, homogeneously analyzed sample of narrow-line Seyfert 1 (NLS1) galaxies, accompanied by a comparison sample of broad-line Seyfert 1 (BLS1) galaxies. Optical emission-line and continuum properties are subjected to correlation analyses, in order to identify the main drivers of the correlation space of active galactic nuclei (AGNs), and of NLS1 galaxies in particular. For the first time, we have established the density of the narrow-line region as a key parameter in Eigenvector 1 space, as important as the Eddington ratio L/L_Edd. This is important because it links the properties of the central engine with the properties of the host galaxy, i.e., the interstellar medium (ISM). We also confirm previously found correlations involving the line width of Hβ and the strength of the Fe II and [O III] λ5007 emission lines, and we confirm the important role played by L/L_Edd in driving the properties of NLS1 galaxies. A spatial correlation analysis shows that large-scale environments of the BLS1 and NLS1 galaxies of our sample are similar. If mergers are rare in our sample, accretion-driven winds, on the one hand, or bar-driven inflows, on the other hand, may account for the strong dependence of Eigenvector 1 on ISM density.
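
    Eigenvector 1 in studies of this kind is the leading principal component of the correlation matrix of the measured parameters; a minimal sketch with random stand-in data (not the paper's measurements):

      import numpy as np

      rng = np.random.default_rng(1)
      # Columns stand in for measured AGN parameters, e.g. FWHM(Hbeta),
      # Fe II strength, [O III] strength, L/L_Edd, NLR density.
      X = rng.normal(size=(200, 5))

      corr = np.corrcoef(X, rowvar=False)      # parameter correlation matrix
      eigvals, eigvecs = np.linalg.eigh(corr)  # eigenvalues in ascending order
      ev1 = eigvecs[:, -1]                     # Eigenvector 1 = largest eigenvalue
      print("Eigenvector 1 loadings:", np.round(ev1, 2))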

  10. The ESO Diffuse Interstellar Bands Large Exploration Survey (EDIBLES) . I. Project description, survey sample, and quality assessment

    Science.gov (United States)

    Cox, Nick L. J.; Cami, Jan; Farhang, Amin; Smoker, Jonathan; Monreal-Ibero, Ana; Lallement, Rosine; Sarre, Peter J.; Marshall, Charlotte C. M.; Smith, Keith T.; Evans, Christopher J.; Royer, Pierre; Linnartz, Harold; Cordiner, Martin A.; Joblin, Christine; van Loon, Jacco Th.; Foing, Bernard H.; Bhatt, Neil H.; Bron, Emeric; Elyajouri, Meriem; de Koter, Alex; Ehrenfreund, Pascale; Javadi, Atefeh; Kaper, Lex; Khosroshadi, Habib G.; Laverick, Mike; Le Petit, Franck; Mulas, Giacomo; Roueff, Evelyne; Salama, Farid; Spaans, Marco

    2017-10-01

    The carriers of the diffuse interstellar bands (DIBs) are largely unidentified molecules ubiquitously present in the interstellar medium (ISM). After decades of study, two strong and possibly three weak near-infrared DIBs have recently been attributed to the C60^+ fullerene based on observational and laboratory measurements. There is great promise for the identification of the over 400 other known DIBs, as this result could provide chemical hints towards other possible carriers. In an effort to systematically study the properties of the DIB carriers, we have initiated a new large-scale observational survey: the ESO Diffuse Interstellar Bands Large Exploration Survey (EDIBLES). The main objective is to build on and extend existing DIB surveys to make a major step forward in characterising the physical and chemical conditions for a statistically significant sample of interstellar lines-of-sight, with the goal to reverse-engineer key molecular properties of the DIB carriers. EDIBLES is a filler Large Programme using the Ultraviolet and Visual Echelle Spectrograph at the Very Large Telescope at Paranal, Chile. It is designed to provide an observationally unbiased view of the presence and behaviour of the DIBs towards early-spectral-type stars whose lines-of-sight probe the diffuse-to-translucent ISM. Such a complete dataset will provide a deep census of the atomic and molecular content, physical conditions, chemical abundances and elemental depletion levels for each sightline. Achieving these goals requires a homogeneous set of high-quality data in terms of resolution (R ~ 70 000-100 000), sensitivity (S/N up to 1000 per resolution element), and spectral coverage (305-1042 nm), as well as a large sample size (100+ sightlines). In this first paper the goals, objectives and methodology of the EDIBLES programme are described and an initial assessment of the data is provided.

  11. airGR: an R-package suitable for large sample hydrology presenting a suite of lumped hydrological models

    Science.gov (United States)

    Thirel, G.; Delaigue, O.; Coron, L.; Perrin, C.; Andreassian, V.

    2016-12-01

    large sample hydrology experiments.

  12. A topological analysis of large-scale structure, studied using the CMASS sample of SDSS-III

    International Nuclear Information System (INIS)

    Parihar, Prachi; Gott, J. Richard III; Vogeley, Michael S.; Choi, Yun-Young; Kim, Juhan; Kim, Sungsoo S.; Speare, Robert; Brownstein, Joel R.; Brinkmann, J.

    2014-01-01

    We study the three-dimensional genus topology of large-scale structure using the northern region of the CMASS Data Release 10 (DR10) sample of the SDSS-III Baryon Oscillation Spectroscopic Survey. We select galaxies with redshift 0.452 < z < 0.625 and with a stellar mass M_stellar > 10^11.56 M_☉. We study the topology at two smoothing lengths: R_G = 21 h^-1 Mpc and R_G = 34 h^-1 Mpc. The genus topology studied at the R_G = 21 h^-1 Mpc scale results in the highest genus amplitude observed to date. The CMASS sample yields a genus curve that is characteristic of one produced by Gaussian random phase initial conditions. The data thus support the standard model of inflation where random quantum fluctuations in the early universe produced Gaussian random phase initial conditions. Modest deviations in the observed genus from random phase are as expected from shot noise effects and the nonlinear evolution of structure. We suggest the use of a fitting formula motivated by perturbation theory to characterize the shift and asymmetries in the observed genus curve with a single parameter. We construct 54 mock SDSS CMASS surveys along the past light cone from the Horizon Run 3 (HR3) N-body simulations, where gravitationally bound dark matter subhalos are identified as the sites of galaxy formation. We study the genus topology of the HR3 mock surveys with the same geometry and sampling density as the observational sample and find the observed genus topology to be consistent with ΛCDM as simulated by the HR3 mock samples. We conclude that the topology of the large-scale structure in the SDSS CMASS sample is consistent with cosmological models having primordial Gaussian density fluctuations growing in accordance with general relativity to form galaxies in massive dark matter halos.

  13. Robust BRCA1-like classification of copy number profiles of samples repeated across different datasets and platforms

    NARCIS (Netherlands)

    Schouten, P.C.; Grigoriadis, A.; Kuilman, T.; Mirza, H.; Watkins, J.A.; Cooke, S.A.; Dyk, E. van; Severson, T.M.; Rueda, O.M.; Hoogstraat, M.; Verhagen, C.V.M.; Natrajan, R.; Chin, S.F.; Lips, E.H.; Kruizinga, J.; Velds, A.; Nieuwland, M.; Kerkhoven, R.M.; Krijgsman, O.; Vens, C.; Peeper, D.; Nederlof, P.M.; Caldas, C.; Tutt, A.N.; Wessels, L.F.; Linn, S.C.

    2015-01-01

    Breast cancers with BRCA1 germline mutation have a characteristic DNA copy number (CN) pattern. We developed a test that assigns CN profiles to be 'BRCA1-like' or 'non-BRCA1-like', which refers to resembling a BRCA1-mutated tumor or resembling a tumor without a BRCA1 mutation, respectively.

  14. Robust BRCA1-like classification of copy number profiles of samples repeated across different datasets and platforms

    NARCIS (Netherlands)

    Schouten, P.C.; Grigoriadis, A.; Kuilman, T.; Mirza, H.; Watkins, J.A.; Cooke, S.A.; Van Dyk, E.; Severson, T.M.; Rueda, O.M.; Hoogstraat, M.; Verhagen, C.; Natrajan, R.; Chin, S.F.; Lips, E.H.; Kruizinga, J.; Velds, A.; Nieuwland, M.; Kerkhoven, R.M.; Krijgsman, O.; Vens, C.; Peeper, D.; Nederlof, P.M.; Caldas, C.; Tutt, A.N.; Wessels, L.F.A.; Linn, S.C.

    2015-01-01

    Breast cancers with BRCA1 germline mutation have a characteristic DNA copy number (CN) pattern. We developed a test that assigns CN profiles to be ‘BRCA1-like’ or ‘non-BRCA1-like’, which refers to resembling a BRCA1-mutated tumor or resembling a tumor without a BRCA1 mutation, respectively.

  15. Robust BRCA1-like classification of copy number profiles of samples repeated across different datasets and platforms

    NARCIS (Netherlands)

    Schouten, Philip C.; Grigoriadis, Anita; Kuilman, Thomas; Mirza, Hasan; Watkins, Johnathan A.; Cooke, Saskia A.; van Dyk, Ewald; Severson, Tesa M.; Rueda, Oscar M.; Hoogstraat, Marlous; Verhagen, Caroline V. M.; Natrajan, Rachael; Chin, Suet-Feung; Lips, Esther H.; Kruizinga, Janneke; Velds, Arno; Nieuwland, Marja; Kerkhoven, Ron M.; Krijgsman, Oscar; Vens, Conchita; Peeper, Daniel; Nederlof, Petra M.; Caldas, Carlos; Tutt, Andrew N.; Wessels, Lodewyk F.; Linn, Sabine C.

    Breast cancers with BRCA1 germline mutation have a characteristic DNA copy number (CN) pattern. We developed a test that assigns CN profiles to be 'BRCA1-like' or 'non-BRCA1-like', which refers to resembling a BRCA1-mutated tumor or resembling a tumor without a BRCA1 mutation, respectively.

  16. Association of variation in Fc gamma receptor 3B gene copy number with rheumatoid arthritis in Caucasian samples

    NARCIS (Netherlands)

    McKinney, Cushla; Fanciulli, Manuela; Merriman, Marilyn E.; Phipps-Green, Amanda; Alizadeh, Behrooz Z.; Koeleman, Bobby P. C.; Dalbeth, Nicola; Gow, Peter J.; Harrison, Andrew A.; Highton, John; Jones, Peter B.; Stamp, Lisa K.; Steer, Sophia; Barrera, Pilar; Coenen, Marieke J. H.; Franke, Barbara; van Riel, Piet L. C. M.; Vyse, Tim J.; Aitman, Tim J.; Radstake, Timothy R. D. J.; Merriman, Tony R.

    2010-01-01

    Objective: There is increasing evidence that variation in gene copy number (CN) influences clinical phenotype. The low-affinity Fc gamma receptor 3B (FCGR3B) located in the FCGR gene cluster is a CN polymorphic gene involved in the recruitment to sites of inflammation and activation of

  17. Association of variation in Fcgamma receptor 3B gene copy number with rheumatoid arthritis in Caucasian samples.

    NARCIS (Netherlands)

    McKinney, C.; Fanciulli, M.; Merriman, M.E.; Phipps-Green, A.; Alizadeh, B.Z.; Koeleman, B.P.; Dalbeth, N.; Gow, P.J.; Harrison, A.A.; Highton, J.; Jones, P.B.; Stamp, L.K.; Steer, S.; Barrera, P.; Coenen, M.J.H.; Franke, B.; Riel, P.L.C.M. van; Vyse, T.J.; Aitman, T.J.; Radstake, T.R.D.J.; Merriman, T.R.

    2010-01-01

    OBJECTIVE: There is increasing evidence that variation in gene copy number (CN) influences clinical phenotype. The low-affinity Fcgamma receptor 3B (FCGR3B) located in the FCGR gene cluster is a CN polymorphic gene involved in the recruitment to sites of inflammation and activation of

  18. Sampling in schools and large institutional buildings: Implications for regulations, exposure and management of lead and copper.

    Science.gov (United States)

    Doré, Evelyne; Deshommes, Elise; Andrews, Robert C; Nour, Shokoufeh; Prévost, Michèle

    2018-04-21

    Legacy lead and copper components are ubiquitous in plumbing of large buildings, including schools that serve children most vulnerable to lead exposure. Lead and copper samples must be collected after varying stagnation times and interpreted in reference to different thresholds. A total of 130 outlets (fountains, bathroom and kitchen taps) were sampled for dissolved and particulate lead as well as copper. Sampling was conducted at 8 schools and 3 institutional (non-residential) buildings served by municipal water of varying corrosivity, with and without corrosion control (CC), and without a lead service line. Samples included first draw following overnight stagnation (>8 h), partially (30 s) and fully (5 min) flushed, and first draw after 30 min of stagnation (30MS). Total lead concentrations in first draw samples after overnight stagnation varied widely from 0.07 to 19.9 μg Pb/L (median: 1.7 μg Pb/L) for large buildings served with non-corrosive water. Higher concentrations were observed in schools with corrosive water without CC (0.9-201 μg Pb/L, median: 14.3 μg Pb/L), while levels in schools with CC ranged from 0.2 to 45.1 μg Pb/L (median: 2.1 μg Pb/L). Partial flushing (30 s) and full flushing (5 min) reduced concentrations by 88% and 92%, respectively, for corrosive waters without CC. Lead concentrations after 30 min of stagnation were about 45% lower than values in 1st draw samples collected after overnight stagnation. Concentrations of particulate Pb varied widely (≥0.02-846 μg Pb/L) and were found to be the cause of very high total Pb concentrations in the 2% of samples exceeding 50 μg Pb/L. Pb levels across outlets within the same building varied widely (up to 1000X), especially in corrosive water (0.85-851 μg Pb/L after 30MS), confirming the need to sample at each outlet to identify high-risk taps. Based on the much higher concentrations observed in first draw samples, even after a short stagnation, the first 250 mL should be discarded unless no sources

  19. MZDASoft: a software architecture that enables large-scale comparison of protein expression levels over multiple samples based on liquid chromatography/tandem mass spectrometry.

    Science.gov (United States)

    Ghanat Bari, Mehrab; Ramirez, Nelson; Wang, Zhiwei; Zhang, Jianqiu Michelle

    2015-10-15

    Without accurate peak linking/alignment, only the expression levels of a small percentage of proteins can be compared across multiple samples in Liquid Chromatography/Mass Spectrometry/Tandem Mass Spectrometry (LC/MS/MS), due to the selective nature of tandem MS peptide identification. This greatly hampers biomedical research that aims at finding biomarkers for disease diagnosis, treatment, and the understanding of disease mechanisms. A recent algorithm, PeakLink, has allowed the accurate linking of LC/MS peaks without tandem MS identifications to their corresponding ones with identifications across multiple samples collected from different instruments, tissues and labs, which greatly enhanced the ability to compare proteins. However, PeakLink cannot be implemented practically for large numbers of samples on existing software architectures, because it requires access to peak elution profiles from multiple LC/MS/MS samples simultaneously. We propose a new architecture based on parallel processing, which extracts LC/MS peak features and saves them in database files, enabling the implementation of PeakLink for multiple samples. The software has been deployed in High-Performance Computing (HPC) environments. The core part of the software, MZDASoft Parallel Peak Extractor (PPE), can be downloaded with a user and developer's guide, and it can be run on HPC centers directly. The quantification applications, MZDASoft TandemQuant and MZDASoft PeakLink, are written in Matlab and compiled with the Matlab runtime compiler. A sample script that incorporates all necessary processing steps of MZDASoft for LC/MS/MS quantification in a parallel processing environment is available. The project webpage is http://compgenomics.utsa.edu/zgroup/MZDASoft. The proposed architecture enables the implementation of PeakLink for multiple samples. Significantly more (100%-500%) proteins can be compared over multiple samples with better quantification accuracy in test cases. MZDASoft
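
    The key architectural idea — extracting per-sample peak features in parallel and persisting them to files so the linking step never needs all raw runs in memory — can be sketched as follows (extract_peak_features and the file layout are hypothetical, not the actual MZDASoft PPE interface):

      import pickle
      from multiprocessing import Pool
      from pathlib import Path

      def extract_peak_features(mzml_path):
          """Placeholder for LC/MS peak detection on a single sample file."""
          return {"sample": mzml_path.name, "peaks": []}

      def process_sample(mzml_path):
          features = extract_peak_features(mzml_path)
          out = mzml_path.parent / (mzml_path.stem + "_features.pkl")
          with out.open("wb") as fh:
              pickle.dump(features, fh)      # per-sample feature "database" file
          return out

      if __name__ == "__main__":
          samples = sorted(Path("runs").glob("*.mzML"))
          with Pool() as pool:               # one worker per CPU core by default
              feature_files = pool.map(process_sample, samples)
          # A PeakLink-style linker can now stream these compact files
          # instead of holding every raw LC/MS run in memory at once.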

  20. Precise large deviations of aggregate claims in a size-dependent renewal risk model with stopping time claim-number process

    Directory of Open Access Journals (Sweden)

    Shuo Zhang

    2017-04-01

    Full Text Available In this paper, we consider a size-dependent renewal risk model with a stopping-time claim-number process. In this model, we do not make any assumption on the dependence structure of claim sizes and inter-arrival times. We study large deviations of the aggregate amount of claims. For the subexponential heavy-tailed case, we obtain a precise large-deviation formula; our method substantially relies on a martingale for the structure of our models.

  1. Hungarian Marfan family with large FBN1 deletion calls attention to copy number variation detection in the current NGS era

    Science.gov (United States)

    Ágg, Bence; Meienberg, Janine; Kopps, Anna M.; Fattorini, Nathalie; Stengl, Roland; Daradics, Noémi; Pólos, Miklós; Bors, András; Radovits, Tamás; Merkely, Béla; De Backer, Julie; Szabolcs, Zoltán; Mátyás, Gábor

    2018-01-01

    Copy number variations (CNVs) comprise about 10% of reported disease-causing mutations in Mendelian disorders. Nevertheless, pathogenic CNVs may have been under-detected due to the lack, or insufficient use, of appropriate detection methods. In this report, using the example of the diagnostic odyssey of a patient with Marfan syndrome (MFS) harboring a hitherto unreported 32-kb FBN1 deletion, we highlight the need for and the feasibility of testing for CNVs (>1 kb) in Mendelian disorders in the current next-generation sequencing (NGS) era. PMID:29850152

  2. Numerical simulation of nonstationary dissipative structures in 3D double-diffusive convection at large Rayleigh numbers

    Science.gov (United States)

    Kozitskiy, Sergey

    2018-05-01

    Numerical simulation of nonstationary dissipative structures in 3D double-diffusive convection has been performed by using the previously derived system of complex Ginzburg-Landau type amplitude equations, valid in a neighborhood of Hopf bifurcation points. Simulation has shown that the state of spatiotemporal chaos develops in the system. It has the form of nonstationary structures that depend on the parameters of the system. The shape of structures does not depend on the initial conditions, and a limited number of spectral components participate in their formation.

  3. Cluster lot quality assurance sampling: effect of increasing the number of clusters on classification precision and operational feasibility.

    Science.gov (United States)

    Okayasu, Hiromasa; Brown, Alexandra E; Nzioki, Michael M; Gasasira, Alex N; Takane, Marina; Mkanda, Pascal; Wassilak, Steven G F; Sutter, Roland W

    2014-11-01

    To assess the quality of supplementary immunization activities (SIAs), the Global Polio Eradication Initiative (GPEI) has used cluster lot quality assurance sampling (C-LQAS) methods since 2009. However, since the inception of C-LQAS, questions have been raised about the optimal balance between operational feasibility and precision of classification of lots to identify areas with low SIA quality that require corrective programmatic action. To determine if an increased precision in classification would result in differential programmatic decision making, we conducted a pilot evaluation in 4 local government areas (LGAs) in Nigeria with an expanded LQAS sample size of 16 clusters (instead of the standard 6 clusters) of 10 subjects each. The results showed greater heterogeneity between clusters than the assumed standard deviation of 10%, ranging from 12% to 23%. Comparing the distribution of 4-outcome classifications obtained from all possible combinations of 6-cluster subsamples to the observed classification of the 16-cluster sample, we obtained an exact match in classification in 56% to 85% of instances. We concluded that the 6-cluster C-LQAS provides acceptable classification precision for programmatic action. Considering the greater resources required to implement an expanded C-LQAS, the improvement in precision was deemed insufficient to warrant the effort. Published by Oxford University Press on behalf of the Infectious Diseases Society of America 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
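
    The subsample comparison reported here — classifying every 6-cluster subset of the 16 pilot clusters and checking agreement with the full-sample classification — can be sketched as follows (the cluster counts and the 4-outcome thresholds are invented placeholders, not the GPEI rules):

      from itertools import combinations
      from math import comb

      # Invented per-cluster counts of vaccinated children out of 10 sampled.
      clusters = [9, 10, 8, 7, 10, 9, 6, 8, 9, 10, 7, 8, 9, 10, 8, 9]

      def classify(sample, n_per_cluster=10):
          """Toy 4-outcome classification on overall coverage (thresholds invented)."""
          coverage = sum(sample) / (len(sample) * n_per_cluster)
          if coverage >= 0.90: return "high"
          if coverage >= 0.80: return "moderate"
          if coverage >= 0.70: return "low"
          return "very low"

      full = classify(clusters)
      matches = sum(classify(sub) == full for sub in combinations(clusters, 6))
      print(f"{matches}/{comb(16, 6)} 6-cluster subsamples match '{full}'")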

  4. Soil Characterization by Large Scale Sampling of Soil Mixed with Buried Construction Debris at a Former Uranium Fuel Fabrication Facility

    International Nuclear Information System (INIS)

    Nardi, A.J.; Lamantia, L.

    2009-01-01

    Recent soil excavation activities on a site identified the presence of buried uranium-contaminated building construction debris. The site was previously the location of a low-enriched-uranium fuel fabrication facility. This resulted in the collection of excavated materials from the two locations where contaminated subsurface debris was identified. The excavated material was temporarily stored in two piles on the site until a determination could be made as to the appropriate disposition of the material. The excavated material was characterized by collecting large-scale samples in 1-cubic-meter Super Sacks. Twenty bags were filled with excavated material consisting of the mixture of both the construction debris and the associated soil. In order to obtain information on the level of activity associated with the construction debris, ten additional bags were filled with construction debris that had been separated, to the extent possible, from the associated soil. Radiological surveys were conducted of the resulting bags of collected materials and the soil associated with the waste mixture. The 30 large samples, collected as bags, were counted using an In-Situ Object Counting System (ISOCS) unit to determine the average concentration of U-235 present in each bag. The soil fraction was sampled by the collection of 40 samples of soil for analysis in an on-site laboratory. A fraction of these samples was also sent to an off-site laboratory for additional analysis. This project provided the necessary soil characterization information to allow consideration of alternate options for disposition of the material. The identified contaminant was verified to be low-enriched uranium. Concentrations of uranium in the waste were found to be lower than the calculated site-specific derived concentration guideline levels (DCGLs) but higher than the NRC's screening values. The methods and results are presented

  5. SyPRID sampler: A large-volume, high-resolution, autonomous, deep-ocean precision plankton sampling system

    Science.gov (United States)

    Billings, Andrew; Kaiser, Carl; Young, Craig M.; Hiebert, Laurel S.; Cole, Eli; Wagner, Jamie K. S.; Van Dover, Cindy Lee

    2017-03-01

    The current standard for large-volume (thousands of cubic meters) zooplankton sampling in the deep sea is the MOCNESS, a system of multiple opening-closing nets, typically lowered to within 50 m of the seabed and towed obliquely to the surface to obtain low-spatial-resolution samples that integrate across tens of meters of water depth. The SyPRID (Sentry Precision Robotic Impeller Driven) sampler is an innovative, deep-rated (6000 m) plankton sampler that partners with the Sentry Autonomous Underwater Vehicle (AUV) to obtain paired, large-volume plankton samples at specified depths and survey lines to within 1.5 m of the seabed and with simultaneous collection of sensor data. SyPRID uses a perforated Ultra-High-Molecular-Weight (UHMW) plastic tube to support a fine mesh net within an outer carbon composite tube (tube-within-a-tube design), with an axial flow pump located aft of the capture filter. The pump facilitates flow through the system and reduces or possibly eliminates the bow wave at the mouth opening. The cod end, a hollow truncated cone, is also made of UHMW plastic and includes a collection volume designed to provide an area where zooplankton can collect, out of the high flow region. SyPRID attaches as a saddle-pack to the Sentry vehicle. Sentry itself is configured with a flight control system that enables autonomous survey paths to low altitudes. In its verification deployment at the Blake Ridge Seep (2160 m) on the US Atlantic Margin, SyPRID was operated for 6 h at an altitude of 5 m. It recovered plankton samples, including delicate living larvae, from the near-bottom stratum that is seldom sampled by a typical MOCNESS tow. The prototype SyPRID and its next generations will enable studies of plankton or other particulate distributions associated with localized physico-chemical strata in the water column or above patchy habitats on the seafloor.

  6. Simulation of droplet impact onto a deep pool for large Froude numbers in different open-source codes

    Science.gov (United States)

    Korchagova, V. N.; Kraposhin, M. V.; Marchevsky, I. K.; Smirnova, E. V.

    2017-11-01

    A droplet impact on a deep pool can induce macro-scale or micro-scale effects such as a crown splash, a high-speed jet, formation of secondary droplets or thin liquid films, etc. These depend on the diameter and velocity of the droplet, the liquid properties, the effects of external forces and other factors that can be captured by ratios of dimensionless criteria. In the present research, we considered the droplet and the pool to consist of the same viscous incompressible liquid. We took surface tension into account but neglected gravity forces. We used two open-source codes (OpenFOAM and Gerris) for our computations. We review the possibility of using these codes for simulating the free-surface flows that may take place after a droplet impact on the pool. Both codes simulated several modes of droplet impact. We estimated the effect of liquid properties by means of the Reynolds number and Weber number. Numerical simulation enabled us to find boundaries between different modes of droplet impact on a deep pool and to plot corresponding mode maps. The ratio of liquid density to that of the surrounding gas induces several changes in mode maps. Increasing this density ratio suppresses the crown splash.
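
    The impact regimes discussed here are organized by these dimensionless groups; a minimal sketch with illustrative water-like properties (values assumed, not taken from the paper):

      # Dimensionless groups governing droplet impact; all values illustrative.
      rho   = 1000.0   # liquid density, kg/m^3
      mu    = 1.0e-3   # dynamic viscosity, Pa*s
      sigma = 0.072    # surface tension, N/m
      D     = 2.0e-3   # droplet diameter, m
      V     = 3.0      # impact velocity, m/s

      Re = rho * V * D / mu         # inertia vs. viscosity
      We = rho * V**2 * D / sigma   # inertia vs. surface tension
      Fr = V**2 / (9.81 * D)        # inertia vs. gravity; large Fr justifies neglecting gravity
      print(f"Re = {Re:.0f}, We = {We:.0f}, Fr = {Fr:.0f}")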

  7. An examination of the RCMAS-2 scores across gender, ethnic background, and age in a large Asian school sample.

    Science.gov (United States)

    Ang, Rebecca P; Lowe, Patricia A; Yusof, Noradlin

    2011-12-01

    The present study investigated the factor structure, reliability, convergent and discriminant validity, and U.S. norms of the Revised Children's Manifest Anxiety Scale, Second Edition (RCMAS-2; C. R. Reynolds & B. O. Richmond, 2008a) scores in a Singapore sample of 1,618 school-age children and adolescents. Although there were small statistically significant differences in the average RCMAS-2 T scores found across various demographic groupings, on the whole, the U.S. norms appear adequate for use in the Asian Singapore sample. Results from item bias analyses suggested that biased items detected had small effects and were counterbalanced across gender and ethnicity, and hence, their relative impact on test score variation appears to be minimal. Results of factor analyses on the RCMAS-2 scores supported the presence of a large general anxiety factor, the Total Anxiety factor, and the 5-factor structure found in U.S. samples was replicated. Both the large general anxiety factor and the 5-factor solution were invariant across gender and ethnic background. Internal consistency estimates ranged from adequate to good, and 2-week test-retest reliability estimates were comparable to previous studies. Evidence providing support for convergent and discriminant validity of the RCMAS-2 scores was also found. Taken together, findings provide additional cross-cultural evidence of the appropriateness and usefulness of the RCMAS-2 as a measure of anxiety in Asian Singaporean school-age children and adolescents.
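
    The internal-consistency estimates reported here are typically Cronbach's alpha; a minimal sketch of the computation on synthetic item data (not the RCMAS-2 items):

      import numpy as np

      def cronbach_alpha(items):
          """items: (n_respondents, k_items) array of item scores."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_variances = items.var(axis=0, ddof=1).sum()
          total_variance = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_variances / total_variance)

      rng = np.random.default_rng(2)
      latent = rng.normal(size=(500, 1))       # shared "anxiety" factor
      items = (latent + rng.normal(size=(500, 10)) > 0).astype(int)  # 10 yes/no items
      print(f"alpha = {cronbach_alpha(items):.2f}")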

  8. TO BE OR NOT TO BE: AN INFORMATIVE NON-SYMBOLIC NUMERICAL MAGNITUDE PROCESSING STUDY ABOUT SMALL VERSUS LARGE NUMBERS IN INFANTS

    Directory of Open Access Journals (Sweden)

    Annelies CEULEMANS

    2014-03-01

    Full Text Available Many studies tested the association between numerical magnitude processing and mathematical achievement, with conflicting findings reported for individuals with mathematical learning disorders. Some of the inconsistencies might be explained by the number of non-symbolic stimuli or dot collections used in studies. It has been hypothesized that there is an object-file system for 'small' and an analogue magnitude system for 'large' numbers. This two-system account has been supported by the set size limit of the object-file system (three items). A boundary was defined accordingly, categorizing numbers below four as 'small' and from four and above as 'large'. However, data on 'small' number processing and on the 'boundary' between small and large numbers are missing. In this contribution we provide data from infants discriminating between the number sets 4 vs. 8 and 1 vs. 4, both containing the number four combined with a large and a small number respectively. Participants were 25 and 26 full-term 9-month-olds for 4 vs. 8 and 1 vs. 4 respectively. The stimuli (dots) were controlled for continuous variables. Eye-tracking was combined with the habituation paradigm. The results showed that the infants were successful in discriminating 1 from 4, but failed to discriminate 4 from 8 dots. This finding supports the assumption of the number four as a 'small' number and enlarges the object-file system's limit. This study might help to explain inconsistencies between studies. Moreover, the information may be useful in answering parents' questions about challenges that vulnerable children with number processing problems, such as children with mathematical learning disorders, might encounter. In addition, the study might give some information on the stimuli that can be used to effectively foster children's magnitude processing skills.

  9. Development and application of a most probable number-PCR assay to quantify flagellate populations in soil samples

    DEFF Research Database (Denmark)

    Fredslund, Line; Ekelund, Flemming; Jacobsen, Carsten Suhr

    2001-01-01

    This paper reports on the first successful molecular detection and quantification of soil protozoa. Quantification of heterotrophic flagellates and naked amoebae in soil has traditionally relied on dilution culturing techniques, followed by most-probable-number (MPN) calculations. Such methods are biased by differences in the culturability of soil protozoa and are unable to quantify specific taxonomic groups, and the results are highly dependent on the choice of media and the skills of the microscopists. Successful detection of protozoa in soil by DNA techniques requires (i) the development
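
    For reference, the traditional MPN estimate mentioned above fits the counts of positive tubes across a dilution series by maximum likelihood; a self-contained sketch (the dilution scheme and counts are invented):

      import math

      def mpn(volumes, n_tubes, positives, lo=1e-6, hi=1e6, iters=100):
          """Most probable number (organisms per unit volume) by maximum
          likelihood for a serial-dilution assay.
          volumes[i]   : sample volume in each tube of dilution i
          n_tubes[i]   : tubes inoculated at dilution i
          positives[i] : tubes showing growth at dilution i"""
          def score(lam):   # derivative of the log-likelihood
              return sum(p * v * math.exp(-lam * v) / (1 - math.exp(-lam * v))
                         - (n - p) * v
                         for v, n, p in zip(volumes, n_tubes, positives))
          for _ in range(iters):    # bisection in log space; score decreases in lam
              mid = math.sqrt(lo * hi)
              if score(mid) > 0:
                  lo = mid
              else:
                  hi = mid
          return math.sqrt(lo * hi)

      # Hypothetical 3-dilution series: 10, 1 and 0.1 g of soil per tube.
      print(mpn([10.0, 1.0, 0.1], [5, 5, 5], [5, 3, 1]))  # MPN per gram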

  10. How to implement a quantum algorithm on a large number of qubits by controlling one central qubit

    Science.gov (United States)

    Zagoskin, Alexander; Ashhab, Sahel; Johansson, J. R.; Nori, Franco

    2010-03-01

    It is desirable to minimize the number of control parameters needed to perform a quantum algorithm. We show that, under certain conditions, an entire quantum algorithm can be efficiently implemented by controlling a single central qubit in a quantum computer. We also show that the different system parameters do not need to be designed accurately during fabrication. They can be determined through the response of the central qubit to external driving. Our proposal is well suited for hybrid architectures that combine microscopic and macroscopic qubits. More details can be found in: A.M. Zagoskin, S. Ashhab, J.R. Johansson, F. Nori, Quantum two-level systems in Josephson junctions as naturally formed qubits, Phys. Rev. Lett. 97, 077001 (2006); and S. Ashhab, J.R. Johansson, F. Nori, Rabi oscillations in a qubit coupled to a quantum two-level system, New J. Phys. 8, 103 (2006).

  11. Instability and associated roll structure of Marangoni convection in high Prandtl number liquid bridge with large aspect ratio

    Science.gov (United States)

    Yano, T.; Nishino, K.; Kawamura, H.; Ueno, I.; Matsumoto, S.

    2015-02-01

    This paper reports the experimental results on the instability and associated roll structures (RSs) of Marangoni convection in liquid bridges formed under the microgravity environment on the International Space Station. The geometry of interest is high aspect ratio (AR = height/diameter ≥ 1.0) liquid bridges of high Prandtl number fluids (Pr = 67 and 207) suspended between coaxial disks heated differentially. The unsteady flow field and associated RSs were revealed with the three-dimensional particle tracking velocimetry. It is found that the flow field after the onset of instability exhibits oscillations with azimuthal mode number m = 1 and associated RSs traveling in the axial direction. The RSs travel in the same direction as the surface flow (co-flow direction) for 1.00 ≤ AR ≤ 1.25 while they travel in the opposite direction (counter-flow direction) for AR ≥ 1.50, thus showing the change of traveling directions with AR. This traveling direction for AR ≥ 1.50 is reversed to the co-flow direction when the temperature difference between the disks is increased to the condition far beyond the critical one. This change of traveling directions is accompanied by the increase of the oscillation frequency. The characteristics of the RSs for AR ≥ 1.50, such as the azimuthal mode of oscillation, the dimensionless oscillation frequency, and the traveling direction, are in reasonable agreement with those of the previous sounding rocket experiment for AR = 2.50 and those of the linear stability analysis of an infinite liquid bridge.

  12. Waste isolation safety assessment program. Controlled sample program publication number 2: interlaboratory comparison of batch Kd values

    International Nuclear Information System (INIS)

    Relyea, J.F.; Serne, R.J.

    1979-06-01

    Objectives were to: (1) ascertain whether different experimenters obtain the same results for the adsorption of Cs, Sr and Pu using common rocks, standard solutions and a prescribed method; and (2) compare the results obtained by individual laboratories using different experimental methodologies and resolve any differences found or determine what conversions can be made to compare results from one method with another. Results from Objective 1 indicate that several parameters that were uncontrolled may have affected results. The uncontrolled parameters were: (1) method of tracer addition to solution, (2) solution to rock ratio, (3) initial tracer concentration in influent solution, (4) particle size distribution, (5) solid-solution separation method, (6) sample containers, and (7) temperature. Observed Kds for Cs and Sr in brine showed agreement among laboratories for both limestone and basalt rock samples. Comparable results were also found for Sr and Cs in the basalt groundwater. Results for Kd(Cs) in the limestone groundwater varied over three orders of magnitude, and Kd(Sr) varied by one order of magnitude in the limestone system. Observed Kd values for Pu typically varied by two to three orders of magnitude in all systems studied. Adsorption of Pu by container walls and by colloidal particles caused much of the variation in Kd(Pu). Direct measurement of Pu adsorbed by the rock (rather than measured by the difference between influent and effluent activities) also failed to reduce the Kd(Pu) variability
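
    Batch Kd values of the kind compared here are computed "by difference" between influent and equilibrium solution concentrations; a minimal sketch (values invented). As the abstract notes, sorption to container walls makes this by-difference estimate unreliable for Pu, which is why direct measurement of the rock was also examined:

      def batch_kd(c_influent, c_equilibrium, volume_ml, mass_g):
          """Batch distribution coefficient Kd (mL/g): activity sorbed per gram
          of rock divided by activity remaining per mL of solution."""
          sorbed_per_g = (c_influent - c_equilibrium) * volume_ml / mass_g
          return sorbed_per_g / c_equilibrium

      # Hypothetical Sr batch test: 30 mL of solution contacting 1 g of basalt.
      print(f"Kd = {batch_kd(1000.0, 250.0, 30.0, 1.0):.0f} mL/g")   # 90 mL/g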

  13. Evaluation of bacterial motility from non-Gaussianity of finite-sample trajectories using the large deviation principle

    International Nuclear Information System (INIS)

    Hanasaki, Itsuo; Kawano, Satoyuki

    2013-01-01

    Motility of bacteria is usually recognized in the trajectory data and compared with Brownian motion, but the diffusion coefficient is insufficient to evaluate it. In this paper, we propose a method based on the large deviation principle. We show that it can be used to evaluate the non-Gaussian characteristics of model Escherichia coli motions and to distinguish combinations of the mean running duration and running speed that lead to the same diffusion coefficient. Our proposed method does not require chemical stimuli to induce the chemotaxis in a specific direction, and it is applicable to various types of self-propelling motions for which no a priori information of, for example, threshold parameters for run and tumble or head/tail direction is available. We also address the issue of the finite-sample effect on the large deviation quantities, but we propose to make use of it to characterize the nature of motility. (paper)

  14. 99Mo Yield Using Large Sample Mass of MoO3 for Sustainable Production of 99Mo

    Science.gov (United States)

    Tsukada, Kazuaki; Nagai, Yasuki; Hashimoto, Kazuyuki; Kawabata, Masako; Minato, Futoshi; Saeki, Hideya; Motoishi, Shoji; Itoh, Masatoshi

    2018-04-01

    A neutron source from the C(d,n) reaction has the unique capability of producing medical radioisotopes such as 99Mo with a minimum level of radioactive waste. Precise data on the neutron flux are crucial to determine the best conditions for obtaining the maximum yield of 99Mo. The measured yield of 99Mo produced by the 100Mo(n,2n)99Mo reaction from a large sample mass of MoO3 agrees well with the numerical result estimated with the latest neutron data, which are a factor of two larger than the other existing data. This result establishes an important finding for the domestic production of 99Mo: approximately 50% of the demand for 99Mo in Japan could be met using a 100 g 100MoO3 sample mass with a single accelerator of 40 MeV, 2 mA deuteron beams.
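
    The expected activity follows the standard activation equation A = Nσφ(1 − e^(−λt)); a one-group sketch with a placeholder flux and spectrum-averaged cross section (assumed values, not the measured data from the paper):

      import math

      # One-group activation sketch for 100Mo(n,2n)99Mo; numbers illustrative.
      N_A    = 6.022e23
      mass_g = 100.0           # 100MoO3 target mass quoted in the abstract
      M_MoO3 = 147.9           # g/mol for 100MoO3 (approx.)
      atoms  = mass_g / M_MoO3 * N_A           # 100Mo atoms in the target

      phi   = 1.0e11           # fast-neutron flux, n/(cm^2 s)   (assumed)
      sigma = 1.0e-24          # spectrum-averaged cross section, cm^2 (assumed)
      lam   = math.log(2) / (65.94 * 3600)     # 99Mo decay constant (T1/2 = 65.94 h)
      t_irr = 24 * 3600        # irradiation time, s

      rate = atoms * sigma * phi                   # production rate, atoms/s
      A_Bq = rate * (1 - math.exp(-lam * t_irr))   # activity at end of bombardment
      print(f"99Mo at EOB: {A_Bq / 3.7e10:.3f} Ci")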

  15. Neutron activation analysis of archaeological artifacts using the conventional relative method: a realistic approach for analysis of large samples

    International Nuclear Information System (INIS)

    Bedregal, P.S.; Mendoza, A.; Montoya, E.H.; Cohen, I.M.; Universidad Tecnologica Nacional, Buenos Aires; Oscar Baltuano

    2012-01-01

    A new approach for the analysis of entire potsherds of archaeological interest by INAA, using the conventional relative method, is described. The proposed analytical method involves, primarily, the preparation of replicates of the original archaeological pottery with well-known chemical composition (standard), to be irradiated simultaneously with the original object (sample) in a well-thermalized external neutron beam of the RP-10 reactor. The basic advantage of this proposal is to avoid the need for complicated effect corrections when dealing with large samples, due to neutron self-shielding, neutron self-thermalization and gamma-ray attenuation. In addition, and in contrast with the other methods, the main advantages are the possibility of evaluating the uncertainty of the results and, fundamentally, validating the overall methodology. (author)

  16. A Large-Sample Test of a Semi-Automated Clavicle Search Engine to Assist Skeletal Identification by Radiograph Comparison.

    Science.gov (United States)

    D'Alonzo, Susan S; Guyomarc'h, Pierre; Byrd, John E; Stephan, Carl N

    2017-01-01

    In 2014, a morphometric capability to search chest radiograph databases by quantified clavicle shape was published to assist skeletal identification. Here, we extend the validation tests previously conducted by increasing the search universe 18-fold, from 409 to 7361 individuals, to determine whether there is any associated decrease in performance under these more challenging circumstances. The numbers of trials and analysts were also increased, respectively, from 17 to 30 skeletons and from two to four examiners. Elliptical Fourier analysis was conducted on clavicles from each skeleton by each analyst (shadowgrams trimmed from scratch in every instance) and compared to the search universe. Correctly matching individuals were found in shortlists of 10% of the sample 70% of the time. This rate is similar to, although slightly lower than, rates previously found for much smaller samples (80%). Accuracy and reliability are thereby maintained, even when the comparison system is challenged by much larger search universes. © 2016 American Academy of Forensic Sciences.
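
    The search itself amounts to ranking the radiograph database by distance between quantified shape descriptors and shortlisting the top 10%; a sketch with random stand-ins for the elliptical Fourier coefficient vectors (dimensions and noise level invented):

      import numpy as np

      rng = np.random.default_rng(0)

      # Stand-ins for normalized elliptical Fourier coefficient vectors:
      # a database of 7361 chest radiographs and one query skeleton.
      db = rng.normal(size=(7361, 40))
      truth = 123                      # index of the correct antemortem record
      query = db[truth] + rng.normal(scale=0.3, size=40)   # noisy re-trace

      dist = np.linalg.norm(db - query, axis=1)   # Euclidean distance in shape space
      order = np.argsort(dist)
      rank = int(np.where(order == truth)[0][0]) + 1

      shortlist = int(0.10 * len(db))  # top-10% shortlist, as in the study
      print(f"true match ranked {rank}; in shortlist: {rank <= shortlist}")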

  17. Oxalic acid as a liquid dosimeter for absorbed dose measurement in large-scale of sample solution

    International Nuclear Information System (INIS)

    Biramontri, S.; Dechburam, S.; Vitittheeranon, A.; Wanitsuksombut, W.; Thongmitr, W.

    1999-01-01

    This study shows the feasibility of applying a 2.5 mM aqueous oxalic acid solution, read out by a spectrophotometric analysis method, for absorbed dose measurement from 1 to 10 kGy in large-scale sample solutions. The optimum wavelength of 220 nm was selected. The stability of the response of the dosimeter over 25 days was better than 1% for unirradiated and ±2% for irradiated solution. The reproducibility within the same batch was within 1%. The variation of the dosimeter response between batches was also studied. (author)

  18. Increased body mass index predicts severity of asthma symptoms but not objective asthma traits in a large sample of asthmatics

    DEFF Research Database (Denmark)

    Bildstrup, Line; Backer, Vibeke; Thomsen, Simon Francis

    2015-01-01

    AIM: To examine the relationship between body mass index (BMI) and different indicators of asthma severity in a large community-based sample of Danish adolescents and adults. METHODS: A total of 1186 subjects, 14-44 years of age, who in a screening questionnaire had reported a history of airway symptoms suggestive of asthma and/or allergy, or who were taking any medication for these conditions, were clinically examined. All participants were interviewed about respiratory symptoms; in addition, height, weight, skin test reactivity, lung function, and airway responsiveness were measured

  19. Large-scale prospective T cell function assays in shipped, unfrozen blood samples: experiences from the multicenter TRIGR trial.

    Science.gov (United States)

    Hadley, David; Cheung, Roy K; Becker, Dorothy J; Girgis, Rose; Palmer, Jerry P; Cuthbertson, David; Krischer, Jeffrey P; Dosch, Hans-Michael

    2014-02-01

    Broad consensus assigns T lymphocytes fundamental roles in inflammatory, infectious, and autoimmune diseases. However, clinical investigations have lacked fully characterized and validated procedures, equivalent to those of widely practiced biochemical tests with established clinical roles, for measuring core T cell functions. The Trial to Reduce Insulin-dependent diabetes mellitus in the Genetically at Risk (TRIGR) type 1 diabetes prevention trial used consecutive measurements of T cell proliferative responses in prospectively collected fresh heparinized blood samples shipped by courier within North America. In this article, we report on the quality control implications of this simple and pragmatic shipping practice and the interpretation of positive- and negative-control analytes in our assay. We used polyclonal and postvaccination responses in 4,919 samples to analyze the development of T cell immunocompetence. We have found that the vast majority of the samples were viable up to 3 days from the blood draw, yet meaningful responses were found in a proportion of those with longer travel times. Furthermore, the shipping time of uncooled samples significantly decreased both the viabilities of the samples and the unstimulated cell counts in the viable samples. Also, subject age was significantly associated with the number of unstimulated cells and T cell proliferation to positive activators. Finally, we observed a pattern of statistically significant increases in T cell responses to tetanus toxin around the timing of infant vaccinations. This assay platform and shipping protocol satisfy the criteria for robust and reproducible long-term measurements of human T cell function, comparable to those of established blood biochemical tests. We present a stable technology for prospective disease-relevant T cell analysis in immunological diseases, vaccination medicine, and measurement of herd immunity.

  20. Solid-Phase Extraction and Large-Volume Sample Stacking-Capillary Electrophoresis for Determination of Tetracycline Residues in Milk

    Directory of Open Access Journals (Sweden)

    Gabriela Islas

    2018-01-01

    Solid-phase extraction in combination with large-volume sample stacking-capillary electrophoresis (SPE-LVSS-CE) was applied to measure chlortetracycline, doxycycline, oxytetracycline, and tetracycline in milk samples. Under optimal conditions, the proposed method had a linear range of 29 to 200 µg·L⁻¹, with limits of detection ranging from 18.6 to 23.8 µg·L⁻¹ and inter- and intraday repeatabilities below 10% (as relative standard deviations) in all cases. The enrichment factors obtained ranged from 50.33 to 70.85 for all the TCs compared with conventional capillary zone electrophoresis (CZE). This method is adequate to analyze tetracyclines below the most restrictive established maximum residue limits. The proposed method was employed in the analysis of 15 milk samples from different brands. Two of the tested samples were positive for the presence of oxytetracycline, with concentrations of 95 and 126 µg·L⁻¹. SPE-LVSS-CE is a robust, easy, and efficient strategy for online preconcentration of tetracycline residues in complex matrices.

  1. Characterizing the zenithal night sky brightness in large territories: how many samples per square kilometre are needed?

    Science.gov (United States)

    Bará, Salvador

    2018-01-01

    A recurring question arises when trying to characterize, by means of measurements or theoretical calculations, the zenithal night sky brightness throughout a large territory: how many samples per square kilometre are needed? The optimum sampling distance should allow reconstructing, with sufficient accuracy, the continuous zenithal brightness map across the whole region, whilst at the same time avoiding unnecessary and redundant oversampling. This paper attempts to provide some tentative answers to this issue, using two complementary tools: the luminance structure function and the Nyquist-Shannon spatial sampling theorem. The analysis of several regions of the world, based on the data from the New world atlas of artificial night sky brightness, suggests that, as a rule of thumb, about one measurement per square kilometre could be sufficient for determining the zenithal night sky brightness of artificial origin at any point in a region to within ±0.1 mag arcsec⁻² (in the root-mean-square sense) of its true value in the Johnson-Cousins V band. The exact reconstruction of the zenithal night sky brightness maps from samples taken at the Nyquist rate seems to be considerably more demanding.
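
    The Nyquist-Shannon argument used in this record translates into a simple sampling rule: if the brightness map is effectively band-limited at a maximum spatial frequency f_max, exact reconstruction requires samples spaced no farther than 1/(2·f_max) apart. A toy illustration (the cut-off value is an assumed example, not a result from the paper):

```python
def max_sampling_distance(f_max_cycles_per_km):
    """Nyquist: a band-limited map can be exactly reconstructed from
    samples spaced no more than 1/(2*f_max) apart."""
    return 1.0 / (2.0 * f_max_cycles_per_km)

# If the artificial brightness map carries no power above ~0.5 cycles/km,
# one sample per km in each direction (~1 sample per square km) suffices.
print(max_sampling_distance(0.5))  # -> 1.0 km
```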

  2. Analysis of plant hormones by microemulsion electrokinetic capillary chromatography coupled with on-line large volume sample stacking.

    Science.gov (United States)

    Chen, Zongbao; Lin, Zian; Zhang, Lin; Cai, Yan; Zhang, Lan

    2012-04-07

    A novel method of microemulsion electrokinetic capillary chromatography (MEEKC) coupled with on-line large volume sample stacking was developed for the analysis of six plant hormones, including indole-3-acetic acid, indole-3-butyric acid, indole-3-propionic acid, 1-naphthaleneacetic acid, abscisic acid and salicylic acid. Baseline separation of the six plant hormones was achieved within 10 min by using a microemulsion background electrolyte containing 97.2% (w/w) 10 mM borate buffer at pH 9.2, 1.0% (w/w) ethyl acetate as oil droplets, 0.6% (w/w) sodium dodecyl sulphate as surfactant and 1.2% (w/w) 1-butanol as cosurfactant. In addition, an on-line concentration method based on a large volume sample stacking technique and multiple wavelength detection was adopted to improve the detection sensitivity for trace-level hormones in real samples. The optimized method provided an approximately 50-100-fold increase in detection sensitivity compared with the single MEEKC method, and the detection limits (S/N = 3) were between 0.005 and 0.02 μg mL⁻¹. The proposed method was simple, rapid and sensitive and could be applied to the determination of six plant hormones in spiked water samples, tobacco leaves and 1-naphthylacetic acid in leaf fertilizer. The recoveries ranged from 76.0% to 119.1%, and good reproducibilities were obtained with relative standard deviations (RSDs) less than 6.6%.

  3. A LARGE NUMBER OF z > 6 GALAXIES AROUND A QSO AT z = 6.43: EVIDENCE FOR A PROTOCLUSTER?

    International Nuclear Information System (INIS)

    Utsumi, Yousuke; Kashikawa, Nobunari; Miyazaki, Satoshi; Komiyama, Yutaka; Goto, Tomotsugu; Furusawa, Hisanori; Overzier, Roderik

    2010-01-01

    QSOs have been thought to be important for tracing highly biased regions in the early universe, from which the present-day massive galaxies and galaxy clusters formed. While overdensities of star-forming galaxies have been found around QSOs at 2 < z < 5, the situation at z > 6 is less clear. Previous studies with the Hubble Space Telescope (HST) have reported the detection of small excesses of faint dropout galaxies in some QSO fields, but these surveys probed a relatively small region surrounding the QSOs. To overcome this problem, we have observed the most distant QSO at z = 6.4 using the large field of view of the Suprime-Cam (34' x 27'). Newly installed red-sensitive fully depleted CCDs allowed us to select Lyman break galaxies (LBGs) at z ∼ 6.4 more efficiently. We found seven LBGs in the QSO field, whereas only one exists in a comparison field. The significance of this apparent excess is difficult to quantify without spectroscopic confirmation and additional control fields. The Poisson probability to find seven objects when one expects four is ∼10%, while the probability to find seven objects in one field and only one in the other is less than 0.4%, suggesting that the QSO field is significantly overdense relative to the control field. These conclusions are supported by a comparison with a cosmological smoothed particle hydrodynamics simulation which includes the higher order clustering of galaxies. We find some evidence that the LBGs are distributed in a ring-like shape centered on the QSO with a radius of ∼3 Mpc. There are no candidate LBGs within 2 Mpc from the QSO, i.e., galaxies are clustered around the QSO but appear to avoid the very center. These results suggest that the QSO is embedded in an overdense region when defined on a sufficiently large scale (i.e., larger than an HST/ACS pointing). This suggests that the QSO was indeed born in a massive halo. The central deficit of galaxies may indicate that (1) the strong UV radiation from the QSO suppressed galaxy formation in
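
    The two probabilities quoted in the abstract are elementary Poisson computations and can be verified directly; a quick check (taking, as the text implies, a mean of four objects per field):

```python
from math import exp, factorial

def pois_pmf(k, lam):
    return exp(-lam) * lam**k / factorial(k)

lam = 4.0
# P(X >= 7) when four are expected: the ~10% quoted in the abstract.
p_excess = 1.0 - sum(pois_pmf(k, lam) for k in range(7))
# P(7 in one field AND only 1 in the other), same mean in both fields:
p_both = pois_pmf(7, lam) * pois_pmf(1, lam)
print(f"P(X>=7 | lam=4) = {p_excess:.3f}")   # ~0.111
print(f"P(7 and 1)      = {p_both:.4f}")     # ~0.0044, i.e. <0.4%
```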

  4. A study of energy and effective atomic number dependence of the exposure build-up factors in biological samples

    International Nuclear Information System (INIS)

    Sidhu, G.S.; Singh, P.S.; Mudahar, G.S.

    2000-01-01

    A theoretical method is presented to determine the gamma-radiation build-up factors in various biological materials. The gamma energy range is 0.015-15.0 MeV, with penetration depths up to 40 mean free paths considered. The dependence of the exposure build-up factor on incident photon energy and the effective atomic number (Z_eff) has also been assessed. In a practical analysis of dose burden to gamma-irradiated biological materials, the sophistication of Monte Carlo computer techniques would be applied, with associated detailed modelling. However, a feature of the theoretical method presented is its ability to make the consequences of the physics of the scattering process in biological materials more transparent. In addition, it can be quickly employed to give a first-pass dose estimate prior to a more detailed computer study. (author)
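
    The record does not name the parameterisation used, but a common way of turning tabulated coefficients into exposure build-up factors at arbitrary depths is the geometric-progression (G-P) fitting form; the sketch below implements that standard formula with purely hypothetical coefficients:

```python
import math

def gp_buildup(x, b, c, a, xk, d):
    """Geometric-progression (G-P) fitted build-up factor at penetration
    depth x (in mean free paths), using the standard five-coefficient form."""
    k = (c * x**a
         + d * (math.tanh(x / xk - 2.0) - math.tanh(-2.0)) / (1.0 - math.tanh(-2.0)))
    if abs(k - 1.0) < 1e-9:
        return 1.0 + (b - 1.0) * x
    return 1.0 + (b - 1.0) * (k**x - 1.0) / (k - 1.0)

# Hypothetical coefficients for a single photon energy; real values are
# tabulated per material and energy.
print(gp_buildup(x=10.0, b=1.8, c=1.1, a=-0.05, xk=14.0, d=0.02))
```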

  5. A large sample of Kohonen selected E+A (post-starburst) galaxies from the Sloan Digital Sky Survey

    Science.gov (United States)

    Meusinger, H.; Brünecke, J.; Schalldach, P.; in der Au, A.

    2017-01-01

    Context. The galaxy population in the contemporary Universe is characterised by a clear bimodality, blue galaxies with significant ongoing star formation and red galaxies with only a little. The migration between the blue and the red cloud of galaxies is an issue of active research. Post-starburst (PSB) galaxies are thought to be observed in the short-lived transition phase. Aims: We aim to create a large sample of local PSB galaxies from the Sloan Digital Sky Survey (SDSS) to study their characteristic properties, particularly morphological features indicative of gravitational distortions and indications for active galactic nuclei (AGNs). Another aim is to present a tool set for an efficient search in a large database of SDSS spectra based on Kohonen self-organising maps (SOMs). Methods: We computed a huge Kohonen SOM for ∼10⁶ spectra from SDSS data release 7. The SOM is made fully available, in combination with an interactive user interface, for the astronomical community. We selected a large sample of PSB galaxies taking advantage of the clustering behaviour of the SOM. The morphologies of both PSB galaxies and randomly selected galaxies from a comparison sample in SDSS Stripe 82 (S82) were inspected on deep co-added SDSS images to search for indications of gravitational distortions. We used the Portsmouth galaxy property computations to study the evolutionary stage of the PSB galaxies and archival multi-wavelength data to search for hidden AGNs. Results: We compiled a catalogue of 2665 PSB galaxies with EW(Hδ) > 3 Å and redshifts z < […], located between the red sequence and the blue cloud, in agreement with the idea that PSB galaxies represent the transitioning phase between actively and passively evolving galaxies. The relative frequency of distorted PSB galaxies is at least 57% for EW(Hδ) > 5 Å, significantly higher than in the comparison sample. The search for AGNs based on conventional selection criteria in the radio and MIR results in a low AGN fraction of ∼2-3%. We confirm an MIR excess in the mean SED of
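
    The selection method rests on the clustering behaviour of a Kohonen self-organising map, whose core training loop is compact enough to sketch. A minimal SOM on random stand-in data (grid size, learning rate and decay schedules are illustrative choices, not those used for the ∼10⁶ SDSS spectra):

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, grid = 32, (10, 10)             # spectrum length x map size (toy)
data = rng.normal(size=(1000, n_features))  # stand-in for preprocessed spectra
weights = rng.normal(size=(grid[0] * grid[1], n_features))
coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)

for step in range(5000):
    x = data[rng.integers(len(data))]
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best-matching unit
    lr = 0.5 * np.exp(-step / 2000)                     # decaying learning rate
    sigma = 3.0 * np.exp(-step / 2000)                  # shrinking neighbourhood
    dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
    h = np.exp(-dist2 / (2 * sigma**2))                 # neighbourhood kernel
    weights += lr * h[:, None] * (x - weights)          # pull units towards x

# Similar spectra now map to nearby units, so a class of interest (e.g.
# PSB-like spectra) occupies contiguous map regions that can be selected.
labels = np.array([np.argmin(((weights - x) ** 2).sum(axis=1)) for x in data])
```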

  6. A dynamic response model for pressure sensors in continuum and high Knudsen number flows with large temperature gradients

    Science.gov (United States)

    Whitmore, Stephen A.; Petersen, Brian J.; Scott, David D.

    1996-01-01

    This paper develops a dynamic model for pressure sensors in continuum and rarefied flows with longitudinal temperature gradients. The model was developed from the unsteady Navier-Stokes momentum, energy, and continuity equations and was linearized using small perturbations. The energy equation was decoupled from momentum and continuity assuming a polytropic flow process. Rarefied flow conditions were accounted for using a slip flow boundary condition at the tubing wall. The equations were radially averaged and solved assuming gas properties remain constant along a small tubing element. This fundamental solution was used as a building block for arbitrary geometries where fluid properties may also vary longitudinally in the tube. The problem was solved recursively starting at the transducer and working upstream in the tube. Dynamic frequency response tests were performed for continuum flow conditions in the presence of temperature gradients. These tests validated the recursive formulation of the model. Model steady-state behavior was analyzed using the final value theorem. Tests were performed for rarefied flow conditions and compared to the model steady-state response to evaluate the regime of applicability. Model comparisons were excellent for Knudsen numbers up to 0.6. Beyond this point, molecular effects caused the model analyses to become inaccurate.
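
    The recursive formulation described here, starting at the transducer and working upstream element by element, can be illustrated structurally: each tubing element contributes a frequency response, and responses are accumulated along the line. The sketch below uses a deliberately crude first-order element model and is only a schematic of the recursion, not the paper's Navier-Stokes-based model (all parameters hypothetical):

```python
import numpy as np

def element_response(omega, r, length, nu=1.5e-5, c=340.0):
    """Crude first-order stand-in for one tubing element: viscous losses
    grow with length and shrink with radius. NOT the full model."""
    tau = 8.0 * nu * length**2 / (r**2 * c**2)   # illustrative time constant
    return 1.0 / (1.0 + 1j * omega * tau)

def tubing_response(omega, elements):
    """Recursive scheme: start at the transducer end and accumulate element
    responses while moving upstream, mirroring the paper's formulation."""
    h = 1.0 + 0j
    for r, length in elements:                   # transducer -> tube inlet
        h = h * element_response(omega, r, length)
    return h

omega = 2 * np.pi * np.logspace(0, 3, 200)       # 1 Hz to 1 kHz
h = tubing_response(omega, [(0.5e-3, 0.3), (0.25e-3, 0.1)])
print(abs(h[::50]))                              # attenuation vs frequency
```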

  7. Effect of the Hartmann number on phase separation controlled by magnetic field for binary mixture system with large component ratio

    Science.gov (United States)

    Heping, Wang; Xiaoguang, Li; Duyang, Zang; Rui, Hu; Xingguo, Geng

    2017-11-01

    This paper explores phase separation in a magnetic field using a coupled lattice Boltzmann method (LBM) with magnetohydrodynamics (MHD). The left vertical wall was kept at a constant magnetic field. Simulations applied a strong magnetic field to enhance phase separation and increase the size of the separated phases. The focus was the effect of magnetic intensity, expressed through the Hartmann number (Ha), on the phase separation properties. The numerical investigation was carried out for different governing parameters, namely Ha and the component ratio of the mixed liquid. The morphological evolution of phase separation in different magnetic fields was demonstrated. The patterns showed that slanted elliptical phases were created by increasing Ha, due to the formation and growth of magnetic torque and force. The growth kinetics of magnetic phase separation were characterised by the spherically averaged structure factor and by the volume ratio of separated phases to the total system. The results indicate that increasing Ha increases the average size of the separated phases and accelerates the spinodal decomposition and domain growth stages. Especially for larger component ratios of the mixed phases, the degree of separation was also significantly improved by increasing the magnetic intensity. These numerical results provide guidance for setting the optimum conditions for phase separation induced by a magnetic field.
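
    The growth-kinetics analysis relies on the spherically averaged structure factor of the order-parameter field, which is a standard FFT-plus-radial-binning computation. A sketch for a 2D field (random data stand in for the LBM order parameter; the bin count is arbitrary):

```python
import numpy as np

def spherically_averaged_structure_factor(phi, nbins=64):
    """S(k): power spectrum of the fluctuation field, averaged over shells
    of constant |k| -- the quantity used to track domain growth."""
    fluct = phi - phi.mean()
    sk2d = np.abs(np.fft.fftn(fluct)) ** 2 / phi.size
    freqs = [np.fft.fftfreq(n) for n in phi.shape]
    kx, ky = np.meshgrid(*freqs, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2).ravel()
    bins = np.linspace(0, kmag.max(), nbins + 1)
    idx = np.digitize(kmag, bins)
    sk = np.array([sk2d.ravel()[idx == i].mean() if np.any(idx == i) else 0.0
                   for i in range(1, nbins + 1)])
    return 0.5 * (bins[1:] + bins[:-1]), sk     # bin centres, shell averages

phi = np.random.default_rng(1).normal(size=(128, 128))  # stand-in field
k, sk = spherically_averaged_structure_factor(phi)
```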

  8. Sample size and number of outcome measures of veterinary randomised controlled trials of pharmaceutical interventions funded by different sources, a cross-sectional study.

    Science.gov (United States)

    Wareham, K J; Hyde, R M; Grindlay, D; Brennan, M L; Dean, R S

    2017-10-04

    Randomised controlled trials (RCTs) are a key component of the veterinary evidence base. Sample sizes and defined outcome measures are crucial components of RCTs. The aim was to describe the sample size and number of outcome measures of veterinary RCTs, either funded by the pharmaceutical industry or not, published in 2011. A structured search of PubMed identified RCTs examining the efficacy of pharmaceutical interventions. The number of outcome measures, the number of animals enrolled per trial, whether a primary outcome was identified, and the presence of a sample size calculation were extracted from the RCTs. The source of funding was identified for each trial and the groups were compared on the above parameters. Literature searches returned 972 papers; 86 papers comprising 126 individual trials were analysed. The median number of outcomes per trial was 5.0; there were no significant differences across funding groups (p = 0.133). The median number of animals enrolled per trial was 30.0; this was similar across funding groups (p = 0.302). A primary outcome was identified in 40.5% of trials and was significantly more likely to be stated in trials funded by a pharmaceutical company. A very low percentage of trials reported a sample size calculation (14.3%). Failure to report primary outcomes, failure to justify sample sizes, and the reporting of multiple outcome measures were common features of all the clinical trials examined in this study. It is possible some of these factors may be affected by the source of funding of the studies, but the influence of funding needs to be explored with a larger number of trials. Some veterinary RCTs provide a weak evidence base and targeted strategies are required to improve the quality of veterinary RCTs to ensure there is reliable evidence on which to base clinical decisions.
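
    Given that only 14.3% of trials reported a sample size calculation, it is worth noting how small the required computation is. A standard two-group calculation for a continuous outcome (the effect size and standard deviation below are illustrative, not values from the study):

```python
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Standard two-sample size formula for a continuous outcome:
    n = 2 * ((z_{1-alpha/2} + z_{power}) * sd / delta)^2 per group."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * ((z_alpha + z_beta) * sd / delta) ** 2

# E.g. to detect a 5-unit difference with SD 10 at 80% power and 5% alpha:
print(n_per_group(delta=5.0, sd=10.0))  # ~63 animals per group
```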

  9. Technology interactions among low-carbon energy technologies: What can we learn from a large number of scenarios?

    International Nuclear Information System (INIS)

    McJeon, Haewon C.; Clarke, Leon; Kyle, Page; Wise, Marshall; Hackbarth, Andrew; Bryant, Benjamin P.; Lempert, Robert J.

    2011-01-01

    Advanced low-carbon energy technologies can substantially reduce the cost of stabilizing atmospheric carbon dioxide concentrations. Understanding the interactions between these technologies and their impact on the costs of stabilization can help inform energy policy decisions. Many previous studies have addressed this challenge by exploring a small number of representative scenarios that represent particular combinations of future technology developments. This paper uses a combinatorial approach in which scenarios are created for all combinations of the technology development assumptions that underlie a smaller, representative set of scenarios. We estimate stabilization costs for 768 runs of the Global Change Assessment Model (GCAM), based on 384 different combinations of assumptions about the future performance of technologies and two stabilization goals. Graphical depiction of the distribution of stabilization costs provides first-order insights about the full data set and individual technologies. We apply a formal scenario discovery method to obtain more nuanced insights about the combinations of technology assumptions most strongly associated with high-cost outcomes. Many of the fundamental insights from traditional representative scenario analysis still hold under this comprehensive combinatorial analysis. For example, the importance of carbon capture and storage (CCS) and the substitution effect among supply technologies are consistently demonstrated. The results also provide more clarity regarding insights not easily demonstrated through representative scenario analysis. For example, they show more clearly how certain supply technologies can provide a hedge against high stabilization costs, and that aggregate end-use efficiency improvements deliver relatively consistent stabilization cost reductions. Furthermore, the results indicate that a lack of CCS options combined with lower technological advances in the buildings sector or the transportation sector is
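
    The combinatorial design is mechanically simple to reproduce. The sketch below shows how 384 combinations (and 768 runs across two stabilization goals) can arise from one three-level and seven two-level assumption dimensions; the dimension names are hypothetical stand-ins, not GCAM's actual input set:

```python
from itertools import product

# Hypothetical assumption levels, chosen so the counts match the record:
# 3 * 2**7 = 384 combinations, times two goals = 768 model runs.
levels = {
    "ccs": ["none", "reference", "advanced"],
    "nuclear": ["reference", "advanced"],
    "wind_solar": ["reference", "advanced"],
    "bioenergy": ["reference", "advanced"],
    "buildings_enduse": ["reference", "advanced"],
    "industry_enduse": ["reference", "advanced"],
    "transport_enduse": ["reference", "advanced"],
    "storage": ["reference", "advanced"],
}
goals = ["goal_A", "goal_B"]

runs = [dict(zip(levels, combo), goal=g)
        for combo in product(*levels.values()) for g in goals]
print(len(runs))  # 768
```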

  10. A Note on the Large Sample Properties of Estimators Based on Generalized Linear Models for Correlated Pseudo-observations

    DEFF Research Database (Denmark)

    Jacobsen, Martin; Martinussen, Torben

    2016-01-01

    Pseudo-values have proven very useful in censored data analysis in complex settings such as multi-state models. They were originally suggested by Andersen et al., Biometrika, 90, 2003, 335, who also proposed estimating standard errors using classical generalized estimating equation results. These results were studied more formally in Graw et al., Lifetime Data Anal., 15, 2009, 241, which derived some key results based on a second-order von Mises expansion. However, results concerning large sample properties of estimates based on regression models for pseudo-values still seem unclear. In this paper, we study these large sample properties in the simple setting of survival probabilities and show that the estimating function can be written as a U-statistic of second order, giving rise to an additional term that does not vanish asymptotically. We further show that previously advocated standard error...
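
    For readers unfamiliar with the construction under study, the i-th pseudo-observation is the standard jackknife-type quantity below, written here for the survival-probability setting the note addresses:

```latex
% Pseudo-observation for subject i: \hat{\theta} is the estimator computed
% from all n subjects (e.g. the Kaplan-Meier estimate of S(t)), and
% \hat{\theta}^{(-i)} the same estimator with subject i left out.
\hat{\theta}_i = n\,\hat{\theta} - (n-1)\,\hat{\theta}^{(-i)}
% The \hat{\theta}_i then serve as outcomes in a generalized linear model
% with GEE-type standard errors.
```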

  11. Lack of association between digit ratio (2D:4D) and assertiveness: replication in a large sample.

    Science.gov (United States)

    Voracek, Martin

    2009-12-01

    Findings regarding within-sex associations of digit ratio (2D:4D), a putative pointer to long-lasting effects of prenatal androgen action, and sexually differentiated personality traits have generally been inconsistent or unreplicable, suggesting that effects in this domain, if any, are likely small. In contrast to evidence from Wilson's important 1983 study, a forerunner of modern 2D:4D research, two recent studies (Freeman et al., 2005; Hampson et al., 2008) showed that assertiveness, a presumably male-typed personality trait, was not associated with 2D:4D; however, these studies were clearly statistically underpowered. Hence this study examined the question anew, based on a large sample of 491 men and 627 women. Assertiveness was only modestly sexually differentiated, favoring men, and was a positive correlate of age and education and a negative correlate of weight and body mass index among women, but not men. Replicating the two prior studies, 2D:4D was throughout unrelated to assertiveness scores. This null finding was preserved with controls for correlates of assertiveness, in nonparametric analysis, and in tests for curvilinear relations. Discussed are implications of this specific null finding, now replicated in a large sample, for studies of 2D:4D and personality in general, as well as novel research approaches to proceed in this field.

  12. Diversity in the stellar velocity dispersion profiles of a large sample of brightest cluster galaxies z ≤ 0.3

    Science.gov (United States)

    Loubser, S. I.; Hoekstra, H.; Babul, A.; O'Sullivan, E.

    2018-06-01

    We analyse spatially resolved deep optical spectroscopy of brightest cluster galaxies (BCGs) located in 32 massive clusters with redshifts of 0.05 ≤ z ≤ 0.30 to investigate their velocity dispersion profiles. We compare these measurements to those of other massive early-type galaxies, as well as central group galaxies, where relevant. This unique, large sample extends to the most extreme of massive galaxies, spanning M_K between -25.7 and -27.8 mag, and host cluster halo mass M_500 up to 1.7 × 10¹⁵ M⊙. To compare the kinematic properties between brightest group and cluster members, we analyse similar spatially resolved long-slit spectroscopy for 23 nearby brightest group galaxies (BGGs) from the Complete Local-Volume Groups Sample. We find a surprisingly large variety in velocity dispersion slopes for BCGs, with a significantly larger fraction of positive slopes, unique compared to other (non-central) early-type galaxies as well as the majority of the brightest members of the groups. We find that the velocity dispersion slopes of the BCGs and BGGs correlate with the luminosity of the galaxies, and we quantify this correlation. It is not clear whether the full diversity in velocity dispersion slopes that we see is reproduced in simulations.

  13. Gasoline prices, gasoline consumption, and new-vehicle fuel economy: Evidence for a large sample of countries

    International Nuclear Information System (INIS)

    Burke, Paul J.; Nishitateno, Shuhei

    2013-01-01

    Countries differ considerably in terms of the price drivers pay for gasoline. This paper uses data for 132 countries for the period 1995–2008 to investigate the implications of these differences for the consumption of gasoline for road transport. To address the potential for simultaneity bias, we use both a country's oil reserves and the international crude oil price as instruments for a country's average gasoline pump price. We obtain estimates of the long-run price elasticity of gasoline demand of between −0.2 and −0.5. Using newly available data for a sub-sample of 43 countries, we also find that higher gasoline prices induce consumers to substitute to vehicles that are more fuel-efficient, with an estimated elasticity of +0.2. Despite the small size of our elasticity estimates, there is considerable scope for low-price countries to achieve gasoline savings and vehicle fuel economy improvements via reducing gasoline subsidies and/or increasing gasoline taxes. - Highlights: ► We estimate the determinants of gasoline demand and new-vehicle fuel economy. ► Estimates are for a large sample of countries for the period 1995–2008. ► We instrument for gasoline prices using oil reserves and the world crude oil price. ► Gasoline demand and fuel economy are inelastic with respect to the gasoline price. ► Large energy efficiency gains are possible via higher gasoline prices
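
    The instrumenting strategy (oil reserves and the world crude price as instruments for the pump price) follows the textbook two-stage least-squares recipe, which can be sketched with plain numpy on synthetic data; all variable names and coefficients are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 132 * 14                                    # country-years, roughly
z = rng.normal(size=(n, 2))                     # instruments: reserves, crude price
u = rng.normal(size=n)                          # demand shock (endogeneity source)
price = z @ np.array([0.6, 0.8]) + 0.5 * u + rng.normal(size=n)
demand = -0.4 * price + u + rng.normal(size=n)  # true elasticity: -0.4

def add_const(x):
    return np.column_stack([np.ones(len(x)), x])

# Stage 1: project the endogenous price on the instruments.
zc = add_const(z)
price_hat = zc @ np.linalg.lstsq(zc, price, rcond=None)[0]
# Stage 2: regress demand on the fitted price.
beta = np.linalg.lstsq(add_const(price_hat), demand, rcond=None)[0]
print(beta[1])  # close to -0.4, unlike naive OLS on the raw price
```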

  14. The Effect of Unequal Samples, Heterogeneity of Covariance Matrices, and Number of Variables on Discriminant Analysis Classification Tables and Related Statistics.

    Science.gov (United States)

    Spearing, Debra; Woehlke, Paula

    To assess the effect on discriminant analysis in terms of correct classification into two groups, the following parameters were systematically altered using Monte Carlo techniques: sample sizes; proportions of one group to the other; number of independent variables; and covariance matrices. The pairing of the off diagonals (or covariances) with…
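
    The study's Monte Carlo setup is easy to emulate in outline: draw two groups with unequal sizes and unequal covariance matrices, fit a discriminant function, and record the correct-classification rate. A minimal sketch (group sizes, dimensionality and covariance scaling are arbitrary stand-ins for the study's systematically varied parameters):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)

def trial(n1=50, n2=25, p=4, cov_scale=2.0):
    """One Monte Carlo run with unequal group sizes and unequal covariance
    matrices; returns the resubstitution correct-classification rate."""
    x1 = rng.multivariate_normal(np.zeros(p), np.eye(p), n1)
    x2 = rng.multivariate_normal(np.ones(p), cov_scale * np.eye(p), n2)
    x = np.vstack([x1, x2])
    y = np.r_[np.zeros(n1), np.ones(n2)]
    return LinearDiscriminantAnalysis().fit(x, y).score(x, y)

rates = [trial() for _ in range(200)]
print(np.mean(rates))   # average correct-classification rate over runs
```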

  15. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

    Directory of Open Access Journals (Sweden)

    Simon Boitard

    2016-03-01

    Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
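
    Of the two summary-statistic classes PopSizeABC uses, the folded allele frequency spectrum is the simpler to compute, and it needs neither phasing nor ancestral-allele information. A sketch from a raw 0/1 genotype matrix (shapes and data are illustrative):

```python
import numpy as np

def folded_afs(genotypes):
    """Folded AFS from a (sites x haploid samples) 0/1 matrix: counts of
    minor-allele frequency classes, computable from unpolarized SNP data."""
    n = genotypes.shape[1]
    derived = genotypes.sum(axis=1)
    minor = np.minimum(derived, n - derived)   # fold: no ancestral allele needed
    return np.bincount(minor, minlength=n // 2 + 1)

sites = np.random.default_rng(2).integers(0, 2, size=(5000, 30))  # toy data
print(folded_afs(sites))
```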

  16. Investigating the Randomness of Numbers

    Science.gov (United States)

    Pendleton, Kenn L.

    2009-01-01

    The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…
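
    One elementary building block of the kind of evaluation the article describes is a chi-square test of digit uniformity over a large sample; a compact version (the sample size and generator are arbitrary choices):

```python
import numpy as np
from scipy.stats import chisquare

draws = np.random.default_rng(3).integers(0, 10, size=100_000)
observed = np.bincount(draws, minlength=10)
stat, p = chisquare(observed)        # H0: all ten digits equally likely
print(stat, p)                       # large p -> no evidence against uniformity
```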

  17. High-throughput genotyping assay for the large-scale genetic characterization of Cryptosporidium parasites from human and bovine samples.

    Science.gov (United States)

    Abal-Fabeiro, J L; Maside, X; Llovo, J; Bello, X; Torres, M; Treviño, M; Moldes, L; Muñoz, A; Carracedo, A; Bartolomé, C

    2014-04-01

    The epidemiological study of human cryptosporidiosis requires the characterization of species and subtypes involved in human disease in large sample collections. Molecular genotyping is costly and time-consuming, making the implementation of low-cost, highly efficient technologies increasingly necessary. Here, we designed a protocol based on MALDI-TOF mass spectrometry for the high-throughput genotyping of a panel of 55 single nucleotide variants (SNVs) selected as markers for the identification of common gp60 subtypes of four Cryptosporidium species that infect humans. The method was applied to a panel of 608 human and 63 bovine isolates and the results were compared with control samples typed by Sanger sequencing. The method allowed the identification of species in 610 specimens (90·9%) and gp60 subtype in 605 (90·2%). It displayed excellent performance, with sensitivity and specificity values of 87·3 and 98·0%, respectively. Up to nine genotypes from four different Cryptosporidium species (C. hominis, C. parvum, C. meleagridis and C. felis) were detected in humans; the most common ones were C. hominis subtype Ib, and C. parvum IIa (61·3 and 28·3%, respectively). 96·5% of the bovine samples were typed as IIa. The method performs as well as the widely used Sanger sequencing and is more cost-effective and less time consuming.

  18. Sleep habits, insomnia, and daytime sleepiness in a large and healthy community-based sample of New Zealanders.

    Science.gov (United States)

    Wilsmore, Bradley R; Grunstein, Ronald R; Fransen, Marlene; Woodward, Mark; Norton, Robyn; Ameratunga, Shanthi

    2013-06-15

    To determine the relationship between sleep complaints, primary insomnia, excessive daytime sleepiness, and lifestyle factors in a large community-based sample. Cross-sectional study. Blood donor sites in New Zealand. 22,389 individuals aged 16-84 years volunteering to donate blood. N/A. A comprehensive self-administered questionnaire including personal demographics and validated questions assessing sleep disorders (snoring, apnea), sleep complaints (sleep quantity, sleep dissatisfaction), insomnia symptoms, excessive daytime sleepiness, mood, and lifestyle factors such as work patterns, smoking, alcohol, and illicit substance use. Additionally, direct measurements of height and weight were obtained. One in three participants reported […]. Excessive daytime sleepiness (in this relatively young, healthy sample) was associated with insomnia (odds ratio [OR] 1.75, 95% confidence interval [CI] 1.50 to 2.05), depression (OR 2.01, CI 1.74 to 2.32), and sleep disordered breathing (OR 1.92, CI 1.59 to 2.32). Long work hours, alcohol dependence, and rotating work shifts also increased the risk of daytime sleepiness. Even in this relatively young, healthy, non-clinical sample, sleep complaints and primary insomnia with subsequent excess daytime sleepiness were common. There were clear associations between many personal and lifestyle factors (such as depression, long work hours, alcohol dependence, and rotating shift work) and sleep problems or excessive daytime sleepiness.
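
    The odds ratios reported here come with 95% confidence intervals; the basic 2×2-table computation behind an OR and its interval is worth recalling. A sketch with made-up counts (the study's ORs are regression-based, so this is only the simplest version):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI from a 2x2 table [exposed cases a, exposed controls b,
    unexposed cases c, unexposed controls d], via the log-OR normal approximation."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

print(odds_ratio_ci(120, 380, 150, 850))  # hypothetical counts
```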

  19. SVA retrotransposon insertion-associated deletion represents a novel mutational mechanism underlying large genomic copy number changes with non-recurrent breakpoints

    Science.gov (United States)

    2014-01-01

    Background Genomic disorders are caused by copy number changes that may exhibit recurrent breakpoints processed by nonallelic homologous recombination. However, region-specific disease-associated copy number changes have also been observed which exhibit non-recurrent breakpoints. The mechanisms underlying these non-recurrent copy number changes have not yet been fully elucidated. Results We analyze large NF1 deletions with non-recurrent breakpoints as a model to investigate the full spectrum of causative mechanisms, and observe that they are mediated by various DNA double strand break repair mechanisms, as well as aberrant replication. Further, two of the 17 NF1 deletions with non-recurrent breakpoints, identified in unrelated patients, occur in association with the concomitant insertion of SINE/variable number of tandem repeats/Alu (SVA) retrotransposons at the deletion breakpoints. The respective breakpoints are refractory to analysis by standard breakpoint-spanning PCRs and are only identified by means of optimized PCR protocols designed to amplify across GC-rich sequences. The SVA elements are integrated within SUZ12P intron 8 in both patients, and were mediated by target-primed reverse transcription of SVA mRNA intermediates derived from retrotranspositionally active source elements. Both SVA insertions occurred during early postzygotic development and are uniquely associated with large deletions of 1 Mb and 867 kb, respectively, at the insertion sites. Conclusions Since active SVA elements are abundant in the human genome and the retrotranspositional activity of many SVA source elements is high, SVA insertion-associated large genomic deletions encompassing many hundreds of kilobases could constitute a novel and as yet under-appreciated mechanism underlying large-scale copy number changes in the human genome. PMID:24958239

  20. Comparison of blood RNA isolation methods from samples stabilized in Tempus tubes and stored at a large human biobank.

    Science.gov (United States)

    Aarem, Jeanette; Brunborg, Gunnar; Aas, Kaja K; Harbak, Kari; Taipale, Miia M; Magnus, Per; Knudsen, Gun Peggy; Duale, Nur

    2016-09-01

    More than 50,000 adult and cord blood samples were collected in Tempus tubes and stored at the Norwegian Institute of Public Health Biobank for future use. In this study, we systematically evaluated and compared five blood-RNA isolation protocols: three optimized for simultaneous isolation of all blood-RNA species (MagMAX RNA Isolation Kit, both manual and semi-automated protocols, and the Norgen Preserved Blood RNA kit I), and two optimized for large RNAs only (Tempus Spin RNA, and the Tempus 6-port isolation kit). We estimated the following parameters: RNA quality, RNA yield, processing time, cost per sample, and RNA transcript stability of six selected mRNAs and 13 miRNAs using real-time qPCR. Whole blood samples from adults (n = 59 tubes) and umbilical cord blood samples (n = 18 tubes) collected in Tempus tubes were analyzed. High-quality blood RNAs with average RIN values above seven were extracted using all five RNA isolation protocols. The transcript levels of the six selected genes showed minimal variation between the five protocols. Unexplained differences in the transcript levels of the 13 miRNAs were observed; however, the 13 miRNAs had the same expression direction and were within the same order of magnitude. Some differences in RNA processing time and cost were noted. Sufficient amounts of high-quality RNA were obtained using all five protocols, and the Tempus blood RNA system therefore does not appear to depend on one specific RNA isolation method.