WorldWideScience

Sample records for large unbiased sample

  1. Unbiased Sampling and Meshing of Isosurfaces

    KAUST Repository

    Yan, Dongming

    2014-05-07

    In this paper, we present a new technique to generate unbiased samples on isosurfaces. An isosurface, F(x,y,z) = c , of a function, F , is implicitly defined by trilinear interpolation of background grid points. The key idea of our approach is that of treating the isosurface within a grid cell as a graph (height) function in one of the three coordinate axis directions, restricted to where the slope is not too high, and integrating / sampling from each of these three. We use this unbiased sampling algorithm for applications in Monte Carlo integration, Poisson-disk sampling, and isosurface meshing.
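
    A minimal sketch of the per-cell idea, assuming a toy height function in place of the paper's trilinear patches: wherever the surface can be written as z = g(x, y) with bounded slope, sampling uniformly with respect to surface area follows from rejection against the area element sqrt(1 + g_x^2 + g_y^2). The names here (sample_graph_patch, the toy g) are illustrative, not from the paper.

```python
import numpy as np

def sample_graph_patch(g, grad_g, n, slope_max=2.0, seed=0):
    """Uniform-area rejection sampling of a surface patch z = g(x, y) over the
    unit square, restricted to where the slope is bounded (a toy stand-in for
    the per-cell step described in the abstract)."""
    rng = np.random.default_rng(seed)
    bound = np.sqrt(1.0 + 2.0 * slope_max**2)   # upper bound on the area element
    pts = []
    while len(pts) < n:
        x, y, u = rng.random(3)
        gx, gy = grad_g(x, y)
        if max(abs(gx), abs(gy)) > slope_max:
            continue                            # such regions go to another axis
        area = np.sqrt(1.0 + gx**2 + gy**2)     # dA of a height function
        if u * bound < area:                    # accept proportionally to dA
            pts.append((x, y, g(x, y)))
    return np.array(pts)

# toy height function standing in for a trilinear isosurface patch
g = lambda x, y: 0.3 * np.sin(3 * x) * np.cos(2 * y)
dg = lambda x, y: (0.9 * np.cos(3 * x) * np.cos(2 * y),
                   -0.6 * np.sin(3 * x) * np.sin(2 * y))
print(sample_graph_patch(g, dg, 5))
```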

  2. Unbiased Sampling and Meshing of Isosurfaces

    KAUST Repository

    Yan, Dongming; Wallner, Johannes; Wonka, Peter

    2014-01-01

    In this paper, we present a new technique to generate unbiased samples on isosurfaces. An isosurface, F(x,y,z) = c , of a function, F , is implicitly defined by trilinear interpolation of background grid points. The key idea of our approach is that of treating the isosurface within a grid cell as a graph (height) function in one of the three coordinate axis directions, restricted to where the slope is not too high, and integrating / sampling from each of these three. We use this unbiased sampling algorithm for applications in Monte Carlo integration, Poisson-disk sampling, and isosurface meshing.

  3. A new unbiased stochastic derivative estimator for discontinuous sample performances with structural parameters

    NARCIS (Netherlands)

    Peng, Yijie; Fu, Michael C.; Hu, Jian Qiang; Heidergott, Bernd

    In this paper, we propose a new unbiased stochastic derivative estimator in a framework that can handle discontinuous sample performances with structural parameters. This work extends the three most popular unbiased stochastic derivative estimators: (1) infinitesimal perturbation analysis (IPA), (2)

  4. Triangulation based inclusion probabilities: a design-unbiased sampling approach

    OpenAIRE

    Fehrmann, Lutz; Gregoire, Timothy; Kleinn, Christoph

    2011-01-01

    A probabilistic sampling approach for design-unbiased estimation of area-related quantitative characteristics of spatially dispersed population units is proposed. The developed field protocol includes a fixed number of 3 units per sampling location and is based on partial triangulations over their natural neighbors to derive the individual inclusion probabilities. The performance of the proposed design is tested in comparison to fixed area sample plots in a simulation with two forest stands. ...

  5. An unbiased estimator of the variance of simple random sampling using mixed random-systematic sampling

    OpenAIRE

    Padilla, Alberto

    2009-01-01

    Systematic sampling is a commonly used technique due to its simplicity and ease of implementation. The drawback of this simplicity is that it is not possible to estimate the design variance without bias. There are several ways to circumvent this problem. One method is to suppose that the variable of interest has a random order in the population, so the sample variance of simple random sampling without replacement is used. By means of a mixed random-systematic sample, an unbiased estimator o...

  6. Critical point relascope sampling for unbiased volume estimation of downed coarse woody debris

    Science.gov (United States)

    Jeffrey H. Gove; Michael S. Williams; Mark J. Ducey; Mark J. Ducey

    2005-01-01

    Critical point relascope sampling is developed and shown to be design-unbiased for the estimation of log volume when used with point relascope sampling for downed coarse woody debris. The method is closely related to critical height sampling for standing trees when trees are first sampled with a wedge prism. Three alternative protocols for determining the critical...

  7. A review of methods for sampling large airborne particles and associated radioactivity

    International Nuclear Information System (INIS)

    Garland, J.A.; Nicholson, K.W.

    1990-01-01

    Radioactive particles, tens of μm or more in diameter, are unlikely to be emitted directly from nuclear facilities with exhaust gas cleansing systems, but may arise in the case of an accident or where resuspension from contaminated surfaces is significant. Such particles may dominate deposition and, according to some workers, may contribute to inhalation doses. Quantitative sampling of large airborne particles is difficult because of their inertia and large sedimentation velocities. The literature describes conditions for unbiased sampling and the magnitude of sampling errors for idealised sampling inlets in steady winds. However, few air samplers for outdoor use have been assessed for adequacy of sampling. Many size selective sampling methods are found in the literature but few are suitable at the low concentrations that are often encountered in the environment. A number of approaches for unbiased sampling of large particles have been found in the literature. Some are identified as meriting further study, for application in the measurement of airborne radioactivity. (author)

  8. Large-scale Reconstructions and Independent, Unbiased Clustering Based on Morphological Metrics to Classify Neurons in Selective Populations.

    Science.gov (United States)

    Bragg, Elise M; Briggs, Farran

    2017-02-15

    This protocol outlines large-scale reconstructions of neurons combined with the use of independent and unbiased clustering analyses to create a comprehensive survey of the morphological characteristics observed among a selective neuronal population. Combination of these techniques constitutes a novel approach for the collection and analysis of neuroanatomical data. Together, these techniques enable large-scale, and therefore more comprehensive, sampling of selective neuronal populations and establish unbiased quantitative methods for describing morphologically unique neuronal classes within a population. The protocol outlines the use of modified rabies virus to selectively label neurons. G-deleted rabies virus acts like a retrograde tracer following stereotaxic injection into a target brain structure of interest and serves as a vehicle for the delivery and expression of EGFP in neurons. Large numbers of neurons are infected using this technique and express GFP throughout their dendrites, producing "Golgi-like" complete fills of individual neurons. Accordingly, the virus-mediated retrograde tracing method improves upon traditional dye-based retrograde tracing techniques by producing complete intracellular fills. Individual well-isolated neurons spanning all regions of the brain area under study are selected for reconstruction in order to obtain a representative sample of neurons. The protocol outlines procedures to reconstruct cell bodies and complete dendritic arborization patterns of labeled neurons spanning multiple tissue sections. Morphological data, including positions of each neuron within the brain structure, are extracted for further analysis. Standard programming functions were utilized to perform independent cluster analyses and cluster evaluations based on morphological metrics. To verify the utility of these analyses, statistical evaluation of a cluster analysis performed on 160 neurons reconstructed in the thalamic reticular nucleus of the thalamus

  9. Verifying mixing in dilution tunnels How to ensure cookstove emissions samples are unbiased

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, Daniel L. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Rapp, Vi H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Caubel, Julien J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chen, Sharon S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Gadgil, Ashok J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-12-15

    A well-mixed diluted sample is essential for unbiased measurement of cookstove emissions. Most cookstove testing labs employ a dilution tunnel, also referred to as a “duct,” to mix clean dilution air with cookstove emissions before sampling. It is important that the emissions be well-mixed and unbiased at the sampling port so that instruments can take representative samples of the emission plume. Some groups have employed mixing baffles to ensure the gaseous and aerosol emissions from cookstoves are well-mixed before reaching the sampling location [2, 4]. The goal of these baffles is to dilute and mix the emissions stream with the room air entering the fume hood by creating a local zone of high turbulence. However, potential drawbacks of mixing baffles include increased flow resistance (larger blowers needed for the same exhaust flow), nuisance cleaning of baffles as soot collects, and, importantly, the potential for loss of PM2.5 particles on the baffles themselves, thus biasing results. A cookstove emission monitoring system with baffles will collect particles faster than the duct’s walls alone. This is mostly driven by the available surface area for deposition by processes of Brownian diffusion (through the boundary layer) and turbophoresis (i.e. impaction). The greater the surface area available for diffusive and advection-driven deposition to occur, the greater the particle loss will be at the sampling port. As a layer of larger-particle “fuzz” builds on the mixing baffles, even greater PM2.5 loss could occur. The microstructure of the deposited aerosol will lead to increased rates of particle loss by interception and a tendency for smaller particles to deposit due to impaction on small features of the microstructure. If the flow stream could be well-mixed without the need for baffles, these drawbacks could be avoided and the cookstove emissions sampling system would be more robust.

  10. Markovian description of unbiased polymer translocation

    International Nuclear Information System (INIS)

    Mondaini, Felipe; Moriconi, L.

    2012-01-01

    We perform, with the help of cloud computing resources, extensive Langevin simulations which provide compelling evidence in favor of a general Markovian framework for unbiased three-dimensional polymer translocation. Our statistical analysis consists of careful evaluations of (i) two-point correlation functions of the translocation coordinate and (ii) the empirical probabilities of complete polymer translocation (taken as a function of the initial number of monomers on a given side of the membrane). We find good agreement with predictions derived from the Markov chain approach recently addressed in the literature by the present authors. -- Highlights: ► We investigate unbiased polymer translocation through membrane pores. ► Large statistical ensembles have been produced with the help of cloud computing resources. ► We evaluate the two-point correlation function of the translocation coordinate. ► We evaluate empirical probabilities for complete polymer translocation. ► Unbiased polymer translocation is described as a Markov stochastic process.

  11. Markovian description of unbiased polymer translocation

    Energy Technology Data Exchange (ETDEWEB)

    Mondaini, Felipe [Instituto de Física, Universidade Federal do Rio de Janeiro, C.P. 68528, 21945-970 Rio de Janeiro, RJ (Brazil); Centro Federal de Educação Tecnológica Celso Suckow da Fonseca, UnED Angra dos Reis, Angra dos Reis, 23953-030, RJ (Brazil); Moriconi, L., E-mail: moriconi@if.ufrj.br [Instituto de Física, Universidade Federal do Rio de Janeiro, C.P. 68528, 21945-970 Rio de Janeiro, RJ (Brazil)

    2012-10-01

    We perform, with the help of cloud computing resources, extensive Langevin simulations which provide compelling evidence in favor of a general Markovian framework for unbiased three-dimensional polymer translocation. Our statistical analysis consists of careful evaluations of (i) two-point correlation functions of the translocation coordinate and (ii) the empirical probabilities of complete polymer translocation (taken as a function of the initial number of monomers on a given side of the membrane). We find good agreement with predictions derived from the Markov chain approach recently addressed in the literature by the present authors. -- Highlights: ► We investigate unbiased polymer translocation through membrane pores. ► Large statistical ensembles have been produced with the help of cloud computing resources. ► We evaluate the two-point correlation function of the translocation coordinate. ► We evaluate empirical probabilities for complete polymer translocation. ► Unbiased polymer translocation is described as a Markov stochastic process.

  12. Unbiased multi-fidelity estimate of failure probability of a free plane jet

    Science.gov (United States)

    Marques, Alexandre; Kramer, Boris; Willcox, Karen; Peherstorfer, Benjamin

    2017-11-01

    Estimating failure probability related to fluid flows is a challenge because it requires a large number of evaluations of expensive models. We address this challenge by leveraging multiple low-fidelity models of the flow dynamics to create an optimal unbiased estimator. In particular, we investigate the effects of uncertain inlet conditions on the width of a free plane jet. We classify a condition as failure when the corresponding jet width is below a small threshold, such that failure is a rare event (failure probability is smaller than 0.001). We estimate failure probability by combining the frameworks of multi-fidelity importance sampling and optimal fusion of estimators. Multi-fidelity importance sampling uses a low-fidelity model to explore the parameter space and create a biasing distribution. An unbiased estimate is then computed with a relatively small number of evaluations of the high-fidelity model. In the presence of multiple low-fidelity models, this framework offers multiple competing estimators. Optimal fusion combines all competing estimators into a single estimator with minimal variance. We show that this combined framework can significantly reduce the cost of estimating failure probabilities, and thus can have a large impact in fluid flow applications. This work was funded by DARPA.
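
    A hedged sketch of the two ingredients named in the abstract, with toy one-dimensional models standing in for the jet simulations: each low-fidelity model is used to fit a biasing distribution, a few reweighted high-fidelity evaluations give an unbiased importance-sampling estimate, and inverse-variance weighting fuses the competing estimators (the standard minimal-variance combination of independent unbiased estimators). All model and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def high_fidelity(x):                  # expensive model (toy stand-in)
    return x + 0.05 * np.sin(5 * x)

def low_fidelity_a(x):                 # cheap surrogate models (toy stand-ins)
    return x

def low_fidelity_b(x):
    return 0.95 * x

THRESH = -3.0                          # "failure": output below this threshold

def mfis_estimate(low_model, n_low=200_000, n_high=2_000):
    """Multi-fidelity importance sampling sketch: explore with the cheap
    model, fit a Gaussian biasing density to its failure region, then get an
    unbiased estimate from a few reweighted high-fidelity evaluations."""
    x = rng.normal(size=n_low)
    fail_x = x[low_model(x) < THRESH]
    mu_b, sd_b = fail_x.mean(), fail_x.std() + 0.1
    z = rng.normal(mu_b, sd_b, size=n_high)
    nominal = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    biasing = np.exp(-0.5 * ((z - mu_b) / sd_b) ** 2) / (np.sqrt(2 * np.pi) * sd_b)
    ind = (high_fidelity(z) < THRESH) * nominal / biasing
    return ind.mean(), ind.var(ddof=1) / n_high   # estimate and its variance

estimates = [mfis_estimate(m) for m in (low_fidelity_a, low_fidelity_b)]
inv_var = np.array([1.0 / v for _, v in estimates])
weights = inv_var / inv_var.sum()                 # minimal-variance fusion
fused = sum(w * p for w, (p, _) in zip(weights, estimates))
print(f"fused failure probability ~ {fused:.2e}")
```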

  13. Interplanetary scintillation observations of an unbiased sample of 90 Ooty occultation radio sources at 326.5 MHz

    International Nuclear Information System (INIS)

    Banhatti, D.G.; Ananthakrishnan, S.

    1989-01-01

    We present 327-MHz interplanetary scintillation (IPS) observations of an unbiased sample of 90 extragalactic radio sources selected from the ninth Ooty lunar occultation list. The sources are brighter than 0.75 Jy at 327 MHz and lie outside the galactic plane. We derive the values of the fraction of scintillating flux density and the equivalent Gaussian diameter for the scintillating structure. Various correlations are found between the observed parameters. In particular, the scintillating component weakens and broadens with increasing largest angular size, and stronger scintillators have more compact scintillating components. (author)

  14. Entanglement in mutually unbiased bases

    Energy Technology Data Exchange (ETDEWEB)

    Wiesniak, M; Zeilinger, A [Vienna Center for Quantum Science and Technology (VCQ), Faculty of Physics, University of Vienna, Boltzmanngasse 5, 1090 Vienna (Austria); Paterek, T, E-mail: tomasz.paterek@nus.edu.sg [Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, 117543 Singapore (Singapore)

    2011-05-15

    One of the essential features of quantum mechanics is that most pairs of observables cannot be measured simultaneously. This phenomenon manifests itself most strongly when observables are related to mutually unbiased bases. In this paper, we shed some light on the connection between mutually unbiased bases and another essential feature of quantum mechanics, quantum entanglement. It is shown that a complete set of mutually unbiased bases of a bipartite system contains a fixed amount of entanglement, independent of the choice of the set. This has implications for entanglement distribution among the states of a complete set. In prime-squared dimensions we present an explicit experiment-friendly construction of a complete set with a particularly simple entanglement distribution. Finally, we describe the basic properties of mutually unbiased bases composed of product states only. The constructions are illustrated with explicit examples in low dimensions. We believe that the properties of entanglement in mutually unbiased bases may be one of the ingredients to be taken into account to settle the question of the existence of complete sets. We also expect that they will be relevant to applications of bases in the experimental realization of quantum protocols in higher-dimensional Hilbert spaces.

  15. Unbiased tensor-based morphometry: improved robustness and sample size estimates for Alzheimer's disease clinical trials.

    Science.gov (United States)

    Hua, Xue; Hibar, Derrek P; Ching, Christopher R K; Boyle, Christina P; Rajagopalan, Priya; Gutman, Boris A; Leow, Alex D; Toga, Arthur W; Jack, Clifford R; Harvey, Danielle; Weiner, Michael W; Thompson, Paul M

    2013-02-01

    Various neuroimaging measures are being evaluated for tracking Alzheimer's disease (AD) progression in therapeutic trials, including measures of structural brain change based on repeated scanning of patients with magnetic resonance imaging (MRI). Methods to compute brain change must be robust to scan quality. Biases may arise if any scans are thrown out, as this can lead to the true changes being overestimated or underestimated. Here we analyzed the full MRI dataset from the first phase of Alzheimer's Disease Neuroimaging Initiative (ADNI-1) and assessed several sources of bias that can arise when tracking brain changes with structural brain imaging methods, as part of a pipeline for tensor-based morphometry (TBM). In all healthy subjects who completed MRI scanning at screening, 6, 12, and 24 months, brain atrophy was essentially linear with no detectable bias in longitudinal measures. In power analyses for clinical trials based on these change measures, only 39 AD patients and 95 mild cognitive impairment (MCI) subjects were needed for a 24-month trial to detect a 25% reduction in the average rate of change using a two-sided test (α=0.05, power=80%). Further sample size reductions were achieved by stratifying the data into Apolipoprotein E (ApoE) ε4 carriers versus non-carriers. We show how selective data exclusion affects sample size estimates, motivating an objective comparison of different analysis techniques based on statistical power and robustness. TBM is an unbiased, robust, high-throughput imaging surrogate marker for large, multi-site neuroimaging studies and clinical trials of AD and MCI. Copyright © 2012 Elsevier Inc. All rights reserved.
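
    The quoted sample sizes follow from the standard two-arm power calculation for detecting a fractional slowing of the mean rate of change; a sketch with hypothetical effect-size inputs (the ADNI-derived means and standard deviations are not reproduced here):

```python
from scipy.stats import norm

def n_per_arm(mean_change, sd_change, slowing=0.25, alpha=0.05, power=0.80):
    """Generic two-arm sample size for detecting a `slowing` x 100% reduction
    in the mean 24-month change with a two-sided test."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    delta = slowing * mean_change          # detectable difference in means
    return 2 * (z * sd_change / delta) ** 2

# hypothetical 24-month atrophy summary statistics, for illustration only
print(round(n_per_arm(mean_change=2.0, sd_change=1.0)))
```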

  16. Software engineering the mixed model for genome-wide association studies on large samples.

    Science.gov (United States)

    Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J

    2009-11-01

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.

  17. Unbiased tensor-based morphometry: Improved robustness and sample size estimates for Alzheimer’s disease clinical trials

    Science.gov (United States)

    Hua, Xue; Hibar, Derrek P.; Ching, Christopher R.K.; Boyle, Christina P.; Rajagopalan, Priya; Gutman, Boris A.; Leow, Alex D.; Toga, Arthur W.; Jack, Clifford R.; Harvey, Danielle; Weiner, Michael W.; Thompson, Paul M.

    2013-01-01

    Various neuroimaging measures are being evaluated for tracking Alzheimer’s disease (AD) progression in therapeutic trials, including measures of structural brain change based on repeated scanning of patients with magnetic resonance imaging (MRI). Methods to compute brain change must be robust to scan quality. Biases may arise if any scans are thrown out, as this can lead to the true changes being overestimated or underestimated. Here we analyzed the full MRI dataset from the first phase of Alzheimer’s Disease Neuroimaging Initiative (ADNI-1) and assessed several sources of bias that can arise when tracking brain changes with structural brain imaging methods, as part of a pipeline for tensor-based morphometry (TBM). In all healthy subjects who completed MRI scanning at screening, 6, 12, and 24 months, brain atrophy was essentially linear with no detectable bias in longitudinal measures. In power analyses for clinical trials based on these change measures, only 39 AD patients and 95 mild cognitive impairment (MCI) subjects were needed for a 24-month trial to detect a 25% reduction in the average rate of change using a two-sided test (α=0.05, power=80%). Further sample size reductions were achieved by stratifying the data into Apolipoprotein E (ApoE) ε4 carriers versus non-carriers. We show how selective data exclusion affects sample size estimates, motivating an objective comparison of different analysis techniques based on statistical power and robustness. TBM is an unbiased, robust, high-throughput imaging surrogate marker for large, multi-site neuroimaging studies and clinical trials of AD and MCI. PMID:23153970

  18. Nonlinear unbiased minimum-variance filter for Mars entry autonomous navigation under large uncertainties and unknown measurement bias.

    Science.gov (United States)

    Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua

    2018-05-01

    High-precision navigation algorithm is essential for the future Mars pinpoint landing mission. The unknown inputs caused by large uncertainties of atmospheric density and aerodynamic coefficients as well as unknown measurement biases may cause large estimation errors of conventional Kalman filters. This paper proposes a derivative-free version of nonlinear unbiased minimum variance filter for Mars entry navigation. This filter has been designed to solve this problem by estimating the state and unknown measurement biases simultaneously with derivative-free character, leading to a high-precision algorithm for the Mars entry navigation. IMU/radio beacons integrated navigation is introduced in the simulation, and the result shows that with or without radio blackout, our proposed filter could achieve an accurate state estimation, much better than the conventional unscented Kalman filter, showing the ability of high-precision Mars entry navigation algorithm. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  19. Circulating tumor cell detection: A direct comparison between negative and unbiased enrichment in lung cancer.

    Science.gov (United States)

    Xu, Yan; Liu, Biao; Ding, Fengan; Zhou, Xiaodie; Tu, Pin; Yu, Bo; He, Yan; Huang, Peilin

    2017-06-01

    Circulating tumor cells (CTCs), isolated as a 'liquid biopsy', may provide important diagnostic and prognostic information. Therefore, rapid, reliable and unbiased detection of CTCs is required for routine clinical analyses. It was demonstrated that negative enrichment, an epithelial marker-independent technique for isolating CTCs, exhibits a better efficiency in the detection of CTCs compared with positive enrichment techniques that only use specific anti-epithelial cell adhesion molecules. However, negative enrichment techniques incur significant cell loss during the isolation procedure, and as it is a method that uses only one type of antibody, it is inherently biased. The detection procedure and identification of cell types also rely on skilled and experienced technicians. In the present study, the detection sensitivity of negative enrichment and a previously described unbiased detection method was compared. The results revealed that unbiased detection methods may efficiently detect >90% of cancer cells in blood samples containing CTCs. By contrast, only 40-60% of CTCs were detected by negative enrichment. Additionally, CTCs were identified in >65% of patients with stage I/II lung cancer. This simple yet efficient approach may achieve a high level of sensitivity. It demonstrates a potential for the large-scale clinical implementation of CTC-based diagnostic and prognostic strategies.

  20. The proportionator: unbiased stereological estimation using biased automatic image analysis and non-uniform probability proportional to size sampling

    DEFF Research Database (Denmark)

    Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb

    2008-01-01

    ... the desired number of fields are sampled automatically with probability proportional to the weight and presented to the expert observer. Using any known stereological probe and estimator, the correct count in these fields leads to a simple, unbiased estimate of the total amount of structure in the sections examined, which in turn leads to any of the known stereological estimates, including size distributions and spatial distributions. The unbiasedness is not a function of the assumed relation between the weight and the structure, which is in practice always a biased relation from a stereological (integral geometric) point of view. The efficiency of the proportionator depends, however, directly on this relation to be positive. The sampling and estimation procedure is simulated in sections with characteristics and various kinds of noises in possibly realistic ranges. In all cases examined, the proportionator...
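
    A minimal sketch of the sampling scheme the abstract describes, under the usual probability-proportional-to-size (PPS) bookkeeping: fields are drawn with probability proportional to an automatic, possibly biased weight, and the expert's exact counts are divided by those probabilities (a Hansen-Hurwitz-style estimator), so the estimate of the total stays unbiased regardless of how good the weights are. Names and numbers are illustrative.

```python
import numpy as np

def proportionator(weights, true_counts, n_fields, seed=2):
    """PPS-with-replacement sketch: sample fields of view with probability
    proportional to an image-analysis weight, then reweight the expert's
    counts by 1/p so the estimate of the total remains unbiased."""
    rng = np.random.default_rng(seed)
    w = np.asarray(weights, dtype=float)
    p = w / w.sum()                            # selection probabilities
    idx = rng.choice(len(w), size=n_fields, p=p)
    return np.mean(true_counts[idx] / p[idx])  # unbiased estimate of the total

weights = np.array([5.0, 1.0, 0.5, 8.0, 2.0])       # automatic, possibly biased
counts = np.array([4, 1, 1, 7, 2], dtype=float)     # expert counts per field
print(proportionator(weights, counts, n_fields=3))  # estimates counts.sum() = 15
```

    The efficiency claim in the abstract corresponds to the variance of this estimator: it shrinks as the weights become more nearly proportional to the true counts, but unbiasedness never depends on that.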

  1. Potential-Decomposition Strategy in Markov Chain Monte Carlo Sampling Algorithms

    International Nuclear Information System (INIS)

    Shangguan Danhua; Bao Jingdong

    2010-01-01

    We introduce the potential-decomposition strategy (PDS), which can be used in Markov chain Monte Carlo sampling algorithms. PDS can be designed to make particles move in a modified potential that favors diffusion in phase space; then, by rejecting some trial samples, the target distributions can be sampled in an unbiased manner. Furthermore, if the accepted trial samples are insufficient, they can be recycled as initial states to form more unbiased samples. This strategy can greatly improve efficiency when the original potential has multiple metastable states separated by large barriers. We apply PDS to the 2D Ising model and a double-well potential model with a large barrier, demonstrating in these two representative examples that convergence is accelerated by orders of magnitude.
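
    A toy sketch of the PDS idea for the double-well example, under our illustrative assumption that the modified potential is simply half the original: a Metropolis chain diffuses on the flattened potential, and retained states are thinned by an extra rejection with probability exp(-(U - U_mod)), a valid correction here because U >= 0 bounds the ratio by one. This is a sketch of the general mechanism, not the paper's specific decomposition.

```python
import numpy as np

rng = np.random.default_rng(3)

U = lambda x: 8.0 * (x**2 - 1.0)**2      # double well with a large barrier
U_mod = lambda x: 0.5 * U(x)             # flattened potential: faster diffusion

def pds_samples(n_steps=200_000, step=0.5):
    """Metropolis chain on the flattened potential U_mod, thinned by an extra
    rejection with probability exp(-(U - U_mod)); retained states then follow
    exp(-U) (valid here because U >= 0 bounds the ratio by one)."""
    x, kept = 0.0, []
    for _ in range(n_steps):
        y = x + step * rng.normal()
        if rng.random() < np.exp(U_mod(x) - U_mod(y)):   # move on U_mod
            x = y
        if rng.random() < np.exp(U_mod(x) - U(x)):       # unbiasing rejection
            kept.append(x)
    return np.array(kept)

s = pds_samples()
print(len(s), s.mean())   # both wells are visited; mean near 0 by symmetry
```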

  2. Application of Singh et al., unbiased estimator in a dual to ratio-cum ...

    African Journals Online (AJOL)

    This paper applied an unbiased estimator in a dual to ratio-cum-product estimator in sample surveys to a double sampling design. Its efficiency over the conventional biased double sampling design estimator was determined based on the conditions attached to its supremacy. Three different data sets were used to testify to ...

  3. Mutually unbiased bases and semi-definite programming

    Energy Technology Data Exchange (ETDEWEB)

    Brierley, Stephen; Weigert, Stefan, E-mail: steve.brierley@ulb.ac.be, E-mail: stefan.weigert@york.ac.uk

    2010-11-01

    A complex Hilbert space of dimension six supports at least three but not more than seven mutually unbiased bases. Two computer-aided analytical methods to tighten these bounds are reviewed, based on a discretization of parameter space and on Groebner bases. A third algorithmic approach is presented: the non-existence of more than three mutually unbiased bases in composite dimensions can be decided by a global optimization method known as semidefinite programming. The method is used to confirm that the spectral matrix cannot be part of a complete set of seven mutually unbiased bases in dimension six.

  4. Mutually unbiased bases and semi-definite programming

    International Nuclear Information System (INIS)

    Brierley, Stephen; Weigert, Stefan

    2010-01-01

    A complex Hilbert space of dimension six supports at least three but not more than seven mutually unbiased bases. Two computer-aided analytical methods to tighten these bounds are reviewed, based on a discretization of parameter space and on Groebner bases. A third algorithmic approach is presented: the non-existence of more than three mutually unbiased bases in composite dimensions can be decided by a global optimization method known as semidefinite programming. The method is used to confirm that the spectral matrix cannot be part of a complete set of seven mutually unbiased bases in dimension six.

  5. Random sampling of elementary flux modes in large-scale metabolic networks.

    Science.gov (United States)

    Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel

    2012-09-15

    The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well distributed sample, that is representative of the complete set of EMs, should be suitable to most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. dmachado@deb.uminho.pt Supplementary data are available at Bioinformatics online.
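
    The equal-probability filtering step described above can be realized with classic reservoir sampling, which keeps k items from a stream with equal inclusion probability without materialising the full set; a sketch (the function name is ours, not from the emsampler code):

```python
import random

def uniform_subset(stream, k, rng=random.Random(4)):
    """Reservoir sampling (Algorithm R): keep k items from a stream, each
    with the same inclusion probability, without storing the whole stream."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = rng.randint(0, i)     # uniform on 0..i inclusive
            if j < k:
                reservoir[j] = item
    return reservoir

print(uniform_subset(range(10_000), k=5))
```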

  6. Mutually unbiased bases

    Indian Academy of Sciences (India)

    Mutually unbiased bases play an important role in quantum cryptography [2] and in the optimal determination of the density operator of an ensemble [3,4]. A density operator ρ in N dimensions depends on N^2 - 1 real quantities. With the help of MUBs, any such density operator can be encoded, in an optimal way, in terms of ...
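
    For reference, the defining property behind these applications: two orthonormal bases of an N-dimensional Hilbert space are mutually unbiased when every cross-basis overlap has the same modulus, which is what makes the N+1 MUB measurement distributions (when they exist) an optimal encoding of the N^2 - 1 parameters of ρ:

```latex
% Bases {|e_i>} and {|f_j>} of C^N are mutually unbiased iff
\[
  \bigl|\langle e_i \mid f_j \rangle\bigr|^{2} \;=\; \frac{1}{N},
  \qquad 1 \le i, j \le N .
\]
```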

  7. UNBIASED ESTIMATORS OF SPECIFIC CONNECTIVITY

    Directory of Open Access Journals (Sweden)

    Jean-Paul Jernot

    2011-05-01

    Full Text Available This paper deals with the estimation of the specific connectivity of a stationary random set in IRd. It turns out that the "natural" estimator is only asymptotically unbiased. The example of a boolean model of hypercubes illustrates the amplitude of the bias produced when the measurement field is relatively small with respect to the range of the random set. For that reason unbiased estimators are desired. Such an estimator can be found in the literature in the case where the measurement field is a right parallelotope. In this paper, this estimator is extended to apply to measurement fields of various shapes, and to possess a smaller variance. Finally an example from quantitative metallography (specific connectivity of a population of sintered bronze particles is given.

  8. Quantum process reconstruction based on mutually unbiased basis

    International Nuclear Information System (INIS)

    Fernandez-Perez, A.; Saavedra, C.; Klimov, A. B.

    2011-01-01

    We study a quantum process reconstruction based on the use of mutually unbiased projectors (MUB projectors) as input states for a D-dimensional quantum system, with D being a power of a prime number. This approach connects the results of quantum-state tomography using mutually unbiased bases with the coefficients of a quantum process, expanded in terms of MUB projectors. We also study the performance of the reconstruction scheme against random errors when measuring probabilities at the MUB projectors.

  9. Note on an Identity Between Two Unbiased Variance Estimators for the Grand Mean in a Simple Random Effects Model.

    Science.gov (United States)

    Levin, Bruce; Leu, Cheng-Shiun

    2013-01-01

    We demonstrate the algebraic equivalence of two unbiased variance estimators for the sample grand mean in a random sample of subjects from an infinite population where subjects provide repeated observations following a homoscedastic random effects model.
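
    A toy numerical check of one such identity in the balanced case (our illustration, not the paper's derivation): the variance of the grand mean estimated from the spread of subject means coincides algebraically with the estimate routed through the one-way ANOVA between-subject mean square.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 12, 4                          # subjects, repeated measures per subject
x = rng.normal(0, 1, (n, 1)) + rng.normal(0, 0.5, (n, m))  # random effects data

means = x.mean(axis=1)                # per-subject means
v1 = means.var(ddof=1) / n            # Var(grand mean) via subject means
ms_between = m * means.var(ddof=1)    # one-way ANOVA between-subject mean square
v2 = ms_between / (n * m)             # the same quantity via ANOVA bookkeeping
assert np.isclose(v1, v2)
print(v1, v2)
```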

  10. Experimental studies of unbiased gluon jets from $e^{+}e^{-}$ annihilations using the jet boost algorithm

    CERN Document Server

    Abbiendi, G.; Akesson, P.F.; Alexander, G.; Allison, John; Amaral, P.; Anagnostou, G.; Anderson, K.J.; Arcelli, S.; Asai, S.; Axen, D.; Azuelos, G.; Bailey, I.; Barberio, E.; Barlow, R.J.; Batley, R.J.; Bechtle, P.; Behnke, T.; Bell, Kenneth Watson; Bell, P.J.; Bella, G.; Bellerive, A.; Benelli, G.; Bethke, S.; Biebel, O.; Boeriu, O.; Bock, P.; Boutemeur, M.; Braibant, S.; Brigliadori, L.; Brown, Robert M.; Buesser, K.; Burckhart, H.J.; Campana, S.; Carnegie, R.K.; Caron, B.; Carter, A.A.; Carter, J.R.; Chang, C.Y.; Charlton, David G.; Csilling, A.; Cuffiani, M.; Dado, S.; De Roeck, A.; De Wolf, E.A.; Desch, K.; Dienes, B.; Donkers, M.; Dubbert, J.; Duchovni, E.; Duckeck, G.; Duerdoth, I.P.; Etzion, E.; Fabbri, F.; Feld, L.; Ferrari, P.; Fiedler, F.; Fleck, I.; Ford, M.; Frey, A.; Furtjes, A.; Gagnon, P.; Gary, John William; Gaycken, G.; Geich-Gimbel, C.; Giacomelli, G.; Giacomelli, P.; Giunta, Marina; Goldberg, J.; Gross, E.; Grunhaus, J.; Gruwe, M.; Gunther, P.O.; Gupta, A.; Hajdu, C.; Hamann, M.; Hanson, G.G.; Harder, K.; Harel, A.; Harin-Dirac, M.; Hauschild, M.; Hawkes, C.M.; Hawkings, R.; Hemingway, R.J.; Hensel, C.; Herten, G.; Heuer, R.D.; Hill, J.C.; Hoffman, Kara Dion; Horvath, D.; Igo-Kemenes, P.; Ishii, K.; Jeremie, H.; Jovanovic, P.; Junk, T.R.; Kanaya, N.; Kanzaki, J.; Karapetian, G.; Karlen, D.; Kawagoe, K.; Kawamoto, T.; Keeler, R.K.; Kellogg, R.G.; Kennedy, B.W.; Kim, D.H.; Klein, K.; Klier, A.; Kluth, S.; Kobayashi, T.; Kobel, M.; Komamiya, S.; Kormos, Laura L.; Kramer, T.; Krieger, P.; von Krogh, J.; Kruger, K.; Kuhl, T.; Kupper, M.; Lafferty, G.D.; Landsman, H.; Lanske, D.; Layter, J.G.; Leins, A.; Lellouch, D.; Letts, J.; Levinson, L.; Lillich, J.; Lloyd, S.L.; Loebinger, F.K.; Lu, J.; Ludwig, J.; Macpherson, A.; Mader, W.; Marcellini, S.; Martin, A.J.; Masetti, G.; Mashimo, T.; Mattig, Peter; McDonald, W.J.; McKenna, J.; McMahon, T.J.; McPherson, R.A.; Meijers, F.; Menges, W.; Merritt, F.S.; Mes, H.; Michelini, A.; Mihara, S.; Mikenberg, G.; Miller, D.J.; Moed, S.; Mohr, W.; Mori, T.; Mutter, A.; Nagai, K.; Nakamura, I.; Nanjo, H.; Neal, H.A.; Nisius, R.; O'Neale, S.W.; Oh, A.; Okpara, A.; Oreglia, M.J.; Orito, S.; Pahl, C.; Pasztor, G.; Pater, J.R.; Patrick, G.N.; Pilcher, J.E.; Pinfold, J.; Plane, David E.; Poli, B.; Polok, J.; Pooth, O.; Przybycien, M.; Quadt, A.; Rabbertz, K.; Rembser, C.; Renkel, P.; Rick, H.; Roney, J.M.; Rosati, S.; Rozen, Y.; Runge, K.; Sachs, K.; Saeki, T.; Sarkisyan, E.K.G.; Schaile, A.D.; Schaile, O.; Scharff-Hansen, P.; Schieck, J.; Schoerner-Sadenius, Thomas; Schroder, Matthias; Schumacher, M.; Schwick, C.; Scott, W.G.; Seuster, R.; Shears, T.G.; Shen, B.C.; Sherwood, P.; Siroli, G.; Skuja, A.; Smith, A.M.; Sobie, R.; Soldner-Rembold, S.; Spano, F.; Stahl, A.; Stephens, K.; Strom, David M.; Strohmer, R.; Tarem, S.; Tasevsky, M.; Taylor, R.J.; Teuscher, R.; Thomson, M.A.; Torrence, E.; Toya, D.; Tran, P.; Trigger, I.; Trocsanyi, Z.; Tsur, E.; Turner-Watson, M.F.; Ueda, I.; Ujvari, B.; Vollmer, C.F.; Vannerem, P.; Vertesi, R.; Verzocchi, M.; Voss, H.; Vossebeld, J.; Waller, D.; Ward, C.P.; Ward, D.R.; Warsinsky, M.; Watkins, P.M.; Watson, A.T.; Watson, N.K.; Wells, P.S.; Wengler, T.; Wermes, N.; Wetterling, D.; Wilson, G.W.; Wilson, J.A.; Wolf, G.; Wyatt, T.R.; Yamashita, S.; Zer-Zion, D.; Zivkovic, Lidija

    2004-01-01

    We present the first experimental results based on the jet boost algorithm, a technique to select unbiased samples of gluon jets in e+e- annihilations, i.e. gluon jets free of biases introduced by event selection or jet finding criteria. Our results are derived from hadronic Z0 decays observed with the OPAL detector at the LEP e+e- collider at CERN. First, we test the boost algorithm through studies with Herwig Monte Carlo events and find that it provides accurate measurements of the charged particle multiplicity distributions of unbiased gluon jets for jet energies larger than about 5 GeV, and of the jet particle energy spectra (fragmentation functions) for jet energies larger than about 14 GeV. Second, we apply the boost algorithm to our data to derive unbiased measurements of the gluon jet multiplicity distribution for energies between about 5 and 18 GeV, and of the gluon jet fragmentation function at 14 and 18 GeV. In conjunction with our earlier results at 40 GeV, we then test QCD calculations for the en...

  11. Revisiting AFLP fingerprinting for an unbiased assessment of genetic structure and differentiation of taurine and zebu cattle

    Science.gov (United States)

    2014-01-01

    Background: Descendants from the extinct aurochs (Bos primigenius), taurine (Bos taurus) and zebu cattle (Bos indicus) were domesticated 10,000 years ago in Southwestern and Southern Asia, respectively, and colonized the world undergoing complex events of admixture and selection. Molecular data, in particular genome-wide single nucleotide polymorphism (SNP) markers, can complement historic and archaeological records to elucidate these past events. However, SNP ascertainment in cattle has been optimized for taurine breeds, imposing limitations to the study of diversity in zebu cattle. As amplified fragment length polymorphism (AFLP) markers are discovered and genotyped as the samples are assayed, this type of marker is free of ascertainment bias. In order to obtain unbiased assessments of genetic differentiation and structure in taurine and zebu cattle, we analyzed a dataset of 135 AFLP markers in 1,593 samples from 13 zebu and 58 taurine breeds, representing nine continental areas. Results: We found a geographical pattern of expected heterozygosity in European taurine breeds decreasing with the distance from the domestication centre, arguing against a large-scale introgression from European or African aurochs. Zebu cattle were found to be at least as diverse as taurine cattle. Western African zebu cattle were found to have diverged more from Indian zebu than South American zebu. Model-based clustering and ancestry informative markers analyses suggested that this is due to taurine introgression. Although a large part of South American zebu cattle also descend from taurine cows, we did not detect significant levels of taurine ancestry in these breeds, probably because of systematic backcrossing with zebu bulls. Furthermore, limited zebu introgression was found in Podolian taurine breeds in Italy. Conclusions: The assessment of cattle diversity reported here contributes an unbiased global view to genetic differentiation and structure of taurine and zebu cattle.

  12. Quantifying high dimensional entanglement with two mutually unbiased bases

    Directory of Open Access Journals (Sweden)

    Paul Erker

    2017-07-01

    We derive a framework for quantifying entanglement in multipartite and high dimensional systems using only correlations in two unbiased bases. We furthermore develop such bounds in cases where the second basis is not characterized beyond being unbiased, thus enabling entanglement quantification with minimal assumptions. Furthermore, we show that it is feasible to experimentally implement our method with readily available equipment and even conservative estimates of physical parameters.

  13. A novel SNP analysis method to detect copy number alterations with an unbiased reference signal directly from tumor samples

    Directory of Open Access Journals (Sweden)

    LaFramboise William A

    2011-01-01

    Background: Genomic instability in cancer leads to abnormal genome copy number alterations (CNA) as a mechanism underlying tumorigenesis. Using microarrays and other technologies, tumor CNA are detected by comparing tumor sample CN to normal reference sample CN. While advances in microarray technology have improved detection of copy number alterations, the increase in the number of measured signals, noise from array probes, variations in signal-to-noise ratio across batches and disparity across laboratories leads to significant limitations for the accurate identification of CNA regions when comparing tumor and normal samples. Methods: To address these limitations, we designed a novel "Virtual Normal" algorithm (VN), which allowed for construction of an unbiased reference signal directly from test samples within an experiment using any publicly available normal reference set as a baseline, thus eliminating the need for an in-lab normal reference set. Results: The algorithm was tested using an optimal, paired tumor/normal data set as well as previously uncharacterized pediatric malignant gliomas for which a normal reference set was not available. Using Affymetrix 250K Sty microarrays, we demonstrated improved signal-to-noise ratio and detected significant copy number alterations using the VN algorithm that were validated by independent PCR analysis of the target CNA regions. Conclusions: We developed and validated an algorithm to provide a virtual normal reference signal directly from tumor samples and minimize noise in the derivation of the raw CN signal. The algorithm reduces the variability of assays performed across different reagent and array batches, methods of sample preservation, multiple personnel, and among different laboratories. This approach may be valuable when matched normal samples are unavailable or the paired normal specimens have been subjected to variations in methods of preservation.
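
    A much-simplified sketch of deriving the reference from the test samples themselves, assuming a per-probe median across samples as the "virtual normal" baseline (the published VN algorithm is more elaborate; names and data here are illustrative):

```python
import numpy as np

def virtual_normal_log2ratio(intensities):
    """Sketch: copy-number log2 ratios against a reference built from the
    test samples themselves (per-probe median), so no matched normal is
    needed; a simplification of the VN idea."""
    x = np.asarray(intensities, dtype=float)      # shape: samples x probes
    reference = np.median(x, axis=0)              # robust per-probe baseline
    return np.log2(x / reference)

rng = np.random.default_rng(6)
probes = np.abs(rng.normal(2.0, 0.2, (8, 1000)))  # simulated probe intensities
probes[0, 100:200] *= 1.5                         # simulated gain in sample 0
ratios = virtual_normal_log2ratio(probes)
print(ratios[0, 100:200].mean(), ratios[0, 300:400].mean())
```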

  14. Unbiased diffusion of Brownian particles on disordered correlated potentials

    International Nuclear Information System (INIS)

    Salgado-Garcia, Raúl; Maldonado, Cesar

    2015-01-01

    In this work we study the diffusion of non-interacting overdamped particles, moving on unbiased disordered correlated potentials, subjected to Gaussian white noise. We obtain an exact expression for the diffusion coefficient which allows us to prove that the unbiased diffusion of overdamped particles on a random polymer does not depend on the correlations of the disordered potentials. This universal behavior of the unbiased diffusivity is a direct consequence of the validity of the Einstein relation and the decay of correlations of the random polymer. We test the independence on correlations of the diffusion coefficient for correlated polymers produced by two different stochastic processes, a one-step Markov chain and the expansion-modification system. Within the accuracy of our simulations, we found that the numerically obtained diffusion coefficient for these systems agree with the analytically calculated ones, confirming our predictions. (paper)

  15. Within-subject template estimation for unbiased longitudinal image analysis.

    Science.gov (United States)

    Reuter, Martin; Schmansky, Nicholas J; Rosas, H Diana; Fischl, Bruce

    2012-07-16

    Longitudinal image analysis has become increasingly important in clinical studies of normal aging and neurodegenerative disorders. Furthermore, there is a growing appreciation of the potential utility of longitudinally acquired structural images and reliable image processing to evaluate disease modifying therapies. Challenges have been related to the variability that is inherent in the available cross-sectional processing tools, to the introduction of bias in longitudinal processing and to potential over-regularization. In this paper we introduce a novel longitudinal image processing framework, based on unbiased, robust, within-subject template creation, for automatic surface reconstruction and segmentation of brain MRI of arbitrarily many time points. We demonstrate that it is essential to treat all input images exactly the same as removing only interpolation asymmetries is not sufficient to remove processing bias. We successfully reduce variability and avoid over-regularization by initializing the processing in each time point with common information from the subject template. The presented results show a significant increase in precision and discrimination power while preserving the ability to detect large anatomical deviations; as such they hold great potential in clinical applications, e.g. allowing for smaller sample sizes or shorter trials to establish disease specific biomarkers or to quantify drug effects. Copyright © 2012 Elsevier Inc. All rights reserved.

  16. An unbiased stereological method for efficiently quantifying the innervation of the heart and other organs based on total length estimations

    DEFF Research Database (Denmark)

    Mühlfeld, Christian; Papadakis, Tamara; Krasteva, Gabriela

    2010-01-01

    Quantitative information about the innervation is essential to analyze the structure-function relationships of organs. So far, there has been no unbiased stereological tool for this purpose. This study presents a new unbiased and efficient method to quantify the total length of axons in a given reference volume, illustrated on the left ventricle of the mouse heart. The method is based on the following steps: 1) estimation of the reference volume; 2) randomization of location and orientation using appropriate sampling techniques; 3) counting of nerve fiber profiles hit by a defined test area within...

  17. Unbiased Strain-Typing of Arbovirus Directly from Mosquitoes Using Nanopore Sequencing: A Field-forward Biosurveillance Protocol.

    Science.gov (United States)

    Russell, Joseph A; Campos, Brittany; Stone, Jennifer; Blosser, Erik M; Burkett-Cadena, Nathan; Jacobs, Jonathan L

    2018-04-03

    The future of infectious disease surveillance and outbreak response is trending towards smaller hand-held solutions for point-of-need pathogen detection. Here, samples of Culex cedecei mosquitoes collected in Southern Florida, USA were tested for Venezuelan Equine Encephalitis Virus (VEEV), a previously-weaponized arthropod-borne RNA-virus capable of causing acute and fatal encephalitis in animal and human hosts. A single 20-mosquito pool tested positive for VEEV by quantitative reverse transcription polymerase chain reaction (RT-qPCR) on the Biomeme two3. The virus-positive sample was subjected to unbiased metatranscriptome sequencing on the Oxford Nanopore MinION and shown to contain Everglades Virus (EVEV), an alphavirus in the VEEV serocomplex. Our results demonstrate, for the first time, the use of unbiased sequence-based detection and subtyping of a high-consequence biothreat pathogen directly from an environmental sample using field-forward protocols. The development and validation of methods designed for field-based diagnostic metagenomics and pathogen discovery, such as those suitable for use in mobile "pocket laboratories", will address a growing demand for public health teams to carry out their mission where it is most urgent: at the point-of-need.

  18. Mutually Unbiased Maximally Entangled Bases for the Bipartite System C^d ⊗ C^{d^k}

    Science.gov (United States)

    Nan, Hua; Tao, Yuan-Hong; Wang, Tian-Jiao; Zhang, Jun

    2016-10-01

    The construction of maximally entangled bases for the bipartite system C^d ⊗ C^d is discussed firstly, and some mutually unbiased bases with maximally entangled bases are given, where 2 ≤ d ≤ 5. Moreover, we study a systematic way of constructing mutually unbiased maximally entangled bases for the bipartite system C^d ⊗ C^{d^k}.

  19. Efficient Unbiased Rendering using Enlightened Local Path Sampling

    DEFF Research Database (Denmark)

    Kristensen, Anders Wang

    ... The downside to using these algorithms is that they can be slow to converge. Due to the nature of Monte Carlo methods, the results are random variables subject to variance. This manifests itself as noise in the images, which can only be reduced by generating more samples. The reason these methods are slow is because of a lack of effective methods of importance sampling. Most global illumination algorithms are based on local path sampling, which is essentially a recipe for constructing random walks. Using this procedure paths are built based on information given explicitly as part of scene description... measurements, which are the solution to the adjoint light transport problem. The second is a representation of the distribution of radiance and importance in the scene. We also derive a new method of particle sampling, which is advantageous compared to existing methods. Together we call the resulting algorithm...

  20. Higher-dimensional orbital-angular-momentum-based quantum key distribution with mutually unbiased bases

    CSIR Research Space (South Africa)

    Mafu, M

    2013-09-01

    We present an experimental study of higher-dimensional quantum key distribution protocols based on mutually unbiased bases, implemented by means of photons carrying orbital angular momentum. We perform (d + 1) mutually unbiased measurements in a...

  1. Large sample neutron activation analysis of a reference inhomogeneous sample

    International Nuclear Information System (INIS)

    Vasilopoulou, T.; Athens National Technical University, Athens; Tzika, F.; Stamatelatos, I.E.; Koster-Ammerlaan, M.J.J.

    2011-01-01

    A benchmark experiment was performed for Neutron Activation Analysis (NAA) of a large inhomogeneous sample. The reference sample was developed in-house and consisted of an SiO2 matrix and an Al-Zn alloy 'inhomogeneity' body. Monte Carlo simulations were employed to derive appropriate correction factors for neutron self-shielding during irradiation as well as self-attenuation of gamma rays and sample geometry during counting. The large sample neutron activation analysis (LSNAA) results were compared against reference values and the trueness of the technique was evaluated. An agreement within ±10% was observed between LSNAA and reference elemental mass values, for all matrix and inhomogeneity elements except samarium, provided that the inhomogeneity body was fully simulated. However, in cases where the inhomogeneity was treated as not known, the results showed a reasonable agreement for most matrix elements, while large discrepancies were observed for the inhomogeneity elements. This study provided a quantification of the uncertainties associated with inhomogeneity in large sample analysis and contributed to the identification of the needs for future development of LSNAA facilities for analysis of inhomogeneous samples. (author)
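
    In outline, each derived correction enters multiplicatively when converting an apparent elemental mass into a corrected one; a hedged sketch of that bookkeeping with hypothetical factor names (not the paper's notation):

```python
def corrected_mass(apparent_mass, k_self_shielding, k_attenuation, k_geometry):
    """Hypothetical bookkeeping: each Monte Carlo-derived factor k is the
    ratio of the actual to the idealised response (k <= 1 when the effect
    suppresses counts), so dividing the apparent mass corrects the bias."""
    return apparent_mass / (k_self_shielding * k_attenuation * k_geometry)

print(corrected_mass(10.0, 0.95, 0.90, 0.98))   # illustrative numbers only
```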

  2. The ESO Diffuse Interstellar Bands Large Exploration Survey (EDIBLES) . I. Project description, survey sample, and quality assessment

    Science.gov (United States)

    Cox, Nick L. J.; Cami, Jan; Farhang, Amin; Smoker, Jonathan; Monreal-Ibero, Ana; Lallement, Rosine; Sarre, Peter J.; Marshall, Charlotte C. M.; Smith, Keith T.; Evans, Christopher J.; Royer, Pierre; Linnartz, Harold; Cordiner, Martin A.; Joblin, Christine; van Loon, Jacco Th.; Foing, Bernard H.; Bhatt, Neil H.; Bron, Emeric; Elyajouri, Meriem; de Koter, Alex; Ehrenfreund, Pascale; Javadi, Atefeh; Kaper, Lex; Khosroshadi, Habib G.; Laverick, Mike; Le Petit, Franck; Mulas, Giacomo; Roueff, Evelyne; Salama, Farid; Spaans, Marco

    2017-10-01

    The carriers of the diffuse interstellar bands (DIBs) are largely unidentified molecules ubiquitously present in the interstellar medium (ISM). After decades of study, two strong and possibly three weak near-infrared DIBs have recently been attributed to the C60^+ fullerene based on observational and laboratory measurements. There is great promise for the identification of the over 400 other known DIBs, as this result could provide chemical hints towards other possible carriers. In an effort to systematically study the properties of the DIB carriers, we have initiated a new large-scale observational survey: the ESO Diffuse Interstellar Bands Large Exploration Survey (EDIBLES). The main objective is to build on and extend existing DIB surveys to make a major step forward in characterising the physical and chemical conditions for a statistically significant sample of interstellar lines-of-sight, with the goal to reverse-engineer key molecular properties of the DIB carriers. EDIBLES is a filler Large Programme using the Ultraviolet and Visual Echelle Spectrograph at the Very Large Telescope at Paranal, Chile. It is designed to provide an observationally unbiased view of the presence and behaviour of the DIBs towards early-spectral-type stars whose lines-of-sight probe the diffuse-to-translucent ISM. Such a complete dataset will provide a deep census of the atomic and molecular content, physical conditions, chemical abundances and elemental depletion levels for each sightline. Achieving these goals requires a homogeneous set of high-quality data in terms of resolution (R ~ 70 000-100 000), sensitivity (S/N up to 1000 per resolution element), and spectral coverage (305-1042 nm), as well as a large sample size (100+ sightlines). In this first paper the goals, objectives and methodology of the EDIBLES programme are described and an initial assessment of the data is provided.

  3. Unbiased stereologic techniques for practical use in diagnostic histopathology

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt

    1995-01-01

    Grading of malignancy by the examination of morphologic and cytologic details in histologic sections from malignant neoplasms is based exclusively on qualitative features, associated with significant subjectivity, and thus rather poor reproducibility. The traditional way of malignancy grading may... by introducing quantitative techniques in the histopathologic discipline of malignancy grading. Unbiased stereologic methods, especially based on measurements of nuclear three-dimensional mean size, have during the last decade proved their value in this regard. In this survey, the methods are reviewed regarding the basic technique involved, sampling, efficiency, and reproducibility. Various types of cancers, where stereologic grading of malignancy has been used, are reviewed and discussed with regard to the development of a new objective and reproducible basis for carrying out prognosis-related malignancy grading...

  4. Large Sample Neutron Activation Analysis of Heterogeneous Samples

    International Nuclear Information System (INIS)

    Stamatelatos, I.E.; Vasilopoulou, T.; Tzika, F.

    2018-01-01

    A Large Sample Neutron Activation Analysis (LSNAA) technique was developed for non-destructive analysis of heterogeneous bulk samples. The technique incorporated collimated scanning and combined experimental measurements with Monte Carlo simulations for the identification of inhomogeneities in large volume samples and the correction of their effect on the interpretation of gamma-spectrometry data. Corrections were applied for the effects of neutron self-shielding, gamma-ray attenuation, geometrical factor and heterogeneous activity distribution within the sample. A benchmark experiment was performed to investigate the effect of heterogeneity on the accuracy of LSNAA. Moreover, a ceramic vase was analyzed as a whole, demonstrating the feasibility of the technique. The LSNAA results were compared against results obtained by INAA and a satisfactory agreement between the two methods was observed. This study showed that LSNAA is a technique capable of performing accurate non-destructive, multi-elemental compositional analysis of heterogeneous objects. It also revealed the great potential of the technique for the analysis of precious objects and artefacts that need to be preserved intact and cannot be damaged for sampling purposes. (author)

  5. High levels of absorption in orientation-unbiased, radio-selected 3CR Active Galaxies

    Science.gov (United States)

    Wilkes, Belinda J.; Haas, Martin; Barthel, Peter; Leipski, Christian; Kuraszkiewicz, Joanna; Worrall, Diana; Birkinshaw, Mark; Willner, Steven P.

    2014-08-01

    A critical problem in understanding active galaxies (AGN) is the separation of intrinsic physical differences from observed differences that are due to orientation. Obscuration of the active nucleus is anisotropic and strongly frequency dependent, leading to complex selection effects for observations in most wavebands. These can only be quantified using a sample that is sufficiently unbiased to test orientation effects. Low-frequency radio emission is one way to select a close-to orientation-unbiased sample, albeit limited to the minority of AGN with strong radio emission. Recent Chandra, Spitzer and Herschel observations combined with multi-wavelength data for a complete sample of high-redshift (1 < z < 2) sources show that half the sample is significantly obscured, with ratios of unobscured : Compton-thin (22 < log N_H < 24.2) : Compton-thick (log N_H > 24.2) = 2.5:1.4:1 in these high-luminosity (log L(0.3-8keV) ~ 44-46) sources. These ratios are consistent with current expectations based on modeling the Cosmic X-ray Background. A strong correlation with radio orientation constrains the geometry of the obscuring disk/torus to have a ~60 degree opening angle and ~12 degree Compton-thick cross-section. The deduced ~50% obscured fraction of the population contrasts with typical estimates of ~20% obscured in optically- and X-ray-selected high-luminosity samples. Once the primary nuclear emission is obscured, AGN X-ray spectra are frequently dominated by unobscured non-nuclear or scattered nuclear emission which cannot be distinguished from direct nuclear emission with a lower obscuration level unless high quality data is available. As a result, both the level of obscuration and the estimated intrinsic luminosities of highly-obscured AGN are likely to be significantly (x10-1000) underestimated for 25-50% of the population. This may explain the lower obscured fractions reported for optical and X-ray samples which have no independent measure of the AGN luminosity. Correcting AGN samples for these underestimated luminosities would result in ...

  6. Unbiased stereologic techniques for practical use in diagnostic histopathology

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt

    1995-01-01

    Grading of malignancy by the examination of morphologic and cytologic details in histologic sections from malignant neoplasms is based exclusively on qualitative features, associated with significant subjectivity, and thus rather poor reproducibility. The traditional way of malignancy grading may be improved by introducing quantitative techniques in the histopathologic discipline of malignancy grading. Unbiased stereologic methods, especially based on measurements of nuclear three-dimensional mean size, have during the last decade proved their value in this regard. In this survey, the methods are reviewed regarding the stereologic grading of malignancy in various types of solid tumors. This new, unbiased attitude to malignancy grading is associated with excellent virtues, which ultimately may help the clinician in the choice of optimal treatment of the individual patient suffering from cancer. Stereologic methods are not solely applicable to the field of malignancy...

  7. Hierarchical thinking in network biology: the unbiased modularization of biochemical networks.

    Science.gov (United States)

    Papin, Jason A; Reed, Jennifer L; Palsson, Bernhard O

    2004-12-01

    As reconstructed biochemical reaction networks continue to grow in size and scope, there is a growing need to describe the functional modules within them. Such modules facilitate the study of biological processes by deconstructing complex biological networks into conceptually simple entities. The definition of network modules is often based on intuitive reasoning. As an alternative, methods are being developed for defining biochemical network modules in an unbiased fashion. These unbiased network modules are mathematically derived from the structure of the whole network under consideration.

  8. Optimal sampling designs for large-scale fishery sample surveys in Greece

    Directory of Open Access Journals (Sweden)

    G. BAZIGOS

    2007-12-01

    The paper deals with the optimization of the following three large scale sample surveys: biological sample survey of commercial landings (BSCL), experimental fishing sample survey (EFSS), and commercial landings and effort sample survey (CLES).

  9. The New Peabody Picture Vocabulary Test-III: An Illusion of Unbiased Assessment?

    Science.gov (United States)

    Stockman, Ida J

    2000-10-01

    This article examines whether changes in the ethnic minority composition of the standardization sample for the latest edition of the Peabody Picture Vocabulary Test (PPVT-III, Dunn & Dunn, 1997) can be used as the sole explanation for children's better test scores when compared to an earlier edition, the Peabody Picture Vocabulary Test-Revised (PPVT-R, Dunn & Dunn, 1981). Results from a comparative analysis of these two test editions suggest that other factors may explain improved performances. Among these factors are the number of words and age levels sampled, the types of words and pictures used, and characteristics of the standardization sample other than its ethnic minority composition. This analysis also raises questions regarding the usefulness of converting scores from one edition to the other and the type of criteria that could be used to evaluate whether the PPVT-III is an unbiased test of vocabulary for children from diverse cultural and linguistic backgrounds.

  10. The large sample size fallacy.

    Science.gov (United States)

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
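
    The point of this record is easy to demonstrate in a few lines. The sketch below uses invented numbers (a trivial 0.02 SD effect and an arbitrarily large sample size) to show an extremely small p-value coexisting with a negligible effect size:

```python
# Large sample size fallacy: a trivial effect becomes "highly significant"
# once n is huge. All numbers here are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200_000                              # very large sample per group
a = rng.normal(0.00, 1.0, n)             # control group
b = rng.normal(0.02, 1.0, n)             # treatment: 0.02 SD effect (trivial)

t, p = stats.ttest_ind(a, b)
d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
print(f"p = {p:.2g} (extreme statistical significance)")
print(f"Cohen's d = {d:.3f} (no practical significance)")
```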

  11. About mutually unbiased bases in even and odd prime power dimensions

    Science.gov (United States)

    Durt, Thomas

    2005-06-01

    Mutually unbiased bases generalize the X, Y and Z qubit bases. They possess numerous applications in quantum information science. It is well known that in prime power dimensions N = p^m (with p prime and m a positive integer), there exists a maximal set of N + 1 mutually unbiased bases. In the present paper, we derive an explicit expression for those bases, in terms of the (operations of the) associated finite field (Galois division ring) of N elements. This expression is shown to be equivalent to the expressions previously obtained by Ivanovic (1981 J. Phys. A: Math. Gen. 14 3241) in odd prime dimensions, and Wootters and Fields (1989 Ann. Phys. 191 363) in odd prime power dimensions. In even prime power dimensions, we derive a new explicit expression for the mutually unbiased bases. The new ingredients of our approach are, basically, the following: we provide a simple expression of the generalized Pauli group in terms of the additive characters of the field, and we derive an exact groupal composition law between the elements of the commuting subsets of the generalized Pauli group, renormalized by a well-chosen phase-factor.
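
    For readers who want to see such a maximal set concretely, the short sketch below builds the p + 1 mutually unbiased bases in an odd prime dimension using the standard quadratic (Ivanovic-type) construction and verifies the defining overlap condition; this is a small illustrative special case, not the finite-field expression derived in the paper:

```python
# Construct and verify p+1 MUBs in odd prime dimension p (here p = 5).
import itertools
import numpy as np

p = 5
w = np.exp(2j * np.pi / p)
bases = [np.eye(p, dtype=complex)]            # computational basis
for k in range(p):
    B = np.array([[w ** ((k * n * n + j * n) % p) for n in range(p)]
                  for j in range(p)]) / np.sqrt(p)
    bases.append(B)                           # rows are the basis vectors

# Mutual unbiasedness: |<u|v>|^2 = 1/p for vectors from different bases.
for (i, A), (j, B) in itertools.combinations(enumerate(bases), 2):
    assert np.allclose(np.abs(A.conj() @ B.T) ** 2, 1.0 / p), (i, j)
print(f"verified {len(bases)} mutually unbiased bases in dimension {p}")
```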

  12. About mutually unbiased bases in even and odd prime power dimensions

    International Nuclear Information System (INIS)

    Durt, Thomas

    2005-01-01

    Mutually unbiased bases generalize the X, Y and Z qubit bases. They possess numerous applications in quantum information science. It is well known that in prime power dimensions N = p^m (with p prime and m a positive integer), there exists a maximal set of N + 1 mutually unbiased bases. In the present paper, we derive an explicit expression for those bases, in terms of the (operations of the) associated finite field (Galois division ring) of N elements. This expression is shown to be equivalent to the expressions previously obtained by Ivanovic (1981 J. Phys. A: Math. Gen. 14 3241) in odd prime dimensions, and Wootters and Fields (1989 Ann. Phys. 191 363) in odd prime power dimensions. In even prime power dimensions, we derive a new explicit expression for the mutually unbiased bases. The new ingredients of our approach are, basically, the following: we provide a simple expression of the generalized Pauli group in terms of the additive characters of the field, and we derive an exact groupal composition law between the elements of the commuting subsets of the generalized Pauli group, renormalized by a well-chosen phase-factor.

  13. Sampling Large Graphs for Anticipatory Analytics

    Science.gov (United States)

    2015-05-15

    Random area sampling [8] is a "snowball" sampling method in which a set of random seed vertices are selected and areas around them are explored ... systems, greater human-in-the-loop involvement, or through complex algorithms. We are investigating the use of sampling to mitigate these challenges.

  14. Ridge regression estimator: combining unbiased and ordinary ridge regression methods of estimation

    Directory of Open Access Journals (Sweden)

    Sharad Damodar Gore

    2009-10-01

    Statistical literature has several methods for coping with multicollinearity. This paper introduces a new shrinkage estimator, called modified unbiased ridge (MUR). This estimator is obtained from unbiased ridge regression (URR) in the same way that ordinary ridge regression (ORR) is obtained from ordinary least squares (OLS). Properties of MUR are derived. Results on its matrix mean squared error (MMSE) are obtained. MUR is compared with ORR and URR in terms of MMSE. These results are illustrated with an example based on data generated by Hoerl and Kennard (1975).

  15. Practical characterization of large networks using neighborhood information

    KAUST Repository

    Wang, Pinghui

    2018-02-14

    Characterizing large complex networks such as online social networks through node querying is a challenging task. Network service providers often impose severe constraints on the query rate, hence limiting the sample size to a small fraction of the total network of interest. Various ad hoc subgraph sampling methods have been proposed, but many of them give biased estimates and no theoretical basis on the accuracy. In this work, we focus on developing sampling methods for large networks where querying a node also reveals partial structural information about its neighbors. Our methods are optimized for NoSQL graph databases (if the database can be accessed directly), or utilize Web APIs available on most major large networks for graph sampling. We show that our sampling method has provable convergence guarantees on being an unbiased estimator, and it is more accurate than state-of-the-art methods. We also explore methods to uncover shortest paths between a subset of nodes and detect high degree nodes by sampling only a small fraction of the network of interest. Our results demonstrate that utilizing neighborhood information yields methods that are two orders of magnitude faster than state-of-the-art methods.
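
    As a flavour of how crawling with neighborhood information supports unbiased estimation, the sketch below uses the generic degree-reweighted random walk (a self-normalized importance-sampling estimator). It is a stand-in under simplifying assumptions, not a reproduction of the authors' method; the graph and node attribute are invented:

```python
# Degree-corrected random-walk estimate of a node-attribute mean.
import random
import networkx as nx

G = nx.barabasi_albert_graph(10_000, 3, seed=1)   # stand-in for a real network
f = {v: v % 2 for v in G}                         # toy node attribute

def random_walk_estimate(G, f, steps=50_000, seed=2):
    rng = random.Random(seed)
    v = rng.choice(list(G))
    num = den = 0.0
    for _ in range(steps):
        v = rng.choice(list(G[v]))    # move to a uniformly chosen neighbor
        w = 1.0 / G.degree(v)         # stationary probability grows with degree
        num += w * f[v]               # reweight to undo the degree bias
        den += w
    return num / den                  # self-normalized, asymptotically unbiased

true_mean = sum(f.values()) / G.number_of_nodes()
print(f"estimate = {random_walk_estimate(G, f):.4f}, truth = {true_mean:.4f}")
```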

  16. Estimating Unbiased Land Cover Change Areas In The Colombian Amazon Using Landsat Time Series And Statistical Inference Methods

    Science.gov (United States)

    Arevalo, P. A.; Olofsson, P.; Woodcock, C. E.

    2017-12-01

    Unbiased estimation of the areas of conversion between land categories ("activity data") and their uncertainty is crucial for providing more robust calculations of carbon emissions to the atmosphere, as well as their removals. This is particularly important for the REDD+ mechanism of UNFCCC where an economic compensation is tied to the magnitude and direction of such fluxes. Dense time series of Landsat data and statistical protocols are becoming an integral part of forest monitoring efforts, but there are relatively few studies in the tropics focused on using these methods to advance operational MRV systems (Monitoring, Reporting and Verification). We present the results of a prototype methodology for continuous monitoring and unbiased estimation of activity data that is compliant with the IPCC Approach 3 for representation of land. We used a break detection algorithm (Continuous Change Detection and Classification, CCDC) to fit pixel-level temporal segments to time series of Landsat data in the Colombian Amazon. The segments were classified using a Random Forest classifier to obtain annual maps of land categories between 2001 and 2016. Using these maps, a biannual stratified sampling approach was implemented and unbiased stratified estimators constructed to calculate area estimates with confidence intervals for each of the stable and change classes. Our results provide evidence of a decrease in primary forest as a result of conversion to pastures, as well as increase in secondary forest as pastures are abandoned and the forest allowed to regenerate. Estimating areas of other land transitions proved challenging because of their very small mapped areas compared to stable classes like forest, which corresponds to almost 90% of the study area. Implications on remote sensing data processing, sample allocation and uncertainty reduction are also discussed.
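
    The stratified estimation referred to here follows a standard recipe: combine the sample error matrix with the mapped stratum weights to obtain area proportions and their standard errors. The sketch below illustrates that generic recipe; all counts, weights, and the total area are invented, and this is not the study's exact protocol:

```python
# Sketch of the stratified (unbiased) area estimator from an error matrix.
import numpy as np

# counts[i, j]: reference class j observed in sample units from map stratum i
counts = np.array([[97,  3],
                   [ 2, 48]])
W = np.array([0.9, 0.1])           # mapped proportion of each stratum (assumed)
A_tot = 1_000_000                  # total area in ha (assumed)

n_i = counts.sum(axis=1)           # sample size per stratum
p_hat = (W[:, None] * counts / n_i[:, None]).sum(axis=0)   # area proportions
area = A_tot * p_hat

# Standard error of the stratified proportion estimator
p_ij = counts / n_i[:, None]
var = (W[:, None] ** 2 * p_ij * (1 - p_ij) / (n_i[:, None] - 1)).sum(axis=0)
se = A_tot * np.sqrt(var)
for c, (a, s) in enumerate(zip(area, se)):
    print(f"class {c}: {a:,.0f} +/- {1.96 * s:,.0f} ha (95% CI)")
```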

  17. Losing the rose tinted glasses: neural substrates of unbiased belief updating in depression

    Directory of Open Access Journals (Sweden)

    Neil eGarrett

    2014-08-01

    Recent evidence suggests that a state of good mental health is associated with biased processing of information that supports a positively skewed view of the future. Depression, on the other hand, is associated with unbiased processing of such information. Here, we use brain imaging in conjunction with a belief update task administered to clinically depressed patients and healthy controls to characterize brain activity that supports unbiased belief updating in clinically depressed individuals. Our results reveal that unbiased belief updating in depression is mediated by strong neural coding of estimation errors in response to both good news (in left inferior frontal gyrus and bilateral superior frontal gyrus) and bad news (in right inferior parietal lobule and right inferior frontal gyrus) regarding the future. In contrast, intact mental health was linked to a relatively attenuated neural coding of bad news about the future. These findings identify a neural substrate mediating the breakdown of biased updating in Major Depressive Disorder, which may be essential for mental health.

  18. Unbiased contaminant removal for 3D galaxy power spectrum measurements

    Science.gov (United States)

    Kalus, B.; Percival, W. J.; Bacon, D. J.; Samushia, L.

    2016-11-01

    We assess and develop techniques to remove contaminants when calculating the 3D galaxy power spectrum. We separate the process into three separate stages: (I) removing the contaminant signal, (II) estimating the uncontaminated cosmological power spectrum and (III) debiasing the resulting estimates. For (I), we show that removing the best-fitting contaminant (mode subtraction) and setting the contaminated components of the covariance to be infinite (mode deprojection) are mathematically equivalent. For (II), performing a quadratic maximum likelihood (QML) estimate after mode deprojection gives an optimal unbiased solution, although it requires the manipulation of large N_mode^2 matrices (N_mode being the total number of modes), which is unfeasible for recent 3D galaxy surveys. Measuring a binned average of the modes for (II) as proposed by Feldman, Kaiser & Peacock (FKP) is faster and simpler, but is sub-optimal and gives rise to a biased solution. We present a method to debias the resulting FKP measurements that does not require any large matrix calculations. We argue that the sub-optimality of the FKP estimator compared with the QML estimator, caused by contaminants, is less severe than that commonly ignored due to the survey window.
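
    The equivalence claimed in stage (I) can be checked numerically in a toy setting: subtracting the least-squares best-fitting template from the data gives the same result as inverse-covariance weighting with a huge variance assigned to the template mode. A minimal sketch, assuming a single contaminant template, an identity base covariance, and a large-but-finite variance standing in for infinity:

```python
# Mode subtraction vs. mode deprojection on synthetic data.
import numpy as np

rng = np.random.default_rng(4)
n = 50
t = rng.normal(size=n)               # contaminant template
d = rng.normal(size=n) + 2.5 * t     # data = clean signal + contaminant

# (I-a) Mode subtraction: remove the best-fitting multiple of the template.
eps = d - (t @ d / (t @ t)) * t

# (I-b) Mode deprojection: give the template direction (near-)infinite
# variance; inverse-covariance weighting then ignores that direction.
C = np.eye(n) + 1e6 * np.outer(t, t)
w = np.linalg.solve(C, d)

print(np.allclose(w, eps, atol=1e-4))   # True: the two procedures agree
```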

  19. Systematic sampling of discrete and continuous populations: sample selection and the choice of estimator

    Science.gov (United States)

    Harry T. Valentine; David L. R. Affleck; Timothy G. Gregoire

    2009-01-01

    Systematic sampling is easy, efficient, and widely used, though it is not generally recognized that a systematic sample may be drawn from the population of interest with or without restrictions on randomization. The restrictions or the lack of them determine which estimators are unbiased, when using the sampling design as the basis for inference. We describe the...
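
    A compact illustration of the design: with a single random start, every unit has inclusion probability 1/k, and averaging the sample-mean estimator over the k possible starts recovers the population mean exactly, which is the sense in which it is design-unbiased. The population values and interval below are invented:

```python
# Single-random-start systematic sampling from a discrete population.
import numpy as np

rng = np.random.default_rng(7)
y = rng.gamma(2.0, 10.0, size=1200)      # population of N = 1200 units
k = 40                                    # sampling interval -> n = N/k = 30

start = rng.integers(k)                   # random start in {0, ..., k-1}
sample = y[start::k]
print("sample mean:", sample.mean())

# Averaging over all k possible starts reproduces the population mean.
all_starts = np.array([y[s::k].mean() for s in range(k)])
print("mean over starts:", all_starts.mean(), "population mean:", y.mean())
```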

  20. Direct metagenomic detection of viral pathogens in nasal and fecal specimens using an unbiased high-throughput sequencing approach.

    Directory of Open Access Journals (Sweden)

    Shota Nakamura

    With the severe acute respiratory syndrome epidemic of 2003 and renewed attention on avian influenza viral pandemics, new surveillance systems are needed for the earlier detection of emerging infectious diseases. We applied a "next-generation" parallel sequencing platform for viral detection in nasopharyngeal and fecal samples collected during seasonal influenza virus (Flu) infections and norovirus outbreaks from 2005 to 2007 in Osaka, Japan. Random RT-PCR was performed to amplify RNA extracted from 0.1-0.25 ml of nasopharyngeal aspirates (N = 3) and fecal specimens (N = 5), and more than 10 μg of cDNA was synthesized. Unbiased high-throughput sequencing of these 8 samples yielded 15,298-32,335 (average 24,738) reads in a single 7.5 h run. In nasopharyngeal samples, although whole genome analysis was not available because the majority (>90%) of reads were host genome-derived, 20-460 Flu-reads were detected, which was sufficient for subtype identification. In fecal samples, bacteria and host cells were removed by centrifugation, resulting in gain of 484-15,260 reads of norovirus sequence (78-98% of the whole genome was covered), except for one specimen that was under-detectable by RT-PCR. These results suggest that our unbiased high-throughput sequencing approach is useful for directly detecting pathogenic viruses without advance genetic information. Although its cost and technological availability make it unlikely that this system will very soon be the diagnostic standard worldwide, this system could be useful for the earlier discovery of novel emerging viruses and bioterrorism, which are difficult to detect with conventional procedures.

  1. Design unbiased estimation in line intersect sampling using segmented transects

    Science.gov (United States)

    David L.R. Affleck; Timothy G. Gregoire; Harry T. Valentine; Harry T. Valentine

    2005-01-01

    In many applications of line intersect sampling. transects consist of multiple, connected segments in a prescribed configuration. The relationship between the transect configuration and the selection probability of a population element is illustrated and a consistent sampling protocol, applicable to populations composed of arbitrarily shaped elements, is proposed. It...

  2. Statistics as Unbiased Estimators: Exploring the Teaching of Standard Deviation

    Science.gov (United States)

    Wasserman, Nicholas H.; Casey, Stephanie; Champion, Joe; Huey, Maryann

    2017-01-01

    This manuscript presents findings from a study about the knowledge for and planned teaching of standard deviation. We investigate how understanding variance as an unbiased (inferential) estimator--not just a descriptive statistic for the variation (spread) in data--is related to teachers' instruction regarding standard deviation, particularly…
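
    The statistical point at issue, that the n-1 divisor makes the sample variance an unbiased (inferential) estimator rather than a mere descriptive statistic, can be shown by simulation; the normal population and sample size below are arbitrary choices:

```python
# Dividing by n-1 makes the sample variance unbiased; dividing by n does not.
import numpy as np

rng = np.random.default_rng(3)
true_var = 4.0
reps = 200_000
samples = rng.normal(0.0, np.sqrt(true_var), size=(reps, 5))   # n = 5

biased = samples.var(axis=1, ddof=0).mean()      # divide by n
unbiased = samples.var(axis=1, ddof=1).mean()    # divide by n-1
print(f"E[divide by n]   = {biased:.3f}  (underestimates {true_var})")
print(f"E[divide by n-1] = {unbiased:.3f} (close to {true_var})")
```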

  3. Building unbiased estimators from non-Gaussian likelihoods with application to shear estimation

    International Nuclear Information System (INIS)

    Madhavacheril, Mathew S.; Sehgal, Neelima; McDonald, Patrick; Slosar, Anže

    2015-01-01

    We develop a general framework for generating estimators of a given quantity which are unbiased to a given order in the difference between the true value of the underlying quantity and the fiducial position in theory space around which we expand the likelihood. We apply this formalism to rederive the optimal quadratic estimator and show how the replacement of the second derivative matrix with the Fisher matrix is a generic way of creating an unbiased estimator (assuming choice of the fiducial model is independent of data). Next we apply the approach to estimation of shear lensing, closely following the work of Bernstein and Armstrong (2014). Our first order estimator reduces to their estimator in the limit of zero shear, but it also naturally allows for the case of non-constant shear and the easy calculation of correlation functions or power spectra using standard methods. Both our first-order estimator and Bernstein and Armstrong's estimator exhibit a bias which is quadratic in true shear. Our third-order estimator is, at least in the realm of the toy problem of Bernstein and Armstrong, unbiased to 0.1% in relative shear errors Δg/g for shears up to |g|=0.2

  4. Large deviations in the presence of cooperativity and slow dynamics

    Science.gov (United States)

    Whitelam, Stephen

    2018-06-01

    We study simple models of intermittency, involving switching between two states, within the dynamical large-deviation formalism. Singularities appear in the formalism when switching is cooperative or when its basic time scale diverges. In the first case the unbiased trajectory distribution undergoes a symmetry breaking, leading to a change in shape of the large-deviation rate function for a particular dynamical observable. In the second case the symmetry of the unbiased trajectory distribution remains unbroken. Comparison of these models suggests that singularities of the dynamical large-deviation formalism can signal the dynamical equivalent of an equilibrium phase transition but do not necessarily do so.

  5. Biased and unbiased perceptual decision-making on vocal emotions.

    Science.gov (United States)

    Dricu, Mihai; Ceravolo, Leonardo; Grandjean, Didier; Frühholz, Sascha

    2017-11-24

    Perceptual decision-making on emotions involves gathering sensory information about the affective state of another person and forming a decision on the likelihood of a particular state. These perceptual decisions can be of varying complexity as determined by different contexts. We used functional magnetic resonance imaging and a region of interest approach to investigate the brain activation and functional connectivity behind two forms of perceptual decision-making. More complex unbiased decisions on affective voices recruited an extended bilateral network consisting of the posterior inferior frontal cortex, the orbitofrontal cortex, the amygdala, and voice-sensitive areas in the auditory cortex. Less complex biased decisions on affective voices distinctly recruited the right mid inferior frontal cortex, pointing to a functional distinction in this region following decisional requirements. Furthermore, task-induced neural connectivity revealed stronger connections between these frontal, auditory, and limbic regions during unbiased relative to biased decision-making on affective voices. Together, the data show that different types of perceptual decision-making on auditory emotions have distinct patterns of activations and functional coupling that follow the decisional strategies and cognitive mechanisms involved during these perceptual decisions.

  6. Quantum circuit implementation of cyclic mutually unbiased bases

    Energy Technology Data Exchange (ETDEWEB)

    Seyfarth, Ulrich; Dittmann, Niklas; Alber, Gernot [Institut fuer Angewandte Physik, Technische Universitaet Darmstadt, 64289 Darmstadt (Germany)

    2013-07-01

    Complete sets of mutually unbiased bases (MUBs) play an important role in the areas of quantum state tomography and quantum cryptography. Sets which can be generated cyclically may eliminate certain side-channel attacks. To profit from the advantages of these MUBs, we propose a method for deriving a quantum circuit that implements the generator of such a set in an experimental setup. For some dimensions this circuit is minimal. The presented method is in principle applicable to a larger set of operations and generalizes recently published results.

  7. Characteristic properties of Fibonacci-based mutually unbiased bases

    Energy Technology Data Exchange (ETDEWEB)

    Seyfarth, Ulrich; Alber, Gernot [Institut fuer Angewandte Physik, Technische Universitaet Darmstadt, 64289 Darmstadt (Germany); Ranade, Kedar [Institut fuer Quantenphysik, Universitaet Ulm, Albert-Einstein-Allee 11, 89069 Ulm (Germany)

    2012-07-01

    Complete sets of mutually unbiased bases (MUBs) offer interesting applications in quantum information processing ranging from quantum cryptography to quantum state tomography. Different construction schemes provide different perspectives on these bases which are typically also deeply connected to various mathematical research areas. In this talk we discuss characteristic properties resulting from a recently established connection between construction methods for cyclic MUBs and Fibonacci polynomials. As a remarkable fact this connection leads to construction methods which do not involve any relations to mathematical properties of finite fields.

  8. Unbiased classification of spatial strategies in the Barnes maze.

    Science.gov (United States)

    Illouz, Tomer; Madar, Ravit; Clague, Charlotte; Griffioen, Kathleen J; Louzoun, Yoram; Okun, Eitan

    2016-11-01

    Spatial learning is one of the most widely studied cognitive domains in neuroscience. The Morris water maze and the Barnes maze are the most commonly used techniques to assess spatial learning and memory in rodents. Despite the fact that these tasks are well-validated paradigms for testing spatial learning abilities, manual categorization of performance into behavioral strategies is subject to individual interpretation, and thus to bias. We have previously described an unbiased machine-learning algorithm to classify spatial strategies in the Morris water maze. Here, we offer a support vector machine-based, automated, Barnes-maze unbiased strategy (BUNS) classification algorithm, as well as a cognitive score scale that can be used for memory acquisition, reversal training and probe trials. The BUNS algorithm can greatly benefit Barnes maze users as it provides a standardized method of strategy classification and cognitive scoring scale, which cannot be derived from typical Barnes maze data analysis. Freely available on the web at http://okunlab.wix.com/okunlab as a MATLAB application. Contact: eitan.okun@biu.ac.il. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  9. Unbiased reduced density matrices and electronic properties from full configuration interaction quantum Monte Carlo

    International Nuclear Information System (INIS)

    Overy, Catherine; Blunt, N. S.; Shepherd, James J.; Booth, George H.; Cleland, Deidre; Alavi, Ali

    2014-01-01

    Properties that are necessarily formulated within pure (symmetric) expectation values are difficult to calculate for projector quantum Monte Carlo approaches, but are critical in order to compute many of the important observable properties of electronic systems. Here, we investigate an approach for the sampling of unbiased reduced density matrices within the full configuration interaction quantum Monte Carlo dynamic, which requires only small computational overheads. This is achieved via an independent replica population of walkers in the dynamic, sampled alongside the original population. The resulting reduced density matrices are free from systematic error (beyond those present via constraints on the dynamic itself) and can be used to compute a variety of expectation values and properties, with rapid convergence to an exact limit. A quasi-variational energy estimate derived from these density matrices is proposed as an accurate alternative to the projected estimator for multiconfigurational wavefunctions, while its variational property could potentially lend itself to accurate extrapolation approaches in larger systems.

  10. Encoding mutually unbiased bases in orbital angular momentum for quantum key distribution

    CSIR Research Space (South Africa)

    Dudley, Angela L

    2013-07-01

    We encode mutually unbiased bases (MUBs) using the higher-dimensional orbital angular momentum (OAM) degree of freedom associated with optical fields. We illustrate how these states are encoded with the use of a spatial light modulator (SLM). We...

  11. Unbiased water and methanol maser surveys of NGC 1333

    Energy Technology Data Exchange (ETDEWEB)

    Lyo, A-Ran; Kim, Jongsoo; Byun, Do-Young; Lee, Ho-Gyu, E-mail: arl@kasi.re.kr [Korea Astronomy and Space Science Institute, 776, Daedeokdae-ro Yuseong-gu, Daejeon 305-348 (Korea, Republic of)

    2014-11-01

    We present the results of unbiased 22 GHz H2O water and 44 GHz class I CH3OH methanol maser surveys in the central 7' × 10' area of NGC 1333 and two additional mapping observations of a 22 GHz water maser in a ∼3' × 3' area of the IRAS4A region. In the 22 GHz water maser survey of NGC 1333 with a sensitivity of σ ∼ 0.3 Jy, we confirmed the detection of masers toward H2O(B) in the region of HH 7-11 and IRAS4B. We also detected new water masers located ∼20'' away in the western direction of IRAS4B or ∼25'' away in the southern direction of IRAS4A. We could not, however, find young stellar objects or molecular outflows associated with them. They showed two different velocity components of ∼0 and ∼16 km s^-1, which are blue- and redshifted relative to the adopted systemic velocity of ∼7 km s^-1 for NGC 1333. They also showed time variabilities in both intensity and velocity from multi-epoch observations and an anti-correlation between the intensities of the blue- and redshifted velocity components. We suggest that the unidentified power source of these masers might be found in the earliest evolutionary stage of star formation, before the onset of molecular outflows. Finding this kind of water maser is only possible through an unbiased blind survey. In the 44 GHz methanol maser survey with a sensitivity of σ ∼ 0.5 Jy, we confirmed masers toward IRAS4A2 and the eastern shock region of IRAS2A. Both sources are also detected in 95 and 132 GHz methanol maser lines. In addition, we had new detections of methanol masers at 95 and 132 GHz toward IRAS4B. In terms of the isotropic luminosity, we detected methanol maser sources brighter than ∼5 × 10^25 erg s^-1 from our unbiased survey.

  12. Estimating unbiased economies of scale of HIV prevention projects: a case study of Avahan.

    Science.gov (United States)

    Lépine, Aurélia; Vassall, Anna; Chandrashekar, Sudha; Blanc, Elodie; Le Nestour, Alexis

    2015-04-01

    Governments and donors are investing considerable resources on HIV prevention in order to scale up these services rapidly. Given the current economic climate, providers of HIV prevention services increasingly need to demonstrate that these investments offer good 'value for money'. One of the primary routes to achieve efficiency is to take advantage of economies of scale (a reduction in the average cost of a health service as provision scales-up), yet empirical evidence on economies of scale is scarce. Methodologically, the estimation of economies of scale is hampered by several statistical issues preventing causal inference and thus making the estimation of economies of scale complex. In order to estimate unbiased economies of scale when scaling up HIV prevention services, we apply our analysis to one of the few HIV prevention programmes globally delivered at a large scale: the Indian Avahan initiative. We costed the project by collecting data from the 138 Avahan NGOs and the supporting partners in the first four years of its scale-up, between 2004 and 2007. We develop a parsimonious empirical model and apply a system Generalized Method of Moments (GMM) and fixed-effects Instrumental Variable (IV) estimators to estimate unbiased economies of scale. At the programme level, we find that, after controlling for the endogeneity of scale, the scale-up of Avahan has generated high economies of scale. Our findings suggest that average cost reductions per person reached are achievable when scaling-up HIV prevention in low and middle income countries. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Unbiased estimators for spatial distribution functions of classical fluids

    Science.gov (United States)

    Adib, Artur B.; Jarzynski, Christopher

    2005-01-01

    We use a statistical-mechanical identity, closely related to the familiar virial theorem, to derive unbiased estimators for spatial distribution functions of classical fluids. In particular, we obtain estimators for both the fluid density ρ(r) in the vicinity of a fixed solute and the pair correlation g(r) of a homogeneous classical fluid. We illustrate the utility of our estimators with numerical examples, which reveal advantages over traditional histogram-based methods of computing such distributions.

  14. Unbiased stereological methods used for the quantitative evaluation of guided bone regeneration

    DEFF Research Database (Denmark)

    Aaboe, Else Merete; Pinholt, E M; Schou, S

    1998-01-01

    The present study describes the use of unbiased stereological methods for the quantitative evaluation of the amount of regenerated bone. Using the principle of guided bone regeneration the amount of regenerated bone after placement of degradable or non-degradable membranes covering defects...

  15. Mutually unbiased coarse-grained measurements of two or more phase-space variables

    Science.gov (United States)

    Paul, E. C.; Walborn, S. P.; Tasca, D. S.; Rudnicki, Łukasz

    2018-05-01

    Mutual unbiasedness of the eigenstates of phase-space operators—such as position and momentum, or their standard coarse-grained versions—exists only in the limiting case of infinite squeezing. In Phys. Rev. Lett. 120, 040403 (2018), 10.1103/PhysRevLett.120.040403, it was shown that mutual unbiasedness can be recovered for periodic coarse graining of these two operators. Here we investigate mutual unbiasedness of coarse-grained measurements for more than two phase-space variables. We show that mutual unbiasedness can be recovered between periodic coarse graining of any two nonparallel phase-space operators. We illustrate these results through optics experiments, using the fractional Fourier transform to prepare and measure mutually unbiased phase-space variables. The differences between two and three mutually unbiased measurements is discussed. Our results contribute to bridging the gap between continuous and discrete quantum mechanics, and they could be useful in quantum-information protocols.

  16. Personalized recommendation based on unbiased consistence

    Science.gov (United States)

    Zhu, Xuzhen; Tian, Hui; Zhang, Ping; Hu, Zheng; Zhou, Tao

    2015-08-01

    Recently, in physical dynamics, mass-diffusion-based recommendation algorithms on bipartite networks have provided an efficient solution by automatically pushing possibly relevant items to users according to their past preferences. However, traditional mass-diffusion-based algorithms just focus on unidirectional mass diffusion from objects having been collected to those which should be recommended, resulting in a biased causal similarity estimation and not-so-good performance. In this letter, we argue that in many cases, a user's interests are stable, and thus bidirectional mass diffusion abilities, no matter originated from objects having been collected or from those which should be recommended, should be consistently powerful, showing unbiased consistence. We further propose a consistence-based mass diffusion algorithm via bidirectional diffusion against biased causality, outperforming the state-of-the-art recommendation algorithms in disparate real data sets, including Netflix, MovieLens, Amazon and Rate Your Music.
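
    For context, the baseline that such algorithms build on is plain unidirectional mass diffusion on the user-item bipartite network. The sketch below implements that baseline on an invented toy network; it is not the consistence-based bidirectional variant proposed in the letter:

```python
# Unidirectional mass diffusion (ProbS-style) on a user-item bipartite network.
import numpy as np

A = np.array([[1, 1, 0, 0],      # rows: users, cols: items
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)

k_item = A.sum(axis=0)           # item degrees
k_user = A.sum(axis=1)           # user degrees

# Resources start on the items the target user collected, spread to users,
# then back to items; high final scores on uncollected items = recommendations.
u = 0
resource = A[u]                               # initial unit resource per item
to_users = (A / k_item) @ resource            # items -> users, split by item degree
scores = (A.T / k_user) @ to_users            # users -> items, split by user degree
scores[A[u] > 0] = 0                          # mask items already collected
print("recommendation scores for user 0:", np.round(scores, 3))
```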

  17. Sampling large random knots in a confined space

    International Nuclear Information System (INIS)

    Arsuaga, J; Blackstone, T; Diao, Y; Hinson, K; Karadayi, E; Saito, M

    2007-01-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is at the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way in sampling large (prime) knots, which can be useful in various applications.

  18. Sampling large random knots in a confined space

    Science.gov (United States)

    Arsuaga, J.; Blackstone, T.; Diao, Y.; Hinson, K.; Karadayi, E.; Saito, M.

    2007-09-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is at the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way in sampling large (prime) knots, which can be useful in various applications.

  19. Sampling large random knots in a confined space

    Energy Technology Data Exchange (ETDEWEB)

    Arsuaga, J [Department of Mathematics, San Francisco State University, 1600 Holloway Ave, San Francisco, CA 94132 (United States); Blackstone, T [Department of Computer Science, San Francisco State University, 1600 Holloway Ave., San Francisco, CA 94132 (United States); Diao, Y [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Hinson, K [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Karadayi, E [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States); Saito, M [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States)

    2007-09-28

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is at the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way in sampling large (prime) knots, which can be useful in various applications.

  20. Importance sampling large deviations in nonequilibrium steady states. I

    Science.gov (United States)

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T.

    2018-03-01

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
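
    Why these computations depend on exponentially rare events can be seen in a toy case. The sketch below naively estimates the scaled cumulant generating function of the time-integrated displacement of an unbiased ±1 random walk (exact answer ln cosh(s)); the effective sample size collapses as the bias parameter s grows, which is what guiding functions are introduced to remedy. The model and parameters are invented for illustration:

```python
# Naive sampling of a scaled cumulant generating function (SCGF).
import numpy as np

rng = np.random.default_rng(8)
T, trajs = 200, 20_000
steps = rng.choice([-1, 1], size=(trajs, T), p=[0.5, 0.5])
A = steps.sum(axis=1)                    # time-integrated observable

for s in (0.1, 0.5, 1.0):
    weights = np.exp(s * A)
    scgf = np.log(weights.mean()) / T    # naive estimate of lambda(s)
    ess = weights.sum() ** 2 / (weights ** 2).sum()
    print(f"s={s}: lambda ~ {scgf:.4f} (exact {np.log(np.cosh(s)):.4f}), "
          f"effective sample size {ess:.0f} of {trajs}")
```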

  1. Importance sampling large deviations in nonequilibrium steady states. I.

    Science.gov (United States)

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T

    2018-03-28

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.

  2. Towards an unbiased comparison of CC, BCC, and FCC lattices in terms of prealiasing

    KAUST Repository

    Vad, Viktor

    2014-06-01

    In the literature on optimal regular volume sampling, the Body-Centered Cubic (BCC) lattice has been proven to be optimal for sampling spherically band-limited signals above the Nyquist limit. On the other hand, if the sampling frequency is below the Nyquist limit, the Face-Centered Cubic (FCC) lattice was demonstrated to be optimal in reducing the prealiasing effect. In this paper, we confirm that the FCC lattice is indeed optimal in this sense in a certain interval of the sampling frequency. By theoretically estimating the prealiasing error in a realistic range of the sampling frequency, we show that in other frequency intervals, the BCC lattice and even the traditional Cartesian Cubic (CC) lattice are expected to minimize the prealiasing. The BCC lattice is superior over the FCC lattice if the sampling frequency is not significantly below the Nyquist limit. Interestingly, if the original signal is drastically undersampled, the CC lattice is expected to provide the lowest prealiasing error. Additionally, we give a comprehensible clarification that the sampling efficiency of the FCC lattice is lower than that of the BCC lattice. Although this is a well-known fact, the exact percentage has been erroneously reported in the literature. Furthermore, for the sake of an unbiased comparison, we propose to rotate the Marschner-Lobb test signal such that an undue advantage is not given to either lattice. © 2014 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.

  3. Towards an unbiased comparison of CC, BCC, and FCC lattices in terms of prealiasing

    KAUST Repository

    Vad, Viktor; Csébfalvi, Balázs; Rautek, Peter; Gröller, Eduard M.

    2014-01-01

    In the literature on optimal regular volume sampling, the Body-Centered Cubic (BCC) lattice has been proven to be optimal for sampling spherically band-limited signals above the Nyquist limit. On the other hand, if the sampling frequency is below the Nyquist limit, the Face-Centered Cubic (FCC) lattice was demonstrated to be optimal in reducing the prealiasing effect. In this paper, we confirm that the FCC lattice is indeed optimal in this sense in a certain interval of the sampling frequency. By theoretically estimating the prealiasing error in a realistic range of the sampling frequency, we show that in other frequency intervals, the BCC lattice and even the traditional Cartesian Cubic (CC) lattice are expected to minimize the prealiasing. The BCC lattice is superior over the FCC lattice if the sampling frequency is not significantly below the Nyquist limit. Interestingly, if the original signal is drastically undersampled, the CC lattice is expected to provide the lowest prealiasing error. Additionally, we give a comprehensible clarification that the sampling efficiency of the FCC lattice is lower than that of the BCC lattice. Although this is a well-known fact, the exact percentage has been erroneously reported in the literature. Furthermore, for the sake of an unbiased comparison, we propose to rotate the Marschner-Lobb test signal such that an undue advantage is not given to either lattice. © 2014 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.

  4. Sampling for Patient Exit Interviews: Assessment of Methods Using Mathematical Derivation and Computer Simulations.

    Science.gov (United States)

    Geldsetzer, Pascal; Fink, Günther; Vaikath, Maria; Bärnighausen, Till

    2018-02-01

    (1) To evaluate the operational efficiency of various sampling methods for patient exit interviews; (2) to discuss under what circumstances each method yields an unbiased sample; and (3) to propose a new, operationally efficient, and unbiased sampling method. Literature review, mathematical derivation, and Monte Carlo simulations. Our simulations show that in patient exit interviews it is most operationally efficient if the interviewer, after completing an interview, selects the next patient exiting the clinical consultation. We demonstrate mathematically that this method yields a biased sample: patients who spend a longer time with the clinician are overrepresented. This bias can be removed by selecting the next patient who enters, rather than exits, the consultation room. We show that this sampling method is operationally more efficient than alternative methods (systematic and simple random sampling) in most primary health care settings. Under the assumption that the order in which patients enter the consultation room is unrelated to the length of time spent with the clinician and the interviewer, selecting the next patient entering the consultation room tends to be the operationally most efficient unbiased sampling method for patient exit interviews. © 2016 The Authors. Health Services Research published by Wiley Periodicals, Inc. on behalf of Health Research and Educational Trust.
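
    The core bias can be reproduced with a simple Monte Carlo sketch in which the probability of being intercepted at the exit is proportional to consultation length (a modeling assumption standing in for the paper's simulations; all distributions and sizes are invented):

```python
# Length bias in exit-interview sampling: next-exiting vs. next-entering.
import numpy as np

rng = np.random.default_rng(11)
n_patients = 100_000
length = rng.exponential(10.0, n_patients)   # consultation lengths (minutes)

# An interviewer who grabs whoever exits next catches patients with
# probability roughly proportional to time in the room (length bias).
exit_idx = rng.choice(n_patients, size=5_000, replace=False,
                      p=length / length.sum())
# Selecting the next patient who *enters* is independent of length.
entry_idx = rng.choice(n_patients, size=5_000, replace=False)

print("population mean length:", length.mean())
print("next-exiting sample   :", length[exit_idx].mean())   # inflated
print("next-entering sample  :", length[entry_idx].mean())  # unbiased
```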

  5. Black-Box Search by Unbiased Variation

    DEFF Research Database (Denmark)

    Lehre, Per Kristian; Witt, Carsten

    2012-01-01

    The complexity theory for black-box algorithms, introduced by Droste, Jansen, and Wegener (Theory Comput. Syst. 39:525–544, 2006), describes common limits on the efficiency of a broad class of randomised search heuristics. There is an obvious trade-off between the generality of the black-box model and the strength of the bounds that can be proven in such a model. In particular, the original black-box model provides for well-known benchmark problems relatively small lower bounds, which seem unrealistic in certain cases and are typically not met by popular search heuristics. In this paper, we introduce a more restricted black-box model for optimisation of pseudo-Boolean functions which we claim captures the working principles of many randomised search heuristics including simulated annealing, evolutionary algorithms, randomised local search, and others. The key concept worked out is an unbiased variation operator...

  6. IR Observations of a Complete Unbiased Sample of Bright Seyfert Galaxies

    Science.gov (United States)

    Malkan, Matthew; Bendo, George; Charmandaris, Vassilis; Smith, Howard; Spinoglio, Luigi; Tommasin, Silvia

    2008-03-01

    IR spectra will measure the 2 main energy-generating processes by which galactic nuclei shine: black hole accretion and star formation. Both of these play roles in galaxy evolution, and they appear connected. To obtain a complete sample of AGN, covering the range of luminosities and column-densities, we will combine 2 complete all-sky samples with complementary selections, minimally biased by dust obscuration: the 116 IRAS 12um AGN and the 41 Swift/BAT hard X-ray AGN. These galaxies have been extensively studied across the entire EM spectrum. Herschel observations have been requested and will be synergistic with the Spitzer database. IRAC and MIPS imaging will allow us to separate the nuclear and galactic continua. We are completing full IR observations of the local AGN population, most of which have already been done. The only remaining observations we request are 10 IRS/HIRES, 57 MIPS-24 and 30 IRAC pointings. These high-quality observations of bright AGN in the bolometric-flux-limited samples should be completed, for the high legacy value of complete uniform datasets. We will measure quantitatively the emission at each wavelength arising from stars and from accretion in each galactic center. Since our complete samples come from flux-limited all-sky surveys in the IR and HX, we will calculate the bi-variate AGN and star formation Luminosity Functions for the local population of active galaxies, for comparison with higher redshifts. Our second aim is to understand the physical differences between AGN classes. This requires statistical comparisons of full multiwavelength observations of complete representative samples. If the difference between Sy1s and Sy2s is caused by orientation, their isotropic properties, including those of the surrounding galactic centers, should be similar. In contrast, if they are different evolutionary stages following a galaxy encounter, then we may find observational evidence that the circumnuclear ISM of Sy2s is relatively younger.

  7. Pyroelectric photovoltaic spatial solitons in unbiased photorefractive crystals

    International Nuclear Information System (INIS)

    Jiang, Qichang; Su, Yanli; Ji, Xuanmang

    2012-01-01

    A new type of spatial soliton, i.e. the pyroelectric photovoltaic spatial soliton, based on the combination of the pyroelectric and photovoltaic effects, is predicted theoretically. It is shown that bright, dark and grey spatial solitons can exist in unbiased photovoltaic photorefractive crystals with an appreciable pyroelectric effect. In particular, a bright soliton can form in self-defocusing photovoltaic crystals if the self-focusing pyroelectric effect is sufficiently large. -- Highlights: ► A new type of spatial soliton, i.e. the pyroelectric photovoltaic spatial soliton, is predicted. ► Bright, dark and grey pyroelectric photovoltaic spatial solitons can form. ► The bright soliton can also exist in self-defocusing photovoltaic crystals.

  8. On the mathematical foundations of mutually unbiased bases

    Science.gov (United States)

    Thas, Koen

    2018-02-01

    In order to describe a setting to handle Zauner's conjecture on mutually unbiased bases (MUBs) (stating that in C^d, a set of MUBs of the theoretical maximal size d + 1 exists only if d is a prime power), we pose some fundamental questions which naturally arise. Some of these questions have important consequences for the construction theory of (new) sets of maximal MUBs. Partial answers will be provided in particular cases; more specifically, we will analyze MUBs with associated operator groups that have nilpotence class 2, and consider MUBs of height 1. We will also confirm Zauner's conjecture for MUBs with associated finite nilpotent operator groups.

  9. An asymptotically unbiased minimum density power divergence estimator for the Pareto-tail index

    DEFF Research Database (Denmark)

    Dierckx, Goedele; Goegebeur, Yuri; Guillou, Armelle

    2013-01-01

    We introduce a robust and asymptotically unbiased estimator for the tail index of Pareto-type distributions. The estimator is obtained by fitting the extended Pareto distribution to the relative excesses over a high threshold with the minimum density power divergence criterion. Consistency...

  10. Analysis of large soil samples for actinides

    Science.gov (United States)

    Maxwell, Sherrod L., III [Aiken, SC]

    2009-03-24

    A method of analyzing relatively large soil samples for actinides employs a separation process that includes cerium fluoride precipitation to remove the soil matrix; plutonium, americium, and curium are precipitated with cerium and hydrofluoric acid and then separated using chromatography cartridges.

  11. Inverse sampled Bernoulli (ISB) procedure for estimating a population proportion, with nuclear material applications

    International Nuclear Information System (INIS)

    Wright, T.

    1982-01-01

    A new sampling procedure is introduced for estimating a population proportion. The procedure combines the ideas of inverse binomial sampling and Bernoulli sampling. An unbiased estimator is given with its variance. The procedure can be viewed as a generalization of inverse binomial sampling.
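
    The record does not reproduce the estimator itself. For orientation, the classical inverse binomial result that the ISB procedure generalizes is Haldane's estimator: draw Bernoulli trials until a fixed number k of successes is observed, then estimate the proportion by (k - 1)/(n - 1), where n is the random total number of trials; this is exactly unbiased. A minimal simulation sketch (illustrative background only, not the ISB procedure of the paper):

      import random

      random.seed(1)

      def inverse_binomial_estimate(p_true, k):
          """Draw Bernoulli(p_true) trials until k successes are seen, then
          return Haldane's unbiased estimator (k - 1) / (n - 1)."""
          successes = n = 0
          while successes < k:
              n += 1
              if random.random() < p_true:
                  successes += 1
          return (k - 1) / (n - 1)

      # Averaging over many replicates recovers p_true, illustrating unbiasedness.
      estimates = [inverse_binomial_estimate(0.2, k=5) for _ in range(20000)]
      print(sum(estimates) / len(estimates))  # approximately 0.2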

  12. Collision energy alteration during mass spectrometric acquisition is essential to ensure unbiased metabolomic analysis

    CSIR Research Space (South Africa)

    Madala, NE

    2012-08-01

    Full Text Available Metabolomics entails identification and quantification of all metabolites within a biological system with a given physiological status; as such, it should be unbiased. A variety of techniques are used to measure the metabolite content of living...

  13. Reconstruction of high-dimensional states entangled in orbital angular momentum using mutually unbiased measurements

    CSIR Research Space (South Africa)

    Giovannini, D

    2013-06-01

    Full Text Available: QELS Fundamental Science, San Jose, California, United States, 9-14 June 2013. Reconstruction of High-Dimensional States Entangled in Orbital Angular Momentum Using Mutually Unbiased Measurements. D. Giovannini, J. Romero, J. Leach, A...

  14. Unbiased in-depth characterization of CEX fractions from a stressed monoclonal antibody by mass spectrometry.

    Science.gov (United States)

    Griaud, François; Denefeld, Blandine; Lang, Manuel; Hensinger, Héloïse; Haberl, Peter; Berg, Matthias

    2017-07-01

    Characterization of charge-based variants by mass spectrometry (MS) is required for the analytical development of a new biologic entity and its marketing approval by health authorities. However, standard peak-based data analysis approaches are time-consuming and biased toward the detection, identification, and quantification of main variants only. The aim of this study was to characterize in depth the acidic and basic species of a stressed IgG1 monoclonal antibody using comprehensive and unbiased MS data evaluation tools. Fractions collected from cation exchange (CEX) chromatography were analyzed as intact, after reduction of disulfide bridges, and after proteolytic cleavage using Lys-C. Data from both intact and reduced samples were evaluated consistently using a time-resolved deconvolution algorithm. Peptide mapping data were processed simultaneously, quantified and compared in a systematic manner for all MS signals and fractions. Differences observed between the fractions were then further characterized and assigned. Time-resolved deconvolution enhanced pattern visualization and data interpretation of main and minor modifications in 3-dimensional maps across CEX fractions. Relative quantification of all MS signals across CEX fractions before peptide assignment enabled the detection of fraction-specific chemical modifications at abundances below 1%. Acidic fractions were shown to be heterogeneous, containing antibody fragments, glycated as well as deamidated forms of the heavy and light chains. In contrast, the basic fractions contained mainly modifications of the C-terminus and pyroglutamate formation at the N-terminus of the heavy chain. Systematic data evaluation was performed to investigate multiple data sets and comprehensively extract main and minor differences between each CEX fraction in an unbiased manner.

  15. Gibbs sampling on large lattice with GMRF

    Science.gov (United States)

    Marcotte, Denis; Allard, Denis

    2018-02-01

    Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions naturally occur when associating latent Gaussian fields to category fields obtained by discrete simulation methods like multipoint, sequential indicator simulation and object-based simulation. The latent Gaussians are often used in data assimilation and history matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfactory, as it can diverge and does not reproduce the desired covariance exactly. A better approach is to use Gaussian Markov Random Fields (GMRF), which enable computing the conditional distributions at any point without having to compute and invert the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be efficiently computed by convolution and applied with an acceptance/rejection method in the truncated case. We study empirically the speed of convergence, the effect of the choice of boundary conditions, and of the correlation range and GMRF smoothness. We show that convergence is slower in the Gaussian case on the torus than for the finite case studied in the literature. However, in the truncated Gaussian case, we show that short-scale correlation is quickly restored and the conditioning categories at each lattice point imprint the long-scale correlation. Hence our approach makes it realistic to apply Gibbs sampling on large 2D or 3D lattices with the desired GMRF covariance.
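
    As a toy illustration of the coding-set idea (a minimal sketch under assumed unit conditional variances, not the authors' convolution-based or truncated implementation): on a 4-neighbor lattice the two checkerboard colors share no neighbors, so all sites of one color can be Gibbs-updated simultaneously:

      import numpy as np

      rng = np.random.default_rng(0)

      def gibbs_gmrf_checkerboard(n=64, beta=0.24, sweeps=200):
          """Gibbs sampling of a first-order GMRF on an n x n torus with
          full conditionals x_ij | neighbors ~ N(beta * sum(4 neighbors), 1)
          (a valid precision matrix requires |beta| < 0.25). The two
          checkerboard colors are coding sets: same-color sites share no
          neighbors, so each color can be updated simultaneously."""
          x = rng.standard_normal((n, n))
          ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
          for _ in range(sweeps):
              for color in (0, 1):
                  mask = (ii + jj) % 2 == color
                  nbr_sum = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                             np.roll(x, 1, 1) + np.roll(x, -1, 1))
                  x[mask] = beta * nbr_sum[mask] + rng.standard_normal(mask.sum())
          return x

      field = gibbs_gmrf_checkerboard()
      print(field.mean(), field.std())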

  16. A spinner magnetometer for large Apollo lunar samples

    Science.gov (United States)

    Uehara, M.; Gattacceca, J.; Quesnel, Y.; Lepaulard, C.; Lima, E. A.; Manfredi, M.; Rochette, P.

    2017-10-01

    We developed a spinner magnetometer to measure the natural remanent magnetization of large Apollo lunar rocks in the storage vault of the Lunar Sample Laboratory Facility (LSLF) of NASA. The magnetometer mainly consists of a commercially available three-axial fluxgate sensor and a hand-rotating sample table with an optical encoder recording the rotation angles. The distance between the sample and the sensor is adjustable according to the sample size and magnetization intensity. The sensor and the sample are placed in a two-layer mu-metal shield to measure the sample natural remanent magnetization. The magnetic signals are acquired together with the rotation angle to obtain stacking of the measured signals over multiple revolutions. The developed magnetometer has a sensitivity of 5 × 10-7 Am2 at the standard sensor-to-sample distance of 15 cm. This sensitivity is sufficient to measure the natural remanent magnetization of almost all the lunar basalt and breccia samples with mass above 10 g in the LSLF vault.

  18. An unbiased method to build benchmarking sets for ligand-based virtual screening and its application to GPCRs.

    Science.gov (United States)

    Xia, Jie; Jin, Hongwei; Liu, Zhenming; Zhang, Liangren; Wang, Xiang Simon

    2014-05-27

    Benchmarking data sets have become common in recent years for the purpose of virtual screening, though the main focus had been placed on the structure-based virtual screening (SBVS) approaches. Due to the lack of crystal structures, there is great need for unbiased benchmarking sets to evaluate various ligand-based virtual screening (LBVS) methods for important drug targets such as G protein-coupled receptors (GPCRs). To date these ready-to-apply data sets for LBVS are fairly limited, and the direct usage of benchmarking sets designed for SBVS could bring the biases to the evaluation of LBVS. Herein, we propose an unbiased method to build benchmarking sets for LBVS and validate it on a multitude of GPCRs targets. To be more specific, our methods can (1) ensure chemical diversity of ligands, (2) maintain the physicochemical similarity between ligands and decoys, (3) make the decoys dissimilar in chemical topology to all ligands to avoid false negatives, and (4) maximize spatial random distribution of ligands and decoys. We evaluated the quality of our Unbiased Ligand Set (ULS) and Unbiased Decoy Set (UDS) using three common LBVS approaches, with Leave-One-Out (LOO) Cross-Validation (CV) and a metric of average AUC of the ROC curves. Our method has greatly reduced the "artificial enrichment" and "analogue bias" of a published GPCRs benchmarking set, i.e., GPCR Ligand Library (GLL)/GPCR Decoy Database (GDD). In addition, we addressed an important issue about the ratio of decoys per ligand and found that for a range of 30 to 100 it does not affect the quality of the benchmarking set, so we kept the original ratio of 39 from the GLL/GDD.
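
    The selection logic can be sketched in a toy form (everything below is assumed or invented: `ligand_props` and `cand_props` stand for z-scored physicochemical descriptors such as MW or logP, and `cand_topo_sim` for a precomputed maximum fingerprint Tanimoto similarity of each candidate to the ligand set; this is not the authors' ULS/UDS pipeline):

      import numpy as np

      def pick_decoys(ligand_props, cand_props, cand_topo_sim,
                      n_per_ligand=39, sim_cutoff=0.35):
          """Toy decoy selection: rank candidates by distance to each ligand
          in (z-scored) physicochemical property space, excluding candidates
          whose 2D-topology similarity to any ligand exceeds sim_cutoff
          (to avoid false-negative decoys)."""
          available = cand_topo_sim <= sim_cutoff
          chosen = []
          for lp in ligand_props:
              dist = np.linalg.norm(cand_props - lp, axis=1)
              dist[~available] = np.inf          # mask excluded/used candidates
              idx = np.argsort(dist)[:n_per_ligand]
              chosen.append(idx)
              available[idx] = False             # each decoy serves one ligand
          return chosen

      rng = np.random.default_rng(2)
      ligands = rng.normal(size=(10, 4))         # 10 ligands x 4 descriptors
      candidates = rng.normal(size=(5000, 4))
      topo_sim = rng.uniform(0, 1, size=5000)
      print([len(c) for c in pick_decoys(ligands, candidates, topo_sim)])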

  19. Large Sample Neutron Activation Analysis: A Challenge in Cultural Heritage Studies

    International Nuclear Information System (INIS)

    Stamatelatos, I.E.; Tzika, F.

    2007-01-01

    Large sample neutron activation analysis complements and significantly extends the analytical tools available for cultural heritage and authentication studies, providing unique applications of non-destructive, multi-element analysis of materials that are too precious to damage for sampling purposes, representative sampling of heterogeneous materials, or even analysis of whole objects. In this work, correction factors for neutron self-shielding, gamma-ray attenuation and volume distribution of the activity in large volume samples composed of iron and ceramic material were derived. Moreover, the effect of inhomogeneity on the accuracy of the technique was examined.

  20. Likelihood inference of non-constant diversification rates with incomplete taxon sampling.

    Science.gov (United States)

    Höhna, Sebastian

    2014-01-01

    Large-scale phylogenies provide a valuable source to study background diversification rates and investigate if the rates have changed over time. Unfortunately, most large-scale, dated phylogenies are sparsely sampled (fewer than 5% of the described species) and taxon sampling is not uniform. Instead, taxa are frequently sampled to obtain at least one representative per subgroup (e.g. family) and thus to maximize diversity (diversified sampling). So far, such complications have been ignored, potentially biasing the conclusions that have been reached. In this study I derive the likelihood of a birth-death process with non-constant (time-dependent) diversification rates and diversified taxon sampling. Using simulations I test if the true parameters and the sampling method can be recovered when the trees are small or medium sized (fewer than 200 taxa). The results show that the diversification rates can be inferred and the estimates are unbiased for large trees but are biased for small trees (fewer than 50 taxa). Furthermore, model selection by means of Akaike's Information Criterion favors the true model if the true rates differ sufficiently from alternative models (e.g. the birth-death model is favored over a pure-birth model if the extinction rate is large). Finally, I applied six different diversification rate models--ranging from a constant-rate pure-birth process to a decreasing speciation rate birth-death process, but excluding any rate shift models--on three large-scale empirical phylogenies (ants, mammals and snakes, with respectively 149, 164 and 41 sampled species). All three phylogenies were constructed by diversified taxon sampling, as stated by the authors. However, only the snake phylogeny supported diversified taxon sampling. Moreover, a parametric bootstrap test revealed that none of the tested models provided a good fit to the observed data. The model assumptions, such as homogeneous rates across species or no rate shifts, appear to be violated.

  1. Random Walks on Directed Networks: Inference and Respondent-Driven Sampling

    Directory of Open Access Journals (Sweden)

    Malmros Jens

    2016-06-01

    Full Text Available Respondent-driven sampling (RDS) is often used to estimate population properties (e.g., sexual risk behavior) in hard-to-reach populations. In RDS, already sampled individuals recruit population members to the sample from their social contacts in an efficient snowball-like sampling procedure. By assuming a Markov model for the recruitment of individuals, asymptotically unbiased estimates of population characteristics can be obtained. Current RDS estimation methodology assumes that the social network is undirected, that is, all edges are reciprocal. However, empirical social networks in general also include a substantial number of nonreciprocal edges. In this article, we develop an estimation method for RDS in populations connected by social networks that include reciprocal and nonreciprocal edges. We derive estimators of the selection probabilities of individuals as a function of the number of outgoing edges of sampled individuals. The proposed estimators are evaluated on artificial and empirical networks and are shown to generally perform better than existing estimators. This is the case in particular when the fraction of directed edges in the network is large.
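
    For orientation, the undirected-network baseline that this article generalizes is the inverse-degree-weighted (Volz-Heckathorn, RDS-II) estimator. The sketch below assumes self-reported degrees and a binary trait; in the directed case discussed above, the degrees would be replaced by estimated selection probabilities based on out-degree counts:

      def rds_vh_estimate(traits, degrees):
          """Volz-Heckathorn (RDS-II) estimator of a population proportion.
          Under the random-walk model on an undirected network, respondents
          are sampled approximately proportionally to their degree, so
          weighting each observation by 1/degree removes that bias."""
          numerator = sum(x / d for x, d in zip(traits, degrees))
          denominator = sum(1.0 / d for d in degrees)
          return numerator / denominator

      # Toy data: binary trait and self-reported network size per respondent.
      print(rds_vh_estimate([1, 0, 1, 1, 0], [10, 2, 8, 5, 3]))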

  2. Automated and unbiased classification of chemical profiles from fungi using high performance liquid chromatography

    DEFF Research Database (Denmark)

    Hansen, Michael Edberg; Andersen, Birgitte; Smedsgaard, Jørn

    2005-01-01

    In this paper we present a method for unbiased/unsupervised classification and identification of closely related fungi, using chemical analysis of secondary metabolite profiles created by HPLC with UV diode array detection. For two chromatographic data matrices a vector of locally aligned full sp...

  3. Statistically and Computationally Efficient Estimating Equations for Large Spatial Datasets

    KAUST Repository

    Sun, Ying; Stein, Michael L.

    2014-01-01

    For Gaussian process models, likelihood based methods are often difficult to use with large irregularly spaced spatial datasets, because exact calculations of the likelihood for n observations require O(n^3) operations and O(n^2) memory. Various approximation methods have been developed to address the computational difficulties. In this paper, we propose new unbiased estimating equations based on score equation approximations that are both computationally and statistically efficient. We replace the inverse covariance matrix that appears in the score equations by a sparse matrix to approximate the quadratic forms, then set the resulting quadratic forms equal to their expected values to obtain unbiased estimating equations. The sparse matrix is constructed by a sparse inverse Cholesky approach to approximate the inverse covariance matrix. The statistical efficiency of the resulting unbiased estimating equations is evaluated both in theory and by numerical studies. Our methods are applied to nearly 90,000 satellite-based measurements of water vapor levels over a region in the Southeast Pacific Ocean.
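
    In outline (notation mine, not reproduced from the paper): for Z ~ N(0, Σ(θ)), the exact score equations are

      \frac{\partial \ell}{\partial \theta_i} = \frac{1}{2} \Bigl( Z^{\top} \Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_i} \Sigma^{-1} Z - \operatorname{tr}\bigl( \Sigma^{-1} \tfrac{\partial \Sigma}{\partial \theta_i} \bigr) \Bigr) = 0,

    and substituting a sparse approximation M ≈ Σ^{-1} (from a sparse inverse Cholesky construction) while re-centering each quadratic form at its expectation yields estimating equations of the form

      g_i(\theta) = Z^{\top} M \frac{\partial \Sigma}{\partial \theta_i} M Z - \operatorname{tr}\Bigl( M \frac{\partial \Sigma}{\partial \theta_i} M \, \Sigma \Bigr) = 0,

    which are unbiased because E[Z^T A Z] = tr(A Σ) for any fixed matrix A, whatever the quality of the approximation M.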

  5. An Unbiased Distance-based Outlier Detection Approach for High-dimensional Data

    DEFF Research Database (Denmark)

    Nguyen, Hoang Vu; Gopalkrishnan, Vivekanand; Assent, Ira

    2011-01-01

    than a global property. Different from existing approaches, it is not grid-based and is dimensionality unbiased. Thus, its performance is impervious to grid resolution as well as the curse of dimensionality. In addition, our approach ranks the outliers, allowing users to select the number of desired outliers, thus mitigating the issue of high false alarm rate. Extensive empirical studies on real datasets show that our approach efficiently and effectively detects outliers, even in high-dimensional spaces.

  6. Monofunctional stealth nanoparticle for unbiased single molecule tracking inside living cells.

    Science.gov (United States)

    Lisse, Domenik; Richter, Christian P; Drees, Christoph; Birkholz, Oliver; You, Changjiang; Rampazzo, Enrico; Piehler, Jacob

    2014-01-01

    On the basis of a protein cage scaffold, we have systematically explored intracellular application of nanoparticles for single molecule studies and discovered that recognition by the autophagy machinery plays a key role for rapid metabolism in the cytosol. Intracellular stealth nanoparticles were achieved by heavy surface PEGylation. By combination with a generic approach for nanoparticle monofunctionalization, efficient labeling of intracellular proteins with high fidelity was accomplished, allowing unbiased long-term tracking of proteins in the outer mitochondrial membrane.

  7. Power Generation from a Radiative Thermal Source Using a Large-Area Infrared Rectenna

    Science.gov (United States)

    Shank, Joshua; Kadlec, Emil A.; Jarecki, Robert L.; Starbuck, Andrew; Howell, Stephen; Peters, David W.; Davids, Paul S.

    2018-05-01

    Electrical power generation from a moderate-temperature thermal source by means of direct conversion of infrared radiation is important and highly desirable for energy harvesting from waste heat and micropower applications. Here, we demonstrate direct rectified power generation from an unbiased large-area nanoantenna-coupled tunnel diode rectifier called a rectenna. Using a vacuum radiometric measurement technique with irradiation from a temperature-stabilized thermal source, a generated power density of 8 nW/cm^2 is observed at a source temperature of 450 °C for the unbiased rectenna across an optimized load resistance. The optimized load resistance for the peak power generation for each temperature coincides with the tunnel diode resistance at zero bias and corresponds to the impedance matching condition for a rectifying antenna. Current-voltage measurements of a thermally illuminated large-area rectenna show current zero-crossing shifts into the second quadrant, indicating rectification. Photon-assisted tunneling in the unbiased rectenna is modeled as the mechanism for the large short-circuit photocurrents observed, where the photon energy serves as an effective bias across the tunnel junction. The measured current and voltage across the load resistor as a function of the thermal source temperature represent direct current electrical power generation.

  8. Rethinking economy-wide rebound measures: An unbiased proposal

    International Nuclear Information System (INIS)

    Guerra, Ana-Isabel; Sancho, Ferran

    2010-01-01

    In spite of having first been introduced in the latter half of the nineteenth century, the debate about possible rebound effects from energy efficiency improvements is still an open question in the economic literature. This paper contributes to the existing research on this issue by proposing an unbiased measure for economy-wide rebound effects. The novelty of this economy-wide rebound measure stems from the fact that not only actual energy savings but also potential energy savings are quantified under general equilibrium conditions. Our findings indicate that the use of engineering savings instead of general equilibrium potential savings biases economy-wide rebound effects downward and backfire effects upward. The discrepancies between the traditional indicator and our proposed measure are analysed in the context of the Spanish economy.

  9. Suitability of the line intersect method for sampling hardwood logging residues

    Science.gov (United States)

    A. Jeff Martin

    1976-01-01

    The line intersect method of sampling logging residues was tested in Appalachian hardwoods and was found to provide unbiased estimates of the volume of residue in cubic feet per acre. Thirty-two chains of sample line were established on each of sixteen 1-acre plots on cutover areas in a variety of conditions. Estimates from these samples were then compared to actual...
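
    For context, the standard line-intersect volume estimator (Van Wagner's formula, stated here from the general literature rather than from this paper) computes residue volume per unit area from the diameters d_i of the pieces crossed by a sample line of total length L, in consistent units:

      V = \frac{\pi^{2}}{8L} \sum_{i} d_{i}^{2}.

    The estimator is design-unbiased when lines are placed at random and piece (or line) orientation is randomized; expressing the result in cubic feet per acre is then only a matter of unit conversion.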

  10. An Unbiased Survey of 500 Nearby Stars for Debris Disks: A JCMT Legacy Program

    NARCIS (Netherlands)

    Matthews, B.C.; Greaves, J.S.; Holland, W.S.; Wyatt, M.C.; Barlow, M.J.; Bastien, P.; Beichman, C.A.; Biggs, A.; Butner, H.M.; Dent, W.R.F.; Francesco, J. Di; Dominik, C.; Fissel, L.; Friberg, P.; Gibb, A.G.; Halpern, M.; Ivison, R.J.; Jayawardhana, R.; Jenness, T.; Johnstone, D.; Kavelaars, J.J.; Marshall, J.L.; Phillips, N.; Schieven, G.; Snellen, I.A.G.; Walker, H.J.; Ward-Thompson, D.; Weferling, B.; White, G.J.; Yates, J.; Zhu, M.; Craigon, A.

    2007-01-01

    We present the scientific motivation and observing plan for an upcoming detection survey for debris disks using the James Clerk Maxwell Telescope. The SCUBA-2 Unbiased Nearby Stars (SUNS) survey will observe 500 nearby main-sequence and subgiant stars (100 of each of the A, F, G, K, and M spectral

  11. Automated and unbiased image analyses as tools in phenotypic classification of small-spored Alternaria species

    DEFF Research Database (Denmark)

    Andersen, Birgitte; Hansen, Michael Edberg; Smedsgaard, Jørn

    2005-01-01

    often has been broadly applied to various morphologically and chemically distinct groups of isolates from different hosts. The purpose of this study was to develop and evaluate automated and unbiased image analysis systems that will analyze different phenotypic characters and facilitate testing...

  12. 105-DR Large Sodium Fire Facility decontamination, sampling, and analysis plan

    International Nuclear Information System (INIS)

    Knaus, Z.C.

    1995-01-01

    This is the decontamination, sampling, and analysis plan for the closure activities at the 105-DR Large Sodium Fire Facility at Hanford Reservation. This document supports the 105-DR Large Sodium Fire Facility Closure Plan, DOE-RL-90-25. The 105-DR LSFF, which operated from about 1972 to 1986, was a research laboratory that occupied the former ventilation supply room on the southwest side of the 105-DR Reactor facility in the 100-D Area of the Hanford Site. The LSFF was established to investigate fire fighting and safety associated with alkali metal fires in the liquid metal fast breeder reactor facilities. The decontamination, sampling, and analysis plan identifies the decontamination procedures, sampling locations, any special handling requirements, quality control samples, required chemical analysis, and data validation needed to meet the requirements of the 105-DR Large Sodium Fire Facility Closure Plan in compliance with the Resource Conservation and Recovery Act

  13. Multivariate statistics high-dimensional and large-sample approximations

    CERN Document Server

    Fujikoshi, Yasunori; Shimizu, Ryoichi

    2010-01-01

    A comprehensive examination of high-dimensional analysis of multivariate methods and their real-world applications Multivariate Statistics: High-Dimensional and Large-Sample Approximations is the first book of its kind to explore how classical multivariate methods can be revised and used in place of conventional statistical tools. Written by prominent researchers in the field, the book focuses on high-dimensional and large-scale approximations and details the many basic multivariate methods used to achieve high levels of accuracy. The authors begin with a fundamental presentation of the basic

  14. Large sample NAA facility and methodology development

    International Nuclear Information System (INIS)

    Roth, C.; Gugiu, D.; Barbos, D.; Datcu, A.; Aioanei, L.; Dobrea, D.; Taroiu, I. E.; Bucsa, A.; Ghinescu, A.

    2013-01-01

    A Large Sample Neutron Activation Analysis (LSNAA) facility has been developed at the TRIGA Annular Core Pulsed Reactor (ACPR) operated by the Institute for Nuclear Research in Pitesti, Romania. The central irradiation cavity of the ACPR core can accommodate a large irradiation device. The ACPR neutron flux characteristics are well known and spectrum adjustment techniques have been successfully applied to enhance the thermal component of the neutron flux in the central irradiation cavity. An analysis methodology was developed by using the MCNP code in order to estimate counting efficiency and correction factors for the major perturbing phenomena. Test experiments, comparison with classical instrumental neutron activation analysis (INAA) methods and an international inter-comparison exercise have been performed to validate the new methodology. (authors)

  15. Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.

    Science.gov (United States)

    Youssef, Noha H; Elshahed, Mostafa S

    2008-09-01

    Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that regardless of the utilized approach, species richness estimates obtained are dependent on the size of the analyzed clone libraries. We here propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (Maximum likelihood-based and rarefaction curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates obtained increased with the increase in library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.
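
    The extrapolation step can be sketched generically (the numbers below are invented, and the saturating functional form is illustrative; the authors' fitted curve may differ):

      import numpy as np
      from scipy.optimize import curve_fit

      # Hypothetical richness estimates (e.g. ACE-1) computed at nested
      # subsets of a 16S rRNA clone library; values are invented.
      lib_sizes = np.array([1000, 2000, 4000, 6000, 9000, 13001], dtype=float)
      richness = np.array([6200, 9800, 14100, 16900, 19200, 20909], dtype=float)

      def saturating(n, s_max, k):
          # Michaelis-Menten-type curve: tends to s_max as n -> infinity,
          # so s_max plays the role of the sample-size-unbiased richness.
          return s_max * n / (k + n)

      (s_max, k), _ = curve_fit(saturating, lib_sizes, richness, p0=(25000.0, 5000.0))
      print(f"extrapolated sample-size-unbiased richness: {s_max:.0f}")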

  16. Sample preparation method for ICP-MS measurement of 99Tc in a large amount of environmental samples

    International Nuclear Information System (INIS)

    Kondo, M.; Seki, R.

    2002-01-01

    Sample preparation for measurement of 99Tc in a large amount of soil and water samples by ICP-MS has been developed using 95mTc as a yield tracer. This method is based on the conventional method for a small amount of soil samples using incineration, acid digestion, extraction chromatography (TEVA resin) and ICP-MS measurement. Preliminary concentration of Tc has been introduced by co-precipitation with ferric oxide. The matrix materials in a large amount of samples were more sufficiently removed with keeping the high recovery of Tc than previous method. The recovery of Tc was 70-80% for 100 g soil samples and 60-70% for 500 g of soil and 500 L of water samples. The detection limit of this method was evaluated as 0.054 mBq/kg in 500 g soil and 0.032 μBq/L in 500 L water. The determined value of 99Tc in the IAEA-375 (soil sample collected near the Chernobyl Nuclear Reactor) was 0.25 ± 0.02 Bq/kg. (author)

  17. Estimation after classification using lot quality assurance sampling: corrections for curtailed sampling with application to evaluating polio vaccination campaigns.

    Science.gov (United States)

    Olives, Casey; Valadez, Joseph J; Pagano, Marcello

    2014-03-01

    To assess the bias incurred when curtailment of Lot Quality Assurance Sampling (LQAS) is ignored, to present unbiased estimators, to consider the impact of cluster sampling by simulation, and to apply our method to published polio immunization data from Nigeria. We present estimators of coverage when using two kinds of curtailed LQAS strategies: semicurtailed and curtailed. We study the proposed estimators with independent and clustered data using three field-tested LQAS designs for assessing polio vaccination coverage, with samples of size 60 and decision rules of 9, 21 and 33, and compare them to biased maximum likelihood estimators. Lastly, we present estimates of polio vaccination coverage from previously published data in 20 local government authorities (LGAs) from five Nigerian states. Simulations illustrate substantial bias if one ignores the curtailed sampling design. The proposed estimators show no bias. Clustering does not affect the bias of these estimators. Across simulations, standard errors show signs of inflation as clustering increases. Neither sampling strategy nor LQAS design influences estimates of polio vaccination coverage in the 20 Nigerian LGAs. When coverage is low, semicurtailed LQAS strategies considerably reduce the sample size required to make a decision. Curtailed LQAS designs further reduce the sample size when coverage is high. The results presented dispel the misconception that curtailed LQAS data are unsuitable for estimation. These findings augment the utility of LQAS as a tool for monitoring vaccination efforts by demonstrating that unbiased estimation using curtailed designs is not only possible but that these designs also reduce the sample size.

  18. The Swift/BAT AGN Spectroscopic Survey. IX. The Clustering Environments of an Unbiased Sample of Local AGNs

    Science.gov (United States)

    Powell, M. C.; Cappelluti, N.; Urry, C. M.; Koss, M.; Finoguenov, A.; Ricci, C.; Trakhtenbrot, B.; Allevato, V.; Ajello, M.; Oh, K.; Schawinski, K.; Secrest, N.

    2018-05-01

    We characterize the environments of local accreting supermassive black holes by measuring the clustering of AGNs in the Swift/BAT Spectroscopic Survey (BASS). With 548 AGN in the redshift range 0.01 < z < 0.1, cross-correlated with 2MASS galaxies, and interpreting the clustering via halo occupation distribution and subhalo-based models, we constrain the occupation statistics of the full sample, as well as in bins of absorbing column density and black hole mass. We find that AGNs tend to reside in galaxy group environments, in agreement with previous studies of AGNs throughout a large range of luminosity and redshift, and that on average they occupy their dark matter halos similarly to inactive galaxies of comparable stellar mass. We also find evidence that obscured AGNs tend to reside in denser environments than unobscured AGNs, even when the samples were matched in luminosity, redshift, stellar mass, and Eddington ratio. We show that this can be explained either by significantly different halo occupation distributions or by statistically different host halo assembly histories. Lastly, we see that massive black holes are slightly more likely to reside in central galaxies than black holes of smaller mass.

  19. UNBIASED INCLINATION DISTRIBUTIONS FOR OBJECTS IN THE KUIPER BELT

    International Nuclear Information System (INIS)

    Gulbis, A. A. S.; Elliot, J. L.; Adams, E. R.; Benecchi, S. D.; Buie, M. W.; Trilling, D. E.; Wasserman, L. H.

    2010-01-01

    Using data from the Deep Ecliptic Survey (DES), we investigate the inclination distributions of objects in the Kuiper Belt. We present a derivation for observational bias removal and use this procedure to generate unbiased inclination distributions for Kuiper Belt objects (KBOs) of different DES dynamical classes, with respect to the Kuiper Belt plane. Consistent with previous results, we find that the inclination distribution for all DES KBOs is well fit by the sum of two Gaussians, or a Gaussian plus a generalized Lorentzian, multiplied by sin i. Approximately 80% of KBOs are in the high-inclination grouping. We find that Classical object inclinations are well fit by sin i multiplied by the sum of two Gaussians, with roughly even distribution between Gaussians of widths 2.0^{+0.6}_{-0.5} degrees and 8.1^{+2.6}_{-2.1} degrees. Objects in different resonances exhibit different inclination distributions. The inclinations of Scattered objects are best matched by sin i multiplied by a single Gaussian that is centered at 19.1^{+3.9}_{-3.6} degrees with a width of 6.9^{+4.1}_{-2.7} degrees. Centaur inclinations peak just below 20 degrees, with one exceptionally high-inclination object near 80 degrees. The currently observed inclination distribution of the Centaurs is not dissimilar to that of the Scattered Extended KBOs and Jupiter-family comets, but is significantly different from the Classical and Resonant KBOs. While the sample sizes of some dynamical classes are still small, these results should begin to serve as a critical diagnostic for models of solar system evolution.

  20. Aldehyde-Selective Wacker-Type Oxidation of Unbiased Alkenes Enabled by a Nitrite Co-Catalyst

    KAUST Repository

    Wickens, Zachary K.; Morandi, Bill; Grubbs, Robert H.

    2013-01-01

    Breaking the rules: Reversal of the high Markovnikov selectivity of Wacker-type oxidations was accomplished using a nitrite co-catalyst. Unbiased aliphatic alkenes can be oxidized with high yield and aldehyde selectivity, and several functional groups are tolerated. 18O-labeling experiments indicate that the aldehydic O atom is derived from the nitrite salt.

  2. Utilization of AHWR critical facility for research and development work on large sample NAA

    International Nuclear Information System (INIS)

    Acharya, R.; Dasari, K.B.; Pujari, P.K.; Swain, K.K.; Reddy, A.V.R.; Verma, S.K.; De, S.K.

    2014-01-01

    The graphite reflector position of the AHWR critical facility (CF) was utilized for analysis of large (g to kg scale) samples using internal monostandard neutron activation analysis (IM-NAA). The reactor position was characterized by the cadmium ratio method, using an In monitor for the total flux and for the subcadmium-to-epithermal flux ratio (f). Large sample neutron activation analysis (LSNAA) work was carried out on samples of stainless steel, ancient and new clay potteries, and dross. Large as well as non-standard geometry samples (1 g to 0.5 kg) were irradiated. Radioactive assay was carried out using high-resolution gamma-ray spectrometry. Concentration ratios obtained by IM-NAA were used for a provenance study of 30 clay potteries obtained from excavated Buddhist sites of AP, India. Concentrations of Au and Ag were determined in three large, rather inhomogeneous samples of dross. An X-Z rotary scanning unit has been installed for counting large and inhomogeneous samples. (author)

  3. An Example of an Improvable Rao-Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator.

    Science.gov (United States)

    Galili, Tal; Meilijson, Isaac

    2016-01-02

    The Rao-Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a "better" one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao-Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao-Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated.
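
    To make the mechanism concrete (textbook statement, not the paper's specific example): if δ(X) is unbiased for θ and T = T(X) is sufficient, the Rao-Blackwell improvement is

      \delta^{*}(T) = \mathbb{E}[\, \delta(X) \mid T \,], \qquad \mathbb{E}\, \delta^{*}(T) = \theta, \qquad \operatorname{Var} \delta^{*}(T) \le \operatorname{Var} \delta(X),

    with equality only if δ was already a function of T. Uniqueness and optimality of the improved estimator (Lehmann-Scheffe) additionally require T to be complete, which is exactly the hypothesis that fails in the paper's uniform-distribution example.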

  4. Superposition Enhanced Nested Sampling

    Directory of Open Access Journals (Sweden)

    Stefano Martiniani

    2014-08-01

    Full Text Available The theoretical analysis of many problems in physics, astronomy, and applied mathematics requires an efficient numerical exploration of multimodal parameter spaces that exhibit broken ergodicity. Monte Carlo methods are widely used to deal with these classes of problems, but such simulations suffer from a ubiquitous sampling problem: The probability of sampling a particular state is proportional to its entropic weight. Devising an algorithm capable of sampling efficiently the full phase space is a long-standing problem. Here, we report a new hybrid method for the exploration of multimodal parameter spaces exhibiting broken ergodicity. Superposition enhanced nested sampling combines the strengths of global optimization with the unbiased or athermal sampling of nested sampling, greatly enhancing its efficiency with no additional parameters. We report extensive tests of this new approach for atomic clusters that are known to have energy landscapes for which conventional sampling schemes suffer from broken ergodicity. We also introduce a novel parallelization algorithm for nested sampling.

  5. Poisson sampling - The adjusted and unadjusted estimator revisited

    Science.gov (United States)

    Michael S. Williams; Hans T. Schreuder; Gerardo H. Terrazas

    1998-01-01

    The prevailing assumption, that for Poisson sampling the adjusted estimator Ŷ_a is always substantially more efficient than the unadjusted estimator Ŷ_u, is shown to be incorrect. Some well known theoretical results are applicable since Ŷ_a is a ratio-of-means estimator and Ŷ_u a simple unbiased estimator...
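
    In the usual Poisson-sampling notation (mine, not reproduced from the note): each unit i enters the sample s independently with known probability π_i, the realized sample size n is random with expectation n* = Σ π_i, and the two estimators of the total are commonly written

      \hat{Y}_u = \sum_{i \in s} \frac{y_i}{\pi_i}, \qquad \hat{Y}_a = \frac{n^{*}}{n} \sum_{i \in s} \frac{y_i}{\pi_i}.

    Ŷ_u is the exactly unbiased Horvitz-Thompson estimator, while the adjusted Ŷ_a rescales by the expected-to-realized sample size ratio; as a ratio-of-means estimator it is slightly biased but usually, though per this abstract not always, less variable.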

  6. Exploring Technostress: Results of a Large Sample Factor Analysis

    OpenAIRE

    Jonušauskas, Steponas; Raišienė, Agota Giedrė

    2016-01-01

    With reference to the results of a large sample factor analysis, the article aims to propose a framework for examining technostress in a population. A survey and principal component analysis of a sample consisting of 1013 individuals who use ICT in their everyday work were implemented in the research. Thirteen factors combine 68 questions and explain 59.13 per cent of the variance in responses. Based on the factor analysis, the questionnaire was reframed and prepared to reasonably analyze the respondents' an...

  8. 105-DR Large sodium fire facility soil sampling data evaluation report

    International Nuclear Information System (INIS)

    Adler, J.G.

    1996-01-01

    This report evaluates the soil sampling activities, soil sample analysis, and soil sample data associated with the closure activities at the 105-DR Large Sodium Fire Facility. The evaluation compares these activities to the regulatory requirements for meeting clean closure. The report concludes that there is no soil contamination from the waste treatment activities

  9. Fixed-location hydroacoustic monitoring designs for estimating fish passage using stratified random and systematic sampling

    International Nuclear Information System (INIS)

    Skalski, J.R.; Hoffman, A.; Ransom, B.H.; Steig, T.W.

    1993-01-01

    Five alternative sampling designs are compared using 15 d of 24-h continuous hydroacoustic data to identify the most favorable approach to fixed-location hydroacoustic monitoring of salmonid outmigrants. Four alternative approaches to systematic sampling are compared among themselves and with stratified random sampling (STRS). Stratifying systematic sampling (STSYS) on a daily basis is found to reduce sampling error in multiday monitoring studies. Although sampling precision was predictable with varying levels of effort in STRS, neither magnitude nor direction of change in precision was predictable when effort was varied in systematic sampling (SYS). Modifying systematic sampling to include replicated (e.g., nested) sampling (RSYS) is further shown to provide unbiased point and variance estimates, as does STRS. Numerous short sampling intervals (e.g., 12 samples of 1-min duration per hour) must be monitored hourly using RSYS to provide efficient, unbiased point and interval estimates. For equal levels of effort, STRS outperformed all variations of SYS examined. Parametric approaches to confidence interval estimates are found to be superior to nonparametric interval estimates (i.e., bootstrap and jackknife) in estimating total fish passage. 10 refs., 1 fig., 8 tabs

  10. An Unbiased Unscented Transform Based Kalman Filter for 3D Radar

    Institute of Scientific and Technical Information of China (English)

    WANG Guohong; XIU Jianjuan; HE You

    2004-01-01

    As a derivative-free alternative to the Extended Kalman filter (EKF) in the framework of state estimation, the Unscented Kalman filter (UKF) has potential applications in nonlinear filtering. By noting the fact that the unscented transform is generally biased when converting the radar measurements from spherical coordinates into Cartesian coordinates, a new filtering algorithm for 3D radar, called Unbiased unscented Kalman filter (UUKF), is proposed. The new algorithm is validated by Monte Carlo simulation runs. Simulation results show that the UUKF is more effective than the UKF, EKF and the Converted measurement Kalman filter (CMKF).
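
    The underlying bias is the classical converted-measurement effect: for Gaussian bearing noise w ~ N(0, σ_θ^2) and measured range r_m and bearing θ_m,

      \mathbb{E}[\cos(\theta + w)] = e^{-\sigma_\theta^{2}/2} \cos\theta \quad \Longrightarrow \quad x_u = e^{\sigma_\theta^{2}/2}\, r_m \cos\theta_m, \qquad y_u = e^{\sigma_\theta^{2}/2}\, r_m \sin\theta_m,

    shown here for the 2D polar case (the paper treats the 3D spherical case, where an analogous elevation factor appears); building this kind of multiplicative compensation into the transformed sigma points is the sort of correction the UUKF applies.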

  11. Automatic sampling for unbiased and efficient stereological estimation using the proportionator in biological studies

    DEFF Research Database (Denmark)

    Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb

    2008-01-01

    Quantification of tissue properties is improved using the general proportionator sampling and estimation procedure: automatic image analysis and non-uniform sampling with probability proportional to size (PPS). The complete region of interest is partitioned into fields of view, and every field of view is given a weight (the size) proportional to the total amount of requested image analysis features in it. The fields of view sampled with known probabilities proportional to individual weight are the only ones seen by the observer who provides the correct count. Even though the image analysis ... cerebellum, total number of orexin positive neurons in transgenic mice brain, and estimating the absolute area and the areal fraction of β islet cells in dog pancreas. The proportionator was at least eight times more efficient (precision and time combined) than traditional computer controlled sampling.
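
    A minimal sketch of the PPS logic (toy data; the real proportionator weights come from automatic image analysis, and the estimator shown is the with-replacement Hansen-Hurwitz form rather than the actual implementation):

      import numpy as np

      rng = np.random.default_rng(3)

      def proportionator_estimate(weights, true_counts, n_fields=20):
          """PPS sketch: draw fields of view with probability proportional
          to an automatic image-analysis weight, count only sampled fields,
          and return the Hansen-Hurwitz estimator of the total count."""
          p = weights / weights.sum()            # selection probabilities
          idx = rng.choice(len(weights), size=n_fields, replace=True, p=p)
          return np.mean(true_counts[idx] / p[idx])

      # Toy region of 1000 fields; the weight is a noisy proxy for the count.
      true_counts = rng.poisson(5.0, size=1000).astype(float)
      weights = true_counts + rng.uniform(0.5, 1.5, size=1000)
      print(proportionator_estimate(weights, true_counts), true_counts.sum())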

  12. Unbiased group-wise image registration: applications in brain fiber tract atlas construction and functional connectivity analysis.

    Science.gov (United States)

    Geng, Xiujuan; Gu, Hong; Shin, Wanyong; Ross, Thomas J; Yang, Yihong

    2011-10-01

    We propose an unbiased implicit-reference group-wise (IRG) image registration method and demonstrate its applications in the construction of a brain white matter fiber tract atlas and the analysis of resting-state functional MRI (fMRI) connectivity. Most image registration techniques pair-wise align images to a selected reference image and group analyses are performed in the reference space, which may produce bias. The proposed method jointly estimates transformations, with an elastic deformation model, registering all images to an implicit reference corresponding to the group average. The unbiased registration is applied to build a fiber tract atlas by registering a group of diffusion tensor images. Compared to reference-based registration, the IRG registration improves the fiber track overlap within the group. After applying the method in the fMRI connectivity analysis, results suggest a general improvement in functional connectivity maps at a group level in terms of larger cluster size and higher average t-scores.

  13. Associations between sociodemographic, sampling and health factors and various salivary cortisol indicators in a large sample without psychopathology

    NARCIS (Netherlands)

    Vreeburg, Sophie A.; Kruijtzer, Boudewijn P.; van Pelt, Johannes; van Dyck, Richard; DeRijk, Roel H.; Hoogendijk, Witte J. G.; Smit, Johannes H.; Zitman, Frans G.; Penninx, Brenda

    Background: Cortisol levels are increasingly often assessed in large-scale psychosomatic research. Although determinants of different salivary cortisol indicators have been described, they have not yet been systematically studied within the same study with a large sample size. Sociodemographic,

  14. A course in mathematical statistics and large sample theory

    CERN Document Server

    Bhattacharya, Rabi; Patrangenaru, Victor

    2016-01-01

    This graduate-level textbook is primarily aimed at graduate students of statistics, mathematics, science, and engineering who have had an undergraduate course in statistics, an upper division course in analysis, and some acquaintance with measure theoretic probability. It provides a rigorous presentation of the core of mathematical statistics. Part I of this book constitutes a one-semester course on basic parametric mathematical statistics. Part II deals with the large sample theory of statistics — parametric and nonparametric, and its contents may be covered in one semester as well. Part III provides brief accounts of a number of topics of current interest for practitioners and other disciplines whose work involves statistical methods. Large Sample theory with many worked examples, numerical calculations, and simulations to illustrate theory Appendices provide ready access to a number of standard results, with many proofs Solutions given to a number of selected exercises from Part I Part II exercises with ...

  15. Statistical evaluations of current sampling procedures and incomplete core recovery

    International Nuclear Information System (INIS)

    Heasler, P.G.; Jensen, L.

    1994-03-01

    This document develops two formulas that describe the effects of incomplete recovery on core sampling results for the Hanford waste tanks. The formulas evaluate incomplete core recovery from a worst-case (i.e., biased) and a best-case (i.e., unbiased) perspective. A core sampler is unbiased if the sample material recovered is a random sample of the material in the tank, while any sampler that preferentially recovers a particular type of waste over others is a biased sampler. There is strong evidence to indicate that the push-mode sampler presently used at the Hanford site is a biased one. The formulas presented here show the effects of incomplete core recovery on the accuracy of composition measurements, as functions of the vertical variability in the waste. These equations are evaluated using vertical variability estimates from previously sampled tanks (B110, U110, C109). Assuming that the values of vertical variability used in this study adequately describe the Hanford tank farm, one can use the formulas to compute the effect of incomplete recovery on the accuracy of an average constituent estimate. To determine acceptable recovery limits, we have assumed that the relative error of such an estimate should be no more than 20%.

  16. Unbiased minimum variance estimator of a matrix exponential function. Application to Boltzmann/Bateman coupled equations solving

    International Nuclear Information System (INIS)

    Dumonteil, E.; Diop, C. M.

    2009-01-01

    This paper derives an unbiased minimum variance estimator (UMVE) of a matrix exponential function of a normal mean. The result is then used to propose a reference scheme to solve Boltzmann/Bateman coupled equations with Monte Carlo transport codes. The last section presents numerical results on a simple example. (authors)

  17. Unbiased roughness measurements: the key to better etch performance

    Science.gov (United States)

    Liang, Andrew; Mack, Chris; Sirard, Stephen; Liang, Chen-wei; Yang, Liu; Jiang, Justin; Shamma, Nader; Wise, Rich; Yu, Jengyi; Hymes, Diane

    2018-03-01

    Edge placement error (EPE) has become an increasingly critical metric to enable Moore's Law scaling. Stochastic variations, as characterized for lines by line width roughness (LWR) and line edge roughness (LER), are dominant factors in EPE and known to increase with the introduction of EUV lithography. However, despite recommendations from ITRS, NIST, and SEMI standards, the industry has not agreed upon a methodology to quantify these properties. Thus, differing methodologies applied to the same image often result in different roughness measurements and conclusions. To standardize LWR and LER measurements, Fractilia has developed an unbiased measurement that uses a raw unfiltered line scan to subtract out image noise and distortions. By using Fractilia's inverse linescan model (FILM) to guide development, we will highlight the key influences of roughness metrology on plasma-based resist smoothing processes. Test wafers were deposited to represent a 5 nm node EUV logic stack. The patterning stack consists of a core Si target layer with spin-on carbon (SOC) as the hardmask and spin-on glass (SOG) as the cap. Next, these wafers were exposed through an ASML NXE 3350B EUV scanner with an advanced chemically amplified resist (CAR). Afterwards, these wafers were etched through a variety of plasma-based resist smoothing techniques using a Lam Kiyo conductor etch system. Dense line and space patterns on the etched samples were imaged through advanced Hitachi CDSEMs and the LER and LWR were measured through both Fractilia and an industry standard roughness measurement software. By employing Fractilia to guide plasma-based etch development, we demonstrate that Fractilia produces accurate roughness measurements on resist in contrast to an industry standard measurement software. These results highlight the importance of subtracting out SEM image noise to obtain quicker developmental cycle times and lower target layer roughness.
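
    The core of the unbiased measurement (summarizing Mack's published approach in outline, not the proprietary implementation): SEM image noise adds to the true edge variance in quadrature, so a noise floor estimated from the raw, unfiltered linescans can be subtracted,

      \sigma_{\mathrm{unbiased}}^{2} = \sigma_{\mathrm{measured}}^{2} - \sigma_{\mathrm{noise}}^{2},

    with σ_noise typically read off the flat high-frequency portion of the power spectral density of the raw linescan, where true roughness correlations have died out.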

  18. Double sampling with multiple imputation to answer large sample meta-research questions: Introduction and illustration by evaluating adherence to two simple CONSORT guidelines

    Directory of Open Access Journals (Sweden)

    Patrice L. Capers

    2015-03-01

    Full Text Available BACKGROUND: Meta-research can involve manual retrieval and evaluation of research, which is resource intensive. Creation of high throughput methods (e.g., search heuristics, crowdsourcing) has improved feasibility of large meta-research questions, but possibly at the cost of accuracy. OBJECTIVE: To evaluate the use of double sampling combined with multiple imputation (DS+MI) to address meta-research questions, using as an example adherence of PubMed entries to two simple Consolidated Standards of Reporting Trials (CONSORT) guidelines for titles and abstracts. METHODS: For the DS large sample, we retrieved all PubMed entries satisfying the filters: RCT; human; abstract available; and English language (n=322,107). For the DS subsample, we randomly sampled 500 entries from the large sample. The large sample was evaluated with a lower rigor, higher throughput (RLOTHI) method using search heuristics, while the subsample was evaluated using a higher rigor, lower throughput (RHITLO) human rating method. Multiple imputation of the missing-completely-at-random RHITLO data for the large sample was informed by: RHITLO data from the subsample; RLOTHI data from the large sample; whether a study was an RCT; and country and year of publication. RESULTS: The RHITLO and RLOTHI methods in the subsample largely agreed (phi coefficients: title=1.00, abstract=0.92). Compliance with abstract and title criteria has increased over time, with non-US countries improving more rapidly. DS+MI logistic regression estimates were more precise than subsample estimates (e.g., 95% CI for change in title and abstract compliance by year: subsample RHITLO 1.050-1.174 vs. DS+MI 1.082-1.151). As evidence of improved accuracy, DS+MI coefficient estimates were closer to RHITLO than the large sample RLOTHI. CONCLUSIONS: Our results support our hypothesis that DS+MI would result in improved precision and accuracy. This method is flexible and may provide a practical way to examine large corpora of

  19. Unbiased determination of polarized parton distributions and their uncertainties

    CERN Document Server

    Ball, Richard D.; Guffanti, Alberto; Nocera, Emanuele R.; Ridolfi, Giovanni; Rojo, Juan

    2013-01-01

    We present a determination of a set of polarized parton distributions (PDFs) of the nucleon, at next-to-leading order, from a global set of longitudinally polarized deep-inelastic scattering data: NNPDFpol1.0. The determination is based on the NNPDF methodology: a Monte Carlo approach, with neural networks used as unbiased interpolants, previously applied to the determination of unpolarized parton distributions, and designed to provide a faithful and statistically sound representation of PDF uncertainties. We present our dataset, its statistical features, and its Monte Carlo representation. We summarize the technique used to solve the polarized evolution equations and its benchmarking, and the method used to compute physical observables. We review the NNPDF methodology for parametrization and fitting of neural networks, the algorithm used to determine the optimal fit, and its adaptation to the polarized case. We finally present our set of polarized parton distributions. We discuss its statistical properties, ...

  20. Absorption and folding of melittin onto lipid bilayer membranes via unbiased atomic detail microsecond molecular dynamics simulation.

    Science.gov (United States)

    Chen, Charles H; Wiedman, Gregory; Khan, Ayesha; Ulmschneider, Martin B

    2014-09-01

    Unbiased molecular simulation is a powerful tool to study the atomic details driving functional structural changes or folding pathways of highly fluid systems, which present great challenges experimentally. Here we apply unbiased long-timescale molecular dynamics simulation to study the ab initio folding and partitioning of melittin, a template amphiphilic membrane active peptide. The simulations reveal that the peptide binds strongly to the lipid bilayer in an unstructured configuration. Interfacial folding results in a localized bilayer deformation. Akin to purely hydrophobic transmembrane segments the surface bound native helical conformer is highly resistant against thermal denaturation. Circular dichroism spectroscopy experiments confirm the strong binding and thermostability of the peptide. The study highlights the utility of molecular dynamics simulations for studying transient mechanisms in fluid lipid bilayer systems. This article is part of a Special Issue entitled: Interfacially Active Peptides and Proteins. Guest Editors: William C. Wimley and Kalina Hristova. Copyright © 2014. Published by Elsevier B.V.

  1. Creel survey sampling designs for estimating effort in short-duration Chinook salmon fisheries

    Science.gov (United States)

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2013-01-01

    Chinook Salmon Oncorhynchus tshawytscha sport fisheries in the Columbia River basin are commonly monitored using roving creel survey designs and require precise, unbiased catch estimates. The objective of this study was to examine the relative bias and precision of total catch estimates using various sampling designs to estimate angling effort under the assumption that mean catch rate was known. We obtained information on angling populations based on direct visual observations of portions of Chinook Salmon fisheries in three Idaho river systems over a 23-d period. Based on the angling population, Monte Carlo simulations were used to evaluate the properties of effort and catch estimates for each sampling design. All sampling designs evaluated were relatively unbiased. Systematic random sampling (SYS) resulted in the most precise estimates. The SYS and simple random sampling designs had mean square error (MSE) estimates that were generally half of those observed with cluster sampling designs. The SYS design was more efficient (i.e., higher accuracy per unit cost) than a two-cluster design. Increasing the number of clusters available for sampling within a day decreased the MSE of estimates of daily angling effort, but the MSE of total catch estimates was variable depending on the fishery. The results of our simulations provide guidelines on the relative influence of sample sizes and sampling designs on parameters of interest in short-duration Chinook Salmon fisheries.
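
    The design comparison can be reproduced in miniature with a Monte Carlo experiment of the same flavor as the study's simulations. The effort curve, sample sizes, and all numbers below are invented for illustration; this is not the study's data or code.

```python
# Toy comparison of simple random vs. systematic sampling of instantaneous
# angler counts within one fishing day (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
H = 12 * 60                        # a 12-hour day at 1-minute resolution
t = np.arange(H)
anglers = 20 + 15 * np.sin(np.pi * t / H) + rng.normal(0, 2, H)
true_effort = anglers.sum() / 60   # total effort in angler-hours

n = 8                              # instantaneous counts taken per day
srs_err, sys_err = [], []
for _ in range(5000):
    # Simple random sampling of n time points.
    srs = rng.choice(H, n, replace=False)
    srs_err.append((anglers[srs].mean() * H / 60 - true_effort) ** 2)
    # Systematic sampling: random start, then equal spacing.
    start = rng.integers(0, H // n)
    sys_idx = start + (H // n) * np.arange(n)
    sys_err.append((anglers[sys_idx].mean() * H / 60 - true_effort) ** 2)

print(f"MSE  SRS: {np.mean(srs_err):.2f}   systematic: {np.mean(sys_err):.2f}")
```

    With a smooth within-day trend like this one, the systematic design's MSE comes out well below that of simple random sampling, mirroring the pattern reported above.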

  2. Unextendible Mutually Unbiased Bases (after Mandayam, Bandyopadhyay, Grassl and Wootters

    Directory of Open Access Journals (Sweden)

    Koen Thas

    2016-11-01

    Full Text Available We consider questions posed in a recent paper of Mandayam et al. (2014) on the nature of “unextendible mutually unbiased bases.” We describe a conceptual framework to study these questions, using a connection proved by the author in Thas (2009) between the set of nonidentity generalized Pauli operators on the Hilbert space of N d-level quantum systems, d a prime, and the geometry of non-degenerate alternating bilinear forms of rank N over finite fields F_d. We then supply alternative and short proofs of results obtained in Mandayam et al. (2014), as well as new general bounds for the problems considered in loc. cit. In this setting, we also solve Conjecture 1 of Mandayam et al. (2014) and speculate on variations of this conjecture.

  3. Fluidic sampling

    International Nuclear Information System (INIS)

    Houck, E.D.

    1992-01-01

    This paper covers the development of the fluidic sampler and its testing in a fluidic transfer system. The major findings of this paper are as follows. Fluidic jet samplers can dependably produce unbiased samples of acceptable volume. The fluidic transfer system with a fluidic sampler in-line will transfer water to a net lift of 37.2--39.9 feet at an average rate of 0.02--0.05 gpm (77--192 cc/min). The fluidic sample system circulation rate compares very favorably with the normal 0.016--0.026 gpm (60--100 cc/min) circulation rate that is commonly produced for this lift and solution with the jet-assisted airlift sample system normally used at ICPP. The volume of the sample taken with a fluidic sampler depends on the motive pressure to the fluidic sampler, the sample bottle size, and the jet characteristics of the fluidic sampler. The fluidic sampler should be supplied with fluid at a motive pressure of 140--150 percent of the peak vacuum-producing motive pressure for the jet in the sampler. Fluidic transfer systems should be operated by emptying a full pumping chamber to nearly empty or empty during the pumping cycle; this maximizes the solution transfer rate

  4. Detection of seizures from small samples using nonlinear dynamic system theory.

    Science.gov (United States)

    Yaylali, I; Koçak, H; Jayakar, P

    1996-07-01

    The electroencephalogram (EEG), like many other biological phenomena, is quite likely governed by nonlinear dynamics. Certain characteristics of the underlying dynamics have recently been quantified by computing the correlation dimensions (D2) of EEG time series data. In this paper, D2 of the unbiased autocovariance function of the scalp EEG data was used to detect electrographic seizure activity. Digital EEG data were acquired at a sampling rate of 200 Hz per channel and organized in continuous frames (duration 2.56 s, 512 data points). To increase the reliability of D2 computations with short duration data, raw EEG data were initially simplified using unbiased autocovariance analysis to highlight the periodic activity that is present during seizures. The D2 computation was then performed from the unbiased autocovariance function of each channel using the Grassberger-Procaccia method with Theiler's box-assisted correlation algorithm. Even with short duration data, this preprocessing proved to be computationally robust and displayed no significant sensitivity to implementation details such as the choices of embedding dimension and box size. The system successfully identified various types of seizures in clinical studies.
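
    The unbiased autocovariance preprocessing step is compact enough to state in code. A minimal sketch, assuming one 512-point frame sampled at 200 Hz as described; the test signal is invented:

```python
# Unbiased autocovariance of a single EEG frame: divide by (N - k), not N.
import numpy as np

def unbiased_autocovariance(x: np.ndarray) -> np.ndarray:
    x = x - x.mean()
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / (n - k) for k in range(n)])

fs, n = 200, 512                   # sampling rate (Hz) and frame length
t = np.arange(n) / fs
# Noisy 4 Hz rhythm, loosely mimicking periodic seizure activity.
frame = np.sin(2 * np.pi * 4 * t) + 0.5 * np.random.default_rng(2).normal(size=n)
acov = unbiased_autocovariance(frame)
print(acov[:5])                    # the lag-0 term estimates the variance
```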

  5. Sampling strategy for a large scale indoor radiation survey - a pilot project

    International Nuclear Information System (INIS)

    Strand, T.; Stranden, E.

    1986-01-01

    Optimisation of a stratified random sampling strategy for large scale indoor radiation surveys is discussed. It is based on the results from a small scale pilot project where variances in dose rates within different categories of houses were assessed. By selecting a predetermined precision level for the mean dose rate in a given region, the number of measurements needed can be optimised. The results of a pilot project in Norway are presented together with the development of the final sampling strategy for a planned large scale survey. (author)
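
    The abstract does not give the optimisation formulas. A standard way to turn per-category (stratum) variances and a target precision into a measurement budget is Neyman allocation, sketched below with invented numbers:

```python
# Neyman allocation for a stratified indoor dose-rate survey (hypothetical
# stratum weights and standard deviations; not the paper's data).
import numpy as np

W = np.array([0.5, 0.3, 0.2])      # fraction of houses in each category
S = np.array([20.0, 35.0, 50.0])   # assumed dose-rate std dev per category
target_se = 2.0                    # desired standard error of the mean

# Total n achieving the target SE under Neyman allocation (no fpc):
n_total = np.sum(W * S) ** 2 / target_se**2
n_h = np.ceil(n_total * W * S / np.sum(W * S)).astype(int)
print(f"total n = {n_total:.0f}, per-stratum allocation: {n_h}")
```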

  6. The Herschel/HIFI unbiased spectral survey of the solar-mass protostar IRAS16293

    Science.gov (United States)

    Bottinelli, S.; Caux, E.; Cecarelli, C.; Kahane, C.

    2012-03-01

    Unbiased spectral surveys are powerful tools to study the chemistry and the physics of star forming regions, because they can provide a complete census of the molecular content and the observed lines probe the physical structure of the source. While unbiased surveys at the millimeter and sub-millimeter wavelengths observable from ground-based telescopes have previously been performed towards several high-mass protostars, very little data exist on low-mass protostars, with only one such ground-based survey carried out towards this kind of object. However, since low-mass protostars are believed to resemble our own Sun's progenitor, the information provided by spectral surveys is crucial in order to uncover the birth mechanisms of low-mass stars and hence of our Sun. To help fill this gap in our understanding, we carried out an almost complete spectral survey towards the solar-type protostar IRAS16293-2422 with the HIFI instrument onboard Herschel. The observations covered a range of about 700 GHz, in which a few hundred lines were detected at more than 3σ significance and identified. All the detected lines which were free from obvious blending effects were fitted with Gaussians to estimate their basic kinematic properties. Contrary to what is observed in the millimeter range, no lines from complex organic molecules have been observed. In this work, we characterize the different components of IRAS16293-2422 (a known binary at least) by analyzing the numerous emission and absorption lines identified.

  7. An open-flow pulse ionization chamber for alpha spectrometry of large-area samples

    International Nuclear Information System (INIS)

    Johansson, L.; Roos, B.; Samuelsson, C.

    1992-01-01

    The presented open-flow pulse ionization chamber was developed to make alpha spectrometry on large-area surfaces easy. One side of the chamber is left open, where the sample is to be placed. The sample acts as a chamber wall and thereby defines the detector volume. The sample area can be as large as 400 cm². To prevent air from entering the volume, there is a constant gas flow through the detector, entering at the bottom of the chamber and leaking out at the sides of the sample. The method results in good energy resolution and has considerable applicability in retrospective radon research. Alpha spectra obtained in the retrospective measurements originate from ²¹⁰Po, built up in the sample from the radon daughters recoiled into a glass surface. (au)

  8. Fast concentration of dissolved forms of cesium radioisotopes from large seawater samples

    International Nuclear Information System (INIS)

    Jan Kamenik; Henrieta Dulaiova; Ferdinand Sebesta; Kamila St'astna; Czech Technical University, Prague

    2013-01-01

    The method developed for cesium concentration from large freshwater samples was tested and adapted for analysis of cesium radionuclides in seawater. Concentration of dissolved forms of cesium in large seawater samples (about 100 L) was performed using composite absorbers AMP-PAN and KNiFC-PAN with ammonium molybdophosphate and potassium–nickel hexacyanoferrate(II) as active components, respectively, and polyacrylonitrile as a binding polymer. A specially designed chromatography column with bed volume (BV) 25 mL allowed fast flow rates of seawater (up to 1,200 BV h⁻¹). The recovery yields were determined by ICP-MS analysis of stable cesium added to the seawater sample. Both absorbers proved usable for cesium concentration from large seawater samples. KNiFC-PAN material was slightly more effective in cesium concentration from acidified seawater (recovery yield around 93% for 700 BV h⁻¹). This material showed similar efficiency in cesium concentration from natural seawater as well. The activity concentrations of ¹³⁷Cs determined in seawater from the central Pacific Ocean were 1.5 ± 0.1 and 1.4 ± 0.1 Bq m⁻³ for an offshore (January 2012) and a coastal (February 2012) locality, respectively; ¹³⁴Cs activities were below the detection limit. (author)

  9. Landslide susceptibility assessment in the Upper Orcia Valley (Southern Tuscany, Italy through conditional analysis: a contribution to the unbiased selection of causal factors

    Directory of Open Access Journals (Sweden)

    F. Vergari

    2011-05-01

    Full Text Available In this work, conditional multivariate analysis was applied to evaluate landslide susceptibility in the Upper Orcia River Basin (Tuscany, Italy), where widespread denudation processes and agricultural practices have a mutual impact. We introduced an unbiased procedure for causal factor selection based on some intuitive statistical indices. This procedure is aimed at detecting, among different potential factors, the most discriminant ones in a given study area. Moreover, this step avoids generating spatial units that are too small and statistically insignificant when the factor maps are intersected. Finally, a validation procedure was applied based on the partition of the landslide inventory from multi-temporal aerial photo interpretation.

    Although encompassing some sources of uncertainty, the applied susceptibility assessment method provided a satisfactory and unbiased prediction for the Upper Orcia Valley. The results confirmed the efficiency of the selection procedure as an unbiased step of the landslide susceptibility evaluation. Furthermore, we achieved the purpose of presenting a conceptually simple but effective statistical procedure for susceptibility analysis that can also be used by decision makers in land management.

  10. Reliability of different sampling densities for estimating and mapping lichen diversity in biomonitoring studies

    International Nuclear Information System (INIS)

    Ferretti, M.; Brambilla, E.; Brunialti, G.; Fornasier, F.; Mazzali, C.; Giordani, P.; Nimis, P.L.

    2004-01-01

    Sampling requirements related to lichen biomonitoring include optimal sampling density for obtaining precise and unbiased estimates of population parameters and maps of known reliability. Two available datasets on a sub-national scale in Italy were used to determine a cost-effective sampling density to be adopted in medium-to-large-scale biomonitoring studies. As expected, the relative error in the mean Lichen Biodiversity (Italian acronym: BL) values and the error associated with the interpolation of BL values for (unmeasured) grid cells increased as the sampling density decreased. However, the increase in size of the error was not linear and even a considerable reduction (up to 50%) in the original sampling effort led to a far smaller increase in errors in the mean estimates (<6%) and in mapping (<18%) as compared with the original sampling densities. A reduction in the sampling effort can result in considerable savings of resources, which can then be used for a more detailed investigation of potentially problematic areas. It is, however, necessary to decide the acceptable level of precision at the design stage of the investigation, so as to select the proper sampling density. - An acceptable level of precision must be decided before determining a sampling design

  11. A self-sampling method to obtain large volumes of undiluted cervicovaginal secretions.

    Science.gov (United States)

    Boskey, Elizabeth R; Moench, Thomas R; Hees, Paul S; Cone, Richard A

    2003-02-01

    Studies of vaginal physiology and pathophysiology sometimes require larger volumes of undiluted cervicovaginal secretions than can be obtained by current methods. A convenient method for self-sampling these secretions outside a clinical setting can facilitate such studies of reproductive health. The goal was to develop a vaginal self-sampling method for collecting large volumes of undiluted cervicovaginal secretions. A menstrual collection device (the Instead cup) was inserted briefly into the vagina to collect secretions that were then retrieved from the cup by centrifugation in a 50-ml conical tube. All 16 women asked to perform this procedure found it feasible and acceptable. Among 27 samples, an average of 0.5 g of secretions (range, 0.1-1.5 g) was collected. This is a rapid and convenient self-sampling method for obtaining relatively large volumes of undiluted cervicovaginal secretions. It should prove suitable for a wide range of assays, including those involving sexually transmitted diseases, microbicides, vaginal physiology, immunology, and pathophysiology.

  12. Sample preparation for large-scale bioanalytical studies based on liquid chromatographic techniques.

    Science.gov (United States)

    Medvedovici, Andrei; Bacalum, Elena; David, Victor

    2018-01-01

    The quality of the analytical data obtained in large-scale, long-term bioanalytical studies based on liquid chromatography depends on a number of experimental factors, including the choice of sample preparation method. This review discusses this tedious part of bioanalytical studies as applied to large-scale sample sets, with liquid chromatography coupled to different detector types as the core analytical technique. The main sample preparation methods covered in this paper are protein precipitation, liquid-liquid extraction, solid-phase extraction, derivatization, and their variants. They are discussed in terms of analytical performance, fields of application, advantages, and disadvantages. The cited literature covers mainly the analytical achievements of the last decade, although several earlier papers that have become more valuable over time are also included in this review. Copyright © 2017 John Wiley & Sons, Ltd.

  13. Toward a Principled Sampling Theory for Quasi-Orders.

    Science.gov (United States)

    Ünlü, Ali; Schrepp, Martin

    2016-01-01

    Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, on even up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets.
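
    To see why a principled procedure is needed, consider the naive baseline: draw a random relation, add the diagonal, and force transitivity by closure. The sketch below is that baseline, not the paper's doubly inductive algorithm; the closure step is precisely where a sampling bias creeps in, since it pushes every draw toward larger quasi-orders.

```python
# Naive (biased) random quasi-order generator: random relation + reflexive
# diagonal + Warshall transitive closure. Illustrative baseline only.
import numpy as np

def naive_quasi_order(n: int, p: float, rng) -> np.ndarray:
    rel = rng.random((n, n)) < p            # random directed relation
    np.fill_diagonal(rel, True)             # reflexivity
    for k in range(n):                      # transitive closure (Warshall)
        rel |= np.outer(rel[:, k], rel[k, :])
    return rel

rng = np.random.default_rng(3)
sizes = [naive_quasi_order(8, 0.15, rng).sum() for _ in range(1000)]
print(f"mean relation size: {np.mean(sizes):.1f} of {8 * 8} pairs")
```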

  15. Unbiased Scanning Method and Data Banking Approach Using Ultra-High Performance Liquid Chromatography Coupled with High-Resolution Mass Spectrometry for Quantitative Comparison of Metabolite Exposure in Plasma across Species Analyzed at Different Dates.

    Science.gov (United States)

    Gao, Hongying; Deng, Shibing; Obach, R Scott

    2015-12-01

    An unbiased scanning methodology using ultra-high performance liquid chromatography coupled with high-resolution mass spectrometry was used to bank data and plasma samples so that data generated at different dates could be compared. This method was applied to bank the data generated earlier in animal samples and then to compare the exposure to metabolites in animals versus humans for safety assessment. With neither authentic standards nor prior knowledge of the identities and structures of metabolites, full scans for precursor ions and all ion fragments (AIF) were employed with a generic gradient LC method to analyze plasma samples at positive and negative polarity, respectively. Of a total of 22 tested drugs and metabolites, 21 analytes were detected using this unbiased scanning method; naproxen was not detected, owing to low sensitivity at negative polarity and interference at positive polarity, and 4'- and 5-hydroxydiclofenac were not separated by the generic UPLC method. Statistical analysis of the peak area ratios of the analytes versus the internal standard in five repetitive analyses over approximately 1 year demonstrated that the analysis variation was significantly different from sample instability. The confidence limits for comparing exposure using peak area ratios of metabolites in animal versus human plasma measured approximately 1 year apart were comparable to those for analyses undertaken side by side on the same days. These statistical results showed that it is feasible to compare data generated at different dates with neither authentic standards nor prior knowledge of the analytes.

  16. Estimating Unbiased Treatment Effects in Education Using a Regression Discontinuity Design

    Directory of Open Access Journals (Sweden)

    William C. Smith

    2014-08-01

    Full Text Available The ability of regression discontinuity (RD) designs to provide an unbiased estimate of a treatment effect, while overcoming the ethical concerns that plague randomized controlled trials (RCTs), makes them a valuable and useful approach in education evaluation. RD is the only quasi-experimental approach explicitly recognized by the Institute of Education Sciences as meeting the prerequisites for establishing a causal relationship. Unfortunately, the statistical complexity of the RD design has limited its application in education research. This article provides a less technical introduction to RD for education researchers and practitioners. Using visual analysis to aid conceptual understanding, the article walks readers through the essential steps of a sharp RD design using hypothetical, but realistic, district intervention data and provides additional resources for further exploration.
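
    For readers who want to see the estimator rather than the exposition, a sharp RD estimate reduces to comparing local regression fits on either side of the cutoff. The sketch below uses simulated data with an invented bandwidth and effect size; it illustrates the logic of the design, not the article's worked example.

```python
# Sharp RD sketch: local linear fits on each side of the cutoff, compared
# at the cutoff itself (simulated data, illustrative bandwidth).
import numpy as np

rng = np.random.default_rng(4)
n, cutoff, effect = 2000, 0.0, 3.0
x = rng.uniform(-10, 10, n)                 # running variable
treated = x >= cutoff                       # sharp assignment rule
y = 50 + 0.8 * x + effect * treated + rng.normal(0, 2, n)

h = 3.0                                     # bandwidth around the cutoff
left = (x >= cutoff - h) & (x < cutoff)
right = (x >= cutoff) & (x <= cutoff + h)

b_left = np.polyfit(x[left], y[left], 1)    # local linear fit, control side
b_right = np.polyfit(x[right], y[right], 1) # local linear fit, treated side
tau = np.polyval(b_right, cutoff) - np.polyval(b_left, cutoff)
print(f"estimated effect at cutoff: {tau:.2f} (true value: {effect})")
```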

  17. Uncertainty budget in internal monostandard NAA for small and large size samples analysis

    International Nuclear Information System (INIS)

    Dasari, K.B.; Acharya, R.

    2014-01-01

    Evaluation of the total uncertainty budget for a determined concentration value is important under a quality assurance programme. Concentration calculation in NAA is carried out by the relative method or by the k0-based internal monostandard NAA (IM-NAA) method. The IM-NAA method has been used for the analysis of small and large samples of clay potteries. An attempt was made to identify the uncertainty components in IM-NAA, and the uncertainty budget for La in both small and large size samples was evaluated and compared. (author)

  18. In situ sampling of small volumes of soil solution using modified micro-suction cups

    NARCIS (Netherlands)

    Shen, Jianbo; Hoffland, E.

    2007-01-01

    Two modified designs of micro-pore-water samplers were tested for their capacity to collect unbiased soil solution samples containing zinc and citrate. The samplers had either ceramic or polyethersulfone (PES) suction cups. Laboratory tests of the micro-samplers were conducted using (a) standard

  19. A systematic random sampling scheme optimized to detect the proportion of rare synapses in the neuropil.

    Science.gov (United States)

    da Costa, Nuno Maçarico; Hepp, Klaus; Martin, Kevan A C

    2009-05-30

    Synapses can only be morphologically identified by electron microscopy and this is often a very labor-intensive and time-consuming task. When quantitative estimates are required for pathways that contribute a small proportion of synapses to the neuropil, the problems of accurate sampling are particularly severe and the total time required may become prohibitive. Here we present a sampling method devised to count the percentage of rarely occurring synapses in the neuropil using a large sample (approximately 1000 sampling sites), with the strong constraint of doing it in reasonable time. The strategy, which uses the unbiased physical disector technique, resembles that used in particle physics to detect rare events. We validated our method in the primary visual cortex of the cat, where we used biotinylated dextran amine to label thalamic afferents and measured the density of their synapses using the physical disector method. Our results show that we could obtain accurate counts of the labeled synapses, even when they represented only 0.2% of all the synapses in the neuropil.
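
    A binomial back-of-envelope calculation conveys the severity of the problem. Treating each counted synapse as an independent Bernoulli trial with the 0.2% proportion quoted above (a simplification, since disector counts are clustered by site):

```python
# Precision of a rare-proportion estimate as a function of count size.
import numpy as np

p = 0.002                                  # proportion of labeled synapses
for n in (1_000, 10_000, 100_000):
    se = np.sqrt(p * (1 - p) / n)          # binomial standard error
    print(f"n = {n:>7}: {p:.3%} +/- {1.96 * se:.3%} (95% CI half-width)")
```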

  20. Exploring Technostress: Results of a Large Sample Factor Analysis

    Directory of Open Access Journals (Sweden)

    Steponas Jonušauskas

    2016-06-01

    Full Text Available With reference to the results of a large sample factor analysis, the article aims to propose a framework for examining technostress in a population. A survey of 1,013 individuals who use ICT in their everyday work was conducted and analyzed by principal component analysis. Thirteen factors, combining 68 questions, explain 59.13 per cent of the variance in the answers. Based on the factor analysis, the questionnaire was reframed and prepared to analyze the respondents' answers soundly, revealing technostress causes and consequences as well as technostress prevalence in the population in a statistically validated pattern. The key elements of technostress identified by the factor analysis can serve for the construction of technostress measurement scales in further research.

  1. MOSS-5: A Fast Method of Approximating Counts of 5-Node Graphlets in Large Graphs

    KAUST Repository

    Wang, Pinghui

    2017-09-26

    Counting 3-, 4-, and 5-node graphlets in graphs is important for graph mining applications such as discovering abnormal/evolution patterns in social and biological networks. In addition, it has recently been widely used for computing similarities between graphs and for graph classification applications such as protein function prediction and malware detection. However, it is challenging to compute these metrics for a large graph or a large set of graphs due to the combinatorial nature of the problem. Despite recent efforts in counting triangles (a 3-node graphlet) and 4-node graphlets, little attention has been paid to characterizing 5-node graphlets. In this paper, we develop a computationally efficient sampling method to estimate 5-node graphlet counts. We not only provide fast sampling methods and unbiased estimators of graphlet counts, but also derive simple yet exact formulas for the variances of the estimators, which is of great value in practice: the variances can be used to bound the estimates' errors and determine the smallest necessary sampling budget for a desired accuracy. We conduct experiments on a variety of real-world datasets, and the results show that our method is several orders of magnitude faster than the state-of-the-art methods with the same accuracy.
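
    MOSS-5 itself is not spelled out in the abstract, but the flavor of unbiased graphlet-count estimation by sampling is easy to convey on the 3-node case. The sketch below estimates the triangle count from uniform edge samples, using the exact identity that the triangle count equals one third of the sum, over all edges (u,v), of the common neighbors of u and v; the toy graph is invented.

```python
# Unbiased triangle-count estimation from uniform edge samples
# (a 3-node analogue of sampled graphlet counting, not MOSS-5 itself).
import random
from collections import defaultdict

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4), (2, 4)]
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def estimate_triangles(s: int, seed: int = 0) -> float:
    rng = random.Random(seed)
    m = len(edges)
    total = sum(len(adj[u] & adj[v])
                for u, v in (rng.choice(edges) for _ in range(s)))
    return m * total / (3 * s)     # E[estimate] = exact triangle count

print(estimate_triangles(1000))    # exact count for this toy graph is 3
```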

  2. Systematic versus random sampling in stereological studies.

    Science.gov (United States)

    West, Mark J

    2012-12-01

    The sampling that takes place at all levels of an experimental design must be random if the estimate is to be unbiased in a statistical sense. There are two fundamental ways by which one can make a random sample of the sections and positions to be probed on the sections. Using a card-sampling analogy, one can pick any card at all out of a deck of cards. This is referred to as independent random sampling because the sampling of any one card is made without reference to the position of the other cards. The other approach to obtaining a random sample would be to pick a card within a set number of cards and others at equal intervals within the deck. Systematic sampling along one axis of many biological structures is more efficient than random sampling, because most biological structures are not randomly organized. This article discusses the merits of systematic versus random sampling in stereological studies.
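
    The card analogy translates directly into code; a minimal sketch with invented numbers (100 serial sections, a sample of 10):

```python
# Independent random vs. systematic sampling of sections (illustrative).
import random

rng = random.Random(5)
sections = list(range(100))

independent = sorted(rng.sample(sections, 10))   # any 10 sections at all
start = rng.randrange(10)                        # random start in [0, 10)
systematic = list(range(start, 100, 10))         # then a fixed period of 10

print("independent:", independent)
print("systematic: ", systematic)
```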

  3. Galaxy redshift surveys with sparse sampling

    International Nuclear Information System (INIS)

    Chiang, Chi-Ting; Wullstein, Philipp; Komatsu, Eiichiro; Jee, Inh; Jeong, Donghui; Blanc, Guillermo A.; Ciardullo, Robin; Gronwall, Caryl; Hagen, Alex; Schneider, Donald P.; Drory, Niv; Fabricius, Maximilian; Landriau, Martin; Finkelstein, Steven; Jogee, Shardha; Cooper, Erin Mentuch; Tuttle, Sarah; Gebhardt, Karl; Hill, Gary J.

    2013-01-01

    Survey observations of the three-dimensional locations of galaxies are a powerful approach to measure the distribution of matter in the universe, which can be used to learn about the nature of dark energy, physics of inflation, neutrino masses, etc. A competitive survey, however, requires a large volume (e.g., V_survey ∼ 10 Gpc³) to be covered, and thus tends to be expensive. A "sparse sampling" method offers a more affordable solution to this problem: within a survey footprint covering a given survey volume, V_survey, we observe only a fraction of the volume. The distribution of observed regions should be chosen such that their separation is smaller than the length scale corresponding to the wavenumber of interest. Then one can recover the power spectrum of galaxies with precision expected for a survey covering a volume of V_survey (rather than the volume of the sum of observed regions) with the number density of galaxies given by the total number of observed galaxies divided by V_survey (rather than the number density of galaxies within an observed region). We find that regularly-spaced sampling yields an unbiased power spectrum with no window function effect, and deviations from regularly-spaced sampling, which are unavoidable in realistic surveys, introduce calculable window function effects and increase the uncertainties of the recovered power spectrum. On the other hand, we show that the two-point correlation function (pair counting) is not affected by sparse sampling. While we discuss the sparse sampling method within the context of the forthcoming Hobby-Eberly Telescope Dark Energy Experiment, the method is general and can be applied to other galaxy surveys

  4. Automated, feature-based image alignment for high-resolution imaging mass spectrometry of large biological samples

    NARCIS (Netherlands)

    Broersen, A.; Liere, van R.; Altelaar, A.F.M.; Heeren, R.M.A.; McDonnell, L.A.

    2008-01-01

    High-resolution imaging mass spectrometry of large biological samples is the goal of several research groups. In mosaic imaging, the most common method, the large sample is divided into a mosaic of small areas that are then analyzed with high resolution. Here we present an automated alignment

  5. Sampling large landscapes with small-scale stratification-User's Manual

    Science.gov (United States)

    Bart, Jonathan

    2011-01-01

    This manual explains procedures for partitioning a large landscape into plots, assigning the plots to strata, and selecting plots in each stratum to be surveyed. These steps are referred to as the "sampling large landscapes (SLL) process." We assume that users of the manual have a moderate knowledge of ArcGIS and Microsoft® Excel. The manual is written for a single user but in many cases, some steps will be carried out by a biologist designing the survey and some steps will be carried out by a quantitative assistant. Thus, the manual essentially may be passed back and forth between these users. The SLL process primarily has been used to survey birds, and we refer to birds as subjects of the counts. The process, however, could be used to count any objects.

  6. Empirical Bayes Estimation of Semi-parametric Hierarchical Mixture Models for Unbiased Characterization of Polygenic Disease Architectures

    Directory of Open Access Journals (Sweden)

    Jo Nishino

    2018-04-01

    Full Text Available Genome-wide association studies (GWAS) suggest that the genetic architecture of complex diseases consists of unexpectedly numerous variants with small effect sizes. However, the polygenic architectures of many diseases have not been well characterized due to lack of simple and fast methods for unbiased estimation of the underlying proportion of disease-associated variants and their effect-size distribution. Applying empirical Bayes estimation of semi-parametric hierarchical mixture models to GWAS summary statistics, we confirmed that schizophrenia was extremely polygenic [~40% of independent genome-wide SNPs are risk variants, most within an odds ratio (OR) of 1.03], whereas rheumatoid arthritis was less polygenic (~4 to 8% risk variants, with a significant portion reaching OR = 1.05 to 1.1). For rheumatoid arthritis, stratified estimations revealed that expression quantitative trait loci in blood explained a large share of the genetic variance, and that low- and high-frequency derived alleles were prone to be risk and protective, respectively, suggesting a predominance of deleterious-risk and advantageous-protective mutations. Despite genetic correlation, effect-size distributions for schizophrenia and bipolar disorder differed across allele frequency. These analyses distinguished disease polygenic architectures and provided clues for etiological differences in complex diseases.

  7. Absolute activity determinations on large volume geological samples independent of self-absorption effects

    International Nuclear Information System (INIS)

    Wilson, W.E.

    1980-01-01

    This paper describes a method for measuring the absolute activity of large volume samples by γ-spectroscopy independent of self-absorption effects using Ge detectors. The method yields accurate matrix independent results at the expense of replicative counting of the unknown sample. (orig./HP)

  8. Mutually unbiased bases and trinary operator sets for N qutrits

    International Nuclear Information System (INIS)

    Lawrence, Jay

    2004-01-01

    A complete orthonormal basis of N-qutrit unitary operators drawn from the Pauli group consists of the identity and 9^N − 1 traceless operators. The traceless ones partition into 3^N + 1 maximally commuting subsets (MCSs) of 3^N − 1 operators each, whose joint eigenbases are mutually unbiased. We prove that Pauli factor groups of order 3^N are isomorphic to all MCSs and show how this result applies in specific cases. For two qutrits, the 80 traceless operators partition into 10 MCSs. We prove that 4 of the corresponding basis sets must be separable, while 6 must be totally entangled (and Bell-like). For three qutrits, 728 operators partition into 28 MCSs with less rigid structure, allowing for the coexistence of separable, partially entangled, and totally entangled (GHZ-like) bases. However, a minimum of 16 GHZ-like bases must occur. Every basis state is described by an N-digit trinary number consisting of the eigenvalues of N observables constructed from the corresponding MCS
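
    The operator counting quoted above can be checked with a difference-of-squares identity (a consistency check on the numbers, not part of the paper's proof):

```latex
% (3^N + 1) subsets of (3^N - 1) operators each exhaust the traceless set:
\[
  \left(3^{N}+1\right)\left(3^{N}-1\right) \;=\; 3^{2N}-1 \;=\; 9^{N}-1 .
\]
% N = 2: 10 x 8 = 80 operators;  N = 3: 28 x 26 = 728 operators.
```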

  9. Topography of acute stroke in a sample of 439 right brain damaged patients.

    Science.gov (United States)

    Sperber, Christoph; Karnath, Hans-Otto

    2016-01-01

    Knowledge of the typical lesion topography and volumetry is important for clinical stroke diagnosis as well as for anatomo-behavioral lesion mapping analyses. Here we used modern lesion analysis techniques to examine the naturally occurring lesion patterns caused by ischemic and by hemorrhagic infarcts in a large, representative acute stroke patient sample. Acute MR and CT images of 439 consecutively admitted right-hemispheric stroke patients from a well-defined catchment area, suffering from ischemia (n = 367) or hemorrhage (n = 72), were normalized and mapped in reference to stereotaxic anatomical atlases. For ischemic infarcts, the highest frequencies of stroke were observed in the insula, putamen, operculum and superior temporal cortex, as well as the inferior and superior occipito-frontal fascicles, superior longitudinal fascicle, uncinate fascicle, and the acoustic radiation. The maximum overlay of hemorrhages was located more posteriorly and more medially, involving posterior areas of the insula, Heschl's gyrus, and putamen. Lesion size was largest in frontal and anterior areas and smallest in subcortical and posterior areas. The large and unbiased sample of stroke patients used in the present study accumulated the different sub-patterns to identify the global topographic and volumetric pattern of right hemisphere stroke in humans.

  10. Towards an unbiased, full-sky clustering search with IceCube in real time

    Energy Technology Data Exchange (ETDEWEB)

    Bernardini, Elisa; Franckowiak, Anna; Kintscher, Thomas; Kowalski, Marek; Stasik, Alexander [DESY, Zeuthen (Germany); Collaboration: IceCube-Collaboration

    2016-07-01

    The IceCube neutrino observatory is a 1 km³ detector for Cherenkov light in the ice at the South Pole. Although the presence of a diffuse astrophysical neutrino flux has been observed, static point-source searches have come up empty-handed. Thus, transient and variable objects emerge as promising, detectable source candidates. An unbiased, full-sky clustering search, run in real time, can find neutrino events with close temporal and spatial proximity. The most significant of these clusters serve as alerts to third-party observatories in order to obtain a complete picture of cosmic accelerators. The talk showcases the status and prospects of this project.

  11. Procedure for plutonium analysis of large (100g) soil and sediment samples

    International Nuclear Information System (INIS)

    Meadows, J.W.T.; Schweiger, J.S.; Mendoza, B.; Stone, R.

    1975-01-01

    A method for the complete dissolution of large soil or sediment samples is described. This method is in routine usage at Lawrence Livermore Laboratory for the analysis of fall-out levels of Pu in soils and sediments. Intercomparison with partial dissolution (leach) techniques shows the complete dissolution method to be superior for the determination of plutonium in a wide variety of environmental samples. (author)

  12. Fast sampling from a Hidden Markov Model posterior for large data

    DEFF Research Database (Denmark)

    Bonnevie, Rasmus; Hansen, Lars Kai

    2014-01-01

    Hidden Markov Models are of interest in a broad set of applications including modern data driven systems involving very large data sets. However, approximate inference methods based on Bayesian averaging are precluded in such applications as each sampling step requires a full sweep over the data...

  13. 17 CFR Appendix B to Part 420 - Sample Large Position Report

    Science.gov (United States)

    2010-04-01

    Appendix B to Part 420 of Title 17 (Commodity and Securities Exchanges), Department of the Treasury regulations, reproduces a sample Large Position Report. The form includes dollar-amount fields for positions held, including securities held as collateral for financial derivatives and other securities transactions, together with total and memorandum entries.

  14. Unbiased estimators of coincidence and correlation in non-analogous Monte Carlo particle transport

    International Nuclear Information System (INIS)

    Szieberth, M.; Kloosterman, J.L.

    2014-01-01

    Highlights: • The history splitting method was developed for non-Boltzmann Monte Carlo estimators. • The method allows variance reduction for pulse-height and higher moment estimators. • It works in highly multiplicative problems but Russian roulette has to be replaced. • Estimation of higher moments allows the simulation of neutron noise measurements. • Biased sampling of fission helps the effective simulation of neutron noise methods. - Abstract: The conventional non-analogous Monte Carlo methods are optimized to preserve the mean value of the distributions. Therefore, they are not suited to non-Boltzmann problems such as the estimation of coincidences or correlations. This paper presents a general method called history splitting for the non-analogous estimation of such quantities. The basic principle of the method is that a non-analogous particle history can be interpreted as a collection of analogous histories with different weights according to the probability of their realization. Calculations with a simple Monte Carlo program for a pulse-height-type estimator prove that the method is feasible and provides unbiased estimation. Different variance reduction techniques have been tried with the method and Russian roulette turned out to be ineffective in high multiplicity systems. An alternative history control method is applied instead. Simulation results of an auto-correlation (Rossi-α) measurement show that even the reconstruction of the higher moments is possible with the history splitting method, which makes the simulation of neutron noise measurements feasible

  15. Unbiased free energy estimates in fast nonequilibrium transformations using Gaussian mixtures

    International Nuclear Information System (INIS)

    Procacci, Piero

    2015-01-01

    In this paper, we present an improved method for obtaining unbiased estimates of the free energy difference between two thermodynamic states using the work distribution measured in nonequilibrium driven experiments connecting these states. The method is based on the assumption that any observed work distribution is given by a mixture of Gaussian distributions, whose normal components are identical in either direction of the nonequilibrium process, with weights regulated by the Crooks theorem. Using the prototypical example for the driven unfolding/folding of deca-alanine, we show that the predicted behavior of the forward and reverse work distributions, assuming a combination of only two Gaussian components with Crooks derived weights, explains surprisingly well the striking asymmetry in the observed distributions at fast pulling speeds. The proposed methodology opens the way for a perfectly parallel implementation of Jarzynski-based free energy calculations in complex systems
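
    Two standard relations underpin the construction (stated here as background, with the usual notation assumed): the Crooks fluctuation theorem for forward and reverse work distributions, and the exact free energy it implies when a work component is Gaussian.

```latex
% Crooks fluctuation theorem:
\[
  \frac{P_F(W)}{P_R(-W)} \;=\; e^{\beta\,(W-\Delta F)} ,
\]
% which, for a Gaussian work component of mean \mu and variance \sigma^2,
% fixes the component-wise free energy estimate:
\[
  \Delta F \;=\; \mu - \tfrac{1}{2}\,\beta\,\sigma^{2} .
\]
```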

  16. Large area synchrotron X-ray fluorescence mapping of biological samples

    International Nuclear Information System (INIS)

    Kempson, I.; Thierry, B.; Smith, E.; Gao, M.; De Jonge, M.

    2014-01-01

    Large area mapping of inorganic material in biological samples has suffered severely from prohibitively long acquisition times. With the advent of new detector technology we can now generate statistically relevant information for studying cell populations, intercellular variability and bioinorganic chemistry in large specimens. We have been implementing ultrafast synchrotron-based XRF mapping afforded by the MAIA detector for large area mapping of biological material. For example, a 2.5 million pixel map can be acquired in 3 hours, compared to a typical synchrotron XRF set-up needing over 1 month of uninterrupted beamtime. Of particular focus to us is the fate of metals and nanoparticles in cells, 3D tissue models and animal tissues. The large area scanning has for the first time provided statistically significant information on sufficiently large numbers of cells to provide data on intercellular variability in uptake of nanoparticles. Techniques such as flow cytometry generally require analysis of thousands of cells for statistically meaningful comparison, due to the large degree of variability. Large area XRF now gives comparable information in a quantifiable manner. Furthermore, we can now image localised deposition of nanoparticles in tissues that would be highly improbable to 'find' by typical XRF imaging. In addition, the ultrafast nature also makes it viable to conduct 3D XRF tomography over large dimensions. This technology opens up new opportunities in biomonitoring and understanding metal and nanoparticle fate ex vivo. Following from this is the extension to molecular imaging, using specific antibody-targeted nanoparticles to label specific tissues and monitor cellular processes or biological consequences

  17. Bipartite entangled stabilizer mutually unbiased bases as maximum cliques of Cayley graphs

    Science.gov (United States)

    van Dam, Wim; Howard, Mark

    2011-07-01

    We examine the existence and structure of particular sets of mutually unbiased bases (MUBs) in bipartite qudit systems. In contrast to well-known power-of-prime MUB constructions, we restrict ourselves to using maximally entangled stabilizer states as MUB vectors. Consequently, these bipartite entangled stabilizer MUBs (BES MUBs) provide no local information, but are sufficient and minimal for decomposing a wide variety of interesting operators including (mixtures of) Jamiołkowski states, entanglement witnesses, and more. The problem of finding such BES MUBs can be mapped, in a natural way, to that of finding maximum cliques in a family of Cayley graphs. Some relationships with known power-of-prime MUB constructions are discussed, and observables for BES MUBs are given explicitly in terms of Pauli operators.

  19. Sampling of charged liquid radwaste stored in large tanks

    International Nuclear Information System (INIS)

    Tchemitcheff, E.; Domage, M.; Bernard-Bruls, X.

    1995-01-01

    For liquid effluents, the final safe disposal of radwaste, in France and elsewhere, entails their conversion to a stable solid form, and hence their conditioning. Producing conditioned waste of the requisite quality, ensuring traceability of the characteristics of the packages produced, and operating the conditioning processes safely require, at a minimum, accurate knowledge of the chemical and radiochemical properties of the effluents concerned. The problem of sampling these normally charged effluents is aggravated for effluents that have been stored for several years in very large tanks without stirring and retrieval systems. In 1992, SGN was asked by Cogema to study the retrieval and conditioning of LL/ML chemical sludge and spent ion-exchange resins produced in the operation of the UP2 400 plant at La Hague and stored temporarily in rectangular silos and tanks. The sampling aspect was crucial for validating the inventories, identifying the problems liable to arise as the effluents age, dimensioning the retrieval systems, and checking transferability and compatibility with the downstream conditioning process. Two innovative self-contained systems were developed and built for sampling operations, positioned above the tanks concerned. Both systems have been operated in active conditions and have proved fully satisfactory for taking representative samples. Today SGN can propose industrially proven overall solutions, adaptable to the various constraints of many spent fuel cycle operators

  20. Development of Large Sample Neutron Activation Technique for New Applications in Thailand

    International Nuclear Information System (INIS)

    Laoharojanaphand, S.; Tippayakul, C.; Wonglee, S.; Channuie, J.

    2018-01-01

    The development of Large Sample Neutron Activation Analysis (LSNAA) in Thailand is presented in this paper. The technique was first developed with rice as the test sample. The Thai Research Reactor-1/Modification 1 (TRR-1/M1) was used as the neutron source. The first step was to select and characterize an appropriate irradiation facility for the research. An out-core irradiation facility (A4 position) was attempted first. The results obtained with the A4 facility were then used as guides for the subsequent experiments with the thermal column facility. The characterization of the thermal column was performed with Cu wire to determine the spatial flux distribution with and without the rice sample. The flux depression without the rice sample was observed to be less than 30%, while the flux depression with the rice sample increased to as much as 60%. Flux monitors internal to the rice sample were used to determine the average flux over the sample. The gamma self-shielding effect during gamma measurement was corrected using Monte Carlo simulation: the ratio between the efficiencies of the volume source and the point source for each energy point was calculated with the MCNPX code. The research team adopted the k0-NAA methodology to calculate the element concentrations. The k0-NAA program developed by the IAEA was set up to simulate the conditions of the irradiation and measurement facilities used in this research. The element concentrations in the bulk rice sample were then calculated, taking into account the flux depression and gamma efficiency corrections. At the moment, the results still show large discrepancies with the reference values; however, more research on validation will be performed to identify the sources of error. Moreover, this LS-NAA technique was introduced for the activation analysis of the IAEA archaeological mock-up. The results are provided in this report. (author)

  1. CO2 isotope analyses using large air samples collected on intercontinental flights by the CARIBIC Boeing 767

    NARCIS (Netherlands)

    Assonov, S.S.; Brenninkmeijer, C.A.M.; Koeppel, C.; Röckmann, T.

    2009-01-01

    Analytical details for ¹³C and ¹⁸O isotope analyses of atmospheric CO2 in large air samples are given. The large air samples of nominally 300 L were collected during the passenger aircraft-based atmospheric chemistry research project CARIBIC and analyzed for a large number of trace gases and

  2. The problem of large samples. An activation analysis study of electronic waste material

    International Nuclear Information System (INIS)

    Segebade, C.; Goerner, W.; Bode, P.

    2007-01-01

    Large-volume instrumental photon activation analysis (IPAA) was used for the investigation of shredded electronic waste material. Sample masses from 1 to 150 grams were analyzed to estimate the minimum sample size required to achieve a representativeness of the results that is satisfactory for a defined investigation task. Furthermore, the influence of irradiation and measurement parameters upon the quality of the analytical results was studied. Finally, the analytical data obtained from IPAA and from instrumental neutron activation analysis (INAA), both carried out in a large-volume mode, were compared. Only some of the values were found to be in satisfactory agreement. (author)

  3. The Importance of Contamination Knowledge in Curation - Insights into Mars Sample Return

    Science.gov (United States)

    Harrington, A. D.; Calaway, M. J.; Regberg, A. B.; Mitchell, J. L.; Fries, M. D.; Zeigler, R. A.; McCubbin, F. M.

    2018-01-01

    The Astromaterials Acquisition and Curation Office at NASA Johnson Space Center (JSC), in Houston, TX (henceforth Curation Office) manages the curation of extraterrestrial samples returned by NASA missions and shared collections from international partners, preserving their integrity for future scientific study while providing the samples to the international community in a fair and unbiased way. The Curation Office also curates flight and non-flight reference materials and other materials from spacecraft assembly (e.g., lubricants, paints and gases) of sample return missions that would have the potential to cross-contaminate a present or future NASA astromaterials collection.

  4. Large sample neutron activation analysis: establishment at CDTN/CNEN, Brazil

    Energy Technology Data Exchange (ETDEWEB)

    Menezes, Maria Angela de B.C., E-mail: menezes@cdtn.b [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil); Jacimovic, Radojko, E-mail: radojko.jacimovic@ijs.s [Jozef Stefan Institute, Ljubljana (Slovenia). Dept. of Environmental Sciences. Group for Radiochemistry and Radioecology

    2011-07-01

    In order to improve the application of the neutron activation technique at CDTN/CNEN, large sample instrumental neutron activation analysis is being established under the IAEA BRA 14798 and FAPEMIG APQ-01259-09 projects. This procedure, LS-INAA, usually requires special facilities for the activation as well as for the detection. However, the TRIGA Mark I IPR R1 reactor at CDTN/CNEN has not been adapted for such irradiation, and the usual gamma spectrometry has been carried out. To start the establishment of the LS-INAA, a 5 g sample of the IAEA/Soil 7 reference material was analyzed by the k0-standardized method. This paper addresses the detector efficiency over the volume source, using the KayWin v2.23 and ANGLE V3.0 software. (author)

  5. Adaptive enhanced sampling with a path-variable for the simulation of protein folding and aggregation

    Science.gov (United States)

    Peter, Emanuel K.

    2017-12-01

    In this article, we present a novel adaptive enhanced sampling molecular dynamics (MD) method for the accelerated simulation of protein folding and aggregation. We introduce a path-variable L based on the un-biased momenta p and displacements dq for the definition of the bias s applied to the system, and derive three algorithms: general adaptive bias MD, adaptive path-sampling, and a hybrid method which combines the first two methodologies. Through the analysis of the correlations between the bias and the un-biased gradient in the system, we find that the hybrid methodology leads to an improved force correlation and acceleration in the sampling of the phase space. We apply our method to SPC/E water, where we find a conservation of the average water structure. We then use our method to sample dialanine and the folding of TrpCage, where we find good agreement with simulation data reported in the literature. Finally, we apply our methodologies to the initial stages of aggregation of a hexamer of Alzheimer's amyloid β fragment 25-35 (Aβ 25-35) and find that transitions within the hexameric aggregate are dominated by entropic barriers, while we speculate that the conformational entropy in particular plays a major role in fibril formation as a rate-limiting factor.

  6. Directed transport in a periodic tube driven by asymmetric unbiased forces coexisting with spatially modulated noises

    International Nuclear Information System (INIS)

    Li Fengguo; Ai Baoquan

    2011-01-01

    Graphical abstract: the current J as a function of the phase shift φ and ε at a = 1/2π, b = 0.5/2π, k_B T = 0.5, α = 0.1, and F_0 = 0.5. Highlights: unbiased forces and spatially modulated white noises affect the current; in the adiabatic limit, the analytical expression of the directed current is obtained; the competition between them will induce current reversals; for negative asymmetric parameters of the force, there exists an optimum parameter; the current increases monotonically for positive asymmetric parameters. Abstract: Transport of Brownian particles in a symmetrically periodic tube is investigated in the presence of asymmetric unbiased external forces and spatially modulated Gaussian white noises. In the adiabatic limit, we obtain the analytical expression of the directed current. It is found that the temporal asymmetry can break thermodynamic equilibrium and induce a net current. The competition between the temporal asymmetry of the force and the phase shift between the noise modulation and the tube shape induces some peculiar phenomena, for example, current reversals. The current changes with the phase shift in the form of a sine function. For negative asymmetric parameters of the force, there exists an optimum parameter at which the current takes its maximum value. However, the current increases monotonically for positive asymmetric parameters.

  7. Statistical characterization of a large geochemical database and effect of sample size

    Science.gov (United States)

    Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.

    2005-01-01

    The authors investigated statistical distributions for concentrations of chemical elements from the National Geochemical Survey (NGS) database of the U.S. Geological Survey. At the time of this study, the NGS data set encompassed 48,544 stream sediment and soil samples from the conterminous United States analyzed by ICP-AES following a 4-acid near-total digestion. This report includes 27 elements: Al, Ca, Fe, K, Mg, Na, P, Ti, Ba, Ce, Co, Cr, Cu, Ga, La, Li, Mn, Nb, Nd, Ni, Pb, Sc, Sr, Th, V, Y and Zn. The goal and challenge for the statistical overview was to delineate chemical distributions in a complex, heterogeneous data set spanning a large geographic range (the conterminous United States) and many different geological provinces and rock types. After declustering to create a uniform spatial sample distribution with 16,511 samples, histograms and quantile-quantile (Q-Q) plots were employed to delineate subpopulations that have coherent chemical and mineral affinities. Probability groupings are discerned by changes in slope (kinks) on the plots. Major rock-forming elements, e.g., Al, Ca, K and Na, tend to display linear segments on normal Q-Q plots. These segments can commonly be linked to petrologic or mineralogical associations. For example, linear segments on K and Na plots reflect dilution of clay minerals by quartz sand (low in K and Na). Minor and trace element relationships are best displayed on lognormal Q-Q plots. These sensitively reflect discrete relationships in subpopulations within the wide range of the data. For example, small but distinctly log-linear subpopulations for Pb, Cu, Zn and Ag are interpreted to represent ore-grade enrichment of naturally occurring minerals such as sulfides. None of the 27 chemical elements could pass the test for either normal or lognormal distribution on the declustered data set. Part of the reason relates to the presence of mixtures of subpopulations and outliers. Random samples of the data set with successively
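    As a sketch of the Q-Q diagnostic described above, the snippet below simulates a mixture of two lognormal subpopulations (background plus an ore-grade tail, with invented parameters) and draws a normal Q-Q plot of the log-concentrations; the mixture appears as a change of slope (a kink) instead of a single straight line.

    ```python
    import numpy as np
    from scipy import stats
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(42)
    background = rng.lognormal(mean=3.0, sigma=0.4, size=9000)   # ppm, main population
    enriched   = rng.lognormal(mean=5.5, sigma=0.6, size=1000)   # ore-grade tail
    concentrations = np.concatenate([background, enriched])

    # A single lognormal population would plot as one straight line here;
    # the enriched subpopulation shows up as a kink in the upper tail.
    stats.probplot(np.log(concentrations), dist="norm", plot=plt)
    plt.ylabel("log concentration (ppm)")
    plt.show()
    ```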

  8. Validation Of Intermediate Large Sample Analysis (With Sizes Up to 100 G) and Associated Facility Improvement

    International Nuclear Information System (INIS)

    Bode, P.; Koster-Ammerlaan, M.J.J.

    2018-01-01

    Pragmatic rather than physical correction factors for neutron and gamma-ray shielding were studied for samples of intermediate size, i.e., in the 10-100 gram range. It was found that for most biological and geological materials, the neutron self-shielding is less than 5 % and the gamma-ray self-attenuation can easily be estimated. A trueness control material of 1 kg size was made from leftovers of materials used in laboratory intercomparisons. A design study for a large sample pool-side facility, handling plate-type volumes, had to be stopped because of a reduction in the human resources available for this CRP. The large sample NAA facilities were made available to guest scientists from Greece and Brazil. The laboratory for neutron activation analysis participated in the world’s first laboratory intercomparison utilizing large samples. (author)

  9. Evaluation of environmental sampling methods for detection of Salmonella enterica in a large animal veterinary hospital.

    Science.gov (United States)

    Goeman, Valerie R; Tinkler, Stacy H; Hammac, G Kenitra; Ruple, Audrey

    2018-04-01

    Environmental surveillance for Salmonella enterica can be used for early detection of contamination; thus routine sampling is an integral component of infection control programs in hospital environments. At the Purdue University Veterinary Teaching Hospital (PUVTH), the technique regularly employed in the large animal hospital for sample collection uses sterile gauze sponges for environmental sampling, which has proven labor-intensive and time-consuming. Alternative sampling methods use Swiffer brand electrostatic wipes for environmental sample collection, which are reportedly effective and efficient. It was hypothesized that use of Swiffer wipes for sample collection would be more efficient and less costly than the use of gauze sponges. A head-to-head comparison between the 2 sampling methods was conducted in the PUVTH large animal hospital and relative agreement, cost-effectiveness, and sampling efficiency were compared. There was fair agreement in culture results between the 2 sampling methods, but Swiffer wipes required less time and less physical effort to collect samples and were more cost-effective.

  10. SU2 nonstandard bases: the case of mutually unbiased bases

    International Nuclear Information System (INIS)

    Olivier, Albouy; Kibler, Maurice R.

    2007-02-01

    This paper deals with bases in a finite-dimensional Hilbert space. Such a space can be realized as a subspace of the representation space of SU(2) corresponding to an irreducible representation of SU(2). The representation theory of SU(2) is reconsidered via the use of two truncated deformed oscillators. This leads to replacing the familiar scheme [j^2, j_z] by a scheme [j^2, v_ra], where the two-parameter operator v_ra is defined in the universal enveloping algebra of the Lie algebra su(2). The eigenvectors of the commuting set of operators [j^2, v_ra] are adapted to a tower of chains SO(3) ⊃ C_(2j+1) (2j ∈ N*), where C_(2j+1) is the cyclic group of order 2j + 1. In the case where 2j + 1 is prime, the corresponding eigenvectors generate a complete set of mutually unbiased bases. Some useful relations on generalized quadratic Gauss sums are exposed in three appendices. (authors)
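    The completeness statement for prime dimensions can be checked numerically with the standard quadratic (Gauss-sum) construction, which the snippet below does for d = 5. This is the textbook Wootters-Fields-type construction, offered as an illustration rather than the paper's deformed-oscillator derivation: for an odd prime p, the computational basis together with the p bases v_(a,k)[n] = ω^(a n² + k n)/√p (ω = exp(2πi/p)) forms a complete set of p + 1 mutually unbiased bases.

    ```python
    import numpy as np

    p = 5                                   # odd prime dimension
    omega = np.exp(2j * np.pi / p)
    n = np.arange(p)

    bases = [np.eye(p, dtype=complex)]      # computational basis
    for a in range(p):
        # row k is the vector v_{a,k}; transpose so columns are basis vectors
        B = np.array([omega ** ((a * n**2 + k * n) % p) for k in range(p)]) / np.sqrt(p)
        bases.append(B.T)

    # Unbiasedness: |<u|v>| = 1/sqrt(p) for vectors from different bases.
    for i in range(len(bases)):
        for j in range(i + 1, len(bases)):
            overlaps = np.abs(bases[i].conj().T @ bases[j])
            assert np.allclose(overlaps, 1 / np.sqrt(p)), (i, j)
    print(f"{len(bases)} mutually unbiased bases verified in dimension {p}")
    ```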

  11. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Directory of Open Access Journals (Sweden)

    Ian J Fiske

    BACKGROUND: Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. METHODOLOGY/PRINCIPAL FINDINGS: Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. CONCLUSIONS/SIGNIFICANCE: We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.

  12. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Science.gov (United States)

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
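    A small simulation makes the Jensen's-inequality effect concrete: lambda is the dominant eigenvalue of the projection matrix and hence a nonlinear function of the vital rates, so unbiased rate estimates still give biased lambda at small n. The 2-stage matrix and rates below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    true_survival, true_fertility = 0.5, 1.2

    def lam(s, f):
        A = np.array([[0.0, f],
                      [s,   0.8]])      # toy stage-structured projection matrix
        return np.max(np.abs(np.linalg.eigvals(A)))

    true_lambda = lam(true_survival, true_fertility)

    for n in (10, 50, 250, 1000):       # individuals sampled per vital rate
        est = []
        for _ in range(2000):
            s_hat = rng.binomial(n, true_survival) / n    # unbiased survival estimate
            f_hat = rng.poisson(true_fertility * n) / n   # unbiased fertility estimate
            est.append(lam(s_hat, f_hat))
        bias = np.mean(est) - true_lambda
        print(f"n = {n:5d}  mean bias in lambda = {bias:+.4f}")
    ```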

  13. Unbiased total electron content (UTEC), their fluctuations, and correlation with seismic activity over Japan

    Science.gov (United States)

    Cornely, Pierre-Richard; Hughes, John

    2018-02-01

    Earthquakes are among the most dangerous events that occur on earth, and many scientists have been investigating the underlying processes that take place before earthquakes occur. These investigations are fueling efforts towards developing both single- and multiple-parameter earthquake forecasting methods based on earthquake precursors. One potential earthquake precursor parameter that has received significant attention within the last few years is the ionospheric total electron content (TEC). Despite its growing popularity as an earthquake precursor, TEC has been under great scrutiny because of the underlying biases associated with the process of acquiring and processing TEC data. Future work in the field will need to demonstrate our ability to acquire TEC data with the least amount of bias possible, thereby preserving the integrity of the data. This paper describes a process for removing biases using raw TEC data from the standard Rinex files obtained from any global positioning satellite system. The process is based on developing unbiased TEC (UTEC) data and a model that can be better adapted to serving as a precursor signal for earthquake forecasting. The model was used during the days and hours leading up to the earthquake off the coast of Tohoku, Japan, on March 11, 2011, with interesting results. The model takes advantage of the large amount of data available from the GPS Earth Observation Network of Japan to display near real-time UTEC data as the earthquake approaches and for a period of time after the earthquake occurred.

  14. Small sample GEE estimation of regression parameters for longitudinal data.

    Science.gov (United States)

    Paul, Sudhir; Zhang, Xuemao

    2014-09-28

    Longitudinal (clustered) response data arise in many biostatistical applications which, in general, cannot be assumed to be independent. The generalized estimating equation (GEE) is a widely used method to estimate marginal regression parameters for correlated responses. The advantage of the GEE is that the estimates of the regression parameters are asymptotically unbiased even if the correlation structure is misspecified, although their small sample properties are not known. In this paper, two bias-adjusted GEE estimators of the regression parameters in longitudinal data are obtained for the case where the number of subjects is small. One is based on a bias correction, and the other is based on a bias reduction. Simulations show that the performances of both bias-corrected methods are similar in terms of bias, efficiency, coverage probability, average coverage length, impact of misspecification of correlation structure, and impact of cluster size on bias correction. Both methods show superior properties over the GEE estimates for small samples. Further, analysis of data involving a small number of subjects also shows improvement in bias, MSE, standard error, and length of the confidence interval of the estimates by the two bias-adjusted methods over the GEE estimates. For small to moderate sample sizes (N ≤ 50), either of the bias-corrected methods GEEBc and GEEBr can be used. However, the method GEEBc should be preferred over GEEBr, as the former is computationally easier. For large sample sizes, the GEE method can be used. Copyright © 2014 John Wiley & Sons, Ltd.
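    For reference, a plain GEE fit on simulated longitudinal data with a small number of subjects is sketched below using statsmodels; the paper's bias-corrected estimators (GEEBc, GEEBr) are not part of statsmodels and are not reproduced here. All simulation parameters are invented.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n_subjects, n_visits = 20, 4                       # deliberately small N
    subject = np.repeat(np.arange(n_subjects), n_visits)
    x = rng.normal(size=n_subjects * n_visits)
    u = np.repeat(rng.normal(scale=0.8, size=n_subjects), n_visits)  # cluster effect
    y = 1.0 + 0.5 * x + u + rng.normal(scale=1.0, size=x.size)

    data = pd.DataFrame({"y": y, "x": x, "subject": subject})
    model = sm.GEE.from_formula("y ~ x", groups="subject", data=data,
                                cov_struct=sm.cov_struct.Exchangeable(),
                                family=sm.families.Gaussian())
    result = model.fit()
    print(result.summary())
    ```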

  15. Sampling scales define occupancy and underlying occupancy-abundance relationships in animals.

    Science.gov (United States)

    Steenweg, Robin; Hebblewhite, Mark; Whittington, Jesse; Lukacs, Paul; McKelvey, Kevin

    2018-01-01

    Occupancy-abundance (OA) relationships are a foundational ecological phenomenon and field of study, and occupancy models are increasingly used to track population trends and understand ecological interactions. However, these two fields of ecological inquiry remain largely isolated, despite growing appreciation of the importance of integration. For example, using occupancy models to infer trends in abundance is predicated on positive OA relationships. Many occupancy studies collect data that violate geographical closure assumptions due to the choice of sampling scales and application to mobile organisms, which may change how occupancy and abundance are related. Little research, however, has explored how different occupancy sampling designs affect OA relationships. We develop a conceptual framework for understanding how sampling scales affect the definition of occupancy for mobile organisms, which drives OA relationships. We explore how spatial and temporal sampling scales, and the choice of sampling unit (areal vs. point sampling), affect OA relationships. We develop predictions using simulations, and test them using empirical occupancy data from remote cameras on 11 medium-large mammals. Surprisingly, our simulations demonstrate that when using point sampling, OA relationships are unaffected by spatial sampling grain (i.e., cell size). In contrast, when using areal sampling (e.g., species atlas data), OA relationships are affected by spatial grain. Furthermore, OA relationships are also affected by temporal sampling scales, where the curvature of the OA relationship increases with temporal sampling duration. Our empirical results support these predictions, showing that at any given abundance, the spatial grain of point sampling does not affect occupancy estimates, but longer surveys do increase occupancy estimates. For rare species (low occupancy), estimates of occupancy will quickly increase with longer surveys, even while abundance remains constant. Our results
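    A toy simulation of the point made above, for illustration only: with point sampling, site-level occupancy depends on how many animal encounters accumulate at the point during the survey, so longer surveys inflate occupancy even at fixed abundance. Encounters are modeled as Poisson events, and the encounter rate is an invented number.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    encounter_rate_per_animal = 0.02          # expected detections/animal/day

    def occupancy(abundance, survey_days, n_sites=10_000):
        lam = encounter_rate_per_animal * abundance * survey_days
        detections = rng.poisson(lam, size=n_sites)
        return np.mean(detections > 0)        # fraction of sites scored "occupied"

    for days in (7, 30, 120):
        psi = [occupancy(N, days) for N in (1, 5, 20)]
        print(f"{days:4d} survey days -> occupancy at N = 1, 5, 20: "
              + ", ".join(f"{p:.2f}" for p in psi))
    ```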

  16. Systematic random sampling of the comet assay.

    Science.gov (United States)

    McArt, Darragh G; Wasson, Gillian R; McKerr, George; Saetzler, Kurt; Reed, Matt; Howard, C Vyvyan

    2009-07-01

    The comet assay is a technique used to quantify DNA damage and repair at a cellular level. In the assay, cells are embedded in agarose and the cellular content is stripped away leaving only the DNA trapped in an agarose cavity which can then be electrophoresed. The damaged DNA can enter the agarose and migrate while the undamaged DNA cannot and is retained. DNA damage is measured as the proportion of the migratory 'tail' DNA compared to the total DNA in the cell. The fundamental basis of these arbitrary values is obtained in the comet acquisition phase using fluorescence microscopy with a stoichiometric stain in tandem with image analysis software. The methods currently deployed in such an acquisition are expected to select comets both objectively and at random. In this paper we examine the 'randomness' of the acquisition phase and suggest an alternative method that offers both objective and unbiased comet selection. In order to achieve this, we have adopted a survey sampling approach widely used in stereology, which offers a method of systematic random sampling (SRS). This is desirable as it offers an impartial and reproducible method of comet analysis that can be used either manually or in automated form. By making use of an unbiased sampling frame and using microscope verniers, we are able to increase the precision of estimates of DNA damage. Results obtained from a multiple-user pooled-variation experiment showed that the SRS technique attained a lower variability than the traditional approach. A single-user repetition experiment showed greater individual variances without detriment to the overall averages. This suggests that the SRS method offers a better reflection of DNA damage for a given slide and also offers better user reproducibility.
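    The SRS scheme itself is simple to state in code: choose a random start within the sampling interval and then take every k-th field. The grid size and sample fraction below are assumptions, and microscope stage geometry is abstracted away.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def systematic_sample(n_items, n_wanted):
        k = n_items // n_wanted               # sampling interval
        start = rng.integers(k)               # random start gives unbiasedness
        return np.arange(start, n_items, k)[:n_wanted]

    fields = systematic_sample(n_items=480, n_wanted=50)   # e.g. 480 fields on a slide
    print(fields[:10])
    ```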

  17. Thermal neutron self-shielding correction factors for large sample instrumental neutron activation analysis using the MCNP code

    International Nuclear Information System (INIS)

    Tzika, F.; Stamatelatos, I.E.

    2004-01-01

    Thermal neutron self-shielding within large samples was studied using the Monte Carlo neutron transport code MCNP. The code enabled a three-dimensional modeling of the actual source and geometry configuration including reactor core, graphite pile and sample. Neutron flux self-shielding correction factors derived for a set of materials of interest for large sample neutron activation analysis are presented and evaluated. Simulations were experimentally verified by measurements performed using activation foils. The results of this study can be applied in order to determine neutron self-shielding factors of unknown samples from the thermal neutron fluxes measured at the surface of the sample
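    For intuition about the magnitude of the correction, a textbook slab approximation can be sketched; the study itself derives its factors from full MCNP transport, so this closed form, and the cross-section and thickness values below, are illustrative assumptions only. For a purely absorbing slab of thickness t and macroscopic absorption cross-section Σ_a, the flux-averaged self-shielding factor is approximately f = (1 - exp(-Σ_a t)) / (Σ_a t).

    ```python
    import numpy as np

    def slab_self_shielding(sigma_a_per_cm, thickness_cm):
        tau = sigma_a_per_cm * thickness_cm   # optical thickness of the slab
        return (1.0 - np.exp(-tau)) / tau if tau > 0 else 1.0

    for t in (0.5, 2.0, 5.0):                 # cm, toward the "large sample" regime
        f = slab_self_shielding(sigma_a_per_cm=0.02, thickness_cm=t)
        print(f"t = {t:4.1f} cm  ->  f = {f:.3f}")
    ```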

  18. Prediction of Complex Human Traits Using the Genomic Best Linear Unbiased Predictor

    DEFF Research Database (Denmark)

    de los Campos, Gustavo; Vazquez, Ana I; Fernando, Rohan

    2013-01-01

    Despite important advances from Genome Wide Association Studies (GWAS), for most complex human traits and diseases, a sizable proportion of genetic variance remains unexplained and prediction accuracy (PA) is usually low. Evidence suggests that PA can be improved using Whole-Genome Regression (WGR) models where phenotypes are regressed on hundreds of thousands of variants simultaneously. The Genomic Best Linear Unbiased Prediction (G-BLUP, a ridge-regression type method) is a commonly used WGR method and has shown good predictive performance when applied to plant and animal breeding populations. However, breeding and human populations differ greatly in a number of factors that can affect the predictive performance of G-BLUP. Using theory, simulations, and real data analysis, we study the performance of G-BLUP when applied to data from related and unrelated human subjects. Under perfect linkage
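    A minimal G-BLUP sketch, assuming simulated genotypes: build a genomic relationship matrix from centered and scaled markers and predict held-out subjects by ridge-type shrinkage. Marker counts, heritability, and the variance ratio are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n, m = 200, 1000                           # subjects, SNP markers
    maf = rng.uniform(0.05, 0.5, size=m)
    X = rng.binomial(2, maf, size=(n, m)).astype(float)   # genotypes coded 0/1/2

    Z = (X - 2 * maf) / np.sqrt(2 * maf * (1 - maf))      # center and scale markers
    G = Z @ Z.T / m                                       # genomic relationship matrix

    beta = rng.normal(scale=np.sqrt(0.5 / m), size=m)     # true marker effects
    y = Z @ beta + rng.normal(scale=np.sqrt(0.5), size=n) # phenotype with h^2 = 0.5

    train, test = np.arange(150), np.arange(150, 200)
    lam = 1.0                                             # sigma_e^2 / sigma_g^2 ratio
    # BLUP of genetic values: u_test = G[test, train] (G[train, train] + lam I)^-1 y*
    alpha = np.linalg.solve(G[np.ix_(train, train)] + lam * np.eye(len(train)),
                            y[train] - y[train].mean())
    u_hat = G[np.ix_(test, train)] @ alpha
    r = np.corrcoef(u_hat, y[test])[0, 1]
    print(f"predictive correlation in held-out subjects: {r:.2f}")
    ```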

  19. Radioimmunoassay of h-TSH - methodological suggestions for dealing with medium to large numbers of samples

    International Nuclear Information System (INIS)

    Mahlstedt, J.

    1977-01-01

    The article deals with practical aspects of establishing a TSH-RIA for patients, with particular regard to predetermined quality criteria. Methodological suggestions are made for medium to large numbers of samples, with the aim of reducing monotonous, precision-critical working steps by means of simple aids. The required quality criteria are well met, while the test procedure is well adapted to the rhythm of work and may be carried out without loss of precision even with large numbers of samples. (orig.) [de

  20. Unbiased determination of polarized parton distributions and their uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Ball, Richard D. [Tait Institute, University of Edinburgh, JCMB, KB, Mayfield Rd, Edinburgh EH9 3JZ, Scotland (United Kingdom); Forte, Stefano, E-mail: forte@mi.infn.it [Dipartimento di Fisica, Università di Milano and INFN, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Guffanti, Alberto [The Niels Bohr International Academy and Discovery Center, The Niels Bohr Institute, Blegdamsvej 17, DK-2100 Copenhagen (Denmark); Nocera, Emanuele R. [Dipartimento di Fisica, Università di Milano and INFN, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Ridolfi, Giovanni [Dipartimento di Fisica, Università di Genova and INFN, Sezione di Genova, Genova (Italy); Rojo, Juan [PH Department, TH Unit, CERN, CH-1211 Geneva 23 (Switzerland)

    2013-09-01

    We present a determination of a set of polarized parton distributions (PDFs) of the nucleon, at next-to-leading order, from a global set of longitudinally polarized deep-inelastic scattering data: NNPDFpol1.0. The determination is based on the NNPDF methodology: a Monte Carlo approach, with neural networks used as unbiased interpolants, previously applied to the determination of unpolarized parton distributions, and designed to provide a faithful and statistically sound representation of PDF uncertainties. We present our dataset, its statistical features, and its Monte Carlo representation. We summarize the technique used to solve the polarized evolution equations and its benchmarking, and the method used to compute physical observables. We review the NNPDF methodology for parametrization and fitting of neural networks, the algorithm used to determine the optimal fit, and its adaptation to the polarized case. We finally present our set of polarized parton distributions. We discuss its statistical properties, test for its stability upon various modifications of the fitting procedure, and compare it to other recent polarized parton sets, and in particular obtain predictions for polarized first moments of PDFs based on it. We find that the uncertainties on the gluon, and to a lesser extent the strange PDF, were substantially underestimated in previous determinations.
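    The Monte Carlo replica idea at the core of the NNPDF methodology can be sketched compactly: fluctuate the data within its uncertainties, fit each replica, and read the uncertainty band from the ensemble spread. In the toy below a low-order polynomial stands in for the neural-network interpolant, and the data are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    x = np.linspace(0.05, 0.95, 30)                 # e.g. momentum fraction grid
    truth = x**0.5 * (1 - x)**3                     # toy underlying distribution
    sigma = 0.02 + 0.05 * truth                     # point-by-point uncertainties
    data = truth + rng.normal(scale=sigma)

    n_rep = 500
    fits = []
    for _ in range(n_rep):
        replica = data + rng.normal(scale=sigma)    # fluctuate within errors
        coeff = np.polyfit(x, replica, deg=5, w=1 / sigma)
        fits.append(np.polyval(coeff, x))
    fits = np.array(fits)

    central, band = fits.mean(axis=0), fits.std(axis=0)   # ensemble mean and spread
    print(f"value at x = 0.5: {np.interp(0.5, x, central):.3f} "
          f"+/- {np.interp(0.5, x, band):.3f}")
    ```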

  1. Unbiased determination of polarized parton distributions and their uncertainties

    International Nuclear Information System (INIS)

    Ball, Richard D.; Forte, Stefano; Guffanti, Alberto; Nocera, Emanuele R.; Ridolfi, Giovanni; Rojo, Juan

    2013-01-01

    We present a determination of a set of polarized parton distributions (PDFs) of the nucleon, at next-to-leading order, from a global set of longitudinally polarized deep-inelastic scattering data: NNPDFpol1.0. The determination is based on the NNPDF methodology: a Monte Carlo approach, with neural networks used as unbiased interpolants, previously applied to the determination of unpolarized parton distributions, and designed to provide a faithful and statistically sound representation of PDF uncertainties. We present our dataset, its statistical features, and its Monte Carlo representation. We summarize the technique used to solve the polarized evolution equations and its benchmarking, and the method used to compute physical observables. We review the NNPDF methodology for parametrization and fitting of neural networks, the algorithm used to determine the optimal fit, and its adaptation to the polarized case. We finally present our set of polarized parton distributions. We discuss its statistical properties, test for its stability upon various modifications of the fitting procedure, and compare it to other recent polarized parton sets, and in particular obtain predictions for polarized first moments of PDFs based on it. We find that the uncertainties on the gluon, and to a lesser extent the strange PDF, were substantially underestimated in previous determinations

  2. AN UNBIASED 1.3 mm EMISSION LINE SURVEY OF THE PROTOPLANETARY DISK ORBITING LkCa 15

    Energy Technology Data Exchange (ETDEWEB)

    Punzi, K. M.; Kastner, J. H. [Center for Imaging Science, School of Physics and Astronomy, and Laboratory for Multiwavelength Astrophysics, Rochester Institute of Technology, 54 Lomb Memorial Drive, Rochester, NY 14623 (United States); Hily-Blant, P.; Forveille, T. [UJF—Grenoble 1/CNRS-INSU, Institut de Planétologie et d’Astrophysique de Grenoble (IPAG) UMR 5274, F-38041, Grenoble (France); Sacco, G. G. [INAF—Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, I-50125, Firenze (Italy)

    2015-06-01

    The outer (>30 AU) regions of the dusty circumstellar disk orbiting the ∼2–5 Myr old, actively accreting solar analog LkCa 15 are known to be chemically rich, and the inner disk may host a young protoplanet within its central cavity. To obtain a complete census of the brightest molecular line emission emanating from the LkCa 15 disk over the 210–270 GHz (1.4–1.1 mm) range, we have conducted an unbiased radio spectroscopic survey with the Institut de Radioastronomie Millimétrique (IRAM) 30 m telescope. The survey demonstrates that in this spectral region, the most readily detectable lines are those of CO and its isotopologues {sup 13}CO and C{sup 18}O, as well as HCO{sup +}, HCN, CN, C{sub 2}H, CS, and H{sub 2}CO. All of these species had been previously detected in the LkCa 15 disk; however, the present survey includes the first complete coverage of the CN (2–1) and C{sub 2}H (3–2) hyperfine complexes. Modeling of these emission complexes indicates that the CN and C{sub 2}H either reside in the coldest regions of the disk or are subthermally excited, and that their abundances are enhanced relative to molecular clouds and young stellar object environments. These results highlight the value of unbiased single-dish line surveys in guiding future high-resolution interferometric imaging of disks.

  3. Elemental mapping of large samples by external ion beam analysis with sub-millimeter resolution and its applications

    Science.gov (United States)

    Silva, T. F.; Rodrigues, C. L.; Added, N.; Rizzutto, M. A.; Tabacniks, M. H.; Mangiarotti, A.; Curado, J. F.; Aguirre, F. R.; Aguero, N. F.; Allegro, P. R. P.; Campos, P. H. O. V.; Restrepo, J. M.; Trindade, G. F.; Antonio, M. R.; Assis, R. F.; Leite, A. R.

    2018-05-01

    The elemental mapping of large areas using ion beam techniques is a desired capability for several scientific communities, involved in topics ranging from geoscience to cultural heritage. Usually, the constraints for large-area mapping are not met in setups employing micro- and nano-probes implemented all over the world. A novel setup for mapping large sized samples in an external beam was recently built at the University of São Paulo employing a broad MeV-proton probe with sub-millimeter dimension, coupled to a high-precision large range XYZ robotic stage (60 cm range in all axes and precision of 5 μm ensured by optical sensors). An important issue in large area mapping is how to deal with the irregularities of the sample's surface, which may introduce artifacts in the images due to the variation of the measuring conditions. In our setup, we implemented an automatic system based on machine vision to correct the position of the sample to compensate for its surface irregularities. As an additional benefit, a 3D digital reconstruction of the scanned surface can also be obtained. Using this new and unique setup, we have produced large-area elemental maps of ceramics, stones, fossils, and other sorts of samples.

  4. Rapid separation method for {sup 237}Np and Pu isotopes in large soil samples

    Energy Technology Data Exchange (ETDEWEB)

    Maxwell, Sherrod L., E-mail: sherrod.maxwell@srs.go [Savannah River Nuclear Solutions, LLC, Building 735-B, Aiken, SC 29808 (United States); Culligan, Brian K.; Noyes, Gary W. [Savannah River Nuclear Solutions, LLC, Building 735-B, Aiken, SC 29808 (United States)

    2011-07-15

    A new rapid method for the determination of {sup 237}Np and Pu isotopes in soil and sediment samples has been developed at the Savannah River Site Environmental Lab (Aiken, SC, USA) that can be used for large soil samples. The new soil method utilizes an acid leaching method, iron/titanium hydroxide precipitation, a lanthanum fluoride soil matrix removal step, and a rapid column separation process with TEVA Resin. The large soil matrix is removed easily and rapidly using these two simple precipitations with high chemical recoveries and effective removal of interferences. Vacuum box technology and rapid flow rates are used to reduce analytical time.

  5. Identification and characterization of Highlands J virus from a Mississippi sandhill crane using unbiased next-generation sequencing

    Science.gov (United States)

    Ip, Hon S.; Wiley, Michael R.; Long, Renee; Palacios, Gustavo; Shearn-Bochsler, Valerie; Whitehouse, Chris A.

    2014-01-01

    Advances in massively parallel DNA sequencing platforms, commonly termed next-generation sequencing (NGS) technologies, have greatly reduced time, labor, and cost associated with DNA sequencing. Thus, NGS has become a routine tool for new viral pathogen discovery and will likely become the standard for routine laboratory diagnostics of infectious diseases in the near future. This study demonstrated the application of NGS for the rapid identification and characterization of a virus isolated from the brain of an endangered Mississippi sandhill crane. This bird was part of a population restoration effort and was found in an emaciated state several days after Hurricane Isaac passed over the refuge in Mississippi in 2012. Post-mortem examination had identified trichostrongyliasis as the possible cause of death, but because a virus with morphology consistent with a togavirus was isolated from the brain of the bird, an arboviral etiology was strongly suspected. Because individual molecular assays for several known arboviruses were negative, unbiased NGS by Illumina MiSeq was used to definitively identify and characterize the causative viral agent. Whole genome sequencing and phylogenetic analysis revealed the viral isolate to be the Highlands J virus, a known avian pathogen. This study demonstrates the use of unbiased NGS for the rapid detection and characterization of an unidentified viral pathogen and the application of this technology to wildlife disease diagnostics and conservation medicine.

  6. Matrix Sampling of Items in Large-Scale Assessments

    Directory of Open Access Journals (Sweden)

    Ruth A. Childs

    2003-07-01

    Matrix sampling of items, that is, the division of a set of items into different versions of a test form, is used by several large-scale testing programs. Like other test designs, matrixed designs have both advantages and disadvantages. For example, testing time per student is less than if each student received all the items, but the comparability of student scores may decrease. Also, curriculum coverage is maintained, but reporting of scores becomes more complex. In this paper, matrixed designs are compared with more traditional designs in nine categories of costs: development costs, materials costs, administration costs, educational costs, scoring costs, reliability costs, comparability costs, validity costs, and reporting costs. In choosing among test designs, a testing program should examine the costs in light of its mandate(s), the content of the tests, and the financial resources available, among other considerations.

  7. Unbiased metabolite profiling by liquid chromatography-quadrupole time-of-flight mass spectrometry and multivariate data analysis for herbal authentication: classification of seven Lonicera species flower buds.

    Science.gov (United States)

    Gao, Wen; Yang, Hua; Qi, Lian-Wen; Liu, E-Hu; Ren, Mei-Ting; Yan, Yu-Ting; Chen, Jun; Li, Ping

    2012-07-06

    Plant-based medicines are becoming increasingly popular over the world. Authentication of herbal raw materials is important to ensure their safety and efficacy. Some herbs belonging to closely related species but differing in medicinal properties are difficult to identify because of similar morphological and microscopic characteristics. Chromatographic fingerprinting is an alternative method to distinguish them. Existing approaches do not allow a comprehensive analysis for herbal authentication. We have now developed a strategy consisting of (1) full metabolic profiling of herbal medicines by rapid resolution liquid chromatography (RRLC) combined with quadrupole time-of-flight mass spectrometry (QTOF MS), (2) global analysis of non-targeted compounds by a molecular feature extraction algorithm, (3) multivariate statistical analysis for classification and prediction, and (4) marker compound characterization. This approach has provided a fast and unbiased comparative multivariate analysis of the metabolite composition of samples from 33 batches covering seven Lonicera species. Individual metabolic profiles are performed at the level of molecular fragments without prior structural assignment. In the entire set, the obtained classifier for seven Lonicera species flower buds showed good prediction performance and a total of 82 statistically different components were rapidly obtained by the strategy. The elemental compositions of discriminative metabolites were characterized by the accurate mass measurement of the pseudomolecular ions and their chemical types were assigned by the MS/MS spectra. The high-resolution, comprehensive and unbiased strategy for metabolite data analysis presented here is powerful and opens a new direction of authentication in herbal analysis. Copyright © 2012 Elsevier B.V. All rights reserved.
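    The multivariate classification step of such a strategy can be sketched as a dimensionality reduction followed by a discriminant classifier on the peak-intensity feature matrix. The data below are random stand-ins for LC-QTOF MS molecular features (33 samples over 7 species, mirroring the study design), not the paper's actual pipeline.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(2)
    n_features = 500
    samples_per_species = [5, 5, 5, 5, 5, 4, 4]          # 33 samples, 7 species

    X, y = [], []
    for sp, n in enumerate(samples_per_species):
        centre = rng.normal(scale=1.0, size=n_features)  # species "fingerprint"
        X.append(centre + rng.normal(scale=0.3, size=(n, n_features)))
        y += [sp] * n
    X, y = np.vstack(X), np.array(y)

    pipe = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
    scores = cross_val_score(pipe, X, y, cv=3)           # stratified 3-fold CV
    print(f"cross-validated accuracy: {scores.mean():.2f}")
    ```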

  8. Study of a large rapid ashing apparatus and a rapid dry ashing method for biological samples and its application

    International Nuclear Information System (INIS)

    Jin Meisun; Wang Benli; Liu Wencang

    1988-04-01

    A large rapid dry-ashing apparatus and a rapid ashing method for biological samples are described. The apparatus consists of a specially made ashing furnace, a gas supply system, and a temperature-programming control cabinet. Ashing experiments with the apparatus showed the following advantages: (1) high ashing speed and savings in electric energy; (2) the apparatus can ash a large number of samples at a time; (3) the ashed sample is pure white (spotless), loose, and easily dissolved, with little residual char; (4) fresh samples can also be ashed directly. The apparatus is suitable for ashing large numbers of environmental samples containing trace elements at low radioactivity levels, as well as samples for medical, food, and agricultural research.

  9. A fast learning method for large scale and multi-class samples of SVM

    Science.gov (United States)

    Fan, Yu; Guo, Huiming

    2017-06-01

    A fast learning method for multi-class classification SVM (Support Vector Machine) based on a binary tree is presented to address the low learning efficiency of SVM when processing large-scale multi-class samples. A bottom-up method is adopted to set up the binary tree hierarchy, and, according to the resulting hierarchy, a sub-classifier is trained on the samples corresponding to each node. During learning, several class clusters are generated by a first clustering of the training samples. Central points are extracted from those clusters that contain only one type of sample. For clusters that contain two types of samples, the cluster numbers of their positive and negative samples are set according to their degree of mixture, a secondary clustering is undertaken, and central points are then extracted from the resulting sub-class clusters. Sub-classifiers are obtained by learning from the reduced sample sets formed by integrating the extracted central points. Simulation experiments show that this fast learning method, based on multi-level clustering, maintains high classification accuracy while greatly reducing the number of samples and effectively improving learning efficiency.
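    A simplified sketch of the sample-reduction idea: replace each class's training points by k-means cluster centers and train the SVM on the much smaller center set. The paper's binary-tree hierarchy and mixture-degree rules are omitted, and the cluster counts and data below are invented.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_classification
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=20_000, n_features=20, n_informative=10,
                               n_classes=4, n_clusters_per_class=2, random_state=0)

    centers, center_labels = [], []
    for cls in np.unique(y):
        km = KMeans(n_clusters=50, n_init=3, random_state=0).fit(X[y == cls])
        centers.append(km.cluster_centers_)
        center_labels.append(np.full(50, cls))

    X_small = np.vstack(centers)            # 200 centers instead of 20,000 points
    y_small = np.concatenate(center_labels)

    clf = SVC(kernel="rbf", gamma="scale").fit(X_small, y_small)
    print(f"accuracy on the full set: {clf.score(X, y):.3f}")
    ```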

  10. Development of digital gamma-activation autoradiography for analysis of samples of large area

    International Nuclear Information System (INIS)

    Kolotov, V.P.; Grozdov, D.S.; Dogadkin, N.N.; Korobkov, V.I.

    2011-01-01

    Gamma-activation autoradiography is a promising method for the screening detection of inclusions of precious metals in geochemical samples. Its characteristics allow analysis of thin sections of large size (tens of cm2), which favourably distinguishes it from other methods for local analysis. At the same time, the intensity of the activating bremsstrahlung field of the accelerator decreases sharply with distance along the axis. A method for activation dose ''equalization'' during irradiation of large thin sections has therefore been developed. The method is based on a hardware-software system comprising a device for moving the sample during irradiation, a program for computer modelling of the acquired activation dose for the chosen kinematics of the sample movement, and a program for pixel-by-pixel correction of the autoradiographic images. For the detection of inclusions of precious metals, a method for analysis of the dose dynamics during sample decay has been developed. It is based on pixel-by-pixel software processing of a time series of coaxial autoradiographic images and the generation of secondary meta-images, which allow interpretation regarding the presence of inclusions of interest based on half-lives. The method was tested on copper-nickel polymetallic ores. The developed solutions considerably expand the possible applications of digital gamma-activation autoradiography. (orig.)
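    The half-life reasoning behind the meta-images can be sketched for a single pixel: fit an exponential decay to the pixel's intensity across the image time series and read off the half-life; in the full method this is repeated for every pixel. All numbers below are invented (a 198Au-like component with T1/2 = 2.7 d is used as the example).

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def decay(t, a0, half_life):
        return a0 * np.exp(-np.log(2) * t / half_life)

    rng = np.random.default_rng(4)
    t_days = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # image acquisition times
    pixel = decay(t_days, a0=1000.0, half_life=2.7)   # true per-pixel signal
    pixel = pixel + rng.normal(scale=20.0, size=t_days.size)  # counting noise

    (a0_fit, hl_fit), _ = curve_fit(decay, t_days, pixel, p0=(500.0, 1.0))
    print(f"fitted half-life: {hl_fit:.2f} d (true: 2.70 d)")
    ```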

  11. Development of digital gamma-activation autoradiography for analysis of samples of large area

    Energy Technology Data Exchange (ETDEWEB)

    Kolotov, V.P.; Grozdov, D.S.; Dogadkin, N.N.; Korobkov, V.I. [Russian Academy of Sciences, Moscow (Russian Federation). Vernadsky Inst. of Geochemistry and Analytical Chemistry

    2011-07-01

    Gamma-activation autoradiography is a promising method for the screening detection of inclusions of precious metals in geochemical samples. Its characteristics allow analysis of thin sections of large size (tens of cm2), which favourably distinguishes it from other methods for local analysis. At the same time, the intensity of the activating bremsstrahlung field of the accelerator decreases sharply with distance along the axis. A method for activation dose ''equalization'' during irradiation of large thin sections has therefore been developed. The method is based on a hardware-software system comprising a device for moving the sample during irradiation, a program for computer modelling of the acquired activation dose for the chosen kinematics of the sample movement, and a program for pixel-by-pixel correction of the autoradiographic images. For the detection of inclusions of precious metals, a method for analysis of the dose dynamics during sample decay has been developed. It is based on pixel-by-pixel software processing of a time series of coaxial autoradiographic images and the generation of secondary meta-images, which allow interpretation regarding the presence of inclusions of interest based on half-lives. The method was tested on copper-nickel polymetallic ores. The developed solutions considerably expand the possible applications of digital gamma-activation autoradiography. (orig.)

  12. THE OPTICALLY UNBIASED GRB HOST (TOUGH) SURVEY. III. REDSHIFT DISTRIBUTION

    Energy Technology Data Exchange (ETDEWEB)

    Jakobsson, P.; Chapman, R.; Vreeswijk, P. M. [Centre for Astrophysics and Cosmology, Science Institute, University of Iceland, Dunhagi 5, 107 Reykjavik (Iceland); Hjorth, J.; Malesani, D.; Fynbo, J. P. U.; Milvang-Jensen, B. [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Maries Vej 30, 2100 Copenhagen (Denmark); Tanvir, N. R.; Starling, R. L. C. [Department of Physics and Astronomy, University of Leicester, University Road, Leicester LE1 7RH (United Kingdom); Letawe, G. [Departement d' Astrophysique, Geophysique et Oceanographie, ULg, Allee du 6 aout, 17-Bat. B5c B-4000 Liege (Sart-Tilman) (Belgium)

    2012-06-10

    We present 10 new gamma-ray burst (GRB) redshifts and another five redshift limits based on host galaxy spectroscopy obtained as part of a large program conducted at the Very Large Telescope (VLT). The redshifts span the range 0.345 {<=} z {approx}< 2.54. Three of our measurements revise incorrect values from the literature. The homogeneous host sample researched here consists of 69 hosts that originally had a redshift completeness of 55% (with 38 out of 69 hosts having redshifts considered secure). Our project, including VLT/X-shooter observations reported elsewhere, increases this fraction to 77% (53/69), making the survey the most comprehensive in terms of redshift completeness of any sample to the full Swift depth analyzed to date. We present the cumulative redshift distribution and derive a conservative, yet small, associated uncertainty. We constrain the fraction of Swift GRBs at high redshift to a maximum of 14% (5%) for z > 6 (z > 7). The mean redshift of the host sample is assessed to be ⟨z⟩ {approx}> 2.2, with the 10 new redshifts reducing it significantly. Using this more complete sample, we confirm previous findings that the GRB rate at high redshift (z {approx}> 3) appears to be in excess of predictions based on assumptions that it should follow conventional determinations of the star formation history of the universe, combined with an estimate of its likely metallicity dependence. This suggests that either star formation at high redshifts has been significantly underestimated, for example, due to a dominant contribution from faint, undetected galaxies, or that GRB production is enhanced in the conditions of early star formation, beyond that usually ascribed to lower metallicity.

  13. Unbiased stereological estimation of d-dimensional volume in Rn from an isotropic random slice through a fixed point

    DEFF Research Database (Denmark)

    Jensen, Eva B. Vedel; Kiêu, K

    1994-01-01

    Unbiased stereological estimators of d-dimensional volume in R(n) are derived, based on information from an isotropic random r-slice through a specified point. The content of the slice can be subsampled by means of a spatial grid. The estimators depend only on spatial distances. As a fundamental lemma, an explicit formula for the probability that an isotropic random r-slice in R(n) through 0 hits a fixed point in R(n) is given.

  14. Sample-path large deviations in credit risk

    NARCIS (Netherlands)

    Leijdekker, V.J.G.; Mandjes, M.R.H.; Spreij, P.J.C.

    2011-01-01

    The event of large losses plays an important role in credit risk. As these large losses are typically rare, and portfolios usually consist of a large number of positions, large deviation theory is the natural tool to analyze the tail asymptotics of the probabilities involved. We first derive a

  15. Relationship of fish indices with sampling effort and land use change in a large Mediterranean river.

    Science.gov (United States)

    Almeida, David; Alcaraz-Hernández, Juan Diego; Merciai, Roberto; Benejam, Lluís; García-Berthou, Emili

    2017-12-15

    Fish are invaluable ecological indicators in freshwater ecosystems but have been less used for ecological assessments in large Mediterranean rivers. We evaluated the effects of sampling effort (transect length) on fish metrics, such as species richness and two fish indices (the new European Fish Index EFI+ and a regional index, IBICAT2b), in the mainstem of a large Mediterranean river. For this purpose, we sampled by boat electrofishing five sites each with 10 consecutive transects corresponding to a total length of 20 times the river width (European standard required by the Water Framework Directive) and we also analysed the effect of sampling area on previous surveys. Species accumulation curves and richness extrapolation estimates in general suggested that species richness was reasonably estimated with transect lengths of 10 times the river width or less. The EFI+ index was significantly affected by sampling area, both for our samplings and previous data. Surprisingly, EFI+ values in general decreased with increasing sampling area, despite the higher observed richness, likely because the expected values of metrics were higher. By contrast, the regional fish index was not dependent on sampling area, likely because it does not use a predictive model. Both fish indices, but particularly the EFI+, decreased with less forest cover percentage, even within the smaller disturbance gradient in the river type studied (mainstem of a large Mediterranean river, where environmental pressures are more general). Although the two fish-based indices are very different in terms of their development, methodology, and metrics used, they were significantly correlated and provided a similar assessment of ecological status. Our results reinforce the importance of standardization of sampling methods for bioassessment and suggest that predictive models that use sampling area as a predictor might be more affected by differences in sampling effort than simpler biotic indices.

  16. CONNECTING GRBs AND ULIRGs: A SENSITIVE, UNBIASED SURVEY FOR RADIO EMISSION FROM GAMMA-RAY BURST HOST GALAXIES AT 0 < z < 2.5

    Energy Technology Data Exchange (ETDEWEB)

    Perley, D. A. [Department of Astronomy, California Institute of Technology, MC 249-17, 1200 East California Boulevard, Pasadena, CA 91125 (United States); Perley, R. A. [National Radio Astronomy Observatory, P.O. Box O, Socorro, NM 87801 (United States); Hjorth, J.; Malesani, D. [Dark Cosmology Centre, Niels Bohr Institute, DK-2100 Copenhagen (Denmark); Michałowski, M. J. [Scottish Universities Physics Alliance, Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh, EH9 3HJ (United Kingdom); Cenko, S. B. [NASA/Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Jakobsson, P. [Centre for Astrophysics and Cosmology, Science Institute, University of Iceland, Dunhagi 5, 107 Reykjavík (Iceland); Krühler, T. [European Southern Observatory, Alonso de Córdova 3107, Vitacura, Casilla 19001, Santiago 19 (Chile); Levan, A. J. [Department of Physics, University of Warwick, Coventry CV4 7AL (United Kingdom); Tanvir, N. R., E-mail: dperley@astro.caltech.edu [Department of Physics and Astronomy, University of Leicester, Leicester LE1 7RH (United Kingdom)

    2015-03-10

    Luminous infrared galaxies and submillimeter galaxies contribute significantly to stellar mass assembly and provide an important test of the connection between the gamma-ray burst (GRB) rate and that of overall cosmic star formation. We present sensitive 3 GHz radio observations using the Karl G. Jansky Very Large Array of 32 uniformly selected GRB host galaxies spanning a redshift range from 0 < z < 2.5, providing the first fully dust- and sample-unbiased measurement of the fraction of GRBs originating from the universe's most bolometrically luminous galaxies. Four galaxies are detected, with inferred radio star formation rates (SFRs) ranging between 50 and 300 M {sub ☉} yr{sup –1}. Three of the four detections correspond to events consistent with being optically obscured 'dark' bursts. Our overall detection fraction implies that between 9% and 23% of GRBs between 0.5 < z < 2.5 occur in galaxies with S {sub 3GHz} > 10 μJy, corresponding to SFR > 50 M {sub ☉} yr{sup –1} at z ∼ 1 or >250 M {sub ☉} yr{sup –1} at z ∼ 2. Similar galaxies contribute approximately 10%-30% of all cosmic star formation, so our results are consistent with a GRB rate that is not strongly biased with respect to the total SFR of a galaxy. However, all four radio-detected hosts have stellar masses significantly lower than IR/submillimeter-selected field galaxies of similar luminosities. We suggest that the GRB rate may be suppressed in metal-rich environments but independently enhanced in intense starbursts, producing a strong efficiency dependence on mass but little net dependence on bulk galaxy SFR.

  17. SU{sub 2} nonstandard bases: the case of mutually unbiased bases

    Energy Technology Data Exchange (ETDEWEB)

    Olivier, Albouy; Kibler, Maurice R. [Universite de Lyon, Institut de Physique Nucleaire de Lyon, Universite Lyon, CNRS/IN2P3, 43 bd du 11 novembre 1918, F-69622 Villeurbanne Cedex (France)

    2007-02-15

    This paper deals with bases in a finite-dimensional Hilbert space. Such a space can be realized as a subspace of the representation space of SU{sub 2} corresponding to an irreducible representation of SU{sub 2}. The representation theory of SU{sub 2} is reconsidered via the use of two truncated deformed oscillators. This leads to replace the familiar scheme [j{sub 2}, j{sub z}] by a scheme [j{sup 2}, v{sub ra}], where the two-parameter operator v{sub ra} is defined in the universal enveloping algebra of the Lie algebra su{sub 2}. The eigenvectors of the commuting set of operators [j{sup 2}, v{sub ra}] are adapted to a tower of chains SO{sub 3} includes C{sub 2j+1} (2j belongs to N{sup *}), where C{sub 2j+1} is the cyclic group of order 2j + 1. In the case where 2j + 1 is prime, the corresponding eigenvectors generate a complete set of mutually unbiased bases. Some useful relations on generalized quadratic Gauss sums are exposed in three appendices. (authors)

  18. Virological Sampling of Inaccessible Wildlife with Drones.

    Science.gov (United States)

    Geoghegan, Jemma L; Pirotta, Vanessa; Harvey, Erin; Smith, Alastair; Buchmann, Jan P; Ostrowski, Martin; Eden, John-Sebastian; Harcourt, Robert; Holmes, Edward C

    2018-06-02

    There is growing interest in characterizing the viromes of diverse mammalian species, particularly in the context of disease emergence. However, little is known about virome diversity in aquatic mammals, in part due to difficulties in sampling. We characterized the virome of the exhaled breath (or blow) of the Eastern Australian humpback whale (Megaptera novaeangliae). To achieve an unbiased survey of virome diversity, a meta-transcriptomic analysis was performed on 19 pooled whale blow samples collected via a purpose-built Unmanned Aerial Vehicle (UAV, or drone) approximately 3 km off the coast of Sydney, Australia during the 2017 winter annual northward migration from Antarctica to northern Australia. To our knowledge, this is the first time that UAVs have been used to sample viruses. Despite the relatively small number of animals surveyed in this initial study, we identified six novel virus species from five viral families. This work demonstrates the potential of UAVs in studies of virus disease, diversity, and evolution.

  19. Surface reconstruction through poisson disk sampling.

    Directory of Open Access Journals (Sweden)

    Wenguang Hou

    This paper aims to generate the approximate Voronoi diagram in the geodesic metric for a set of unbiased samples selected from the original points. The mesh model of the seeds is then constructed on the basis of the Voronoi diagram. Rather than constructing the Voronoi diagram for all original points, the proposed strategy works around the problem that geodesic distances among neighboring points are sensitive to the definition of the nearest neighbors. The reconstructed model is, in effect, a level-of-detail representation of the original points. Hence, our main motivation is to deal with redundant scattered points. In the implementation, Poisson disk sampling is used to select the seeds and helps to produce the Voronoi diagram. Adaptive reconstructions can be achieved by slightly changing the uniform strategy for selecting seeds. The behavior of the method is investigated and accuracy evaluations are performed. Experimental results show the proposed method is reliable and effective.
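    A minimal dart-throwing Poisson disk sampler over a scattered 2-D point cloud, to show the seed-selection step: keep a candidate only if it lies at least r away from every seed kept so far. Euclidean distance stands in for the paper's geodesic metric, and r and the cloud are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    points = rng.random((20_000, 2))     # dense, redundant input points
    r = 0.05                             # minimum separation between seeds

    seeds = []
    for p in rng.permutation(points):    # O(n*s) brute force, fine for a sketch
        if not seeds:
            seeds.append(p)
        elif np.min(np.linalg.norm(np.asarray(seeds) - p, axis=1)) >= r:
            seeds.append(p)

    print(f"kept {len(seeds)} seeds from {len(points)} input points")
    ```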

  20. Feasibility studies on large sample neutron activation analysis using a low power research reactor

    International Nuclear Information System (INIS)

    Gyampo, O.

    2008-06-01

    Instrumental neutron activation analysis (INAA) using the Ghana Research Reactor-1 (GHARR-1) can be applied directly to samples with masses of several grams. Sample weights were in the range of 0.5 g to 5 g; the representativity of the sample is thereby improved, as well as the sensitivity. Irradiation of samples was done using a low power research reactor. The correction for neutron self-shielding within the sample is determined from measurement of the neutron flux depression just outside the sample. Correction for gamma-ray self-attenuation in the sample was performed via linear attenuation coefficients derived from transmission measurements. Quantitative and qualitative analyses of the data were done using gamma-ray spectrometry (HPGe detector). The results of this study on the possibilities of large sample NAA using a miniature neutron source reactor (MNSR) show clearly that the Ghana Research Reactor-1 (GHARR-1) at the National Nuclear Research Institute (NNRI) can be used for analyses of samples up to 5 grams (5 g) using the pneumatic transfer systems.

  1. Software engineering the mixed model for genome-wide association studies on large samples

    Science.gov (United States)

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample siz...

  2. Determination of 129I in large soil samples after alkaline wet disintegration

    International Nuclear Information System (INIS)

    Bunzl, K.; Kracke, W.

    1992-01-01

    Large soil samples (up to 500 g) can conveniently be disintegrated by hydrogen peroxide in a utility tank under alkaline conditions for the subsequent determination of 129 I by neutron activation analysis. Interfering elements such as Br are removed before neutron irradiation to reduce the radiation exposure of the personnel. The precision of the method is 129 I also by the combustion method. (orig.)

  3. Measurement of insulin-like growth factor-1 and insulin-like growth factor binding protein-3 after delayed separation of whole blood samples

    NARCIS (Netherlands)

    Hartog, Hermien; van der Graaf, Winette T. A.; Wesseling, Jelle; van der Veer, Eveline; Boezen, H. Marike

    Objectives: Epidemiological studies benefit from unbiased blood specimens collected with minimal cost and effort of blood collection and storage. We evaluated the stability of IGF-1 and IGFBP-3 in whole blood samples stored at room temperature to justify delays in blood processing. Design and

  4. Measurement of insulin-like growth factor-1 and insulin-like growth factor binding protein-3 after delayed separation of whole blood samples.

    NARCIS (Netherlands)

    Hartog, H.; Graaf, W.T.A. van der; Wesseling, J.; Veer, E. van der; Boezen, H.M.

    2008-01-01

    OBJECTIVES: Epidemiological studies benefit from unbiased blood specimens collected with minimal cost and effort of blood collection and storage. We evaluated the stability of IGF-1 and IGFBP-3 in whole blood samples stored at room temperature to justify delays in blood processing. DESIGN AND

  5. Determinants of salivary evening alpha-amylase in a large sample free of psychopathology

    NARCIS (Netherlands)

    Veen, Gerthe; Giltay, Erik J.; Vreeburg, Sophie A.; Licht, Carmilla M. M.; Cobbaert, Christa M.; Zitman, Frans G.; Penninx, Brenda W. J. H.

    Objective: Recently, salivary alpha-amylase (sAA) has been proposed as a suitable index for sympathetic activity and dysregulation of the autonomic nervous system (ANS). Although determinants of sAA have been described, they have not been studied within the same study with a large sample size

  6. Water pollution screening by large-volume injection of aqueous samples and application to GC/MS analysis of a river Elbe sample

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, S.; Efer, J.; Engewald, W. [Leipzig Univ. (Germany). Inst. fuer Analytische Chemie

    1997-03-01

    The large-volume sampling of aqueous samples in a programmed temperature vaporizer (PTV) injector was used successfully for the target and non-target analysis of real samples. In this still rarely applied method, e.g., 1 mL of the water sample to be analyzed is slowly injected directly into the PTV. The vaporized water is eliminated through the split vent. The analytes are concentrated onto an adsorbent inside the insert and subsequently thermally desorbed. The capability of the method is demonstrated using a sample from the river Elbe. By coupling this method with a mass selective detector in SIM mode (target analysis), pollutants can be determined at concentrations down to 0.01 μg/L. Furthermore, PTV enrichment is an effective and time-saving method for non-target analysis in SCAN mode. In a sample from the river Elbe over 20 compounds were identified. (orig.) With 3 figs., 2 tabs.

  7. A hard-to-read font reduces the framing effect in a large sample.

    Science.gov (United States)

    Korn, Christoph W; Ries, Juliane; Schalk, Lennart; Oganian, Yulia; Saalbach, Henrik

    2018-04-01

    How can apparent decision biases, such as the framing effect, be reduced? Intriguing findings within recent years indicate that foreign language settings reduce framing effects, which has been explained in terms of deeper cognitive processing. Because hard-to-read fonts have been argued to trigger deeper cognitive processing, so-called cognitive disfluency, we tested whether hard-to-read fonts reduce framing effects. We found no reliable evidence for an effect of hard-to-read fonts on four framing scenarios in a laboratory (final N = 158) and an online study (N = 271). However, in a preregistered online study with a rather large sample (N = 732), a hard-to-read font reduced the framing effect in the classic "Asian disease" scenario (in a one-sided test). This suggests that hard-to-read fonts can modulate decision biases, albeit with rather small effect sizes. Overall, our findings stress the importance of large samples for the reliability and replicability of modulations of decision biases.

  8. A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids

    Energy Technology Data Exchange (ETDEWEB)

    Berres, Anne Sabine [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Adhinarayanan, Vignesh [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Turton, Terece [Univ. of Texas, Austin, TX (United States); Feng, Wu [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Rogers, David Honegger [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-05-12

    Large simulation data requires a lot of time and computational resources to compute, store, analyze, visualize, and run user studies on. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance energy consumption and the cognitive value of visualizations of the resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline using regular sampling as an example. The reasons for this choice are two-fold: using a simple example reduces unnecessary complexity, as we know what to expect from the results, and it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test we ran, and we conducted user studies on Amazon Mechanical Turk (AMT) for a range of different results we produced through sampling.

  9. Association between genetic variation in a region on chromosome 11 and schizophrenia in large samples from Europe

    DEFF Research Database (Denmark)

    Rietschel, M; Mattheisen, M; Degenhardt, F

    2012-01-01

    the recruitment of very large samples of patients and controls (that is tens of thousands), or large, potentially more homogeneous samples that have been recruited from confined geographical areas using identical diagnostic criteria. Applying the latter strategy, we performed a genome-wide association study (GWAS...... between emotion regulation and cognition that is structurally and functionally abnormal in SCZ and bipolar disorder.Molecular Psychiatry advance online publication, 12 July 2011; doi:10.1038/mp.2011.80....

  10. A Simple Sampling Method for Estimating the Accuracy of Large Scale Record Linkage Projects.

    Science.gov (United States)

    Boyd, James H; Guiver, Tenniel; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Anderson, Phil; Dickinson, Teresa

    2016-05-17

    Record linkage techniques allow different data collections to be brought together to provide a wider picture of the health status of individuals. Ensuring high linkage quality is important to guarantee the quality and integrity of research. Current methods for measuring linkage quality typically focus on precision (the proportion of incorrect links), given the difficulty of measuring the proportion of false negatives. The aim of this work is to introduce and evaluate a sampling based method to estimate both precision and recall following record linkage. In the sampling based method, record-pairs from each threshold (including those below the identified cut-off for acceptance) are sampled and clerically reviewed. These results are then applied to the entire set of record-pairs, providing estimates of false positives and false negatives. This method was evaluated on a synthetically generated dataset, where the true match status (which records belonged to the same person) was known. The sampled estimates of linkage quality were relatively close to actual linkage quality metrics calculated for the whole synthetic dataset. The precision and recall measures for seven reviewers were very consistent with little variation in the clerical assessment results (overall agreement using the Fleiss Kappa statistics was 0.601). This method presents as a possible means of accurately estimating matching quality and refining linkages in population level linkage studies. The sampling approach is especially important for large project linkages where the number of record pairs produced may be very large often running into millions.
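    As an illustration of the approach in this record, the precision/recall estimation can be sketched in a few lines. The data layout below (score bins with total, sampled, and confirmed pair counts) is a hypothetical simplification of the clerical-review protocol described above:

        # Sketch: extrapolate clerically reviewed samples of record-pairs
        # (taken from every score bin, including those below the cut-off)
        # to estimate precision and recall of the whole linkage.
        def estimate_linkage_quality(bins, cutoff):
            # bins: list of dicts with keys
            #   'score'   - representative score of the bin
            #   'n_pairs' - total record-pairs in the bin
            #   'sampled' - number of pairs clerically reviewed
            #   'true'    - reviewed pairs judged to be true matches
            tp = fp = fn = 0.0
            for b in bins:
                match_rate = b['true'] / b['sampled']   # sample proportion
                est_true = match_rate * b['n_pairs']    # scaled to the bin
                if b['score'] >= cutoff:                # accepted as links
                    tp += est_true
                    fp += b['n_pairs'] - est_true
                else:                                   # rejected pairs
                    fn += est_true                      # missed true matches
            return tp / (tp + fp), tp / (tp + fn)       # precision, recall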

  11. Unbiased methods for removing systematics from galaxy clustering measurements

    Science.gov (United States)

    Elsner, Franz; Leistedt, Boris; Peiris, Hiranya V.

    2016-02-01

    Measuring the angular clustering of galaxies as a function of redshift is a powerful method for extracting information from the three-dimensional galaxy distribution. The precision of such measurements will dramatically increase with ongoing and future wide-field galaxy surveys. However, these are also increasingly sensitive to observational and astrophysical contaminants. Here, we study the statistical properties of three methods proposed for controlling such systematics - template subtraction, basic mode projection, and extended mode projection - all of which make use of externally supplied template maps, designed to characterize and capture the spatial variations of potential systematic effects. Based on a detailed mathematical analysis, and in agreement with simulations, we find that the template subtraction method in its original formulation returns biased estimates of the galaxy angular clustering. We derive closed-form expressions that should be used to correct results for this shortcoming. Turning to the basic mode projection algorithm, we prove it to be free of any bias, whereas we conclude that results computed with extended mode projection are biased. Within a simplified setup, we derive analytical expressions for the bias and discuss the options for correcting it in more realistic configurations. Common to all three methods is an increased estimator variance induced by the cleaning process, albeit at different levels. These results enable unbiased high-precision clustering measurements in the presence of spatially varying systematics, an essential step towards realizing the full potential of current and planned galaxy surveys.

  12. Random Tagging Genotyping by Sequencing (rtGBS), an Unbiased Approach to Locate Restriction Enzyme Sites across the Target Genome.

    Directory of Open Access Journals (Sweden)

    Elena Hilario

    Genotyping by sequencing (GBS) is a restriction enzyme based targeted approach developed to reduce the genome complexity and discover genetic markers when a priori sequence information is unavailable. Sufficient coverage at each locus is essential to distinguish heterozygous from homozygous sites accurately. The number of GBS samples able to be pooled in one sequencing lane is limited by the number of restriction sites present in the genome and the read depth required at each site per sample for accurate calling of single-nucleotide polymorphisms. Loci bias was observed using a slight modification of the Elshire et al. method: some restriction enzyme sites were represented in higher proportions while others were poorly represented or absent. This bias could be due to the quality of genomic DNA, the endonuclease and ligase reaction efficiency, the distance between restriction sites, the preferential amplification of small library restriction fragments, or bias towards cluster formation of small amplicons during the sequencing process. To overcome these issues, we have developed a GBS method based on randomly tagging genomic DNA (rtGBS). By randomly landing on the genome, we can, with less bias, find restriction sites that are far apart, and undetected by the standard GBS (stdGBS) method. The study comprises two types of biological replicates: six different kiwifruit plants and two independent DNA extractions per plant; and three types of technical replicates: four samples of each DNA extraction, stdGBS vs. rtGBS methods, and two independent library amplifications, each sequenced in separate lanes. A statistically significant unbiased distribution of restriction fragment sizes by rtGBS showed that this method targeted 49% (39,145) of BamHI sites shared with the reference genome, compared to only 14% (11,513) by stdGBS.

  13. Psychometric Evaluation of the Thought–Action Fusion Scale in a Large Clinical Sample

    Science.gov (United States)

    Meyer, Joseph F.; Brown, Timothy A.

    2015-01-01

    This study examined the psychometric properties of the 19-item Thought–Action Fusion (TAF) Scale, a measure of maladaptive cognitive intrusions, in a large clinical sample (N = 700). An exploratory factor analysis (n = 300) yielded two interpretable factors: TAF Moral (TAF-M) and TAF Likelihood (TAF-L). A confirmatory bifactor analysis was conducted on the second portion of the sample (n = 400) to account for possible sources of item covariance using a general TAF factor (subsuming TAF-M) alongside the TAF-L domain-specific factor. The bifactor model provided an acceptable fit to the sample data. Results indicated that global TAF was more strongly associated with a measure of obsessive-compulsiveness than measures of general worry and depression, and the TAF-L dimension was more strongly related to obsessive-compulsiveness than depression. Overall, results support the bifactor structure of the TAF in a clinical sample and its close relationship to its neighboring obsessive-compulsiveness construct. PMID:22315482

  14. Psychometric evaluation of the thought-action fusion scale in a large clinical sample.

    Science.gov (United States)

    Meyer, Joseph F; Brown, Timothy A

    2013-12-01

    This study examined the psychometric properties of the 19-item Thought-Action Fusion (TAF) Scale, a measure of maladaptive cognitive intrusions, in a large clinical sample (N = 700). An exploratory factor analysis (n = 300) yielded two interpretable factors: TAF Moral (TAF-M) and TAF Likelihood (TAF-L). A confirmatory bifactor analysis was conducted on the second portion of the sample (n = 400) to account for possible sources of item covariance using a general TAF factor (subsuming TAF-M) alongside the TAF-L domain-specific factor. The bifactor model provided an acceptable fit to the sample data. Results indicated that global TAF was more strongly associated with a measure of obsessive-compulsiveness than measures of general worry and depression, and the TAF-L dimension was more strongly related to obsessive-compulsiveness than depression. Overall, results support the bifactor structure of the TAF in a clinical sample and its close relationship to its neighboring obsessive-compulsiveness construct.

  15. Measuring Radionuclides in the environment: radiological quantities and sampling designs

    International Nuclear Information System (INIS)

    Voigt, G.

    1998-10-01

    One aim of the workshop was to support and provide an ICRU report committee (International Union of Radiation Units) with actual information on techniques, data and knowledge of modern radioecology when radionuclides are to be measured in the environment. It has been increasingly recognised that some studies in radioecology, especially those involving both field sampling and laboratory measurements, have not paid adequate attention to the problem of obtaining representative, unbiased samples. This can greatly affect the quality of scientific interpretation, and the ability to manage the environment. Further, as the discipline of radioecology has developed, it has seen a growth in the numbers of quantities and units used, some of which are ill-defined and which are non-standardised. (orig.)

  16. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    Science.gov (United States)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.

  17. Investigating sex differences in psychological predictors of snack intake among a large representative sample

    NARCIS (Netherlands)

    Adriaanse, M.A.; Evers, C.; Verhoeven, A.A.C.; de Ridder, D.T.D.

    It is often assumed that there are substantial sex differences in eating behaviour (e.g. women are more likely to be dieters or emotional eaters than men). The present study investigates this assumption in a large representative community sample while incorporating a comprehensive set of

  18. Obscured AGN at z ~ 1 from the zCOSMOS-Bright Survey: I. Selection and optical properties of a [Ne v]-selected sample

    NARCIS (Netherlands)

    Mignoli, M.; Vignali, C.; Gilli, R.; Comastri, A.; Zamorani, G.; Bolzonella, M.; Bongiorno, A.; Lamareille, F.; Nair, P.; Pozzetti, L.; Lilly, S. J.; Carollo, C. M.; Contini, T.; Kneib, J. -P.; Le Fevre, O.; Mainieri, V.; Renzini, A.; Scodeggio, M.; Bardelli, S.; Caputi, K.; Cucciati, O.; de la Torre, S.; de Ravel, L.; Franzetti, P.; Garilli, B.; Iovino, A.; Kampczyk, P.; Knobel, C.; Kovac, K.; Le Borgne, J. -F.; Le Brun, V.; Maier, C.; Pello, R.; Peng, Y.; Montero, E. Perez; Presotto, V.; Silverman, J. D.; Tanaka, M.; Tasca, L.; Tresse, L.; Vergani, D.; Zucca, E.; Bordoloi, R.; Cappi, A.; Cimatti, A.; Koekemoer, A. M.; McCracken, H. J.; Moresco, M.; Welikala, N.

    Aims. The application of multi-wavelength selection techniques is essential for obtaining a complete and unbiased census of active galactic nuclei (AGN). We present here a method for selecting z ~ 1 obscured AGN from optical spectroscopic surveys. Methods. A sample of 94 narrow-line AGN

  19. Large-volume constant-concentration sampling technique coupling with surface-enhanced Raman spectroscopy for rapid on-site gas analysis.

    Science.gov (United States)

    Zhang, Zhuomin; Zhan, Yisen; Huang, Yichun; Li, Gongke

    2017-08-05

    In this work, a portable large-volume constant-concentration (LVCC) sampling technique coupled with surface-enhanced Raman spectroscopy (SERS) was developed for rapid on-site gas analysis based on suitable derivatization methods. The LVCC sampling technique mainly consisted of a specially designed sampling cell, comprising a rigid sample container and a flexible sampling bag, and an absorption-derivatization module with a portable pump and a gas flowmeter. The technique allowed a large, adjustable and well-controlled sampling volume, which kept the concentration of the gas target in the headspace phase constant during the entire sampling process and made the sampling result more representative. Moreover, absorption and derivatization of the gas target during LVCC sampling were efficiently merged into one step, using bromine-thiourea and OPA-NH4+ strategies for ethylene and SO2 respectively, which made the technique readily compatible with subsequent SERS analysis. Finally, a new LVCC sampling-SERS method was developed and successfully applied to the rapid analysis of trace ethylene and SO2 from fruits. Trace ethylene and SO2 from real fruit samples could be accurately quantified by this method, and SERS confirmed that the concentrations of the gas targets fluctuated only minimally during the entire LVCC sampling process. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Sample preparation and analysis of large 238PuO2 and ThO2 spheres

    International Nuclear Information System (INIS)

    Wise, R.L.; Selle, J.E.

    1975-01-01

    A program was initiated to determine the density gradient across a large spherical 238PuO2 sample produced by vacuum hot pressing. Due to the high thermal output of the ceramic a thin section was necessary to prevent overheating of the plastic mount. Techniques were developed for cross sectioning, mounting, grinding, and polishing of the sample. The polished samples were then analyzed on a quantitative image analyzer to determine the density as a function of location across the sphere. The techniques for indexing, analyzing, and reducing the data are described. Typical results obtained on a ThO2 simulant sphere are given

  1. Application of Conventional and K0-Based Internal Monostandard NAA Using Reactor Neutrons for Compositional Analysis of Large Samples

    International Nuclear Information System (INIS)

    Reddy, A.V.R.; Acharya, R.; Swain, K. K.; Pujari, P.K.

    2018-01-01

    Large sample neutron activation analysis (LSNAA) work was carried out for samples of coal, uranium ore, stainless steel, ancient and new clay potteries, dross and a clay pottery replica from Peru using low flux, highly thermalized irradiation sites. Large as well as non-standard geometry samples (1 g - 0.5 kg) were irradiated using the thermal column (TC) facility of the Apsara reactor as well as the graphite reflector position of the critical facility (CF) at Bhabha Atomic Research Centre, Mumbai. Small size (10 - 500 mg) samples were also irradiated at the core position of the Apsara reactor, the pneumatic carrier facility (PCF) of the Dhruva reactor and the pneumatic fast transfer facility (PFTS) of the KAMINI reactor. Irradiation positions were characterized using an indium flux monitor for TC and CF, whereas multiple monitors were used at the other positions. Radioactive assay was carried out using high resolution gamma ray spectrometry. The k0-based internal monostandard NAA (IM-NAA) method was used to determine elemental concentration ratios with respect to Na in the coal and uranium ore samples, Sc in the pottery samples and Fe in the stainless steel. In-situ relative detection efficiency for each irradiated sample was obtained using γ rays of the activation products in the required energy range. Representative sample sizes were arrived at for coal and uranium ore from the plots of La/Na ratios as a function of the mass of the sample. For the stainless steel sample of SS 304L, absolute concentrations were calculated from concentration ratios by a mass balance approach, since all the major elements (Fe, Cr, Ni and Mn) were amenable to NAA. Concentration ratios obtained by IM-NAA were used for a provenance study of 30 clay potteries obtained from excavated Buddhist sites of AP, India. The La to Ce concentration ratios were used for preliminary grouping, and concentration ratios of 15 elements with respect to Sc were used in statistical cluster analysis for confirmation of the grouping. Concentrations of Au and Ag were determined in not so

  2. Nanoscale Synaptic Membrane Mimetic Allows Unbiased High Throughput Screen That Targets Binding Sites for Alzheimer's-Associated Aβ Oligomers.

    Directory of Open Access Journals (Sweden)

    Kyle C Wilcox

    Despite their value as sources of therapeutic drug targets, membrane proteomes are largely inaccessible to high-throughput screening (HTS) tools designed for soluble proteins. An important example comprises the membrane proteins that bind amyloid β oligomers (AβOs). AβOs are neurotoxic ligands thought to instigate the synapse damage that leads to Alzheimer's dementia. At present, the identities of initial AβO binding sites are highly uncertain, largely because of extensive protein-protein interactions that occur following attachment of AβOs to surface membranes. Here, we show that AβO binding sites can be obtained in a state suitable for unbiased HTS by encapsulating the solubilized synaptic membrane proteome into nanoscale lipid bilayers (Nanodiscs). This method gives a soluble membrane protein library (SMPL) - a collection of individualized synaptic proteins in a soluble state. Proteins within SMPL Nanodiscs showed enzymatic and ligand binding activity consistent with conformational integrity. AβOs were found to bind SMPL Nanodiscs with high affinity and specificity, with binding dependent on intact synaptic membrane proteins, and selective for the higher molecular weight oligomers known to accumulate at synapses. Combining SMPL Nanodiscs with a mix-incubate-read chemiluminescence assay provided a solution-based HTS platform to discover antagonists of AβO binding. Screening a library of 2700 drug-like compounds and natural products yielded one compound that potently reduced AβO binding to SMPL Nanodiscs, synaptosomes, and synapses in nerve cell cultures. Although not a therapeutic candidate, this small molecule inhibitor of synaptic AβO binding will provide a useful experimental antagonist for future mechanistic studies of AβOs in Alzheimer's model systems. Overall, results provide proof of concept for using SMPLs in high throughput screening for AβO binding antagonists, and illustrate in general how a SMPL Nanodisc system can

  3. Nanoscale Synaptic Membrane Mimetic Allows Unbiased High Throughput Screen That Targets Binding Sites for Alzheimer's-Associated Aβ Oligomers.

    Science.gov (United States)

    Wilcox, Kyle C; Marunde, Matthew R; Das, Aditi; Velasco, Pauline T; Kuhns, Benjamin D; Marty, Michael T; Jiang, Haoming; Luan, Chi-Hao; Sligar, Stephen G; Klein, William L

    2015-01-01

    Despite their value as sources of therapeutic drug targets, membrane proteomes are largely inaccessible to high-throughput screening (HTS) tools designed for soluble proteins. An important example comprises the membrane proteins that bind amyloid β oligomers (AβOs). AβOs are neurotoxic ligands thought to instigate the synapse damage that leads to Alzheimer's dementia. At present, the identities of initial AβO binding sites are highly uncertain, largely because of extensive protein-protein interactions that occur following attachment of AβOs to surface membranes. Here, we show that AβO binding sites can be obtained in a state suitable for unbiased HTS by encapsulating the solubilized synaptic membrane proteome into nanoscale lipid bilayers (Nanodiscs). This method gives a soluble membrane protein library (SMPL)--a collection of individualized synaptic proteins in a soluble state. Proteins within SMPL Nanodiscs showed enzymatic and ligand binding activity consistent with conformational integrity. AβOs were found to bind SMPL Nanodiscs with high affinity and specificity, with binding dependent on intact synaptic membrane proteins, and selective for the higher molecular weight oligomers known to accumulate at synapses. Combining SMPL Nanodiscs with a mix-incubate-read chemiluminescence assay provided a solution-based HTS platform to discover antagonists of AβO binding. Screening a library of 2700 drug-like compounds and natural products yielded one compound that potently reduced AβO binding to SMPL Nanodiscs, synaptosomes, and synapses in nerve cell cultures. Although not a therapeutic candidate, this small molecule inhibitor of synaptic AβO binding will provide a useful experimental antagonist for future mechanistic studies of AβOs in Alzheimer's model systems. Overall, results provide proof of concept for using SMPLs in high throughput screening for AβO binding antagonists, and illustrate in general how a SMPL Nanodisc system can facilitate drug discovery

  4. Large biases in regression-based constituent flux estimates: causes and diagnostic tools

    Science.gov (United States)

    Hirsch, Robert M.

    2014-01-01

    It has been documented in the literature that, in some cases, widely used regression-based models can produce severely biased estimates of long-term mean river fluxes of various constituents. These models, estimated using sample values of concentration, discharge, and date, are used to compute estimated fluxes for a multiyear period at a daily time step. This study compares results of the LOADEST seven-parameter model, LOADEST five-parameter model, and the Weighted Regressions on Time, Discharge, and Season (WRTDS) model using subsampling of six very large datasets to better understand this bias problem. This analysis considers sample datasets for dissolved nitrate and total phosphorus. The results show that LOADEST-7 and LOADEST-5, although they often produce very nearly unbiased results, can produce highly biased results. This study identifies three conditions that can give rise to these severe biases: (1) lack of fit of the log of concentration vs. log discharge relationship, (2) substantial differences in the shape of this relationship across seasons, and (3) severely heteroscedastic residuals. The WRTDS model is more resistant to the bias problem than the LOADEST models but is not immune to them. Understanding the causes of the bias problem is crucial to selecting an appropriate method for flux computations. Diagnostic tools for identifying the potential for bias problems are introduced, and strategies for resolving bias problems are described.
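    The retransformation step is one common source of the bias discussed in this record: exponentiating a log-scale regression prediction underestimates the conditional mean. The sketch below shows a generic log-log rating-curve estimate with Duan's smearing correction; it is an illustration of the issue, not the LOADEST or WRTDS algorithm:

        import numpy as np

        def mean_flux_estimates(conc, q, q_all):
            # Fit log(C) = a + b*log(Q) on sampled days, then predict the
            # mean flux (C*Q) over all days q_all. Naive exponentiation of
            # the log-scale prediction is biased low; Duan's (1983)
            # smearing factor is one standard correction.
            b, a = np.polyfit(np.log(q), np.log(conc), 1)
            resid = np.log(conc) - (a + b * np.log(q))
            naive = np.exp(a + b * np.log(q_all)) * q_all   # biased low
            smear = np.mean(np.exp(resid))                  # smearing factor
            return naive.mean(), (naive * smear).mean()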

  5. Empirically sampling Universal Dependencies

    DEFF Research Database (Denmark)

    Schluter, Natalie; Agic, Zeljko

    2017-01-01

    Universal Dependencies incur a high cost in computation for unbiased system development. We propose a 100% empirically chosen small subset of UD languages for efficient parsing system development. The technique used is based on measurements of model capacity globally. We show that the diversity o...

  6. Unbiased, complete solar charging of a neutral flow battery by a single Si photocathode

    DEFF Research Database (Denmark)

    Wedege, Kristina; Bae, Dowon; Dražević, Emil

    2018-01-01

    Solar redox flow batteries have attracted attention as a possible integrated technology for simultaneous conversion and storage of solar energy. In this work, we review current efforts to design aqueous solar flow batteries in terms of battery electrolyte capacity, solar conversion efficiency...... and depth of solar charge. From a materials cost and design perspective, a simple, cost-efficient, aqueous solar redox flow battery will most likely incorporate only one semiconductor, and we demonstrate here a system where a single photocathode is accurately matched to the redox couples to allow...... for a complete solar charge. The single TiO2 protected Si photocathode with a catalytic Pt layer can fully solar charge a neutral TEMPO-sulfate/ferricyanide battery with a cell voltage of 0.35 V. An unbiased solar conversion efficiency of 1.6% is obtained and this system represents a new strategy in solar RFBs...

  7. Efficient Determination of Free Energy Landscapes in Multiple Dimensions from Biased Umbrella Sampling Simulations Using Linear Regression.

    Science.gov (United States)

    Meng, Yilin; Roux, Benoît

    2015-08-11

    The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of states is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimensions. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost.
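    For orientation, the iterative self-consistent procedure that the regression strategy in this record avoids looks roughly as follows in one dimension (a standard textbook WHAM sketch, not the authors' code):

        import numpy as np

        def wham_1d(hist, bias, n_samples, kT=1.0, tol=1e-7, max_iter=10000):
            # hist[i, x]   - counts of window i in bin x
            # bias[i, x]   - bias potential of window i at bin x
            # n_samples[i] - total samples drawn in window i
            # Iterates the WHAM equations until the window free energies
            # converge; returns the unbiased probability per bin.
            f = np.zeros(hist.shape[0])
            num = hist.sum(axis=0)                 # total counts per bin
            for _ in range(max_iter):
                denom = (n_samples[:, None] *
                         np.exp((f[:, None] - bias) / kT)).sum(axis=0)
                p = num / denom                    # unbiased density estimate
                f_new = -kT * np.log((p[None, :] *
                                      np.exp(-bias / kT)).sum(axis=1))
                f_new -= f_new[0]                  # fix the arbitrary offset
                if np.max(np.abs(f_new - f)) < tol:
                    break
                f = f_new
            return p / p.sum()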

  8. Large-volume constant-concentration sampling technique coupling with surface-enhanced Raman spectroscopy for rapid on-site gas analysis

    Science.gov (United States)

    Zhang, Zhuomin; Zhan, Yisen; Huang, Yichun; Li, Gongke

    2017-08-01

    In this work, a portable large-volume constant-concentration (LVCC) sampling technique coupled with surface-enhanced Raman spectroscopy (SERS) was developed for rapid on-site gas analysis based on suitable derivatization methods. The LVCC sampling technique mainly consisted of a specially designed sampling cell, comprising a rigid sample container and a flexible sampling bag, and an absorption-derivatization module with a portable pump and a gas flowmeter. The technique allowed a large, adjustable and well-controlled sampling volume, which kept the concentration of the gas target in the headspace phase constant during the entire sampling process and made the sampling result more representative. Moreover, absorption and derivatization of the gas target during LVCC sampling were efficiently merged into one step, using bromine-thiourea and OPA-NH4+ strategies for ethylene and SO2 respectively, which made the technique readily compatible with subsequent SERS analysis. Finally, a new LVCC sampling-SERS method was developed and successfully applied to the rapid analysis of trace ethylene and SO2 from fruits. Trace ethylene and SO2 from real fruit samples could be accurately quantified by this method. The concentrations of ethylene and SO2 fluctuated only minimally during the entire LVCC sampling process, and recoveries from real samples were achieved in the range of 95.0-101% and 97.0-104%, respectively. It is expected that the portable LVCC sampling technique will pave the way for rapid on-site analysis of accurate concentrations of trace gas targets from real samples by SERS.

  9. Proteomics pipeline for biomarker discovery of laser capture microdissected breast cancer tissue

    NARCIS (Netherlands)

    N.Q. Liu (Ning Qing); R.B.H. Braakman (René); C. Stingl (Christoph); T.M. Luider (Theo); J.W.M. Martens (John); J.A. Foekens (John); A. Umar (Arzu)

    2012-01-01

    textabstractMass spectrometry (MS)-based label-free proteomics offers an unbiased approach to screen biomarkers related to disease progression and therapy-resistance of breast cancer on the global scale. However, multi-step sample preparation can introduce large variation in generated data, while

  10. Cosmological implications of a large complete quasar sample.

    Science.gov (United States)

    Segal, I E; Nicoll, J F

    1998-04-28

    Objective and reproducible determinations of the probabilistic significance levels of the deviations between theoretical cosmological prediction and direct model-independent observation are made for the Large Bright Quasar Sample [Foltz, C., Chaffee, F. H., Hewett, P. C., MacAlpine, G. M., Turnshek, D. A., et al. (1987) Astron. J. 94, 1423-1460]. The Expanding Universe model as represented by the Friedman-Lemaitre cosmology with parameters q_0 = 0, Λ = 0, denoted as C1, and chronometric cosmology (no relevant adjustable parameters), denoted as C2, are the cosmologies considered. The mean and the dispersion of the apparent magnitudes and the slope of the apparent magnitude-redshift relation are the directly observed statistics predicted. The C1 predictions of these cosmology-independent quantities are deviant by as much as 11σ from direct observation; none of the C2 predictions deviate by >2σ. The C1 deviations may be reconciled with theory by the hypothesis of quasar "evolution," which, however, appears incapable of being substantiated through direct observation. The excellent quantitative agreement of the C1 deviations with those predicted by C2 without adjustable parameters for the results of analysis predicated on C1 indicates that the evolution hypothesis may well be a theoretical artifact.

  11. A study of diabetes mellitus within a large sample of Australian twins

    DEFF Research Database (Denmark)

    Condon, Julianne; Shaw, Joanne E; Luciano, Michelle

    2008-01-01

    with type 2 diabetes (T2D), 41 female pairs with gestational diabetes (GD), 5 pairs with impaired glucose tolerance (IGT) and one pair with MODY. Heritabilities of T1D, T2D and GD were all high, but our samples did not have the power to detect effects of shared environment unless they were very large......Twin studies of diabetes mellitus can help elucidate genetic and environmental factors in etiology and can provide valuable biological samples for testing functional hypotheses, for example using expression and methylation studies of discordant pairs. We searched the volunteer Australian Twin...... Registry (19,387 pairs) for twins with diabetes using disease checklists from nine different surveys conducted from 1980-2000. After follow-up questionnaires to the twins and their doctors to confirm diagnoses, we eventually identified 46 pairs where one or both had type 1 diabetes (T1D), 113 pairs...

  12. Is Business Failure Due to Lack of Effort? Empirical Evidence from a Large Administrative Sample

    NARCIS (Netherlands)

    Ejrnaes, M.; Hochguertel, S.

    2013-01-01

    Does insurance provision reduce entrepreneurs' effort to avoid business failure? We exploit unique features of the voluntary Danish unemployment insurance (UI) scheme, that is available to the self-employed. Using a large sample of self-employed individuals, we estimate the causal effect of

  13. In-situ high resolution particle sampling by large time sequence inertial spectrometry

    International Nuclear Information System (INIS)

    Prodi, V.; Belosi, F.

    1990-09-01

    In situ sampling is always preferred, when possible, because of the artifacts that can arise when the aerosol has to flow through long sampling lines. On the other hand, the amount of possible losses can be calculated with some confidence only when the size distribution can be measured with a sufficient precision and the losses are not too large. This makes it desirable to sample directly in the vicinity of the aerosol source or containment. High temperature sampling devices with a detailed aerodynamic separation are extremely useful to this purpose. Several measurements are possible with the inertial spectrometer (INSPEC), but not with cascade impactors or cyclones. INSPEC - INertial SPECtrometer - has been conceived to measure the size distribution of aerosols by separating the particles while airborne according to their size and collecting them on a filter. It consists of a channel of rectangular cross-section with a 90 degree bend. Clean air is drawn through the channel, with a thin aerosol sheath injected close to the inner wall. Due to the bend, the particles are separated according to their size, leaving the original streamline by a distance which is a function of particle inertia and resistance, i.e. of aerodynamic diameter. The filter collects all the particles of the same aerodynamic size at the same distance from the inlet, in a continuous distribution. INSPEC particle separation at high temperature (up to 800 C) has been tested with Zirconia particles as calibration aerosols. The feasibility study has been concerned with resolution and time sequence sampling capabilities under high temperature (700 C)

  14. High Efficient THz Emission From Unbiased and Biased Semiconductor Nanowires Fabricated Using Electron Beam Lithography

    Energy Technology Data Exchange (ETDEWEB)

    Balci, Soner; Czaplewski, David A.; Jung, Il Woong; Kim, Ju-Hyung; Hatami, Fariba; Kung, Patrick; Kim, Seongsin Margaret

    2017-07-01

    Besides having perfect control over structural features, such as vertical alignment and uniform distribution, by fabricating the wires via e-beam lithography and an etching process, we also investigated the THz emission from these fabricated nanowires when a DC bias voltage is applied. To be able to apply a voltage bias, an interdigitated gold (Au) electrode was patterned on the high-quality InGaAs epilayer grown on an InP substrate by molecular beam epitaxy. Afterwards, perfectly vertically aligned and uniformly distributed nanowires were fabricated in between the electrodes of this interdigitated pattern so that we could apply a voltage bias to improve the THz emission. As a result, we achieved an enhancement of the emitted THz radiation by ~four times, about a 12 dB increase in power ratio at 0.25 THz with a DC biased electric field, compared with unbiased NWs.

  15. Field sampling, preparation procedure and plutonium analyses of large freshwater samples

    International Nuclear Information System (INIS)

    Straelberg, E.; Bjerk, T.O.; Oestmo, K.; Brittain, J.E.

    2002-01-01

    This work is part of an investigation of the mobility of plutonium in freshwater systems containing humic substances. A well-defined bog-stream system located in the catchment area of a subalpine lake, Oevre Heimdalsvatn, Norway, is being studied. During the summer of 1999, six water samples were collected from the tributary stream Lektorbekken and the lake itself. However, the analyses showed that the plutonium concentration was below the detection limit in all the samples. Therefore renewed sampling at the same sites was carried out in August 2000. The results so far are in agreement with previous analyses from the Heimdalen area. However, 100 times higher concentrations are found in the lowlands in the eastern part of Norway. The reason for this is not understood, but may be caused by differences in the concentrations of humic substances and/or the fact that the mountain areas are covered with snow for a longer period of time every year. (LN)

  16. A topological analysis of large-scale structure, studied using the CMASS sample of SDSS-III

    International Nuclear Information System (INIS)

    Parihar, Prachi; Gott, J. Richard III; Vogeley, Michael S.; Choi, Yun-Young; Kim, Juhan; Kim, Sungsoo S.; Speare, Robert; Brownstein, Joel R.; Brinkmann, J.

    2014-01-01

    We study the three-dimensional genus topology of large-scale structure using the northern region of the CMASS Data Release 10 (DR10) sample of the SDSS-III Baryon Oscillation Spectroscopic Survey. We select galaxies with redshift 0.452 < z < 0.625 and with a stellar mass M_stellar > 10^11.56 M_☉. We study the topology at two smoothing lengths: R_G = 21 h^-1 Mpc and R_G = 34 h^-1 Mpc. The genus topology studied at the R_G = 21 h^-1 Mpc scale results in the highest genus amplitude observed to date. The CMASS sample yields a genus curve that is characteristic of one produced by Gaussian random phase initial conditions. The data thus support the standard model of inflation where random quantum fluctuations in the early universe produced Gaussian random phase initial conditions. Modest deviations in the observed genus from random phase are as expected from shot noise effects and the nonlinear evolution of structure. We suggest the use of a fitting formula motivated by perturbation theory to characterize the shift and asymmetries in the observed genus curve with a single parameter. We construct 54 mock SDSS CMASS surveys along the past light cone from the Horizon Run 3 (HR3) N-body simulations, where gravitationally bound dark matter subhalos are identified as the sites of galaxy formation. We study the genus topology of the HR3 mock surveys with the same geometry and sampling density as the observational sample and find the observed genus topology to be consistent with ΛCDM as simulated by the HR3 mock samples. We conclude that the topology of the large-scale structure in the SDSS CMASS sample is consistent with cosmological models having primordial Gaussian density fluctuations growing in accordance with general relativity to form galaxies in massive dark matter halos.
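    For reference, the Gaussian random-phase genus curve against which such measurements are compared has a standard analytic form (a textbook result quoted here for orientation; ν is the density threshold in units of σ and ⟨k²⟩ is the mean square wavenumber of the smoothed power spectrum):

        g(\nu) = A\,(1 - \nu^{2})\,e^{-\nu^{2}/2},
        \qquad
        A = \frac{1}{(2\pi)^{2}} \left( \frac{\langle k^{2} \rangle}{3} \right)^{3/2}

    Departures of the fitted amplitude and shape of an observed genus curve from this form quantify non-Gaussianity.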

  17. Gene coexpression measures in large heterogeneous samples using count statistics.

    Science.gov (United States)

    Wang, Y X Rachel; Waterman, Michael S; Huang, Haiyan

    2014-11-18

    With the advent of high-throughput technologies making large-scale gene expression data readily available, developing appropriate computational tools to process these data and distill insights into systems biology has been an important part of the "big data" challenge. Gene coexpression is one of the earliest techniques developed that is still widely in use for functional annotation, pathway analysis, and, most importantly, the reconstruction of gene regulatory networks, based on gene expression data. However, most coexpression measures do not specifically account for local features in expression profiles. For example, it is very likely that the patterns of gene association may change or only exist in a subset of the samples, especially when the samples are pooled from a range of experiments. We propose two new gene coexpression statistics based on counting local patterns of gene expression ranks to take into account the potentially diverse nature of gene interactions. In particular, one of our statistics is designed for time-course data with local dependence structures, such as time series coupled over a subregion of the time domain. We provide asymptotic analysis of their distributions and power, and evaluate their performance against a wide range of existing coexpression measures on simulated and real data. Our new statistics are fast to compute, robust against outliers, and show comparable and often better general performance.
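    A toy version of a rank-pattern count statistic of the kind motivated in this record is sketched below; the counting scheme (sliding windows whose within-window rank orderings must agree) is a simplification, not the exact statistics proposed by the authors:

        import numpy as np

        def local_rank_pattern_count(x, y, w=3):
            # Count sliding windows in which the two expression profiles
            # share the same within-window rank ordering. Comparing only
            # ranks makes the count robust to outliers, and windowing
            # captures associations confined to a subregion of the domain.
            x, y = np.asarray(x), np.asarray(y)
            count = 0
            for i in range(len(x) - w + 1):
                if np.array_equal(np.argsort(x[i:i + w]),
                                  np.argsort(y[i:i + w])):
                    count += 1
            return count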

  18. SURVIVAL ANALYSIS AND LENGTH-BIASED SAMPLING

    Directory of Open Access Journals (Sweden)

    Masoud Asgharian

    2010-12-01

    When survival data are collected as part of a prevalent cohort study, the recruited cases have already experienced their initiating event. These prevalent cases are then followed for a fixed period of time, at the end of which the subjects will either have failed or have been censored. When interest lies in estimating the survival distribution, from onset, of subjects with the disease, one must take into account that the survival times of the cases in a prevalent cohort study are left truncated. When it is possible to assume that there has not been any epidemic of the disease over the past period of time that covers the onset times of the subjects, one may assume that the underlying incidence process that generates the initiating event times is a stationary Poisson process. Under such an assumption, the survival times of the recruited subjects are called "length-biased". I discuss the challenges one is faced with in analyzing this type of data. To address the theoretical aspects of the work, I present asymptotic results for the NPMLE of the length-biased as well as the unbiased survival distribution. I also discuss estimating the unbiased survival function using only the follow-up time. This addresses the case where the onset times are either unknown or known with uncertainty. Some of our most recent work and open questions will be presented. These include some aspects of the analysis of covariates, strong approximation, functional LIL and density estimation under length-biased sampling with right censoring. The results will be illustrated with survival data from patients with dementia, collected as part of the Canadian Study of Health and Aging (CSHA).

  19. Gaussian vs. Bessel light-sheets: performance analysis in live large sample imaging

    Science.gov (United States)

    Reidt, Sascha L.; Correia, Ricardo B. C.; Donnachie, Mark; Weijer, Cornelis J.; MacDonald, Michael P.

    2017-08-01

    Lightsheet fluorescence microscopy (LSFM) has rapidly progressed in the past decade from an emerging technology into an established methodology. This progress has largely been driven by its suitability to developmental biology, where it is able to give excellent spatial-temporal resolution over relatively large fields of view with good contrast and low phototoxicity. In many respects it is superseding confocal microscopy. However, it is no magic bullet and still struggles to image deeply in more highly scattering samples. Many solutions to this challenge have been presented, including, Airy and Bessel illumination, 2-photon operation and deconvolution techniques. In this work, we show a comparison between a simple but effective Gaussian beam illumination and Bessel illumination for imaging in chicken embryos. Whilst Bessel illumination is shown to be of benefit when a greater depth of field is required, it is not possible to see any benefits for imaging into the highly scattering tissue of the chick embryo.

  20. Mid-IR Properties of an Unbiased AGN Sample of the Local Universe. I. Emission-Line Diagnostics

    Science.gov (United States)

    Weaver, K. A.; Melendez, M.; Mushotzky, R. F.; Kraemer, S.; Engle, K.; Malumuth, E.; Tueller, J.; Markwardt, C.; Berghea, C. T.; Dudik, R. P.; et al.

    2010-01-01

    We compare mid-IR emission-line properties, from high-resolution Spitzer IRS spectra, of a statistically-complete hard X-ray (14-195 keV) selected sample of nearby (z < 0.05) AGN detected by the Burst Alert Telescope (BAT) aboard Swift. The luminosity distributions for the mid-infrared emission lines, [O IV] 25.89 microns, [Ne II] 12.81 microns, [Ne III] 15.56 microns and [Ne V] 14.32 microns, and the hard X-ray continuum show no differences between the Seyfert 1 and Seyfert 2 populations, although six newly discovered BAT AGNs are shown to be under-luminous in [O IV], most likely the result of dust extinction in the host galaxy. The overall tightness of the correlations between the mid-infrared line and BAT luminosities suggests that the emission lines primarily arise in gas ionized by the AGN. We also compared the mid-IR emission lines in the BAT AGNs with those from published studies of star-forming galaxies and LINERs. We found that the BAT AGN fall into a distinctive region when comparing the [Ne III]/[Ne II] and the [O IV]/[Ne III] ratios. From this we found that sources that have been previously classified in the mid-infrared/optical as AGN have smaller emission line ratios than those found for the BAT AGNs, suggesting that, in our X-ray selected sample, the AGN represents the main contribution to the observed line emission. Overall, we present a different set of emission line diagnostics to distinguish between AGN and star forming galaxies that can be used as a tool to find new AGN.

  1. Waardenburg syndrome: Novel mutations in a large Brazilian sample.

    Science.gov (United States)

    Bocángel, Magnolia Astrid Pretell; Melo, Uirá Souto; Alves, Leandro Ucela; Pardono, Eliete; Lourenço, Naila Cristina Vilaça; Marcolino, Humberto Vicente Cezar; Otto, Paulo Alberto; Mingroni-Netto, Regina Célia

    2018-06-01

    This paper deals with the molecular investigation of Waardenburg syndrome (WS) in a sample of 49 clinically diagnosed probands (most from southeastern Brazil), 24 of them having the type 1 (WS1) variant (10 familial and 14 isolated cases) and 25 being affected by the type 2 (WS2) variant (five familial and 20 isolated cases). Sequential Sanger sequencing of all coding exons of PAX3, MITF, EDN3, EDNRB, SOX10 and SNAI2 genes, followed by CNV detection by MLPA of PAX3, MITF and SOX10 genes in selected cases revealed many novel pathogenic variants. Molecular screening, performed in all patients, revealed 19 causative variants (19/49 = 38.8%), six of them being large whole-exon deletions detected by MLPA, seven (four missense and three nonsense substitutions) resulting from single nucleotide substitutions (SNV), and six representing small indels. A pair of dizygotic affected female twins presented the c.430delC variant in SOX10, but the mutation, imputed to gonadal mosaicism, was not found in their unaffected parents. At least 10 novel causative mutations, described in this paper, were found in this Brazilian sample. Copy-number-variation detected by MLPA identified the causative mutation in 12.2% of our cases, corresponding to 31.6% of all causative mutations. In the majority of cases, the deletions were sporadic, since they were not present in the parents of isolated cases. Our results, as a whole, reinforce the fact that the screening of copy-number-variants by MLPA is a powerful tool to identify the molecular cause in WS patients. Copyright © 2018 Elsevier Masson SAS. All rights reserved.

  2. IFCC guideline for sampling, measuring and reporting ionized magnesium in plasma

    DEFF Research Database (Denmark)

    Rayana, M.C. Ben; Burnett, R.W.; Covington, A.K.

    2008-01-01

    Analyzers with ion-selective electrodes (ISEs) for ionized magnesium (iMg) should yield comparable and unbiased results for iMg. This IFCC guideline on sampling, measuring and reporting iMg in plasma provides a prerequisite to achieve this goal [in this document, "plasma" refers to circulating...... plasma and the forms in which it is sampled, namely the plasma phase of anticoagulated whole blood (or "blood"), plasma separated from blood cells, or serum]. The guideline recommends measuring and reporting ionized magnesium as a substance concentration relative to the substance concentration...... of magnesium in primary aqueous calibrants with magnesium, sodium, and calcium chloride of physiological ionic strength. The recommended name is "the concentration of ionized magnesium in plasma". Based on this guideline, results will be approximately 3% higher than the true substance concentration and 4...

  3. Tracing the trajectory of skill learning with a very large sample of online game players.

    Science.gov (United States)

    Stafford, Tom; Dewar, Michael

    2014-02-01

    In the present study, we analyzed data from a very large sample (N = 854,064) of players of an online game involving rapid perception, decision making, and motor responding. Use of game data allowed us to connect, for the first time, rich details of training history with measures of performance from participants engaged for a sustained amount of time in effortful practice. We showed that lawful relations exist between practice amount and subsequent performance, and between practice spacing and subsequent performance. Our methodology allowed an in situ confirmation of results long established in the experimental literature on skill acquisition. Additionally, we showed that greater initial variation in performance is linked to higher subsequent performance, a result we link to the exploration/exploitation trade-off from the computational framework of reinforcement learning. We discuss the benefits and opportunities of behavioral data sets with very large sample sizes and suggest that this approach could be particularly fecund for studies of skill acquisition.
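    The "lawful relations" referred to in this record are commonly summarized by the power law of practice. A minimal fit of that law, under the assumption that per-player game counts and performance scores are available, might look like this (an illustrative sketch, not the authors' analysis):

        import numpy as np

        def fit_practice_curve(n_games, score):
            # Fit the power law of practice, score = a * n^b, by linear
            # regression in log-log space; b > 0 indicates improvement
            # with practice amount. Returns (a, b).
            b, log_a = np.polyfit(np.log(n_games), np.log(score), 1)
            return np.exp(log_a), b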

  4. An efficient and robust algorithm for parallel groupwise registration of bone surfaces

    NARCIS (Netherlands)

    van de Giessen, Martijn; Vos, Frans M.; Grimbergen, Cornelis A.; van Vliet, Lucas J.; Streekstra, Geert J.

    2012-01-01

    In this paper a novel groupwise registration algorithm is proposed for the unbiased registration of a large number of densely sampled point clouds. The method fits an evolving mean shape to each of the example point clouds thereby minimizing the total deformation. The registration algorithm

  5. Imaging a Large Sample with Selective Plane Illumination Microscopy Based on Multiple Fluorescent Microsphere Tracking

    Science.gov (United States)

    Ryu, Inkeon; Kim, Daekeun

    2018-04-01

    A typical selective plane illumination microscopy (SPIM) image size is basically limited by the field of view, which is a characteristic of the objective lens. If an image larger than the imaging area of the sample is to be obtained, image stitching, which combines step-scanned images into a single panoramic image, is required. However, accurately registering the step-scanned images is very difficult because the SPIM system uses a customized sample mount in which uncertainties in the translational and rotational motions exist. In this paper, an image registration technique based on multiple fluorescent microsphere tracking is proposed, which quantifies the constellations of, and measures the distances between, at least two fluorescent microspheres embedded in the sample. Image stitching results are demonstrated for optically cleared large tissue with various staining methods. Compensation for the effect of the sample rotation that occurs during the translational motion in the sample mount is also discussed.
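    One plausible core step for registering step-scanned tiles from tracked microspheres is a rigid least-squares fit between matched bead coordinates. The Kabsch/Procrustes sketch below illustrates that step under the assumption of known bead correspondences; it is not the authors' exact pipeline:

        import numpy as np

        def rigid_fit(src, dst):
            # Least-squares rotation R and translation t mapping matched
            # bead coordinates src -> dst (both n x 3 arrays), so that
            # dst ~= R @ src + t for each bead.
            c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
            H = (src - c_src).T @ (dst - c_dst)       # cross-covariance
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard vs. reflection
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = c_dst - R @ c_src
            return R, t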

  6. Nanoscale Synaptic Membrane Mimetic Allows Unbiased High Throughput Screen That Targets Binding Sites for Alzheimer’s-Associated Aβ Oligomers

    Science.gov (United States)

    Wilcox, Kyle C.; Marunde, Matthew R.; Das, Aditi; Velasco, Pauline T.; Kuhns, Benjamin D.; Marty, Michael T.; Jiang, Haoming; Luan, Chi-Hao; Sligar, Stephen G.; Klein, William L.

    2015-01-01

    Despite their value as sources of therapeutic drug targets, membrane proteomes are largely inaccessible to high-throughput screening (HTS) tools designed for soluble proteins. An important example comprises the membrane proteins that bind amyloid β oligomers (AβOs). AβOs are neurotoxic ligands thought to instigate the synapse damage that leads to Alzheimer’s dementia. At present, the identities of initial AβO binding sites are highly uncertain, largely because of extensive protein-protein interactions that occur following attachment of AβOs to surface membranes. Here, we show that AβO binding sites can be obtained in a state suitable for unbiased HTS by encapsulating the solubilized synaptic membrane proteome into nanoscale lipid bilayers (Nanodiscs). This method gives a soluble membrane protein library (SMPL)—a collection of individualized synaptic proteins in a soluble state. Proteins within SMPL Nanodiscs showed enzymatic and ligand binding activity consistent with conformational integrity. AβOs were found to bind SMPL Nanodiscs with high affinity and specificity, with binding dependent on intact synaptic membrane proteins, and selective for the higher molecular weight oligomers known to accumulate at synapses. Combining SMPL Nanodiscs with a mix-incubate-read chemiluminescence assay provided a solution-based HTS platform to discover antagonists of AβO binding. Screening a library of 2700 drug-like compounds and natural products yielded one compound that potently reduced AβO binding to SMPL Nanodiscs, synaptosomes, and synapses in nerve cell cultures. Although not a therapeutic candidate, this small molecule inhibitor of synaptic AβO binding will provide a useful experimental antagonist for future mechanistic studies of AβOs in Alzheimer’s model systems. Overall, results provide proof of concept for using SMPLs in high throughput screening for AβO binding antagonists, and illustrate in general how a SMPL Nanodisc system can facilitate drug

  7. The Brief Negative Symptom Scale (BNSS): Independent validation in a large sample of Italian patients with schizophrenia.

    Science.gov (United States)

    Mucci, A; Galderisi, S; Merlotti, E; Rossi, A; Rocca, P; Bucci, P; Piegari, G; Chieffi, M; Vignapiano, A; Maj, M

    2015-07-01

    The Brief Negative Symptom Scale (BNSS) was developed to address the main limitations of the existing scales for the assessment of negative symptoms of schizophrenia. The initial validation of the scale by the group involved in its development demonstrated good convergent and discriminant validity, and a factor structure confirming the two domains of negative symptoms (reduced emotional/verbal expression and anhedonia/asociality/avolition). However, only relatively small samples of patients with schizophrenia were investigated. Further independent validation in large clinical samples might be instrumental to the broad diffusion of the scale in clinical research. The present study aimed to examine the BNSS inter-rater reliability, convergent/discriminant validity and factor structure in a large Italian sample of outpatients with schizophrenia. Our results confirmed the excellent inter-rater reliability of the BNSS (the intraclass correlation coefficient ranged from 0.81 to 0.98 for individual items and was 0.98 for the total score). The convergent validity measures had r values from 0.62 to 0.77, while the divergent validity measures had r values from 0.20 to 0.28 in the main sample (n=912) and in a subsample without clinically significant levels of depression and extrapyramidal symptoms (n=496). The BNSS factor structure was supported in both groups. The study confirms that the BNSS is a promising measure for quantifying negative symptoms of schizophrenia in large multicenter clinical studies. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  8. Efficient inference of population size histories and locus-specific mutation rates from large-sample genomic variation data.

    Science.gov (United States)

    Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S

    2015-02-01

    With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.

  9. Large-volume injection of sample diluents not miscible with the mobile phase as an alternative approach in sample preparation for bioanalysis: an application for fenspiride bioequivalence.

    Science.gov (United States)

    Medvedovici, Andrei; Udrescu, Stefan; Albu, Florin; Tache, Florentin; David, Victor

    2011-09-01

    Liquid-liquid extraction of target compounds from biological matrices followed by the injection of a large volume from the organic layer into the chromatographic column operated under reversed-phase (RP) conditions would successfully combine the selectivity and the straightforward character of the procedure in order to enhance sensitivity, compared with the usual approach of involving solvent evaporation and residue re-dissolution. Large-volume injection of samples in diluents that are not miscible with the mobile phase was recently introduced in chromatographic practice. The risk of random errors produced during the manipulation of samples is also substantially reduced. A bioanalytical method designed for the bioequivalence of fenspiride containing pharmaceutical formulations was based on a sample preparation procedure involving extraction of the target analyte and the internal standard (trimetazidine) from alkalinized plasma samples in 1-octanol. A volume of 75 µl from the octanol layer was directly injected on a Zorbax SB C18 Rapid Resolution, 50 mm length × 4.6 mm internal diameter × 1.8 µm particle size column, with the RP separation being carried out under gradient elution conditions. Detection was made through positive ESI and MS/MS. Aspects related to method development and validation are discussed. The bioanalytical method was successfully applied to assess bioequivalence of a modified release pharmaceutical formulation containing 80 mg fenspiride hydrochloride during two different studies carried out as single-dose administration under fasting and fed conditions (four arms), and multiple doses administration, respectively. The quality attributes assigned to the bioanalytical method, as resulting from its application to the bioequivalence studies, are highlighted and fully demonstrate that sample preparation based on large-volume injection of immiscible diluents has an increased potential for application in bioanalysis.

  10. Psychometric Properties of the Penn State Worry Questionnaire for Children in a Large Clinical Sample

    Science.gov (United States)

    Pestle, Sarah L.; Chorpita, Bruce F.; Schiffman, Jason

    2008-01-01

    The Penn State Worry Questionnaire for Children (PSWQ-C; Chorpita, Tracey, Brown, Collica, & Barlow, 1997) is a 14-item self-report measure of worry in children and adolescents. Although the PSWQ-C has demonstrated favorable psychometric properties in small clinical and large community samples, this study represents the first psychometric…

  11. Analysis of reflection-peak wavelengths of sampled fiber Bragg gratings with large chirp.

    Science.gov (United States)

    Zou, Xihua; Pan, Wei; Luo, Bin

    2008-09-10

    The reflection-peak wavelengths (RPWs) in the spectra of sampled fiber Bragg gratings with large chirp (SFBGs-LC) are theoretically investigated. Such RPWs are divided into two parts, the RPWs of equivalent uniform SFBGs (U-SFBGs) and the wavelength shift caused by the large chirp in the grating period (CGP). We propose a quasi-equivalent transform to deal with the CGP. That is, the CGP is transferred into quasi-equivalent phase shifts to directly derive the Fourier transform of the refractive index modulation. Then, in the case of both the direct and the inverse Talbot effect, the wavelength shift is obtained from the Fourier transform. Finally, the RPWs of SFBGs-LC can be achieved by combining the wavelength shift and the RPWs of equivalent U-SFBGs. Several simulations are shown to numerically confirm these predicted RPWs of SFBGs-LC.

  12. Superwind Outflows in Seyfert Galaxies? : Large-Scale Radio Maps of an Edge-On Sample

    Science.gov (United States)

    Colbert, E.; Gallimore, J.; Baum, S.; O'Dea, C.

    1995-03-01

    Large-scale galactic winds (superwinds) are commonly found flowing out of the nuclear region of ultraluminous infrared and powerful starburst galaxies. Stellar winds and supernovae from the nuclear starburst provide the energy to drive these superwinds. The outflowing gas escapes along the rotation axis, sweeping up and shock-heating clouds in the halo, which produces optical line emission, radio synchrotron emission, and X-rays. These features can most easily be studied in edge-on systems, so that the wind emission is not confused by that from the disk. We have begun a systematic search for superwind outflows in Seyfert galaxies. In an earlier optical emission-line survey, we found extended minor axis emission and/or double-peaked emission line profiles in >~30% of the sample objects. We present here large-scale (6cm VLA C-config) radio maps of 11 edge-on Seyfert galaxies, selected (without bias) from a distance-limited sample of 23 edge-on Seyferts. These data have been used to estimate the frequency of occurrence of superwinds. Preliminary results indicate that four (36%) of the 11 objects observed and six (26%) of the 23 objects in the distance-limited sample have extended radio emission oriented perpendicular to the galaxy disk. This emission may be produced by a galactic wind blowing out of the disk. Two (NGC 2992 and NGC 5506) of the nine objects for which we have both radio and optical data show good evidence for a galactic wind in both datasets. We suggest that galactic winds occur in >~30% of all Seyferts. A goal of this work is to find a diagnostic that can be used to distinguish between large-scale outflows that are driven by starbursts and those that are driven by an AGN. The presence of starburst-driven superwinds in Seyferts, if established, would have important implications for the connection between starburst galaxies and AGN.

  13. A COMPLETE SAMPLE OF BRIGHT SWIFT LONG GAMMA-RAY BURSTS. I. SAMPLE PRESENTATION, LUMINOSITY FUNCTION AND EVOLUTION

    International Nuclear Information System (INIS)

    Salvaterra, R.; Campana, S.; Vergani, S. D.; Covino, S.; D'Avanzo, P.; Fugazza, D.; Ghirlanda, G.; Ghisellini, G.; Melandri, A.; Sbarufatti, B.; Tagliaferri, G.; Nava, L.; Flores, H.; Piranomonte, S.

    2012-01-01

We present a carefully selected sub-sample of Swift long gamma-ray bursts (GRBs) that is complete in redshift. The sample is constructed by considering only bursts with favorable observing conditions for ground-based follow-up searches, which are bright in the 15-150 keV Swift/BAT band, i.e., with 1-s peak photon fluxes in excess of 2.6 photons s^-1 cm^-2. The sample is composed of 58 bursts, 52 of them with redshift, for a completeness level of 90%, while another two have a redshift constraint, reaching a completeness level of 95%. For only three bursts we have no constraint on the redshift. The high level of redshift completeness allows us for the first time to constrain the GRB luminosity function and its evolution with cosmic time in an unbiased way. We find that strong evolution in luminosity (δ_l = 2.3 ± 0.6) or in density (δ_d = 1.7 ± 0.5) is required in order to account for the observations. The derived redshift distributions in the two scenarios are consistent with each other, in spite of their different intrinsic redshift distributions. This calls for other indicators to distinguish among different evolution models. Complete samples are at the base of any population studies. In future works we will use this unique sample of Swift bright GRBs to study the properties of the population of long GRBs.

  15. Estimation of river and stream temperature trends under haphazard sampling

    Science.gov (United States)

    Gray, Brian R.; Lyubchich, Vyacheslav; Gel, Yulia R.; Rogala, James T.; Robertson, Dale M.; Wei, Xiaoqiao

    2015-01-01

Long-term temporal trends in water temperature in rivers and streams are typically estimated under the assumption of evenly-spaced space-time measurements. However, sampling times and dates associated with historical water temperature datasets and some sampling designs may be haphazard. As a result, trends in temperature may be confounded with trends in time or space of sampling, which, in turn, may yield biased trend estimators and thus unreliable conclusions. We address this concern using multilevel (hierarchical) linear models, where time effects are allowed to vary randomly by day and date effects by year. We evaluate the proposed approach by Monte Carlo simulations with imbalance, sparse data and confounding by trend in time and date of sampling. Simulation results indicate unbiased trend estimators, while results from a case study of temperature data from the Illinois River, USA conform to river thermal assumptions. We also propose a new nonparametric bootstrap inference on multilevel models that allows for a relatively flexible and distribution-free quantification of uncertainties. The proposed multilevel modeling approach may be elaborated to accommodate nonlinearities within days and years when sampling times or dates typically span temperature extremes.
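    A minimal sketch of the kind of multilevel model the abstract describes, using statsmodels. The file and column names are hypothetical, and unlike the authors' full model (time effects varying by day, date effects by year) this simplified version uses a single grouping factor with random intercepts:

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data: columns temp_C, year, hour, date. Random intercepts
    # by sampling date keep haphazard sampling times from masquerading as a
    # long-term temperature trend.
    df = pd.read_csv("river_temps.csv")
    model = smf.mixedlm("temp_C ~ year + hour", data=df, groups="date")
    result = model.fit()
    print(result.summary())   # the 'year' coefficient is the trend in degrees C per year
    ```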

  16. Application of k0-based internal monostandard NAA for large sample analysis of clay pottery. As a part of inter comparison exercise

    International Nuclear Information System (INIS)

    Acharya, R.; Dasari, K.B.; Pujari, P.K.; Swain, K.K.; Shinde, A.D.; Reddy, A.V.R.

    2014-01-01

As a part of an inter-comparison exercise of an IAEA Coordinated Research Project on large sample neutron activation analysis, a large size and non-standard geometry pottery replica (obtained from Peru) was analyzed by k0-based internal monostandard neutron activation analysis (IM-NAA). Two large size sub-samples (0.40 and 0.25 kg) were irradiated at the graphite reflector position of the AHWR Critical Facility in BARC, Trombay, Mumbai, India. Small samples (100-200 mg) were also analyzed by IM-NAA for comparison purposes. Radioactive assay was carried out using a 40% relative efficiency HPGe detector. To examine the homogeneity of the sample, counting was also carried out using an X-Z rotary scanning unit. The in situ relative detection efficiency was evaluated using gamma rays of the activation products in the irradiated sample in the energy range of 122-2,754 keV. Elemental concentration ratios with respect to Na of small size (100 mg mass) as well as large size (15 and 400 g) samples were used to check the homogeneity of the samples. Concentration ratios of 18 elements such as K, Sc, Cr, Mn, Fe, Co, Zn, As, Rb, Cs, La, Ce, Sm, Eu, Yb, Lu, Hf and Th with respect to Na (internal monostandard) were calculated using IM-NAA. Absolute concentrations were arrived at for both large and small samples using the Na concentration obtained from the relative method of NAA. The percentage combined uncertainties at the ±1s confidence limit on the determined values were in the range of 3-9%. Two IAEA reference materials, SL-1 and SL-3, were analyzed by IM-NAA to evaluate the accuracy of the method. (author)
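    For orientation, the internal monostandard relation underlying IM-NAA can be sketched as below. This simplified form, which neglects the epithermal correction terms, is an assumption recalled from the general k0 literature for illustration, not a formula quoted from this paper:

    ```latex
    % Sketch of the k0-based internal monostandard relation (simplified):
    \[
      \frac{c_x}{c_{\mathrm{Na}}}
      = \frac{A_x / \varepsilon_x}{A_{\mathrm{Na}} / \varepsilon_{\mathrm{Na}}}
        \cdot \frac{k_{0,\mathrm{Au}}(\mathrm{Na})}{k_{0,\mathrm{Au}}(x)}
    \]
    % A: net peak area per unit mass; epsilon: in situ relative detection
    % efficiency at the gamma energy; k0,Au: literature k0 factors.
    ```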

  17. Neurocognitive impairment in a large sample of homeless adults with mental illness.

    Science.gov (United States)

    Stergiopoulos, V; Cusi, A; Bekele, T; Skosireva, A; Latimer, E; Schütz, C; Fernando, I; Rourke, S B

    2015-04-01

    This study examines neurocognitive functioning in a large, well-characterized sample of homeless adults with mental illness and assesses demographic and clinical factors associated with neurocognitive performance. A total of 1500 homeless adults with mental illness enrolled in the At Home Chez Soi study completed neuropsychological measures assessing speed of information processing, memory, and executive functioning. Sociodemographic and clinical data were also collected. Linear regression analyses were conducted to examine factors associated with neurocognitive performance. Approximately half of our sample met criteria for psychosis, major depressive disorder, and alcohol or substance use disorder, and nearly half had experienced severe traumatic brain injury. Overall, 72% of participants demonstrated cognitive impairment, including deficits in processing speed (48%), verbal learning (71%) and recall (67%), and executive functioning (38%). The overall statistical model explained 19.8% of the variance in the neurocognitive summary score, with reduced neurocognitive performance associated with older age, lower education, first language other than English or French, Black or Other ethnicity, and the presence of psychosis. Homeless adults with mental illness experience impairment in multiple neuropsychological domains. Much of the variance in our sample's cognitive performance remains unexplained, highlighting the need for further research in the mechanisms underlying cognitive impairment in this population. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  18. Early science with the large millimeter telescope: exploring the effect of AGN activity on the relationships between molecular gas, dust, and star formation

    Energy Technology Data Exchange (ETDEWEB)

    Kirkpatrick, Allison; Pope, Alexandra; Calzetti, Daniela; Narayanan, Gopal; Schloerb, F. Peter; Yun, Min S. [Department of Astronomy, University of Massachusetts, Amherst, MA 01002 (United States); Aretxaga, Itziar; Montaña, Alfredo; Vega, Olga [Instituto Nacional de Astrofísica, Optica y Electrónica, Apdos. Postales 51 y 216, C.P. 72000 Puebla, Pue. (Mexico); Armus, Lee [Spitzer Science Center, California Institute of Technology, MS 220-6, Pasadena, CA 91125 (United States); Helou, George [Infrared Processing and Analysis Center, California Institute of Technology, Pasadena, CA 91125 (United States); Shi, Yong, E-mail: kirkpatr@astro.umass.edu [School of Astronomy and Space Science, Nanjing University, Nanjing, 210093 (China)

    2014-12-01

The molecular gas, H2, that fuels star formation in galaxies is difficult to observe directly. As such, the ratio of L_IR to L'_CO is an observational estimate of the star formation rate compared with the amount of molecular gas available to form stars, which is related to the star formation efficiency and the inverse of the gas consumption timescale. We test what effect an IR luminous active galactic nucleus (AGN) has on the ratio L_IR/L'_CO in a sample of 24 intermediate redshift galaxies from the 5 mJy Unbiased Spitzer Extragalactic Survey (5MUSES). We obtain new CO(1-0) observations with the Redshift Search Receiver on the Large Millimeter Telescope. We diagnose the presence and strength of an AGN using Spitzer IRS spectroscopy. We find that removing the AGN contribution to L_IR^tot results in a mean L_IR^SF/L'_CO for our entire sample consistent with the mean L_IR/L'_CO derived for a large sample of star forming galaxies from z ∼ 0-3. We also include in our comparison the relative amount of polycyclic aromatic hydrocarbon emission for our sample and a literature sample of local and high-redshift ultra-luminous infrared galaxies and find a consistent trend between L_6.2/L_IR^SF and L_IR^SF/L'_CO, such that small dust grain emission decreases with increasing L_IR^SF/L'_CO for both local and high-redshift dusty galaxies.

  19. The large-scale environment from cosmological simulations - I. The baryonic cosmic web

    Science.gov (United States)

    Cui, Weiguang; Knebe, Alexander; Yepes, Gustavo; Yang, Xiaohu; Borgani, Stefano; Kang, Xi; Power, Chris; Staveley-Smith, Lister

    2018-01-01

Using a series of cosmological simulations that includes one dark-matter-only (DM-only) run, one gas cooling-star formation-supernova feedback (CSF) run and one that additionally includes feedback from active galactic nuclei (AGNs), we classify the large-scale structures with both a velocity-shear-tensor code (VWEB) and a tidal-tensor code (PWEB). We find that the baryonic processes have almost no impact on large-scale structures - at least not when classified using the aforementioned techniques. More importantly, our results confirm that the gas component alone can be used to infer the filamentary structure of the universe practically without bias, which could be applied to cosmology constraints. In addition, the gas filaments are classified with their velocity (VWEB) and density (PWEB) fields, which can in principle be connected to radio observations, such as H I surveys. This will help us link radio observations to the large-scale dark matter distribution without bias.

  20. On the accuracy of protein determination in large biological samples by prompt gamma neutron activation analysis

    International Nuclear Information System (INIS)

    Kasviki, K.; Stamatelatos, I.E.; Yannakopoulou, E.; Papadopoulou, P.; Kalef-Ezra, J.

    2007-01-01

A prompt gamma neutron activation analysis (PGNAA) facility has been developed for the determination of nitrogen and thus total protein in large volume biological samples or the whole body of small animals. In the present work, the accuracy of nitrogen determination by PGNAA in phantoms of known composition as well as in four raw ground meat samples of about 1 kg mass was examined. Dumas combustion and Kjeldahl techniques were also used for the assessment of nitrogen concentration in the meat samples. No statistically significant differences were found between the concentrations assessed by the three techniques. The results of this work demonstrate the applicability of PGNAA for the assessment of total protein in biological samples of 0.25-1.5 kg mass, such as a meat sample or the body of a small animal, even in vivo with an equivalent radiation dose of about 40 mSv.

  2. Large sample NAA of a pottery replica utilizing thermal neutron flux at AHWR critical facility and X-Z rotary scanning unit

    International Nuclear Information System (INIS)

    Acharya, R.; Dasari, K.B.; Pujari, P.K.; Swain, K.K.; Shinde, A.D.; Reddy, A.V.R.

    2013-01-01

Large sample neutron activation analysis (LSNAA) of a clay pottery replica from Peru was carried out using the low-neutron-flux graphite reflector position of the Advanced Heavy Water Reactor (AHWR) critical facility. This work was taken up as part of an inter-comparison exercise under the IAEA CRP on LSNAA of archaeological objects. The irradiated large size sample, placed on an X-Z rotary scanning unit, was assayed using a 40% relative efficiency HPGe detector. The k0-based internal monostandard NAA (IM-NAA) in conjunction with the in situ relative detection efficiency was used to calculate concentration ratios of 12 elements with respect to Na. Analyses of both small and large size samples were carried out to check homogeneity and to arrive at absolute concentrations. (author)

  3. Weighted Moments Estimators of the Parameters for the Extreme Value Distribution Based on the Multiply Type II Censored Sample

    Directory of Open Access Journals (Sweden)

    Jong-Wuu Wu

    2013-01-01

We propose weighted moments estimators (WMEs) of the location and scale parameters for the extreme value distribution based on the multiply type II censored sample. Simulated mean squared errors (MSEs) of the best linear unbiased estimator (BLUE) and exact MSEs of the WMEs are compared to study the behavior of the different estimation methods. The results show the best estimator among the WMEs and BLUE under different combinations of censoring schemes.

  4. An unbiased expression screen for synaptogenic proteins identifies the LRRTM protein family as synaptic organizers.

    Science.gov (United States)

    Linhoff, Michael W; Laurén, Juha; Cassidy, Robert M; Dobie, Frederick A; Takahashi, Hideto; Nygaard, Haakon B; Airaksinen, Matti S; Strittmatter, Stephen M; Craig, Ann Marie

    2009-03-12

    Delineating the molecular basis of synapse development is crucial for understanding brain function. Cocultures of neurons with transfected fibroblasts have demonstrated the synapse-promoting activity of candidate molecules. Here, we performed an unbiased expression screen for synaptogenic proteins in the coculture assay using custom-made cDNA libraries. Reisolation of NGL-3/LRRC4B and neuroligin-2 accounts for a minority of positive clones, indicating that current understanding of mammalian synaptogenic proteins is incomplete. We identify LRRTM1 as a transmembrane protein that induces presynaptic differentiation in contacting axons. All four LRRTM family members exhibit synaptogenic activity, LRRTMs localize to excitatory synapses, and artificially induced clustering of LRRTMs mediates postsynaptic differentiation. We generate LRRTM1(-/-) mice and reveal altered distribution of the vesicular glutamate transporter VGLUT1, confirming an in vivo synaptic function. These results suggest a prevalence of LRR domain proteins in trans-synaptic signaling and provide a cellular basis for the reported linkage of LRRTM1 to handedness and schizophrenia.

  5. Mutually orthogonal Latin squares from the inner products of vectors in mutually unbiased bases

    International Nuclear Information System (INIS)

    Hall, Joanne L; Rao, Asha

    2010-01-01

Mutually unbiased bases (MUBs) are important in quantum information theory. While constructions of complete sets of d + 1 MUBs in C^d are known when d is a prime power, it is unknown if such complete sets exist in non-prime-power dimensions. It has been conjectured that complete sets of MUBs only exist in C^d if a maximal set of mutually orthogonal Latin squares (MOLS) of side length d also exists. There are several constructions (Roy and Scott 2007 J. Math. Phys. 48 072110; Paterek, Dakic and Brukner 2009 Phys. Rev. A 79 012109) of complete sets of MUBs from specific types of MOLS, which use Galois fields to construct the vectors of the MUBs. In this paper, two known constructions of MUBs (Alltop 1980 IEEE Trans. Inf. Theory 26 350-354; Wootters and Fields 1989 Ann. Phys. 191 363-381), both of which use polynomials over a Galois field, are used to construct complete sets of MOLS in the odd prime case. The MOLS come from the inner products of pairs of vectors in the MUBs.
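    The Wootters-Fields construction referenced above can be written down directly for an odd prime dimension p: the computational basis plus p quadratic-phase bases gives p + 1 MUBs. The sketch below is the standard construction, not code from the paper, and verifies unbiasedness numerically:

    ```python
    import numpy as np

    # Standard odd-prime MUB construction: basis a has vectors
    # v_{a,b}[n] = omega**(a*n^2 + b*n) / sqrt(p), omega = exp(2*pi*i/p).
    p = 5
    omega = np.exp(2j * np.pi / p)
    n = np.arange(p)

    bases = [np.eye(p, dtype=complex)]                  # computational basis
    for a in range(p):
        B = np.array([[omega**((a * k * k + b * k) % p) for k in n]
                      for b in range(p)]) / np.sqrt(p)  # rows are basis vectors
        bases.append(B)

    # Unbiasedness: |<u|v>| = 1/sqrt(p) for vectors from different bases.
    for i in range(len(bases)):
        for j in range(i + 1, len(bases)):
            overlaps = np.abs(bases[i].conj() @ bases[j].T)
            assert np.allclose(overlaps, 1 / np.sqrt(p))
    print("verified", len(bases), "mutually unbiased bases for p =", p)
    ```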

  6. Evaluation of single and two-stage adaptive sampling designs for estimation of density and abundance of freshwater mussels in a large river

    Science.gov (United States)

    Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.

    2011-01-01

    Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.

  7. Charge reconstruction in large-area photomultipliers

    Science.gov (United States)

    Grassi, M.; Montuschi, M.; Baldoncini, M.; Mantovani, F.; Ricci, B.; Andronico, G.; Antonelli, V.; Bellato, M.; Bernieri, E.; Brigatti, A.; Brugnera, R.; Budano, A.; Buscemi, M.; Bussino, S.; Caruso, R.; Chiesa, D.; Corti, D.; Dal Corso, F.; Ding, X. F.; Dusini, S.; Fabbri, A.; Fiorentini, G.; Ford, R.; Formozov, A.; Galet, G.; Garfagnini, A.; Giammarchi, M.; Giaz, A.; Insolia, A.; Isocrate, R.; Lippi, I.; Longhitano, F.; Lo Presti, D.; Lombardi, P.; Marini, F.; Mari, S. M.; Martellini, C.; Meroni, E.; Mezzetto, M.; Miramonti, L.; Monforte, S.; Nastasi, M.; Ortica, F.; Paoloni, A.; Parmeggiano, S.; Pedretti, D.; Pelliccia, N.; Pompilio, R.; Previtali, E.; Ranucci, G.; Re, A. C.; Romani, A.; Saggese, P.; Salamanna, G.; Sawy, F. H.; Settanta, G.; Sisti, M.; Sirignano, C.; Spinetti, M.; Stanco, L.; Strati, V.; Verde, G.; Votano, L.

    2018-02-01

    Large-area PhotoMultiplier Tubes (PMT) allow to efficiently instrument Liquid Scintillator (LS) neutrino detectors, where large target masses are pivotal to compensate for neutrinos' extremely elusive nature. Depending on the detector light yield, several scintillation photons stemming from the same neutrino interaction are likely to hit a single PMT in a few tens/hundreds of nanoseconds, resulting in several photoelectrons (PEs) to pile-up at the PMT anode. In such scenario, the signal generated by each PE is entangled to the others, and an accurate PMT charge reconstruction becomes challenging. This manuscript describes an experimental method able to address the PMT charge reconstruction in the case of large PE pile-up, providing an unbiased charge estimator at the permille level up to 15 detected PEs. The method is based on a signal filtering technique (Wiener filter) which suppresses the noise due to both PMT and readout electronics, and on a Fourier-based deconvolution able to minimize the influence of signal distortions—such as an overshoot. The analysis of simulated PMT waveforms shows that the slope of a linear regression modeling the relation between reconstructed and true charge values improves from 0.769 ± 0.001 (without deconvolution) to 0.989 ± 0.001 (with deconvolution), where unitary slope implies perfect reconstruction. A C++ implementation of the charge reconstruction algorithm is available online at [1].
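    A minimal, self-contained sketch of the two ingredients named in the abstract, Wiener filtering and Fourier-based deconvolution. The pulse shape, noise level, and signal-to-noise ratio below are illustrative assumptions, not the detector's actual parameters or the authors' published algorithm:

    ```python
    import numpy as np

    # Recover a photoelectron (PE) arrival sequence from a waveform modeled
    # as (single-PE template convolved with PE spikes) plus white noise.
    rng = np.random.default_rng(1)
    n = 1024
    t = np.arange(n)
    template = np.exp(-t / 20.0) - np.exp(-t / 4.0)       # assumed single-PE pulse
    truth = np.zeros(n)
    truth[[100, 115, 130, 400]] = 1.0                     # piled-up PEs
    waveform = np.convolve(truth, template)[:n] + rng.normal(0, 0.02, n)

    H = np.fft.rfft(template, n)
    snr = 1e3                                             # assumed signal/noise power ratio
    wiener = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)    # Wiener deconvolution kernel
    recovered = np.fft.irfft(np.fft.rfft(waveform) * wiener, n)
    print("reconstructed charge:", recovered.sum(), "true charge:", truth.sum())
    ```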

  8. A Volume-Limited Sample of L and T Dwarfs Defined by Parallaxes

    Science.gov (United States)

    Best, William M. J.; Liu, Michael C.; Magnier, Eugene; Dupuy, Trent

    2018-01-01

    Volume-limited samples are the gold standard for stellar population studies, as they enable unbiased measurements of space densities and luminosity functions. Parallaxes are the most direct measures of distance and are therefore essential for defining high-confidence volume limited samples. Previous efforts to model the local brown dwarf population were hampered by samples based on a small number of parallaxes. We are using UKIRT/WFCAM to conduct the largest near-infrared program to date to measure parallaxes and proper motions of L and T dwarfs. For the past 3+ years we have monitored over 350 targets, ≈90% of which are too faint to be observed by Gaia. We present preliminary results from our observations. Our program more than doubles the number of known L and T dwarf parallaxes, defining a volume-limited sample of ≈400 L0-T6 dwarfs out to 25 parsecs, the first L and T dwarf sample of this size and depth based entirely on parallaxes. Our sample will combine with the upcoming stellar census from Gaia DR2 parallaxes to form a complete volume-limited sample of nearby stars and brown dwarfs.

  9. "Best Practices in Using Large, Complex Samples: The Importance of Using Appropriate Weights and Design Effect Compensation"

    Directory of Open Access Journals (Sweden)

    Jason W. Osborne

    2011-09-01

Large surveys often use probability sampling in order to obtain representative samples, and these data sets are valuable tools for researchers in all areas of science. Yet many researchers are not formally prepared to appropriately utilize these resources. Indeed, users of one popular dataset were generally found not to have modeled the analyses to take account of the complex sample (Johnson & Elliott, 1998), even when publishing in highly-regarded journals. It is well known that failure to appropriately model the complex sample can substantially bias the results of the analysis. Examples presented in this paper highlight the risk of errors of inference and mis-estimation of parameters that arise from failure to analyze these data sets appropriately.
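    To make the point concrete, here is a minimal sketch of weighting and design-effect compensation. The design effect value below is an assumed illustration; in practice it is derived from the survey's design variables (strata, clusters, weights):

    ```python
    import numpy as np

    # Weighted point estimate plus a design-effect-corrected standard error.
    rng = np.random.default_rng(2)
    y = rng.normal(50, 10, 5000)              # outcome
    w = rng.uniform(0.5, 2.0, 5000)           # sampling weights

    mean_w = np.average(y, weights=w)
    se_srs = y.std(ddof=1) / np.sqrt(len(y))  # naive simple-random-sampling SE
    deff = 1.8                                # assumed design effect
    se_design = se_srs * np.sqrt(deff)        # effective n is n/deff
    print(f"weighted mean={mean_w:.2f}, naive SE={se_srs:.4f}, corrected SE={se_design:.4f}")
    ```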

  10. Large scale sample management and data analysis via MIRACLE

    DEFF Research Database (Denmark)

    Block, Ines; List, Markus; Pedersen, Marlene Lemvig

Reverse-phase protein arrays (RPPAs) allow sensitive quantification of relative protein abundance in thousands of samples in parallel. In recent years the technology has advanced on the basis of improved methods and protocols concerning sample preparation and printing, antibody selection, optimization of staining conditions and the mode of signal analysis. However, sample management and data analysis still pose challenges because of the high number of samples, sample dilutions, customized array patterns, and the various programs necessary for array construction and data processing. We developed a comprehensive and user-friendly web application called MIRACLE (MIcroarray R-based Analysis of Complex Lysate Experiments), which bridges the gap between sample management and array analysis by conveniently keeping track of the sample information from lysate preparation, through array construction and signal...

  11. Examining gray matter structure associated with academic performance in a large sample of Chinese high school students

    OpenAIRE

    Song Wang; Ming Zhou; Taolin Chen; Xun Yang; Guangxiang Chen; Meiyun Wang; Qiyong Gong

    2017-01-01

Achievement in school is crucial for students to be able to pursue successful careers and lead happy lives in the future. Although many psychological attributes have been found to be associated with academic performance, the neural substrates of academic performance remain largely unknown. Here, we investigated the relationship between brain structure and academic performance in a large sample of high school students via structural magnetic resonance imaging (S-MRI) using voxel-based morphometry...

  12. Best linear unbiased prediction of genomic breeding values using a trait-specific marker-derived relationship matrix.

    Directory of Open Access Journals (Sweden)

    Zhe Zhang

    2010-09-01

With the availability of high density whole-genome single nucleotide polymorphism chips, genomic selection has become a promising method to estimate genetic merit with potentially high accuracy for animal, plant and aquaculture species of economic importance. With markers covering the entire genome, the genetic merit of genotyped individuals can be predicted directly within the framework of mixed model equations, by using a matrix of relationships among individuals that is derived from the markers. Here we extend that approach by deriving a marker-based relationship matrix specifically for the trait of interest. In the framework of mixed model equations, a new best linear unbiased prediction (BLUP) method including a trait-specific relationship matrix (TA) was presented and termed TABLUP. The TA matrix was constructed on the basis of marker genotypes and their weights in relation to the trait of interest. A simulation study with 1,000 individuals as the training population and five successive generations as the candidate population was carried out to validate the proposed method. The proposed TABLUP method outperformed the ridge regression BLUP (RRBLUP) and BLUP with a realized relationship matrix (GBLUP). It performed slightly worse than BayesB, with an accuracy of 0.79 in the standard scenario. The proposed TABLUP method is an improvement of the RRBLUP and GBLUP methods. It might be equivalent to the BayesB method, but it has additional benefits, like the calculation of accuracies for individual breeding values. The results also showed that the TA-matrix performs better in predicting ability than the classical numerator relationship matrix and the realized relationship matrix, which are derived solely from pedigree or markers without regard to the trait. This is because the TA-matrix not only accounts for the Mendelian sampling term, but also puts greater emphasis on those markers that explain more of the genetic variance in the trait.
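    A rough sketch of the idea of a trait-specific relationship matrix: weight each marker by a function of its estimated effect before forming the relationship matrix. The weighting rule and all names below are illustrative assumptions, not the authors' exact implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_ind, n_mrk = 200, 1000
    M = rng.integers(0, 3, size=(n_ind, n_mrk)).astype(float)  # genotypes coded 0/1/2
    p = M.mean(axis=0) / 2.0
    Z = M - 2 * p                                              # centered genotypes

    u_hat = rng.normal(0, 1, n_mrk)                            # estimated marker effects
    w = u_hat**2 / (u_hat**2).sum()                            # trait-specific marker weights
    TA = (Z * w) @ Z.T                                         # weighted relationship matrix
    G = (Z @ Z.T) / (2 * (p * (1 - p)).sum())                  # standard (unweighted) G

    print(TA.shape, G.shape)
    ```

    In the mixed model equations, the trait-specific matrix would then take the place of the standard realized relationship matrix G.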

  13. Crowdsourcing for large-scale mosquito (Diptera: Culicidae) sampling

    Science.gov (United States)

    Sampling a cosmopolitan mosquito (Diptera: Culicidae) species throughout its range is logistically challenging and extremely resource intensive. Mosquito control programmes and regional networks operate at the local level and often conduct sampling activities across much of North America. A method f...

  14. An Intrinsic Algorithm for Parallel Poisson Disk Sampling on Arbitrary Surfaces.

    Science.gov (United States)

    Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying

    2013-03-08

Poisson disk sampling plays an important role in a variety of visual computing applications, owing to its useful statistical distribution properties and the absence of aliasing artifacts. While many effective techniques have been proposed to generate Poisson disk distributions in Euclidean space, relatively little work has been reported for the surface counterpart. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. We propose a new technique for parallelizing the dart throwing. Rather than the conventional approaches that explicitly partition the spatial domain to generate the samples in parallel, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. It is worth noting that our algorithm is accurate, as the generated Poisson disks are uniformly and randomly distributed without bias. Our method is intrinsic in that all the computations are based on the intrinsic metric and are independent of the embedding space. This intrinsic feature allows us to generate Poisson disk distributions on arbitrary surfaces. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
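    The priority-based conflict resolution can be emulated serially: visiting candidates in decreasing priority and accepting any candidate that conflicts with no previously accepted one yields the sample set the parallel rule converges to. A small sketch (illustrative parameters, the Euclidean plane rather than a surface, and a brute-force conflict test):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    r = 0.05                                   # disk radius
    cand = rng.random((4000, 2))               # candidate points in the unit square
    prio = rng.permutation(len(cand))          # random, unique priorities

    accepted = []
    for i in np.argsort(-prio):                # visit in decreasing priority
        pt = cand[i]
        if all(np.linalg.norm(pt - q) >= r for q in accepted):
            accepted.append(pt)
    print(len(accepted), "samples accepted")
    ```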

  15. SyPRID sampler: A large-volume, high-resolution, autonomous, deep-ocean precision plankton sampling system

    Science.gov (United States)

    Billings, Andrew; Kaiser, Carl; Young, Craig M.; Hiebert, Laurel S.; Cole, Eli; Wagner, Jamie K. S.; Van Dover, Cindy Lee

    2017-03-01

    The current standard for large-volume (thousands of cubic meters) zooplankton sampling in the deep sea is the MOCNESS, a system of multiple opening-closing nets, typically lowered to within 50 m of the seabed and towed obliquely to the surface to obtain low-spatial-resolution samples that integrate across 10 s of meters of water depth. The SyPRID (Sentry Precision Robotic Impeller Driven) sampler is an innovative, deep-rated (6000 m) plankton sampler that partners with the Sentry Autonomous Underwater Vehicle (AUV) to obtain paired, large-volume plankton samples at specified depths and survey lines to within 1.5 m of the seabed and with simultaneous collection of sensor data. SyPRID uses a perforated Ultra-High-Molecular-Weight (UHMW) plastic tube to support a fine mesh net within an outer carbon composite tube (tube-within-a-tube design), with an axial flow pump located aft of the capture filter. The pump facilitates flow through the system and reduces or possibly eliminates the bow wave at the mouth opening. The cod end, a hollow truncated cone, is also made of UHMW plastic and includes a collection volume designed to provide an area where zooplankton can collect, out of the high flow region. SyPRID attaches as a saddle-pack to the Sentry vehicle. Sentry itself is configured with a flight control system that enables autonomous survey paths to low altitudes. In its verification deployment at the Blake Ridge Seep (2160 m) on the US Atlantic Margin, SyPRID was operated for 6 h at an altitude of 5 m. It recovered plankton samples, including delicate living larvae, from the near-bottom stratum that is seldom sampled by a typical MOCNESS tow. The prototype SyPRID and its next generations will enable studies of plankton or other particulate distributions associated with localized physico-chemical strata in the water column or above patchy habitats on the seafloor.

  16. Multifractals embedded in short time series: An unbiased estimation of probability moment

    Science.gov (United States)

    Qiu, Lu; Yang, Tianguang; Yin, Yanhua; Gu, Changgui; Yang, Huijie

    2016-12-01

An exact estimation of probability moments is the basis for several essential concepts, such as multifractals, the Tsallis entropy, and the transfer entropy. By means of approximation theory we propose a new method called factorial-moment-based estimation of probability moments. Theoretical prediction and computational results show that it can provide an unbiased estimation of the probability moments of continuous order. Calculations on a probability redistribution model verify that it can extract multifractal behaviors exactly from several hundred recordings. Its power in monitoring the evolution of scaling behaviors is exemplified by two empirical cases, i.e., the gait time series for fast, normal, and slow trials of a healthy volunteer, and the closing price series for the Shanghai stock market. Using short time series of several hundred points, a comparison with the well-established tools shows that its performance has significant advantages over the other methods. The factorial-moment-based estimation can correctly evaluate the scaling behaviors in a scale range about three generations wider than the multifractal detrended fluctuation analysis and the basic estimation. The estimation of the partition function given by the wavelet transform modulus maxima has unacceptable fluctuations. Besides the scaling invariance focused on in the present paper, the proposed factorial moment of continuous order can find various uses, such as finding nonextensive behaviors of a complex system and reconstructing the causality relationship network between elements of a complex system.
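    The integer-order case underlying the factorial-moment idea is classical: for multinomial counts, falling factorials give an unbiased estimator of the probability moment sum_i p_i^q, since E[n_i(n_i-1)...(n_i-q+1)] = N(N-1)...(N-q+1) p_i^q. A sketch of that base case (the paper's extension to continuous order is not reproduced here):

    ```python
    import numpy as np
    from scipy.special import poch  # rising factorial; falling via poch(n-q+1, q)

    def falling(n, q):
        # falling factorial n*(n-1)*...*(n-q+1)
        return poch(n - q + 1, q)

    rng = np.random.default_rng(6)
    p = np.array([0.5, 0.3, 0.2])
    N, q = 500, 2
    counts = rng.multinomial(N, p)

    # Unbiased estimator of sum_i p_i**q from the counts:
    est = falling(counts.astype(float), q).sum() / falling(float(N), q)
    print("estimate:", est, "true:", (p**q).sum())
    ```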

  17. Development of an unbiased statistical method for the analysis of unigenic evolution

    Directory of Open Access Journals (Sweden)

    Shilton Brian H

    2006-03-01

Background: Unigenic evolution is a powerful genetic strategy involving random mutagenesis of a single gene product to delineate functionally important domains of a protein. This method involves selection of variants of the protein which retain function, followed by statistical analysis comparing expected and observed mutation frequencies of each residue. Resultant mutability indices for each residue are averaged across a specified window of codons to identify hypomutable regions of the protein. As originally described, the effect of changes to the length of this averaging window was not fully elucidated. In addition, it was unclear when sufficient functional variants had been examined to conclude that residues conserved in all variants have important functional roles. Results: We demonstrate that the length of the averaging window dramatically affects the identification of individual hypomutable regions and the delineation of region boundaries. Accordingly, we devised a region-independent chi-square analysis that eliminates the loss of information incurred during window averaging and removes the arbitrary assignment of window length. We also present a method to estimate the probability that conserved residues have not been mutated simply by chance. In addition, we describe an improved estimation of the expected mutation frequency. Conclusion: Overall, these methods significantly extend the analysis of unigenic evolution data over existing methods to allow comprehensive, unbiased identification of domains and possibly even individual residues that are essential for protein function.

  18. Method for the radioimmunoassay of large numbers of samples using quantitative autoradiography of multiple-well plates

    International Nuclear Information System (INIS)

    Luner, S.J.

    1978-01-01

    A double antibody assay for thyroxine using 125 I as label was carried out on 10-μl samples in Microtiter V-plates. After an additional centrifugation to compact the precipitates the plates were placed in contact with x-ray film overnight and the spots were scanned. In the 20 to 160 ng/ml range the average coefficient of variation for thyroxine concentration determined on the basis of film spot optical density was 11 percent compared to 4.8 percent obtained using a standard gamma counter. Eliminating the need for each sample to spend on the order of 1 min in a crystal well detector makes the method convenient for large-scale applications involving more than 3000 samples per day

  19. ISOTROPIC LUMINOSITY INDICATORS IN A COMPLETE AGN SAMPLE

    International Nuclear Information System (INIS)

    Diamond-Stanic, Aleksandar M.; Rieke, George H.; Rigby, Jane R.

    2009-01-01

    The [O IV] λ25.89 μm line has been shown to be an accurate indicator of active galactic nucleus (AGN) intrinsic luminosity in that it correlates well with hard (10-200 keV) X-ray emission. We present measurements of [O IV] for 89 Seyfert galaxies from the unbiased revised Shapley-Ames (RSA) sample. The [O IV] luminosity distributions of obscured and unobscured Seyferts are indistinguishable, indicating that their intrinsic AGN luminosities are quite similar and that the RSA sample is well suited for tests of the unified model. In addition, we analyze several commonly used proxies for AGN luminosity, including [O III] λ5007 A, 6 cm radio, and 2-10 keV X-ray emission. We find that the radio luminosity distributions of obscured and unobscured AGNs show no significant difference, indicating that radio luminosity is a useful isotropic luminosity indicator. However, the observed [O III] and 2-10 keV luminosities are systematically smaller for obscured Seyferts, indicating that they are not emitted isotropically.

  20. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range.

    Science.gov (United States)

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-12-19

    In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spread sheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different
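    As commonly cited, the estimators proposed here take the form below; the exact constants are recalled from the literature and should be checked against the paper before use:

    ```python
    from scipy.stats import norm

    # Sketch of estimating the sample mean and SD from summary statistics.
    def mean_sd_from_median_range(a, m, b, n):
        # inputs: minimum a, median m, maximum b, sample size n
        mean = (a + 2 * m + b) / 4
        sd = (b - a) / (2 * norm.ppf((n - 0.375) / (n + 0.25)))
        return mean, sd

    def mean_sd_from_median_iqr(q1, m, q3, n):
        # inputs: first quartile q1, median m, third quartile q3, sample size n
        mean = (q1 + m + q3) / 3
        sd = (q3 - q1) / (2 * norm.ppf((0.75 * n - 0.125) / (n + 0.25)))
        return mean, sd

    print(mean_sd_from_median_range(10, 20, 34, 50))
    print(mean_sd_from_median_iqr(15, 20, 26, 50))
    ```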

  1. Using machine learning to disentangle homonyms in large text corpora.

    Science.gov (United States)

    Roll, Uri; Correia, Ricardo A; Berger-Tal, Oded

    2018-06-01

    Systematic reviews are an increasingly popular decision-making tool that provides an unbiased summary of evidence to support conservation action. These reviews bridge the gap between researchers and managers by presenting a comprehensive overview of all studies relating to a particular topic and identify specifically where and under which conditions an effect is present. However, several technical challenges can severely hinder the feasibility and applicability of systematic reviews, for example, homonyms (terms that share spelling but differ in meaning). Homonyms add noise to search results and cannot be easily identified or removed. We developed a semiautomated approach that can aid in the classification of homonyms among narratives. We used a combination of automated content analysis and artificial neural networks to quickly and accurately sift through large corpora of academic texts and classify them to distinct topics. As an example, we explored the use of the word reintroduction in academic texts. Reintroduction is used within the conservation context to indicate the release of organisms to their former native habitat; however, a Web of Science search for this word returned thousands of publications in which the term has other meanings and contexts. Using our method, we automatically classified a sample of 3000 of these publications with over 99% accuracy, relative to a manual classification. Our approach can be used easily with other homonyms and can greatly facilitate systematic reviews or similar work in which homonyms hinder the harnessing of large text corpora. Beyond homonyms we see great promise in combining automated content analysis and machine-learning methods to handle and screen big data for relevant information in conservation science. © 2017 Society for Conservation Biology.
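    A toy sketch of the combination the abstract describes, content features plus a small neural network classifier, using scikit-learn; the example texts and labels are invented:

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    # Separate senses of a homonym such as "reintroduction" from short texts.
    texts = ["species released to former native habitat",
             "reintroduction of the bill to parliament",
             "captive-bred wolves returned to the wild",
             "reintroduction of the tax measure was debated"]
    labels = [1, 0, 1, 0]   # 1 = conservation sense, 0 = other sense

    clf = make_pipeline(
        TfidfVectorizer(),
        MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
    clf.fit(texts, labels)
    print(clf.predict(["lynx reintroduction into the national park"]))
    ```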

  2. Unbiased and non-supervised learning methods for disruption prediction at JET

    International Nuclear Information System (INIS)

    Murari, A.; Vega, J.; Ratta, G.A.; Vagliasindi, G.; Johnson, M.F.; Hong, S.H.

    2009-01-01

    The importance of predicting the occurrence of disruptions is going to increase significantly in the next generation of tokamak devices. The expected energy content of ITER plasmas, for example, is such that disruptions could have a significant detrimental impact on various parts of the device, ranging from erosion of plasma facing components to structural damage. Early detection of disruptions is therefore needed with ever-increasing urgency. In this paper, the results of a series of methods to predict disruptions at JET are reported. The main objective of the investigation is to determine how early before a disruption acceptable predictions can be made on the basis of the raw data, keeping the number of 'ad hoc' hypotheses to a minimum. The chosen learning techniques therefore share the characteristic of requiring a minimum number of assumptions. Classification and Regression Trees (CART) is a supervised but completely unbiased and nonlinear method, since it simply constructs the best classification tree by working directly on the input data. A series of unsupervised techniques, mainly K-means and hierarchical clustering, have also been tested, to investigate to what extent they can autonomously distinguish between disruptive and non-disruptive groups of discharges. All these independent methods indicate that, in general, prediction with a success rate above 80% can be achieved no earlier than 180 ms before the disruption. The agreement between various completely independent methods increases the confidence in the results, which are also confirmed by a visual inspection of the data performed with pseudo Grand Tour algorithms.
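
    For readers unfamiliar with the two families of methods named above, the sketch below contrasts them on invented stand-in features; the feature values, class structure, and thresholds are hypothetical and bear no relation to the actual JET diagnostics.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        # Hypothetical two-feature summaries of discharges (invented placeholders)
        X_safe = rng.normal([0.2, 1.0], 0.1, size=(200, 2))
        X_disr = rng.normal([0.8, 1.6], 0.2, size=(200, 2))
        X = np.vstack([X_safe, X_disr])
        y = np.array([0] * 200 + [1] * 200)  # 0 = non-disruptive, 1 = disruptive

        # Supervised: CART builds a tree directly on the raw feature values
        tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
        print("CART training accuracy:", tree.score(X, y))

        # Unsupervised: K-means groups discharges without using the labels
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
        agreement = max((km.labels_ == y).mean(), (km.labels_ != y).mean())
        print("cluster/label agreement:", agreement)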

  3. Ultrasensitive multiplex optical quantification of bacteria in large samples of biofluids

    Science.gov (United States)

    Pazos-Perez, Nicolas; Pazos, Elena; Catala, Carme; Mir-Simon, Bernat; Gómez-de Pedro, Sara; Sagales, Juan; Villanueva, Carlos; Vila, Jordi; Soriano, Alex; García de Abajo, F. Javier; Alvarez-Puebla, Ramon A.

    2016-01-01

    Efficient treatments in bacterial infections require the fast and accurate recognition of pathogens, with concentrations as low as one per milliliter in the case of septicemia. Detecting and quantifying bacteria in such low concentrations is challenging and typically demands cultures of large samples of blood (~1 milliliter) extending over 24–72 hours. This delay seriously compromises the health of patients. Here we demonstrate a fast microorganism optical detection system for the exhaustive identification and quantification of pathogens in volumes of biofluids with clinical relevance (~1 milliliter) in minutes. We drive each type of bacteria to accumulate antibody functionalized SERS-labelled silver nanoparticles. Particle aggregation on the bacteria membranes renders dense arrays of inter-particle gaps in which the Raman signal is exponentially amplified by several orders of magnitude relative to the dispersed particles. This enables a multiplex identification of the microorganisms through the molecule-specific spectral fingerprints. PMID:27364357

  4. Similar brain activation during false belief tasks in a large sample of adults with and without autism.

    Science.gov (United States)

    Dufour, Nicholas; Redcay, Elizabeth; Young, Liane; Mavros, Penelope L; Moran, Joseph M; Triantafyllou, Christina; Gabrieli, John D E; Saxe, Rebecca

    2013-01-01

    Reading about another person's beliefs engages 'Theory of Mind' processes and elicits highly reliable brain activation across individuals and experimental paradigms. Using functional magnetic resonance imaging, we examined activation during a story task designed to elicit Theory of Mind processing in a very large sample of neurotypical (N = 462) individuals, and a group of high-functioning individuals with autism spectrum disorders (N = 31), using both region-of-interest and whole-brain analyses. This large sample allowed us to investigate group differences in brain activation to Theory of Mind tasks with unusually high sensitivity. There were no differences between neurotypical participants and those diagnosed with autism spectrum disorder. These results imply that the social cognitive impairments typical of autism spectrum disorder can occur without measurable changes in the size, location or response magnitude of activity during explicit Theory of Mind tasks administered to adults.

  5. Similar brain activation during false belief tasks in a large sample of adults with and without autism.

    Directory of Open Access Journals (Sweden)

    Nicholas Dufour

    Full Text Available Reading about another person's beliefs engages 'Theory of Mind' processes and elicits highly reliable brain activation across individuals and experimental paradigms. Using functional magnetic resonance imaging, we examined activation during a story task designed to elicit Theory of Mind processing in a very large sample of neurotypical (N = 462) individuals, and a group of high-functioning individuals with autism spectrum disorders (N = 31), using both region-of-interest and whole-brain analyses. This large sample allowed us to investigate group differences in brain activation to Theory of Mind tasks with unusually high sensitivity. There were no differences between neurotypical participants and those diagnosed with autism spectrum disorder. These results imply that the social cognitive impairments typical of autism spectrum disorder can occur without measurable changes in the size, location or response magnitude of activity during explicit Theory of Mind tasks administered to adults.

  6. Presence and significant determinants of cognitive impairment in a large sample of patients with multiple sclerosis.

    Directory of Open Access Journals (Sweden)

    Martina Borghi

    Full Text Available OBJECTIVES: To investigate the presence and the nature of cognitive impairment in a large sample of patients with Multiple Sclerosis (MS), and to identify clinical and demographic determinants of cognitive impairment in MS. METHODS: 303 patients with MS and 279 healthy controls were administered the Brief Repeatable Battery of Neuropsychological tests (BRB-N); measures of pre-morbid verbal competence and neuropsychiatric measures were also administered. RESULTS: Patients and healthy controls were matched for age, gender, education and pre-morbid verbal Intelligence Quotient. Patients presenting with cognitive impairment numbered 108/303 (35.6%). In the overall group of participants, the significant predictors of the most sensitive BRB-N scores were: presence of MS, age, education, and vocabulary. The significant predictors when considering MS patients only were: course of MS, age, education, vocabulary, and depression. Using logistic regression analyses, significant determinants of the presence of cognitive impairment in relapsing-remitting MS patients were: duration of illness (OR = 1.053, 95% CI = 1.010-1.097, p = 0.015), Expanded Disability Status Scale score (OR = 1.247, 95% CI = 1.024-1.517, p = 0.028), and vocabulary (OR = 0.960, 95% CI = 0.936-0.984, p = 0.001), while in the smaller group of progressive MS patients these predictors did not play a significant role in determining the cognitive outcome. CONCLUSIONS: Our results corroborate the evidence about the presence and the nature of cognitive impairment in a large sample of patients with MS. Furthermore, our findings identify significant clinical and demographic determinants of cognitive impairment in a large sample of MS patients for the first time. Implications for further research and clinical practice are discussed.

  7. Reducing bias in population and landscape genetic inferences: the effects of sampling related individuals and multiple life stages.

    Science.gov (United States)

    Peterman, William; Brocato, Emily R; Semlitsch, Raymond D; Eggert, Lori S

    2016-01-01

    In population or landscape genetics studies, an unbiased sampling scheme is essential for generating accurate results, but logistics may lead to deviations from the sample design. Such deviations may come in the form of sampling multiple life stages. Presently, it is largely unknown what effect sampling different life stages can have on population or landscape genetic inference, or how mixing life stages can affect the parameters being measured. Additionally, the removal of siblings from a data set is considered best-practice, but direct comparisons of inferences made with and without siblings are limited. In this study, we sampled embryos, larvae, and adult Ambystoma maculatum from five ponds in Missouri, and analyzed them at 15 microsatellite loci. We calculated allelic richness, heterozygosity and effective population sizes for each life stage at each pond and tested for genetic differentiation (FST and DC) and isolation-by-distance (IBD) among ponds. We tested for differences in each of these measures between life stages, and in a pooled population of all life stages. All calculations were done with and without sibling pairs to assess the effect of sibling removal. We also assessed the effect of reducing the number of microsatellites used to make inference. No statistically significant differences were found among ponds or life stages for any of the population genetic measures, but patterns of IBD differed among life stages. There was significant IBD when using adult samples, but tests using embryos, larvae, or a combination of the three life stages were not significant. We found that increasing the ratio of larval or embryo samples in the analysis of genetic distance weakened the IBD relationship, and when using DC, the IBD was no longer significant when larvae and embryos exceeded 60% of the population sample. Further, power to detect an IBD relationship was reduced when fewer microsatellites were used in the analysis.

  8. Reducing bias in population and landscape genetic inferences: the effects of sampling related individuals and multiple life stages

    Directory of Open Access Journals (Sweden)

    William Peterman

    2016-03-01

    Full Text Available In population or landscape genetics studies, an unbiased sampling scheme is essential for generating accurate results, but logistics may lead to deviations from the sample design. Such deviations may come in the form of sampling multiple life stages. Presently, it is largely unknown what effect sampling different life stages can have on population or landscape genetic inference, or how mixing life stages can affect the parameters being measured. Additionally, the removal of siblings from a data set is considered best-practice, but direct comparisons of inferences made with and without siblings are limited. In this study, we sampled embryos, larvae, and adult Ambystoma maculatum from five ponds in Missouri, and analyzed them at 15 microsatellite loci. We calculated allelic richness, heterozygosity and effective population sizes for each life stage at each pond and tested for genetic differentiation (FST and DC) and isolation-by-distance (IBD) among ponds. We tested for differences in each of these measures between life stages, and in a pooled population of all life stages. All calculations were done with and without sibling pairs to assess the effect of sibling removal. We also assessed the effect of reducing the number of microsatellites used to make inference. No statistically significant differences were found among ponds or life stages for any of the population genetic measures, but patterns of IBD differed among life stages. There was significant IBD when using adult samples, but tests using embryos, larvae, or a combination of the three life stages were not significant. We found that increasing the ratio of larval or embryo samples in the analysis of genetic distance weakened the IBD relationship, and when using DC, the IBD was no longer significant when larvae and embryos exceeded 60% of the population sample. Further, power to detect an IBD relationship was reduced when fewer microsatellites were used in the analysis.

  9. A large-scale cryoelectronic system for biological sample banking

    Science.gov (United States)

    Shirley, Stephen G.; Durst, Christopher H. P.; Fuchs, Christian C.; Zimmermann, Heiko; Ihmig, Frank R.

    2009-11-01

    We describe a polymorphic electronic infrastructure for managing biological samples stored over liquid nitrogen. As part of this system we have developed new cryocontainers and carrier plates attached to Flash memory chips, so that a redundant and portable set of data accompanies each sample. Our experimental investigations show that basic Flash operation and endurance are adequate for the application down to liquid nitrogen temperatures. This identification technology can provide the best sample identification, documentation and tracking, bringing added value to each sample. The first application of the system is in a worldwide collaborative research effort towards the production of an AIDS vaccine. The functionality and versatility of the system can lead to an essential optimization of sample and data exchange for global clinical studies.

  10. Detecting superior face recognition skills in a large sample of young British adults

    Directory of Open Access Journals (Sweden)

    Anna Katarzyna Bobak

    2016-09-01

    Full Text Available The Cambridge Face Memory Test Long Form (CFMT+) and Cambridge Face Perception Test (CFPT) are typically used to assess the face processing ability of individuals who believe they have superior face recognition skills. Previous large-scale studies have presented norms for the CFPT but not the CFMT+. However, previous research has also highlighted the necessity for establishing country-specific norms for these tests, indicating that norming data is required for both tests using young British adults. The current study addressed this issue in 254 British participants. In addition to providing the first norm for performance on the CFMT+ in any large sample, we also report the first UK-specific cut-off for superior face recognition on the CFPT. Further analyses identified a small advantage for females on both tests, and only small associations between objective face recognition skills and self-report measures. A secondary aim of the study was to examine the relationship between trait or social anxiety and face processing ability, and no associations were noted. The implications of these findings for the classification of super-recognisers are discussed.

  11. Unraveling cognitive traits using the Morris water maze unbiased strategy classification (MUST-C) algorithm.

    Science.gov (United States)

    Illouz, Tomer; Madar, Ravit; Louzon, Yoram; Griffioen, Kathleen J; Okun, Eitan

    2016-02-01

    The assessment of spatial cognitive learning in rodents is a central approach in neuroscience, as it enables one to assess and quantify the effects of treatments and genetic manipulations from a broad perspective. Although the Morris water maze (MWM) is a well-validated paradigm for testing spatial learning abilities, manual categorization of performance in the MWM into behavioral strategies is subject to individual interpretation, and thus to biases. Here we offer a support vector machine (SVM)-based, automated MWM unbiased strategy classification (MUST-C) algorithm, as well as a cognitive score scale. This model was examined and validated by analyzing data obtained from five MWM experiments with changing platform sizes, revealing a limitation in the spatial capacity of the hippocampus. We have further employed this algorithm to extract novel mechanistic insights on the impact of members of the Toll-like receptor pathway on cognitive spatial learning and memory. The MUST-C algorithm can greatly benefit MWM users as it provides a standardized method of strategy classification as well as a cognitive scoring scale, which cannot be derived from typical analysis of MWM data. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
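
    As a hedged illustration of the classification step only (not the published MUST-C implementation), an SVM can be trained on per-trial swim-path features to assign strategy labels; every feature and label below is an invented stand-in.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(2)
        # Hypothetical per-trial features (e.g., path length, mean distance to
        # platform, heading error) -- invented stand-ins for trajectory metrics
        X = rng.normal(size=(120, 3))
        y = rng.integers(0, 3, size=120)  # e.g., 0=thigmotaxis, 1=scanning, 2=direct

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        clf.fit(X[:100], y[:100])
        print("held-out accuracy on toy data:", clf.score(X[100:], y[100:]))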

  12. Spectroscopic Properties of Star-Forming Host Galaxies and Type Ia Supernova Hubble Residuals in a Nearly Unbiased Sample

    Energy Technology Data Exchange (ETDEWEB)

    D'Andrea, Chris B. [Univ. of Pennsylvania, Philadelphia, PA (United States); et al.

    2011-12-20

    We examine the correlation between supernova host galaxy properties and their residuals on the Hubble diagram. We use supernovae discovered during the Sloan Digital Sky Survey II - Supernova Survey, and focus on objects at a redshift of z < 0.15, where the selection effects of the survey are known to yield a complete Type Ia supernova sample. To minimize the bias in our analysis with respect to measured host-galaxy properties, spectra were obtained for nearly all hosts, spanning a range in magnitude of -23 < M_r < -17. In contrast to previous works that use photometric estimates of host mass as a proxy for global metallicity, we analyze host-galaxy spectra to obtain gas-phase metallicities and star-formation rates from host galaxies with active star formation. From a final sample of ~40 emission-line galaxies, we find that light-curve-corrected Type Ia supernovae are ~0.1 magnitudes brighter in high-metallicity hosts than in low-metallicity hosts. We also find a significant (>3σ) correlation between the Hubble residuals of Type Ia supernovae and the specific star-formation rate of the host galaxy. We comment on the importance of supernova/host-galaxy correlations as a source of systematic bias in future deep supernova surveys.

  13. Sampling for stereology in lungs

    Directory of Open Access Journals (Sweden)

    J. R. Nyengaard

    2006-12-01

    Full Text Available The present article reviews the relevant stereological estimators for obtaining reliable quantitative structural data from the lungs. Stereological sampling achieves reliable, quantitative information either about the whole lung or complete lobes, whilst minimising the workload. Studies have used systematic random sampling, which has fixed and constant sampling probabilities on all blocks, sections and fields of view. For an estimation of total lung or lobe volume, the Cavalieri principle can be used, but it is not useful in estimating individual cell volume due to various effects from over- or underprojection. If the number of certain structures is required, two methods can be used: the disector and the fractionator. The disector method is a three-dimensional stereological probe for sampling objects according to their number. However, it may be affected by tissue deformation and, therefore, the fractionator method is often the preferred sampling principle. In this method, a known and predetermined fraction of an object is sampled in one or more steps, with the final step estimating the number. Both methods can be performed in a physical and optical manner, therefore enabling cells and larger lung structure numbers (e.g. number of alveoli) to be estimated. Some estimators also require randomisation of orientation, so that all directions have an equal chance of being chosen. Using such isotropic sections, surface area, length, and diameter can be estimated on a Cavalieri set of sections. Stereology can also illustrate the potential for transport between two compartments by analysing the barrier width. Estimating the individual volume of cells can be achieved by local stereology using a two-step procedure that first samples lung cells using the disector and then introduces individual volume estimation of the sampled cells. The coefficient of error of most unbiased stereological estimators is a combination of variance from blocks, sections and fields of view.
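
    The fractionator arithmetic described above reduces to scaling a raw count by the inverse of every sampling fraction applied along the way. A minimal sketch, with fraction names and example values that are ours, not the review's:

        def fractionator_total(counted, section_fraction, area_fraction, thickness_fraction):
            # Fractionator estimate: total number = raw count multiplied by the
            # inverse of each known sampling fraction used during subsampling.
            return counted / (section_fraction * area_fraction * thickness_fraction)

        # e.g., 150 alveoli counted after keeping 1/10 of sections,
        # 1/25 of the area, and 1/2 of the section thickness
        print(fractionator_total(150, 1 / 10, 1 / 25, 1 / 2))  # -> 75000.0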

  14. Prevalence of learned grapheme-color pairings in a large online sample of synesthetes.

    Directory of Open Access Journals (Sweden)

    Nathan Witthoft

    Full Text Available In this paper we estimate the minimum prevalence of grapheme-color synesthetes with letter-color matches learned from an external stimulus, by analyzing a large sample of English-speaking grapheme-color synesthetes. We find that at least 6% (400/6588 participants) of the total sample learned many of their matches from a widely available colored letter toy. Among those born in the decade after the toy began to be manufactured, the proportion of synesthetes with learned letter-color pairings approaches 15% for some 5-year periods. Among those born 5 years or more before it was manufactured, none have colors learned from the toy. Analysis of the letter-color matching data suggests the only difference between synesthetes with matches to the toy and those without is exposure to the stimulus. These data indicate learning of letter-color pairings from external contingencies can occur in a substantial fraction of synesthetes, and are consistent with the hypothesis that grapheme-color synesthesia is a kind of conditioned mental imagery.

  15. Large magnitude gridded ionization chamber for impurity identification in alpha emitting radioactive samples

    International Nuclear Information System (INIS)

    Santos, R.N. dos.

    1992-01-01

    This paper describes a large gridded ionization chamber with high resolution used in the identification of impurities in α-emitting radioactive samples. The chamber and its electrodes are described in terms of their geometry and dimensions, and the best results obtained are listed. Several α-emitting radioactive samples were measured with a gas mixture of 90% argon plus 10% methane. We obtained α energy spectra with a resolution of about 22.14 keV, in agreement with the best results available in the literature. The α energy spectrum of ²³³U was acquired using this ionization chamber; several values were found that matched the calibration curve of the chamber well. Many additional measurements using different kinds of detectors were carried out to confirm the experimental results, leading to the identification of some elements of the ²³³U radioactive series. These results demonstrate the possibility of using the chamber for measurements of low-activity α contamination. (author)

  16. Analysis of plant hormones by microemulsion electrokinetic capillary chromatography coupled with on-line large volume sample stacking.

    Science.gov (United States)

    Chen, Zongbao; Lin, Zian; Zhang, Lin; Cai, Yan; Zhang, Lan

    2012-04-07

    A novel method of microemulsion electrokinetic capillary chromatography (MEEKC) coupled with on-line large volume sample stacking was developed for the analysis of six plant hormones including indole-3-acetic acid, indole-3-butyric acid, indole-3-propionic acid, 1-naphthaleneacetic acid, abscisic acid and salicylic acid. Baseline separation of the six plant hormones was achieved within 10 min by using a microemulsion background electrolyte containing 97.2% (w/w) 10 mM borate buffer at pH 9.2, 1.0% (w/w) ethyl acetate as oil droplets, 0.6% (w/w) sodium dodecyl sulphate as surfactant and 1.2% (w/w) 1-butanol as cosurfactant. In addition, an on-line concentration method based on a large volume sample stacking technique and multiple wavelength detection was adopted to improve the detection sensitivity, in order to determine trace-level hormones in real samples. The optimized method provided an approximately 50-100-fold increase in detection sensitivity compared with the MEEKC method alone, and the detection limits (S/N = 3) were between 0.005 and 0.02 μg mL⁻¹. The proposed method was simple, rapid and sensitive and could be applied to the determination of the six plant hormones in spiked water samples and tobacco leaves, and of 1-naphthylacetic acid in leaf fertilizer. The recoveries ranged from 76.0% to 119.1%, and good reproducibility was obtained with relative standard deviations (RSDs) less than 6.6%.

  17. The perils of straying from protocol: sampling bias and interviewer effects.

    Directory of Open Access Journals (Sweden)

    Carrie J Ngongo

    Full Text Available Fidelity to research protocol is critical. In a contingent valuation study in an informal urban settlement in Nairobi, Kenya, participants responded differently to the three trained interviewers. Interviewer effects were present during the survey pilot, then magnified at the start of the main survey after a seemingly slight adaptation of the survey sampling protocol allowed interviewers to speak with the "closest neighbor" in the event that no one was home at a selected household. This slight degree of interviewer choice led to inferred sampling bias. Multinomial logistic regression and post-estimation tests revealed that the three interviewers' samples differed significantly from one another according to six demographic characteristics. The two female interviewers were 2.8 and 7.7 times less likely to talk with respondents of low socio-economic status than the male interviewer. Systematic error renders it impossible to determine which of the survey responses might be "correct." This experience demonstrates why researchers must take care to strictly follow sampling protocols, consistently train interviewers, and monitor responses by interviewer to ensure similarity between interviewers' groups and produce unbiased estimates of the parameters of interest.

  18. DISCLOSING THE RADIO LOUDNESS DISTRIBUTION DICHOTOMY IN QUASARS: AN UNBIASED MONTE CARLO APPROACH APPLIED TO THE SDSS-FIRST QUASAR SAMPLE

    Energy Technology Data Exchange (ETDEWEB)

    Balokovic, M. [Department of Astronomy, California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125 (United States); Smolcic, V. [Argelander-Institut fuer Astronomie, Auf dem Hugel 71, D-53121 Bonn (Germany); Ivezic, Z. [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); Zamorani, G. [INAF-Osservatorio Astronomico di Bologna, via Ranzani 1, I-40127 Bologna (Italy); Schinnerer, E. [Max-Planck-Institut fuer Astronomie, Koenigstuhl 17, D-69117 Heidelberg (Germany); Kelly, B. C. [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93106 (United States)

    2012-11-01

    We investigate the dichotomy in the radio loudness distribution of quasars by modeling their radio emission and various selection effects using a Monte Carlo approach. The existence of two physically distinct quasar populations, the radio-loud and radio-quiet quasars, is controversial and over the last decade a bimodal distribution of radio loudness of quasars has been both affirmed and disputed. We model the quasar radio luminosity distribution with simple unimodal and bimodal distribution functions. The resulting simulated samples are compared to a fiducial sample of 8300 quasars drawn from the SDSS DR7 Quasar Catalog and combined with radio observations from the FIRST survey. Our results indicate that the SDSS-FIRST sample is best described by a radio loudness distribution which consists of two components, with (12 ± 1)% of sources in the radio-loud component. On the other hand, the evidence for a local minimum in the loudness distribution (bimodality) is not strong and we find that previous claims for its existence were probably affected by the incompleteness of the FIRST survey close to its faint limit. We also investigate the redshift and luminosity dependence of the radio loudness distribution and find tentative evidence that at high redshift radio-loud quasars were rarer, on average louder, and exhibited a smaller range in radio loudness. In agreement with other recent work, we conclude that the SDSS-FIRST sample strongly suggests that the radio loudness distribution of quasars is not a universal function, and that more complex models than presented here are needed to fully explain available observations.
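
    The Monte Carlo logic is straightforward to sketch: draw radio loudness from a two-component mixture, impose a detection threshold to mimic survey incompleteness near the faint limit, and compare the truncated distribution with the observed one. All numbers below are invented placeholders, not the paper's fitted parameters.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 8300
        # Two-component model of log radio loudness (placeholder parameters)
        is_loud = rng.random(n) < 0.12
        log_loudness = np.where(is_loud,
                                rng.normal(2.8, 0.9, n),    # radio-loud component
                                rng.normal(-0.5, 0.6, n))   # radio-quiet component

        # Mimic incompleteness near the radio survey's faint limit: sources whose
        # noisy flux proxy falls below a threshold go undetected (toy model)
        flux_proxy = log_loudness + rng.normal(0.0, 0.3, n)
        detected = flux_proxy > 0.0
        loud_fraction = is_loud[detected].mean()
        print("radio-loud fraction among detected sources:", round(loud_fraction, 3))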

  19. DISCLOSING THE RADIO LOUDNESS DISTRIBUTION DICHOTOMY IN QUASARS: AN UNBIASED MONTE CARLO APPROACH APPLIED TO THE SDSS-FIRST QUASAR SAMPLE

    International Nuclear Information System (INIS)

    Baloković, M.; Smolčić, V.; Ivezić, Ž.; Zamorani, G.; Schinnerer, E.; Kelly, B. C.

    2012-01-01

    We investigate the dichotomy in the radio loudness distribution of quasars by modeling their radio emission and various selection effects using a Monte Carlo approach. The existence of two physically distinct quasar populations, the radio-loud and radio-quiet quasars, is controversial and over the last decade a bimodal distribution of radio loudness of quasars has been both affirmed and disputed. We model the quasar radio luminosity distribution with simple unimodal and bimodal distribution functions. The resulting simulated samples are compared to a fiducial sample of 8300 quasars drawn from the SDSS DR7 Quasar Catalog and combined with radio observations from the FIRST survey. Our results indicate that the SDSS-FIRST sample is best described by a radio loudness distribution which consists of two components, with (12 ± 1)% of sources in the radio-loud component. On the other hand, the evidence for a local minimum in the loudness distribution (bimodality) is not strong and we find that previous claims for its existence were probably affected by the incompleteness of the FIRST survey close to its faint limit. We also investigate the redshift and luminosity dependence of the radio loudness distribution and find tentative evidence that at high redshift radio-loud quasars were rarer, on average louder, and exhibited a smaller range in radio loudness. In agreement with other recent work, we conclude that the SDSS-FIRST sample strongly suggests that the radio loudness distribution of quasars is not a universal function, and that more complex models than presented here are needed to fully explain available observations.

  20. $b$-Tagging and Large Radius Jet Modelling in a $g\\rightarrow b\\bar{b}$ rich sample at ATLAS

    CERN Document Server

    Jiang, Zihao; The ATLAS collaboration

    2016-01-01

    Studies of b-tagging performance and jet properties in double b-tagged, large radius jets from sqrt(s)=8 TeV pp collisions recorded by the ATLAS detector at the LHC are presented. The double b-tag requirement yields a sample rich in high pT jets originating from the g->bb process. Using this sample, the performance of b-tagging and modelling of jet substructure variables at small b-quark angular separation is probed.

  1. A large-capacity sample-changer for automated gamma-ray spectroscopy

    International Nuclear Information System (INIS)

    Andeweg, A.H.

    1980-01-01

    An automatic sample-changer has been developed at the National Institute for Metallurgy for use in gamma-ray spectroscopy with a lithium-drifted germanium detector. The sample-changer features remote storage, which prevents cross-talk and reduces background. It has a capacity for 200 samples and a sample container that takes liquid or solid samples. The rotation and vibration of samples during counting ensure that powdered samples are compacted, and improve the precision and reproducibility of the counting geometry.

  2. Economic and Humanistic Burden of Osteoarthritis: A Systematic Review of Large Sample Studies.

    Science.gov (United States)

    Xie, Feng; Kovic, Bruno; Jin, Xuejing; He, Xiaoning; Wang, Mengxiao; Silvestre, Camila

    2016-11-01

    Osteoarthritis (OA) consumes a significant amount of healthcare resources, and impairs the health-related quality of life (HRQoL) of patients. Previous reviews have consistently found substantial variations in the costs of OA across studies and countries. The comparability between studies was poor and limited the detection of the true differences between these studies. To review large sample studies on measuring the economic and/or humanistic burden of OA published since May 2006. We searched MEDLINE and EMBASE databases using comprehensive search strategies to identify studies reporting economic burden and HRQoL of OA. We included large sample studies if they had a sample size ≥1000 and measured the cost and/or HRQoL of OA. Reviewers worked independently and in duplicate, performing a cross-check between groups to verify agreement. Within- and between-group consolidation was performed to resolve discrepancies, with outstanding discrepancies being resolved by an arbitrator. The Kappa statistic was reported to assess the agreement between the reviewers. All costs were adjusted in their original currency to year 2015 using published inflation rates for the country where the study was conducted, and then converted to 2015 US dollars. A total of 651 articles were screened by title and abstract, 94 were reviewed in full text, and 28 were included in the final review. The Kappa value was 0.794. Twenty studies reported direct costs and nine reported indirect costs. The total annual average direct costs varied from US$1442 to US$21,335, both extremes reported from studies in the USA. The annual average indirect costs ranged from US$238 to US$29,935. Twelve studies measured HRQoL using various instruments. The Short Form 12 version 2 scores ranged from 35.0 to 51.3 for the physical component, and from 43.5 to 55.0 for the mental component. Health utilities varied from 0.30 for severe OA to 0.77 for mild OA. Per-patient OA costs are considerable and a patient's quality of life remains poor. Variations in costs across studies remain substantial.

  3. Classification of large circulating tumor cells isolated with ultra-high throughput microfluidic Vortex technology

    Science.gov (United States)

    Che, James; Yu, Victor; Dhar, Manjima; Renier, Corinne; Matsumoto, Melissa; Heirich, Kyra; Garon, Edward B.; Goldman, Jonathan; Rao, Jianyu; Sledge, George W.; Pegram, Mark D.; Sheth, Shruti; Jeffrey, Stefanie S.; Kulkarni, Rajan P.; Sollier, Elodie; Di Carlo, Dino

    2016-01-01

    Circulating tumor cells (CTCs) are emerging as rare but clinically significant non-invasive cellular biomarkers for cancer patient prognosis, treatment selection, and treatment monitoring. Current CTC isolation approaches, such as immunoaffinity, filtration, or size-based techniques, are often limited by throughput, purity, large output volumes, or inability to obtain viable cells for downstream analysis. For all technologies, traditional immunofluorescent staining alone has been employed to distinguish and confirm the presence of isolated CTCs among contaminating blood cells, although cells isolated by size may express vastly different phenotypes. Consequently, CTC definitions have been non-trivial, researcher-dependent, and evolving. Here we describe a complete set of objective criteria, leveraging well-established cytomorphological features of malignancy, by which we identify large CTCs. We apply the criteria to CTCs enriched from stage IV lung and breast cancer patient blood samples using the High Throughput Vortex Chip (Vortex HT), an improved microfluidic technology for the label-free, size-based enrichment and concentration of rare cells. We achieve improved capture efficiency (up to 83%), high speed of processing (8 mL/min of 10x diluted blood, or 800 μL/min of whole blood), and high purity (avg. background of 28.8±23.6 white blood cells per mL of whole blood). We show markedly improved performance of CTC capture (84% positive test rate) in comparison to previous Vortex designs and the current FDA-approved gold standard CellSearch assay. The results demonstrate the ability to quickly collect viable and pure populations of abnormal large circulating cells unbiased by molecular characteristics, which helps uncover further heterogeneity in these cells. PMID:26863573

  4. Integrated analysis of hematopoietic differentiation outcomes and molecular characterization reveals unbiased differentiation capacity and minor transcriptional memory in HPC/HSC-iPSCs.

    Science.gov (United States)

    Gao, Shuai; Hou, Xinfeng; Jiang, Yonghua; Xu, Zijian; Cai, Tao; Chen, Jiajie; Chang, Gang

    2017-01-23

    Transcription factor-mediated reprogramming can reset the epigenetics of somatic cells into a pluripotency compatible state. Recent studies show that induced pluripotent stem cells (iPSCs) always inherit starting cell-specific characteristics, called epigenetic memory, which may be advantageous, as directed differentiation into specific cell types is still challenging; however, it also may be unpredictable when uncontrollable differentiation occurs. In consideration of biosafety in disease modeling and personalized medicine, the availability of high-quality iPSCs which lack a biased differentiation capacity and somatic memory could be indispensable. Herein, we evaluate the hematopoietic differentiation capacity and somatic memory state of hematopoietic progenitor and stem cell (HPC/HSC)-derived iPSCs (HPC/HSC-iPSCs) using a previously established sequential reprogramming system. We found that HPC/HSCs are amenable to being reprogrammed into iPSCs with unbiased differentiation capacity to hematopoietic progenitors and mature hematopoietic cells. Genome-wide analyses revealed that no global epigenetic memory was detectable in HPC/HSC-iPSCs, but only a minor transcriptional memory of HPC/HSCs existed in a specific tetraploid complementation (4N)-incompetent HPC/HSC-iPSC line. However, the observed minor transcriptional memory had no influence on the hematopoietic differentiation capacity, indicating the reprogramming of the HPC/HSCs was nearly complete. Further analysis revealed the correlation of minor transcriptional memory with the aberrant distribution of H3K27me3. This work provides a comprehensive framework for obtaining high-quality iPSCs from HPC/HSCs with unbiased hematopoietic differentiation capacity and minor transcriptional memory.

  5. SIRAH: a structurally unbiased coarse-grained force field for proteins with aqueous solvation and long-range electrostatics.

    Science.gov (United States)

    Darré, Leonardo; Machado, Matías Rodrigo; Brandner, Astrid Febe; González, Humberto Carlos; Ferreira, Sebastián; Pantano, Sergio

    2015-02-10

    Modeling of macromolecular structures and interactions represents an important challenge for computational biology, involving different time and length scales. However, this task can be facilitated through the use of coarse-grained (CG) models, which reduce the number of degrees of freedom and allow efficient exploration of complex conformational spaces. This article presents a new CG protein model named SIRAH, developed to work with explicit solvent and to capture sequence, temperature, and ionic strength effects in a topologically unbiased manner. SIRAH is implemented in GROMACS, and interactions are calculated using a standard pairwise Hamiltonian for classical molecular dynamics simulations. We present a set of simulations that test the capability of SIRAH to produce a qualitatively correct solvation on different amino acids, hydrophilic/hydrophobic interactions, and long-range electrostatic recognition leading to spontaneous association of unstructured peptides and stable structures of single polypeptides and protein-protein complexes.

  6. Cost-Effective Large-Scale Occupancy-Abundance Monitoring of Invasive Brushtail Possums (Trichosurus Vulpecula) on New Zealand's Public Conservation Land.

    Directory of Open Access Journals (Sweden)

    Andrew M Gormley

    Full Text Available There is interest in large-scale and unbiased monitoring of biodiversity status and trend, but there are few published examples of such monitoring being implemented. The New Zealand Department of Conservation is implementing a monitoring program that involves sampling selected biota at the vertices of an 8-km grid superimposed over the 8.6 million hectares of public conservation land that it manages. The introduced brushtail possum (Trichosurus vulpecula) is a major threat to some biota and is one taxon that they wish to monitor and report on. A pilot study revealed that the traditional method of monitoring possums using leg-hold traps set for two nights, termed the Trap Catch Index, was a constraint on the cost and logistical feasibility of the monitoring program. A phased implementation of the monitoring program was therefore conducted to collect data for evaluating the trade-off between possum occupancy-abundance estimates and the costs of sampling for one night rather than two nights. Reducing trapping effort from two nights to one night along four trap-lines reduced the estimated costs of monitoring by 5.8% due to savings in labour, food and allowances; it had a negligible effect on estimated national possum occupancy but resulted in slightly higher and less precise estimates of relative possum abundance. Monitoring possums for one night rather than two nights would provide an annual saving of NZ$72,400, with 271 fewer field days required for sampling. Possums occupied 60% (95% credible interval 53-68%) of sampling locations on New Zealand's public conservation land, with a mean relative abundance (Trap Catch Index) of 2.7% (2.0-3.5%). Possum occupancy and abundance were higher in forest than in non-forest habitats. Our case study illustrates the need to evaluate relationships between sampling design, cost, and occupancy-abundance estimates when designing and implementing large-scale occupancy-abundance monitoring programs.

  7. Improved reproducibility in genome-wide DNA methylation analysis for PAXgene® fixed samples compared to restored FFPE DNA

    DEFF Research Database (Denmark)

    Andersen, Gitte Brinch; Hager, Henrik; Hansen, Lise Lotte

    2014-01-01

    Formalin fixation has been the standard method for conservation of clinical specimens for decades. However, a major drawback is the high degradation of nucleic acids, which complicates its use in genome-wide analyses. Unbiased identification of biomarkers, however, requires genome-wide studies, precluding the use of the valuable archives of specimens with long-term follow-up data. Therefore, restoration protocols for DNA from formalin-fixed and paraffin-embedded (FFPE) samples have been developed, although they are cost-intensive and time-consuming. An alternative to FFPE and snap-freezing is PAXgene fixation. Quantitative DNA methylation analysis demonstrated that the methylation profile in PAXgene-fixed tissues showed, in comparison with restored FFPE samples, a higher concordance with the profile detected in frozen samples. We demonstrate, for the first time, that DNA from PAXgene-conserved tissue performs better in genome-wide DNA methylation analysis than restored FFPE DNA.

  8. LOGISTICS OF ECOLOGICAL SAMPLING ON LARGE RIVERS

    Science.gov (United States)

    The objectives of this document are to provide an overview of the logistical problems associated with the ecological sampling of boatable rivers and to suggest solutions to those problems. It is intended to be used as a resource for individuals preparing to collect biological data on large rivers.

  9. An intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces.

    Science.gov (United States)

    Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying

    2013-09-01

    Poisson disk sampling has excellent spatial and spectral properties, and plays an important role in a variety of visual computing applications. Although many promising algorithms have been proposed for multidimensional sampling in Euclidean space, very few studies have been reported with regard to the problem of generating Poisson disks on surfaces due to the complicated nature of the surface. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. In sharp contrast to the conventional parallel approaches, our method neither partitions the given surface into small patches nor uses any spatial data structure to maintain the voids in the sampling domain. Instead, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. Our algorithm guarantees that the generated Poisson disks are uniformly and randomly distributed without bias. It is worth noting that our method is intrinsic and independent of the embedding space. This intrinsic feature allows us to generate Poisson disk patterns on arbitrary surfaces in ℝⁿ. To our knowledge, this is the first intrinsic, parallel, and accurate algorithm for surface Poisson disk sampling. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
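
    The priority-based conflict resolution lends itself to a compact sketch: give every candidate a random, unique priority, then accept a candidate only if no higher-priority candidate lies within the disk radius. The serial Python below emulates that logic in the Euclidean plane (the parallel scheduling and the surface-intrinsic distances of the paper are out of scope here):

        import numpy as np
        from scipy.spatial import cKDTree

        def priority_poisson_disk(candidates, radius, seed=0):
            # Assign each candidate a random, unique priority; process candidates
            # from highest to lowest priority, rejecting all unprocessed
            # candidates within `radius` of an accepted one.
            rng = np.random.default_rng(seed)
            priority = rng.permutation(len(candidates))
            tree = cKDTree(candidates)
            accepted = np.zeros(len(candidates), dtype=bool)
            rejected = np.zeros(len(candidates), dtype=bool)
            for i in np.argsort(-priority):
                if rejected[i]:
                    continue
                accepted[i] = True
                for j in tree.query_ball_point(candidates[i], radius):
                    if j != i:
                        rejected[j] = True
            return candidates[accepted]

        pts = np.random.default_rng(7).random((5000, 2))  # dense candidate pool
        print(len(priority_poisson_disk(pts, radius=0.05)), "samples kept")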

  10. Remote sensing data with the conditional latin hypercube sampling and geostatistical approach to delineate landscape changes induced by large chronological physical disturbances.

    Science.gov (United States)

    Lin, Yu-Pin; Chu, Hone-Jay; Wang, Cheng-Long; Yu, Hsiao-Hsuan; Wang, Yung-Chieh

    2009-01-01

    This study applies variogram analyses of normalized difference vegetation index (NDVI) images derived from SPOT HRV images obtained before and after the Chi-Chi earthquake in the Chenyulan watershed, Taiwan, as well as images after four large typhoons, to delineate the spatial patterns, spatial structures and spatial variability of landscapes caused by these large disturbances. The conditional Latin hypercube sampling approach was applied to select samples from the multiple NDVI images. Kriging and sequential Gaussian simulation with sufficient samples were then used to generate maps of the NDVI images. The variography results demonstrate that spatial patterns of disturbed landscapes were successfully delineated by variogram analysis in the study areas. The high-magnitude Chi-Chi earthquake created spatial landscape variations in the study area. After the earthquake, the cumulative impacts of typhoons on landscape patterns depended on the magnitudes and paths of the typhoons, but were not always evident in the spatiotemporal variability of landscapes in the study area. The statistics and spatial structures of the multiple NDVI images were captured by 3,000 samples from the 62,500 grid cells of the NDVI images. Kriging and sequential Gaussian simulation with the 3,000 samples effectively reproduced the spatial patterns of the NDVI images. Overall, the proposed approach, which integrates conditional Latin hypercube sampling, variograms, kriging and sequential Gaussian simulation for remotely sensed images, efficiently monitors, samples and maps the effects of large chronological disturbances on spatial characteristics of landscape changes, including spatial variability and heterogeneity.
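
    Conditional Latin hypercube sampling selects a subset of locations whose attribute distributions mirror the full image stack. SciPy ships only the plain (unconditioned) Latin hypercube sampler, so the sketch below uses that to stratify 3,000 sample locations over an invented NDVI grid; it is a simplification of the conditioned variant used in the study.

        import numpy as np
        from scipy.stats import qmc

        rng = np.random.default_rng(4)
        ndvi = rng.normal(0.4, 0.15, size=(250, 250))  # stand-in NDVI grid

        # Latin hypercube points in the unit square, scaled to pixel indices
        sampler = qmc.LatinHypercube(d=2, seed=4)
        idx = (sampler.random(n=3000) * ndvi.shape).astype(int)
        samples = ndvi[idx[:, 0], idx[:, 1]]
        print("sample mean vs. image mean:",
              samples.mean().round(4), ndvi.mean().round(4))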

  11. CORRELATION ANALYSIS OF A LARGE SAMPLE OF NARROW-LINE SEYFERT 1 GALAXIES: LINKING CENTRAL ENGINE AND HOST PROPERTIES

    International Nuclear Information System (INIS)

    Xu Dawei; Komossa, S.; Wang Jing; Yuan Weimin; Zhou Hongyan; Lu Honglin; Li Cheng; Grupe, Dirk

    2012-01-01

    We present a statistical study of a large, homogeneously analyzed sample of narrow-line Seyfert 1 (NLS1) galaxies, accompanied by a comparison sample of broad-line Seyfert 1 (BLS1) galaxies. Optical emission-line and continuum properties are subjected to correlation analyses, in order to identify the main drivers of the correlation space of active galactic nuclei (AGNs), and of NLS1 galaxies in particular. For the first time, we have established the density of the narrow-line region as a key parameter in Eigenvector 1 space, as important as the Eddington ratio L/L_Edd. This is important because it links the properties of the central engine with the properties of the host galaxy, i.e., the interstellar medium (ISM). We also confirm previously found correlations involving the line width of Hβ and the strength of the Fe II and [O III] λ5007 emission lines, and we confirm the important role played by L/L_Edd in driving the properties of NLS1 galaxies. A spatial correlation analysis shows that large-scale environments of the BLS1 and NLS1 galaxies of our sample are similar. If mergers are rare in our sample, accretion-driven winds, on the one hand, or bar-driven inflows, on the other hand, may account for the strong dependence of Eigenvector 1 on ISM density.

  12. Sampling data summary for the ninth run of the Large Slurry Fed Melter

    International Nuclear Information System (INIS)

    Sabatino, D.M.

    1983-01-01

    The ninth experimental run of the Large Slurry Fed Melter (LSFM) was completed June 27, 1983, after 63 days of continuous operation. During the run, the various melter and off-gas streams were sampled and analyzed to determine melter material balances and to characterize off-gas emissions. Sampling methods and preliminary results were reported earlier. The emphasis was on the chemical analyses of the off-gas entrainment, deposits, and scrubber liquid. The significant sampling results from the run are summarized below: Flushing the Frit 165 with Frit 131 without bubbler agitation required 3 to 4.5 melter volumes. The off-gas cesium concentration during feeding was on the order of 36 to 56 μg Cs/scf. The cesium concentration in the melter plenum (based on air in-leakage only) was on the order of 110 to 210 μg Cs/scf. Using <1 micron as the cut point for semivolatile material, 60% of the chloride, 35% of the sodium and less than 5% of the manganese and iron in the entrainment are present as semivolatiles. A material balance on the scrubber tank solids shows good agreement with entrainment data. An overall cesium balance using LSFM-9 data and the DWPF production rate indicates an emission of 0.11 mCi/yr of cesium from the DWPF off-gas. This is a factor of 27 less than the maximum allowable 3 mCi/yr.

  13. A Survey for Spectroscopic Binaries in a Large Sample of G Dwarfs

    Science.gov (United States)

    Udry, S.; Mayor, M.; Latham, D. W.; Stefanik, R. P.; Torres, G.; Mazeh, T.; Goldberg, D.; Andersen, J.; Nordstrom, B.

    For more than 5 years now, the radial velocities of a large sample of G dwarfs (3,347 stars) have been monitored in order to obtain an unequaled set of orbital parameters for solar-type stars (~400 orbits, up to now). This survey provides a considerable improvement on the classical systematic study by Duquennoy and Mayor (1991; DM91). The observational part of the survey has been carried out as a collaboration between the Geneva Observatory, using the two CORAVEL spectrometers for the southern sky, and CfA, at the Oak Ridge and Whipple Observatories for the northern sky. As a first glance at these new results, we address in this contribution a special aspect of the orbital eccentricity distribution, namely the disappearance of the void observed in DM91 for quasi-circular orbits with periods larger than 10 days.

  14. Effects of social organization, trap arrangement and density, sampling scale, and population density on bias in population size estimation using some common mark-recapture estimators.

    Directory of Open Access Journals (Sweden)

    Manan Gupta

    Full Text Available Mark-recapture estimators are commonly used for population size estimation, and typically yield unbiased estimates for most solitary species with low to moderate home range sizes. However, these methods assume independence of captures among individuals, an assumption that is clearly violated in social species that show fission-fusion dynamics, such as the Asian elephant. In the specific case of Asian elephants, doubts have been raised about the accuracy of population size estimates. More importantly, the potential problem for the use of mark-recapture methods posed by social organization in general has not been systematically addressed. We developed an individual-based simulation framework to systematically examine the potential effects of type of social organization, as well as other factors such as trap density and arrangement, spatial scale of sampling, and population density, on bias in population sizes estimated by POPAN, Robust Design, and Robust Design with detection heterogeneity. In the present study, we ran simulations with biological, demographic and ecological parameters relevant to Asian elephant populations, but the simulation framework is easily extended to address questions relevant to other social species. We collected capture history data from the simulations, and used those data to test for bias in population size estimation. Social organization significantly affected bias in most analyses, but the effect sizes were variable, depending on other factors. Social organization tended to introduce large bias when trap arrangement was uniform and sampling effort was low. POPAN clearly outperformed the two Robust Design models we tested, yielding close to zero bias if traps were arranged at random in the study area, and when population density and trap density were not too low. Social organization did not have a major effect on bias for these parameter combinations at which POPAN gave more or less unbiased population size estimates.
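
    For intuition about the estimators being stress-tested, the simplest two-occasion mark-recapture estimator (Lincoln-Petersen with Chapman's correction, a simpler relative of the models named above) is easy to simulate; under independent captures it is close to unbiased, which is precisely the assumption that fission-fusion social structure violates.

        import numpy as np

        def chapman_estimate(n1, n2, m2):
            # Chapman-corrected Lincoln-Petersen estimate of population size:
            # n1 marked on occasion 1, n2 captured on occasion 2, m2 recaptured
            return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

        rng = np.random.default_rng(5)
        true_n, p = 500, 0.2  # true population size and capture probability
        estimates = []
        for _ in range(2000):
            first = rng.random(true_n) < p    # occasion 1, independent captures
            second = rng.random(true_n) < p   # occasion 2, independent captures
            estimates.append(chapman_estimate(first.sum(), second.sum(),
                                              (first & second).sum()))
        print("mean estimate (true N = 500):", round(np.mean(estimates), 1))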

  15. Sampling in schools and large institutional buildings: Implications for regulations, exposure and management of lead and copper.

    Science.gov (United States)

    Doré, Evelyne; Deshommes, Elise; Andrews, Robert C; Nour, Shokoufeh; Prévost, Michèle

    2018-04-21

    Legacy lead and copper components are ubiquitous in the plumbing of large buildings, including schools that serve the children most vulnerable to lead exposure. Lead and copper samples must be collected after varying stagnation times and interpreted in reference to different thresholds. A total of 130 outlets (fountains, bathroom and kitchen taps) were sampled for dissolved and particulate lead as well as copper. Sampling was conducted at 8 schools and 3 institutional (non-residential) buildings served by municipal water of varying corrosivity, with and without corrosion control (CC), and without a lead service line. Samples included first draw following overnight stagnation (>8 h), partial (30 s) and full (5 min) flush, and first draw after 30 min of stagnation (30MS). Total lead concentrations in first draw samples after overnight stagnation varied widely from 0.07 to 19.9 μg Pb/L (median: 1.7 μg Pb/L) for large buildings served with non-corrosive water. Higher concentrations were observed in schools with corrosive water without CC (0.9-201 μg Pb/L, median: 14.3 μg Pb/L), while levels in schools with CC ranged from 0.2 to 45.1 μg Pb/L (median: 2.1 μg Pb/L). Partial flushing (30 s) and full flushing (5 min) reduced concentrations by 88% and 92% respectively for corrosive waters without CC. Lead concentrations after 30 min of stagnation were >45% of the values in first draw samples collected after overnight stagnation. Concentrations of particulate Pb varied widely (≥0.02-846 μg Pb/L), and particulate Pb was found to be the cause of very high total Pb concentrations in the 2% of samples exceeding 50 μg Pb/L. Pb levels across outlets within the same building varied widely (up to 1000X), especially in corrosive water (0.85-851 μg Pb/L after 30MS), confirming the need to sample at each outlet to identify high-risk taps. Based on the much higher concentrations observed in first draw samples, even after a short stagnation, the first 250 mL should be discarded unless no sources of lead are present.

  16. Sample-based Attribute Selective AnDE for Large Data

    DEFF Research Database (Denmark)

    Chen, Shenglei; Martinez, Ana; Webb, Geoffrey

    2017-01-01

    More and more applications have come with large data sets in the past decade. However, existing algorithms cannot guarantee to scale well on large data. Averaged n-Dependence Estimators (AnDE) allows for flexible learning from out-of-core data by varying the value of n (number of super parents).

  17. A large sample of Kohonen selected E+A (post-starburst) galaxies from the Sloan Digital Sky Survey

    Science.gov (United States)

    Meusinger, H.; Brünecke, J.; Schalldach, P.; in der Au, A.

    2017-01-01

    Context. The galaxy population in the contemporary Universe is characterised by a clear bimodality, blue galaxies with significant ongoing star formation and red galaxies with only a little. The migration between the blue and the red cloud of galaxies is an issue of active research. Post starburst (PSB) galaxies are thought to be observed in the short-lived transition phase. Aims: We aim to create a large sample of local PSB galaxies from the Sloan Digital Sky Survey (SDSS) to study their characteristic properties, particularly morphological features indicative of gravitational distortions and indications for active galactic nuclei (AGNs). Another aim is to present a tool set for an efficient search in a large database of SDSS spectra based on Kohonen self-organising maps (SOMs). Methods: We computed a huge Kohonen SOM for ~10⁶ spectra from SDSS data release 7. The SOM is made fully available, in combination with an interactive user interface, for the astronomical community. We selected a large sample of PSB galaxies taking advantage of the clustering behaviour of the SOM. The morphologies of both PSB galaxies and randomly selected galaxies from a comparison sample in SDSS Stripe 82 (S82) were inspected on deep co-added SDSS images to search for indications of gravitational distortions. We used the Portsmouth galaxy property computations to study the evolutionary stage of the PSB galaxies and archival multi-wavelength data to search for hidden AGNs. Results: We compiled a catalogue of 2665 PSB galaxies selected to have EW(Hδ) > 3 Å at low redshift. The PSB galaxies populate the transition region between the blue and the red cloud, in agreement with the idea that PSB galaxies represent the transitioning phase between actively and passively evolving galaxies. The relative frequency of distorted PSB galaxies is at least 57% for EW(Hδ) > 5 Å, significantly higher than in the comparison sample. The search for AGNs based on conventional selection criteria in the radio and MIR results in a low AGN fraction of ~2-3%. We confirm an MIR excess in the mean SED of the PSB galaxies.
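
    As a hedged illustration of the SOM-based search step only (not the authors' pipeline or interface), the sketch below trains a small self-organising map on stand-in "spectra" using the third-party minisom package, so that similar spectra map to neighbouring units; all sizes and data are invented.

        import numpy as np
        from minisom import MiniSom  # third-party package: pip install minisom

        rng = np.random.default_rng(6)
        spectra = rng.random((1000, 200))  # stand-in for preprocessed SDSS spectra

        # 20x20 map: each unit holds a prototype; nearby units end up similar
        som = MiniSom(20, 20, input_len=200, sigma=1.5, learning_rate=0.5,
                      random_seed=6)
        som.train_random(spectra, num_iteration=5000)

        # Spectra falling on the same or adjacent units form candidate clusters,
        # from which classes such as PSB-like spectra could be picked out
        print("best-matching unit of first spectrum:", som.winner(spectra[0]))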

  18. Particle infectivity of HIV-1 full-length genome infectious molecular clones in a subtype C heterosexual transmission pair following high fidelity amplification and unbiased cloning

    Energy Technology Data Exchange (ETDEWEB)

    Deymier, Martin J., E-mail: mdeymie@emory.edu [Emory Vaccine Center, Yerkes National Primate Research Center, 954 Gatewood Road NE, Atlanta, GA 30329 (United States); Claiborne, Daniel T., E-mail: dclaibo@emory.edu [Emory Vaccine Center, Yerkes National Primate Research Center, 954 Gatewood Road NE, Atlanta, GA 30329 (United States); Ende, Zachary, E-mail: zende@emory.edu [Emory Vaccine Center, Yerkes National Primate Research Center, 954 Gatewood Road NE, Atlanta, GA 30329 (United States); Ratner, Hannah K., E-mail: hannah.ratner@emory.edu [Emory Vaccine Center, Yerkes National Primate Research Center, 954 Gatewood Road NE, Atlanta, GA 30329 (United States); Kilembe, William, E-mail: wkilembe@rzhrg-mail.org [Zambia-Emory HIV Research Project (ZEHRP), B22/737 Mwembelelo, Emmasdale Post Net 412, P/BagE891, Lusaka (Zambia); Allen, Susan, E-mail: sallen5@emory.edu [Zambia-Emory HIV Research Project (ZEHRP), B22/737 Mwembelelo, Emmasdale Post Net 412, P/BagE891, Lusaka (Zambia); Department of Pathology and Laboratory Medicine, Emory University, Atlanta, GA (United States); Hunter, Eric, E-mail: eric.hunter2@emory.edu [Emory Vaccine Center, Yerkes National Primate Research Center, 954 Gatewood Road NE, Atlanta, GA 30329 (United States); Department of Pathology and Laboratory Medicine, Emory University, Atlanta, GA (United States)

    2014-11-15

    The high genetic diversity of HIV-1 impedes high throughput, large-scale sequencing and full-length genome cloning by common restriction enzyme based methods. Applying novel methods that employ a high-fidelity polymerase for amplification and an unbiased fusion-based cloning strategy, we have generated several HIV-1 full-length genome infectious molecular clones from an epidemiologically linked transmission pair. These clones represent the transmitted/founder virus and phylogenetically diverse non-transmitted variants from the chronically infected individual's diverse quasispecies near the time of transmission. We demonstrate that, using this approach, PCR-induced mutations in full-length clones derived from their cognate single genome amplicons are rare. Furthermore, all eight non-transmitted genomes tested produced functional virus with a range of infectivities, belying the previous assumption that a majority of circulating viruses in chronic HIV-1 infection are defective. Thus, these methods provide important tools to update protocols in molecular biology that can be universally applied to the study of human viral pathogens. - Highlights: • Our novel methodology demonstrates accurate amplification and cloning of full-length HIV-1 genomes. • A majority of plasma derived HIV variants from a chronically infected individual are infectious. • The transmitted/founder was more infectious than the majority of the variants from the chronically infected donor.

  19. Particle infectivity of HIV-1 full-length genome infectious molecular clones in a subtype C heterosexual transmission pair following high fidelity amplification and unbiased cloning

    International Nuclear Information System (INIS)

    Deymier, Martin J.; Claiborne, Daniel T.; Ende, Zachary; Ratner, Hannah K.; Kilembe, William; Allen, Susan; Hunter, Eric

    2014-01-01

    The high genetic diversity of HIV-1 impedes high throughput, large-scale sequencing and full-length genome cloning by common restriction enzyme based methods. Applying novel methods that employ a high-fidelity polymerase for amplification and an unbiased fusion-based cloning strategy, we have generated several HIV-1 full-length genome infectious molecular clones from an epidemiologically linked transmission pair. These clones represent the transmitted/founder virus and phylogenetically diverse non-transmitted variants from the chronically infected individual's diverse quasispecies near the time of transmission. We demonstrate that, using this approach, PCR-induced mutations in full-length clones derived from their cognate single genome amplicons are rare. Furthermore, all eight non-transmitted genomes tested produced functional virus with a range of infectivities, belying the previous assumption that a majority of circulating viruses in chronic HIV-1 infection are defective. Thus, these methods provide important tools to update protocols in molecular biology that can be universally applied to the study of human viral pathogens. - Highlights: • Our novel methodology demonstrates accurate amplification and cloning of full-length HIV-1 genomes. • A majority of plasma derived HIV variants from a chronically infected individual are infectious. • The transmitted/founder was more infectious than the majority of the variants from the chronically infected donor

  20. Scalability on LHS (Latin Hypercube Sampling) samples for use in uncertainty analysis of large numerical models

    International Nuclear Information System (INIS)

    Baron, Jorge H.; Nunez Mac Leod, J.E.

    2000-01-01

    The present paper deals with the utilization of advanced statistical sampling methods to perform uncertainty and sensitivity analysis on numerical models. Such models may represent physical phenomena, logical structures (such as boolean expressions) or other systems, and various intrinsic parameters and/or input variables are usually treated simultaneously as random variables. In the present paper a simple method to scale up Latin Hypercube Sampling (LHS) samples is presented: starting with a small sample, its size is duplicated at each step, making it possible to re-use the numerical model results already obtained with the smaller sample. The method does not distort the statistical properties of the random variables and does not add any bias to the samples. The result is that a significant reduction in numerical model running time can be achieved (by re-using the previously run samples), while keeping all the advantages of LHS, until an acceptable representation level is achieved in the output variables. (author)
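
    As a rough illustration of the doubling scheme described above, the sketch below (Python; the helper names are ours, not the authors' code) refines an n-point Latin hypercube in [0,1)^d to 2n points while keeping every existing point, so previously run model evaluations remain usable. Each old stratum splits into two sub-strata per dimension, and the new points fill the empty ones.

        import numpy as np

        np.random.seed(0)

        def lhs(n, d):
            # plain Latin hypercube sample of n points in [0, 1)^d
            return (np.argsort(np.random.rand(n, d), axis=0)
                    + np.random.rand(n, d)) / n

        def double_lhs(points):
            # Double an LHS sample, reusing the existing points: each of the
            # n old strata per dimension splits in two, the old point occupies
            # one sub-stratum, and a new point is drawn in each empty one.
            n, d = points.shape
            new_cols = []
            for j in range(d):
                occupied = np.floor(points[:, j] * 2 * n).astype(int)
                empty = np.setdiff1d(np.arange(2 * n), occupied)
                np.random.shuffle(empty)        # random pairing across dimensions
                new_cols.append((empty + np.random.rand(n)) / (2 * n))
            return np.vstack([points, np.column_stack(new_cols)])

        base = lhs(4, 2)            # 4 already-run samples
        bigger = double_lhs(base)   # 8 samples; the first 4 rows are unchanged

    Because the old points are reused verbatim, only the n new rows require fresh model runs at each doubling step.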

  1. Evaluation of sampling strategies to estimate crown biomass

    Directory of Open Access Journals (Sweden)

    Krishna P Poudel

    2015-01-01

    Full Text Available Background Depending on tree and site characteristics, crown biomass accounts for a significant portion of the total aboveground biomass in the tree. Crown biomass estimation is useful for different purposes, including evaluating the economic feasibility of crown utilization for energy production or forest products, fuel load assessments and fire management strategies, and wildfire modeling. However, crown biomass is difficult to predict because of the variability within and among species and sites. Thus the allometric equations used for predicting crown biomass should be based on data collected with precise and unbiased sampling strategies. In this study, we evaluate the performance of different sampling strategies for estimating crown biomass and assess the effect of sample size on the estimates. Methods Using data collected from 20 destructively sampled trees, we evaluated 11 different sampling strategies using six evaluation statistics: bias, relative bias, root mean square error (RMSE), relative RMSE, amount of biomass sampled, and relative biomass sampled. We also evaluated the performance of the selected sampling strategies when different numbers of branches (3, 6, 9, and 12) are selected from each tree. A tree-specific log-linear model with branch diameter and branch length as covariates was used to obtain individual branch biomass. Results Compared to all other methods, stratified sampling with the probability-proportional-to-size estimation technique produced better results when three or six branches per tree were sampled. However, systematic sampling with the ratio estimation technique was the best when at least nine branches per tree were sampled. Under the stratified sampling strategy, selecting an unequal number of branches per stratum produced approximately similar results to simple random sampling, but it further decreased RMSE when information on branch diameter was used in the design and estimation phases. Conclusions Use of

  2. On the aspiration characteristics of large-diameter, thin-walled aerosol sampling probes at yaw orientations with respect to the wind

    International Nuclear Information System (INIS)

    Vincent, J.H.; Mark, D.; Smith, T.A.; Stevens, D.C.; Marshall, M.

    1986-01-01

    Experiments were carried out in a large wind tunnel to investigate the aspiration efficiencies of thin-walled aerosol sampling probes of large diameter (up to 50 mm) at orientations with respect to the wind direction ranging from 0 to 180 degrees. Sampling conditions ranged from sub- to super-isokinetic. The experiments employed test dusts of close-graded fused alumina and were conducted under conditions of controlled freestream turbulence. For orientations up to and including 90 degrees, the results were qualitatively and quantitatively consistent with a new physical model which takes account of the fact that the sampled air not only diverges or converges (depending on the relationship between wind speed and sampling velocity) but also turns to pass through the plane of the sampling orifice. The previously published results of Durham and Lundgren (1980) and Davies and Subari (1982) for smaller probes were also in good agreement with the new model. The model breaks down, however, for orientations greater than 90 degrees due to the increasing effect of particle impaction onto the blunt leading edge of the probe body. For the probe facing directly away from the wind (180 degree orientation), aspiration efficiency is dominated almost entirely by this effect. (author)

  3. Characterisation of large zooplankton sampled with two different gears during midwinter in Rijpfjorden, Svalbard

    Directory of Open Access Journals (Sweden)

    Błachowiak-Samołyk Katarzyna

    2017-12-01

    Full Text Available During a midwinter cruise north of 80°N to Rijpfjorden, Svalbard, the composition and vertical distribution of the zooplankton community were studied using two different samplers: (1) a vertically hauled multiple plankton sampler (MPS; mouth area 0.25 m2, mesh size 200 μm) and (2) a horizontally towed Methot Isaacs Kidd trawl (MIK; mouth area 3.14 m2, mesh size 1500 μm). Our results revealed substantially higher species diversity (49 taxa) than if a single sampler had been used (MPS: 38 taxa, MIK: 28). The youngest stage present (CIII) of Calanus spp. (including C. finmarchicus and C. glacialis) was sampled exclusively by the MPS, and the frequency of CIV copepodites in MPS samples was double that in MIK samples. In contrast, catches of the CV-CVI copepodites of Calanus spp. were substantially higher in the MIK samples (3-fold and 5-fold higher for adult males and females, respectively). The MIK sampling clearly showed that the highest abundances of all three Thysanoessa spp. were in the upper layers, although there was a tendency for the larger-sized euphausiids to occur deeper. Consistent patterns in the vertical distributions of the large zooplankters (e.g. ctenophores, euphausiids) collected by the MPS and MIK samplers provided more complete data on their abundances and sizes than would have been obtained by a single net. Possible mechanisms contributing to the observed patterns of distribution, e.g. high abundances of both Calanus spp. and their predators (ctenophores and chaetognaths) in the upper water layers during midwinter, are discussed.

  4. Mapping species distributions with MAXENT using a geographically biased sample of presence data: a performance assessment of methods for correcting sampling bias.

    Science.gov (United States)

    Fourcade, Yoan; Engler, Jan O; Rödder, Dennis; Secondi, Jean

    2014-01-01

    MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensual guideline to account for it. Here we compared the performance of five methods of bias correction on three datasets of species occurrence: one "virtual" dataset derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling bias corresponding to potential types of empirical bias. We applied five correction methods to the biased samples and compared the outputs of the distribution models to the unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of the methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, the simple systematic sampling of records consistently ranked among the best performing methods across the range of conditions tested, whereas the other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline to account for sampling bias. In the meantime, systematic sampling of records seems to be the most efficient method for correcting sampling bias and should be advised in most cases.

  5. Pattern transfer on large samples using a sub-aperture reactive ion beam

    Energy Technology Data Exchange (ETDEWEB)

    Miessler, Andre; Mill, Agnes; Gerlach, Juergen W.; Arnold, Thomas [Leibniz-Institut fuer Oberflaechenmodifizierung (IOM), Permoserstrasse 15, D-04318 Leipzig (Germany)

    2011-07-01

    In comparison to sole Ar ion beam sputtering, Reactive Ion Beam Etching (RIBE) has the main advantage of increased selectivity between different kinds of materials, owing to chemical contributions during material removal. RIBE is therefore an excellent candidate for pattern transfer applications. The goal of the present study is to apply a sub-aperture reactive ion beam to pattern transfer on large fused silica samples. In this context, the etching behavior in the ion beam periphery plays a decisive role. Using CF{sub 4} as the reactive gas, XPS measurements of the modified surface expose impurities such as Ni, Fe and Cr, which belong to chemically eroded material of the plasma pot, as well as an accumulation of carbon (up to 40 atomic percent) in the beam periphery. Substituting NF{sub 3} for CF{sub 4} as the reactive gas brings several benefits: more stable ion beam conditions, a reduction of the beam size down to a diameter of 5 mm, and a reduced amount of Ni, Fe and Cr contamination. However, the formation of a silicon nitride layer hampers the chemical contribution to the etching process. These side effects influence the transfer of trench structures onto quartz by changing the selectivity, owing to the altered chemical reaction of the modified resist layer. With this in mind, we investigate pattern transfer on large fused silica plates using NF{sub 3} sub-aperture RIBE.

  6. Estimation of absolute microglial cell numbers in mouse fascia dentata using unbiased and efficient stereological cell counting principles

    DEFF Research Database (Denmark)

    Wirenfeldt, Martin; Dalmau, Ishar; Finsen, Bente

    2003-01-01

    Stereology offers a set of unbiased principles to obtain precise estimates of total cell numbers in a defined region. In terms of microglia, which in the traumatized and diseased CNS is an extremely dynamic cell population, the strength of stereology is that the resultant estimate is unaffected...... of microglia, although with this thickness, the intensity of the staining is too high to distinguish single cells. Lectin histochemistry does not visualize microglia throughout the section and, accordingly, is not suited for the optical fractionator. The mean total number of Mac-1+ microglial cells...... in the unilateral dentate gyrus of the normal young adult male C57BL/6 mouse was estimated to be 12,300 (coefficient of variation (CV)=0.13) with a mean coefficient of error (CE) of 0.06. The perspective of estimating microglial cell numbers using stereology is to establish a solid basis for studying the dynamics...

  7. Sampling of finite elements for sparse recovery in large scale 3D electrical impedance tomography

    International Nuclear Information System (INIS)

    Javaherian, Ashkan; Moeller, Knut; Soleimani, Manuchehr

    2015-01-01

    This study proposes a method to improve the performance of sparse recovery inverse solvers in 3D electrical impedance tomography (3D EIT), especially when the volume under study contains small-sized inclusions, e.g. 3D imaging of breast tumours. Initially, a quadratic regularized inverse solver is applied in a fast manner with a stopping threshold much greater than the optimum. Based on assuming a fixed level of sparsity for the conductivity field, finite elements are then sampled by applying a compressive sensing (CS) algorithm to the rough, blurred estimate previously made by the quadratic solver. Finally, a sparse inverse solver is applied solely to the sampled finite elements, with the CS solution as its initial guess. The results show the great potential of the proposed CS-based sparse recovery in improving the accuracy of sparse solutions for large-scale 3D EIT. (paper)

  8. Averaging and sampling for magnetic-observatory hourly data

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2010-11-01

    Full Text Available A time and frequency-domain analysis is made of the effects of averaging and sampling methods used for constructing magnetic-observatory hourly data values. Using 1-min data as a proxy for continuous, geomagnetic variation, we construct synthetic hourly values of two standard types: instantaneous "spot" measurements and simple 1-h "boxcar" averages. We compare these average-sample types with others: 2-h average, Gaussian, and "brick-wall" low-frequency-pass. Hourly spot measurements provide a statistically unbiased representation of the amplitude range of geomagnetic-field variation, but as a representation of continuous field variation over time, they are significantly affected by aliasing, especially at high latitudes. The 1-h, 2-h, and Gaussian average-samples are affected by a combination of amplitude distortion and aliasing. Brick-wall values are not affected by either amplitude distortion or aliasing, but constructing them is, in an operational setting, relatively more difficult than it is for other average-sample types. It is noteworthy that 1-h average-samples, the present standard for observatory hourly data, have properties similar to Gaussian average-samples that have been optimized for a minimum residual sum of amplitude distortion and aliasing. For 1-h average-samples from medium and low-latitude observatories, the average of the combination of amplitude distortion and aliasing is less than the 5.0 nT accuracy standard established by Intermagnet for modern 1-min data. For medium and low-latitude observatories, average differences between monthly means constructed from 1-min data and monthly means constructed from any of the hourly average-sample types considered here are less than the 1.0 nT resolution of standard databases. We recommend that observatories and World Data Centers continue the standard practice of reporting simple 1-h-average hourly values.
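
    The two standard hourly value types compared above are straightforward to construct from 1-min data; a minimal sketch (Python, with synthetic data standing in for real observatory records):

        import numpy as np

        np.random.seed(0)

        # one day of synthetic 1-min variation as a proxy for the continuous field
        t = np.arange(1440)
        x = 20.0 * np.sin(2 * np.pi * t / 1440) + np.random.randn(1440)

        by_hour = x.reshape(24, 60)      # 24 hours x 60 one-minute values
        spot = by_hour[:, 0]             # instantaneous "spot" value each hour
        boxcar = by_hour.mean(axis=1)    # simple 1-h "boxcar" average

    The boxcar average attenuates sub-hourly amplitudes (amplitude distortion), while the spot values pass them through aliased, which is the trade-off analyzed in this record.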

  9. Association between time perspective and organic food consumption in a large sample of adults.

    Science.gov (United States)

    Bénard, Marc; Baudry, Julia; Méjean, Caroline; Lairon, Denis; Giudici, Kelly Virecoulon; Etilé, Fabrice; Reach, Gérard; Hercberg, Serge; Kesse-Guyot, Emmanuelle; Péneau, Sandrine

    2018-01-05

    Organic food intake has risen in many countries during the past decades. Even though motivations associated with such choice have been studied, psychological traits preceding these motivations have rarely been explored. Consideration of future consequences (CFC) represents the extent to which individuals consider future versus immediate consequences of their current behaviors. Consequently, a future oriented personality may be an important characteristic of organic food consumers. The objective was to analyze the association between CFC and organic food consumption in a large sample of the adult general population. In 2014, a sample of 27,634 participants from the NutriNet-Santé cohort study completed the CFC questionnaire and an Organic-Food Frequency questionnaire. For each food group (17 groups), non-organic food consumers were compared to organic food consumers across quartiles of the CFC using multiple logistic regressions. Moreover, adjusted means of proportions of organic food intakes out of total food intakes were compared between quartiles of the CFC. Analyses were adjusted for socio-demographic, lifestyle and dietary characteristics. Participants with higher CFC were more likely to consume organic food (OR quartile 4 (Q4) vs. Q1 = 1.88, 95% CI: 1.62, 2.20). Overall, future oriented participants were more likely to consume 14 food groups. The strongest associations were observed for starchy refined foods (OR = 1.78, 95% CI: 1.63, 1.94), and fruits and vegetables (OR = 1.74, 95% CI: 1.58, 1.92). The contribution of organic food intake out of total food intake was 33% higher in the Q4 compared to Q1. More precisely, the contribution of organic food consumed was higher in the Q4 for 16 food groups. The highest relative differences between Q4 and Q1 were observed for starchy refined foods (22%) and non-alcoholic beverages (21%). Seafood was the only food group without a significant difference. This study provides information on the personality of

  10. Large sample hydrology in NZ: Spatial organisation in process diagnostics

    Science.gov (United States)

    McMillan, H. K.; Woods, R. A.; Clark, M. P.

    2013-12-01

    A key question in hydrology is how to predict the dominant runoff generation processes in any given catchment. This knowledge is vital for a range of applications in forecasting hydrological response and related processes such as nutrient and sediment transport. A step towards this goal is to map dominant processes in locations where data is available. In this presentation, we use data from 900 flow gauging stations and 680 rain gauges in New Zealand, to assess hydrological processes. These catchments range in character from rolling pasture, to alluvial plains, to temperate rainforest, to volcanic areas. By taking advantage of so many flow regimes, we harness the benefits of large-sample and comparative hydrology to study patterns and spatial organisation in runoff processes, and their relationship to physical catchment characteristics. The approach we use to assess hydrological processes is based on the concept of diagnostic signatures. Diagnostic signatures in hydrology are targeted analyses of measured data which allow us to investigate specific aspects of catchment response. We apply signatures which target the water balance, the flood response and the recession behaviour. We explore the organisation, similarity and diversity in hydrological processes across the New Zealand landscape, and how these patterns change with scale. We discuss our findings in the context of the strong hydro-climatic gradients in New Zealand, and consider the implications for hydrological model building on a national scale.

  11. Solid-Phase Extraction and Large-Volume Sample Stacking-Capillary Electrophoresis for Determination of Tetracycline Residues in Milk

    Directory of Open Access Journals (Sweden)

    Gabriela Islas

    2018-01-01

    Full Text Available Solid-phase extraction in combination with large-volume sample stacking-capillary electrophoresis (SPE-LVSS-CE) was applied to measure chlortetracycline, doxycycline, oxytetracycline, and tetracycline in milk samples. Under optimal conditions, the proposed method had a linear range of 29 to 200 µg·L−1, with limits of detection ranging from 18.6 to 23.8 µg·L−1 and inter- and intraday repeatabilities < 10% (as relative standard deviations) in all cases. The enrichment factors obtained were from 50.33 to 70.85 for all the TCs compared with conventional capillary zone electrophoresis (CZE). This method is adequate to analyze tetracyclines below the most restrictive established maximum residue limits. The proposed method was employed in the analysis of 15 milk samples from different brands. Two of the tested samples were positive for the presence of oxytetracycline, with concentrations of 95 and 126 µg·L−1. SPE-LVSS-CE is a robust, easy, and efficient strategy for online preconcentration of tetracycline residues in complex matrices.

  12. Comparison of Proteins in Whole Blood and Dried Blood Spot Samples by LC/MS/MS

    Science.gov (United States)

    Chambers, Andrew G.; Percy, Andrew J.; Hardie, Darryl B.; Borchers, Christoph H.

    2013-09-01

    Dried blood spot (DBS) sampling methods are desirable for population-wide biomarker screening programs because of their ease of collection, transportation, and storage. Immunoassays are traditionally used to quantify endogenous proteins in these samples but require a separate assay for each protein. Recently, targeted mass spectrometry (MS) has been proposed for generating highly-multiplexed assays for biomarker proteins in DBS samples. In this work, we report the first comparison of proteins in whole blood and DBS samples using an untargeted MS approach. The average number of proteins identified in undepleted whole blood and DBS samples by liquid chromatography (LC)/MS/MS was 223 and 253, respectively. Protein identification repeatability was between 77% and 92% within replicates, and the majority of these repeated proteins (70%) were observed in both sample formats. Proteins exclusively identified in the liquid or dried fluid spot format were unbiased with respect to their molecular weight, isoelectric point, aliphatic index, and grand average hydrophobicity. In addition, we extended this comparison to include proteins in matching plasma and serum samples and their dried fluid spot equivalents, dried plasma spots (DPS) and dried serum spots (DSS). This work begins to define the accessibility of endogenous proteins in dried fluid spot samples for analysis by MS and is useful in evaluating the scope of this new approach.

  13. A framework for inference about carnivore density from unstructured spatial sampling of scat using detector dogs

    Science.gov (United States)

    Thompson, Craig M.; Royle, J. Andrew; Garner, James D.

    2012-01-01

    Wildlife management often hinges upon an accurate assessment of population density. Although undeniably useful, many of the traditional approaches to density estimation such as visual counts, livetrapping, or mark–recapture suffer from a suite of methodological and analytical weaknesses. Rare, secretive, or highly mobile species exacerbate these problems through the reality of small sample sizes and movement on and off study sites. In response to these difficulties, there is growing interest in the use of non-invasive survey techniques, which provide the opportunity to collect larger samples with minimal increases in effort, as well as the application of analytical frameworks that are not reliant on large sample size arguments. One promising survey technique, the use of scat detecting dogs, offers a greatly enhanced probability of detection while at the same time generating new difficulties with respect to non-standard survey routes, variable search intensity, and the lack of a fixed survey point for characterizing non-detection. In order to account for these issues, we modified an existing spatially explicit, capture–recapture model for camera trap data to account for variable search intensity and the lack of fixed, georeferenced trap locations. We applied this modified model to a fisher (Martes pennanti) dataset from the Sierra National Forest, California, and compared the results (12.3 fishers/100 km2) to more traditional density estimates. We then evaluated model performance using simulations at 3 levels of population density. Simulation results indicated that estimates based on the posterior mode were relatively unbiased. We believe that this approach provides a flexible analytical framework for reconciling the inconsistencies between detector dog survey data and density estimation procedures.

  14. Specific Antibodies Reacting with SV40 Large T Antigen Mimotopes in Serum Samples of Healthy Subjects.

    Directory of Open Access Journals (Sweden)

    Mauro Tognon

    Full Text Available Simian Virus 40, experimentally assayed in vitro in different animal and human cells and in vivo in rodents, was classified as a small DNA tumor virus. In previous studies, many groups identified Simian Virus 40 sequences in healthy individuals and cancer patients using PCR techniques, whereas others failed to detect the viral sequences in human specimens. These conflicting results prompted us to develop a novel indirect ELISA with synthetic peptides mimicking Simian Virus 40 capsid viral protein antigens, named mimotopes. This immunologic assay allowed us to investigate the presence of serum antibodies against Simian Virus 40 and to verify whether Simian Virus 40 is circulating in humans. In this investigation, two mimotopes from Simian Virus 40 large T antigen, the viral replication protein and oncoprotein, were employed to test for specific reactions with human serum antibodies. This indirect ELISA with synthetic peptides from Simian Virus 40 large T antigen was used to assay a new collection of serum samples from healthy subjects. The novel assay revealed that serum antibodies against Simian Virus 40 large T antigen mimotopes are detectable, at low titer, in healthy subjects aged 18-65 years. The overall prevalence of reactivity with the two Simian Virus 40 large T antigen peptides was 20%. This new ELISA with two mimotopes of the early viral regions is able to detect Simian Virus 40 large T antigen-antibody responses in a specific manner.

  15. Intramolecular Hydroamination of Unbiased and Functionalized Primary Aminoalkenes Catalyzed by a Rhodium Aminophosphine Complex

    Science.gov (United States)

    Julian, Lisa D.; Hartwig, John F.

    2010-01-01

    We report a rhodium catalyst that exhibits high reactivity for the hydroamination of primary aminoalkenes that are unbiased toward cyclization and that possess functional groups that would not be tolerated in hydroaminations catalyzed by more electrophilic systems. This catalyst contains an unusual diaminophosphine ligand that binds to rhodium in a κ3-P,O,P mode. The reactions catalyzed by this complex typically proceed at mild temperatures (room temperature to 70 °C), occur with primary aminoalkenes lacking substituents on the alkyl chain that bias the system toward cyclization, occur with primary aminoalkenes containing chloride, ester, ether, enolizable ketone, nitrile, and unprotected alcohol functionality, and occur with primary aminoalkenes containing internal olefins. Mechanistic data imply that these reactions occur with a turnover-limiting step that is different from that of reactions catalyzed by late transition metal complexes of Pd, Pt, and Ir. This change in the turnover-limiting step and resulting high activity of the catalyst stem from favorable relative rates for protonolysis of the M-C bond to release the hydroamination product vs reversion of the aminoalkyl intermediate to regenerate the acyclic precursor. Probes for the origin of the reactivity of the rhodium complex of L1 imply that the aminophosphine groups lead to these favorable rates by effects beyond steric demands and simple electron donation to the metal center. PMID:20839807

  16. Unbiased estimation of the liver volume by the Cavalieri principle using magnetic resonance images

    International Nuclear Information System (INIS)

    Sahin, Buenyamin; Emirzeoglu, Mehmet; Uzun, Ahmet; Incesu, Luetfi; Bek, Yueksel; Bilgic, Sait; Kaplan, Sueleyman

    2003-01-01

    Objective: It is often useful to know the exact volume of the liver, for instance when monitoring the effects of a disease, treatment, dieting regime, training program or surgical application. Several non-invasive methodologies have previously been described for estimating the volume of the liver. However, these preliminary techniques need special software or skilled operators, and they are not ideal for daily use in clinical practice. Here, we describe a simple, accurate and practical technique for estimating liver volume without changing the routine magnetic resonance imaging scanning procedure. Materials and methods: In this study, five normal livers, obtained from cadavers, were scanned by a 0.5 T MR machine in horizontal and sagittal planes. Consecutive sections of 10 mm thickness were used to estimate the whole volume of the liver by means of the Cavalieri principle. The volume estimations were done by three different performers to evaluate reproducibility. Results: There were no statistical differences between the performers' estimates and the real liver volumes (P>0.05). There was also high correlation between the estimates of the performers and the real liver volumes (r=0.993). Conclusion: We conclude that the combination of MR imaging with the Cavalieri principle is a non-invasive, direct and unbiased technique that can be safely applied to estimate liver volume with a very moderate workload per individual
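
    The Cavalieri estimator itself is a one-line computation: total volume is the section spacing times the summed cross-sectional areas. A minimal sketch in Python, with hypothetical section areas rather than figures from the study:

        def cavalieri_volume(section_areas_cm2, spacing_cm):
            # volume = section spacing x sum of measured section areas
            return spacing_cm * sum(section_areas_cm2)

        areas = [88.4, 130.2, 152.7, 141.9, 95.3]   # hypothetical areas, cm^2
        print(cavalieri_volume(areas, 1.0))          # 608.5 cm^3 for 10 mm sections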

  17. Assessing the validity of single-item life satisfaction measures: results from three large samples.

    Science.gov (United States)

    Cheung, Felix; Lucas, Richard E

    2014-12-01

    The present paper assessed the validity of single-item life satisfaction measures by comparing single-item measures to the Satisfaction with Life Scale (SWLS)-a more psychometrically established measure. Two large samples from Washington (N = 13,064) and Oregon (N = 2,277) recruited by the Behavioral Risk Factor Surveillance System and a representative German sample (N = 1,312) recruited by the Germany Socio-Economic Panel were included in the present analyses. Single-item life satisfaction measures and the SWLS were correlated with theoretically relevant variables, such as demographics, subjective health, domain satisfaction, and affect. The correlations between the two life satisfaction measures and these variables were examined to assess the construct validity of single-item life satisfaction measures. Consistent across three samples, single-item life satisfaction measures demonstrated a substantial degree of criterion validity with the SWLS (zero-order r = 0.62-0.64; disattenuated r = 0.78-0.80). Patterns of statistical significance for correlations with theoretically relevant variables were the same across single-item measures and the SWLS. Single-item measures did not produce systematically different correlations compared to the SWLS (average difference = 0.001-0.005). The average absolute difference in the magnitudes of the correlations produced by single-item measures and the SWLS was very small (average absolute difference = 0.015-0.042). Single-item life satisfaction measures performed very similarly to the multiple-item SWLS. Social scientists would get virtually identical answers to substantive questions regardless of which measure they use.
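
    The disattenuated correlations reported above follow from the standard correction for attenuation; a minimal sketch in Python, where the reliability values are illustrative assumptions rather than figures from the paper:

        from math import sqrt

        def disattenuated_r(r_xy, rel_x, rel_y):
            # correct an observed correlation for measurement unreliability
            return r_xy / sqrt(rel_x * rel_y)

        # e.g. an observed r of 0.63 with assumed reliabilities of 0.70 and 0.90
        print(disattenuated_r(0.63, 0.70, 0.90))   # ~0.79, within the reported range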

  18. Estimating fluvial wood discharge from timelapse photography with varying sampling intervals

    Science.gov (United States)

    Anderson, N. K.

    2013-12-01

    There is recent focus on calculating wood budgets for streams and rivers to help inform management decisions, ecological studies and carbon/nutrient cycling models. Most work has measured in situ wood in temporary storage along stream banks or estimated wood inputs from banks. Little effort has been employed monitoring and quantifying wood in transport during high flows. This paper outlines a procedure for estimating total seasonal wood loads using non-continuous coarse interval sampling and examines differences in estimation between sampling at 1, 5, 10 and 15 minutes. The analysis is performed on wood transport data for the Slave River in Northwest Territories, Canada. Relative to the 1 minute dataset, precision decreased by 23%, 46% and 60% for the 5, 10 and 15 minute datasets, respectively. Five and 10 minute sampling intervals provided unbiased, equal-variance estimates relative to 1 minute sampling, whereas 15 minute intervals were biased towards underestimation by 6%. Stratifying estimates by day and by discharge increased precision over non-stratification by 4% and 3%, respectively. Not including wood transported during ice break-up, the total minimum wood load estimated at this site is 3300 ± 800 m3 for the 2012 runoff season. The vast majority of the imprecision in total wood volumes came from variance in estimating average volume per log. (Figure note: proportions and variance were compared across sample intervals using bootstrap sampling to achieve equal n; each trial was sampled at n=100, 10,000 times, and averaged, and all trials were then averaged to obtain an estimate for each sample interval; dashed lines represent values from the 1 minute dataset.)
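
    The equal-n bootstrap comparison described in the figure note can be sketched as follows (Python; the lognormal volume distribution is a stand-in assumption, not the Slave River data):

        import numpy as np

        np.random.seed(1)

        def bootstrap_mean(sample, n=100, trials=10_000):
            # average of `trials` bootstrap means, each from n resampled values
            draws = np.random.choice(sample, size=(trials, n), replace=True)
            return draws.mean(axis=1).mean()

        # hypothetical per-minute wood volumes (m3); coarser intervals subsample it
        obs_1min = np.random.lognormal(mean=-1.0, sigma=0.8, size=6000)
        for step in (1, 5, 10, 15):
            print(step, bootstrap_mean(obs_1min[::step]))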

  19. A simulative comparison of respondent driven sampling with incentivized snowball sampling--the "strudel effect".

    Science.gov (United States)

    Gyarmathy, V Anna; Johnston, Lisa G; Caplinskiene, Irma; Caplinskas, Saulius; Latkin, Carl A

    2014-02-01

    Respondent driven sampling (RDS) and incentivized snowball sampling (ISS) are two sampling methods that are commonly used to reach people who inject drugs (PWID). We generated a set of simulated RDS samples on an actual sociometric ISS sample of PWID in Vilnius, Lithuania ("original sample") to assess if the simulated RDS estimates were statistically significantly different from the original ISS sample prevalences for HIV (9.8%), Hepatitis A (43.6%), Hepatitis B (Anti-HBc 43.9% and HBsAg 3.4%), Hepatitis C (87.5%), syphilis (6.8%) and Chlamydia (8.8%) infections and for selected behavioral risk characteristics. The original sample consisted of a large component of 249 people (83% of the sample) and 13 smaller components with 1-12 individuals. Generally, as long as all seeds were recruited from the large component of the original sample, the simulation samples simply recreated the large component. There were no significant differences between the large component and the entire original sample for the characteristics of interest. Altogether 99.2% of 360 simulation sample point estimates were within the confidence interval of the original prevalence values for the characteristics of interest. When population characteristics are reflected in large network components that dominate the population, RDS and ISS may produce samples that have statistically non-different prevalence values, even though some isolated network components may be under-sampled and/or statistically significantly different from the main groups. This so-called "strudel effect" is discussed in the paper. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  20. Groundwater-quality data in 12 GAMA study units: Results from the 2006–10 initial sampling period and the 2008–13 trend sampling period, California GAMA Priority Basin Project

    Science.gov (United States)

    Mathany, Timothy M.

    2017-03-09

    The Priority Basin Project (PBP) of the Groundwater Ambient Monitoring and Assessment (GAMA) program was developed in response to the Groundwater Quality Monitoring Act of 2001 and is being conducted by the U.S. Geological Survey in cooperation with the California State Water Resources Control Board. From 2004 through 2012, the GAMA-PBP collected samples and assessed the quality of groundwater resources that supply public drinking water in 35 study units across the State. Selected sites in each study unit were sampled again approximately 3 years after initial sampling as part of an assessment of temporal trends in water quality by the GAMA-PBP. Twelve of the study units, initially sampled during 2006–11 (initial sampling period) and sampled a second time during 2008–13 (trend sampling period) to assess temporal trends, are the subject of this report.The initial sampling was designed to provide a spatially unbiased assessment of the quality of untreated groundwater used for public water supplies in the 12 study units. In these study units, 550 sampling sites were selected by using a spatially distributed, randomized, grid-based method to provide spatially unbiased representation of the areas assessed (grid sites, also called “status sites”). After the initial sampling period, 76 of the previously sampled status sites (approximately 10 percent in each study unit) were randomly selected for trend sampling (“trend sites”). The 12 study units sampled both during the initial sampling and during the trend sampling period were distributed among 6 hydrogeologic provinces: Coastal (Northern and Southern), Transverse Ranges and Selected Peninsular Ranges, Klamath, Modoc Plateau and Cascades, and Sierra Nevada Hydrogeologic Provinces. For the purposes of this trend report, the six hydrogeologic provinces were grouped into two hydrogeologic regions based on location: Coastal and Mountain.The groundwater samples were analyzed for a number of synthetic organic

  1. Characterizing the zenithal night sky brightness in large territories: how many samples per square kilometre are needed?

    Science.gov (United States)

    Bará, Salvador

    2018-01-01

    A recurring question arises when trying to characterize, by means of measurements or theoretical calculations, the zenithal night sky brightness throughout a large territory: how many samples per square kilometre are needed? The optimum sampling distance should allow reconstructing, with sufficient accuracy, the continuous zenithal brightness map across the whole region, whilst at the same time avoiding unnecessary and redundant oversampling. This paper attempts to provide some tentative answers to this issue, using two complementary tools: the luminance structure function and the Nyquist-Shannon spatial sampling theorem. The analysis of several regions of the world, based on the data from the New world atlas of artificial night sky brightness, suggests that, as a rule of thumb, about one measurement per square kilometre could be sufficient for determining the zenithal night sky brightness of artificial origin at any point in a region to within ±0.1 magV arcsec^-2 (in the root-mean-square sense) of its true value in the Johnson-Cousins V band. The exact reconstruction of the zenithal night sky brightness maps from samples taken at the Nyquist rate seems to be considerably more demanding.
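
    A sketch of the first tool mentioned above, the luminance structure function, computed along one axis of a gridded brightness map (Python; the Gaussian "city glow" map is a made-up stand-in for atlas data):

        import numpy as np

        def structure_function(brightness, pixel_km):
            # RMS zenithal-brightness difference as a function of separation
            lags = np.arange(1, brightness.shape[1] // 2)
            rms = np.array([np.sqrt(np.mean((brightness[:, l:]
                                             - brightness[:, :-l]) ** 2))
                            for l in lags])
            return lags * pixel_km, rms

        x, y = np.meshgrid(np.linspace(0, 50, 51), np.linspace(0, 50, 51))
        sky = 21.0 - 2.0 * np.exp(-((x - 25) ** 2 + (y - 25) ** 2) / 200.0)
        sep_km, rms = structure_function(sky, pixel_km=1.0)
        ok = sep_km[rms <= 0.1]     # spacings meeting a 0.1 mag RMS target
        print(ok.max() if ok.size else "finer than 1 km spacing needed")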

  2. Detecting the Land-Cover Changes Induced by Large-Physical Disturbances Using Landscape Metrics, Spatial Sampling, Simulation and Spatial Analysis

    Directory of Open Access Journals (Sweden)

    Hone-Jay Chu

    2009-08-01

    Full Text Available The objectives of this study are to integrate conditional Latin Hypercube Sampling (cLHS), sequential Gaussian simulation (SGS) and spatial analysis of remotely sensed images to monitor the effects of large chronological disturbances on the spatial characteristics of landscape changes, including spatial heterogeneity and variability. The multiple NDVI images demonstrate that spatial patterns of disturbed landscapes were successfully delineated by spatial analyses such as the variogram, Moran's I and landscape metrics in the study area. The hybrid method delineates the spatial patterns and spatial variability of landscapes caused by these large disturbances. The cLHS approach is applied to select samples from Normalized Difference Vegetation Index (NDVI) images derived from SPOT HRV images in the Chenyulan watershed of Taiwan, and SGS with sufficient samples is then used to generate maps of the NDVI images. Finally, the simulated NDVI maps are verified using indexes such as the correlation coefficient and mean absolute error (MAE). The statistics and spatial structures of the multiple NDVI images therefore present very robust behavior, which advocates the use of the index for the quantification of landscape spatial patterns and land cover change. In addition, the results, transferred by Open Geospatial techniques, can be accessed from web-based and end-user applications of watershed management.
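
    Of the spatial statistics named above, global Moran's I is the most compact to illustrate; a minimal sketch on a toy NDVI tile (Python; rook adjacency is one common choice of spatial weights, not necessarily the study's):

        import numpy as np

        def morans_i(values, w):
            # global Moran's I: (n / W) * sum_ij w_ij z_i z_j / sum_i z_i^2
            z = values - values.mean()
            return (len(values) * (w * np.outer(z, z)).sum()
                    / (w.sum() * (z ** 2).sum()))

        def rook_weights(rows, cols):
            # 1 for horizontally/vertically adjacent grid cells, else 0
            n = rows * cols
            w = np.zeros((n, n))
            for r in range(rows):
                for c in range(cols):
                    i = r * cols + c
                    if r + 1 < rows:
                        w[i, i + cols] = w[i + cols, i] = 1
                    if c + 1 < cols:
                        w[i, i + 1] = w[i + 1, i] = 1
            return w

        ndvi = np.random.rand(10, 10)    # stand-in for an NDVI image tile
        print(morans_i(ndvi.ravel(), rook_weights(10, 10)))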

  3. Methodology for Quantitative Analysis of Large Liquid Samples with Prompt Gamma Neutron Activation Analysis using Am-Be Source

    International Nuclear Information System (INIS)

    Idiri, Z.; Mazrou, H.; Beddek, S.; Amokrane, A.

    2009-01-01

    An optimized set-up for prompt gamma neutron activation analysis (PGNAA) with an Am-Be source is described and used for large liquid sample analysis. A methodology for quantitative analysis is proposed: it consists of normalizing the prompt gamma count rates with thermal neutron flux measurements carried out with a He-3 detector, and with gamma attenuation factors calculated using MCNP-5. Both relative and absolute methods are considered. This methodology is then applied to the determination of cadmium in industrial phosphoric acid. The same sample is then analyzed by the inductively coupled plasma (ICP) method. Our results are in good agreement with those obtained with the ICP method.
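
    The proposed normalization reduces to a simple ratio once the flux and attenuation corrections are in hand; a minimal sketch (Python; the function name and all numbers are hypothetical, for illustration only):

        def normalized_count_rate(prompt_gamma_cps, thermal_flux, attenuation):
            # prompt-gamma count rate normalized by the He-3 flux measurement
            # and an MCNP-derived gamma attenuation factor
            return prompt_gamma_cps / (thermal_flux * attenuation)

        # e.g. a Cd line at 12.4 cps, flux 3.1e4 n/cm2/s, attenuation factor 0.82
        print(normalized_count_rate(12.4, 3.1e4, 0.82))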

  4. Sample preparation

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    Sample preparation prior to HPLC analysis is certainly one of the most important steps to consider in trace or ultratrace analysis. For many years scientists have tried to simplify the sample preparation process. It is rarely possible to inject a neat liquid sample, or a sample whose preparation is no more complex than dissolution in a given solvent. That process alone can remove insoluble materials, which is especially helpful with samples in complex matrices, provided other interactions do not affect the extraction. Here it is very likely that a large number of components will not dissolve and are therefore eliminated by a simple filtration process. In most cases, however, sample preparation is not as simple as dissolving the component of interest. At times enrichment is necessary; that is, the component of interest is present in a very large volume or mass of material and needs to be concentrated in some manner so that a small volume of the concentrated or enriched sample can be injected into the HPLC. 88 refs

  5. Assessing the Validity of Single-item Life Satisfaction Measures: Results from Three Large Samples

    Science.gov (United States)

    Cheung, Felix; Lucas, Richard E.

    2014-01-01

    Purpose The present paper assessed the validity of single-item life satisfaction measures by comparing single-item measures to the Satisfaction with Life Scale (SWLS) - a more psychometrically established measure. Methods Two large samples from Washington (N=13,064) and Oregon (N=2,277) recruited by the Behavioral Risk Factor Surveillance System (BRFSS) and a representative German sample (N=1,312) recruited by the Germany Socio-Economic Panel (GSOEP) were included in the present analyses. Single-item life satisfaction measures and the SWLS were correlated with theoretically relevant variables, such as demographics, subjective health, domain satisfaction, and affect. The correlations between the two life satisfaction measures and these variables were examined to assess the construct validity of single-item life satisfaction measures. Results Consistent across three samples, single-item life satisfaction measures demonstrated a substantial degree of criterion validity with the SWLS (zero-order r = 0.62 - 0.64; disattenuated r = 0.78 - 0.80). Patterns of statistical significance for correlations with theoretically relevant variables were the same across single-item measures and the SWLS. Single-item measures did not produce systematically different correlations compared to the SWLS (average difference = 0.001 - 0.005). The average absolute difference in the magnitudes of the correlations produced by single-item measures and the SWLS was very small (average absolute difference = 0.015 - 0.042). Conclusions Single-item life satisfaction measures performed very similarly to the multiple-item SWLS. Social scientists would get virtually identical answers to substantive questions regardless of which measure they use. PMID:24890827

  6. Gasoline prices, gasoline consumption, and new-vehicle fuel economy: Evidence for a large sample of countries

    International Nuclear Information System (INIS)

    Burke, Paul J.; Nishitateno, Shuhei

    2013-01-01

    Countries differ considerably in terms of the price drivers pay for gasoline. This paper uses data for 132 countries for the period 1995–2008 to investigate the implications of these differences for the consumption of gasoline for road transport. To address the potential for simultaneity bias, we use both a country's oil reserves and the international crude oil price as instruments for a country's average gasoline pump price. We obtain estimates of the long-run price elasticity of gasoline demand of between − 0.2 and − 0.5. Using newly available data for a sub-sample of 43 countries, we also find that higher gasoline prices induce consumers to substitute to vehicles that are more fuel-efficient, with an estimated elasticity of + 0.2. Despite the small size of our elasticity estimates, there is considerable scope for low-price countries to achieve gasoline savings and vehicle fuel economy improvements via reducing gasoline subsidies and/or increasing gasoline taxes. - Highlights: ► We estimate the determinants of gasoline demand and new-vehicle fuel economy. ► Estimates are for a large sample of countries for the period 1995–2008. ► We instrument for gasoline prices using oil reserves and the world crude oil price. ► Gasoline demand and fuel economy are inelastic with respect to the gasoline price. ► Large energy efficiency gains are possible via higher gasoline prices
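
    The instrumenting strategy described above is ordinary two-stage least squares; a self-contained toy sketch (Python, with simulated data and arbitrarily chosen coefficients; not the paper's dataset or code):

        import numpy as np

        np.random.seed(3)

        def two_stage_least_squares(y, X, Z):
            # 2SLS: project X onto the instruments, then regress y on the fit
            X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
            return np.linalg.lstsq(X_hat, y, rcond=None)[0]

        n = 132                                                   # one row per country
        Z = np.column_stack([np.ones(n), np.random.rand(n, 2)])   # const, reserves, crude
        log_price = Z @ np.array([0.5, -0.8, 1.2]) + 0.1 * np.random.randn(n)
        log_demand = 2.0 - 0.35 * log_price + 0.1 * np.random.randn(n)
        X = np.column_stack([np.ones(n), log_price])
        print(two_stage_least_squares(log_demand, X, Z)[1])   # elasticity near -0.35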

  7. Spatio-temporal foreshock activity during stick-slip experiments of large rock samples

    Science.gov (United States)

    Tsujimura, Y.; Kawakata, H.; Fukuyama, E.; Yamashita, F.; Xu, S.; Mizoguchi, K.; Takizawa, S.; Hirano, S.

    2016-12-01

    Foreshock activity has sometimes been reported for large earthquakes and has been roughly classified into the following two classes. For shallow intraplate earthquakes, foreshocks occurred in the vicinity of the mainshock hypocenter (e.g., Doi and Kawakata, 2012; 2013). For interplate subduction earthquakes, foreshock hypocenters migrated toward the mainshock hypocenter (Kato et al., 2012; Yagi et al., 2014). To understand how foreshocks occur, it is useful to investigate the spatio-temporal activity of foreshocks in laboratory experiments under controlled conditions. We have conducted stick-slip experiments using a large-scale biaxial friction apparatus at NIED in Japan (e.g., Fukuyama et al., 2014). Our previous results showed that stick-slip events repeatedly occurred in a run, but only the later events were preceded by foreshocks. Kawakata et al. (2014) inferred that the gouge generated during the run was an important key for foreshock occurrence. In this study, we carried out stick-slip experiments on large rock samples whose interface (fault plane) is 1.5 m long and 0.5 m wide, after some preliminary runs to generate fault gouge on the interface. In the current experiments, we investigated the spatio-temporal activity of foreshocks. We detected foreshocks from the waveform records of a 3D array of piezo-electric sensors. Our new results showed that more than three foreshocks (typically about twenty) occurred during each stick-slip event, in contrast to the few foreshocks observed during previous experiments without pre-existing gouge. Next, we estimated the hypocenter locations of the stick-slip events and found that they were located near the opposite end to the loading point. In addition, we observed a migration of foreshock hypocenters toward the hypocenter of each stick-slip event. This suggests that the foreshock activity observed in our current experiments was similar to that for the interplate earthquakes in terms of the

  8. Heritability of psoriasis in a large twin sample

    DEFF Research Database (Denmark)

    Lønnberg, Ann Sophie; Skov, Liselotte; Skytthe, A

    2013-01-01

    AIM: To study the concordance of psoriasis in a population-based twin sample. METHODS: Data on psoriasis in 10,725 twin pairs, 20-71 years of age, from the Danish Twin Registry was collected via a questionnaire survey. The concordance and heritability of psoriasis were estimated. RESULTS: In total...

  9. MZDASoft: a software architecture that enables large-scale comparison of protein expression levels over multiple samples based on liquid chromatography/tandem mass spectrometry.

    Science.gov (United States)

    Ghanat Bari, Mehrab; Ramirez, Nelson; Wang, Zhiwei; Zhang, Jianqiu Michelle

    2015-10-15

    Without accurate peak linking/alignment, only the expression levels of a small percentage of proteins can be compared across multiple samples in Liquid Chromatography/Mass Spectrometry/Tandem Mass Spectrometry (LC/MS/MS) due to the selective nature of tandem MS peptide identification. This greatly hampers biomedical research that aims at finding biomarkers for disease diagnosis, treatment, and the understanding of disease mechanisms. A recent algorithm, PeakLink, has allowed the accurate linking of LC/MS peaks without tandem MS identifications to their corresponding ones with identifications across multiple samples collected from different instruments, tissues and labs, which greatly enhanced the ability of comparing proteins. However, PeakLink cannot be implemented practically for large numbers of samples based on existing software architectures, because it requires access to peak elution profiles from multiple LC/MS/MS samples simultaneously. We propose a new architecture based on parallel processing, which extracts LC/MS peak features, and saves them in database files to enable the implementation of PeakLink for multiple samples. The software has been deployed in High-Performance Computing (HPC) environments. The core part of the software, MZDASoft Parallel Peak Extractor (PPE), can be downloaded with a user and developer's guide, and it can be run on HPC centers directly. The quantification applications, MZDASoft TandemQuant and MZDASoft PeakLink, are written in Matlab, which are compiled with a Matlab runtime compiler. A sample script that incorporates all necessary processing steps of MZDASoft for LC/MS/MS quantification in a parallel processing environment is available. The project webpage is http://compgenomics.utsa.edu/zgroup/MZDASoft. The proposed architecture enables the implementation of PeakLink for multiple samples. Significantly more (100%-500%) proteins can be compared over multiple samples with better quantification accuracy in test cases. MZDASoft

  10. Oxalic acid as a liquid dosimeter for absorbed dose measurement in large-scale of sample solution

    International Nuclear Information System (INIS)

    Biramontri, S.; Dechburam, S.; Vitittheeranon, A.; Wanitsuksombut, W.; Thongmitr, W.

    1999-01-01

    This study shows the feasibility of applying a 2.5 mM aqueous oxalic acid solution, with a spectrophotometric analysis method, for absorbed dose measurement from 1 to 10 kGy in a large-scale sample solution. An optimum wavelength of 220 nm was selected. The stability of the dosimeter response over 25 days was better than 1% for unirradiated and ±2% for irradiated solutions. The reproducibility within the same batch was within 1%. The variation of the dosimeter response between batches was also studied. (author)
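
    In practice such a dosimeter is used through a linear calibration of absorbance change against dose; a minimal sketch (Python, with hypothetical readings assumed for illustration, not the study's data):

        import numpy as np

        dose_kGy = np.array([1, 2, 4, 6, 8, 10])
        delta_A = np.array([0.052, 0.101, 0.198, 0.305, 0.397, 0.501])  # at 220 nm

        slope, intercept = np.polyfit(dose_kGy, delta_A, 1)   # assumed linear response
        unknown = (0.251 - intercept) / slope                 # dose read back from dA
        print(round(unknown, 2), "kGy")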

  11. A Note on the Large Sample Properties of Estimators Based on Generalized Linear Models for Correlated Pseudo-observations

    DEFF Research Database (Denmark)

    Jacobsen, Martin; Martinussen, Torben

    2016-01-01

    Pseudo-values have proven very useful in censored data analysis in complex settings such as multi-state models. They were originally suggested by Andersen et al., Biometrika, 90, 2003, 335, who also proposed estimating standard errors using classical generalized estimating equation results. These results were studied more formally in Graw et al., Lifetime Data Anal., 15, 2009, 241, which derived some key results based on a second-order von Mises expansion. However, results concerning the large sample properties of estimates based on regression models for pseudo-values still seem unclear. In this paper, we study these large sample properties in the simple setting of survival probabilities and show that the estimating function can be written as a U-statistic of second order, giving rise to an additional term that does not vanish asymptotically. We further show that previously advocated standard error...

  12. Evaluation of bacterial motility from non-Gaussianity of finite-sample trajectories using the large deviation principle

    International Nuclear Information System (INIS)

    Hanasaki, Itsuo; Kawano, Satoyuki

    2013-01-01

    Motility of bacteria is usually recognized in the trajectory data and compared with Brownian motion, but the diffusion coefficient is insufficient to evaluate it. In this paper, we propose a method based on the large deviation principle. We show that it can be used to evaluate the non-Gaussian characteristics of model Escherichia coli motions and to distinguish combinations of the mean running duration and running speed that lead to the same diffusion coefficient. Our proposed method does not require chemical stimuli to induce the chemotaxis in a specific direction, and it is applicable to various types of self-propelling motions for which no a priori information of, for example, threshold parameters for run and tumble or head/tail direction is available. We also address the issue of the finite-sample effect on the large deviation quantities, but we propose to make use of it to characterize the nature of motility. (paper)
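
    The following toy sketch (not the authors' code) illustrates the large-deviation idea: estimate an empirical rate function from finite-sample time-averaged displacements of a run-and-tumble-like walker and compare it with the parabola a Gaussian (Brownian) model would give. The model and its parameters are invented for illustration.

```python
# Toy large-deviation diagnostic: empirical rate function of the time-averaged
# velocity v = x(T)/T, compared (up to an additive normalization constant) with
# the Gaussian reference parabola. Heavy tails in I(v) flag non-Gaussianity.
import numpy as np

rng = np.random.default_rng(1)
n_traj, T, run_bias = 20000, 200, 0.9

steps = np.empty((n_traj, T))
steps[:, 0] = rng.choice([-1.0, 1.0], n_traj)
for k in range(1, T):
    keep = rng.random(n_traj) < run_bias          # continue the current "run"
    steps[:, k] = np.where(keep, steps[:, k - 1], rng.choice([-1.0, 1.0], n_traj))
v = steps.mean(axis=1)                            # time-averaged velocity per trajectory

hist, edges = np.histogram(v, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0
I_emp = -np.log(hist[mask]) / T                   # I(v) ~ -(1/T) log P(v)
I_gauss = (centers[mask] - v.mean()) ** 2 / (2 * T * v.var())  # Gaussian reference
print(np.c_[centers[mask], I_emp, I_gauss][:10])
```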

  13. The perspectives of clinical staff and bereaved informal care-givers on the use of continuous sedation until death for cancer patients: The study protocol of the UNBIASED study

    Science.gov (United States)

    2011-01-01

    Background A significant minority of dying people experience refractory symptoms or extreme distress unresponsive to conventional therapies. In such circumstances, sedation may be used to decrease or remove consciousness until death occurs. This practice is described in a variety of ways, including: 'palliative sedation', 'terminal sedation', 'continuous deep sedation until death', 'proportionate sedation' or 'palliative sedation to unconsciousness'. Surveys show large unexplained variation in incidence of sedation at the end of life across countries and care settings and there are ethical concerns about the use, intentions, risks and significance of the practice in palliative care. There are also questions about how to explain international variation in the use of the practice. This protocol relates to the UNBIASED study (UK Netherlands Belgium International Sedation Study), which comprises three linked studies with separate funding sources in the UK, Belgium and the Netherlands. The aims of the study are to explore decision-making surrounding the application of continuous sedation until death in contemporary clinical practice, and to understand the experiences of clinical staff and decedents' informal care-givers of the use of continuous sedation until death and their perceptions of its contribution to the dying process. The UNBIASED study is part of the European Association for Palliative Care Research Network. Methods/Design To realize the study aims, a two-phase study has been designed. The study settings include: the domestic home, hospital and expert palliative care sites. Phase 1 consists of: a) focus groups with health care staff and bereaved informal care-givers; and b) a preliminary case notes review to study the range of sedation therapy provided at the end of life to cancer patients who died within a 12 week period. Phase 2 employs qualitative methods to develop 30 patient-centred case studies in each country. These involve interviews with staff and

  14. Transitions in pregnancy planning in women recruited for a large prospective cohort study.

    Science.gov (United States)

    Luderer, U; Li, T; Fine, J P; Hamman, R F; Stanford, J B; Baker, D

    2017-06-01

    large numbers of women are required to recruit an unbiased sample of preconception women. These findings will be useful to investigators designing prospective studies of fecundability, pregnancy outcomes and children's health. National Institutes of Health (contracts N01-HD53414, N01-HD63416, N01-HD53410, N01-HD53415, N01-HD53396, N01-HD53413 and N01-HD-53411; grant R21 ES016846) and by the University of California Irvine Center for Occupational and Environmental Health. No competing interests. None. © The Author 2017. Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  15. Implicit and explicit anti-fat bias among a large sample of medical doctors by BMI, race/ethnicity and gender.

    Directory of Open Access Journals (Sweden)

    Janice A Sabin

    Full Text Available Overweight patients report weight discrimination in health care settings and subsequent avoidance of routine preventive health care. The purpose of this study was to examine implicit and explicit attitudes about weight among a large group of medical doctors (MDs) to determine the pervasiveness of negative attitudes about weight among MDs. Test-takers voluntarily accessed a public Web site, known as Project Implicit®, and opted to complete the Weight Implicit Association Test (IAT; N = 359,261). A sub-sample identified their highest level of education as MD (N = 2,284). Among the MDs, 55% were female, 78% reported their race as white, and 62% had a normal range BMI. This large sample of test-takers showed strong implicit anti-fat bias (Cohen's d = 1.0). MDs, on average, also showed strong implicit anti-fat bias (Cohen's d = 0.93). All test-takers and the MD sub-sample reported a strong preference for thin people rather than fat people, i.e., a strong explicit anti-fat bias. We conclude that strong implicit and explicit anti-fat bias is as pervasive among MDs as it is among the general public. An important area for future research is to investigate the association between providers' implicit and explicit attitudes about weight, patient reports of weight discrimination in health care, and quality of care delivered to overweight patients.

  16. Lack of association between digit ratio (2D:4D) and assertiveness: replication in a large sample.

    Science.gov (United States)

    Voracek, Martin

    2009-12-01

    Findings regarding within-sex associations of digit ratio (2D:4D), a putative pointer to long-lasting effects of prenatal androgen action, and sexually differentiated personality traits have generally been inconsistent or unreplicable, suggesting that effects in this domain, if any, are likely small. In contrast to evidence from Wilson's important 1983 study, a forerunner of modern 2D:4D research, two recent studies in 2005 and 2008 by Freeman, et al. and Hampson, et al. showed assertiveness, a presumably male-typed personality trait, was not associated with 2D:4D; however, these studies were clearly statistically underpowered. Hence this study examined this question anew, based on a large sample of 491 men and 627 women. Assertiveness was only modestly sexually differentiated, favoring men, and a positive correlate of age and education and a negative correlate of weight and Body Mass Index among women, but not men. Replicating the two prior studies, 2D:4D was throughout unrelated to assertiveness scores. This null finding was preserved with controls for correlates of assertiveness, also in nonparametric analysis and with tests for curvilinear relations. Discussed are implications of this specific null finding, now replicated in a large sample, for studies of 2D:4D and personality in general and novel research approaches to proceed in this field.

  17. Strategies and equipment for sampling suspended sediment and associated toxic chemicals in large rivers - with emphasis on the Mississippi River

    Science.gov (United States)

    Meade, R.H.; Stevens, H.H.

    1990-01-01

    A Lagrangian strategy for sampling large rivers, which was developed and tested in the Orinoco and Amazon Rivers of South America during the early 1980s, is now being applied to the study of toxic chemicals in the Mississippi River. A series of 15-20 cross-sections of the Mississippi mainstem and its principal tributaries is sampled by boat in downstream sequence, beginning upriver of St. Louis and concluding downriver of New Orleans 3 weeks later. The timing of the downstream sampling sequence approximates the travel time of the river water. Samples at each cross-section are discharge-weighted to provide concentrations of dissolved and suspended constituents that are converted to fluxes. Water-sediment mixtures are collected from 10-40 equally spaced points across the river width by sequential depth integration at a uniform vertical transit rate. Essential equipment includes (i) a hydraulic winch, for sensitive control of vertical transit rates, and (ii) a collapsible-bag sampler, which allows integrated samples to be collected at all depths in the river. A section is usually sampled in 4-8 h, for a total sample recovery of 100-120 l. Sampled concentrations of suspended silt and clay are reproducible within 3%.

  18. Unbiased quantitative testing of conventional orthodontic beliefs.

    Science.gov (United States)

    Baumrind, S

    1998-03-01

    This study used a preexisting database to test in hypothesis form the appropriateness of some common orthodontic beliefs concerning upper first molar displacement and changes in facial morphology associated with conventional full bonded/banded treatment in growing subjects. In an initial pass, the author used data from a stratified random sample of 48 subjects drawn retrospectively from the practice of a single, experienced orthodontist. This sample consisted of 4 subgroups of 12 subjects each: Class I nonextraction, Class I extraction, Class II nonextraction, and Class II extraction. The findings indicate that, relative to the facial profile, chin point did not, on average, displace anteriorly during treatment, either overall or in any subgroup. Relative to the facial profile, Point A became significantly less prominent during treatment, both overall and in each subgroup. The best estimate of the mean displacement of the upper molar cusp relative to superimposition on Anterior Cranial Base was in the mesial direction in each of the four subgroups. In only one extraction subject out of 24 did the cusp appear to be displaced distally. Mesial molar cusp displacement was significantly greater in the Class II extraction subgroup than in the Class II nonextraction subgroup. Relative to superimposition on anatomical "best fit" of maxillary structures, the findings for molar cusp displacement were similar, but even more dramatic. Mean mesial migration was highly significant in both the Class II nonextraction and Class II extraction subgroups. In no subject in the entire sample was distal displacement noted relative to this superimposition. Mean increase in anterior Total Face Height was significantly greater in the Class II extraction subgroup than in the Class II nonextraction subgroup. (This finding was contrary to the author's original expectation.) The generalizability of the findings from the initial pass to other treated growing subjects was then assessed by

  19. Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use

    Science.gov (United States)

    Arthur, Steve M.; Schwartz, Charles C.

    1999-01-01

    We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly-selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km² (x̄ = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km² (x̄ = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km² (x̄ = 224) for radiotracking data and 16-130 km² (x̄ = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate and precise. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. Investigators that use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve accuracy and precision of home range estimates.
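
    A minimal sketch of the MCP calculation and its sample-size dependence, assuming synthetic planar coordinates rather than the study's GPS fixes:

```python
# Minimum convex polygon (MCP) home-range area versus sample size. Coordinates
# are assumed to be in km in a projected (planar) system; data are synthetic.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(42)
locations = rng.normal(0.0, 5.0, size=(400, 2))   # synthetic GPS fixes (km)

def mcp_area(points):
    """Area of the 2-D convex hull; ConvexHull.volume is the area in 2-D."""
    return ConvexHull(points).volume

for n in (15, 60, 120, 400):                      # radiotracking-like vs GPS-like n
    subset = locations[rng.choice(len(locations), n, replace=False)]
    print(f"n={n:4d}  MCP area = {mcp_area(subset):7.1f} km^2")
```

    As in the study, the hull area grows toward an asymptote as locations are added, which is why small radiotracking samples tend to underestimate MCP home ranges.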

  20. Prevalence and correlates of problematic smartphone use in a large random sample of Chinese undergraduates.

    Science.gov (United States)

    Long, Jiang; Liu, Tie-Qiao; Liao, Yan-Hui; Qi, Chang; He, Hao-Yu; Chen, Shu-Bao; Billieux, Joël

    2016-11-17

    Smartphones are becoming a daily necessity for most undergraduates in Mainland China. Because the present scenario of problematic smartphone use (PSU) is largely unexplored, in the current study we aimed to estimate the prevalence of PSU and to screen suitable predictors for PSU among Chinese undergraduates in the framework of the stress-coping theory. A sample of 1062 undergraduate smartphone users was recruited by means of the stratified cluster random sampling strategy between April and May 2015. The Problematic Cellular Phone Use Questionnaire was used to identify PSU. We evaluated five candidate risk factors for PSU by using logistic regression analysis while controlling for demographic characteristics and specific features of smartphone use. The prevalence of PSU among Chinese undergraduates was estimated to be 21.3%. The risk factors for PSU were majoring in the humanities, high monthly income from the family (≥1500 RMB), serious emotional symptoms, high perceived stress, and perfectionism-related factors (high doubts about actions, high parental expectations). PSU among undergraduates appears to be ubiquitous and thus constitutes a public health issue in Mainland China. Although further longitudinal studies are required to test whether PSU is a transient phenomenon or a chronic and progressive condition, our study successfully identified socio-demographic and psychological risk factors for PSU. These results, obtained from a random and thus representative sample of undergraduates, opens up new avenues in terms of prevention and regulation policies.

  1. An examination of smoking behavior and opinions about smoke-free environments in a large sample of sexual and gender minority community members.

    Science.gov (United States)

    McElroy, Jane A; Everett, Kevin D; Zaniletti, Isabella

    2011-06-01

    The purpose of this study is to more completely quantify smoking rate and support for smoke-free policies in private and public environments in a large sample of self-identified sexual and gender minority (SGM) populations. A targeted sampling strategy recruited participants from 4 Missouri Pride Festivals and from online surveys targeted to SGM populations during the summer of 2008. A 24-item survey gathered information on gender and sexual orientation, smoking status, and behaviors and preferences related to smoke-free policies. The project recruited participants through Pride Festivals (n = 2,676) and Web-based surveys (n = 231) representing numerous sexual and gender orientations and the racial composite of the state of Missouri. Differences were found between the Pride Festivals sample and the Web-based sample, including smoking rates: current smoking in the Web-based sample (22%) was significantly lower than in the Pride Festivals sample (37%). SGM participants were significantly more likely to be current smokers than the study's heterosexual comparison group (n = 436; p = .005). Statistically fewer SGM racial minorities (33%) were current smokers compared with SGM Whites (37%; p = .04). Support and preferences for public and private smoke-free environments were generally low in the SGM population. The strategic targeting method achieved a large and diverse sample. The findings of high rates of smoking coupled with generally low levels of support for smoke-free public policies in the SGM community highlight the need for additional research to inform programmatic attempts to reduce tobacco use and increase support for smoke-free environments.

  2. Characteristic Performance Evaluation of a new SAGe Well Detector for Small and Large Sample Geometries

    International Nuclear Information System (INIS)

    Adekola, A.S.; Colaresi, J.; Douwen, J.; Jaederstroem, H.; Mueller, W.F.; Yocum, K.M.; Carmichael, K.

    2015-01-01

    concentrations compared to Traditional Well detectors. The SAGe Well detectors are compatible with Marinelli beakers and compete very well with semi-planar and coaxial detectors for large samples in many applications. (authors)

  3. Characteristic Performance Evaluation of a new SAGe Well Detector for Small and Large Sample Geometries

    Energy Technology Data Exchange (ETDEWEB)

    Adekola, A.S.; Colaresi, J.; Douwen, J.; Jaederstroem, H.; Mueller, W.F.; Yocum, K.M.; Carmichael, K. [Canberra Industries Inc., 800 Research Parkway, Meriden, CT 06450 (United States)

    2015-07-01

    concentrations compared to Traditional Well detectors. The SAGe Well detectors are compatible with Marinelli beakers and compete very well with semi-planar and coaxial detectors for large samples in many applications. (authors)

  4. CASP10-BCL::Fold efficiently samples topologies of large proteins.

    Science.gov (United States)

    Heinze, Sten; Putnam, Daniel K; Fischer, Axel W; Kohlmann, Tim; Weiner, Brian E; Meiler, Jens

    2015-03-01

    During CASP10 in summer 2012, we tested BCL::Fold for prediction of free modeling (FM) and template-based modeling (TBM) targets. BCL::Fold assembles the tertiary structure of a protein from predicted secondary structure elements (SSEs) omitting more flexible loop regions early on. This approach enables the sampling of conformational space for larger proteins with more complex topologies. In preparation of CASP11, we analyzed the quality of CASP10 models throughout the prediction pipeline to understand BCL::Fold's ability to sample the native topology, identify native-like models by scoring and/or clustering approaches, and our ability to add loop regions and side chains to initial SSE-only models. The standout observation is that BCL::Fold sampled topologies with a GDT_TS score > 33% for 12 of 18 and with a topology score > 0.8 for 11 of 18 test cases de novo. Despite the sampling success of BCL::Fold, significant challenges still exist in clustering and loop generation stages of the pipeline. The clustering approach employed for model selection often failed to identify the most native-like assembly of SSEs for further refinement and submission. It was also observed that for some β-strand proteins model refinement failed as β-strands were not properly aligned to form hydrogen bonds removing otherwise accurate models from the pool. Further, BCL::Fold samples frequently non-natural topologies that require loop regions to pass through the center of the protein. © 2015 Wiley Periodicals, Inc.

  5. Early-branching Gut Fungi Possess A Large, And Comprehensive Array Of Biomass-Degrading Enzymes

    Energy Technology Data Exchange (ETDEWEB)

    Solomon, Kevin V.; Haitjema, Charles; Henske, John K.; Gilmore, Sean P.; Borges-Rivera, Diego; Lipzen, Anna; Brewer, Heather M.; Purvine, Samuel O.; Wright, Aaron T.; Theodorou, Michael K.; Grigoriev, Igor V.; Regev, Aviv; Thompson, Dawn; O'Malley, Michelle A.

    2016-03-11

    The fungal kingdom is the source of almost all industrial enzymes in use for lignocellulose bioprocessing. Its more primitive members, however, remain relatively unexploited. We developed a systems-level approach that integrates RNA-Seq, proteomics, phenotype and biochemical studies of relatively unexplored early-branching free-living fungi. Anaerobic gut fungi isolated from herbivores produce a large array of biomass-degrading enzymes that synergistically degrade crude, unpretreated plant biomass, and are competitive with optimized commercial preparations from Aspergillus and Trichoderma. Compared to these model platforms, gut fungal enzymes are unbiased in substrate preference due to a wealth of xylan-degrading enzymes. These enzymes are universally catabolite repressed, and are further regulated by a rich landscape of noncoding regulatory RNAs. Furthermore, we identified several promising sequence divergent enzyme candidates for lignocellulosic bioprocessing.

  6. Does Decision Quality (Always) Increase with the Size of Information Samples? Some Vicissitudes in Applying the Law of Large Numbers

    Science.gov (United States)

    Fiedler, Klaus; Kareev, Yaakov

    2006-01-01

    Adaptive decision making requires that contingencies between decision options and their relative assets be assessed accurately and quickly. The present research addresses the challenging notion that contingencies may be more visible from small than from large samples of observations. An algorithmic account for such a seemingly paradoxical effect…

  7. Toward Rapid Unattended X-ray Tomography of Large Planar Samples at 50-nm Resolution

    International Nuclear Information System (INIS)

    Rudati, J.; Tkachuk, A.; Gelb, J.; Hsu, G.; Feng, Y.; Pastrick, R.; Lyon, A.; Trapp, D.; Beetz, T.; Chen, S.; Hornberger, B.; Seshadri, S.; Kamath, S.; Zeng, X.; Feser, M.; Yun, W.; Pianetta, P.; Andrews, J.; Brennan, S.; Chu, Y. S.

    2009-01-01

    X-ray tomography at sub-50 nm resolution of small areas (∼15 μm × 15 μm) is routinely performed with both laboratory and synchrotron sources. Optics and detectors for laboratory systems have been optimized to approach the theoretical efficiency limit. Limited by the availability of relatively low-brightness laboratory X-ray sources, exposure times for 3-D data sets at 50 nm resolution are still many hours, up to a full day. However, for bright synchrotron sources, the use of these optimized imaging systems results in extremely short exposure times, approaching live-camera speeds at the Advanced Photon Source at Argonne National Laboratory near Chicago in the US. These speeds make it possible to acquire a full tomographic dataset at 50 nm resolution in less than a minute of true X-ray exposure time. However, limits in the control and positioning system lead to large overhead that currently results in typical exposure times of ∼15 min. We present our work on the reduction and elimination of system overhead and toward complete automation of the data acquisition process. The enhancements underway are primarily to boost the scanning rate, sample positioning speed, and illumination homogeneity to performance levels necessary for unattended tomography of large areas (many mm² in size). We present first results on this ongoing project.

  8. Sampling in ecology and evolution - bridging the gap between theory and practice

    Science.gov (United States)

    Albert, C.H.; Yoccoz, N.G.; Edwards, T.C.; Graham, C.H.; Zimmermann, N.E.; Thuiller, W.

    2010-01-01

    Sampling is a key issue for answering most ecological and evolutionary questions. The importance of developing a rigorous sampling design tailored to specific questions has already been discussed in the ecological and sampling literature and has provided useful tools and recommendations to sample and analyse ecological data. However, sampling issues are often difficult to overcome in ecological studies due to apparent inconsistencies between theory and practice, often leading to the implementation of simplified sampling designs that suffer from unknown biases. Moreover, we believe that classical sampling principles which are based on estimation of means and variances are insufficient to fully address many ecological questions that rely on estimating relationships between a response and a set of predictor variables over time and space. Our objective is thus to highlight the importance of selecting an appropriate sampling space and an appropriate sampling design. We also emphasize the importance of using prior knowledge of the study system to estimate models or complex parameters and thus better understand ecological patterns and processes generating these patterns. Using a semi-virtual simulation study as an illustration, we reveal how the selection of the space (e.g. geographic, climatic), in which the sampling is designed, influences the patterns that can be ultimately detected. We also demonstrate the inefficiency of common sampling designs to reveal response curves between ecological variables and climatic gradients. Further, we show that response-surface methodology, which has rarely been used in ecology, is much more efficient than more traditional methods. Finally, we discuss the use of prior knowledge, simulation studies and model-based designs in defining appropriate sampling designs. We conclude with a call for the development of methods to unbiasedly estimate nonlinear ecologically relevant parameters, in order to make inferences while fulfilling requirements of

  9. A simulative comparison of respondent driven sampling with incentivized snowball sampling – the “strudel effect”

    Science.gov (United States)

    Gyarmathy, V. Anna; Johnston, Lisa G.; Caplinskiene, Irma; Caplinskas, Saulius; Latkin, Carl A.

    2014-01-01

    Background Respondent driven sampling (RDS) and Incentivized Snowball Sampling (ISS) are two sampling methods that are commonly used to reach people who inject drugs (PWID). Methods We generated a set of simulated RDS samples on an actual sociometric ISS sample of PWID in Vilnius, Lithuania (“original sample”) to assess if the simulated RDS estimates were statistically significantly different from the original ISS sample prevalences for HIV (9.8%), Hepatitis A (43.6%), Hepatitis B (Anti-HBc 43.9% and HBsAg 3.4%), Hepatitis C (87.5%), syphilis (6.8%) and Chlamydia (8.8%) infections and for selected behavioral risk characteristics. Results The original sample consisted of a large component of 249 people (83% of the sample) and 13 smaller components with 1 to 12 individuals. Generally, as long as all seeds were recruited from the large component of the original sample, the simulation samples simply recreated the large component. There were no significant differences between the large component and the entire original sample for the characteristics of interest. Altogether 99.2% of 360 simulation sample point estimates were within the confidence interval of the original prevalence values for the characteristics of interest. Conclusions When population characteristics are reflected in large network components that dominate the population, RDS and ISS may produce samples that have statistically non-different prevalence values, even though some isolated network components may be under-sampled and/or statistically significantly different from the main groups. This so-called “strudel effect” is discussed in the paper. PMID:24360650
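
    The kind of simulation described above can be sketched in a few lines: recruit an RDS-style sample by letting each participant pass a fixed number of coupons to network neighbours, then compare the sample prevalence of a trait with the network-wide value. The network, trait prevalence and recruitment rules below are toy assumptions, not the study's data.

```python
# Toy RDS-style recruitment on a network (networkx assumed available):
# seeds recruit up to `coupons` unsampled neighbours per wave until the target
# size is reached, then sample prevalence is compared with the true value.
import random
import networkx as nx

random.seed(7)
G = nx.barabasi_albert_graph(1000, 3)                 # stand-in contact network
trait = {v: random.random() < 0.10 for v in G}        # 10% carry the trait

def rds_sample(G, n_seeds=5, coupons=3, target=300):
    seeds = random.sample(list(G), n_seeds)
    sampled, frontier = set(seeds), list(seeds)
    while frontier and len(sampled) < target:
        nxt = []
        for v in frontier:
            recruits = [u for u in G[v] if u not in sampled]
            for u in random.sample(recruits, min(coupons, len(recruits))):
                sampled.add(u)
                nxt.append(u)
        frontier = nxt
    return sampled

s = rds_sample(G)
print(f"true prevalence {sum(trait.values())/len(G):.3f}  "
      f"sample prevalence {sum(trait[v] for v in s)/len(s):.3f}")
```

    On a single dominant component such as the one described above, chains started inside that component simply re-create it, which is the intuition behind the "strudel effect".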

  10. FOREST Unbiased Galactic plane Imaging survey with the Nobeyama 45 m telescope (FUGIN): Molecular clouds toward W 33; possible evidence for a cloud-cloud collision triggering O star formation

    Science.gov (United States)

    Kohno, Mikito; Torii, Kazufumi; Tachihara, Kengo; Umemoto, Tomofumi; Minamidani, Tetsuhiro; Nishimura, Atsushi; Fujita, Shinji; Matsuo, Mitsuhiro; Yamagishi, Mitsuyoshi; Tsuda, Yuya; Kuriki, Mika; Kuno, Nario; Ohama, Akio; Hattori, Yusuke; Sano, Hidetoshi; Yamamoto, Hiroaki; Fukui, Yasuo

    2018-05-01

    We observed molecular clouds in the W 33 high-mass star-forming region associated with compact and extended H II regions using the NANTEN2 telescope as well as the Nobeyama 45 m telescope in the J = 1-0 transitions of 12CO, 13CO, and C18O as part of the FOREST Unbiased Galactic plane Imaging survey with the Nobeyama 45 m telescope (FUGIN) legacy survey. We detected three velocity components at 35 km s-1, 45 km s-1, and 58 km s-1. The 35 km s-1 and 58 km s-1 clouds are likely to be physically associated with W 33 because of the enhanced 12CO J = 3-2 to J = 1-0 intensity ratio, R(3-2/1-0) > 1.0, due to the ultraviolet irradiation by OB stars, and morphological correspondence between the distributions of molecular gas and the infrared and radio continuum emissions excited by high-mass stars. The two clouds show complementary distributions around W 33. The velocity separation is too large to be gravitationally bound, and yet not explained by expanding motion by stellar feedback. Therefore, we discuss whether a cloud-cloud collision scenario likely explains the high-mass star formation in W 33.

  11. On Matrix Sampling and Imputation of Context Questionnaires with Implications for the Generation of Plausible Values in Large-Scale Assessments

    Science.gov (United States)

    Kaplan, David; Su, Dan

    2016-01-01

    This article presents findings on the consequences of matrix sampling of context questionnaires for the generation of plausible values in large-scale assessments. Three studies are conducted. Study 1 uses data from PISA 2012 to examine several different forms of missing data imputation within the chained equations framework: predictive mean…

  12. An examination of the RCMAS-2 scores across gender, ethnic background, and age in a large Asian school sample.

    Science.gov (United States)

    Ang, Rebecca P; Lowe, Patricia A; Yusof, Noradlin

    2011-12-01

    The present study investigated the factor structure, reliability, convergent and discriminant validity, and U.S. norms of the Revised Children's Manifest Anxiety Scale, Second Edition (RCMAS-2; C. R. Reynolds & B. O. Richmond, 2008a) scores in a Singapore sample of 1,618 school-age children and adolescents. Although there were small statistically significant differences in the average RCMAS-2 T scores found across various demographic groupings, on the whole, the U.S. norms appear adequate for use in the Asian Singapore sample. Results from item bias analyses suggested that biased items detected had small effects and were counterbalanced across gender and ethnicity, and hence, their relative impact on test score variation appears to be minimal. Results of factor analyses on the RCMAS-2 scores supported the presence of a large general anxiety factor, the Total Anxiety factor, and the 5-factor structure found in U.S. samples was replicated. Both the large general anxiety factor and the 5-factor solution were invariant across gender and ethnic background. Internal consistency estimates ranged from adequate to good, and 2-week test-retest reliability estimates were comparable to previous studies. Evidence providing support for convergent and discriminant validity of the RCMAS-2 scores was also found. Taken together, findings provide additional cross-cultural evidence of the appropriateness and usefulness of the RCMAS-2 as a measure of anxiety in Asian Singaporean school-age children and adolescents.

  13. Investigating sex differences in psychological predictors of snack intake among a large representative sample.

    Science.gov (United States)

    Adriaanse, Marieke A; Evers, Catharine; Verhoeven, Aukje A C; de Ridder, Denise T D

    2016-03-01

    It is often assumed that there are substantial sex differences in eating behaviour (e.g. women are more likely to be dieters or emotional eaters than men). The present study investigates this assumption in a large representative community sample while incorporating a comprehensive set of psychological eating-related variables. A community sample was employed to: (i) determine sex differences in (un)healthy snack consumption and psychological eating-related variables (e.g. emotional eating, intention to eat healthily); (ii) examine whether sex predicts energy intake from (un)healthy snacks over and above psychological variables; and (iii) investigate the relationship between psychological variables and snack intake for men and women separately. Snack consumption was assessed with a 7 d snack diary; the psychological eating-related variables with questionnaires. Participants were members of an Internet survey panel that is based on a true probability sample of households in the Netherlands. Men and women (n = 1292; 45% male), with a mean age of 51·23 (sd 16·78) years and a mean BMI of 25·62 (sd 4·75) kg/m². Results revealed that women consumed more healthy and less unhealthy snacks than men and they scored higher than men on emotional and restrained eating. Women also more often reported appearance and health-related concerns about their eating behaviour, but men and women did not differ with regard to external eating or their intentions to eat more healthily. The relationships between psychological eating-related variables and snack intake were similar for men and women, indicating that snack intake is predicted by the same variables for men and women. It is concluded that some small sex differences in psychological eating-related variables exist, but based on the present data there is no need for interventions aimed at promoting healthy eating to target different predictors according to sex.

  14. Large contribution of human papillomavirus in vaginal neoplastic lesions: a worldwide study in 597 samples.

    Science.gov (United States)

    Alemany, L; Saunier, M; Tinoco, L; Quirós, B; Alvarado-Cabrero, I; Alejo, M; Joura, E A; Maldonado, P; Klaustermeier, J; Salmerón, J; Bergeron, C; Petry, K U; Guimerà, N; Clavero, O; Murillo, R; Clavel, C; Wain, V; Geraets, D T; Jach, R; Cross, P; Carrilho, C; Molina, C; Shin, H R; Mandys, V; Nowakowski, A M; Vidal, A; Lombardi, L; Kitchener, H; Sica, A R; Magaña-León, C; Pawlita, M; Quint, W; Bravo, I G; Muñoz, N; de Sanjosé, S; Bosch, F X

    2014-11-01

    This work describes the human papillomavirus (HPV) prevalence and the HPV type distribution in a large series of vaginal intraepithelial neoplasia (VAIN) grades 2/3 and vaginal cancer worldwide. We analysed 189 VAIN 2/3 and 408 invasive vaginal cancer cases collected from 31 countries from 1986 to 2011. After histopathological evaluation of sectioned formalin-fixed paraffin-embedded samples, HPV DNA detection and typing was performed using the SPF-10/DNA enzyme immunoassay (DEIA)/LiPA25 system (version 1). A subset of 146 vaginal cancers was tested for p16(INK4a) expression, a cellular surrogate marker for HPV transformation. Prevalence ratios were estimated using multivariate Poisson regression with robust variance. HPV DNA was detected in 74% (95% confidence interval (CI): 70-78%) of invasive cancers and in 96% (95% CI: 92-98%) of VAIN 2/3. Among cancers, the highest detection rates were observed in the warty-basaloid subtype of squamous cell carcinomas, and in younger ages. Concerning the type-specific distribution, HPV16 was the most frequently detected type in both precancerous and cancerous lesions (59%). p16(INK4a) overexpression was found in 87% of HPV DNA positive vaginal cancer cases. HPV was identified in a large proportion of invasive vaginal cancers and in almost all VAIN 2/3. HPV16 was the most common type detected. A large reduction in the burden of vaginal neoplastic lesions is expected among vaccinated cohorts. Copyright © 2014 Elsevier Ltd. All rights reserved.
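
    As a generic illustration of the analysis named above (prevalence ratios from Poisson regression with robust variance), the sketch below uses statsmodels on invented data; exponentiated coefficients are read as prevalence ratios.

```python
# Modified Poisson regression for a binary outcome: GLM with Poisson family and
# a robust (sandwich, HC0) covariance; exp(coef) estimates prevalence ratios.
# Covariates and effect sizes here are hypothetical, not the study's data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 500
warty = rng.integers(0, 2, n)                 # 1 = warty-basaloid subtype
young = rng.integers(0, 2, n)                 # 1 = younger age group
p = 1.0 / (1.0 + np.exp(-(-0.5 + 0.8 * warty + 0.4 * young)))
hpv_pos = rng.binomial(1, p)                  # binary outcome: HPV DNA detected

X = sm.add_constant(np.c_[warty, young])
fit = sm.GLM(hpv_pos, X, family=sm.families.Poisson()).fit(cov_type="HC0")
print(np.exp(fit.params))                     # prevalence ratios
print(fit.bse)                                # robust standard errors
```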

  15. 99Mo Yield Using Large Sample Mass of MoO3 for Sustainable Production of 99Mo

    Science.gov (United States)

    Tsukada, Kazuaki; Nagai, Yasuki; Hashimoto, Kazuyuki; Kawabata, Masako; Minato, Futoshi; Saeki, Hideya; Motoishi, Shoji; Itoh, Masatoshi

    2018-04-01

    A neutron source from the C(d,n) reaction has the unique capability of producing medical radioisotopes such as 99Mo with a minimum level of radioactive waste. Precise data on the neutron flux are crucial to determine the best conditions for obtaining the maximum yield of 99Mo. The measured yield of 99Mo produced by the 100Mo(n,2n)99Mo reaction from a large sample mass of MoO3 agrees well with the numerical result estimated with the latest neutron data, which are a factor of two larger than the other existing data. This result establishes an important finding for the domestic production of 99Mo: approximately 50% of the demand for 99Mo in Japan could be met using a 100 g 100MoO3 sample mass with a single accelerator of 40 MeV, 2 mA deuteron beams.
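
    A back-of-envelope sketch of the activation arithmetic behind such yield estimates, with placeholder flux and cross-section values rather than the paper's data:

```python
# Saturation activation for 100Mo(n,2n)99Mo: A(t) = N * sigma * phi * (1 - e^{-lambda t}).
# sigma and phi below are placeholders; only the 99Mo half-life is a known constant.
import numpy as np

N_A = 6.022e23
half_life_h = 65.94                            # 99Mo half-life (hours)
lam = np.log(2) / (half_life_h * 3600.0)       # decay constant (1/s)

mass_g = 100.0                                 # 100 g of enriched 100MoO3 target
M_MoO3 = 100.0 + 3 * 16.0                      # approx. molar mass of 100MoO3 (g/mol)
n_Mo100 = mass_g / M_MoO3 * N_A                # number of 100Mo atoms

sigma = 1.0e-24                                # placeholder effective cross-section (cm^2)
phi = 1.0e12                                   # placeholder neutron flux (n/cm^2/s)

def activity_Bq(t_irr_s):
    """Activity at end of irradiation, from the saturation formula above."""
    return n_Mo100 * sigma * phi * (1.0 - np.exp(-lam * t_irr_s))

for hours in (6, 24, 72):
    print(f"{hours:3d} h irradiation -> {activity_Bq(hours * 3600) / 3.7e10:.2f} Ci")
```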

  16. Nature vs. Nurture: The influence of OB star environments on proto-planetary disk evolution

    Science.gov (United States)

    Bouwman, Jeroen

    2006-09-01

    We propose a combined IRAC/IRS study of a large, well-defined and unbiased X-ray selected sample of pre-main-sequence stars in three OB associations: Pismis 24 in NGC 6357, NGC 2244 in the Rosette Nebula, and IC 1795 in the W3 complex. The samples are based on recent Chandra X-ray Observatory studies which reliably identify hundreds of cluster members and were carefully chosen to avoid high infrared nebular background. A new Chandra exposure of IC 1795 is requested, and an optical followup to characterise the host stars is planned.

  17. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

    Directory of Open Access Journals (Sweden)

    Simon Boitard

    2016-03-01

    Full Text Available Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
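
    A bare-bones sketch of the ABC rejection idea underlying such methods, using only a folded allele-frequency spectrum as the summary statistic; the "simulator" is a toy stand-in rather than a coalescent model, and everything here is illustrative, not PopSizeABC itself.

```python
# ABC rejection sampling: draw parameters from a prior, simulate data, summarize
# with a folded allele-frequency spectrum (AFS), keep the draws whose summaries
# are closest to the observed ones. Real inference adds LD statistics and a
# regression adjustment; the simulator below is purely illustrative.
import numpy as np

rng = np.random.default_rng(3)
n_hap, n_snp = 30, 2000

def folded_afs(counts, n):
    """Folded AFS (proportions) from per-SNP derived-allele counts."""
    folded = np.minimum(counts, n - counts)
    return np.bincount(folded, minlength=n // 2 + 1)[1:] / len(counts)

def simulate_counts(theta):
    # Toy stand-in for a coalescent simulator: allele-count probabilities ~ 1/i,
    # reshaped by a single size-like parameter theta (illustrative only).
    i = np.arange(1, n_hap)
    p = (1.0 / i) ** (1.0 / theta)
    return rng.choice(i, size=n_snp, p=p / p.sum())

obs = folded_afs(simulate_counts(1.0), n_hap)          # pretend "observed" data

draws = rng.uniform(0.3, 3.0, 5000)                    # prior on theta
dist = np.array([np.linalg.norm(folded_afs(simulate_counts(t), n_hap) - obs)
                 for t in draws])
accepted = draws[dist <= np.quantile(dist, 0.01)]      # keep the closest 1%
print(f"posterior mean ~ {accepted.mean():.2f} (true value 1.0)")
```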

  18. Astrochemical evolution along star formation: Overview of the IRAM Large Program ASAI

    Science.gov (United States)

    Lefloch, Bertrand; Bachiller, R.; Ceccarelli, C.; Cernicharo, J.; Codella, C.; Fuente, A.; Kahane, C.; López-Sepulcre, A.; Tafalla, M.; Vastel, C.; Caux, E.; González-García, M.; Bianchi, E.; Gómez-Ruiz, A.; Holdship, J.; Mendoza, E.; Ospina-Zamudio, J.; Podio, L.; Quénard, D.; Roueff, E.; Sakai, N.; Viti, S.; Yamamoto, S.; Yoshida, K.; Favre, C.; Monfredini, T.; Quitián-Lara, H. M.; Marcelino, N.; Roberty, H. Boechat; Cabrit, S.

    2018-04-01

    Evidence is mounting that the small bodies of our Solar System, such as comets and asteroids, have at least partially inherited their chemical composition from the first phases of the Solar System formation. It then appears that the molecular complexity of these small bodies is most likely related to the earliest stages of star formation. It is therefore important to characterize and to understand how the chemical evolution changes with solar-type protostellar evolution. We present here the Large Program "Astrochemical Surveys At IRAM" (ASAI). Its goal is to carry out unbiased millimeter line surveys between 80 and 272 GHz of a sample of ten template sources, which fully cover the first stages of the formation process of solar-type stars, from prestellar cores to the late protostellar phase. In this article, we present an overview of the surveys and results obtained from the analysis of the 3 mm band observations. The number of detected main isotopic species barely varies with the evolutionary stage and is found to be very similar to that of massive star-forming regions. The molecular content in O- and C- bearing species allows us to define two chemical classes of envelopes, whose composition is dominated by either a) a rich content in O-rich complex organic molecules, associated with hot corino sources, or b) a rich content in hydrocarbons, typical of Warm Carbon Chain Chemistry sources. Overall, a high chemical richness is found to be present already in the initial phases of solar-type star formation.

  19. Genetic diversity in India and the inference of Eurasian population expansion.

    Science.gov (United States)

    Xing, Jinchuan; Watkins, W Scott; Hu, Ya; Huff, Chad D; Sabo, Aniko; Muzny, Donna M; Bamshad, Michael J; Gibbs, Richard A; Jorde, Lynn B; Yu, Fuli

    2010-01-01

    Genetic studies of populations from the Indian subcontinent are of great interest because of India's large population size, complex demographic history, and unique social structure. Despite recent large-scale efforts in discovering human genetic variation, India's vast reservoir of genetic diversity remains largely unexplored. To analyze an unbiased sample of genetic diversity in India and to investigate human migration history in Eurasia, we resequenced one 100-kb ENCODE region in 92 samples collected from three castes and one tribal group from the state of Andhra Pradesh in south India. Analyses of the four Indian populations, along with eight HapMap populations (692 samples), showed that 30% of all SNPs in the south Indian populations are not seen in HapMap populations. Several Indian populations, such as the Yadava, Mala/Madiga, and Irula, have nucleotide diversity levels as high as those of HapMap African populations. Using unbiased allele-frequency spectra, we investigated the expansion of human populations into Eurasia. The divergence time estimates among the major population groups suggest that Eurasian populations in this study diverged from Africans during the same time frame (approximately 90 to 110 thousand years ago). The divergence among different Eurasian populations occurred more than 40,000 years after their divergence with Africans. Our results show that Indian populations harbor large amounts of genetic variation that have not been surveyed adequately by public SNP discovery efforts. Our data also support a delayed expansion hypothesis in which an ancestral Eurasian founding population remained isolated long after the out-of-Africa diaspora, before expanding throughout Eurasia. © 2010 Xing et al.; licensee BioMed Central Ltd.

  20. Identification of relevant drugable targets in diffuse large B-cell lymphoma using a genome-wide unbiased CD20 guilt-by association approach

    NARCIS (Netherlands)

    de Jong, Mathilde R. W.; Visser, Lydia; Huls, Gerwin; Diepstra, Arjan; van Vugt, Marcel; Ammatuna, Emanuele; van Rijn, Rozemarijn S.; Vellenga, Edo; van den Berg, Anke; Fehrmann, Rudolf S. N.; van Meerten, Tom

    2018-01-01

    Forty percent of patients with diffuse large B-cell lymphoma (DLBCL) show disease resistant to standard chemotherapy (CHOP) in combination with the anti-CD20 monoclonal antibody rituximab (R). Although many new anti-cancer drugs have been developed in recent years, it is unclear which of these drugs

  1. Using Co-Occurrence to Evaluate Belief Coherence in a Large Non Clinical Sample

    Science.gov (United States)

    Pechey, Rachel; Halligan, Peter

    2012-01-01

    Much of the recent neuropsychological literature on false beliefs (delusions) has tended to focus on individual or single beliefs, with few studies actually investigating the relationship or co-occurrence between different types of co-existing beliefs. Quine and Ullian proposed the hypothesis that our beliefs form an interconnected web in which the beliefs that make up that system must somehow “cohere” with one another and avoid cognitive dissonance. As such beliefs are unlikely to be encapsulated (i.e., exist in isolation from other beliefs). The aim of this preliminary study was to empirically evaluate the probability of belief co-occurrence as one indicator of coherence in a large sample of subjects involving three different thematic sets of beliefs (delusion-like, paranormal & religious, and societal/cultural). Results showed that the degree of belief co-endorsement between beliefs within thematic groupings was greater than random occurrence, lending support to Quine and Ullian’s coherentist account. Some associations, however, were relatively weak, providing for well-established examples of cognitive dissonance. PMID:23155383

  2. Using co-occurrence to evaluate belief coherence in a large non clinical sample.

    Directory of Open Access Journals (Sweden)

    Rachel Pechey

    Full Text Available Much of the recent neuropsychological literature on false beliefs (delusions) has tended to focus on individual or single beliefs, with few studies actually investigating the relationship or co-occurrence between different types of co-existing beliefs. Quine and Ullian proposed the hypothesis that our beliefs form an interconnected web in which the beliefs that make up that system must somehow "cohere" with one another and avoid cognitive dissonance. As such beliefs are unlikely to be encapsulated (i.e., exist in isolation from other beliefs). The aim of this preliminary study was to empirically evaluate the probability of belief co-occurrence as one indicator of coherence in a large sample of subjects involving three different thematic sets of beliefs (delusion-like, paranormal & religious, and societal/cultural). Results showed that the degree of belief co-endorsement between beliefs within thematic groupings was greater than random occurrence, lending support to Quine and Ullian's coherentist account. Some associations, however, were relatively weak, providing for well-established examples of cognitive dissonance.

  3. Spatial distribution, sampling precision and survey design optimisation with non-normal variables: The case of anchovy (Engraulis encrasicolus) recruitment in Spanish Mediterranean waters

    Science.gov (United States)

    Tugores, M. Pilar; Iglesias, Magdalena; Oñate, Dolores; Miquel, Joan

    2016-02-01

    In the Mediterranean Sea, the European anchovy (Engraulis encrasicolus) displays a key role in ecological and economical terms. Ensuring stock sustainability requires the provision of crucial information, such as species spatial distribution or unbiased abundance and precision estimates, so that management strategies can be defined (e.g. fishing quotas, temporal closure areas or marine protected areas, MPAs). Furthermore, the estimation of the precision of global abundance at different sampling intensities can be used for survey design optimisation. Geostatistics provide a priori unbiased estimations of the spatial structure, global abundance and precision for autocorrelated data. However, their application to non-Gaussian data introduces difficulties in the analysis in conjunction with low robustness or unbiasedness. The present study applied intrinsic geostatistics in two dimensions in order to (i) analyse the spatial distribution of anchovy in Spanish Western Mediterranean waters during the species' recruitment season, (ii) produce distribution maps, (iii) estimate global abundance and its precision, (iv) analyse the effect of changing the sampling intensity on the precision of global abundance estimates and, (v) evaluate the effects of several methodological options on the robustness of all the analysed parameters. The results suggested that while the spatial structure was usually non-robust to the tested methodological options when working with the original dataset, it became more robust for the transformed datasets (especially for the log-backtransformed dataset). The global abundance was always highly robust and the global precision was highly or moderately robust to most of the methodological options, except for data transformation.
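
    As a rough illustration of the first geostatistical step such studies rely on, the sketch below computes an isotropic empirical semivariogram from irregularly spaced synthetic sample values; a variogram model fitted to it would then feed kriging-based estimates of global abundance and its precision.

```python
# Isotropic empirical semivariogram gamma(h) from scattered sample values.
# Positions and the acoustic-density proxy below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(5)
xy = rng.uniform(0.0, 100.0, size=(300, 2))                 # sample positions (km)
z = np.sin(xy[:, 0] / 15.0) + 0.3 * rng.normal(size=300)    # spatially structured values

def empirical_semivariogram(xy, z, n_bins=12, h_max=50.0):
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2               # semivariance per pair
    iu = np.triu_indices(len(z), k=1)                       # each pair counted once
    d, sq = d[iu], sq[iu]
    edges = np.linspace(0.0, h_max, n_bins + 1)
    idx = np.digitize(d, edges) - 1
    lags = [d[idx == b].mean() for b in range(n_bins) if np.any(idx == b)]
    gamma = [sq[idx == b].mean() for b in range(n_bins) if np.any(idx == b)]
    return np.array(lags), np.array(gamma)

lags, gamma = empirical_semivariogram(xy, z)
print(np.c_[lags, gamma])    # gamma(h) rising to a sill indicates spatial structure
```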

  4. Unbiased metal oxide semiconductor ionising radiation dosemeter

    International Nuclear Information System (INIS)

    Kumurdjian, N.; Sarrabayrouse, G.J.

    1995-01-01

    To assess the application of MOS devices as low dose rate dosemeters, the sensitivity is the major factor, although few studies have been performed on this subject. Sensitivity is studied here, together with the thermal stability and linearity of the response curve. Other advantages are noted, such as a large measurable dose range, low cost, small size, and the possibility of integration. (D.L.)

  5. Accurate Frequency Estimation Based On Three-Parameter Sine-Fitting With Three FFT Samples

    Directory of Open Access Journals (Sweden)

    Liu Xin

    2015-09-01

    Full Text Available This paper presents a simple DFT-based golden section searching algorithm (DGSSA) for single tone frequency estimation. Because of truncation and discreteness in signal samples, the Fast Fourier Transform (FFT) and Discrete Fourier Transform (DFT) inevitably cause spectrum leakage and the fence effect, which lead to low estimation accuracy. This method can improve the estimation accuracy under conditions of a low signal-to-noise ratio (SNR) and a low resolution. The method first uses three FFT samples to determine the frequency searching scope; then, besides the frequency, the estimated values of amplitude, phase and DC component are obtained by minimizing the least squares (LS) fitting error of three-parameter sine fitting. By setting reasonable stop conditions or the number of iterations, accurate frequency estimation can be realized. The accuracy of this method, when applied to observed single-tone sinusoid samples corrupted by white Gaussian noise, is investigated by different methods with respect to the unbiased Cramér-Rao Lower Bound (CRLB). The simulation results show that the root mean square error (RMSE) of the frequency estimation curve is consistent with the tendency of the CRLB as SNR increases, even in the case of a small number of samples. The average RMSE of the frequency estimation is less than 1.5 times the CRLB with SNR = 20 dB and N = 512.
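
    A compact sketch of the described procedure (not the authors' implementation): bracket the frequency using the FFT peak bin and its neighbours, then golden-section search the bracket, scoring each trial frequency by the residual of a three-parameter least-squares sine fit.

```python
# Golden-section frequency search scored by a three-parameter (in-phase,
# quadrature, DC) least-squares sine fit; the FFT peak bin and its two
# neighbours bracket the search interval.
import numpy as np

def sine_fit_residual(f, t, x):
    A = np.c_[np.cos(2*np.pi*f*t), np.sin(2*np.pi*f*t), np.ones_like(t)]
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return np.sum((x - A @ coef) ** 2)

def estimate_frequency(x, fs, iters=60):
    n = len(x)
    t = np.arange(n) / fs
    k = np.argmax(np.abs(np.fft.rfft(x)[1:])) + 1      # peak bin (skip DC)
    lo, hi = (k - 1) * fs / n, (k + 1) * fs / n        # bracket from 3 bins
    g = (np.sqrt(5) - 1) / 2                           # golden ratio
    a, b = lo + (1 - g) * (hi - lo), lo + g * (hi - lo)
    fa, fb = sine_fit_residual(a, t, x), sine_fit_residual(b, t, x)
    for _ in range(iters):
        if fa < fb:                                    # minimum lies in [lo, b]
            hi, b, fb = b, a, fa
            a = lo + (1 - g) * (hi - lo); fa = sine_fit_residual(a, t, x)
        else:                                          # minimum lies in [a, hi]
            lo, a, fa = a, b, fb
            b = lo + g * (hi - lo); fb = sine_fit_residual(b, t, x)
    return 0.5 * (lo + hi)

fs, f0 = 1000.0, 123.456
tt = np.arange(512) / fs
sig = 1.2 * np.sin(2*np.pi*f0*tt + 0.7) + 0.1 * np.random.default_rng(0).normal(size=512)
print(f"estimated f = {estimate_frequency(sig, fs):.3f} Hz (true {f0})")
```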

  6. Intralesional and metastatic heterogeneity in malignant melanomas demonstrated by stereologic estimates of nuclear volume

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt; Erlandsen, M

    1990-01-01

    Regional variability of nuclear 3-dimensional size can be estimated objectively using point-sampled intercepts obtained from different, defined zones within individual neoplasms. In the present study, stereologic estimates of the volume-weighted mean nuclear volume, nuclear vv, within peripheral … melanomas showed large interindividual variation. This finding emphasizes that unbiased estimates of nuclear vv are robust to regional heterogeneity of nuclear volume and thus suitable for purposes of objective, quantitative malignancy grading of melanomas.

  7. The role of fire in UK peatland and moorland management: the need for informed, unbiased debate.

    Science.gov (United States)

    Davies, G Matt; Kettridge, Nicholas; Stoof, Cathelijne R; Gray, Alan; Ascoli, Davide; Fernandes, Paulo M; Marrs, Rob; Allen, Katherine A; Doerr, Stefan H; Clay, Gareth D; McMorrow, Julia; Vandvik, Vigdis

    2016-06-05

    Fire has been used for centuries to generate and manage some of the UK's cultural landscapes. Despite its complex role in the ecology of UK peatlands and moorlands, there has been a trend of simplifying the narrative around burning to present it as a solely ecologically damaging practice. That fire modifies peatland characteristics at a range of scales is clearly understood. Whether these changes are perceived as positive or negative depends upon how trade-offs are made between ecosystem services and the spatial and temporal scales of concern. Here we explore the complex interactions and trade-offs in peatland fire management, evaluating the benefits and costs of managed fire as they are currently understood. We highlight the need for (i) distinguishing between the impacts of fires occurring with differing severity and frequency, and (ii) improved characterization of ecosystem health that incorporates the response and recovery of peatlands to fire. We also explore how recent research has been contextualized within both scientific publications and the wider media and how this can influence non-specialist perceptions. We emphasize the need for an informed, unbiased debate on fire as an ecological management tool that is separated from other aspects of moorland management and from political and economic opinions. This article is part of the themed issue 'The interaction of fire and mankind'. © 2016 The Authors.

  8. The role of fire in UK peatland and moorland management: the need for informed, unbiased debate

    Science.gov (United States)

    Davies, G. Matt; Kettridge, Nicholas; Stoof, Cathelijne R.; Gray, Alan; Ascoli, Davide; Fernandes, Paulo M.; Marrs, Rob; Clay, Gareth D.; McMorrow, Julia; Vandvik, Vigdis

    2016-01-01

    Fire has been used for centuries to generate and manage some of the UK's cultural landscapes. Despite its complex role in the ecology of UK peatlands and moorlands, there has been a trend of simplifying the narrative around burning to present it as a solely ecologically damaging practice. That fire modifies peatland characteristics at a range of scales is clearly understood. Whether these changes are perceived as positive or negative depends upon how trade-offs are made between ecosystem services and the spatial and temporal scales of concern. Here we explore the complex interactions and trade-offs in peatland fire management, evaluating the benefits and costs of managed fire as they are currently understood. We highlight the need for (i) distinguishing between the impacts of fires occurring with differing severity and frequency, and (ii) improved characterization of ecosystem health that incorporates the response and recovery of peatlands to fire. We also explore how recent research has been contextualized within both scientific publications and the wider media and how this can influence non-specialist perceptions. We emphasize the need for an informed, unbiased debate on fire as an ecological management tool that is separated from other aspects of moorland management and from political and economic opinions. This article is part of the themed issue ‘The interaction of fire and mankind’. PMID:27216512

  9. Sleep habits, insomnia, and daytime sleepiness in a large and healthy community-based sample of New Zealanders.

    Science.gov (United States)

    Wilsmore, Bradley R; Grunstein, Ronald R; Fransen, Marlene; Woodward, Mark; Norton, Robyn; Ameratunga, Shanthi

    2013-06-15

    To determine the relationship between sleep complaints, primary insomnia, excessive daytime sleepiness, and lifestyle factors in a large community-based sample. Cross-sectional study. Blood donor sites in New Zealand. 22,389 individuals aged 16-84 years volunteering to donate blood. N/A. A comprehensive self-administered questionnaire including personal demographics and validated questions assessing sleep disorders (snoring, apnea), sleep complaints (sleep quantity, sleep dissatisfaction), insomnia symptoms, excessive daytime sleepiness, mood, and lifestyle factors such as work patterns, smoking, alcohol, and illicit substance use. Additionally, direct measurements of height and weight were obtained. One in three participants reported a sleep complaint. Excessive daytime sleepiness (in this healthy sample) was associated with insomnia (odds ratio [OR] 1.75, 95% confidence interval [CI] 1.50 to 2.05), depression (OR 2.01, CI 1.74 to 2.32), and sleep disordered breathing (OR 1.92, CI 1.59 to 2.32). Long work hours, alcohol dependence, and rotating work shifts also increased the risk of daytime sleepiness. Even in this relatively young, healthy, non-clinical sample, sleep complaints and primary insomnia with subsequent excess daytime sleepiness were common. There were clear associations between many personal and lifestyle factors-such as depression, long work hours, alcohol dependence, and rotating shift work-and sleep problems or excessive daytime sleepiness.

  10. Next generation sensing platforms for extended deployments in large-scale, multidisciplinary, adaptive sampling and observational networks

    Science.gov (United States)

    Cross, J. N.; Meinig, C.; Mordy, C. W.; Lawrence-Slavas, N.; Cokelet, E. D.; Jenkins, R.; Tabisola, H. M.; Stabeno, P. J.

    2016-12-01

    New autonomous sensors have dramatically increased the resolution and accuracy of oceanographic data collection, enabling rapid sampling over extremely fine scales. Innovative new autonomous platforms like floats, gliders, drones, and crawling moorings leverage the full potential of these new sensors by extending spatiotemporal reach across varied environments. During 2015 and 2016, The Innovative Technology for Arctic Exploration Program at the Pacific Marine Environmental Laboratory tested several new types of fully autonomous platforms with increased speed, durability, and power and payload capacity designed to deliver cutting-edge ecosystem assessment sensors to remote or inaccessible environments. The Expendable Ice-Tracking (EXIT) float developed by the NOAA Pacific Marine Environmental Laboratory (PMEL) is moored near the bottom during the ice-free season and released on an autonomous timer beneath the ice during the following winter. The float collects a rapid profile during ascent, and continues to collect critical, poorly-accessible under-ice data until melt, when data is transmitted via satellite. The autonomous Oculus sub-surface glider developed by the University of Washington and PMEL has a large power and payload capacity and an enhanced buoyancy engine. This 'coastal truck' is designed for the rapid water column ascent required by optical imaging systems. The Saildrone is a solar and wind powered ocean unmanned surface vessel (USV) developed by Saildrone, Inc. in partnership with PMEL. This large-payload (200 lbs), fast (1-7 kts), durable (46 kts winds) platform was equipped with 15 sensors designed for ecosystem assessment during 2016, including passive and active acoustic systems specially redesigned for autonomous vehicle deployments. The sensors deployed on these platforms achieved rigorous accuracy and precision standards. These innovative platforms provide new sampling capabilities and cost efficiencies in high-resolution sensor deployment

  11. Use of respondent driven sampling (RDS) generates a very diverse sample of men who have sex with men (MSM) in Buenos Aires, Argentina.

    Directory of Open Access Journals (Sweden)

    Alex Carballo-Diéguez

    Full Text Available Prior research focusing on men who have sex with men (MSM) conducted in Buenos Aires, Argentina, used convenience samples that included mainly gay identified men. To increase MSM sample representativeness, we used Respondent Driven Sampling (RDS) for the first time in Argentina. Using RDS, under certain specified conditions, the observed estimates for the percentage of the population with a specific trait are asymptotically unbiased. We describe the diversity of the recruited sample from the point of view of sexual orientation, and contrast the different subgroups in terms of their HIV sexual risk behavior. 500 MSM were recruited using RDS. Behavioral data were collected through face-to-face interviews and Web-based CASI. In contrast with prior studies, RDS generated a very diverse sample of MSM from a sexual identity perspective. Only 24.5% of participants identified as gay; 36.2% identified as bisexual, 21.9% as heterosexual, and 17.4% were grouped as "other." Gay and non-gay identified MSM differed significantly in their sexual behavior, the former having higher numbers of partners, more frequent sexual contacts and less frequency of condom use. One third of the men (gay, 3%; bisexual, 34%; heterosexual, 51%; other, 49%) reported having had sex with men, women and transvestites in the two months prior to the interview. This population requires further study and, potentially, HIV prevention strategies tailored to such diversity of partnerships. Our results highlight the potential effectiveness of using RDS to reach non-gay identified MSM. They also present lessons learned in the implementation of RDS to recruit MSM concerning both the importance and limitations of formative work, the need to tailor incentives to circumstances of the less affluent potential participants, the need to prevent masking, and the challenge of assessing network size.

  12. Comparison of Two Methods for Estimating the Sampling-Related Uncertainty of Satellite Rainfall Averages Based on a Large Radar Data Set

    Science.gov (United States)

    Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.

    2002-01-01

    The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.
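
    As a toy illustration of the non-parametric approach, the sketch below rebuilds rainfall accumulations from regular samples at every possible sampling phase and reports the relative RMSE against the fully sampled total. The gamma-distributed hourly series and all names are illustrative assumptions, not the radar data set used in the study.

```python
import numpy as np

def sampling_rmse(hourly_rain, interval):
    """Relative RMSE of accumulations built from regular samples taken
    every `interval` hours, evaluated over all possible sampling phases."""
    true_total = hourly_rain.sum()
    estimates = np.array([hourly_rain[offset::interval].sum() * interval
                          for offset in range(interval)])
    return np.sqrt(np.mean((estimates - true_total) ** 2)) / true_total

rng = np.random.default_rng(0)
rain = rng.gamma(shape=0.1, scale=2.0, size=24 * 30)  # 30 days of hourly rain
for h in (1, 3, 6, 12):                               # sampling intervals (h)
    print(h, round(sampling_rmse(rain, h), 3))
```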

  13. Neutron activation analysis of archaeological artifacts using the conventional relative method: a realistic approach for analysis of large samples

    International Nuclear Information System (INIS)

    Bedregal, P.S.; Mendoza, A.; Montoya, E.H.; Cohen, I.M.; Universidad Tecnologica Nacional, Buenos Aires; Oscar Baltuano

    2012-01-01

    A new approach for analysis of entire potsherds of archaeological interest by INAA, using the conventional relative method, is described. The analytical method proposed involves, primarily, the preparation of replicates of the original archaeological pottery with well known chemical composition (standard), destined to be irradiated simultaneously with the original object (sample) in a well thermalized external neutron beam of the RP-10 reactor. The basic advantage of this proposal is to avoid the need to perform complicated corrections for effects that arise when dealing with large samples, namely neutron self-shielding, neutron self-thermalization and gamma-ray attenuation. In addition, and in contrast with the other methods, the main advantages are the possibility of evaluating the uncertainty of the results and, fundamentally, validating the overall methodology. (author)

  14. Personality traits and eating habits in a large sample of Estonians.

    Science.gov (United States)

    Mõttus, René; Realo, Anu; Allik, Jüri; Deary, Ian J; Esko, Tõnu; Metspalu, Andres

    2012-11-01

    Diet has health consequences, which makes knowing the psychological correlates of dietary habits important. Associations between dietary habits and personality traits were examined in a large sample of Estonians (N = 1,691) aged between 18 and 89 years. Dietary habits were measured using 11 items, which grouped into two factors reflecting (a) health aware and (b) traditional dietary patterns. The health aware diet factor was defined by eating more cereal and dairy products, fish, vegetables and fruits. The traditional diet factor was defined by eating more potatoes, meat and meat products, and bread. Personality was assessed by participants themselves and by people who knew them well. The questionnaire used was the NEO Personality Inventory-3, which measures the Five-Factor Model personality broad traits of Neuroticism, Extraversion, Openness, Agreeableness, and Conscientiousness, along with six facets for each trait. Gender, age and educational level were controlled for. Higher scores on the health aware diet factor were associated with lower Neuroticism, and higher Extraversion, Openness and Conscientiousness (effect sizes were modest: r = 0.11 to 0.17 in self-ratings, and r = 0.08 to 0.11 in informant-ratings, ps < 0.01 or lower). Higher scores on the traditional diet factor were related to lower levels of Openness (r = -0.14 and -0.13, p < 0.001, self- and informant-ratings, respectively). Endorsement of healthy and avoidance of traditional dietary items are associated with people's personality trait levels, especially higher Openness. The results may inform dietary interventions with respect to possible barriers to diet change.

  15. Genomic divergences among cattle, dog and human estimated from large-scale alignments of genomic sequences

    Directory of Open Access Journals (Sweden)

    Shade Larry L

    2006-06-01

    Full Text Available Abstract Background Approximately 11 Mb of finished high quality genomic sequences were sampled from cattle, dog and human to estimate genomic divergences and their regional variation among these lineages. Results Optimal three-way multi-species global sequence alignments for 84 cattle clones or loci (each >50 kb of genomic sequence) were constructed using the human and dog genome assemblies as references. Genomic divergences and substitution rates were examined for each clone and for various sequence classes under different functional constraints. Analysis of these alignments revealed that the overall genomic divergences are relatively constant (0.32–0.37 change/site) for pairwise comparisons among cattle, dog and human; however substitution rates vary across genomic regions and among different sequence classes. A neutral mutation rate (2.0–2.2 × 10⁻⁹ change/site/year) was derived from ancestral repetitive sequences, whereas the substitution rate in coding sequences (1.1 × 10⁻⁹ change/site/year) was approximately half of the overall rate (1.9–2.0 × 10⁻⁹ change/site/year). Relative rate tests also indicated that cattle have a significantly faster rate of substitution as compared to dog and that this difference is about 6%. Conclusion This analysis provides a large-scale and unbiased assessment of genomic divergences and regional variation of substitution rates among cattle, dog and human. It is expected that these data will serve as a baseline for future mammalian molecular evolution studies.

  16. The perspectives of clinical staff and bereaved informal care-givers on the use of continuous sedation until death for cancer patients: The study protocol of the UNBIASED study

    Directory of Open Access Journals (Sweden)

    van der Heide Agnes

    2011-03-01

    Full Text Available Abstract Background A significant minority of dying people experience refractory symptoms or extreme distress unresponsive to conventional therapies. In such circumstances, sedation may be used to decrease or remove consciousness until death occurs. This practice is described in a variety of ways, including: 'palliative sedation', 'terminal sedation', 'continuous deep sedation until death', 'proportionate sedation' or 'palliative sedation to unconsciousness'. Surveys show large unexplained variation in incidence of sedation at the end of life across countries and care settings and there are ethical concerns about the use, intentions, risks and significance of the practice in palliative care. There are also questions about how to explain international variation in the use of the practice. This protocol relates to the UNBIASED study (UK Netherlands Belgium International Sedation Study), which comprises three linked studies with separate funding sources in the UK, Belgium and the Netherlands. The aims of the study are to explore decision-making surrounding the application of continuous sedation until death in contemporary clinical practice, and to understand the experiences of clinical staff and decedents' informal care-givers of the use of continuous sedation until death and their perceptions of its contribution to the dying process. The UNBIASED study is part of the European Association for Palliative Care Research Network. Methods/Design To realize the study aims, a two-phase study has been designed. The study settings include: the domestic home, hospital and expert palliative care sites. Phase 1 consists of: (a) focus groups with health care staff and bereaved informal care-givers; and (b) a preliminary case notes review to study the range of sedation therapy provided at the end of life to cancer patients who died within a 12 week period. Phase 2 employs qualitative methods to develop 30 patient-centred case studies in each country. These involve

  17. Diversity in the stellar velocity dispersion profiles of a large sample of brightest cluster galaxies z ≤ 0.3

    Science.gov (United States)

    Loubser, S. I.; Hoekstra, H.; Babul, A.; O'Sullivan, E.

    2018-06-01

    We analyse spatially resolved deep optical spectroscopy of brightest cluster galaxies (BCGs) located in 32 massive clusters with redshifts of 0.05 ≤ z ≤ 0.30 to investigate their velocity dispersion profiles. We compare these measurements to those of other massive early-type galaxies, as well as central group galaxies, where relevant. This unique, large sample extends to the most extreme of massive galaxies, spanning M_K between -25.7 and -27.8 mag, and host cluster halo mass M_500 up to 1.7 × 10¹⁵ M⊙. To compare the kinematic properties between brightest group and cluster members, we analyse similar spatially resolved long-slit spectroscopy for 23 nearby brightest group galaxies (BGGs) from the Complete Local-Volume Groups Sample. We find a surprisingly large variety in velocity dispersion slopes for BCGs, with a significantly larger fraction of positive slopes, unique compared to other (non-central) early-type galaxies as well as the majority of the brightest members of the groups. We find that the velocity dispersion slopes of the BCGs and BGGs correlate with the luminosity of the galaxies, and we quantify this correlation. It is not clear whether the full diversity in velocity dispersion slopes that we see is reproduced in simulations.

  18. Gravimetric dust sampling for control purposes and occupational dust sampling.

    CSIR Research Space (South Africa)

    Unsted, AD

    1997-02-01

    Full Text Available Prior to the introduction of gravimetric dust sampling, konimeters had been used for dust sampling, which was largely for control purposes. Whether or not absolute results were achievable was not an issue since relative results were used to evaluate...

  19. Effect of NaOH on large-volume sample stacking of haloacetic acids in capillary zone electrophoresis with a low-pH buffer.

    Science.gov (United States)

    Tu, Chuanhong; Zhu, Lingyan; Ang, Chay Hoon; Lee, Hian Kee

    2003-06-01

    Large-volume sample stacking (LVSS) is an effective on-capillary sample concentration method in capillary zone electrophoresis, which can be applied to the sample in a low-conductivity matrix. NaOH solution is commonly used to back-extract acidic compounds from organic solvent in sample pretreatment. The effect of NaOH as sample matrix on LVSS of haloacetic acids was investigated in this study. It was found that the presence of NaOH in sample did not compromise, but rather help the sample stacking performance if a low pH background electrolyte (BGE) was used. The sensitivity enhancement factor was higher than the case when sample was dissolved in pure water or diluted BGE. Compared with conventional injection (0.4% capillary volume), 97-120-fold sensitivity enhancement in terms of peak height was obtained without deterioration of separation with an injection amount equal to 20% of the capillary volume. This method was applied to determine haloacetic acids in tap water by combination with liquid-liquid extraction and back-extraction into NaOH solution. Limits of detection at sub-ppb levels were obtained for real samples with direct UV detection.

  20. A Test of Macromolecular Crystallization in Microgravity: Large, Well-Ordered Insulin Crystals

    Science.gov (United States)

    Borgstahl, Gloria E. O.; Vahedi-Faridi, Ardeschir; Lovelace, Jeff; Bellamy, Henry D.; Snell, Edward H.; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    Crystals of insulin grown in microgravity on space shuttle mission STS-95 were extremely well-ordered and unusually large (many > 2 mm). The physical characteristics of six microgravity and six earth-grown crystals were examined by X-ray analysis employing superfine φ-slicing and unfocused synchrotron radiation. This experimental setup allowed hundreds of reflections to be precisely examined for each crystal in a short period of time. The microgravity crystals were on average 34 times larger, had 7 times lower mosaicity, had 54 times higher reflection peak heights and diffracted to significantly higher resolution than their earth-grown counterparts. A single mosaic domain model could account for reflections in microgravity crystals whereas reflections from earth crystals required a model with multiple mosaic domains. This statistically significant and unbiased characterization indicates that the microgravity environment was useful for the improvement of crystal growth and resultant diffraction quality in insulin crystals and may be similarly useful for macromolecular crystals in general.

  1. Effects of large volume injection of aliphatic alcohols as sample diluents on the retention of low hydrophobic solutes in reversed-phase liquid chromatography.

    Science.gov (United States)

    David, Victor; Galaon, Toma; Aboul-Enein, Hassan Y

    2014-01-03

    Recent studies showed that injection of large volume of hydrophobic solvents used as sample diluents could be applied in reversed-phase liquid chromatography (RP-LC). This study reports a systematic research focused on the influence of a series of aliphatic alcohols (from methanol to 1-octanol) on the retention process in RP-LC, when large volumes of sample are injected on the column. Several model analytes with low hydrophobic character were studied by RP-LC process, for mobile phases containing methanol or acetonitrile as organic modifiers in different proportions with aqueous component. It was found that starting with 1-butanol, the aliphatic alcohols can be used as sample solvents and they can be injected in high volumes, but they may influence the retention factor and peak shape of the dissolved solutes. The dependence of the retention factor of the studied analytes on the injection volume of these alcohols is linear, with a decrease of its value as the sample volume is increased. The retention process in case of injecting up to 200 μL of upper alcohols is dependent also on the content of the organic modifier (methanol or acetonitrile) in mobile phase. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. Soil Characterization by Large Scale Sampling of Soil Mixed with Buried Construction Debris at a Former Uranium Fuel Fabrication Facility

    International Nuclear Information System (INIS)

    Nardi, A.J.; Lamantia, L.

    2009-01-01

    Recent soil excavation activities on a site identified the presence of buried uranium contaminated building construction debris. The site previously was the location of a low enriched uranium fuel fabrication facility. This resulted in the collection of excavated materials from the two locations where contaminated subsurface debris was identified. The excavated material was temporarily stored in two piles on the site until a determination could be made as to the appropriate disposition of the material. Characterization of the excavated material was undertaken in a manner that involved the collection of large scale samples of the excavated material in 1 cubic meter Super Sacks. Twenty bags were filled with excavated material that consisted of the mixture of both the construction debris and the associated soil. In order to obtain information on the level of activity associated with the construction debris, ten additional bags were filled with construction debris that had been separated, to the extent possible, from the associated soil. Radiological surveys were conducted of the resulting bags of collected materials and the soil associated with the waste mixture. The 30 large samples, collected as bags, were counted using an In-Situ Object Counting System (ISOCS) unit to determine the average concentration of U-235 present in each bag. The soil fraction was sampled by the collection of 40 samples of soil for analysis in an on-site laboratory. A fraction of these samples were also sent to an off-site laboratory for additional analysis. This project provided the necessary soil characterization information to allow consideration of alternate options for disposition of the material. The identified contaminant was verified to be low enriched uranium. Concentrations of uranium in the waste were found to be lower than the calculated site specific derived concentration guideline levels (DCGLs) but higher than the NRC's screening values. The methods and results are presented

  3. Cardinality Estimation Algorithm in Large-Scale Anonymous Wireless Sensor Networks

    KAUST Repository

    Douik, Ahmed

    2017-08-30

    Consider a large-scale anonymous wireless sensor network with unknown cardinality. In such graphs, each node has no information about the network topology and only possesses a unique identifier. This paper introduces a novel distributed algorithm for cardinality estimation and topology discovery, i.e., estimating the number of nodes and the structure of the graph, by querying a small number of nodes and performing statistical inference methods. While the cardinality estimation allows the design of more efficient coding schemes for the network, the topology discovery provides a reliable way for routing packets. The proposed algorithm is shown to produce a cardinality estimate proportional to the best linear unbiased estimator for dense graphs and specific running times. Simulation results confirm the theoretical results and reveal that, for a reasonable running time, querying a small group of nodes is sufficient to perform an estimation of 95% of the whole network. Applications of this work include estimating the number of Internet of Things (IoT) sensor devices, online social users, active protein cells, etc.
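
    The paper's estimator is only summarized above; as a stand-in, the sketch below shows a generic collision-count (birthday-problem) estimator of network cardinality: probe nodes uniformly at random, count the distinct identifiers seen, and numerically invert the expected-distinct-count relation. Function names and the bisection bounds are illustrative assumptions.

```python
import random

def expected_distinct(n, k):
    # Expected number of distinct nodes in k uniform draws (with
    # replacement) from a population of size n.
    return n * (1.0 - (1.0 - 1.0 / n) ** k)

def estimate_cardinality(sampled_ids):
    """Estimate population size from the distinct identifiers observed
    among k random probes, by bisecting the expected-distinct relation."""
    k, d = len(sampled_ids), len(set(sampled_ids))
    if d == k:                  # no collisions: only a lower bound exists
        return float("inf")
    lo, hi = d, 10 ** 12        # bracket for the bisection search
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if expected_distinct(mid, k) < d:
            lo = mid            # too few expected distinct: network is larger
        else:
            hi = mid
    return hi

true_n = 10_000
rng = random.Random(1)
probes = [rng.randrange(true_n) for _ in range(2_000)]
print(estimate_cardinality(probes))   # should land near 10,000
```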

  4. Sampling based uncertainty analysis of 10% hot leg break LOCA in large scale test facility

    International Nuclear Information System (INIS)

    Sengupta, Samiran; Kraina, V.; Dubey, S. K.; Rao, R. S.; Gupta, S. K.

    2010-01-01

    Sampling based uncertainty analysis was carried out to quantify uncertainty in predictions of best estimate code RELAP5/MOD3.2 for a thermal hydraulic test (10% hot leg break LOCA) performed in the Large Scale Test Facility (LSTF) as a part of an IAEA coordinated research project. The nodalisation of the test facility was qualified for both steady state and transient level by systematically applying the procedures led by uncertainty methodology based on accuracy extrapolation (UMAE); uncertainty analysis was carried out using the Latin hypercube sampling (LHS) method to evaluate uncertainty for ten input parameters. Sixteen output parameters were selected for uncertainty evaluation and uncertainty bands between the 5th and 95th percentiles of the output parameters were evaluated. It was observed that the uncertainty band for the primary pressure during two phase blowdown is larger than that of the remaining period. Similarly, a larger uncertainty band is observed relating to accumulator injection flow during the reflood phase. Importance analysis was also carried out and standard rank regression coefficients were computed to quantify the effect of each individual input parameter on output parameters. It was observed that the break discharge coefficient is the most important uncertain parameter relating to the prediction of all the primary side parameters and that the steam generator (SG) relief pressure setting is the most important parameter in predicting the SG secondary pressure
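
    A minimal sketch of the Latin hypercube sampling step, assuming a placeholder model in place of RELAP5/MOD3.2: each of the ten inputs is stratified into as many equal bins as there are runs, every bin is sampled exactly once, and the 5th and 95th percentiles of the output are read off directly. All names are illustrative assumptions.

```python
import numpy as np

def latin_hypercube(n_samples, n_params, rng):
    """Draw an n_samples x n_params Latin hypercube design on [0, 1):
    each axis is cut into n_samples equal strata, each stratum is hit
    exactly once, and the stratum order is randomly permuted per axis."""
    jitter = rng.random((n_samples, n_params))
    strata = np.arange(n_samples)[:, None] + jitter   # one point per stratum
    for j in range(n_params):
        strata[:, j] = rng.permutation(strata[:, j])  # decorrelate the axes
    return strata / n_samples

rng = np.random.default_rng(0)
design = latin_hypercube(200, 10, rng)        # 10 uncertain inputs, 200 runs
output = design @ np.linspace(1.0, 2.0, 10)   # placeholder for the code run
print(np.percentile(output, [5, 95]))         # 5th-95th percentile band
```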

  5. Multilevel systematic sampling to estimate total fruit number for yield forecasts

    DEFF Research Database (Denmark)

    Wulfsohn, Dvora-Laio; Zamora, Felipe Aravena; Tellez, Camilla Potin

    2012-01-01

    procedure for unbiased estimation of fruit number for yield forecasts. In the Spring of 2009 we estimated the total number of fruit in several rows of each of 14 commercial fruit orchards growing apple (11 groves), kiwifruit (two groves), and table grapes (one grove) in central Chile. Survey times were 10...

  6. PROBES, POPULATIONS, SAMPLES, MEASUREMENTS AND RELATIONS IN STEREOLOGY

    Directory of Open Access Journals (Sweden)

    Robert T Dehoff

    2011-05-01

    Full Text Available This summary paper provides an overview of the content of stereology. The typical problem at hand centers around some three dimensional object that has an internal structure that determines its function, performance, or response. To understand and quantify the geometry of that structure it is necessary to probe it with geometric entities: points, lines, planes, volumes, etc. Meaningful results are obtained only if the set of probes chosen for use in the assessment is drawn uniformly from the population of such probes for the structure as a whole. This requires an understanding of the population of each kind of probe. Interaction of the probes with the structure produces geometric events which are the focus of stereological measurements. In almost all applications the measurement that is made is a simple count of the number of these events. Rigorous application of these requirements for sample design produces unbiased estimates of geometric properties of features in the structure no matter how complex the features are or how they are arranged in space. It is this assumption-free characteristic of the methodology that makes it a powerful tool for characterizing the internal structure of three dimensional objects.
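
    A minimal sketch of the point-probe case in two dimensions: a systematic grid of points with a single uniform random offset is laid over the structure, the probe/feature intersection events are counted, and that count alone gives a design-unbiased estimate of the area fraction. The circular toy feature is an illustrative assumption.

```python
import random

def point_count_fraction(contains, grid=40, seed=1):
    """Unbiased point-count estimate of an area fraction on the unit
    square: probes form a regular grid with one uniform random offset,
    and the measurement is simply the number of probe-in-feature events."""
    rng = random.Random(seed)
    step = 1.0 / grid
    ox, oy = rng.random() * step, rng.random() * step
    hits = sum(contains(ox + i * step, oy + j * step)
               for i in range(grid) for j in range(grid))
    return hits / grid ** 2

# Disk of radius 0.3 centred in the unit square; true fraction ~0.2827.
disk = lambda x, y: (x - 0.5) ** 2 + (y - 0.5) ** 2 <= 0.3 ** 2
print(point_count_fraction(disk))
```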

  7. Examining gray matter structure associated with academic performance in a large sample of Chinese high school students.

    Science.gov (United States)

    Wang, Song; Zhou, Ming; Chen, Taolin; Yang, Xun; Chen, Guangxiang; Wang, Meiyun; Gong, Qiyong

    2017-04-18

    Achievement in school is crucial for students to be able to pursue successful careers and lead happy lives in the future. Although many psychological attributes have been found to be associated with academic performance, the neural substrates of academic performance remain largely unknown. Here, we investigated the relationship between brain structure and academic performance in a large sample of high school students via structural magnetic resonance imaging (S-MRI) using a voxel-based morphometry (VBM) approach. The whole-brain regression analyses showed that higher academic performance was related to greater regional gray matter density (rGMD) of the left dorsolateral prefrontal cortex (DLPFC), which is considered a neural center at the intersection of cognitive and non-cognitive functions. Furthermore, mediation analyses suggested that general intelligence partially mediated the impact of the left DLPFC density on academic performance. These results persisted even after adjusting for the effect of family socioeconomic status (SES). In short, our findings reveal a potential neuroanatomical marker for academic performance and highlight the role of general intelligence in explaining the relationship between brain structure and academic performance.

  8. Geochemical Data for Samples Collected in 2007 Near the Concealed Pebble Porphyry Cu-Au-Mo Deposit, Southwest Alaska

    Science.gov (United States)

    Fey, David L.; Granitto, Matthew; Giles, Stuart A.; Smith, Steven M.; Eppinger, Robert G.; Kelley, Karen D.

    2008-01-01

    In the summer of 2007, the U.S. Geological Survey (USGS) began an exploration geochemical research study over the Pebble porphyry copper-gold-molydenum (Cu-Au-Mo) deposit in southwest Alaska. The Pebble deposit is extremely large and is almost entirely concealed by tundra, glacial deposits, and post-Cretaceous volcanic and volcaniclastic rocks. The deposit is presently being explored by Northern Dynasty Minerals, Ltd., and Anglo-American LLC. The USGS undertakes unbiased, broad-scale mineral resource assessments of government lands to provide Congress and citizens with information on national mineral endowment. Research on known deposits is also done to refine and better constrain methods and deposit models for the mineral resource assessments. The Pebble deposit was chosen for this study because it is concealed by surficial cover rocks, it is relatively undisturbed (except for exploration company drill holes), it is a large mineral system, and it is fairly well constrained at depth by the drill hole geology and geochemistry. The goals of the USGS study are (1) to determine whether the concealed deposit can be detected with surface samples, (2) to better understand the processes of metal migration from the deposit to the surface, and (3) to test and develop methods for assessing mineral resources in similar concealed terrains. This report presents analytical results for geochemical samples collected in 2007 from the Pebble deposit and surrounding environs. The analytical data are presented digitally both as an integrated Microsoft 2003 Access® database and as Microsoft 2003 Excel® files. The Pebble deposit is located in southwestern Alaska on state lands about 30 km (18 mi) northwest of the village of Illiamna and 320 km (200 mi) southwest of Anchorage (fig. 1). Elevations in the Pebble area range from 287 m (940 ft) at Frying Pan Lake just south of the deposit to 1146 m (3760 ft) on Kaskanak Mountain about 5 km (5 mi) to the west. The deposit is in an area of

  9. Sworn testimony of the model evidence: Gaussian Mixture Importance (GAME) sampling

    Science.gov (United States)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-07-01

    What is the "best" model? The answer to this question lies in part in the eyes of the beholder, nevertheless a good model must blend rigorous theory with redeeming qualities such as parsimony and quality of fit. Model selection is used to make inferences, via weighted averaging, from a set of K candidate models, Mk; k=>(1,…,K>), and help identify which model is most supported by the observed data, Y>˜=>(y˜1,…,y˜n>). Here, we introduce a new and robust estimator of the model evidence, p>(Y>˜|Mk>), which acts as normalizing constant in the denominator of Bayes' theorem and provides a single quantitative measure of relative support for each hypothesis that integrates model accuracy, uncertainty, and complexity. However, p>(Y>˜|Mk>) is analytically intractable for most practical modeling problems. Our method, coined GAussian Mixture importancE (GAME) sampling, uses bridge sampling of a mixture distribution fitted to samples of the posterior model parameter distribution derived from MCMC simulation. We benchmark the accuracy and reliability of GAME sampling by application to a diverse set of multivariate target distributions (up to 100 dimensions) with known values of p>(Y>˜|Mk>) and to hypothesis testing using numerical modeling of the rainfall-runoff transformation of the Leaf River watershed in Mississippi, USA. These case studies demonstrate that GAME sampling provides robust and unbiased estimates of the evidence at a relatively small computational cost outperforming commonly used estimators. The GAME sampler is implemented in the MATLAB package of DREAM and simplifies considerably scientific inquiry through hypothesis testing and model selection.

  10. Comparing relative abundance, lengths, and habitat of temperate reef fishes using simultaneous underwater visual census, video, and trap sampling

    KAUST Repository

    Bacheler, NM

    2017-04-28

    Unbiased counts of individuals or species are often impossible given the prevalence of cryptic or mobile species. We used 77 simultaneous multi-gear deployments to make inferences about relative abundance, diversity, length composition, and habitat of the reef fish community along the southeastern US Atlantic coast. In total, 117 taxa were observed by underwater visual census (UVC), stationary video, and chevron fish traps, with more taxa being observed by UVC (100) than video (82) or traps (20). Frequency of occurrence of focal species was similar among all sampling approaches for tomtate Haemulon aurolineatum and black sea bass Centropristis striata, higher for UVC and video compared to traps for red snapper Lutjanus campechanus, vermilion snapper Rhomboplites aurorubens, and gray triggerfish Balistes capriscus, and higher for UVC compared to video or traps for gray snapper L. griseus and lionfish Pterois spp. For 6 of 7 focal species, correlations of relative abundance among gears were strongest between UVC and video, but there was substantial variability among species. The number of recorded species between UVC and video was correlated (ρ = 0.59), but relationships between traps and the other 2 methods were weaker. Lengths of fish visually estimated by UVC were similar to lengths of fish caught in traps, as were habitat characterizations from UVC and video. No gear provided a complete census for any species in our study, suggesting that analytical methods accounting for imperfect detection are necessary to make unbiased inferences about fish abundance.

  11. Continuous sampling from distributed streams

    DEFF Research Database (Denmark)

    Graham, Cormode; Muthukrishnan, S.; Yi, Ke

    2012-01-01

    A fundamental problem in data management is to draw and maintain a sample of a large data set, for approximate query answering, selectivity estimation, and query planning. With large, streaming data sets, this problem becomes particularly difficult when the data is shared across multiple distributed sites. The main challenge is to ensure that a sample is drawn uniformly across the union of the data while minimizing the communication needed to run the protocol on the evolving data. At the same time, it is also necessary to make the protocol lightweight, by keeping the space and time costs low for each participant. In this article, we present communication-efficient protocols for continuously maintaining a sample (both with and without replacement) from k distributed streams. These apply to the case when we want a sample from the full streams, and to the sliding window cases of only the W most...
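
    The distributed protocol itself is too involved for a short sketch, but the single-stream primitive it generalizes, reservoir sampling, fits in a few lines: each prefix of the stream is a uniform sample without replacement at all times. Names and the toy stream are illustrative assumptions.

```python
import random

def reservoir_sample(stream, k, seed=42):
    """Maintain a uniform random sample of k items from a stream of
    unknown length (Algorithm R)."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = rng.randrange(i + 1)   # keep new item with probability k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

print(reservoir_sample(range(1_000_000), 10))
```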

  12. Testing the Pareto against the lognormal distributions with the uniformly most powerful unbiased test applied to the distribution of cities.

    Science.gov (United States)

    Malevergne, Yannick; Pisarenko, Vladilen; Sornette, Didier

    2011-03-01

    Fat-tail distributions of sizes abound in natural, physical, economic, and social systems. The lognormal and the power laws have historically competed for recognition with sometimes closely related generating processes and hard-to-distinguish tail properties. This state of affairs is illustrated with the debate between Eeckhout [Amer. Econ. Rev. 94, 1429 (2004)] and Levy [Amer. Econ. Rev. 99, 1672 (2009)] on the validity of Zipf's law for US city sizes. By using a uniformly most powerful unbiased (UMPU) test between the lognormal and the power-laws, we show that conclusive results can be achieved to end this debate. We advocate the UMPU test as a systematic tool to address similar controversies in the literature of many disciplines involving power laws, scaling, "fat" or "heavy" tails. In order to demonstrate that our procedure works for data sets other than the US city size distribution, we also briefly present the results obtained for the power-law tail of the distribution of personal identity (ID) losses, which constitute one of the major emergent risks at the interface between cyberspace and reality.
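
    The exact UMPU construction is beyond a short sketch; as a crude stand-in, the code below fits both candidate laws to the same sample by maximum likelihood and compares total log-likelihoods. The Pareto MLE and the SciPy parameterizations are standard; everything else is an illustrative assumption.

```python
import numpy as np
from scipy import stats

def tail_loglik_comparison(x):
    """Fit a Pareto (power law) and a lognormal by maximum likelihood and
    return the two total log-likelihoods for an informal comparison."""
    xm = x.min()
    alpha = len(x) / np.sum(np.log(x / xm))          # Pareto tail exponent MLE
    ll_pareto = stats.pareto.logpdf(x, alpha, scale=xm).sum()
    mu, sigma = np.log(x).mean(), np.log(x).std()    # lognormal MLE
    ll_lognorm = stats.lognorm.logpdf(x, sigma, scale=np.exp(mu)).sum()
    return ll_pareto, ll_lognorm

rng = np.random.default_rng(0)
x = rng.pareto(2.0, 5_000) + 1.0          # a true power law with x_min = 1
print(tail_loglik_comparison(x))          # the Pareto fit should win here
```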

  13. Unbiased determination of the proton structure function F2p with faithful uncertainty estimation

    International Nuclear Information System (INIS)

    Del Debbio, Luigi; Forte, Stefano; Latorre, Jose I.; Rojo, Joan; Piccione, Andrea

    2005-01-01

    We construct a parametrization of the deep-inelastic structure function of the proton F₂(x, Q²) based on all available experimental information from charged lepton deep-inelastic scattering experiments. The parametrization effectively provides a bias-free determination of the probability measure in the space of structure functions, which retains information on experimental errors and correlations. The result is obtained in the form of a Monte Carlo sample of neural networks trained on an ensemble of replicas of the experimental data. We discuss in detail the techniques required for the construction of bias-free parameterizations of large amounts of structure function data, in view of future applications to the determination of parton distributions based on the same method. (author)

  14. SU(2) and SU(1,1) Approaches to Phase Operators and Temporally Stable Phase States: Applications to Mutually Unbiased Bases and Discrete Fourier Transforms

    Directory of Open Access Journals (Sweden)

    Maurice R. Kibler

    2010-07-01

    Full Text Available We propose a group-theoretical approach to the generalized oscillator algebra Aκ recently investigated in J. Phys. A: Math. Theor. 2010, 43, 115303. The case κ ≥ 0 corresponds to the noncompact group SU(1,1) (as for the harmonic oscillator and the Pöschl-Teller systems), while the case κ < 0 is described by the compact group SU(2) (as for the Morse system). We construct the phase operators and the corresponding temporally stable phase eigenstates for Aκ in this group-theoretical context. The SU(2) case is exploited for deriving families of mutually unbiased bases used in quantum information. Along this vein, we examine some characteristics of a quadratic discrete Fourier transform in connection with generalized quadratic Gauss sums and generalized Hadamard matrices.

  15. Applying network analysis and Nebula (neighbor-edges based and unbiased leverage algorithm) to ToxCast data.

    Science.gov (United States)

    Ye, Hao; Luo, Heng; Ng, Hui Wen; Meehan, Joe; Ge, Weigong; Tong, Weida; Hong, Huixiao

    2016-01-01

    ToxCast data have been used to develop models for predicting in vivo toxicity. To predict the in vivo toxicity of a new chemical using a ToxCast data-based model, its ToxCast bioactivity data are needed but not normally available. The capability of predicting ToxCast bioactivity data is necessary to fully utilize ToxCast data in the risk assessment of chemicals. We aimed to understand and elucidate the relationships between the chemicals and bioactivity data of the assays in ToxCast and to develop a network analysis based method for predicting ToxCast bioactivity data. We conducted modularity analysis on a quantitative network constructed from ToxCast data to explore the relationships between the assays and chemicals. We further developed Nebula (neighbor-edges based and unbiased leverage algorithm) for predicting ToxCast bioactivity data. Modularity analysis on the network constructed from ToxCast data yielded seven modules. Assays and chemicals in the seven modules were distinct. Leave-one-out cross-validation yielded a Q² of 0.5416, indicating ToxCast bioactivity data can be predicted by Nebula. Prediction domain analysis showed some types of ToxCast assay data could be more reliably predicted by Nebula than others. Network analysis is a promising approach to understand ToxCast data. Nebula is an effective algorithm for predicting ToxCast bioactivity data, helping fully utilize ToxCast data in the risk assessment of chemicals. Published by Elsevier Ltd.

  16. Development and application of spatial and temporal statistical methods for unbiased wildlife sampling

    NARCIS (Netherlands)

    Khaemba, W.M.

    2000-01-01

    Current methods of obtaining information on wildlife populations are based on monitoring programmes using periodic surveys. In most cases aerial techniques are applied. Reported numbers are, however, often biased and imprecise, making it difficult to use this information for management

  17. Multi-level restricted maximum likelihood covariance estimation and kriging for large non-gridded spatial datasets

    KAUST Repository

    Castrillon, Julio; Genton, Marc G.; Yokota, Rio

    2015-01-01

    We develop a multi-level restricted Gaussian maximum likelihood method for estimating the covariance function parameters and computing the best unbiased predictor. Our approach produces a new set of multi-level contrasts where the deterministic

  18. Multidrug resistance among new tuberculosis cases: detecting local variation through lot quality-assurance sampling.

    Science.gov (United States)

    Hedt, Bethany Lynn; van Leth, Frank; Zignol, Matteo; Cobelens, Frank; van Gemert, Wayne; Nhung, Nguyen Viet; Lyepshina, Svitlana; Egwaga, Saidi; Cohen, Ted

    2012-03-01

    Current methodology for multidrug-resistant tuberculosis (MDR TB) surveys endorsed by the World Health Organization provides estimates of MDR TB prevalence among new cases at the national level. On the aggregate, local variation in the burden of MDR TB may be masked. This paper investigates the utility of applying lot quality-assurance sampling to identify geographic heterogeneity in the proportion of new cases with multidrug resistance. We simulated the performance of lot quality-assurance sampling by applying these classification-based approaches to data collected in the most recent TB drug-resistance surveys in Ukraine, Vietnam, and Tanzania. We explored 3 classification systems (two-way static, three-way static, and three-way truncated sequential sampling) at 2 sets of thresholds: low MDR TB = 2%, high MDR TB = 10%, and low MDR TB = 5%, high MDR TB = 20%. The lot quality-assurance sampling systems identified local variability in the prevalence of multidrug resistance in both high-resistance (Ukraine) and low-resistance settings (Vietnam). In Tanzania, prevalence was uniformly low, and the lot quality-assurance sampling approach did not reveal variability. The three-way classification systems provide additional information, but sample sizes may not be obtainable in some settings. New rapid drug-sensitivity testing methods may allow truncated sequential sampling designs and early stopping within static designs, producing even greater efficiency gains. Lot quality-assurance sampling study designs may offer an efficient approach for collecting critical information on local variability in the burden of multidrug-resistant TB. Before this methodology is adopted, programs must determine appropriate classification thresholds, the most useful classification system, and appropriate weighting if unbiased national estimates are also desired.
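
    A hedged sketch of the two-way static design at the low = 2% / high = 10% thresholds quoted above: choose the smallest decision threshold d such that a truly low-burden lot rarely shows more than d resistant cases and a truly high-burden lot rarely shows d or fewer. The lot size of 100 and the 10% misclassification risks are illustrative assumptions, not the surveys' actual design parameters.

```python
from scipy.stats import binom

def lqas_threshold(n, p_low=0.02, p_high=0.10, max_risk=0.10):
    """Smallest decision threshold d for a two-way static LQAS design:
    classify the lot as 'high MDR TB' when more than d of n sampled new
    cases are multidrug resistant."""
    for d in range(n + 1):
        false_high = 1.0 - binom.cdf(d, n, p_low)  # P(> d | truly low)
        false_low = binom.cdf(d, n, p_high)        # P(<= d | truly high)
        if false_high <= max_risk and false_low <= max_risk:
            return d
    return None                                    # n too small for the risks

print(lqas_threshold(100))   # e.g. d = 4 for lots of 100 new TB cases
```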

  19. RandomSpot: A web-based tool for systematic random sampling of virtual slides.

    Science.gov (United States)

    Wright, Alexander I; Grabsch, Heike I; Treanor, Darren E

    2015-01-01

    This paper describes work presented at the Nordic Symposium on Digital Pathology 2014, Linköping, Sweden. Systematic random sampling (SRS) is a stereological tool, which provides a framework to quickly build an accurate estimation of the distribution of objects or classes within an image, whilst minimizing the number of observations required. RandomSpot is a web-based tool for SRS in stereology, which systematically places equidistant points within a given region of interest on a virtual slide. Each point can then be visually inspected by a pathologist in order to generate an unbiased sample of the distribution of classes within the tissue. Further measurements can then be derived from the distribution, such as the ratio of tumor to stroma. RandomSpot replicates the fundamental principle of traditional light microscope grid-shaped graticules, with the added benefits associated with virtual slides, such as facilitated collaboration and automated navigation between points. Once the sample points have been added to the region(s) of interest, users can download the annotations and view them locally using their virtual slide viewing software. Since its introduction, RandomSpot has been used extensively for international collaborative projects, clinical trials and independent research projects. So far, the system has been used to generate over 21,000 sample sets, and has been used to generate data for use in multiple publications, identifying significant new prognostic markers in colorectal, upper gastro-intestinal and breast cancer. Data generated using RandomSpot also has significant value for training image analysis algorithms using sample point coordinates and pathologist classifications.
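
    The abstract does not give RandomSpot's internals, but the systematic random sampling placement it describes can be sketched directly: equidistant points with one uniform random start, kept only where they fall inside the region of interest. The slide dimensions, spacing, and circular toy ROI are illustrative assumptions.

```python
import random

def systematic_random_points(bbox, spacing, in_roi, seed=7):
    """Equidistant sample points over a region of interest with a single
    uniform random offset, so every location has equal inclusion probability."""
    rng = random.Random(seed)
    x0, y0, x1, y1 = bbox
    ox, oy = rng.uniform(0, spacing), rng.uniform(0, spacing)
    points = []
    y = y0 + oy
    while y < y1:
        x = x0 + ox
        while x < x1:
            if in_roi(x, y):
                points.append((x, y))
            x += spacing
        y += spacing
    return points

# Toy ROI: a circular tissue region on a 10,000 x 10,000 pixel virtual slide.
roi = lambda x, y: (x - 5_000) ** 2 + (y - 5_000) ** 2 <= 4_000 ** 2
print(len(systematic_random_points((0, 0, 10_000, 10_000), 500, roi)))
```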

  20. The joint fit of the BHMF and ERDF for the BAT AGN Sample

    Science.gov (United States)

    Weigel, Anna K.; Koss, Michael; Ricci, Claudio; Trakhtenbrot, Benny; Oh, Kyuseok; Schawinski, Kevin; Lamperti, Isabella

    2018-01-01

    A natural product of an AGN survey is the AGN luminosity function. This statistical measure describes the distribution of directly measurable AGN luminosities. Intrinsically, the shape of the luminosity function depends on the distribution of black hole masses and Eddington ratios. To constrain these fundamental AGN properties, the luminosity function thus has to be disentangled into the black hole mass and Eddington ratio distribution function. The BASS survey is unique as it allows such a joint fit for a large number of local AGN, is unbiased in terms of obscuration in the X-rays and provides black hole masses for type-1 and type-2 AGN. The black hole mass function at z ~ 0 represents an essential baseline for simulations and black hole growth models. The normalization of the Eddington ratio distribution function directly constrains the AGN fraction. Together, the BASS AGN luminosity, black hole mass and Eddington ratio distribution functions thus provide a complete picture of the local black hole population.

  1. Survey of large protein complexes in D. vulgaris reveals great structural diversity

    Energy Technology Data Exchange (ETDEWEB)

    Han, B.-G.; Dong, M.; Liu, H.; Camp, L.; Geller, J.; Singer, M.; Hazen, T. C.; Choi, M.; Witkowska, H. E.; Ball, D. A.; Typke, D.; Downing, K. H.; Shatsky, M.; Brenner, S. E.; Chandonia, J.-M.; Biggin, M. D.; Glaeser, R. M.

    2009-08-15

    An unbiased survey has been made of the stable, most abundant multi-protein complexes in Desulfovibrio vulgaris Hildenborough (DvH) that are larger than Mr ≈ 400 k. The quaternary structures for 8 of the 16 complexes purified during this work were determined by single-particle reconstruction of negatively stained specimens, a success rate ≈10 times greater than that of previous 'proteomic' screens. In addition, the subunit compositions and stoichiometries of the remaining complexes were determined by biochemical methods. Our data show that the structures of only two of these large complexes, out of the 13 in this set that have recognizable functions, can be modeled with confidence based on the structures of known homologs. These results indicate that there is significantly greater variability in the way that homologous prokaryotic macromolecular complexes are assembled than has generally been appreciated. As a consequence, we suggest that relying solely on previously determined quaternary structures for homologous proteins may not be sufficient to properly understand their role in another cell of interest.

  2. Large-scale prospective T cell function assays in shipped, unfrozen blood samples

    DEFF Research Database (Denmark)

    Hadley, David; Cheung, Roy K; Becker, Dorothy J

    2014-01-01

    ...for measuring core T cell functions. The Trial to Reduce Insulin-dependent diabetes mellitus in the Genetically at Risk (TRIGR) type 1 diabetes prevention trial used consecutive measurements of T cell proliferative responses in prospectively collected fresh heparinized blood samples shipped by courier within North America. In this article, we report on the quality control implications of this simple and pragmatic shipping practice and the interpretation of positive- and negative-control analytes in our assay. We used polyclonal and postvaccination responses in 4,919 samples to analyze the development of T cell immunocompetence. We have found that the vast majority of the samples were viable up to 3 days from the blood draw, yet meaningful responses were found in a proportion of those with longer travel times. Furthermore, the shipping time of uncooled samples significantly decreased both the viabilities...

  3. Population pharmacokinetic analysis of clopidogrel in healthy Jordanian subjects with emphasis on optimal sampling strategy.

    Science.gov (United States)

    Yousef, A M; Melhem, M; Xue, B; Arafat, T; Reynolds, D K; Van Wart, S A

    2013-05-01

    Clopidogrel is metabolized primarily into an inactive carboxyl metabolite (clopidogrel-IM) or to a lesser extent an active thiol metabolite. A population pharmacokinetic (PK) model was developed using NONMEM® to describe the time course of clopidogrel-IM in plasma and to design a sparse-sampling strategy to predict clopidogrel-IM exposures for use in characterizing anti-platelet activity. Serial blood samples from 76 healthy Jordanian subjects administered a single 75 mg oral dose of clopidogrel were collected and assayed for clopidogrel-IM using reverse phase high performance liquid chromatography. A two-compartment (2-CMT) PK model with first-order absorption and elimination plus an absorption lag-time was evaluated, as well as a variation of this model designed to mimic enterohepatic recycling (EHC). Optimal PK sampling strategies (OSS) were determined using WinPOPT based upon collection of 3-12 post-dose samples. A two-compartment model with EHC provided the best fit and reduced bias in Cmax (median prediction error (PE%) of 9.58% versus 12.2%) relative to the basic two-compartment model; AUC(0-24) was similar for both models (median PE% = 1.39%). The OSS for fitting the two-compartment model with EHC required the collection of seven samples (0.25, 1, 2, 4, 5, 6 and 12 h). Reasonably unbiased and precise exposures were obtained when re-fitting this model to a reduced dataset considering only these sampling times. A two-compartment model considering EHC best characterized the time course of clopidogrel-IM in plasma. Use of the suggested OSS will allow for the collection of fewer PK samples when assessing clopidogrel-IM exposures. Copyright © 2013 John Wiley & Sons, Ltd.
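
    A minimal sketch of the structural model, assuming hypothetical rate constants and omitting the absorption lag and the enterohepatic-recycling extension: a two-compartment system with first-order absorption and elimination, integrated with SciPy.

```python
import numpy as np
from scipy.integrate import solve_ivp

def two_compartment_oral(t, y, ka, ke, k12, k21):
    """Two-compartment disposition with first-order absorption:
    y = [gut, central, peripheral] drug amounts."""
    gut, c1, c2 = y
    return [-ka * gut,
            ka * gut - (ke + k12) * c1 + k21 * c2,
            k12 * c1 - k21 * c2]

# Hypothetical rate constants (1/h) for illustration only; 75 mg oral dose.
sol = solve_ivp(two_compartment_oral, (0, 24), [75.0, 0.0, 0.0],
                args=(1.2, 0.15, 0.3, 0.2), dense_output=True)
times = np.linspace(0, 24, 25)
print(sol.sol(times)[1])   # central-compartment amount over 24 h
```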

  4. Product-selective blot: a technique for measuring enzyme activities in large numbers of samples and in native electrophoresis gels

    International Nuclear Information System (INIS)

    Thompson, G.A.; Davies, H.M.; McDonald, N.

    1985-01-01

    A method termed product-selective blotting has been developed for screening large numbers of samples for enzyme activity. The technique is particularly well suited to detection of enzymes in native electrophoresis gels. The principle of the method was demonstrated by blotting samples from glutaminase or glutamate synthase reactions into an agarose gel embedded with ion-exchange resin under conditions favoring binding of product (glutamate) over substrates and other substances in the reaction mixture. After washes to remove these unbound substances, the product was measured using either fluorometric staining or radiometric techniques. Glutaminase activity in native electrophoresis gels was visualized by a related procedure in which substrates and products from reactions run in the electrophoresis gel were blotted directly into a resin-containing image gel. Considering the selective-binding materials available for use in the image gel, along with the possible detection systems, this method has potentially broad application

  5. A large replication study and meta-analysis in European samples provides further support for association of AHI1 markers with schizophrenia

    DEFF Research Database (Denmark)

    Ingason, Andrés; Giegling, Ina; Cichon, Sven

    2010-01-01

    The Abelson helper integration site 1 (AHI1) gene locus on chromosome 6q23 is among a group of candidate loci for schizophrenia susceptibility that were initially identified by linkage followed by linkage disequilibrium mapping, and subsequent replication of the association in an independent sample. Here, we present results of a replication study of AHI1 locus markers, previously implicated in schizophrenia, in a large European sample (in total 3907 affected and 7429 controls). Furthermore, we perform a meta-analysis of the implicated markers in 4496 affected and 18,920 controls. Both AHI1 as well as the neighbouring phosphodiesterase 7B (PDE7B) may be considered candidates for involvement in the genetic aetiology of schizophrenia.

  6. Method for Determination of Neptunium in Large-Sized Urine Samples Using Manganese Dioxide Coprecipitation and 242Pu as Yield Tracer

    DEFF Research Database (Denmark)

    Qiao, Jixin; Hou, Xiaolin; Roos, Per

    2013-01-01

    A novel method for bioassay of large volumes of human urine samples using manganese dioxide coprecipitation for preconcentration was developed for rapid determination of 237Np. 242Pu was utilized as a nonisotopic tracer to monitor the chemical yield of 237Np. A sequential injection extraction chromatography ... and rapid analysis of neptunium contamination level for emergency preparedness.

  7. Evidence from a Large Sample on the Effects of Group Size and Decision-Making Time on Performance in a Marketing Simulation Game

    Science.gov (United States)

    Treen, Emily; Atanasova, Christina; Pitt, Leyland; Johnson, Michael

    2016-01-01

    Marketing instructors using simulation games as a way of inducing some realism into a marketing course are faced with many dilemmas. Two important quandaries are the optimal size of groups and how much of the students' time should ideally be devoted to the game. Using evidence from a very large sample of teams playing a simulation game, the study…

  8. MEASURING X-RAY VARIABILITY IN FAINT/SPARSELY SAMPLED ACTIVE GALACTIC NUCLEI

    Energy Technology Data Exchange (ETDEWEB)

    Allevato, V. [Department of Physics, University of Helsinki, Gustaf Haellstroemin katu 2a, FI-00014 Helsinki (Finland); Paolillo, M. [Department of Physical Sciences, University Federico II, via Cinthia 6, I-80126 Naples (Italy); Papadakis, I. [Department of Physics and Institute of Theoretical and Computational Physics, University of Crete, 71003 Heraklion (Greece); Pinto, C. [SRON Netherlands Institute for Space Research, Sorbonnelaan 2, 3584-CA Utrecht (Netherlands)

    2013-07-01

    We study the statistical properties of the normalized excess variance of variability process characterized by a "red-noise" power spectral density (PSD), as in the case of active galactic nuclei (AGNs). We perform Monte Carlo simulations of light curves, assuming both a continuous and a sparse sampling pattern and various signal-to-noise ratios (S/Ns). We show that the normalized excess variance is a biased estimate of the variance even in the case of continuously sampled light curves. The bias depends on the PSD slope and on the sampling pattern, but not on the S/N. We provide a simple formula to account for the bias, which yields unbiased estimates with an accuracy better than 15%. We show that the normalized excess variance estimates based on single light curves (especially for sparse sampling and S/N < 3) are highly uncertain (even if corrected for bias) and we propose instead the use of an "ensemble estimate", based on multiple light curves of the same object, or on the use of light curves of many objects. These estimates have symmetric distributions, known errors, and can also be corrected for biases. We use our results to estimate the ability to measure the intrinsic source variability in current data, and show that they could also be useful in the planning of the observing strategy of future surveys such as those provided by X-ray missions studying distant and/or faint AGN populations and, more in general, in the estimation of the variability amplitude of sources that will result from future surveys such as Pan-STARRS and LSST.
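
    The single-light-curve estimator discussed above, and the ensemble average proposed as a remedy, take only a few lines; the bias correction itself depends on the PSD slope and sampling pattern and is not reproduced here. The simulated flat light curve and all names are illustrative assumptions.

```python
import numpy as np

def normalized_excess_variance(flux, flux_err):
    """Sample variance in excess of the mean squared measurement error,
    normalized by the squared mean flux (the biased per-curve estimator)."""
    return (flux.var(ddof=1) - np.mean(flux_err ** 2)) / flux.mean() ** 2

def ensemble_excess_variance(curves, errors):
    """'Ensemble estimate': average over many light curves of the same
    object (or of many objects) to tame the per-curve scatter."""
    return np.mean([normalized_excess_variance(f, e)
                    for f, e in zip(curves, errors)])

rng = np.random.default_rng(3)
n = 200
flux = 100 + rng.normal(0, 5, n) + rng.normal(0, 2, n)    # signal + noise
print(normalized_excess_variance(flux, np.full(n, 2.0)))  # ~25 / 100**2
```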

  9. Post-traumatic stress syndrome in a large sample of older adults: determinants and quality of life.

    Science.gov (United States)

    Lamoureux-Lamarche, Catherine; Vasiliadis, Helen-Maria; Préville, Michel; Berbiche, Djamal

    2016-01-01

    The aims of this study are to assess the determinants of post-traumatic stress syndrome (PTSS) and its association with quality of life in a sample of older adults consulting in primary care practices. Data came from a large sample of 1765 community-dwelling older adults who were waiting to receive health services in primary care clinics in the province of Quebec. PTSS was measured with the PTSS scale. Socio-demographic and clinical characteristics were used as potential determinants of PTSS. Quality of life was measured with the EuroQol-5D-3L (EQ-5D-3L), the EQ-Visual Analog Scale, and the Satisfaction With Your Life Scale. Multivariate logistic and linear regression models were used to study the presence of PTSS and different measures of health-related quality of life and quality of life as a function of study variables. The six-month prevalence of PTSS was 11.0%. PTSS was associated with age, marital status, number of chronic disorders and the presence of an anxiety disorder. PTSS was also associated with the EQ-5D-3L and the Satisfaction with Your Life Scale. PTSS is prevalent in patients consulting in primary care practices. Primary care physicians should be aware that PTSS is also associated with a decrease in quality of life, which can further negatively impact health status.

  10. High-throughput genotyping assay for the large-scale genetic characterization of Cryptosporidium parasites from human and bovine samples.

    Science.gov (United States)

    Abal-Fabeiro, J L; Maside, X; Llovo, J; Bello, X; Torres, M; Treviño, M; Moldes, L; Muñoz, A; Carracedo, A; Bartolomé, C

    2014-04-01

    The epidemiological study of human cryptosporidiosis requires the characterization of species and subtypes involved in human disease in large sample collections. Molecular genotyping is costly and time-consuming, making the implementation of low-cost, highly efficient technologies increasingly necessary. Here, we designed a protocol based on MALDI-TOF mass spectrometry for the high-throughput genotyping of a panel of 55 single nucleotide variants (SNVs) selected as markers for the identification of common gp60 subtypes of four Cryptosporidium species that infect humans. The method was applied to a panel of 608 human and 63 bovine isolates and the results were compared with control samples typed by Sanger sequencing. The method allowed the identification of species in 610 specimens (90.9%) and gp60 subtype in 605 (90.2%). It displayed excellent performance, with sensitivity and specificity values of 87.3 and 98.0%, respectively. Up to nine genotypes from four different Cryptosporidium species (C. hominis, C. parvum, C. meleagridis and C. felis) were detected in humans; the most common ones were C. hominis subtype Ib, and C. parvum IIa (61.3 and 28.3%, respectively). 96.5% of the bovine samples were typed as IIa. The method performs as well as the widely used Sanger sequencing and is more cost-effective and less time-consuming.

  11. Estimation of the simple correlation coefficient.

    Science.gov (United States)

    Shieh, Gwowen

    2010-11-01

    This article investigates some unfamiliar properties of the Pearson product-moment correlation coefficient for the estimation of the simple correlation coefficient. Although Pearson's r is biased, except for limited situations, and the minimum variance unbiased estimator has been proposed in the literature, researchers routinely employ the sample correlation coefficient in their practical applications, because of its simplicity and popularity. In order to support such practice, this study examines the mean squared errors of r and several prominent formulas. The results reveal specific situations in which the sample correlation coefficient performs better than the unbiased and nearly unbiased estimators, facilitating recommendation of r as an effect size index for the strength of linear association between two variables. In addition, related issues of estimating the squared simple correlation coefficient are also considered.
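
    For reference, two estimators commonly discussed in this literature can be written out directly; these are the standard textbook forms and may differ in detail from the article's notation:

    ```latex
    % Approximately unbiased adjustment of the sample correlation r (sample size n):
    r^{*} \approx r\left[1 + \frac{1 - r^{2}}{2(n - 3)}\right]
    % Olkin-Pratt minimum variance unbiased estimator, using the Gauss
    % hypergeometric function {}_2F_1:
    G(r) = r \, {}_2F_1\!\left(\tfrac{1}{2}, \tfrac{1}{2}; \tfrac{n-2}{2}; 1 - r^{2}\right)
    ```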

  12. CHRONICITY OF DEPRESSION AND MOLECULAR MARKERS IN A LARGE SAMPLE OF HAN CHINESE WOMEN.

    Science.gov (United States)

    Edwards, Alexis C; Aggen, Steven H; Cai, Na; Bigdeli, Tim B; Peterson, Roseann E; Docherty, Anna R; Webb, Bradley T; Bacanu, Silviu-Alin; Flint, Jonathan; Kendler, Kenneth S

    2016-04-25

    Major depressive disorder (MDD) has been associated with changes in mean telomere length and mitochondrial DNA (mtDNA) copy number. This study investigates if clinical features of MDD differentially impact these molecular markers. Data from a large, clinically ascertained sample of Han Chinese women with recurrent MDD were used to examine whether symptom presentation, severity, and comorbidity were related to salivary telomere length and/or mtDNA copy number (maximum N = 5,284 for both molecular and phenotypic data). Structural equation modeling revealed that duration of longest episode was positively associated with mtDNA copy number, while earlier age of onset of most severe episode and a history of dysthymia were associated with shorter telomeres. Other factors, such as symptom presentation, family history of depression, and other comorbid internalizing disorders, were not associated with these molecular markers. Chronicity of depressive symptoms is related to more pronounced telomere shortening and increased mtDNA copy number among individuals with a history of recurrent MDD. As these molecular markers have previously been implicated in physiological aging and morbidity, individuals who experience prolonged depressive symptoms are potentially at greater risk of adverse medical outcomes. © 2016 Wiley Periodicals, Inc.

  13. Replicability of time-varying connectivity patterns in large resting state fMRI samples.

    Science.gov (United States)

    Abrol, Anees; Damaraju, Eswar; Miller, Robyn L; Stephen, Julia M; Claus, Eric D; Mayer, Andrew R; Calhoun, Vince D

    2017-12-01

    The past few years have seen an emergence of approaches that leverage temporal changes in whole-brain patterns of functional connectivity (the chronnectome). In this chronnectome study, we investigate the replicability of the human brain's inter-regional coupling dynamics during rest by evaluating two different dynamic functional network connectivity (dFNC) analysis frameworks using 7 500 functional magnetic resonance imaging (fMRI) datasets. To quantify the extent to which the emergent functional connectivity (FC) patterns are reproducible, we characterize the temporal dynamics by deriving several summary measures across multiple large, independent age-matched samples. Reproducibility was demonstrated through the existence of basic connectivity patterns (FC states) amidst an ensemble of inter-regional connections. Furthermore, application of the methods to conservatively configured (statistically stationary, linear and Gaussian) surrogate datasets revealed that some of the studied state summary measures were indeed statistically significant and also suggested that this class of null model did not explain the fMRI data fully. This extensive testing of reproducibility of similarity statistics also suggests that the estimated FC states are robust against variation in data quality, analysis, grouping, and decomposition methods. We conclude that future investigations probing the functional and neurophysiological relevance of time-varying connectivity assume critical importance. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Test sample handling apparatus

    International Nuclear Information System (INIS)

    1981-01-01

    A test sample handling apparatus using automatic scintillation counting for gamma detection, for use in such fields as radioimmunoassay, is described. The apparatus automatically and continuously counts large numbers of samples rapidly and efficiently by the simultaneous counting of two samples. By means of sequential ordering of non-sequential counting data, it is possible to obtain precisely ordered data while utilizing sample carrier holders having a minimum length. (U.K.)

  15. Statistical searches for microlensing events in large, non-uniformly sampled time-domain surveys: A test using palomar transient factory data

    Energy Technology Data Exchange (ETDEWEB)

    Price-Whelan, Adrian M.; Agüeros, Marcel A. [Department of Astronomy, Columbia University, 550 W 120th Street, New York, NY 10027 (United States); Fournier, Amanda P. [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93106 (United States); Street, Rachel [Las Cumbres Observatory Global Telescope Network, Inc., 6740 Cortona Drive, Suite 102, Santa Barbara, CA 93117 (United States); Ofek, Eran O. [Benoziyo Center for Astrophysics, Weizmann Institute of Science, 76100 Rehovot (Israel); Covey, Kevin R. [Lowell Observatory, 1400 West Mars Hill Road, Flagstaff, AZ 86001 (United States); Levitan, David; Sesar, Branimir [Division of Physics, Mathematics, and Astronomy, California Institute of Technology, Pasadena, CA 91125 (United States); Laher, Russ R.; Surace, Jason, E-mail: adrn@astro.columbia.edu [Spitzer Science Center, California Institute of Technology, Mail Stop 314-6, Pasadena, CA 91125 (United States)

    2014-01-20

    Many photometric time-domain surveys are driven by specific goals, such as searches for supernovae or transiting exoplanets, which set the cadence with which fields are re-imaged. In the case of the Palomar Transient Factory (PTF), several sub-surveys are conducted in parallel, leading to non-uniform sampling over its ∼20,000 deg² footprint. While the median 7.26 deg² PTF field has been imaged ∼40 times in the R band, ∼2300 deg² have been observed >100 times. We use PTF data to study the trade-off between searching for microlensing events in a survey whose footprint is much larger than that of typical microlensing searches, but with far-from-optimal time sampling. To examine the probability that microlensing events can be recovered in these data, we test statistics used on uniformly sampled data to identify variables and transients. We find that the von Neumann ratio performs best for identifying simulated microlensing events in our data. We develop a selection method using this statistic and apply it to data from fields with >10 R-band observations, 1.1 × 10⁹ light curves, uncovering three candidate microlensing events. We lack simultaneous, multi-color photometry to confirm these as microlensing events. However, their number is consistent with predictions for the event rate in the PTF footprint over the survey's three years of operations, as estimated from near-field microlensing models. This work can help constrain all-sky event rate predictions and tests microlensing signal recovery in large data sets, which will be useful to future time-domain surveys, such as that planned with the Large Synoptic Survey Telescope.
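
    For concreteness, one common form of the von Neumann ratio statistic mentioned above, with toy data rather than the survey's pipeline; the definition and threshold behavior are standard, everything else is illustrative.

    ```python
    # von Neumann ratio: mean squared successive difference over the sample
    # variance; ~2 for uncorrelated noise, well below 2 for smooth variability
    # such as a microlensing bump.
    import numpy as np

    def von_neumann_ratio(mag):
        mag = np.asarray(mag, dtype=float)
        delta2 = np.sum(np.diff(mag) ** 2) / (len(mag) - 1)
        return delta2 / np.var(mag, ddof=1)

    rng = np.random.default_rng(0)
    t = np.linspace(-2, 2, 100)
    noise = rng.normal(0, 0.1, t.size)
    print(von_neumann_ratio(noise))                      # ~2: flat light curve
    print(von_neumann_ratio(1 / (t**2 + 0.3) + noise))   # << 2: smooth bump
    ```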

  16. The Data-Constrained Generalized Maximum Entropy Estimator of the GLM: Asymptotic Theory and Inference

    Directory of Open Access Journals (Sweden)

    Nicholas Scott Cardell

    2013-05-01

    Maximum entropy methods of parameter estimation are appealing because they impose no additional structure on the data, other than that explicitly assumed by the analyst. In this paper we prove that the data constrained GME estimator of the general linear model is consistent and asymptotically normal. The approach we take in establishing the asymptotic properties concomitantly identifies a new computationally efficient method for calculating GME estimates. Formulae are developed to compute asymptotic variances and to perform Wald, likelihood ratio, and Lagrangian multiplier statistical tests on model parameters. Monte Carlo simulations are provided to assess the performance of the GME estimator in both large and small sample situations. Furthermore, we extend our results to maximum cross-entropy estimators and indicate a variant of the GME estimator that is unbiased. Finally, we discuss the relationship of GME estimators to Bayesian estimators, pointing out the conditions under which an unbiased GME estimator would be efficient.

  17. A sampling scheme intended for tandem measurements of sodium transport and microvillous surface area in the coprodaeal epithelium of hens on high- and low-salt diets.

    Science.gov (United States)

    Mayhew, T M; Dantzer, V; Elbrønd, V S; Skadhauge, E

    1990-12-01

    A tissue sampling protocol for combined morphometric and physiological studies on the mucosa of the avian coprodaeum is presented. The morphometric goal is to estimate the surface area due to microvilli at the epithelial cell apex and the proposed scheme is illustrated using material from three White Plymouth Rock hens. The scheme is designed to satisfy sampling requirements for the unbiased estimation of surface areas by vertical sectioning coupled with cycloid test lines and it incorporates a number of useful internal checks. It relies on multi-level sampling with four levels of stereological estimation. At Level I, macroscopic estimates of coprodaeal volume are obtained. Light microscopy is employed at Level II to calculate epithelial volume density. Levels III and IV require low and high power electron microscopy to estimate the surface density of the epithelial apical border and the amplification factor due to microvilli. Worked examples of the calculation steps are provided.
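
    In the usual stereological notation (not necessarily the article's own symbols), the vertical-sections estimator that cycloid test lines enable is:

    ```latex
    % Surface density from vertical sections with cycloid test lines
    % (Baddeley, Gundersen & Cruz-Orive 1986): I = number of intersections
    % between the cycloids and the surface trace, L_T = total test-line
    % length at the scale of the tissue.
    S_V = \frac{2I}{L_T}
    ```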

  18. EFFECTS OF LONG-TERM ALENDRONATE TREATMENT ON A LARGE SAMPLE OF PEDIATRIC PATIENTS WITH OSTEOGENESIS IMPERFECTA.

    Science.gov (United States)

    Lv, Fang; Liu, Yi; Xu, Xiaojie; Wang, Jianyi; Ma, Doudou; Jiang, Yan; Wang, Ou; Xia, Weibo; Xing, Xiaoping; Yu, Wei; Li, Mei

    2016-12-01

    Osteogenesis imperfecta (OI) is a group of inherited diseases characterized by reduced bone mass, recurrent bone fractures, and progressive bone deformities. Here, we evaluate the efficacy and safety of long-term treatment with alendronate in a large sample of Chinese children and adolescents with OI. In this prospective study, a total of 91 children and adolescents with OI were included. The patients received 3 years' treatment with 70 mg alendronate weekly and 500 mg calcium daily. During the treatment, fracture incidence, bone mineral density (BMD), and serum levels of the bone turnover biomarkers (alkaline phosphatase [ALP] and cross-linked C-telopeptide of type I collagen [β-CTX]) were evaluated. Linear growth speed and parameters of safety were also measured. After 3 years of treatment, the mean annual fracture incidence decreased from 1.2 ± 0.8 to 0.2 ± 0.3 (P < .05). Abbreviations: OI = osteogenesis imperfecta; PTH = parathyroid hormone.

  19. Acceptance sampling using judgmental and randomly selected samples

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl

    2010-09-01

    We present a Bayesian model for acceptance sampling where the population consists of two groups, each with different levels of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail, and discuss the conditions necessary to ensure that required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems, and, in particular, to environmental sampling where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.
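
    The "simpler Bayesian formulations" noted above as special cases can be made concrete. With a Beta prior on the fraction of acceptable items, n randomly sampled items all observed acceptable, and an effectively infinite population, the posterior tail probability follows in one line; this is an illustrative simplification, not the paper's two-group judgmental model.

    ```python
    # Beta-binomial acceptance sampling: Beta(a, b) prior on the acceptable
    # fraction p; observing n of n items acceptable gives posterior Beta(a+n, b).
    from scipy.stats import beta

    def prob_acceptable(n, p0, a=1.0, b=1.0):
        """Posterior probability that the acceptable fraction exceeds p0."""
        return beta.sf(p0, a + n, b)

    # with a uniform prior, 59 clean samples give ~95% probability that
    # at least 95% of the population is acceptable
    print(prob_acceptable(n=59, p0=0.95))
    ```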

  20. Photoconductivity of Selenium and Sulphur Doped a-Si:H thin Films

    OpenAIRE

    SHARMA, Sanjeev Kumar; GUPTA, Himanshu

    2014-01-01

    The accurate estimation of tree biomass is crucial for the efficient management of forest resources. In this study, we used a subsampling method to obtain unbiased estimates of above-ground tree biomass. The method consists of two stages: the first uses randomized branch sampling (RBS) and the second importance sampling (IS). RBS is used to select a path from the butt of an object branch to a terminal segment. IS is used to select a disk that produces unbiased estimates of...
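
    The unbiasedness of such two-stage schemes rests on dividing each selected value by its selection probability. A minimal one-stage sketch of that idea, with invented branch masses (hypothetical numbers, not data from the study):

    ```python
    # Probability-proportional-to-size selection of one branch; the estimate
    # value/probability is unbiased for the population total.
    import numpy as np

    rng = np.random.default_rng(1)
    mass = np.array([4.0, 2.5, 1.0, 0.5])      # hypothetical branch biomasses
    size = np.array([40.0, 30.0, 20.0, 10.0])  # auxiliary size measure
    p = size / size.sum()                      # selection probabilities

    draws = rng.choice(len(mass), size=100_000, p=p)
    estimates = mass[draws] / p[draws]         # one unbiased estimate per draw
    print(estimates.mean(), mass.sum())        # both close to 8.0
    ```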

  1. Explaining health care expenditure variation: large-sample evidence using linked survey and health administrative data.

    Science.gov (United States)

    Ellis, Randall P; Fiebig, Denzil G; Johar, Meliyanni; Jones, Glenn; Savage, Elizabeth

    2013-09-01

    Explaining individual, regional, and provider variation in health care spending is of enormous value to policymakers but is often hampered by the lack of individual level detail in universal public health systems because budgeted spending is often not attributable to specific individuals. Even rarer is self-reported survey information that helps explain this variation in large samples. In this paper, we link a cross-sectional survey of 267 188 Australians age 45 and over to a panel dataset of annual healthcare costs calculated from several years of hospital, medical and pharmaceutical records. We use this data to distinguish between cost variations due to health shocks and those that are intrinsic (fixed) to an individual over three years. We find that high fixed expenditures are positively associated with age, especially older males, poor health, obesity, smoking, cancer, stroke and heart conditions. Being foreign born, speaking a foreign language at home and low income are more strongly associated with higher time-varying expenditures, suggesting greater exposure to adverse health shocks. Copyright © 2013 John Wiley & Sons, Ltd.

  2. Optimism and self-esteem are related to sleep. Results from a large community-based sample.

    Science.gov (United States)

    Lemola, Sakari; Räikkönen, Katri; Gomez, Veronica; Allemand, Mathias

    2013-12-01

    There is evidence that positive personality characteristics, such as optimism and self-esteem, are important for health. Less is known about possible determinants of positive personality characteristics. To test the relationship of optimism and self-esteem with insomnia symptoms and sleep duration. Sleep parameters, optimism, and self-esteem were assessed by self-report in a community-based sample of 1,805 adults aged between 30 and 84 years in the USA. Moderation of the relation between sleep and positive characteristics by gender and age as well as potential confounding of the association by depressive disorder was tested. Individuals with insomnia symptoms scored lower on optimism and self-esteem largely independent of age and sex, controlling for symptoms of depression and sleep duration. Short sleep duration (self-esteem when compared to individuals sleeping 7-8 h, controlling depressive symptoms. Long sleep duration (>9 h) was also related to low optimism and self-esteem independent of age and sex. Good and sufficient sleep is associated with positive personality characteristics. This relationship is independent of the association between poor sleep and depression.

  3. Construct validity of the Groningen Frailty Indicator established in a large sample of home-dwelling elderly persons : Evidence of stability across age and gender

    NARCIS (Netherlands)

    Peters, L. L.; Boter, H.; Burgerhof, J. G. M.; Slaets, J. P. J.; Buskens, E.

    Background: The primary objective of the present study was to evaluate the validity of the Groningen frailty Indicator (GFI) in a sample of Dutch elderly persons participating in LifeLines, a large population-based cohort study. Additional aims were to assess differences between frail and non-frail

  4. Efficiently sampling conformations and pathways using the concurrent adaptive sampling (CAS) algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Surl-Hee; Grate, Jay W.; Darve, Eric F.

    2017-08-21

    Molecular dynamics (MD) simulations are useful in obtaining thermodynamic and kinetic properties of bio-molecules but are limited by the timescale barrier, i.e., we may be unable to efficiently obtain properties because we need to run microseconds or longer simulations using femtosecond time steps. While there are several existing methods to overcome this timescale barrier and efficiently sample thermodynamic and/or kinetic properties, problems remain in regard to being able to sample unknown systems, deal with the high-dimensional space of collective variables, and focus the computational effort on slow timescales. Hence, a new sampling method, called the "Concurrent Adaptive Sampling (CAS) algorithm," has been developed to tackle these three issues and efficiently obtain conformations and pathways. The method is not constrained to use only one or two collective variables, unlike most reaction coordinate-dependent methods. Instead, it can use a large number of collective variables and uses macrostates (a partition of the collective variable space) to enhance the sampling. The exploration is done by running a large number of short simulations, and a clustering technique is used to accelerate the sampling. In this paper, we introduce the new methodology and show results from two-dimensional models and bio-molecules, such as penta-alanine and triazine polymer.
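
    A toy 1D illustration of the exploration loop described above (short trajectories, a partition of collective-variable space into macrostates, respawning across occupied macrostates). The real CAS algorithm also reweights walkers and clusters in higher-dimensional collective-variable spaces, so this sketch only conveys the flavor.

    ```python
    # Macrostate-driven adaptive sampling on a 1D double well: spreading the
    # walker budget evenly over occupied bins ratchets walkers over the barrier
    # far sooner than brute-force trajectories would.
    import numpy as np

    rng = np.random.default_rng(0)
    force = lambda x: -4 * x * (x**2 - 1)   # from V(x) = (x^2 - 1)^2

    def short_traj(x, steps=100, dt=1e-3, kT=0.5):
        for _ in range(steps):
            x += force(x) * dt + np.sqrt(2 * kT * dt) * rng.normal()
        return x

    walkers = np.full(20, -1.0)             # all start in the left well
    for _ in range(200):
        ends = np.array([short_traj(x) for x in walkers])
        bins = np.digitize(ends, np.linspace(-2, 2, 21))   # macrostates
        reps = [rng.choice(np.where(bins == b)[0]) for b in np.unique(bins)]
        walkers = ends[np.resize(reps, walkers.size)]      # equal respawn
    print(np.round(np.sort(walkers), 2))    # typically spans both wells
    ```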

  5. BROAD ABSORPTION LINE DISAPPEARANCE ON MULTI-YEAR TIMESCALES IN A LARGE QUASAR SAMPLE

    Energy Technology Data Exchange (ETDEWEB)

    Filiz Ak, N.; Brandt, W. N.; Schneider, D. P. [Department of Astronomy and Astrophysics, Pennsylvania State University, University Park, PA 16802 (United States); Hall, P. B. [Department of Physics and Astronomy, York University, 4700 Keele St., Toronto, Ontario M3J 1P3 (Canada); Anderson, S. F.; Gibson, R. R. [Astronomy Department, University of Washington, Seattle, WA 98195 (United States); Lundgren, B. F. [Department of Physics, Yale University, New Haven, CT 06511 (United States); Myers, A. D. [Department of Physics and Astronomy, University of Wyoming, Laramie, WY 82071 (United States); Petitjean, P. [Institut d' Astrophysique de Paris, Universite Paris 6, F-75014, Paris (France); Ross, Nicholas P. [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 92420 (United States); Shen Yue [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, MS-51, Cambridge, MA 02138 (United States); York, D. G. [Department of Astronomy and Astrophysics, and Enrico Fermi Institute, University of Chicago, 5640 S. Ellis Ave., Chicago, IL 60637 (United States); Bizyaev, D.; Brinkmann, J.; Malanushenko, E.; Oravetz, D. J.; Pan, K.; Simmons, A. E. [Apache Point Observatory, P.O. Box 59, Sunspot, NM 88349-0059 (United States); Weaver, B. A., E-mail: nfilizak@astro.psu.edu [Center for Cosmology and Particle Physics, New York University, New York, NY 10003 (United States)

    2012-10-01

    We present 21 examples of C IV broad absorption line (BAL) trough disappearance in 19 quasars selected from systematic multi-epoch observations of 582 bright BAL quasars (1.9 < z < 4.5) by the Sloan Digital Sky Survey-I/II (SDSS-I/II) and SDSS-III. The observations span 1.1-3.9 yr rest-frame timescales, longer than have been sampled in many previous BAL variability studies. On these timescales, ≈2.3% of C IV BAL troughs disappear and ≈3.3% of BAL quasars show a disappearing trough. These observed frequencies suggest that many C IV BAL absorbers spend on average at most a century along our line of sight to their quasar. Ten of the 19 BAL quasars showing C IV BAL disappearance have apparently transformed from BAL to non-BAL quasars; these are the first reported examples of such transformations. The BAL troughs that disappear tend to be those with small-to-moderate equivalent widths, relatively shallow depths, and high outflow velocities. Other non-disappearing C IV BALs in those nine objects having multiple troughs tend to weaken when one of them disappears, indicating a connection between the disappearing and non-disappearing troughs, even for velocity separations as large as 10,000-15,000 km s⁻¹. We discuss possible origins of this connection including disk-wind rotation and changes in shielding gas.

  6. Big Data, Small Sample.

    Science.gov (United States)

    Gerlovina, Inna; van der Laan, Mark J; Hubbard, Alan

    2017-05-20

    Multiple comparisons and small sample size, common characteristics of many types of "Big Data" including those that are produced by genomic studies, present specific challenges that affect reliability of inference. Use of multiple testing procedures necessitates calculation of very small tail probabilities of a test statistic distribution. Results based on large deviation theory provide a formal condition that is necessary to guarantee error rate control given practical sample sizes, linking the number of tests and the sample size; this condition, however, is rarely satisfied. Using methods that are based on Edgeworth expansions (relying especially on the work of Peter Hall), we explore the impact of departures of sampling distributions from typical assumptions on actual error rates. Our investigation illustrates how far the actual error rates can be from the declared nominal levels, suggesting potentially widespread problems with error rate control, specifically excessive false positives. This is an important factor that contributes to "reproducibility crisis". We also review some other commonly used methods (such as permutation and methods based on finite sampling inequalities) in their application to multiple testing/small sample data. We point out that Edgeworth expansions, providing higher order approximations to the sampling distribution, offer a promising direction for data analysis that could improve reliability of studies relying on large numbers of comparisons with modest sample sizes.
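
    The scale of the problem is easy to demonstrate even without Edgeworth machinery: at tail thresholds typical of multiple-testing corrections, an exact small-sample distribution and its large-sample normal approximation disagree by orders of magnitude. The setup below is illustrative only.

    ```python
    # Nominal (normal) vs exact (t with 4 df) tail probabilities at thresholds
    # typical of genomic multiple testing: the true false-positive rate can be
    # thousands of times the declared one.
    from scipy.stats import norm, t

    for z in (3, 4, 5):
        nominal = norm.sf(z)
        actual = t.sf(z, df=4)     # e.g. two groups of ~3 observations
        print(f"z={z}: nominal {nominal:.2e}, actual {actual:.2e}, "
              f"ratio {actual / nominal:.0f}x")
    ```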

  7. Comparison of blood RNA isolation methods from samples stabilized in Tempus tubes and stored at a large human biobank.

    Science.gov (United States)

    Aarem, Jeanette; Brunborg, Gunnar; Aas, Kaja K; Harbak, Kari; Taipale, Miia M; Magnus, Per; Knudsen, Gun Peggy; Duale, Nur

    2016-09-01

    More than 50,000 adult and cord blood samples were collected in Tempus tubes and stored at the Norwegian Institute of Public Health Biobank for future use. In this study, we systematically evaluated and compared five blood-RNA isolation protocols: three blood-RNA isolation protocols optimized for simultaneous isolation of all blood-RNA species (MagMAX RNA Isolation Kit, both manual and semi-automated protocols; and Norgen Preserved Blood RNA kit I); and two protocols optimized for large RNAs only (Tempus Spin RNA, and Tempus 6-port isolation kit). We estimated the following parameters: RNA quality, RNA yield, processing time, cost per sample, and RNA transcript stability of six selected mRNAs and 13 miRNAs using real-time qPCR. Whole blood samples from adults (n = 59 tubes) and umbilical cord blood samples (n = 18 tubes) collected in Tempus tubes were analyzed. High-quality blood-RNAs with average RIN-values above seven were extracted using all five RNA isolation protocols. The transcript levels of the six selected genes showed minimal variation between the five protocols. Unexplained differences in the transcript levels of the 13 miRNAs were observed; however, the miRNAs had similar expression direction and were within the same order of magnitude. Some differences in the RNA processing time and cost were noted. Sufficient amounts of high-quality RNA were obtained using all five protocols, and the Tempus blood RNA system therefore seems not to be dependent on one specific RNA isolation method.

  8. Increased body mass index predicts severity of asthma symptoms but not objective asthma traits in a large sample of asthmatics

    DEFF Research Database (Denmark)

    Bildstrup, Line; Backer, Vibeke; Thomsen, Simon Francis

    2015-01-01

    AIM: To examine the relationship between body mass index (BMI) and different indicators of asthma severity in a large community-based sample of Danish adolescents and adults. METHODS: A total of 1186 subjects, 14-44 years of age, who in a screening questionnaire had reported a history of airway...... symptoms suggestive of asthma and/or allergy, or who were taking any medication for these conditions were clinically examined. All participants were interviewed about respiratory symptoms and furthermore height and weight, skin test reactivity, lung function, and airway responsiveness were measured...

  9. An HI selected sample of galaxies : The HI mass function and the surface brightness distribution

    NARCIS (Netherlands)

    Zwaan, M; Briggs, F; Sprayberry, D

    Results from the Arecibo HI Strip Survey, an unbiased extragalactic HI survey, combined with optical and 21 cm follow-up observations, determine the HI mass function and the cosmological mass density of HI at the present epoch. Both are consistent with earlier estimates, computed for the population

  10. Examining the interrater reliability of the Hare Psychopathy Checklist-Revised across a large sample of trained raters.

    Science.gov (United States)

    Blais, Julie; Forth, Adelle E; Hare, Robert D

    2017-06-01

    The goal of the current study was to assess the interrater reliability of the Psychopathy Checklist-Revised (PCL-R) among a large sample of trained raters (N = 280). All raters completed PCL-R training at some point between 1989 and 2012 and subsequently provided complete coding for the same 6 practice cases. Overall, 3 major conclusions can be drawn from the results: (a) reliability of individual PCL-R items largely fell below any appropriate standards while the estimates for Total PCL-R scores and factor scores were good (but not excellent); (b) the cases representing individuals with high psychopathy scores showed better reliability than did the cases of individuals in the moderate to low PCL-R score range; and (c) there was a high degree of variability among raters; however, rater specific differences had no consistent effect on scoring the PCL-R. Therefore, despite low reliability estimates for individual items, Total scores and factor scores can be reliably scored among trained raters. We temper these conclusions by noting that scoring standardized videotaped case studies does not allow the rater to interact directly with the offender. Real-world PCL-R assessments typically involve a face-to-face interview and much more extensive collateral information. We offer recommendations for new web-based training procedures. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  11. Discovery and prioritization of somatic mutations in diffuse large B-cell lymphoma (DLBCL) by whole-exome sequencing

    Science.gov (United States)

    Lohr, Jens G.; Stojanov, Petar; Lawrence, Michael S.; Auclair, Daniel; Chapuy, Bjoern; Sougnez, Carrie; Cruz-Gordillo, Peter; Knoechel, Birgit; Asmann, Yan W.; Slager, Susan L.; Novak, Anne J.; Dogan, Ahmet; Ansell, Stephen M.; Zou, Lihua; Gould, Joshua; Saksena, Gordon; Stransky, Nicolas; Rangel-Escareño, Claudia; Fernandez-Lopez, Juan Carlos; Hidalgo-Miranda, Alfredo; Melendez-Zajgla, Jorge; Hernández-Lemus, Enrique; Schwarz-Cruz y Celis, Angela; Imaz-Rosshandler, Ivan; Ojesina, Akinyemi I.; Jung, Joonil; Pedamallu, Chandra S.; Lander, Eric S.; Habermann, Thomas M.; Cerhan, James R.; Shipp, Margaret A.; Getz, Gad; Golub, Todd R.

    2012-01-01

    To gain insight into the genomic basis of diffuse large B-cell lymphoma (DLBCL), we performed massively parallel whole-exome sequencing of 55 primary tumor samples from patients with DLBCL and matched normal tissue. We identified recurrent mutations in genes that are well known to be functionally relevant in DLBCL, including MYD88, CARD11, EZH2, and CREBBP. We also identified somatic mutations in genes for which a functional role in DLBCL has not been previously suspected. These genes include MEF2B, MLL2, BTG1, GNA13, ACTB, P2RY8, PCLO, and TNFRSF14. Further, we show that BCL2 mutations commonly occur in patients with BCL2/IgH rearrangements as a result of somatic hypermutation normally occurring at the IgH locus. The BCL2 point mutations are primarily synonymous, and likely caused by activation-induced cytidine deaminase–mediated somatic hypermutation, as shown by comprehensive analysis of enrichment of mutations in WRCY target motifs. Those nonsynonymous mutations that are observed tend to be found outside of the functionally important BH domains of the protein, suggesting that strong negative selection against BCL2 loss-of-function mutations is at play. Last, by using an algorithm designed to identify likely functionally relevant but infrequent mutations, we identify KRAS, BRAF, and NOTCH1 as likely drivers of DLBCL pathogenesis in some patients. Our data provide an unbiased view of the landscape of mutations in DLBCL, and this in turn may point toward new therapeutic strategies for the disease. PMID:22343534

  12. Sample Return Robot

    Data.gov (United States)

    National Aeronautics and Space Administration — This Challenge requires demonstration of an autonomous robotic system to locate and collect a set of specific sample types from a large planetary analog area and...

  13. The Depression Anxiety Stress Scales (DASS): normative data and latent structure in a large non-clinical sample.

    Science.gov (United States)

    Crawford, John R; Henry, Julie D

    2003-06-01

    To provide UK normative data for the Depression Anxiety and Stress Scale (DASS) and test its convergent, discriminant and construct validity. Cross-sectional, correlational and confirmatory factor analysis (CFA). The DASS was administered to a non-clinical sample, broadly representative of the general adult UK population (N = 1,771) in terms of demographic variables. Competing models of the latent structure of the DASS were derived from theoretical and empirical sources and evaluated using confirmatory factor analysis. Correlational analysis was used to determine the influence of demographic variables on DASS scores. The convergent and discriminant validity of the measure was examined through correlating the measure with two other measures of depression and anxiety (the HADS and the sAD), and a measure of positive and negative affectivity (the PANAS). The best fitting model (CFI = .93) of the latent structure of the DASS consisted of three correlated factors corresponding to the depression, anxiety and stress scales with correlated error permitted between items comprising the DASS subscales. Demographic variables had only very modest influences on DASS scores. The reliability of the DASS was excellent, and the measure possessed adequate convergent and discriminant validity. Conclusions: The DASS is a reliable and valid measure of the constructs it was intended to assess. The utility of this measure for UK clinicians is enhanced by the provision of large sample normative data.

  14. Human blood RNA stabilization in samples collected and transported for a large biobank

    Science.gov (United States)

    2012-01-01

    Background The Norwegian Mother and Child Cohort Study (MoBa) is a nation-wide population-based pregnancy cohort initiated in 1999, comprising more than 108,000 pregnancies recruited between 1999 and 2008. In this study we evaluated the feasibility of integrating RNA analyses into existing MoBa protocols. We compared two different blood RNA collection tube systems – the PAXgene™ Blood RNA system and the Tempus™ Blood RNA system - and assessed the effects of suboptimal blood volumes in collection tubes and of transportation of blood samples by standard mail. Endpoints to characterize the samples were RNA quality and yield, and the RNA transcript stability of selected genes. Findings High-quality RNA could be extracted from blood samples stabilized with both PAXgene and Tempus tubes. The RNA yields obtained from the blood samples collected in Tempus tubes were consistently higher than from PAXgene tubes. Higher RNA yields were obtained from cord blood (3 – 4 times) compared to adult blood with both types of tubes. Transportation of samples by standard mail had moderate effects on RNA quality and RNA transcript stability; the overall RNA quality of the transported samples was high. Some unexplained changes in gene expression were noted, which seemed to correlate with suboptimal blood volumes collected in the tubes. Temperature variations during transportation may also be of some importance. Conclusions Our results strongly suggest that special collection tubes are necessary for RNA stabilization and they should be used for establishing new biobanks. We also show that the 50,000 samples collected in the MoBa biobank provide RNA of high quality and in sufficient amounts to allow gene expression analyses for studying the association of disease with altered patterns of gene expression. PMID:22988904

  15. Generalized sampling in Julia

    DEFF Research Database (Denmark)

    Jacobsen, Christian Robert Dahl; Nielsen, Morten; Rasmussen, Morten Grud

    2017-01-01

    Generalized sampling is a numerically stable framework for obtaining reconstructions of signals in different bases and frames from their samples. For example, one can use wavelet bases for reconstruction given frequency measurements. In this paper, we will introduce a carefully documented toolbox...... for performing generalized sampling in Julia. Julia is a new language for technical computing with focus on performance, which is ideally suited to handle the large size problems often encountered in generalized sampling. The toolbox provides specialized solutions for the setup of Fourier bases and wavelets....... The performance of the toolbox is compared to existing implementations of generalized sampling in MATLAB....

  16. Thinking about dying and trying and intending to die: results on suicidal behavior from a large Web-based sample.

    Science.gov (United States)

    de Araújo, Rafael M F; Mazzochi, Leonardo; Lara, Diogo R; Ottoni, Gustavo L

    2015-03-01

    Suicide is an important worldwide public health problem. The aim of this study was to characterize risk factors of suicidal behavior using a large Web-based sample. The data were collected by the Brazilian Internet Study on Temperament and Psychopathology (BRAINSTEP) from November 2010 to July 2011. Suicidal behavior was assessed by an instrument based on the Suicidal Behaviors Questionnaire. The final sample consisted of 48,569 volunteers (25.9% men) with a mean ± SD age of 30.7 ± 10.1 years. More than 60% of the sample reported having had at least a passing thought of killing themselves, and 6.8% of subjects had previously attempted suicide (64% unplanned). The demographic features with the highest risk of attempting suicide were female gender (OR = 1.82, 95% CI = 1.65 to 2.00); elementary school as highest education level completed (OR = 2.84, 95% CI = 2.48 to 3.25); being unable to work (OR = 5.32, 95% CI = 4.15 to 6.81); having no religion (OR = 2.08, 95% CI = 1.90 to 2.29); and, only for female participants, being married (OR = 1.19, 95% CI = 1.08 to 1.32) or divorced (OR = 1.66, 95% CI = 1.41 to 1.96). A family history of a suicide attempt and of a completed suicide showed the same increment in the risk of suicidal behavior. The higher the number of suicide attempts, the higher was the real intention to die (P < .05). Those who really wanted to die reported more emptiness/loneliness (OR = 1.58, 95% CI = 1.35 to 1.85), disconnection (OR = 1.54, 95% CI = 1.30 to 1.81), and hopelessness (OR = 1.74, 95% CI = 1.49 to 2.03), but their methods were not different from the methods of those who did not mean to die. This large Web survey confirmed results from previous studies on suicidal behavior and pointed out the relevance of the number of previous suicide attempts and of a positive family history, even for a noncompleted suicide, as important risk factors. © Copyright 2015 Physicians Postgraduate Press, Inc.

  17. 'Intelligent' approach to radioimmunoassay sample counting employing a microprocessor controlled sample counter

    International Nuclear Information System (INIS)

    Ekins, R.P.; Sufi, S.; Malan, P.G.

    1977-01-01

    The enormous impact on medical science in the last two decades of microanalytical techniques employing radioisotopic labels has, in turn, generated a large demand for automatic radioisotopic sample counters. Such instruments frequently comprise the most important item of capital equipment required in the use of radioimmunoassay and related techniques and often form a principal bottleneck in the flow of samples through a busy laboratory. It is therefore particularly imperative that such instruments should be used 'intelligently' and in an optimal fashion to avoid both the very large capital expenditure involved in the unnecessary proliferation of instruments and the time delays arising from their sub-optimal use. The majority of the current generation of radioactive sample counters nevertheless rely on primitive control mechanisms based on a simplistic statistical theory of radioactive sample counting which preclude their efficient and rational use. The fundamental principle upon which this approach is based is that it is useless to continue counting a radioactive sample for a time longer than that required to yield a significant increase in precision of the measurement. Thus, since substantial experimental errors occur during sample preparation, these errors should be assessed and must be related to the counting errors for that sample. It is the objective of this presentation to demonstrate that the combination of a realistic statistical assessment of radioactive sample measurement, together with the more sophisticated control mechanisms that modern microprocessor technology make possible, may often enable savings in counter usage of the order of 5-10 fold to be made. (orig.)
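
    The statistical principle invoked here can be stated compactly: Poisson counting error shrinks as one over the square root of the accumulated counts, so counting beyond the point where it is negligible next to sample-preparation error is wasted instrument time. In standard notation (our formulation, not the presentation's):

    ```latex
    % Relative counting error after accumulating N counts (Poisson statistics),
    % and a natural stopping rule against the preparation error CV_prep:
    \frac{\sigma_N}{N} = \frac{1}{\sqrt{N}},
    \qquad \text{stop counting once } \frac{1}{\sqrt{N}} \ll CV_{\text{prep}}
    ```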

  18. Statistical process control charts for attribute data involving very large sample sizes: a review of problems and solutions.

    Science.gov (United States)

    Mohammed, Mohammed A; Panesar, Jagdeep S; Laney, David B; Wilson, Richard

    2013-04-01

    The use of statistical process control (SPC) charts in healthcare is increasing. The primary purpose of SPC is to distinguish between common-cause variation which is attributable to the underlying process, and special-cause variation which is extrinsic to the underlying process. This is important because improvement under common-cause variation requires action on the process, whereas special-cause variation merits an investigation to first find the cause. Nonetheless, when dealing with attribute or count data (eg, number of emergency admissions) involving very large sample sizes, traditional SPC charts often produce tight control limits with most of the data points appearing outside the control limits. This can give a false impression of common and special-cause variation, and potentially misguide the user into taking the wrong actions. Given the growing availability of large datasets from routinely collected databases in healthcare, there is a need to present a review of this problem (which arises because traditional attribute charts only consider within-subgroup variation) and its solutions (which consider within and between-subgroup variation), which involve the use of the well-established measurements chart and the more recently developed attribute charts based on Laney's innovative approach. We close by making some suggestions for practice.
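
    For concreteness, a minimal sketch of Laney's p′ limits as commonly described: each subgroup proportion is converted to a z-score, the between-subgroup variation is estimated from the moving range of those z-scores, and the usual binomial limits are widened by that factor. Variable names are ours; consult the original for edge cases.

    ```python
    # Laney p'-chart limits: binomial within-subgroup sigma inflated by sigma_z,
    # a moving-range estimate of between-subgroup variation.
    import numpy as np

    def laney_p_prime_limits(counts, sizes):
        counts = np.asarray(counts, dtype=float)
        sizes = np.asarray(sizes, dtype=float)
        p = counts / sizes
        pbar = counts.sum() / sizes.sum()
        sigma_i = np.sqrt(pbar * (1 - pbar) / sizes)   # per-subgroup binomial sigma
        z = (p - pbar) / sigma_i
        sigma_z = np.mean(np.abs(np.diff(z))) / 1.128  # moving-range estimate
        ucl = pbar + 3 * sigma_i * sigma_z
        lcl = np.clip(pbar - 3 * sigma_i * sigma_z, 0, None)
        return pbar, lcl, ucl

    # toy monthly admissions data (hypothetical)
    print(laney_p_prime_limits([520, 610, 480], [10000, 11500, 9800]))
    ```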

  19. A direct observation method for auditing large urban centers using stratified sampling, mobile GIS technology and virtual environments.

    Science.gov (United States)

    Lafontaine, Sean J V; Sawada, M; Kristjansson, Elizabeth

    2017-02-16

    With the expansion and growth of research on neighbourhood characteristics, there is an increased need for direct observational field audits. Herein, we introduce a novel direct observational audit method and systematic social observation instrument (SSOI) for efficiently assessing neighbourhood aesthetics over large urban areas. Our audit method uses spatial random sampling stratified by residential zoning and incorporates both mobile geographic information systems technology and virtual environments. The reliability of our method was tested in two ways: first, in 15 Ottawa neighbourhoods, we compared results at audited locations over two subsequent years, and second; we audited every residential block (167 blocks) in one neighbourhood and compared the distribution of SSOI aesthetics index scores with results from the randomly audited locations. Finally, we present interrater reliability and consistency results on all observed items. The observed neighbourhood average aesthetics index score estimated from four or five stratified random audit locations is sufficient to characterize the average neighbourhood aesthetics. The SSOI was internally consistent and demonstrated good to excellent interrater reliability. At the neighbourhood level, aesthetics is positively related to SES and physical activity and negatively correlated with BMI. The proposed approach to direct neighbourhood auditing performs sufficiently and has the advantage of financial and temporal efficiency when auditing a large city.

  20. Nutritional status and dental caries in a large sample of 4- and 5 ...

    African Journals Online (AJOL)

    Background. Evidence from studies involving small samples of children in Africa, India and South America suggests a higher dental caries rate in malnourished children. A comparison was done to evaluate wasting and stunting and their association with dental caries in four samples of South African children. Design.

  1. Evaluation of Respondent-Driven Sampling

    Science.gov (United States)

    McCreesh, Nicky; Frost, Simon; Seeley, Janet; Katongole, Joseph; Tarsh, Matilda Ndagire; Ndunguse, Richard; Jichi, Fatima; Lunel, Natasha L; Maher, Dermot; Johnston, Lisa G; Sonnenberg, Pam; Copas, Andrew J; Hayes, Richard J; White, Richard G

    2012-01-01

    Background Respondent-driven sampling is a novel variant of link-tracing sampling for estimating the characteristics of hard-to-reach groups, such as HIV prevalence in sex-workers. Despite its use by leading health organizations, the performance of this method in realistic situations is still largely unknown. We evaluated respondent-driven sampling by comparing estimates from a respondent-driven sampling survey with total-population data. Methods Total-population data on age, tribe, religion, socioeconomic status, sexual activity and HIV status were available on a population of 2402 male household-heads from an open cohort in rural Uganda. A respondent-driven sampling (RDS) survey was carried out in this population, employing current methods of sampling (RDS sample) and statistical inference (RDS estimates). Analyses were carried out for the full RDS sample and then repeated for the first 250 recruits (small sample). Results We recruited 927 household-heads. Full and small RDS samples were largely representative of the total population, but both samples under-represented men who were younger, of higher socioeconomic status, and with unknown sexual activity and HIV status. Respondent-driven-sampling statistical-inference methods failed to reduce these biases. Only 31%-37% (depending on method and sample size) of RDS estimates were closer to the true population proportions than the RDS sample proportions. Only 50%-74% of respondent-driven-sampling bootstrap 95% confidence intervals included the population proportion. Conclusions Respondent-driven sampling produced a generally representative sample of this well-connected non-hidden population. However, current respondent-driven-sampling inference methods failed to reduce bias when it occurred. Whether the data required to remove bias and measure precision can be collected in a respondent-driven sampling survey is unresolved. Respondent-driven sampling should be regarded as a (potentially superior) form of convenience-sampling

  2. Evaluation of respondent-driven sampling.

    Science.gov (United States)

    McCreesh, Nicky; Frost, Simon D W; Seeley, Janet; Katongole, Joseph; Tarsh, Matilda N; Ndunguse, Richard; Jichi, Fatima; Lunel, Natasha L; Maher, Dermot; Johnston, Lisa G; Sonnenberg, Pam; Copas, Andrew J; Hayes, Richard J; White, Richard G

    2012-01-01

    Respondent-driven sampling is a novel variant of link-tracing sampling for estimating the characteristics of hard-to-reach groups, such as HIV prevalence in sex workers. Despite its use by leading health organizations, the performance of this method in realistic situations is still largely unknown. We evaluated respondent-driven sampling by comparing estimates from a respondent-driven sampling survey with total population data. Total population data on age, tribe, religion, socioeconomic status, sexual activity, and HIV status were available on a population of 2402 male household heads from an open cohort in rural Uganda. A respondent-driven sampling (RDS) survey was carried out in this population, using current methods of sampling (RDS sample) and statistical inference (RDS estimates). Analyses were carried out for the full RDS sample and then repeated for the first 250 recruits (small sample). We recruited 927 household heads. Full and small RDS samples were largely representative of the total population, but both samples underrepresented men who were younger, of higher socioeconomic status, and with unknown sexual activity and HIV status. Respondent-driven sampling statistical inference methods failed to reduce these biases. Only 31%-37% (depending on method and sample size) of RDS estimates were closer to the true population proportions than the RDS sample proportions. Only 50%-74% of respondent-driven sampling bootstrap 95% confidence intervals included the population proportion. Respondent-driven sampling produced a generally representative sample of this well-connected nonhidden population. However, current respondent-driven sampling inference methods failed to reduce bias when it occurred. Whether the data required to remove bias and measure precision can be collected in a respondent-driven sampling survey is unresolved. Respondent-driven sampling should be regarded as a (potentially superior) form of convenience sampling method, and caution is required
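
    A common choice for the statistical-inference step evaluated in these studies is the RDS-II (Volz-Heckathorn) estimator, which weights each respondent by the inverse of their reported network size; whether this matches the exact estimator used above is an assumption. A minimal sketch with hypothetical recruits:

    ```python
    # RDS-II estimate of a trait proportion: inverse-degree weighting, where
    # degree is each respondent's self-reported network size.
    import numpy as np

    def rds_ii(trait, degree):
        trait = np.asarray(trait, dtype=float)
        w = 1.0 / np.asarray(degree, dtype=float)
        return np.sum(w * trait) / np.sum(w)

    trait = [1, 0, 1, 1, 0, 0, 1, 0]        # hypothetical trait indicators
    degree = [10, 3, 25, 8, 4, 6, 30, 5]    # hypothetical network sizes
    print(rds_ii(trait, degree))            # down-weights high-degree recruits
    ```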

  3. Genetic Influences on Pulmonary Function: A Large Sample Twin Study

    DEFF Research Database (Denmark)

    Ingebrigtsen, Truls S; Thomsen, Simon F; van der Sluis, Sophie

    2011-01-01

    Heritability of forced expiratory volume in one second (FEV(1)), forced vital capacity (FVC), and peak expiratory flow (PEF) has not been previously addressed in large twin studies. We evaluated the genetic contribution to individual differences observed in FEV(1), FVC, and PEF using data from...... the largest population-based twin study on spirometry. Specially trained lay interviewers with previous experience in spirometric measurements tested 4,314 Danish twins (individuals), 46-68 years of age, in their homes using a hand-held spirometer, and their flow-volume curves were evaluated. Modern variance...

  4. Media Use and Source Trust among Muslims in Seven Countries: Results of a Large Random Sample Survey

    Directory of Open Access Journals (Sweden)

    Steven R. Corman

    2013-12-01

    Despite the perceived importance of media in the spread of and resistance against Islamist extremism, little is known about how Muslims use different kinds of media to get information about religious issues, and what sources they trust when doing so. This paper reports the results of a large, random sample survey among Muslims in seven countries in Southeast Asia, West Africa and Western Europe, which helps fill this gap. Results show a diverse set of profiles of media use and source trust that differ by country, with overall low trust in mediated sources of information. Based on these findings, we conclude that mass media is still the most common source of religious information for Muslims, but that trust in mediated information is low overall. This suggests that media are probably best used to persuade opinion leaders, who will then carry anti-extremist messages through more personal means.

  5. Large Country-Lot Quality Assurance Sampling : A New Method for Rapid Monitoring and Evaluation of Health, Nutrition and Population Programs at Sub-National Levels

    OpenAIRE

    Hedt, Bethany L.; Olives, Casey; Pagano, Marcello; Valadez, Joseph J.

    2008-01-01

    Sampling theory facilitates development of economical, effective and rapid measurement of a population. While national policy makers value survey results measuring indicators representative of a large area (a country, state or province), measurement in smaller areas produces information useful for managers at the local level. It is often not possible to disaggregate a national survey to obt...

  6. HAMMER: Reweighting tool for simulated data samples

    CERN Document Server

    Duell, Stephan; Ligeti, Zoltan; Papucci, Michele; Robinson, Dean

    2016-01-01

    Modern flavour physics experiments, such as Belle II or LHCb, require large samples of generated Monte Carlo events. Monte Carlo events are often processed in a sophisticated chain that includes a simulation of the detector response. The generation and reconstruction of large samples is resource-intensive and in principle would need to be repeated if e.g. parameters responsible for the underlying models change due to new measurements or new insights. To avoid having to regenerate large samples, we work on a tool, the Helicity Amplitude Module for Matrix Element Reweighting (HAMMER), which allows one to easily reweight existing events in the context of semileptonic b → q ℓ ν̄_ℓ analyses to new model parameters or new physics scenarios.

  7. Solid phase extraction of large volume of water and beverage samples to improve detection limits for GC-MS analysis of bisphenol A and four other bisphenols.

    Science.gov (United States)

    Cao, Xu-Liang; Popovic, Svetlana

    2018-01-01

    Solid phase extraction (SPE) of large volumes of water and beverage products was investigated for the GC-MS analysis of bisphenol A (BPA), bisphenol AF (BPAF), bisphenol F (BPF), bisphenol E (BPE), and bisphenol B (BPB). While absolute recoveries of the method were improved for water and some beverage products (e.g. diet cola, iced tea), breakthrough may also have occurred during SPE of 200 mL of other beverages (e.g. BPF in cola). Improvements in method detection limits were observed with the analysis of large sample volumes for all bisphenols at ppt (pg/g) to sub-ppt levels. This improvement was found to be proportional to sample volumes for water and beverage products with less interferences and noise levels around the analytes. Matrix effects and interferences were observed during SPE of larger volumes (100 and 200 mL) of the beverage products, and affected the accurate analysis of BPF. This improved method was used to analyse bisphenols in various beverage samples, and only BPA was detected, with levels ranging from 0.022 to 0.030 ng/g for products in PET bottles, and 0.085 to 0.32 ng/g for products in cans.

  8. Sampling methods and non-destructive examination techniques for large radioactive waste packages

    International Nuclear Information System (INIS)

    Green, T.H.; Smith, D.L.; Burgoyne, K.E.; Maxwell, D.J.; Norris, G.H.; Billington, D.M.; Pipe, R.G.; Smith, J.E.; Inman, C.M.

    1992-01-01

    Progress is reported on work undertaken to evaluate quality checking methods for radioactive wastes. A sampling rig was designed, fabricated and used to develop techniques for the destructive sampling of cemented simulant waste using remotely operated equipment. An engineered system for the containment of cooling water was designed and manufactured and successfully demonstrated with the drum and coring equipment mounted in both vertical and horizontal orientations. The preferred in-cell orientation was found to be with the drum and coring machinery mounted in a horizontal position. Small powdered samples can be taken from cemented homogeneous waste cores using a hollow drill/vacuum suction technique, with the preferred subsampling technique being to discard the outer 10 mm layer to obtain a representative sample of the cement core. Cement blends can be dissolved using fusion techniques and the resulting solutions are stable to gelling for periods in excess of one year. Although hydrochloric acid and nitric acid are promising solvents for dissolution of cement blends, the resultant solutions tend to form silicic acid gels. An estimate of the beta-emitter content of cemented waste packages can be obtained by a combination of non-destructive and destructive techniques. The errors will probably be in excess of ±60 % at the 95 % confidence level. Real-time X-ray video-imaging techniques have been used to analyse drums of uncompressed, hand-compressed, in-drum compacted and high-force compacted (i.e. supercompacted) simulant waste. The results have confirmed the applicability of this technique for NDT of low-level waste. 8 refs., 12 figs., 3 tabs

  9. Large-sample neutron activation analysis in mass balance and nutritional studies

    NARCIS (Netherlands)

    van de Wiel, A.; Blaauw, Menno

    2018-01-01

    Low concentrations of elements in food can be measured with various techniques, mostly in small samples (mg). These techniques provide reliable data only when the element is distributed homogeneously in the material to be analysed, either naturally or after a homogenisation procedure. When this is…

  10. Evaluating hypotheses in geolocation on a very large sample of Twitter

    DEFF Research Database (Denmark)

    Salehi, Bahar; Søgaard, Anders

    2017-01-01

    Recent work in geolocation has made several hypotheses about what linguistic markers are relevant to detect where people write from. In this paper, we examine six hypotheses against a corpus consisting of all geo-tagged tweets from the US, or whose geo-tags could be inferred, in a 19% sample of Twitter...

  11. Reinforced dynamics for enhanced sampling in large atomic and molecular systems

    Science.gov (United States)

    Zhang, Linfeng; Wang, Han; E, Weinan

    2018-03-01

    A new approach for efficiently exploring the configuration space and computing the free energy of large atomic and molecular systems is proposed, motivated by an analogy with reinforcement learning. There are two major components in this new approach. Like metadynamics, it allows for an efficient exploration of the configuration space by adding an adaptively computed biasing potential to the original dynamics. Like deep reinforcement learning, this biasing potential is trained on the fly using deep neural networks, with data collected judiciously from the exploration and an uncertainty indicator from the neural network model playing the role of the reward function. Parameterization using neural networks makes it feasible to handle cases with a large set of collective variables, with the potential advantage that selecting precisely the right set of collective variables becomes less critical for capturing the structural transformations of the system. The method is illustrated by studying full-atom explicit-solvent models of alanine dipeptide and tripeptide, as well as a polyalanine-10 molecule with 20 collective variables.
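
    A heavily simplified one-dimensional sketch of this scheme follows; it illustrates the idea rather than reproducing the authors' method. A small ensemble of ridge-regressed radial-basis surrogates stands in for the deep neural networks, the ensemble spread plays the role of the uncertainty indicator, and the learned potential is subtracted as a bias only where the ensemble agrees. The double-well landscape and every parameter value are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        centers, gamma = np.linspace(-2.0, 2.0, 25), 12.0

        def free_energy(s):                # hidden double-well landscape
            return (s * s - 1.0) ** 2

        def features(s):                   # radial basis functions at s
            return np.exp(-gamma * (s - centers) ** 2)

        def fit(S, Y):                     # ridge-regressed RBF surrogate
            Phi = np.stack([features(x) for x in S])
            A = Phi.T @ Phi + 1e-6 * np.eye(len(centers))
            return np.linalg.solve(A, Phi.T @ Y)

        def bias(s, models, tau=0.2):
            """Negative mean surrogate where the ensemble agrees (low
            uncertainty); zero in unexplored, high-uncertainty regions."""
            if not models:
                return 0.0
            preds = np.array([features(s) @ w for w in models])
            return -preds.mean() if preds.std() < tau else 0.0

        def dV(s, models, h=1e-4):         # force on the biased landscape
            V = lambda x: free_energy(x) + bias(x, models)
            return (V(s + h) - V(s - h)) / (2 * h)

        s, beta, dt, visited, models = -1.0, 3.0, 1e-3, [], []
        for step in range(1, 20001):       # overdamped Langevin dynamics
            s += -dV(s, models) * dt + np.sqrt(2.0 * dt / beta) * rng.normal()
            visited.append(s)
            if step % 4000 == 0:           # refit ensemble on bootstraps
                S = np.array(visited)
                Y = free_energy(S)         # toy shortcut: a real run estimates
                models = [                 # this from forces along the way
                    fit(S[idx], Y[idx])
                    for idx in (rng.integers(0, len(S), len(S))
                                for _ in range(4))
                ]

        print("fraction of time right of the barrier:",
              np.mean(np.array(visited) > 0.0))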

  12. Fluid sample collection and distribution system. [qualitative analysis of aqueous samples from several points

    Science.gov (United States)

    Brooks, R. L. (Inventor)

    1979-01-01

    A multipoint fluid sample collection and distribution system is provided wherein the sample inputs are made through one or more of a number of sampling valves to a progressive cavity pump which is not susceptible to damage by large unfiltered particles. The pump output is through a filter unit that can provide a filtered multipoint sample. An unfiltered multipoint sample is also provided. An effluent sample can be taken and applied to a second progressive cavity pump for pumping to a filter unit that can provide one or more filtered effluent samples. The second pump can also provide an unfiltered effluent sample. Means are provided to periodically back flush each filter unit without shutting off the whole system.

  13. Broad phylogenomic sampling and the sister lineage of land plants.

    Directory of Open Access Journals (Sweden)

    Ruth E Timme

    Full Text Available The tremendous diversity of land plants all descended from a single charophyte green alga that colonized the land somewhere between 430 and 470 million years ago. Six orders of charophyte green algae, in addition to embryophytes, comprise the Streptophyta s.l. Previous studies have focused on reconstructing the phylogeny of organisms tied to this key colonization event, but wildly conflicting results have sparked a contentious debate over which lineage gave rise to land plants. The dominant view has been that 'stoneworts,' or Charales, are the sister lineage, but an alternative hypothesis supports the Zygnematales (often referred to as "pond scum") as the sister lineage. In this paper, we provide a well-supported, 160-nuclear-gene phylogenomic analysis supporting the Zygnematales as the closest living relative to land plants. Our study makes two key contributions to the field: 1) the use of an unbiased method to collect a large set of orthologs from deeply diverging species and 2) the use of these data in determining the sister lineage to land plants. We anticipate this updated phylogeny not only will hugely impact lesson plans in introductory biology courses, but also will provide a solid phylogenetic tree for future green-lineage research, whether it be related to plants or green algae.

  14. Predictive Value of Callous-Unemotional Traits in a Large Community Sample

    Science.gov (United States)

    Moran, Paul; Rowe, Richard; Flach, Clare; Briskman, Jacqueline; Ford, Tamsin; Maughan, Barbara; Scott, Stephen; Goodman, Robert

    2009-01-01

    Objective: Callous-unemotional (CU) traits in children and adolescents are increasingly recognized as a distinctive dimension of prognostic importance in clinical samples. Nevertheless, comparatively little is known about the longitudinal effects of these personality traits on the mental health of young people from the general population. Using a…

  15. A drop net and removable walkway used to quantitatively sample fishes over wetland surfaces in the dwarf mangrove of the Southern Everglades

    Science.gov (United States)

    Lorenz, J.J.; McIvor, C.C.; Powell, G.V.N.; Frederick, P.C.

    1997-01-01

    We describe a 9 m2 drop net and removable walkways designed to quantify densities of small fishes in wetland habitats with low to moderate vegetation density. The method permits the collection of small, quantitative, discrete samples in ecologically sensitive areas by combining rapid net deployment from fixed sites with the carefully contained use of the fish toxicant rotenone. This method requires very little contact with the substrate, causes minimal alteration to the habitat being sampled, samples small fishes in an unbiased manner, and allows for differential sampling of microhabitats within a wetland. When used in dwarf red mangrove (Rhizophora mangle) habitat in southern Everglades National Park and adjacent areas (September 1990 to March 1993), we achieved high recovery efficiencies (78–90%) for five common species <110 mm in length. We captured 20,193 individuals of 26 species. The most abundant fishes were sheepshead minnow Cyprinodon variegatus, goldspotted killifish Floridichthys carpio, rainwater killifish Lucania parva, sailfin molly Poecilia latipinna, and the exotic Mayan cichlid Cichlasoma urophthalmus. The 9 m2 drop net and associated removable walkways are versatile and can be used in a variety of wetland types, including both interior and coastal wetlands with either herbaceous or woody vegetation.

  16. Some connections between importance sampling and enhanced sampling methods in molecular dynamics.

    Science.gov (United States)

    Lie, H C; Quer, J

    2017-11-21

    In molecular dynamics, enhanced sampling methods enable the collection of better statistics of rare events from a reference or target distribution. We show that a large class of these methods is based on the idea of importance sampling from mathematical statistics. We illustrate this connection by comparing the Hartmann-Schütte method for rare event simulation (J. Stat. Mech. Theor. Exp. 2012, P11004) and the Valsson-Parrinello method of variationally enhanced sampling [Phys. Rev. Lett. 113, 090601 (2014)]. We use this connection in order to discuss how recent results from the Monte Carlo methods literature can guide the development of enhanced sampling methods.
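
    The underlying identity is E_p[f(X)] = E_q[f(X) p(X)/q(X)], valid for any proposal density q that covers the support of f·p. The toy below (all choices illustrative, not taken from the paper) estimates a Gaussian tail probability, a stand-in for a rare event, by sampling from a shifted proposal and reweighting, which is far more sample-efficient than naive Monte Carlo.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        a, n = 4.0, 100_000                # rare event: X > 4 under N(0, 1)

        # Naive Monte Carlo: almost no samples land in the tail.
        x = rng.normal(size=n)
        naive = np.mean(x > a)

        # Importance sampling from N(a, 1); reweight by the ratio p/q.
        y = rng.normal(loc=a, size=n)
        w = norm.pdf(y) / norm.pdf(y, loc=a)
        is_est = np.mean((y > a) * w)

        print(f"naive: {naive:.2e}  importance: {is_est:.2e}  "
              f"exact: {norm.sf(a):.2e}")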

  17. Scanning tunneling spectroscopy under large current flow through the sample.

    Science.gov (United States)

    Maldonado, A; Guillamón, I; Suderow, H; Vieira, S

    2011-07-01

    We describe a method for scanning tunneling microscopy/spectroscopy imaging at very low temperatures while driving a constant electric current of up to some tens of mA through the sample. It provides a new local probe, which we term current-driven scanning tunneling microscopy/spectroscopy. We show spectroscopic and topographic measurements under the application of a current in superconducting Al and NbSe2 at 100 mK. Prospective applications of this local imaging method include local vortex-motion experiments and Doppler-shift studies of the local density of states.

  18. Methods of pre-concentration of radionuclides from large volume samples

    International Nuclear Information System (INIS)

    Olahova, K.; Matel, L.; Rosskopfova, O.

    2006-01-01

    The development of radioanalytical methods for low-level radionuclides in environmental samples is presented. In particular, emphasis is placed on the introduction of extraction chromatography as a tool for improving the quality of results as well as reducing the analysis time. However, the advantageous application of extraction chromatography often depends on the effective use of suitable preconcentration techniques, such as co-precipitation, to reduce the amount of matrix components that accompany the analytes of interest. On-going investigations in this field relevant to the determination of environmental levels of actinides and 90Sr are discussed. (authors)

  19. Sampling or gambling

    Energy Technology Data Exchange (ETDEWEB)

    Gy, P.M.

    1981-12-01

    Sampling can be compared to no other technique. A mechanical sampler must above all be selected according to its aptitude for suppressing or reducing all components of the sampling error. Sampling is said to be correct when it gives every element making up the batch of matter submitted to sampling a uniform probability of being selected. A sampler must be correctly designed, built, installed, operated and maintained. When the conditions of sampling correctness are not strictly respected, the sampling error can no longer be controlled and can, unknown to the user, be unacceptably large: the sample is no longer representative. The implementation of an incorrect sampler is a form of gambling, and this paper intends to show that at this game the user is nearly always the loser in the long run. The users' and the manufacturers' interests may diverge, and the standards which should safeguard the users' interests very often fail to do so by tolerating or even recommending incorrect techniques, such as the implementation of too-narrow cutters traveling too fast through the stream to be sampled.
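
    Gy's notion of correctness, every element of the batch receiving the same probability of selection, can be made concrete with a toy simulation (all numbers invented for illustration): a sampler that under-selects coarse, high-grade particles remains biased no matter how much material it collects.

        import numpy as np

        rng = np.random.default_rng(0)

        # Batch: particle masses and grades; the coarsest 10% carry more metal.
        mass = rng.lognormal(mean=0.0, sigma=0.8, size=100_000)
        coarse = mass > np.quantile(mass, 0.9)
        grade = np.where(coarse, 0.12, 0.02)
        true_grade = np.average(grade, weights=mass)

        def sampled_grade(p_select):
            keep = rng.uniform(size=mass.size) < p_select
            return np.average(grade[keep], weights=mass[keep])

        correct = sampled_grade(np.full(mass.size, 0.05))    # uniform 5%
        biased = sampled_grade(np.where(coarse, 0.01, 0.05)) # coarse under-cut

        print(f"true {true_grade:.4f}  correct {correct:.4f}  "
              f"biased {biased:.4f}")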

  20. Evaluation of Sampling Methods for Bacillus Spore ...

    Science.gov (United States)

    Journal Article Following a wide area release of biological materials, mapping the extent of contamination is essential for orderly response and decontamination operations. HVAC filters process large volumes of air and therefore collect highly representative particulate samples in buildings. HVAC filter extraction may have great utility in rapidly estimating the extent of building contamination following a large-scale incident. However, until now, no studies have been conducted comparing the two most appropriate sampling approaches for HVAC filter materials: direct extraction and vacuum-based sampling.