WorldWideScience

Sample records for large unbiased sample

  1. Unbiased Sampling and Meshing of Isosurfaces

    KAUST Repository

    Yan, Dongming

    2014-05-07

    In this paper, we present a new technique to generate unbiased samples on isosurfaces. An isosurface, F(x,y,z) = c, of a function F is implicitly defined by trilinear interpolation of background grid points. The key idea of our approach is to treat the isosurface within a grid cell as a graph (height) function in one of the three coordinate axis directions, restricted to where the slope is not too high, and to integrate/sample from each of these three. We use this unbiased sampling algorithm for applications in Monte Carlo integration, Poisson-disk sampling, and isosurface meshing.
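
The height-function idea can be illustrated with a toy rejection sampler: over the region where the slope stays below a cap, accepting (x, y) with probability proportional to the area element sqrt(1 + fx^2 + fy^2) yields points uniform on the graph surface. The function f and the slope cap below are illustrative stand-ins, not the paper's trilinear patch.

```python
import math
import random

# Hypothetical cell-local height function z = f(x, y) and its partial derivatives.
def f(x, y):
    return 0.3 * x * y      # a mild slope, standing in for the trilinear patch
def fx(x, y):
    return 0.3 * y
def fy(x, y):
    return 0.3 * x

def sample_on_graph(n, slope_cap=1.0, seed=0):
    """Rejection sampling: the area element of a graph surface is
    dA = sqrt(1 + fx^2 + fy^2) dx dy, so accepting (x, y) with probability
    proportional to that factor gives points uniform *on the surface*."""
    rng = random.Random(seed)
    bound = math.sqrt(1.0 + 2.0 * slope_cap ** 2)   # upper bound on the area element
    pts = []
    while len(pts) < n:
        x, y = rng.random(), rng.random()
        if max(abs(fx(x, y)), abs(fy(x, y))) > slope_cap:
            continue        # steep region: handled from another axis direction
        a = math.sqrt(1.0 + fx(x, y) ** 2 + fy(x, y) ** 2)
        if rng.random() < a / bound:
            pts.append((x, y, f(x, y)))
    return pts

pts = sample_on_graph(1000)
```

The restriction to moderate slopes mirrors the abstract: where the slope in one axis direction is too high, the same surface patch is sampled as a height function over a different axis.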

  2. Unbiased Sampling and Meshing of Isosurfaces

    KAUST Repository

    Yan, Dongming; Wallner, Johannes; Wonka, Peter

    2014-01-01

    In this paper, we present a new technique to generate unbiased samples on isosurfaces. An isosurface, F(x,y,z) = c, of a function F is implicitly defined by trilinear interpolation of background grid points. The key idea of our approach is to treat the isosurface within a grid cell as a graph (height) function in one of the three coordinate axis directions, restricted to where the slope is not too high, and to integrate/sample from each of these three. We use this unbiased sampling algorithm for applications in Monte Carlo integration, Poisson-disk sampling, and isosurface meshing.

  3. Triangulation based inclusion probabilities: a design-unbiased sampling approach

    OpenAIRE

    Fehrmann, Lutz; Gregoire, Timothy; Kleinn, Christoph

    2011-01-01

    A probabilistic sampling approach for design-unbiased estimation of area-related quantitative characteristics of spatially dispersed population units is proposed. The developed field protocol includes a fixed number of 3 units per sampling location and is based on partial triangulations over their natural neighbors to derive the individual inclusion probabilities. The performance of the proposed design is tested in comparison to fixed area sample plots in a simulation with two forest stands. ...

  4. A new unbiased stochastic derivative estimator for discontinuous sample performances with structural parameters

    NARCIS (Netherlands)

    Peng, Yijie; Fu, Michael C.; Hu, Jian Qiang; Heidergott, Bernd

    In this paper, we propose a new unbiased stochastic derivative estimator in a framework that can handle discontinuous sample performances with structural parameters. This work extends the three most popular unbiased stochastic derivative estimators: (1) infinitesimal perturbation analysis (IPA), (2) ...

  5. An unbiased estimator of the variance of simple random sampling using mixed random-systematic sampling

    OpenAIRE

    Padilla, Alberto

    2009-01-01

    Systematic sampling is a commonly used technique due to its simplicity and ease of implementation. The drawback of this simplicity is that it is not possible to estimate the design variance without bias. There are several ways to circumvent this problem. One method is to suppose that the variable of interest has a random order in the population, so the sample variance of simple random sampling without replacement is used. By means of a mixed random - systematic sample, an unbiased estimator o...
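
As a sketch of the systematic half of such a design: a 1-in-k systematic sample takes a random start and then every k-th unit. This is illustrative only; the paper's unbiased variance estimator combines a systematic component with an independent random component, and that combination rule is not reproduced here.

```python
import random

def systematic_sample(population, k, seed=1):
    """Draw a 1-in-k systematic sample: random start in [0, k), then every
    k-th unit. Each unit's inclusion probability is 1/k."""
    start = random.Random(seed).randrange(k)
    return population[start::k]

pop = list(range(100))          # hypothetical ordered population
sys_part = systematic_sample(pop, 10)   # 10 equally spaced units
```

The drawback the abstract notes is visible here: a single systematic draw yields only one random start, so the design variance cannot be estimated without bias from that draw alone.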

  6. Critical point relascope sampling for unbiased volume estimation of downed coarse woody debris

    Science.gov (United States)

    Jeffrey H. Gove; Michael S. Williams; Mark J. Ducey

    2005-01-01

    Critical point relascope sampling is developed and shown to be design-unbiased for the estimation of log volume when used with point relascope sampling for downed coarse woody debris. The method is closely related to critical height sampling for standing trees when trees are first sampled with a wedge prism. Three alternative protocols for determining the critical...

  7. Design unbiased estimation in line intersect sampling using segmented transects

    Science.gov (United States)

    David L.R. Affleck; Timothy G. Gregoire; Harry T. Valentine

    2005-01-01

    In many applications of line intersect sampling, transects consist of multiple, connected segments in a prescribed configuration. The relationship between the transect configuration and the selection probability of a population element is illustrated and a consistent sampling protocol, applicable to populations composed of arbitrarily shaped elements, is proposed. It...

  8. Efficient Unbiased Rendering using Enlightened Local Path Sampling

    DEFF Research Database (Denmark)

    Kristensen, Anders Wang

    The downside to using these algorithms is that they can be slow to converge. Due to the nature of Monte Carlo methods, the results are random variables subject to variance. This manifests itself as noise in the images, which can only be reduced by generating more samples. The reason these methods are slow is a lack of effective methods of importance sampling. Most global illumination algorithms are based on local path sampling, which is essentially a recipe for constructing random walks. Using this procedure paths are built based on information given explicitly as part of scene description ... measurements, which are the solution to the adjoint light transport problem. The second is a representation of the distribution of radiance and importance in the scene. We also derive a new method of particle sampling, which is advantageous compared to existing methods. Together we call the resulting algorithm ...

  9. Unbiased tensor-based morphometry: improved robustness and sample size estimates for Alzheimer's disease clinical trials.

    Science.gov (United States)

    Hua, Xue; Hibar, Derrek P; Ching, Christopher R K; Boyle, Christina P; Rajagopalan, Priya; Gutman, Boris A; Leow, Alex D; Toga, Arthur W; Jack, Clifford R; Harvey, Danielle; Weiner, Michael W; Thompson, Paul M

    2013-02-01

    Various neuroimaging measures are being evaluated for tracking Alzheimer's disease (AD) progression in therapeutic trials, including measures of structural brain change based on repeated scanning of patients with magnetic resonance imaging (MRI). Methods to compute brain change must be robust to scan quality. Biases may arise if any scans are thrown out, as this can lead to the true changes being overestimated or underestimated. Here we analyzed the full MRI dataset from the first phase of Alzheimer's Disease Neuroimaging Initiative (ADNI-1) and assessed several sources of bias that can arise when tracking brain changes with structural brain imaging methods, as part of a pipeline for tensor-based morphometry (TBM). In all healthy subjects who completed MRI scanning at screening, 6, 12, and 24 months, brain atrophy was essentially linear with no detectable bias in longitudinal measures. In power analyses for clinical trials based on these change measures, only 39 AD patients and 95 mild cognitive impairment (MCI) subjects were needed for a 24-month trial to detect a 25% reduction in the average rate of change using a two-sided test (α=0.05, power=80%). Further sample size reductions were achieved by stratifying the data into Apolipoprotein E (ApoE) ε4 carriers versus non-carriers. We show how selective data exclusion affects sample size estimates, motivating an objective comparison of different analysis techniques based on statistical power and robustness. TBM is an unbiased, robust, high-throughput imaging surrogate marker for large, multi-site neuroimaging studies and clinical trials of AD and MCI. Copyright © 2012 Elsevier Inc. All rights reserved.

  10. Verifying mixing in dilution tunnels How to ensure cookstove emissions samples are unbiased

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, Daniel L. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Rapp, Vi H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Caubel, Julien J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chen, Sharon S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Gadgil, Ashok J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-12-15

    A well-mixed diluted sample is essential for unbiased measurement of cookstove emissions. Most cookstove testing labs employ a dilution tunnel, also referred to as a “duct,” to mix clean dilution air with cookstove emissions before sampling. It is important that the emissions be well-mixed and unbiased at the sampling port so that instruments can take representative samples of the emission plume. Some groups have employed mixing baffles to ensure the gaseous and aerosol emissions from cookstoves are well-mixed before reaching the sampling location [2, 4]. The goal of these baffles is to dilute and mix the emissions stream with the room air entering the fume hood by creating a local zone of high turbulence. However, potential drawbacks of mixing baffles include increased flow resistance (larger blowers needed for the same exhaust flow), nuisance cleaning of baffles as soot collects, and, importantly, the potential for loss of PM2.5 particles on the baffles themselves, thus biasing results. A cookstove emission monitoring system with baffles will collect particles faster than the duct’s walls alone. This is mostly driven by the available surface area for deposition by processes of Brownian diffusion (through the boundary layer) and turbophoresis (i.e. impaction). The greater the surface area available for diffusive and advection-driven deposition to occur, the greater the particle loss will be at the sampling port. As a layer of larger particle “fuzz” builds on the mixing baffles, even greater PM2.5 loss could occur. The microstructure of the deposited aerosol will lead to increased rates of particle loss by interception and a tendency for smaller particles to deposit due to impaction on small features of the microstructure. If the flow stream could be well-mixed without the need for baffles, these drawbacks could be avoided and the cookstove emissions sampling system would be more robust.

  11. Large-scale Reconstructions and Independent, Unbiased Clustering Based on Morphological Metrics to Classify Neurons in Selective Populations.

    Science.gov (United States)

    Bragg, Elise M; Briggs, Farran

    2017-02-15

    This protocol outlines large-scale reconstructions of neurons combined with the use of independent and unbiased clustering analyses to create a comprehensive survey of the morphological characteristics observed among a selective neuronal population. Combination of these techniques constitutes a novel approach for the collection and analysis of neuroanatomical data. Together, these techniques enable large-scale, and therefore more comprehensive, sampling of selective neuronal populations and establish unbiased quantitative methods for describing morphologically unique neuronal classes within a population. The protocol outlines the use of modified rabies virus to selectively label neurons. G-deleted rabies virus acts like a retrograde tracer following stereotaxic injection into a target brain structure of interest and serves as a vehicle for the delivery and expression of EGFP in neurons. Large numbers of neurons are infected using this technique and express GFP throughout their dendrites, producing "Golgi-like" complete fills of individual neurons. Accordingly, the virus-mediated retrograde tracing method improves upon traditional dye-based retrograde tracing techniques by producing complete intracellular fills. Individual well-isolated neurons spanning all regions of the brain area under study are selected for reconstruction in order to obtain a representative sample of neurons. The protocol outlines procedures to reconstruct cell bodies and complete dendritic arborization patterns of labeled neurons spanning multiple tissue sections. Morphological data, including positions of each neuron within the brain structure, are extracted for further analysis. Standard programming functions were utilized to perform independent cluster analyses and cluster evaluations based on morphological metrics. To verify the utility of these analyses, statistical evaluation of a cluster analysis performed on 160 neurons reconstructed in the thalamic reticular nucleus of the thalamus

  12. Unbiased tensor-based morphometry: Improved robustness and sample size estimates for Alzheimer’s disease clinical trials

    Science.gov (United States)

    Hua, Xue; Hibar, Derrek P.; Ching, Christopher R.K.; Boyle, Christina P.; Rajagopalan, Priya; Gutman, Boris A.; Leow, Alex D.; Toga, Arthur W.; Jack, Clifford R.; Harvey, Danielle; Weiner, Michael W.; Thompson, Paul M.

    2013-01-01

    Various neuroimaging measures are being evaluated for tracking Alzheimer’s disease (AD) progression in therapeutic trials, including measures of structural brain change based on repeated scanning of patients with magnetic resonance imaging (MRI). Methods to compute brain change must be robust to scan quality. Biases may arise if any scans are thrown out, as this can lead to the true changes being overestimated or underestimated. Here we analyzed the full MRI dataset from the first phase of Alzheimer’s Disease Neuroimaging Initiative (ADNI-1) and assessed several sources of bias that can arise when tracking brain changes with structural brain imaging methods, as part of a pipeline for tensor-based morphometry (TBM). In all healthy subjects who completed MRI scanning at screening, 6, 12, and 24 months, brain atrophy was essentially linear with no detectable bias in longitudinal measures. In power analyses for clinical trials based on these change measures, only 39 AD patients and 95 mild cognitive impairment (MCI) subjects were needed for a 24-month trial to detect a 25% reduction in the average rate of change using a two-sided test (α=0.05, power=80%). Further sample size reductions were achieved by stratifying the data into Apolipoprotein E (ApoE) ε4 carriers versus non-carriers. We show how selective data exclusion affects sample size estimates, motivating an objective comparison of different analysis techniques based on statistical power and robustness. TBM is an unbiased, robust, high-throughput imaging surrogate marker for large, multi-site neuroimaging studies and clinical trials of AD and MCI. PMID:23153970
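
The quoted power analysis follows the standard two-arm sample-size formula. The sketch below applies that formula with made-up atrophy-rate numbers; sigma and delta are hypothetical inputs, not the ADNI estimates behind the 39-patient figure.

```python
import math
from statistics import NormalDist

def per_arm_n(sigma, delta, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sided two-sample comparison:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * (sigma / delta)^2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    return math.ceil(2 * (z_a + z_b) ** 2 * (sigma / delta) ** 2)

# Hypothetical example: mean atrophy rate 2.0 %/yr with SD 1.0 %/yr;
# a 25% treatment slowing corresponds to delta = 0.5 percentage points.
n = per_arm_n(sigma=1.0, delta=0.5)
```

The formula makes the abstract's point quantitatively: the required n scales with (sigma/delta)^2, so any measurement bias that inflates sigma or shrinks the detectable delta drives sample sizes up rapidly.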

  13. Interplanetary scintillation observations of an unbiased sample of 90 Ooty occultation radio sources at 326.5 MHz

    International Nuclear Information System (INIS)

    Banhatti, D.G.; Ananthakrishnan, S.

    1989-01-01

    We present 327-MHz interplanetary scintillation (IPS) observations of an unbiased sample of 90 extragalactic radio sources selected from the ninth Ooty lunar occultation list. The sources are brighter than 0.75 Jy at 327 MHz and lie outside the galactic plane. We derive values, the fraction of scintillating flux density, and the equivalent Gaussian diameter for the scintillating structure. Various correlations are found between the observed parameters. In particular, the scintillating component weakens and broadens with increasing largest angular size, and stronger scintillators have more compact scintillating components. (author)

  14. Nonlinear unbiased minimum-variance filter for Mars entry autonomous navigation under large uncertainties and unknown measurement bias.

    Science.gov (United States)

    Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua

    2018-05-01

    A high-precision navigation algorithm is essential for the future Mars pinpoint landing mission. The unknown inputs caused by large uncertainties of atmospheric density and aerodynamic coefficients, as well as unknown measurement biases, may cause large estimation errors in conventional Kalman filters. This paper proposes a derivative-free version of the nonlinear unbiased minimum-variance filter for Mars entry navigation. The filter solves this problem by estimating the state and the unknown measurement biases simultaneously in a derivative-free manner, leading to a high-precision algorithm for Mars entry navigation. IMU/radio beacons integrated navigation is introduced in the simulation, and the result shows that with or without radio blackout, our proposed filter achieves an accurate state estimation, much better than the conventional unscented Kalman filter, demonstrating a high-precision Mars entry navigation algorithm. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  15. The proportionator: unbiased stereological estimation using biased automatic image analysis and non-uniform probability proportional to size sampling

    DEFF Research Database (Denmark)

    Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb

    2008-01-01

    ... the desired number of fields are sampled automatically with probability proportional to the weight and presented to the expert observer. Using any known stereological probe and estimator, the correct count in these fields leads to a simple, unbiased estimate of the total amount of structure in the sections examined, which in turn leads to any of the known stereological estimates, including size distributions and spatial distributions. The unbiasedness is not a function of the assumed relation between the weight and the structure, which is in practice always a biased relation from a stereological (integral geometric) point of view. The efficiency of the proportionator depends, however, directly on this relation to be positive. The sampling and estimation procedure is simulated in sections with characteristics and various kinds of noises in possibly realistic ranges. In all cases examined, the proportionator ...

  16. Automatic sampling for unbiased and efficient stereological estimation using the proportionator in biological studies

    DEFF Research Database (Denmark)

    Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb

    2008-01-01

    Quantification of tissue properties is improved using the general proportionator sampling and estimation procedure: automatic image analysis and non-uniform sampling with probability proportional to size (PPS). The complete region of interest is partitioned into fields of view, and every field of view is given a weight (the size) proportional to the total amount of requested image analysis features in it. The fields of view sampled with known probabilities proportional to individual weight are the only ones seen by the observer who provides the correct count. Even though the image analysis ... cerebellum, total number of orexin positive neurons in transgenic mice brain, and estimating the absolute area and the areal fraction of β islet cells in dog pancreas. The proportionator was at least eight times more efficient (precision and time combined) than traditional computer controlled sampling.
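
The PPS idea behind the proportionator can be illustrated with a Hansen-Hurwitz-style estimator: sample fields with replacement with probability proportional to an image-analysis weight, then divide each expert count by its selection probability. The estimate of the total stays unbiased even when the weights are only roughly proportional to the counts; good weights just reduce the variance. The counts and weights below are invented.

```python
import random

def pps_total(counts, weights, n, seed=42):
    """Hansen-Hurwitz estimate of the total count: draw n fields with
    replacement with probability p_i = w_i / sum(w), and average y_i / p_i.
    Unbiased for any strictly positive weights."""
    rng = random.Random(seed)
    total_w = sum(weights)
    p = [w / total_w for w in weights]
    draws = rng.choices(range(len(counts)), weights=weights, k=n)
    return sum(counts[i] / p[i] for i in draws) / n

counts  = [5, 0, 2, 9, 1, 7]    # true counts per field (the expert's numbers)
weights = [6, 1, 2, 10, 1, 8]   # automatic image-analysis proxy, imperfect
est = pps_total(counts, weights, n=2000)   # true total is 24
```

Because the weights here track the counts closely, the ratios y_i / p_i are nearly constant and the estimator's variance is small, which is exactly the efficiency gain the two proportionator abstracts report.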

  17. IR Observations of a Complete Unbiased Sample of Bright Seyfert Galaxies

    Science.gov (United States)

    Malkan, Matthew; Bendo, George; Charmandaris, Vassilis; Smith, Howard; Spinoglio, Luigi; Tommasin, Silvia

    2008-03-01

    IR spectra will measure the 2 main energy-generating processes by which galactic nuclei shine: black hole accretion and star formation. Both of these play roles in galaxy evolution, and they appear connected. To obtain a complete sample of AGN, covering the range of luminosities and column-densities, we will combine 2 complete all-sky samples with complementary selections, minimally biased by dust obscuration: the 116 IRAS 12um AGN and the 41 Swift/BAT hard Xray AGN. These galaxies have been extensively studied across the entire EM spectrum. Herschel observations have been requested and will be synergistic with the Spitzer database. IRAC and MIPS imaging will allow us to separate the nuclear and galactic continua. We are completing full IR observations of the local AGN population, most of which have already been done. The only remaining observations we request are 10 IRS/HIRES, 57 MIPS-24 and 30 IRAC pointings. These high-quality observations of bright AGN in the bolometric-flux-limited samples should be completed, for the high legacy value of complete uniform datasets. We will measure quantitatively the emission at each wavelength arising from stars and from accretion in each galactic center. Since our complete samples come from flux-limited all-sky surveys in the IR and HX, we will calculate the bi-variate AGN and star formation Luminosity Functions for the local population of active galaxies, for comparison with higher redshifts. Our second aim is to understand the physical differences between AGN classes. This requires statistical comparisons of full multiwavelength observations of complete representative samples. If the difference between Sy1s and Sy2s is caused by orientation, their isotropic properties, including those of the surrounding galactic centers, should be similar. In contrast, if they are different evolutionary stages following a galaxy encounter, then we may find observational evidence that the circumnuclear ISM of Sy2s is relatively younger.

  18. A novel SNP analysis method to detect copy number alterations with an unbiased reference signal directly from tumor samples

    Directory of Open Access Journals (Sweden)

    LaFramboise, William A.

    2011-01-01

    Background: Genomic instability in cancer leads to abnormal genome copy number alterations (CNA) as a mechanism underlying tumorigenesis. Using microarrays and other technologies, tumor CNA are detected by comparing tumor sample CN to normal reference sample CN. While advances in microarray technology have improved detection of copy number alterations, the increase in the number of measured signals, noise from array probes, variations in signal-to-noise ratio across batches and disparity across laboratories leads to significant limitations for the accurate identification of CNA regions when comparing tumor and normal samples. Methods: To address these limitations, we designed a novel "Virtual Normal" algorithm (VN), which allowed for construction of an unbiased reference signal directly from test samples within an experiment using any publicly available normal reference set as a baseline, thus eliminating the need for an in-lab normal reference set. Results: The algorithm was tested using an optimal, paired tumor/normal data set as well as previously uncharacterized pediatric malignant gliomas for which a normal reference set was not available. Using Affymetrix 250K Sty microarrays, we demonstrated improved signal-to-noise ratio and detected significant copy number alterations using the VN algorithm that were validated by independent PCR analysis of the target CNA regions. Conclusions: We developed and validated an algorithm to provide a virtual normal reference signal directly from tumor samples and minimize noise in the derivation of the raw CN signal. The algorithm reduces the variability of assays performed across different reagent and array batches, methods of sample preservation, multiple personnel, and among different laboratories. This approach may be valuable when matched normal samples are unavailable or the paired normal specimens have been subjected to variations in methods of preservation.
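
A minimal sketch of a median-across-samples reference in this general spirit (this is not the published VN algorithm, just the core "reference from the test samples themselves" idea; all probe values are made up):

```python
import math
from statistics import median

# Hypothetical copy-number signal per probe for three test samples.
# Probe 3 of sample s1 carries a simulated amplification.
samples = {
    "s1": [2.0, 2.1, 4.1, 1.9],
    "s2": [1.9, 2.0, 2.0, 2.1],
    "s3": [2.1, 1.9, 2.1, 2.0],
}

# Build a per-probe reference as the median across samples: a single
# aberrant sample cannot drag the reference, so its alteration stands out.
ref = [median(vals) for vals in zip(*samples.values())]

# Log2 ratio of each sample against the constructed reference.
log2_ratio = {name: [math.log2(v / r) for v, r in zip(vals, ref)]
              for name, vals in samples.items()}
```

The amplified probe in s1 yields a log2 ratio near +1 while the unaltered probes stay near 0, which is the separation a CNA caller would act on.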

  19. The large sample size fallacy.

    Science.gov (United States)

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
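
The fallacy is easy to make concrete with summary statistics: a mean difference of 0.02 standard deviations between two groups of 200,000 is overwhelmingly "significant" by a z-test while the standardized effect size remains trivial. The numbers are illustrative.

```python
import math

# Hypothetical summary statistics: two groups, n = 200,000 each,
# means differing by 0.02 with a common standard deviation of 1.0.
n, diff, sd = 200_000, 0.02, 1.0

se = sd * math.sqrt(2 / n)          # standard error of the mean difference
z = diff / se                       # z-test statistic
p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))   # two-sided p-value
d = diff / sd                       # Cohen's d: the practical effect size
```

Here z exceeds 6 and p is on the order of 1e-10, yet d = 0.02 is far below even the conventional "small effect" threshold of 0.2, which is exactly why the article argues effect sizes must be reported alongside p-values.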

  20. The Swift/BAT AGN Spectroscopic Survey. IX. The Clustering Environments of an Unbiased Sample of Local AGNs

    Science.gov (United States)

    Powell, M. C.; Cappelluti, N.; Urry, C. M.; Koss, M.; Finoguenov, A.; Ricci, C.; Trakhtenbrot, B.; Allevato, V.; Ajello, M.; Oh, K.; Schawinski, K.; Secrest, N.

    2018-05-01

    We characterize the environments of local accreting supermassive black holes by measuring the clustering of AGNs in the Swift/BAT Spectroscopic Survey (BASS). With 548 AGN in the redshift range 0.01 < z < 0.1, measuring their cross-correlation with 2MASS galaxies and interpreting it via halo occupation distribution and subhalo-based models, we constrain the occupation statistics of the full sample, as well as in bins of absorbing column density and black hole mass. We find that AGNs tend to reside in galaxy group environments, in agreement with previous studies of AGNs throughout a large range of luminosity and redshift, and that on average they occupy their dark matter halos similar to inactive galaxies of comparable stellar mass. We also find evidence that obscured AGNs tend to reside in denser environments than unobscured AGNs, even when samples were matched in luminosity, redshift, stellar mass, and Eddington ratio. We show that this can be explained either by significantly different halo occupation distributions or statistically different host halo assembly histories. Lastly, we see that massive black holes are slightly more likely to reside in central galaxies than black holes of smaller mass.

  1. Spectroscopic Properties of Star-Forming Host Galaxies and Type Ia Supernova Hubble Residuals in a Nearly Unbiased Sample

    Energy Technology Data Exchange (ETDEWEB)

    D'Andrea, Chris B. [Univ. of Pennsylvania, Philadelphia, PA (United States)]; et al.

    2011-12-20

    We examine the correlation between supernova host galaxy properties and their residuals on the Hubble diagram. We use supernovae discovered during the Sloan Digital Sky Survey II - Supernova Survey, and focus on objects at a redshift of z < 0.15, where the selection effects of the survey are known to yield a complete Type Ia supernova sample. To minimize the bias in our analysis with respect to measured host-galaxy properties, spectra were obtained for nearly all hosts, spanning a range in magnitude of -23 < M_r < -17. In contrast to previous works that use photometric estimates of host mass as a proxy for global metallicity, we analyze host-galaxy spectra to obtain gas-phase metallicities and star-formation rates from host galaxies with active star formation. From a final sample of ~ 40 emission-line galaxies, we find that light-curve corrected Type Ia supernovae are ~ 0.1 magnitudes brighter in high-metallicity hosts than in low-metallicity hosts. We also find a significant (> 3σ) correlation between the Hubble residuals of Type Ia supernovae and the specific star-formation rate of the host galaxy. We comment on the importance of supernova/host-galaxy correlations as a source of systematic bias in future deep supernova surveys.

  2. Mid-IR Properties of an Unbiased AGN Sample of the Local Universe. 1; Emission-Line Diagnostics

    Science.gov (United States)

    Weaver, K. A.; Melendez, M.; Mushotzky, R. F.; Kraemer, S.; Engle, K.; Malumuth, E.; Tueller, J.; Markwardt, C.; Berghea, C. T.; Dudik, R. P.; et al.

    2010-01-01

    We compare mid-IR emission-line properties, from high-resolution Spitzer IRS spectra of a statistically-complete hard X-ray (14-195 keV) selected sample of nearby (z < 0.05) AGN detected by the Burst Alert Telescope (BAT) aboard Swift. The luminosity distribution for the mid-infrared emission-lines, [O IV] 25.89 microns, [Ne II] 12.81 microns, [Ne III] 15.56 microns and [Ne V] 14.32 microns, and hard X-ray continuum show no differences between Seyfert 1 and Seyfert 2 populations, although six newly discovered BAT AGNs are shown to be under-luminous in [O IV], most likely the result of dust extinction in the host galaxy. The overall tightness of the mid-infrared correlations and BAT luminosities suggests that the emission lines primarily arise in gas ionized by the AGN. We also compared the mid-IR emission-lines in the BAT AGNs with those from published studies of star-forming galaxies and LINERs. We found that the BAT AGN fall into a distinctive region when comparing the [Ne III]/[Ne II] and the [O IV]/[Ne III] quantities. From this we found that sources that have been previously classified in the mid-infrared/optical as AGN have smaller emission line ratios than those found for the BAT AGNs, suggesting that, in our X-ray selected sample, the AGN represents the main contribution to the observed line emission. Overall, we present a different set of emission line diagnostics to distinguish between AGN and star forming galaxies that can be used as a tool to find new AGN.

  3. Mutually unbiased bases

    Indian Academy of Sciences (India)

    Mutually unbiased bases play an important role in quantum cryptography [2] and in the optimal determination of the density operator of an ensemble [3,4]. A density operator ρ in N dimensions depends on N^2 - 1 real quantities. With the help of MUBs, any such density operator can be encoded, in an optimal way, in terms of ...
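
A quick check of the defining property in dimension 2, using the three Pauli eigenbases (a standard textbook example, not drawn from this abstract): two orthonormal bases are mutually unbiased when every cross overlap satisfies |<e_i|f_j>|^2 = 1/d.

```python
import math

s = 1 / math.sqrt(2)

# The three Pauli eigenbases in d = 2, each given as a list of basis vectors.
Z = [[1, 0], [0, 1]]                     # sigma_z eigenbasis
X = [[s, s], [s, -s]]                    # sigma_x eigenbasis
Y = [[s, s * 1j], [s, -s * 1j]]          # sigma_y eigenbasis

def inner(u, v):
    """Hermitian inner product <u|v>."""
    return sum(a.conjugate() * b for a, b in zip(u, v))

def mutually_unbiased(B1, B2, d=2):
    """True iff |<e_i|f_j>|^2 = 1/d for all cross pairs."""
    return all(abs(abs(inner(u, v)) ** 2 - 1 / d) < 1e-12
               for u in B1 for v in B2)

ok = all(mutually_unbiased(A, B) for A, B in [(Z, X), (Z, Y), (X, Y)])
```

Measuring in these three bases yields 3 × (2 − 1) = 3 independent probabilities, matching the N^2 − 1 = 3 real parameters of a qubit density operator, which is the optimal-encoding point the abstract makes.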

  4. Sampling Large Graphs for Anticipatory Analytics

    Science.gov (United States)

    2015-05-15

    Edwards, Lauren; Johnson, Luke; Milosavljevic, Maja; Gadepally, Vijay; Miller, Benjamin A. (Lincoln ...)

    ... systems, greater human-in-the-loop involvement, or through complex algorithms. We are investigating the use of sampling to mitigate these challenges ... Random area sampling [8] is a "snowball" sampling method in which a set of random seed vertices are selected and areas ...
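
A minimal snowball/area sampler of the kind described can be sketched as bounded BFS growth around randomly chosen seed vertices. The graph, seed count, and hop limit below are illustrative, not taken from the report.

```python
import random
from collections import deque

def snowball(adj, n_seeds, hops, seed=3):
    """Grow sampled areas by BFS: pick n_seeds random seed vertices, then
    include every vertex within `hops` edges of any seed."""
    rng = random.Random(seed)
    seeds = rng.sample(sorted(adj), n_seeds)
    seen = set(seeds)
    frontier = deque((v, 0) for v in seeds)
    while frontier:
        v, depth = frontier.popleft()
        if depth == hops:
            continue                 # stop expanding at the hop limit
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                frontier.append((w, depth + 1))
    return seen

# Small ring graph 0-1-2-...-9-0: one seed with two hops covers 5 vertices.
adj = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
area = snowball(adj, n_seeds=1, hops=2)
```

The sampled subgraph preserves local structure around each seed, which is what makes this style of sampling attractive for anticipatory analytics on graphs too large to process whole.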

  5. A review of methods for sampling large airborne particles and associated radioactivity

    International Nuclear Information System (INIS)

    Garland, J.A.; Nicholson, K.W.

    1990-01-01

    Radioactive particles, tens of μm or more in diameter, are unlikely to be emitted directly from nuclear facilities with exhaust gas cleansing systems, but may arise in the case of an accident or where resuspension from contaminated surfaces is significant. Such particles may dominate deposition and, according to some workers, may contribute to inhalation doses. Quantitative sampling of large airborne particles is difficult because of their inertia and large sedimentation velocities. The literature describes conditions for unbiased sampling and the magnitude of sampling errors for idealised sampling inlets in steady winds. However, few air samplers for outdoor use have been assessed for adequacy of sampling. Many size selective sampling methods are found in the literature but few are suitable at the low concentrations that are often encountered in the environment. A number of approaches for unbiased sampling of large particles have been found in the literature. Some are identified as meriting further study, for application in the measurement of airborne radioactivity. (author)

  6. Large sample neutron activation analysis of a reference inhomogeneous sample

    International Nuclear Information System (INIS)

    Vasilopoulou, T.; Athens National Technical University, Athens; Tzika, F.; Stamatelatos, I.E.; Koster-Ammerlaan, M.J.J.

    2011-01-01

    A benchmark experiment was performed for Neutron Activation Analysis (NAA) of a large inhomogeneous sample. The reference sample was developed in-house and consisted of a SiO2 matrix and an Al-Zn alloy 'inhomogeneity' body. Monte Carlo simulations were employed to derive appropriate correction factors for neutron self-shielding during irradiation as well as self-attenuation of gamma rays and sample geometry during counting. The large sample neutron activation analysis (LSNAA) results were compared against reference values and the trueness of the technique was evaluated. An agreement within ±10% was observed between LSNAA and reference elemental mass values, for all matrix and inhomogeneity elements except samarium, provided that the inhomogeneity body was fully simulated. However, in cases where the inhomogeneity was treated as unknown, the results showed a reasonable agreement for most matrix elements, while large discrepancies were observed for the inhomogeneity elements. This study provided a quantification of the uncertainties associated with inhomogeneity in large sample analysis and contributed to the identification of the needs for future development of LSNAA facilities for analysis of inhomogeneous samples. (author)
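The gamma-ray self-attenuation correction mentioned here is often approximated, for a uniformly active slab viewed along its thickness, by the mean transmission f = (1 - exp(-μt))/(μt). A sketch of that standard approximation (the numerical values are illustrative; the Monte Carlo corrections used in the study are more general):

```python
import numpy as np

def slab_self_attenuation(mu, t):
    """Average gamma-ray transmission through a uniformly active slab.

    mu: linear attenuation coefficient (1/cm), t: thickness (cm).
    f = (1 - exp(-mu*t)) / (mu*t); the correction applied to a measured
    count rate is 1/f.
    """
    x = mu * t
    if x < 1e-8:          # thin-sample limit: no attenuation
        return 1.0
    return (1.0 - np.exp(-x)) / x

# Thick samples transmit a markedly smaller fraction of their gammas:
f_thin, f_thick = slab_self_attenuation(0.2, 1.0), slab_self_attenuation(0.2, 10.0)
```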

  7. Analysis of large soil samples for actinides

    Science.gov (United States)

    Maxwell, Sherrod L., III (Aiken, SC)

    2009-03-24

    A method of analyzing relatively large soil samples for actinides employing a separation process that uses cerium fluoride precipitation to remove the soil matrix and to co-precipitate plutonium, americium, and curium with cerium and hydrofluoric acid, followed by separation of these actinides using chromatography cartridges.

  8. Large Sample Neutron Activation Analysis of Heterogeneous Samples

    International Nuclear Information System (INIS)

    Stamatelatos, I.E.; Vasilopoulou, T.; Tzika, F.

    2018-01-01

    A Large Sample Neutron Activation Analysis (LSNAA) technique was developed for non-destructive analysis of heterogeneous bulk samples. The technique incorporated collimated scanning, combining experimental measurements and Monte Carlo simulations to identify inhomogeneities in large volume samples and to correct for their effect on the interpretation of gamma-spectrometry data. Corrections were applied for the effects of neutron self-shielding, gamma-ray attenuation, geometrical factors and heterogeneous activity distribution within the sample. A benchmark experiment was performed to investigate the effect of heterogeneity on the accuracy of LSNAA. Moreover, a ceramic vase was analyzed as a whole, demonstrating the feasibility of the technique. The LSNAA results were compared against results obtained by INAA and a satisfactory agreement between the two methods was observed. This study showed that LSNAA is capable of performing accurate, non-destructive, multi-elemental compositional analysis of heterogeneous objects. It also revealed the great potential of the technique for the analysis of precious objects and artefacts that need to be preserved intact and cannot be damaged for sampling purposes. (author)

  9. Markovian description of unbiased polymer translocation

    International Nuclear Information System (INIS)

    Mondaini, Felipe; Moriconi, L.

    2012-01-01

    We perform, with the help of cloud computing resources, extensive Langevin simulations which provide compelling evidence in favor of a general Markovian framework for unbiased three-dimensional polymer translocation. Our statistical analysis consists of careful evaluations of (i) two-point correlation functions of the translocation coordinate and (ii) the empirical probabilities of complete polymer translocation (taken as a function of the initial number of monomers on a given side of the membrane). We find good agreement with predictions derived from the Markov chain approach recently addressed in the literature by the present authors. -- Highlights: ► We investigate unbiased polymer translocation through membrane pores. ► Large statistical ensembles have been produced with the help of cloud computing resources. ► We evaluate the two-point correlation function of the translocation coordinate. ► We evaluate empirical probabilities for complete polymer translocation. ► Unbiased polymer translocation is described as a Markov stochastic process.
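The two-point correlation analysis can be illustrated on the simplest unbiased process, a symmetric random walk, for which ⟨x(t1)x(t2)⟩ = min(t1, t2)·⟨δx²⟩. A sketch (a toy stand-in for the authors' Langevin simulations, not their code):

```python
import numpy as np

# Ensemble of unbiased random walks standing in for the translocation
# coordinate of an unbiased polymer.
rng = np.random.default_rng(0)
n_walks, n_steps = 20000, 100
steps = rng.choice([-1, 1], size=(n_walks, n_steps))
x = np.cumsum(steps, axis=1)

def two_point(t1, t2):
    """Empirical two-point correlation <x(t1) x(t2)> over the ensemble."""
    return np.mean(x[:, t1 - 1] * x[:, t2 - 1])

# For an unbiased walk with unit step variance: <x(t1) x(t2)> = min(t1, t2)
c = two_point(50, 100)
```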

  10. Markovian description of unbiased polymer translocation

    Energy Technology Data Exchange (ETDEWEB)

    Mondaini, Felipe [Instituto de Física, Universidade Federal do Rio de Janeiro, C.P. 68528, 21945-970 Rio de Janeiro, RJ (Brazil); Centro Federal de Educação Tecnológica Celso Suckow da Fonseca, UnED Angra dos Reis, Angra dos Reis, 23953-030, RJ (Brazil); Moriconi, L., E-mail: moriconi@if.ufrj.br [Instituto de Física, Universidade Federal do Rio de Janeiro, C.P. 68528, 21945-970 Rio de Janeiro, RJ (Brazil)

    2012-10-01

    We perform, with the help of cloud computing resources, extensive Langevin simulations which provide compelling evidence in favor of a general Markovian framework for unbiased three-dimensional polymer translocation. Our statistical analysis consists of careful evaluations of (i) two-point correlation functions of the translocation coordinate and (ii) the empirical probabilities of complete polymer translocation (taken as a function of the initial number of monomers on a given side of the membrane). We find good agreement with predictions derived from the Markov chain approach recently addressed in the literature by the present authors. -- Highlights: ► We investigate unbiased polymer translocation through membrane pores. ► Large statistical ensembles have been produced with the help of cloud computing resources. ► We evaluate the two-point correlation function of the translocation coordinate. ► We evaluate empirical probabilities for complete polymer translocation. ► Unbiased polymer translocation is described as a Markov stochastic process.

  11. Large sample NAA facility and methodology development

    International Nuclear Information System (INIS)

    Roth, C.; Gugiu, D.; Barbos, D.; Datcu, A.; Aioanei, L.; Dobrea, D.; Taroiu, I. E.; Bucsa, A.; Ghinescu, A.

    2013-01-01

    A Large Sample Neutron Activation Analysis (LSNAA) facility has been developed at the TRIGA- Annular Core Pulsed Reactor (ACPR) operated by the Institute for Nuclear Research in Pitesti, Romania. The central irradiation cavity of the ACPR core can accommodate a large irradiation device. The ACPR neutron flux characteristics are well known and spectrum adjustment techniques have been successfully applied to enhance the thermal component of the neutron flux in the central irradiation cavity. An analysis methodology was developed by using the MCNP code in order to estimate counting efficiency and correction factors for the major perturbing phenomena. Test experiments, comparison with classical instrumental neutron activation analysis (INAA) methods and international inter-comparison exercise have been performed to validate the new methodology. (authors)

  12. Gibbs sampling on large lattice with GMRF

    Science.gov (United States)

    Marcotte, Denis; Allard, Denis

    2018-02-01

    Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions naturally occur when associating latent Gaussian fields to category fields obtained by discrete simulation methods like multipoint, sequential indicator simulation and object-based simulation. The latent Gaussians are often used in data assimilation and history matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfactory, as it can diverge and does not reproduce exactly the desired covariance. A better approach is to use Gaussian Markov Random Fields (GMRF), which make it possible to compute the conditional distributions at any point without having to compute and invert the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be efficiently computed by convolution and applied with an acceptance/rejection method in the truncated case. We study empirically the speed of convergence and the effects of the choice of boundary conditions, the correlation range and the GMRF smoothness. We show that the convergence is slower in the Gaussian case on the torus than for the finite case studied in the literature. However, in the truncated Gaussian case, we show that short-scale correlation is quickly restored and the conditioning categories at each lattice point imprint the long-scale correlation. Hence our approach makes it practical to apply Gibbs sampling on large 2D or 3D lattices with the desired GMRF covariance.
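A minimal sketch of simultaneous Gibbs updates on coding sets for a GMRF. The small lattice and the precision model Q = αI - A are illustrative assumptions; the truncation, convolution speed-up and acceptance/rejection step of the paper are omitted:

```python
import numpy as np

# GMRF on an n x n lattice with precision Q = alpha*I - A, where A is the
# 4-neighbour adjacency matrix (SPD for alpha > 4). Checkerboard colors are
# coding sets: sites of one color share no neighbors, so the whole color
# can be updated simultaneously.
n, alpha = 4, 5.0
rng = np.random.default_rng(1)
idx = lambda r, c: r * n + c
A = np.zeros((n * n, n * n))
for r in range(n):
    for c in range(n):
        for dr, dc in ((1, 0), (0, 1)):
            if r + dr < n and c + dc < n:
                A[idx(r, c), idx(r + dr, c + dc)] = 1
                A[idx(r + dr, c + dc), idx(r, c)] = 1
Q = alpha * np.eye(n * n) - A
colors = [(np.add.outer(np.arange(n), np.arange(n)) % 2 == k).ravel()
          for k in (0, 1)]

# Full conditional at site i: mean = (sum of neighbors)/alpha, var = 1/alpha.
x = np.zeros(n * n)
samples = []
for sweep in range(20000):
    for mask in colors:                      # simultaneous coding-set update
        cond_mean = A[mask] @ x / alpha      # only opposite-color sites enter
        x[mask] = cond_mean + rng.normal(0, 1 / np.sqrt(alpha), mask.sum())
    if sweep >= 2000 and sweep % 5 == 0:
        samples.append(x.copy())
samples = np.array(samples)

# Empirical marginal variance at the centre site approaches (Q^-1)_ii.
i = idx(2, 2)
emp_var = samples[:, i].var()
theo_var = np.linalg.inv(Q)[i, i]
```

Unlike truncated local-neighborhood updates, this chain targets exactly the covariance Q⁻¹, which is what the comparison at the end checks.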

  13. Entanglement in mutually unbiased bases

    Energy Technology Data Exchange (ETDEWEB)

    Wiesniak, M; Zeilinger, A [Vienna Center for Quantum Science and Technology (VCQ), Faculty of Physics, University of Vienna, Boltzmanngasse 5, 1090 Vienna (Austria); Paterek, T, E-mail: tomasz.paterek@nus.edu.sg [Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, 117543 Singapore (Singapore)

    2011-05-15

    One of the essential features of quantum mechanics is that most pairs of observables cannot be measured simultaneously. This phenomenon manifests itself most strongly when observables are related to mutually unbiased bases. In this paper, we shed some light on the connection between mutually unbiased bases and another essential feature of quantum mechanics, quantum entanglement. It is shown that a complete set of mutually unbiased bases of a bipartite system contains a fixed amount of entanglement, independent of the choice of the set. This has implications for entanglement distribution among the states of a complete set. In prime-squared dimensions we present an explicit experiment-friendly construction of a complete set with a particularly simple entanglement distribution. Finally, we describe the basic properties of mutually unbiased bases composed of product states only. The constructions are illustrated with explicit examples in low dimensions. We believe that the properties of entanglement in mutually unbiased bases may be one of the ingredients to be taken into account to settle the question of the existence of complete sets. We also expect that they will be relevant to applications of bases in the experimental realization of quantum protocols in higher-dimensional Hilbert spaces.
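The defining property of mutual unbiasedness, |⟨a|b⟩|² = 1/d for vectors drawn from different bases, can be checked directly for the complete set of d + 1 = 3 bases in dimension 2 (the Pauli eigenbases); this is a standard construction, not anything specific to the paper:

```python
import numpy as np

# Columns are basis vectors; the three Pauli eigenbases form a complete
# set of mutually unbiased bases in dimension d = 2.
s = 1 / np.sqrt(2)
bases = [
    np.array([[1, 0], [0, 1]], dtype=complex),             # sigma_z eigenbasis
    np.array([[s, s], [s, -s]], dtype=complex),            # sigma_x eigenbasis
    np.array([[s, s], [1j * s, -1j * s]], dtype=complex),  # sigma_y eigenbasis
]

def overlaps(B1, B2):
    """|<a|b>|^2 for all pairs of vectors from two bases."""
    return np.abs(B1.conj().T @ B2) ** 2
```

Every cross-basis overlap matrix is constant at 1/d = 0.5, the hallmark of unbiasedness.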

  14. Identification of relevant drugable targets in diffuse large B-cell lymphoma using a genome-wide unbiased CD20 guilt-by association approach

    NARCIS (Netherlands)

    de Jong, Mathilde R. W.; Visser, Lydia; Huls, Gerwin; Diepstra, Arjan; van Vugt, Marcel; Ammatuna, Emanuele; van Rijn, Rozemarijn S.; Vellenga, Edo; van den Berg, Anke; Fehrmann, Rudolf S. N.; van Meerten, Tom

    2018-01-01

    Forty percent of patients with diffuse large B-cell lymphoma (DLBCL) show disease resistant to standard chemotherapy (CHOP) in combination with the anti-CD20 monoclonal antibody rituximab (R). Although many new anti-cancer drugs have been developed in recent years, it is unclear which of these drugs

  15. DISCLOSING THE RADIO LOUDNESS DISTRIBUTION DICHOTOMY IN QUASARS: AN UNBIASED MONTE CARLO APPROACH APPLIED TO THE SDSS-FIRST QUASAR SAMPLE

    Energy Technology Data Exchange (ETDEWEB)

    Balokovic, M. [Department of Astronomy, California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125 (United States); Smolcic, V. [Argelander-Institut fuer Astronomie, Auf dem Hugel 71, D-53121 Bonn (Germany); Ivezic, Z. [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); Zamorani, G. [INAF-Osservatorio Astronomico di Bologna, via Ranzani 1, I-40127 Bologna (Italy); Schinnerer, E. [Max-Planck-Institut fuer Astronomie, Koenigstuhl 17, D-69117 Heidelberg (Germany); Kelly, B. C. [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93106 (United States)

    2012-11-01

    We investigate the dichotomy in the radio loudness distribution of quasars by modeling their radio emission and various selection effects using a Monte Carlo approach. The existence of two physically distinct quasar populations, the radio-loud and radio-quiet quasars, is controversial and over the last decade a bimodal distribution of radio loudness of quasars has been both affirmed and disputed. We model the quasar radio luminosity distribution with simple unimodal and bimodal distribution functions. The resulting simulated samples are compared to a fiducial sample of 8300 quasars drawn from the SDSS DR7 Quasar Catalog and combined with radio observations from the FIRST survey. Our results indicate that the SDSS-FIRST sample is best described by a radio loudness distribution which consists of two components, with (12 ± 1)% of sources in the radio-loud component. On the other hand, the evidence for a local minimum in the loudness distribution (bimodality) is not strong and we find that previous claims for its existence were probably affected by the incompleteness of the FIRST survey close to its faint limit. We also investigate the redshift and luminosity dependence of the radio loudness distribution and find tentative evidence that at high redshift radio-loud quasars were rarer, on average louder, and exhibited a smaller range in radio loudness. In agreement with other recent work, we conclude that the SDSS-FIRST sample strongly suggests that the radio loudness distribution of quasars is not a universal function, and that more complex models than presented here are needed to fully explain available observations.

  16. DISCLOSING THE RADIO LOUDNESS DISTRIBUTION DICHOTOMY IN QUASARS: AN UNBIASED MONTE CARLO APPROACH APPLIED TO THE SDSS-FIRST QUASAR SAMPLE

    International Nuclear Information System (INIS)

    Baloković, M.; Smolčić, V.; Ivezić, Ž.; Zamorani, G.; Schinnerer, E.; Kelly, B. C.

    2012-01-01

    We investigate the dichotomy in the radio loudness distribution of quasars by modeling their radio emission and various selection effects using a Monte Carlo approach. The existence of two physically distinct quasar populations, the radio-loud and radio-quiet quasars, is controversial and over the last decade a bimodal distribution of radio loudness of quasars has been both affirmed and disputed. We model the quasar radio luminosity distribution with simple unimodal and bimodal distribution functions. The resulting simulated samples are compared to a fiducial sample of 8300 quasars drawn from the SDSS DR7 Quasar Catalog and combined with radio observations from the FIRST survey. Our results indicate that the SDSS-FIRST sample is best described by a radio loudness distribution which consists of two components, with (12 ± 1)% of sources in the radio-loud component. On the other hand, the evidence for a local minimum in the loudness distribution (bimodality) is not strong and we find that previous claims for its existence were probably affected by the incompleteness of the FIRST survey close to its faint limit. We also investigate the redshift and luminosity dependence of the radio loudness distribution and find tentative evidence that at high redshift radio-loud quasars were rarer, on average louder, and exhibited a smaller range in radio loudness. In agreement with other recent work, we conclude that the SDSS-FIRST sample strongly suggests that the radio loudness distribution of quasars is not a universal function, and that more complex models than presented here are needed to fully explain available observations.

  17. LOGISTICS OF ECOLOGICAL SAMPLING ON LARGE RIVERS

    Science.gov (United States)

    The objectives of this document are to provide an overview of the logistical problems associated with the ecological sampling of boatable rivers and to suggest solutions to those problems. It is intended to be used as a resource for individuals preparing to collect biological dat...

  18. Implementation of genomic recursions in single-step genomic best linear unbiased predictor for US Holsteins with a large number of genotyped animals.

    Science.gov (United States)

    Masuda, Y; Misztal, I; Tsuruta, S; Legarra, A; Aguilar, I; Lourenco, D A L; Fragomeni, B O; Lawlor, T J

    2016-03-01

    The objectives of this study were to develop and evaluate an efficient implementation in the computation of the inverse of genomic relationship matrix with the recursion algorithm, called the algorithm for proven and young (APY), in single-step genomic BLUP. We validated genomic predictions for young bulls with more than 500,000 genotyped animals in final score for US Holsteins. Phenotypic data included 11,626,576 final scores on 7,093,380 US Holstein cows, and genotypes were available for 569,404 animals. Daughter deviations for young bulls with no classified daughters in 2009, but at least 30 classified daughters in 2014 were computed using all the phenotypic data. Genomic predictions for the same bulls were calculated with single-step genomic BLUP using phenotypes up to 2009. We calculated the APY inverse of the genomic relationship matrix, G_APY^-1, based on a direct inversion of the genomic relationship matrix for a small subset of genotyped animals (core animals) and extended that information to noncore animals by recursion. We tested several sets of core animals including 9,406 bulls with at least 1 classified daughter, 9,406 bulls and 1,052 classified dams of bulls, 9,406 bulls and 7,422 classified cows, and random samples of 5,000 to 30,000 animals. Validation reliability was assessed by the coefficient of determination from regression of daughter deviation on genomic predictions for the predicted young bulls. The reliabilities were 0.39 with 5,000 randomly chosen core animals, 0.45 with the 9,406 bulls and 7,422 cows as core animals, and 0.44 with the remaining sets. With phenotypes truncated in 2009 and the preconditioned conjugate gradient to solve mixed model equations, the number of rounds to convergence for core animals defined by bulls was 1,343; defined by bulls and cows, 2,066; and defined by 10,000 random animals, at most 1,629. With complete phenotype data, the number of rounds decreased to 858, 1,299, and at most 1,092, respectively.
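The APY recursion replaces direct inversion of the full genomic relationship matrix with inversion of the core block only; noncore animals enter through regressions on the core plus independent residual variances. A compact dense sketch of that block structure (an illustration, not the sparse production implementation used in the study):

```python
import numpy as np

def apy_inverse(G, core):
    """APY inverse of a genomic relationship matrix.

    Breeding values of noncore animals are modeled as regressions on the
    core animals plus independent residuals, so only the core block of G
    is inverted directly. `core` is the list of core-animal indices.
    """
    n = G.shape[0]
    noncore = [i for i in range(n) if i not in core]
    Gcc_inv = np.linalg.inv(G[np.ix_(core, core)])
    Gnc = G[np.ix_(noncore, core)]
    P = Gnc @ Gcc_inv                               # regression on core
    # conditional variances of noncore animals, taken as independent
    m = np.diag(G)[noncore] - np.einsum('ij,ij->i', P, Gnc)
    Minv = np.diag(1.0 / m)
    out = np.zeros((n, n))
    out[np.ix_(core, core)] = Gcc_inv + P.T @ Minv @ P
    out[np.ix_(core, noncore)] = -P.T @ Minv
    out[np.ix_(noncore, core)] = -Minv @ P
    out[np.ix_(noncore, noncore)] = Minv
    return out

# Synthetic check: when the conditional covariance of noncore given core
# is exactly diagonal, the APY inverse equals the true inverse.
rng = np.random.default_rng(2)
L = rng.normal(size=(5, 5))
Gcc = L @ L.T + 5 * np.eye(5)
Gcn = rng.normal(size=(5, 3))
D = np.diag([1.0, 2.0, 3.0])
G = np.block([[Gcc, Gcn], [Gcn.T, Gcn.T @ np.linalg.inv(Gcc) @ Gcn + D]])
apy_inv = apy_inverse(G, core=list(range(5)))
```

Because only the core block is dense, the cost of building this inverse scales with the number of core animals rather than all genotyped animals.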
Setting up G_APY^-1

  19. Random sampling of elementary flux modes in large-scale metabolic networks.

    Science.gov (United States)

    Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel

    2012-09-15

    The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well distributed sample, that is representative of the complete set of EMs, should be suitable to most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. Contact: dmachado@deb.uminho.pt. Supplementary data are available at Bioinformatics online.
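The filtering step described here hinges on every candidate combination having the same probability of being retained. A sketch of such an equal-probability filter (the candidate set and sizes are invented; the real algorithm applies this inside the canonical basis iteration):

```python
import random

def uniform_filter(candidates, k, rng):
    """Keep at most k candidates, each with the same inclusion probability
    (k / len(candidates)), mirroring the unbiased selection of new mode
    combinations at each iteration of the sampling algorithm."""
    candidates = list(candidates)
    if len(candidates) <= k:
        return candidates
    return rng.sample(candidates, k)

# Empirically, every candidate is retained with probability k/n.
rng = random.Random(0)
trials = 20000
counts = {i: 0 for i in range(10)}
for _ in range(trials):
    for kept in uniform_filter(range(10), 5, rng):
        counts[kept] += 1
```

Capping the retained set at k per iteration is what prevents the exponential growth of intermediate modes while keeping the sample unbiased.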

  20. UNBIASED ESTIMATORS OF SPECIFIC CONNECTIVITY

    Directory of Open Access Journals (Sweden)

    Jean-Paul Jernot

    2011-05-01

    This paper deals with the estimation of the specific connectivity of a stationary random set in IRd. It turns out that the "natural" estimator is only asymptotically unbiased. The example of a Boolean model of hypercubes illustrates the amplitude of the bias produced when the measurement field is relatively small with respect to the range of the random set. For that reason unbiased estimators are desired. Such an estimator can be found in the literature in the case where the measurement field is a right parallelotope. In this paper, this estimator is extended to apply to measurement fields of various shapes, and to possess a smaller variance. Finally an example from quantitative metallography (specific connectivity of a population of sintered bronze particles) is given.

  1. Optimal sampling designs for large-scale fishery sample surveys in Greece

    Directory of Open Access Journals (Sweden)

    G. BAZIGOS

    2007-12-01

    The paper deals with the optimization of the following three large-scale sample surveys: the biological sample survey of commercial landings (BSCL), the experimental fishing sample survey (EFSS), and the commercial landings and effort sample survey (CLES).
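Optimization of stratified surveys like these typically uses Neyman allocation, in which the sample share of stratum h is proportional to N_h·S_h. A sketch with invented stratum sizes, not the Greek survey data:

```python
def neyman_allocation(n_total, sizes, sds):
    """Neyman (optimum) allocation for stratified sampling.

    n_h is proportional to N_h * S_h (stratum size times stratum standard
    deviation), which minimizes the variance of the estimated total for a
    fixed overall sample size n_total.
    """
    weights = [N * S for N, S in zip(sizes, sds)]
    total = sum(weights)
    return [n_total * w / total for w in weights]

# Three hypothetical landing-port strata: the small but highly variable
# second stratum receives the largest share of the sample.
alloc = neyman_allocation(300, sizes=[1000, 500, 100], sds=[2.0, 6.0, 1.0])
```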

  2. Software engineering the mixed model for genome-wide association studies on large samples.

    Science.gov (United States)

    Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J

    2009-11-01

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.
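The core computation reviewed here is the solution of Henderson's mixed-model equations, which yield the BLUE of fixed effects and the BLUP of random effects without inverting the n × n phenotypic covariance matrix. A toy dense sketch (illustrative data; real GWAS software exploits sparsity and scale):

```python
import numpy as np

# Mixed model y = X b + Z u + e with u ~ N(0, K * su2) and e ~ N(0, I * se2).
# K plays the role of the kinship matrix; here it is synthetic.
rng = np.random.default_rng(3)
n, p, q = 50, 2, 8
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Z = np.eye(q)[rng.integers(0, q, size=n)]        # group incidence matrix
L = rng.normal(size=(q, q))
K = L @ L.T + q * np.eye(q)                      # synthetic "kinship"
su2, se2 = 1.0, 2.0
y = rng.normal(size=n)

# Henderson's mixed-model equations (scaled by se2): the (q+p)-dimensional
# system replaces inversion of the n x n matrix V = Z K Z' su2 + I se2.
lam = se2 / su2
MME = np.block([[X.T @ X, X.T @ Z],
                [Z.T @ X, Z.T @ Z + lam * np.linalg.inv(K)]])
rhs = np.concatenate([X.T @ y, Z.T @ y])
sol = np.linalg.solve(MME, rhs)
b_hat, u_hat = sol[:p], sol[p:]                  # BLUE and BLUP
```

The review's point about "best linear unbiased predictors in place of raw phenotype" refers to reusing u_hat-style solutions downstream; the MME solution coincides with the GLS estimate of b and the BLUP of u.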

  3. Circulating tumor cell detection: A direct comparison between negative and unbiased enrichment in lung cancer.

    Science.gov (United States)

    Xu, Yan; Liu, Biao; Ding, Fengan; Zhou, Xiaodie; Tu, Pin; Yu, Bo; He, Yan; Huang, Peilin

    2017-06-01

    Circulating tumor cells (CTCs), isolated as a 'liquid biopsy', may provide important diagnostic and prognostic information. Therefore, rapid, reliable and unbiased detection of CTCs is required for routine clinical analyses. It was demonstrated that negative enrichment, an epithelial marker-independent technique for isolating CTCs, exhibits a better efficiency in the detection of CTCs compared with positive enrichment techniques that only use specific anti-epithelial cell adhesion molecules. However, negative enrichment techniques incur significant cell loss during the isolation procedure, and as it is a method that uses only one type of antibody, it is inherently biased. The detection procedure and identification of cell types also relies on skilled and experienced technicians. In the present study, the detection sensitivity of using negative enrichment and a previously described unbiased detection method was compared. The results revealed that unbiased detection methods may efficiently detect >90% of cancer cells in blood samples containing CTCs. By contrast, only 40-60% of CTCs were detected by negative enrichment. Additionally, CTCs were identified in >65% of patients with stage I/II lung cancer. This simple yet efficient approach may achieve a high level of sensitivity. It demonstrates a potential for the large-scale clinical implementation of CTC-based diagnostic and prognostic strategies.

  4. Exploring Technostress: Results of a Large Sample Factor Analysis

    OpenAIRE

    Jonušauskas, Steponas; Raišienė, Agota Giedrė

    2016-01-01

    With reference to the results of a large sample factor analysis, the article aims to propose a frame for examining technostress in a population. The survey and principal component analysis of a sample consisting of 1013 individuals who use ICT in their everyday work were implemented in the research. Thirteen factors combine 68 questions and explain 59.13 per cent of the answer dispersion. Based on the factor analysis, the questionnaire was reframed and prepared to reasonably analyze the respondents' an...
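The "per cent of dispersion explained" figure comes from the eigenvalue decomposition underlying principal component analysis. A sketch on synthetic survey responses (the factor structure is invented, not the study's data):

```python
import numpy as np

# Synthetic survey: 1013 respondents, 20 items driven by 3 latent factors
# plus item-level noise, standing in for questionnaire responses.
rng = np.random.default_rng(4)
latent = rng.normal(size=(1013, 3))
loadings = rng.normal(size=(3, 20))
answers = latent @ loadings + 0.5 * rng.normal(size=(1013, 20))

# Principal components: eigenvalues of the correlation matrix give the
# share of total answer dispersion explained by each component.
Zstd = (answers - answers.mean(0)) / answers.std(0)
corr = Zstd.T @ Zstd / len(Zstd)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
explained = eigvals / eigvals.sum()
```

With three strong latent factors, the first three components carry most of the dispersion, which is the quantity the article's 59.13 per cent refers to for its thirteen retained factors.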

  5. A spinner magnetometer for large Apollo lunar samples

    Science.gov (United States)

    Uehara, M.; Gattacceca, J.; Quesnel, Y.; Lepaulard, C.; Lima, E. A.; Manfredi, M.; Rochette, P.

    2017-10-01

    We developed a spinner magnetometer to measure the natural remanent magnetization of large Apollo lunar rocks in the storage vault of the Lunar Sample Laboratory Facility (LSLF) of NASA. The magnetometer mainly consists of a commercially available three-axial fluxgate sensor and a hand-rotating sample table with an optical encoder recording the rotation angles. The distance between the sample and the sensor is adjustable according to the sample size and magnetization intensity. The sensor and the sample are placed in a two-layer mu-metal shield to measure the sample natural remanent magnetization. The magnetic signals are acquired together with the rotation angle to obtain stacking of the measured signals over multiple revolutions. The developed magnetometer has a sensitivity of 5 × 10⁻⁷ Am² at the standard sensor-to-sample distance of 15 cm. This sensitivity is sufficient to measure the natural remanent magnetization of almost all the lunar basalt and breccia samples with mass above 10 g in the LSLF vault.
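Recovering a moment from the stacked angle-resolved signal amounts to fitting a sinusoid in the rotation angle. A least-squares sketch on a synthetic signal (the field values and noise level are invented, not LSLF measurements):

```python
import numpy as np

# Spinner principle: as the sample rotates, the fluxgate reading varies as
# B(theta) = a*cos(theta) + b*sin(theta) + c, where (a, b) is set by the
# component of the sample moment transverse to the rotation axis and c by
# the static background. Synthetic one-revolution signal:
rng = np.random.default_rng(5)
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
a_true, b_true, offset = 2.0e-9, -1.5e-9, 4.0e-9
signal = (a_true * np.cos(theta) + b_true * np.sin(theta) + offset
          + 0.5e-9 * rng.normal(size=theta.size))

# Least-squares fit; stacking over multiple revolutions, as in the
# instrument, averages the noise down further.
A = np.column_stack([np.cos(theta), np.sin(theta), np.ones_like(theta)])
a_fit, b_fit, c_fit = np.linalg.lstsq(A, signal, rcond=None)[0]
amplitude = np.hypot(a_fit, b_fit)   # proportional to the transverse moment
```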

  6. A spinner magnetometer for large Apollo lunar samples.

    Science.gov (United States)

    Uehara, M; Gattacceca, J; Quesnel, Y; Lepaulard, C; Lima, E A; Manfredi, M; Rochette, P

    2017-10-01

    We developed a spinner magnetometer to measure the natural remanent magnetization of large Apollo lunar rocks in the storage vault of the Lunar Sample Laboratory Facility (LSLF) of NASA. The magnetometer mainly consists of a commercially available three-axial fluxgate sensor and a hand-rotating sample table with an optical encoder recording the rotation angles. The distance between the sample and the sensor is adjustable according to the sample size and magnetization intensity. The sensor and the sample are placed in a two-layer mu-metal shield to measure the sample natural remanent magnetization. The magnetic signals are acquired together with the rotation angle to obtain stacking of the measured signals over multiple revolutions. The developed magnetometer has a sensitivity of 5 × 10⁻⁷ Am² at the standard sensor-to-sample distance of 15 cm. This sensitivity is sufficient to measure the natural remanent magnetization of almost all the lunar basalt and breccia samples with mass above 10 g in the LSLF vault.

  7. Sampling large random knots in a confined space

    International Nuclear Information System (INIS)

    Arsuaga, J; Blackstone, T; Diao, Y; Hinson, K; Karadayi, E; Saito, M

    2007-01-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^(n^2)). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications
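A uniform random polygon is simply n vertices drawn uniformly in a cube and joined in order; the crossing number of a projection, the O(n²) quantity discussed here, can be counted by pairwise segment intersection. A sketch (illustrative, far simpler than the knot-invariant computations in the paper):

```python
import numpy as np

def segments_cross(p, q, r, s):
    """True if open 2D segments pq and rs properly intersect."""
    def orient(a, b, c):
        return np.sign((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))
    return (orient(p, q, r) * orient(p, q, s) < 0 and
            orient(r, s, p) * orient(r, s, q) < 0)

def crossings_in_projection(poly3d):
    """Crossing number of the xy-projection of a closed polygon."""
    pts = poly3d[:, :2]
    n = len(pts)
    edges = [(pts[i], pts[(i + 1) % n]) for i in range(n)]
    count = 0
    for i in range(n):
        for j in range(i + 2, n):        # non-adjacent edge pairs only
            if i == 0 and j == n - 1:    # first and last edges share a vertex
                continue
            if segments_cross(*edges[i], *edges[j]):
                count += 1
    return count

# Uniform random polygon confined to the unit cube, n = 50 vertices.
rng = np.random.default_rng(6)
poly = rng.random((50, 3))
k = crossings_in_projection(poly)
```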

  8. Sampling large random knots in a confined space

    Science.gov (United States)

    Arsuaga, J.; Blackstone, T.; Diao, Y.; Hinson, K.; Karadayi, E.; Saito, M.

    2007-09-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}) . We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of the order of O(n2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.

  9. Importance sampling large deviations in nonequilibrium steady states. I

    Science.gov (United States)

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T.

    2018-03-01

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
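The exponential difficulty described here already appears for the simplest time-integrated observable, a sum of i.i.d. ±1 steps, for which the scaled cumulant generating function is known exactly. A sketch of the naive (unguided) estimator and its exponentially growing relative variance (a toy stand-in for the trajectory ensembles studied in the paper):

```python
import numpy as np

# Direct estimation of a scaled cumulant generating function
# lambda(s) = (1/N) ln E[exp(s*A)], with A the sum of N i.i.d. +-1 steps.
# For i.i.d. steps the exact answer is ln cosh(s).
rng = np.random.default_rng(7)
N, M, s = 10, 200000, 0.5
steps = rng.choice([-1, 1], size=(M, N))
A = steps.sum(axis=1)                      # time-integrated observable
scgf_mc = np.log(np.mean(np.exp(s * A))) / N
scgf_exact = np.log(np.cosh(s))

# The relative variance of exp(s*A) grows exponentially with N:
# E[e^{2sA}]/E[e^{sA}]^2 - 1 = (cosh 2s)^N / (cosh s)^{2N} - 1.
# This is why transition path sampling and diffusion Monte Carlo need
# guiding functions once s*N is large.
rel_var = np.exp(N * (np.log(np.cosh(2 * s)) - 2 * np.log(np.cosh(s)))) - 1
```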

  11. Sampling large random knots in a confined space

    Energy Technology Data Exchange (ETDEWEB)

    Arsuaga, J [Department of Mathematics, San Francisco State University, 1600 Holloway Ave, San Francisco, CA 94132 (United States); Blackstone, T [Department of Computer Science, San Francisco State University, 1600 Holloway Ave., San Francisco, CA 94132 (United States); Diao, Y [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Hinson, K [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Karadayi, E [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States); Saito, M [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States)

    2007-09-28

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (such as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is on the order of O(n^2). Therefore, two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.

  12. Multivariate statistics high-dimensional and large-sample approximations

    CERN Document Server

    Fujikoshi, Yasunori; Shimizu, Ryoichi

    2010-01-01

    A comprehensive examination of high-dimensional analysis of multivariate methods and their real-world applications. Multivariate Statistics: High-Dimensional and Large-Sample Approximations is the first book of its kind to explore how classical multivariate methods can be revised and used in place of conventional statistical tools. Written by prominent researchers in the field, the book focuses on high-dimensional and large-sample approximations and details the many basic multivariate methods used to achieve high levels of accuracy. The authors begin with a fundamental presentation of the basic

  13. A course in mathematical statistics and large sample theory

    CERN Document Server

    Bhattacharya, Rabi; Patrangenaru, Victor

    2016-01-01

    This graduate-level textbook is primarily aimed at graduate students of statistics, mathematics, science, and engineering who have had an undergraduate course in statistics, an upper division course in analysis, and some acquaintance with measure theoretic probability. It provides a rigorous presentation of the core of mathematical statistics. Part I of this book constitutes a one-semester course on basic parametric mathematical statistics. Part II deals with the large sample theory of statistics — parametric and nonparametric — and its contents may be covered in one semester as well. Part III provides brief accounts of a number of topics of current interest for practitioners and other disciplines whose work involves statistical methods. Features: large sample theory with many worked examples, numerical calculations, and simulations to illustrate the theory; appendices providing ready access to a number of standard results, with many proofs; solutions given to a number of selected exercises from Part I; Part II exercises with ...

  14. Within-subject template estimation for unbiased longitudinal image analysis.

    Science.gov (United States)

    Reuter, Martin; Schmansky, Nicholas J; Rosas, H Diana; Fischl, Bruce

    2012-07-16

    Longitudinal image analysis has become increasingly important in clinical studies of normal aging and neurodegenerative disorders. Furthermore, there is a growing appreciation of the potential utility of longitudinally acquired structural images and reliable image processing to evaluate disease modifying therapies. Challenges have been related to the variability that is inherent in the available cross-sectional processing tools, to the introduction of bias in longitudinal processing and to potential over-regularization. In this paper we introduce a novel longitudinal image processing framework, based on unbiased, robust, within-subject template creation, for automatic surface reconstruction and segmentation of brain MRI of arbitrarily many time points. We demonstrate that it is essential to treat all input images exactly the same as removing only interpolation asymmetries is not sufficient to remove processing bias. We successfully reduce variability and avoid over-regularization by initializing the processing in each time point with common information from the subject template. The presented results show a significant increase in precision and discrimination power while preserving the ability to detect large anatomical deviations; as such they hold great potential in clinical applications, e.g. allowing for smaller sample sizes or shorter trials to establish disease specific biomarkers or to quantify drug effects. Copyright © 2012 Elsevier Inc. All rights reserved.

  15. Exploring Technostress: Results of a Large Sample Factor Analysis

    Directory of Open Access Journals (Sweden)

    Steponas Jonušauskas

    2016-06-01

    With reference to the results of a large sample factor analysis, the article aims to propose a frame for examining technostress in a population. A survey and principal component analysis of a sample of 1013 individuals who use ICT in their everyday work were implemented in the research. Thirteen factors combine 68 questions and explain 59.13 per cent of the dispersion in the answers. Based on the factor analysis, the questionnaire was reframed and prepared to analyze the respondents' answers reasonably, revealing technostress causes and consequences as well as technostress prevalence in the population in a statistically validated pattern. The key elements of technostress identified by the factor analysis can serve for the construction of technostress measurement scales in further research.
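The principal-component step described above can be sketched as follows; the data matrix here is a random placeholder (a real analysis would load the actual 1013 x 68 questionnaire responses):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for the survey data: 1013 respondents x 68 items.
X = rng.normal(size=(1013, 68))

# Principal component analysis via the item correlation matrix.
C = np.corrcoef(X, rowvar=False)
eigvals = np.linalg.eigvalsh(C)[::-1]      # eigenvalues, descending
explained = eigvals / eigvals.sum()        # fraction of dispersion per component

# Share of total dispersion explained by the first 13 components
# (the study reports 59.13% for its 13 retained factors).
share13 = explained[:13].sum()
```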

  16. Sampling of charged liquid radwaste stored in large tanks

    International Nuclear Information System (INIS)

    Tchemitcheff, E.; Domage, M.; Bernard-Bruls, X.

    1995-01-01

    The final safe disposal of radwaste, in France and elsewhere, entails, for liquid effluents, their conversion to a stable solid form, hence implying their conditioning. The production of conditioned waste with the requisite quality, traceability of the characteristics of the packages produced, and safe operation of the conditioning processes, implies at least the accurate knowledge of the chemical and radiochemical properties of the effluents concerned. The problem in sampling the normally charged effluents is aggravated for effluents that have been stored for several years in very large tanks, without stirring and retrieval systems. In 1992, SGN was asked by Cogema to study the retrieval and conditioning of LL/ML chemical sludge and spent ion-exchange resins produced in the operation of the UP2 400 plant at La Hague, and stored temporarily in rectangular silos and tanks. The sampling aspect was crucial for validating the inventories, identifying the problems liable to arise in the aging of the effluents, dimensioning the retrieval systems and checking the transferability and compatibility with the downstream conditioning process. Two innovative self-contained systems were developed and built for sampling operations, positioned above the tanks concerned. Both systems have been operated in active conditions and have proved totally satisfactory for taking representative samples. Today SGN can propose industrially proven overall solutions, adaptable to the various constraints of many spent fuel cycle operators

  17. Unbiased multi-fidelity estimate of failure probability of a free plane jet

    Science.gov (United States)

    Marques, Alexandre; Kramer, Boris; Willcox, Karen; Peherstorfer, Benjamin

    2017-11-01

    Estimating failure probability related to fluid flows is a challenge because it requires a large number of evaluations of expensive models. We address this challenge by leveraging multiple low fidelity models of the flow dynamics to create an optimal unbiased estimator. In particular, we investigate the effects of uncertain inlet conditions in the width of a free plane jet. We classify a condition as failure when the corresponding jet width is below a small threshold, such that failure is a rare event (failure probability is smaller than 0.001). We estimate failure probability by combining the frameworks of multi-fidelity importance sampling and optimal fusion of estimators. Multi-fidelity importance sampling uses a low fidelity model to explore the parameter space and create a biasing distribution. An unbiased estimate is then computed with a relatively small number of evaluations of the high fidelity model. In the presence of multiple low fidelity models, this framework offers multiple competing estimators. Optimal fusion combines all competing estimators into a single estimator with minimal variance. We show that this combined framework can significantly reduce the cost of estimating failure probabilities, and thus can have a large impact in fluid flow applications. This work was funded by DARPA.
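The fusion step rests on a standard result: independent unbiased estimators combined with inverse-variance weights give the minimal-variance unbiased linear combination. A hedged sketch (the numbers are illustrative, not from the study):

```python
def fuse(estimates, variances):
    """Minimal-variance linear fusion of independent unbiased estimators:
    weights proportional to inverse variance. A generic stand-in for the
    optimal-fusion framework described above."""
    weights = [1.0 / v for v in variances]
    wsum = sum(weights)
    fused = sum(w * e for w, e in zip(weights, estimates)) / wsum
    fused_var = 1.0 / wsum          # never larger than the best single variance
    return fused, fused_var

# Three competing failure-probability estimates with differing variances.
est, var = fuse([0.0011, 0.0009, 0.0010], [1e-8, 4e-8, 2e-8])
```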

  18. Matrix Sampling of Items in Large-Scale Assessments

    Directory of Open Access Journals (Sweden)

    Ruth A. Childs

    2003-07-01

    Matrix sampling of items, that is, division of a set of items into different versions of a test form, is used by several large-scale testing programs. Like other test designs, matrixed designs have both advantages and disadvantages. For example, testing time per student is less than if each student received all the items, but the comparability of student scores may decrease. Also, curriculum coverage is maintained, but reporting of scores becomes more complex. In this paper, matrixed designs are compared with more traditional designs in nine categories of costs: development costs, materials costs, administration costs, educational costs, scoring costs, reliability costs, comparability costs, validity costs, and reporting costs. In choosing among test designs, a testing program should examine the costs in light of its mandate(s), the content of the tests, and the financial resources available, among other considerations.
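A matrixed design can be sketched by spiraling items across forms; this round-robin split is a generic illustration, not one of the specific designs compared in the paper:

```python
def matrix_forms(items, n_forms):
    """Divide an item pool into test forms by spiraling (round-robin):
    each student sees only one form, but together the forms cover the
    whole pool, preserving curriculum coverage."""
    forms = [[] for _ in range(n_forms)]
    for i, item in enumerate(items):
        forms[i % n_forms].append(item)
    return forms

# 60 items split into 4 forms of 15 items each.
forms = matrix_forms(list(range(60)), 4)
```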

  19. The ESO Diffuse Interstellar Bands Large Exploration Survey (EDIBLES) . I. Project description, survey sample, and quality assessment

    Science.gov (United States)

    Cox, Nick L. J.; Cami, Jan; Farhang, Amin; Smoker, Jonathan; Monreal-Ibero, Ana; Lallement, Rosine; Sarre, Peter J.; Marshall, Charlotte C. M.; Smith, Keith T.; Evans, Christopher J.; Royer, Pierre; Linnartz, Harold; Cordiner, Martin A.; Joblin, Christine; van Loon, Jacco Th.; Foing, Bernard H.; Bhatt, Neil H.; Bron, Emeric; Elyajouri, Meriem; de Koter, Alex; Ehrenfreund, Pascale; Javadi, Atefeh; Kaper, Lex; Khosroshadi, Habib G.; Laverick, Mike; Le Petit, Franck; Mulas, Giacomo; Roueff, Evelyne; Salama, Farid; Spaans, Marco

    2017-10-01

    The carriers of the diffuse interstellar bands (DIBs) are largely unidentified molecules ubiquitously present in the interstellar medium (ISM). After decades of study, two strong and possibly three weak near-infrared DIBs have recently been attributed to the C60^+ fullerene based on observational and laboratory measurements. There is great promise for the identification of the over 400 other known DIBs, as this result could provide chemical hints towards other possible carriers. In an effort to systematically study the properties of the DIB carriers, we have initiated a new large-scale observational survey: the ESO Diffuse Interstellar Bands Large Exploration Survey (EDIBLES). The main objective is to build on and extend existing DIB surveys to make a major step forward in characterising the physical and chemical conditions for a statistically significant sample of interstellar lines-of-sight, with the goal to reverse-engineer key molecular properties of the DIB carriers. EDIBLES is a filler Large Programme using the Ultraviolet and Visual Echelle Spectrograph at the Very Large Telescope at Paranal, Chile. It is designed to provide an observationally unbiased view of the presence and behaviour of the DIBs towards early-spectral-type stars whose lines-of-sight probe the diffuse-to-translucent ISM. Such a complete dataset will provide a deep census of the atomic and molecular content, physical conditions, chemical abundances and elemental depletion levels for each sightline. Achieving these goals requires a homogeneous set of high-quality data in terms of resolution (R ≈ 70 000-100 000), sensitivity (S/N up to 1000 per resolution element), and spectral coverage (305-1042 nm), as well as a large sample size (100+ sightlines). In this first paper the goals, objectives and methodology of the EDIBLES programme are described and an initial assessment of the data is provided.

  20. Gene coexpression measures in large heterogeneous samples using count statistics.

    Science.gov (United States)

    Wang, Y X Rachel; Waterman, Michael S; Huang, Haiyan

    2014-11-18

    With the advent of high-throughput technologies making large-scale gene expression data readily available, developing appropriate computational tools to process these data and distill insights into systems biology has been an important part of the "big data" challenge. Gene coexpression is one of the earliest techniques developed that is still widely in use for functional annotation, pathway analysis, and, most importantly, the reconstruction of gene regulatory networks, based on gene expression data. However, most coexpression measures do not specifically account for local features in expression profiles. For example, it is very likely that the patterns of gene association may change or only exist in a subset of the samples, especially when the samples are pooled from a range of experiments. We propose two new gene coexpression statistics based on counting local patterns of gene expression ranks to take into account the potentially diverse nature of gene interactions. In particular, one of our statistics is designed for time-course data with local dependence structures, such as time series coupled over a subregion of the time domain. We provide asymptotic analysis of their distributions and power, and evaluate their performance against a wide range of existing coexpression measures on simulated and real data. Our new statistics are fast to compute, robust against outliers, and show comparable and often better general performance.
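To illustrate the flavor of such count statistics, the sketch below scores two expression profiles by counting sliding windows whose rank patterns agree; it is a simplified stand-in, not the authors' exact statistics:

```python
def local_pattern_agreement(x, y, w=3):
    """Count the windows of width w in which the rank (ordering) pattern
    of x matches that of y. Local rank patterns can detect association
    that exists only in a subset of samples, unlike global correlation."""
    def pattern(seq):
        # Indices of the window's values in ascending order.
        return tuple(sorted(range(len(seq)), key=seq.__getitem__))
    n = len(x)
    assert len(y) == n and w <= n
    return sum(pattern(x[i:i + w]) == pattern(y[i:i + w])
               for i in range(n - w + 1))

x = [1, 3, 2, 5, 4, 6]
y = [2, 4, 3, 6, 5, 7]   # same local orderings as x
score = local_pattern_agreement(x, y, 3)
```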

  1. Waardenburg syndrome: Novel mutations in a large Brazilian sample.

    Science.gov (United States)

    Bocángel, Magnolia Astrid Pretell; Melo, Uirá Souto; Alves, Leandro Ucela; Pardono, Eliete; Lourenço, Naila Cristina Vilaça; Marcolino, Humberto Vicente Cezar; Otto, Paulo Alberto; Mingroni-Netto, Regina Célia

    2018-06-01

    This paper deals with the molecular investigation of Waardenburg syndrome (WS) in a sample of 49 clinically diagnosed probands (most from southeastern Brazil), 24 of them having the type 1 (WS1) variant (10 familial and 14 isolated cases) and 25 being affected by the type 2 (WS2) variant (five familial and 20 isolated cases). Sequential Sanger sequencing of all coding exons of PAX3, MITF, EDN3, EDNRB, SOX10 and SNAI2 genes, followed by CNV detection by MLPA of PAX3, MITF and SOX10 genes in selected cases revealed many novel pathogenic variants. Molecular screening, performed in all patients, revealed 19 causative variants (19/49 = 38.8%), six of them being large whole-exon deletions detected by MLPA, seven (four missense and three nonsense substitutions) resulting from single nucleotide substitutions (SNV), and six representing small indels. A pair of dizygotic affected female twins presented the c.430delC variant in SOX10, but the mutation, imputed to gonadal mosaicism, was not found in their unaffected parents. At least 10 novel causative mutations, described in this paper, were found in this Brazilian sample. Copy-number-variation detected by MLPA identified the causative mutation in 12.2% of our cases, corresponding to 31.6% of all causative mutations. In the majority of cases, the deletions were sporadic, since they were not present in the parents of isolated cases. Our results, as a whole, reinforce the fact that the screening of copy-number-variants by MLPA is a powerful tool to identify the molecular cause in WS patients. Copyright © 2018 Elsevier Masson SAS. All rights reserved.

  2. Sample-path large deviations in credit risk

    NARCIS (Netherlands)

    Leijdekker, V.J.G.; Mandjes, M.R.H.; Spreij, P.J.C.

    2011-01-01

    The event of large losses plays an important role in credit risk. As these large losses are typically rare, and portfolios usually consist of a large number of positions, large deviation theory is the natural tool to analyze the tail asymptotics of the probabilities involved. We first derive a

  3. Large sample hydrology in NZ: Spatial organisation in process diagnostics

    Science.gov (United States)

    McMillan, H. K.; Woods, R. A.; Clark, M. P.

    2013-12-01

    A key question in hydrology is how to predict the dominant runoff generation processes in any given catchment. This knowledge is vital for a range of applications in forecasting hydrological response and related processes such as nutrient and sediment transport. A step towards this goal is to map dominant processes in locations where data is available. In this presentation, we use data from 900 flow gauging stations and 680 rain gauges in New Zealand, to assess hydrological processes. These catchments range in character from rolling pasture, to alluvial plains, to temperate rainforest, to volcanic areas. By taking advantage of so many flow regimes, we harness the benefits of large-sample and comparative hydrology to study patterns and spatial organisation in runoff processes, and their relationship to physical catchment characteristics. The approach we use to assess hydrological processes is based on the concept of diagnostic signatures. Diagnostic signatures in hydrology are targeted analyses of measured data which allow us to investigate specific aspects of catchment response. We apply signatures which target the water balance, the flood response and the recession behaviour. We explore the organisation, similarity and diversity in hydrological processes across the New Zealand landscape, and how these patterns change with scale. We discuss our findings in the context of the strong hydro-climatic gradients in New Zealand, and consider the implications for hydrological model building on a national scale.
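Two of the simplest diagnostic signatures mentioned above, a water-balance ratio and a recession constant, can be sketched directly from flow and precipitation series (illustrative definitions; the study uses more refined variants of each signature class):

```python
def runoff_ratio(flow_mm, precip_mm):
    """Water-balance signature: fraction of precipitation leaving as runoff."""
    return sum(flow_mm) / sum(precip_mm)

def recession_constant(flow):
    """Recession signature: mean ratio Q(t+1)/Q(t) over strictly receding
    steps. A simple stand-in for the recession analyses described above."""
    ratios = [b / a for a, b in zip(flow, flow[1:]) if 0 < b < a]
    return sum(ratios) / len(ratios)

# Toy series: three recession steps of 0.8, one rain-driven rise, one more step.
flow = [10.0, 8.0, 6.4, 5.12, 9.0, 7.2]
k = recession_constant(flow)
```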

  4. Cosmological implications of a large complete quasar sample.

    Science.gov (United States)

    Segal, I E; Nicoll, J F

    1998-04-28

    Objective and reproducible determinations of the probabilistic significance levels of the deviations between theoretical cosmological prediction and direct model-independent observation are made for the Large Bright Quasar Sample [Foltz, C., Chaffee, F. H., Hewett, P. C., MacAlpine, G. M., Turnshek, D. A., et al. (1987) Astron. J. 94, 1423-1460]. The Expanding Universe model as represented by the Friedmann-Lemaître cosmology with parameters q₀ = 0, Λ = 0, denoted C1, and chronometric cosmology (no relevant adjustable parameters), denoted C2, are the cosmologies considered. The mean and the dispersion of the apparent magnitudes and the slope of the apparent magnitude-redshift relation are the directly observed statistics predicted. The C1 predictions of these cosmology-independent quantities are deviant by as much as 11σ from direct observation; none of the C2 predictions deviate by >2σ. The C1 deviations may be reconciled with theory by the hypothesis of quasar "evolution," which, however, appears incapable of being substantiated through direct observation. The excellent quantitative agreement of the C1 deviations with those predicted by C2 without adjustable parameters for the results of analysis predicated on C1 indicates that the evolution hypothesis may well be a theoretical artifact.

  5. Note on an Identity Between Two Unbiased Variance Estimators for the Grand Mean in a Simple Random Effects Model.

    Science.gov (United States)

    Levin, Bruce; Leu, Cheng-Shiun

    2013-01-01

    We demonstrate the algebraic equivalence of two unbiased variance estimators for the sample grand mean in a random sample of subjects from an infinite population where subjects provide repeated observations following a homoscedastic random effects model.
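The identity is easy to verify numerically for a balanced design: the estimate built from the sample variance of the subject means coincides exactly with the estimate built from ANOVA variance components. A sketch with m subjects and n repeats per subject (illustrative simulation, not the paper's notation):

```python
import random
import statistics

random.seed(0)
m, n = 12, 5  # m subjects, n repeated observations per subject
data = []
for _ in range(m):
    u = random.gauss(0.0, 2.0)                      # subject random effect
    data.append([u + random.gauss(0.0, 1.0) for _ in range(n)])

subj_means = [statistics.mean(row) for row in data]

# Estimator 1: sample variance of the m subject means, divided by m.
v1 = statistics.variance(subj_means) / m

# Estimator 2: via ANOVA variance components.
msw = statistics.mean(statistics.variance(row) for row in data)  # within MS
msb = n * statistics.variance(subj_means)                        # between MS
sigma2_a = (msb - msw) / n       # estimated subject-effect variance
sigma2_e = msw                   # estimated residual variance
v2 = (sigma2_a + sigma2_e / n) / m   # algebraically collapses to msb/(n*m)
```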

  6. Large scale sample management and data analysis via MIRACLE

    DEFF Research Database (Denmark)

    Block, Ines; List, Markus; Pedersen, Marlene Lemvig

    Reverse-phase protein arrays (RPPAs) allow sensitive quantification of relative protein abundance in thousands of samples in parallel. In the past years the technology advanced based on improved methods and protocols concerning sample preparation and printing, antibody selection, optimization of staining conditions and mode of signal analysis. However, the sample management and data analysis still pose challenges because of the high number of samples, sample dilutions, customized array patterns, and various programs necessary for array construction and data processing. We developed a comprehensive and user-friendly web application called MIRACLE (MIcroarray R-based Analysis of Complex Lysate Experiments), which bridges the gap between sample management and array analysis by conveniently keeping track of the sample information from lysate preparation, through array construction and signal ...

  7. Application of Singh et al., unbiased estimator in a dual to ratio-cum ...

    African Journals Online (AJOL)

    This paper applied Singh et al.'s unbiased estimator in a dual to ratio-cum-product estimator in sample surveys to a double sampling design. Its efficiency over the conventional biased double sampling design estimator was determined based on the conditions attached to its supremacy. Three different data sets were used to testify to ...

  8. Crowdsourcing for large-scale mosquito (Diptera: Culicidae) sampling

    Science.gov (United States)

    Sampling a cosmopolitan mosquito (Diptera: Culicidae) species throughout its range is logistically challenging and extremely resource intensive. Mosquito control programmes and regional networks operate at the local level and often conduct sampling activities across much of North America. A method f...

  9. A large-scale cryoelectronic system for biological sample banking

    Science.gov (United States)

    Shirley, Stephen G.; Durst, Christopher H. P.; Fuchs, Christian C.; Zimmermann, Heiko; Ihmig, Frank R.

    2009-11-01

    We describe a polymorphic electronic infrastructure for managing biological samples stored over liquid nitrogen. As part of this system we have developed new cryocontainers and carrier plates attached to Flash memory chips to have a redundant and portable set of data at each sample. Our experimental investigations show that basic Flash operation and endurance is adequate for the application down to liquid nitrogen temperatures. This identification technology can provide the best sample identification, documentation and tracking that brings added value to each sample. The first application of the system is in a worldwide collaborative research towards the production of an AIDS vaccine. The functionality and versatility of the system can lead to an essential optimization of sample and data exchange for global clinical studies.

  10. Associations between sociodemographic, sampling and health factors and various salivary cortisol indicators in a large sample without psychopathology

    NARCIS (Netherlands)

    Vreeburg, Sophie A.; Kruijtzer, Boudewijn P.; van Pelt, Johannes; van Dyck, Richard; DeRijk, Roel H.; Hoogendijk, Witte J. G.; Smit, Johannes H.; Zitman, Frans G.; Penninx, Brenda

    Background: Cortisol levels are increasingly often assessed in large-scale psychosomatic research. Although determinants of different salivary cortisol indicators have been described, they have not yet been systematically studied within the same study with a large sample size. Sociodemographic,

  11. Heritability of psoriasis in a large twin sample

    DEFF Research Database (Denmark)

    Lønnberg, Ann Sophie; Skov, Liselotte; Skytthe, A

    2013-01-01

    AIM: To study the concordance of psoriasis in a population-based twin sample. METHODS: Data on psoriasis in 10,725 twin pairs, 20-71 years of age, from the Danish Twin Registry was collected via a questionnaire survey. The concordance and heritability of psoriasis were estimated. RESULTS: In total...

  12. Hierarchical Cluster Analysis of Three-Dimensional Reconstructions of Unbiased Sampled Microglia Shows not Continuous Morphological Changes from Stage 1 to 2 after Multiple Dengue Infections in Callithrix penicillata

    Science.gov (United States)

    Diniz, Daniel G.; Silva, Geane O.; Naves, Thaís B.; Fernandes, Taiany N.; Araújo, Sanderson C.; Diniz, José A. P.; de Farias, Luis H. S.; Sosthenes, Marcia C. K.; Diniz, Cristovam G.; Anthony, Daniel C.; da Costa Vasconcelos, Pedro F.; Picanço Diniz, Cristovam W.

    2016-01-01

    It is known that microglial morphology and function are related, but few studies have explored the subtleties of microglial morphological changes in response to specific pathogens. In the present report we quantitated microglia morphological changes in a monkey model of dengue disease with virus CNS invasion. To mimic multiple infections that usually occur in endemic areas, where higher dengue infection incidence and abundant mosquito vectors carrying different serotypes coexist, subjects received once a week subcutaneous injections of DENV3 (genotype III)-infected culture supernatant followed 24 h later by an injection of anti-DENV2 antibody. Control animals received either weekly anti-DENV2 antibodies, or no injections. Brain sections were immunolabeled for DENV3 antigens and IBA-1. Random and systematic microglial samples were taken from the polymorphic layer of dentate gyrus for 3-D reconstructions, where we found intense immunostaining for TNFα and DENV3 virus antigens. We submitted all bi- or multimodal morphological parameters of microglia to hierarchical cluster analysis and found two major morphological phenotypes designated types I and II. Compared to type I (stage 1), type II microglia were more complex; displaying higher number of nodes, processes and trees and larger surface area and volumes (stage 2). Type II microglia were found only in infected monkeys, whereas type I microglia was found in both control and infected subjects. Hierarchical cluster analysis of morphological parameters of 3-D reconstructions of random and systematic selected samples in control and ADE dengue infected monkeys suggests that microglia morphological changes from stage 1 to stage 2 may not be continuous. PMID:27047345

  13. Genetic Influences on Pulmonary Function: A Large Sample Twin Study

    DEFF Research Database (Denmark)

    Ingebrigtsen, Truls S; Thomsen, Simon F; van der Sluis, Sophie

    2011-01-01

    Heritability of forced expiratory volume in one second (FEV(1)), forced vital capacity (FVC), and peak expiratory flow (PEF) has not been previously addressed in large twin studies. We evaluated the genetic contribution to individual differences observed in FEV(1), FVC, and PEF using data from the largest population-based twin study on spirometry. Specially trained lay interviewers with previous experience in spirometric measurements tested 4,314 Danish twins (individuals), 46-68 years of age, in their homes using a hand-held spirometer, and their flow-volume curves were evaluated. Modern variance ...
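For intuition, the classical Falconer approach estimates heritability directly from MZ and DZ twin correlations; the study itself fits full variance-component models, so this is only a back-of-envelope sketch with hypothetical correlation values:

```python
def falconer_h2(r_mz, r_dz):
    """Falconer's estimate of heritability from twin correlations:
    h2 = 2 * (r_MZ - r_DZ). MZ twins share all additive genetic variance,
    DZ twins half of it, so the doubled difference isolates the genetic
    share of the phenotypic variance (under the classic twin assumptions)."""
    return 2.0 * (r_mz - r_dz)

h2 = falconer_h2(0.8, 0.45)   # hypothetical MZ and DZ correlations
```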

  14. Scanning tunneling spectroscopy under large current flow through the sample.

    Science.gov (United States)

    Maldonado, A; Guillamón, I; Suderow, H; Vieira, S

    2011-07-01

    We describe a method to make scanning tunneling microscopy/spectroscopy imaging at very low temperatures while driving a constant electric current up to some tens of mA through the sample. It gives a new local probe, which we term current driven scanning tunneling microscopy/spectroscopy. We show spectroscopic and topographic measurements under the application of a current in superconducting Al and NbSe(2) at 100 mK. Perspective of applications of this local imaging method includes local vortex motion experiments, and Doppler shift local density of states studies.

  15. Scalability on LHS (Latin Hypercube Sampling) samples for use in uncertainty analysis of large numerical models

    International Nuclear Information System (INIS)

    Baron, Jorge H.; Nunez Mac Leod, J.E.

    2000-01-01

    The present paper deals with the utilization of advanced sampling statistical methods to perform uncertainty and sensitivity analysis on numerical models. Such models may represent physical phenomena, logical structures (such as boolean expressions) or other systems, and various of their intrinsic parameters and/or input variables are usually treated as random variables simultaneously. In the present paper a simple method to scale up Latin Hypercube Sampling (LHS) samples is presented, starting with a small sample and duplicating its size at each step, making it possible to re-use the numerical model results already obtained with the smaller sample. The method does not distort the statistical properties of the random variables and does not add any bias to the samples. The result is that a significant reduction in the running time of numerical models can be achieved (by re-using the previously run samples), while keeping all the advantages of LHS, until an acceptable representation level is achieved in the output variables. (author)
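The doubling idea can be sketched as follows: per dimension, each existing point already occupies one of the 2N refined strata, so the N new points are assigned, one each, to the empty strata. This is a generic illustration with hypothetical function names, not the paper's implementation:

```python
import random

def lhs(n, d, rng):
    """Standard Latin Hypercube Sample of n points in [0,1)^d:
    per dimension, exactly one point falls in each stratum of width 1/n."""
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(p + rng.random()) / n for p in perm])
    return [list(pt) for pt in zip(*cols)]

def double_lhs(points, rng):
    """Scale an LHS up to twice its size, re-using the existing points.
    Per dimension, the n half-width strata not hit by an old point are
    randomly assigned (one each) to the n new points."""
    n = len(points)
    d = len(points[0])
    new_cols = []
    for j in range(d):
        occupied = {int(p[j] * 2 * n) for p in points}
        empty = [s for s in range(2 * n) if s not in occupied]
        rng.shuffle(empty)
        new_cols.append([(s + rng.random()) / (2 * n) for s in empty])
    return points + [list(pt) for pt in zip(*new_cols)]

rng = random.Random(42)
sample = lhs(8, 2, rng)
sample = double_lhs(sample, rng)   # 16 points, still a Latin hypercube
```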

  16. Personalized recommendation based on unbiased consistence

    Science.gov (United States)

    Zhu, Xuzhen; Tian, Hui; Zhang, Ping; Hu, Zheng; Zhou, Tao

    2015-08-01

Recently, in physical dynamics, mass-diffusion-based recommendation algorithms on bipartite networks have provided an efficient solution by automatically pushing possibly relevant items to users according to their past preferences. However, traditional mass-diffusion-based algorithms focus only on unidirectional mass diffusion from objects already collected to those to be recommended, resulting in a biased causal similarity estimation and mediocre performance. In this letter, we argue that in many cases a user's interests are stable, and thus the bidirectional mass diffusion abilities, whether originating from objects already collected or from those to be recommended, should be consistently powerful, showing unbiased consistence. We further propose a consistence-based mass diffusion algorithm via bidirectional diffusion against biased causality, outperforming state-of-the-art recommendation algorithms on disparate real data sets, including Netflix, MovieLens, Amazon and Rate Your Music.
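
For context, a minimal sketch of the standard unidirectional mass-diffusion (ProbS) scoring that the letter builds on; the toy adjacency matrix is invented and the bidirectional variant's details are not from the paper.

```python
import numpy as np

# Toy user-item adjacency: rows = users, columns = items (1 = collected).
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)

def mass_diffusion_scores(A, u):
    """One round of mass diffusion for user u.

    A unit of resource on each item collected by u spreads evenly to the
    users connected to that item, then back to the items those users
    collected; items not yet collected by u are ranked by the resource
    they receive.
    """
    k_item = A.sum(axis=0)          # item degrees
    k_user = A.sum(axis=1)          # user degrees
    f0 = A[u].copy()                # initial resource on u's items
    to_users = A @ (f0 / k_item)    # items -> users
    f1 = A.T @ (to_users / k_user)  # users -> items
    f1[A[u] > 0] = -np.inf          # mask already-collected items
    return f1

scores = mass_diffusion_scores(A, 0)
best = int(np.argmax(scores))       # top recommendation for user 0
```

The bidirectional scheme of the letter would, roughly, also diffuse from candidate items back toward the collected set and combine both directions.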

  17. Black-Box Search by Unbiased Variation

    DEFF Research Database (Denmark)

    Lehre, Per Kristian; Witt, Carsten

    2012-01-01

The complexity theory for black-box algorithms, introduced by Droste, Jansen, and Wegener (Theory Comput. Syst. 39:525–544, 2006), describes common limits on the efficiency of a broad class of randomised search heuristics. There is an obvious trade-off between the generality of the black-box model … and the strength of the bounds that can be proven in such a model. In particular, the original black-box model provides for well-known benchmark problems relatively small lower bounds, which seem unrealistic in certain cases and are typically not met by popular search heuristics. In this paper, we introduce a more … restricted black-box model for optimisation of pseudo-Boolean functions which we claim captures the working principles of many randomised search heuristics including simulated annealing, evolutionary algorithms, randomised local search, and others. The key concept worked out is an unbiased variation operator …

  18. Unbiased stereologic techniques for practical use in diagnostic histopathology

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt

    1995-01-01

Grading of malignancy by the examination of morphologic and cytologic details in histologic sections from malignant neoplasms is based exclusively on qualitative features, associated with significant subjectivity, and thus rather poor reproducibility. The traditional way of malignancy grading may … by introducing quantitative techniques in the histopathologic discipline of malignancy grading. Unbiased stereologic methods, especially those based on measurements of nuclear three-dimensional mean size, have during the last decade proved their value in this regard. In this survey, the methods are reviewed regarding … the basic technique involved, sampling, efficiency, and reproducibility. Various types of cancers, where stereologic grading of malignancy has been used, are reviewed and discussed with regard to the development of a new objective and reproducible basis for carrying out prognosis-related malignancy grading …

  19. Field sampling, preparation procedure and plutonium analyses of large freshwater samples

    International Nuclear Information System (INIS)

    Straelberg, E.; Bjerk, T.O.; Oestmo, K.; Brittain, J.E.

    2002-01-01

    This work is part of an investigation of the mobility of plutonium in freshwater systems containing humic substances. A well-defined bog-stream system located in the catchment area of a subalpine lake, Oevre Heimdalsvatn, Norway, is being studied. During the summer of 1999, six water samples were collected from the tributary stream Lektorbekken and the lake itself. However, the analyses showed that the plutonium concentration was below the detection limit in all the samples. Therefore renewed sampling at the same sites was carried out in August 2000. The results so far are in agreement with previous analyses from the Heimdalen area. However, 100 times higher concentrations are found in the lowlands in the eastern part of Norway. The reason for this is not understood, but may be caused by differences in the concentrations of humic substances and/or the fact that the mountain areas are covered with snow for a longer period of time every year. (LN)

  20. Mutually unbiased bases and semi-definite programming

    Energy Technology Data Exchange (ETDEWEB)

    Brierley, Stephen; Weigert, Stefan, E-mail: steve.brierley@ulb.ac.be, E-mail: stefan.weigert@york.ac.uk

    2010-11-01

    A complex Hilbert space of dimension six supports at least three but not more than seven mutually unbiased bases. Two computer-aided analytical methods to tighten these bounds are reviewed, based on a discretization of parameter space and on Groebner bases. A third algorithmic approach is presented: the non-existence of more than three mutually unbiased bases in composite dimensions can be decided by a global optimization method known as semidefinite programming. The method is used to confirm that the spectral matrix cannot be part of a complete set of seven mutually unbiased bases in dimension six.
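
The defining property at stake here is easy to check numerically: two orthonormal bases are mutually unbiased when every cross overlap satisfies |⟨e_i|f_j⟩|² = 1/d. The computational/Fourier pair below is a standard example in any dimension, used purely for illustration (it is not part of this record).

```python
import numpy as np

d = 6
# Computational basis and the discrete Fourier basis in dimension six,
# both stored as column matrices.
I = np.eye(d)
F = np.array([[np.exp(2j * np.pi * j * k / d) for k in range(d)]
              for j in range(d)]) / np.sqrt(d)

def mutually_unbiased(B1, B2, tol=1e-12):
    """True iff every overlap satisfies |<b1_i | b2_j>|^2 = 1/d."""
    overlaps = np.abs(B1.conj().T @ B2) ** 2
    return np.allclose(overlaps, 1.0 / B1.shape[0], atol=tol)

assert mutually_unbiased(I, F)      # the standard unbiased pair
assert not mutually_unbiased(I, I)  # a basis is never unbiased with itself
```

The hard open question the abstract addresses is whether seven such pairwise-unbiased bases coexist in dimension six, not whether any single pair does.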

  1. Mutually unbiased bases and semi-definite programming

    International Nuclear Information System (INIS)

    Brierley, Stephen; Weigert, Stefan

    2010-01-01

    A complex Hilbert space of dimension six supports at least three but not more than seven mutually unbiased bases. Two computer-aided analytical methods to tighten these bounds are reviewed, based on a discretization of parameter space and on Groebner bases. A third algorithmic approach is presented: the non-existence of more than three mutually unbiased bases in composite dimensions can be decided by a global optimization method known as semidefinite programming. The method is used to confirm that the spectral matrix cannot be part of a complete set of seven mutually unbiased bases in dimension six.

  2. Sample preparation method for ICP-MS measurement of 99Tc in a large amount of environmental samples

    International Nuclear Information System (INIS)

    Kondo, M.; Seki, R.

    2002-01-01

Sample preparation for the measurement of 99Tc in large amounts of soil and water samples by ICP-MS has been developed using 95mTc as a yield tracer. This method is based on the conventional method for small amounts of soil samples using incineration, acid digestion, extraction chromatography (TEVA resin) and ICP-MS measurement. Preliminary concentration of Tc by co-precipitation with ferric oxide has been introduced. The matrix materials in a large amount of sample were removed more effectively than in the previous method, while keeping a high recovery of Tc. The recovery of Tc was 70-80% for 100 g soil samples and 60-70% for 500 g soil and 500 L water samples. The detection limit of this method was evaluated as 0.054 mBq/kg for 500 g soil and 0.032 μBq/L for 500 L water. The determined value of 99Tc in IAEA-375 (a soil sample collected near the Chernobyl nuclear reactor) was 0.25 ± 0.02 Bq/kg. (author)

  3. 105-DR Large sodium fire facility soil sampling data evaluation report

    International Nuclear Information System (INIS)

    Adler, J.G.

    1996-01-01

    This report evaluates the soil sampling activities, soil sample analysis, and soil sample data associated with the closure activities at the 105-DR Large Sodium Fire Facility. The evaluation compares these activities to the regulatory requirements for meeting clean closure. The report concludes that there is no soil contamination from the waste treatment activities

  4. Unbiased contaminant removal for 3D galaxy power spectrum measurements

    Science.gov (United States)

    Kalus, B.; Percival, W. J.; Bacon, D. J.; Samushia, L.

    2016-11-01

We assess and develop techniques to remove contaminants when calculating the 3D galaxy power spectrum. We separate the process into three stages: (I) removing the contaminant signal, (II) estimating the uncontaminated cosmological power spectrum and (III) debiasing the resulting estimates. For (I), we show that removing the best-fitting contaminant (mode subtraction) and setting the contaminated components of the covariance to be infinite (mode deprojection) are mathematically equivalent. For (II), performing a quadratic maximum likelihood (QML) estimate after mode deprojection gives an optimal unbiased solution, although it requires the manipulation of large N_mode^2 matrices (N_mode being the total number of modes), which is infeasible for recent 3D galaxy surveys. Measuring a binned average of the modes for (II) as proposed by Feldman, Kaiser & Peacock (FKP) is faster and simpler, but is sub-optimal and gives rise to a biased solution. We present a method to debias the resulting FKP measurements that does not require any large matrix calculations. We argue that the sub-optimality of the FKP estimator compared with the QML estimator, caused by contaminants, is less severe than that commonly ignored due to the survey window.
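
The stated equivalence of mode subtraction and mode deprojection in stage (I) can be illustrated numerically. This is a toy sketch with an assumed diagonal covariance and a single contaminant template, not the authors' pipeline; the "infinite" template variance is approximated by a large finite value.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
C = np.diag(rng.uniform(0.5, 2.0, n))  # toy diagonal data covariance
t = rng.normal(size=n)                 # contaminant template
x = rng.normal(size=n) + 3.0 * t       # data with an unknown amount of contaminant

Ci = np.linalg.inv(C)

# Mode subtraction: remove the best-fitting amount of the template, then weight.
alpha = (t @ Ci @ x) / (t @ Ci @ t)
w_sub = Ci @ (x - alpha * t)

# Mode deprojection: give the template (numerically) infinite variance in C.
sigma2 = 1e9                           # stands in for the sigma^2 -> infinity limit
Cd = C + sigma2 * np.outer(t, t)
w_dep = np.linalg.solve(Cd, x)

# The two weighted data vectors agree, so any quadratic estimator built
# from them yields identical power spectrum estimates.
assert np.allclose(w_sub, w_dep, atol=1e-3)
```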

  5. Quantifying high dimensional entanglement with two mutually unbiased bases

    Directory of Open Access Journals (Sweden)

    Paul Erker

    2017-07-01

We derive a framework for quantifying entanglement in multipartite and high dimensional systems using only correlations in two unbiased bases. We furthermore develop such bounds for cases where the second basis is not characterized beyond being unbiased, thus enabling entanglement quantification with minimal assumptions. Finally, we show that it is feasible to experimentally implement our method with readily available equipment and even conservative estimates of physical parameters.

  6. Quantum process reconstruction based on mutually unbiased basis

    International Nuclear Information System (INIS)

    Fernandez-Perez, A.; Saavedra, C.; Klimov, A. B.

    2011-01-01

    We study a quantum process reconstruction based on the use of mutually unbiased projectors (MUB projectors) as input states for a D-dimensional quantum system, with D being a power of a prime number. This approach connects the results of quantum-state tomography using mutually unbiased bases with the coefficients of a quantum process, expanded in terms of MUB projectors. We also study the performance of the reconstruction scheme against random errors when measuring probabilities at the MUB projectors.

  7. Large Sample Neutron Activation Analysis: A Challenge in Cultural Heritage Studies

    International Nuclear Information System (INIS)

    Stamatelatos, I.E.; Tzika, F.

    2007-01-01

Large sample neutron activation analysis complements and significantly extends the analytical tools available for cultural heritage and authentication studies, providing unique applications: non-destructive, multi-element analysis of materials that are too precious to damage for sampling purposes, representative sampling of heterogeneous materials, or even analysis of whole objects. In this work, correction factors for neutron self-shielding, gamma-ray attenuation and volume distribution of the activity in large volume samples composed of iron and ceramic material were derived. Moreover, the effect of inhomogeneity on the accuracy of the technique was examined.
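
As a hedged illustration of one such correction, the usual slab-geometry gamma-ray self-attenuation factor is sketched below. The attenuation coefficient and thickness are invented numbers; real LSNAA corrections, like those derived in this work, depend on the actual sample geometry and composition.

```python
import math

def slab_self_attenuation(mu, t):
    """Average gamma-ray transmission for an idealized slab sample of
    thickness t (cm) with linear attenuation coefficient mu (1/cm),
    viewed face-on: f = (1 - exp(-mu*t)) / (mu*t).
    The measured count rate is divided by f to correct the activity."""
    x = mu * t
    if x < 1e-9:           # thin-sample limit: no attenuation
        return 1.0
    return (1.0 - math.exp(-x)) / x

# E.g. a 5 cm thick ceramic-like sample with an assumed mu = 0.2 /cm:
f = slab_self_attenuation(0.2, 5.0)   # about 0.63
counts_corrected = 1000.0 / f         # correct a measured count rate
```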

  8. Revisiting AFLP fingerprinting for an unbiased assessment of genetic structure and differentiation of taurine and zebu cattle

    Science.gov (United States)

    2014-01-01

Background: Descendants of the extinct aurochs (Bos primigenius), taurine (Bos taurus) and zebu cattle (Bos indicus) were domesticated 10,000 years ago in Southwestern and Southern Asia, respectively, and colonized the world undergoing complex events of admixture and selection. Molecular data, in particular genome-wide single nucleotide polymorphism (SNP) markers, can complement historic and archaeological records to elucidate these past events. However, SNP ascertainment in cattle has been optimized for taurine breeds, imposing limitations on the study of diversity in zebu cattle. As amplified fragment length polymorphism (AFLP) markers are discovered and genotyped as the samples are assayed, this type of marker is free of ascertainment bias. In order to obtain unbiased assessments of genetic differentiation and structure in taurine and zebu cattle, we analyzed a dataset of 135 AFLP markers in 1,593 samples from 13 zebu and 58 taurine breeds, representing nine continental areas. Results: We found a geographical pattern of expected heterozygosity in European taurine breeds decreasing with the distance from the domestication centre, arguing against a large-scale introgression from European or African aurochs. Zebu cattle were found to be at least as diverse as taurine cattle. Western African zebu cattle were found to have diverged more from Indian zebu than South American zebu. Model-based clustering and ancestry-informative marker analyses suggested that this is due to taurine introgression. Although a large part of South American zebu cattle also descend from taurine cows, we did not detect significant levels of taurine ancestry in these breeds, probably because of systematic backcrossing with zebu bulls. Furthermore, limited zebu introgression was found in Podolian taurine breeds in Italy. Conclusions: The assessment of cattle diversity reported here contributes an unbiased global view of genetic differentiation and structure in taurine and zebu cattle.

  9. Radioimmunoassay of h-TSH - methodological suggestions for dealing with medium to large numbers of samples

    International Nuclear Information System (INIS)

    Mahlstedt, J.

    1977-01-01

The article deals with practical aspects of establishing a TSH-RIA for patients, with particular regard to predetermined quality criteria. Methodological suggestions are made for medium to large numbers of samples, with the aim of reducing monotonous, precision-critical working steps by means of simple aids. The required quality criteria are well met, while the test procedure is well adapted to the rhythm of work and may be carried out without loss of precision even with large numbers of samples. (orig.) [de]

  10. Uncertainty budget in internal monostandard NAA for small and large size samples analysis

    International Nuclear Information System (INIS)

    Dasari, K.B.; Acharya, R.

    2014-01-01

Evaluation of the total uncertainty budget of a determined concentration value is important under a quality assurance programme. Concentration calculations in NAA are carried out by relative NAA or by the k0-based internal monostandard NAA (IM-NAA) method. The IM-NAA method has been used for small and large sample analysis of clay potteries. An attempt was made to identify the uncertainty components in IM-NAA, and the uncertainty budget for La in both small and large size samples has been evaluated and compared. (author)

  11. Absolute activity determinations on large volume geological samples independent of self-absorption effects

    International Nuclear Information System (INIS)

    Wilson, W.E.

    1980-01-01

    This paper describes a method for measuring the absolute activity of large volume samples by γ-spectroscopy independent of self-absorption effects using Ge detectors. The method yields accurate matrix independent results at the expense of replicative counting of the unknown sample. (orig./HP)

  12. Utilization of AHWR critical facility for research and development work on large sample NAA

    International Nuclear Information System (INIS)

    Acharya, R.; Dasari, K.B.; Pujari, P.K.; Swain, K.K.; Reddy, A.V.R.; Verma, S.K.; De, S.K.

    2014-01-01

The graphite reflector position of the AHWR critical facility (CF) was utilized for the analysis of large size (g-kg scale) samples using internal monostandard neutron activation analysis (IM-NAA). The reactor position was characterized by the cadmium ratio method, using an In monitor for the total flux and the sub-cadmium to epithermal flux ratio (f). Large sample neutron activation analysis (LSNAA) work was carried out for samples of stainless steel, ancient and new clay potteries, and dross. Large as well as non-standard geometry samples (1 g - 0.5 kg) were irradiated. Radioactive assay was carried out using high resolution gamma ray spectrometry. Concentration ratios obtained by IM-NAA were used for a provenance study of 30 clay potteries obtained from excavated Buddhist sites of AP, India. Concentrations of Au and Ag were determined in three large, inhomogeneous samples of dross. An X-Z rotary scanning unit has been installed for counting large and inhomogeneous samples. (author)

  13. Validation Of Intermediate Large Sample Analysis (With Sizes Up to 100 G) and Associated Facility Improvement

    International Nuclear Information System (INIS)

    Bode, P.; Koster-Ammerlaan, M.J.J.

    2018-01-01

Pragmatic rather than physical correction factors for neutron and gamma-ray shielding were studied for samples of intermediate size, i.e. in the 10-100 gram range. It was found that for most biological and geological materials, the neutron self-shielding is less than 5% and the gamma-ray self-attenuation can easily be estimated. A trueness control material of 1 kg size was made from leftovers of materials used in laboratory intercomparisons. A design study for a large sample pool-side facility, handling plate-type volumes, had to be stopped because of a reduction in the human resources available for this CRP. The large sample NAA facilities were made available to guest scientists from Greece and Brazil. The laboratory for neutron activation analysis participated in the world's first laboratory intercomparison utilizing large samples. (author)

  14. 105-DR Large Sodium Fire Facility decontamination, sampling, and analysis plan

    International Nuclear Information System (INIS)

    Knaus, Z.C.

    1995-01-01

This is the decontamination, sampling, and analysis plan for the closure activities at the 105-DR Large Sodium Fire Facility on the Hanford Reservation. This document supports the 105-DR Large Sodium Fire Facility Closure Plan, DOE-RL-90-25. The 105-DR LSFF, which operated from about 1972 to 1986, was a research laboratory that occupied the former ventilation supply room on the southwest side of the 105-DR Reactor facility in the 100-D Area of the Hanford Site. The LSFF was established to investigate fire fighting and safety associated with alkali metal fires in liquid metal fast breeder reactor facilities. The decontamination, sampling, and analysis plan identifies the decontamination procedures, sampling locations, any special handling requirements, quality control samples, required chemical analyses, and data validation needed to meet the requirements of the 105-DR Large Sodium Fire Facility Closure Plan in compliance with the Resource Conservation and Recovery Act.

  15. Evaluation of environmental sampling methods for detection of Salmonella enterica in a large animal veterinary hospital.

    Science.gov (United States)

    Goeman, Valerie R; Tinkler, Stacy H; Hammac, G Kenitra; Ruple, Audrey

    2018-04-01

    Environmental surveillance for Salmonella enterica can be used for early detection of contamination; thus routine sampling is an integral component of infection control programs in hospital environments. At the Purdue University Veterinary Teaching Hospital (PUVTH), the technique regularly employed in the large animal hospital for sample collection uses sterile gauze sponges for environmental sampling, which has proven labor-intensive and time-consuming. Alternative sampling methods use Swiffer brand electrostatic wipes for environmental sample collection, which are reportedly effective and efficient. It was hypothesized that use of Swiffer wipes for sample collection would be more efficient and less costly than the use of gauze sponges. A head-to-head comparison between the 2 sampling methods was conducted in the PUVTH large animal hospital and relative agreement, cost-effectiveness, and sampling efficiency were compared. There was fair agreement in culture results between the 2 sampling methods, but Swiffer wipes required less time and less physical effort to collect samples and were more cost-effective.
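
"Fair agreement" between two sampling methods is typically quantified with Cohen's kappa. A minimal sketch with hypothetical paired culture results (the numbers below are invented, not the PUVTH data):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary raters, e.g. Salmonella culture
    positive (1) / negative (0) from paired gauze vs. wipe samples."""
    assert len(a) == len(b)
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    pa = sum(a) / n                             # positive rate, method A
    pb = sum(b) / n                             # positive rate, method B
    pe = pa * pb + (1 - pa) * (1 - pb)          # agreement expected by chance
    return (po - pe) / (1 - pe)

# Hypothetical paired culture results from ten sampling sites:
gauze = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]
wipes = [1, 0, 0, 0, 0, 1, 1, 0, 0, 0]
kappa = cohens_kappa(gauze, wipes)   # ~0.52, "moderate" on the Landis-Koch scale
```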

  16. An open-flow pulse ionization chamber for alpha spectrometry of large-area samples

    International Nuclear Information System (INIS)

    Johansson, L.; Roos, B.; Samuelsson, C.

    1992-01-01

The presented open-flow pulse ionization chamber was developed to make alpha spectrometry of large-area surfaces easy. One side of the chamber is left open, where the sample is to be placed. The sample acts as a chamber wall and thereby defines the detector volume. The sample area can be as large as 400 cm2. To prevent air from entering the volume, there is a constant gas flow through the detector, entering at the bottom of the chamber and leaking out at the sides of the sample. The method yields good energy resolution and has considerable applicability in retrospective radon research. The alpha spectra obtained in the retrospective measurements originate from 210Po, built up in the sample from radon daughters recoiled into the glass surface. (au)

  17. Unbiased metal oxide semiconductor ionising radiation dosemeter

    International Nuclear Information System (INIS)

    Kumurdjian, N.; Sarrabayrouse, G.J.

    1995-01-01

To assess the application of MOS devices as low dose rate dosemeters, sensitivity is the major factor, although few studies have been performed on that subject. It is studied here, together with the thermal stability and linearity of the response curve. Other advantages are noted, such as a large measurable dose range, low cost, small size, and the possibility of integration. (D.L.)

  18. Unbiased diffusion of Brownian particles on disordered correlated potentials

    International Nuclear Information System (INIS)

    Salgado-Garcia, Raúl; Maldonado, Cesar

    2015-01-01

In this work we study the diffusion of non-interacting overdamped particles, moving on unbiased disordered correlated potentials, subjected to Gaussian white noise. We obtain an exact expression for the diffusion coefficient which allows us to prove that the unbiased diffusion of overdamped particles on a random polymer does not depend on the correlations of the disordered potentials. This universal behavior of the unbiased diffusivity is a direct consequence of the validity of the Einstein relation and the decay of correlations of the random polymer. We test the independence on correlations of the diffusion coefficient for correlated polymers produced by two different stochastic processes, a one-step Markov chain and the expansion-modification system. Within the accuracy of our simulations, we found that the numerically obtained diffusion coefficients for these systems agree with the analytically calculated ones, confirming our predictions. (paper)

  19. Relationship of fish indices with sampling effort and land use change in a large Mediterranean river.

    Science.gov (United States)

    Almeida, David; Alcaraz-Hernández, Juan Diego; Merciai, Roberto; Benejam, Lluís; García-Berthou, Emili

    2017-12-15

Fish are invaluable ecological indicators in freshwater ecosystems but have been less used for ecological assessments in large Mediterranean rivers. We evaluated the effects of sampling effort (transect length) on fish metrics, such as species richness and two fish indices (the new European Fish Index EFI+ and a regional index, IBICAT2b), in the mainstem of a large Mediterranean river. For this purpose, we sampled by boat electrofishing five sites, each with 10 consecutive transects corresponding to a total length of 20 times the river width (the European standard required by the Water Framework Directive), and we also analysed the effect of sampling area on previous surveys. Species accumulation curves and richness extrapolation estimates generally suggested that species richness was reasonably estimated with transect lengths of 10 times the river width or less. The EFI+ index was significantly affected by sampling area, both for our samplings and for previous data. Surprisingly, EFI+ values in general decreased with increasing sampling area, despite the higher observed richness, likely because the expected values of the metrics were higher. By contrast, the regional fish index was not dependent on sampling area, likely because it does not use a predictive model. Both fish indices, but particularly the EFI+, decreased with lower forest cover percentage, even within the smaller disturbance gradient in the river type studied (the mainstem of a large Mediterranean river, where environmental pressures are more general). Although the two fish-based indices are very different in terms of their development, methodology and metrics used, they were significantly correlated and provided a similar assessment of ecological status. Our results reinforce the importance of standardization of sampling methods for bioassessment and suggest that predictive models that use sampling area as a predictor might be more affected by differences in sampling effort than simpler biotic indices.
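
A species accumulation curve of the kind used above can be computed by averaging richness over random orderings of the transects. The per-transect species lists below are invented for illustration:

```python
import random

def accumulation_curve(transects, n_perm=200, seed=1):
    """Mean species accumulation curve: average richness after the first
    k transects, over n_perm random orderings of the transect list."""
    rng = random.Random(seed)
    m = len(transects)
    totals = [0] * m
    for _ in range(n_perm):
        order = transects[:]
        rng.shuffle(order)
        seen = set()
        for k, t in enumerate(order):
            seen |= set(t)
            totals[k] += len(seen)
    return [s / n_perm for s in totals]

# Hypothetical per-transect species sets from one site:
transects = [{"barbel", "chub"}, {"chub"}, {"chub", "eel"},
             {"barbel"}, {"chub", "carp"}]
curve = accumulation_curve(transects)
# The curve rises toward the 4 observed species; a plateau well before
# the last transect suggests the sampling effort could be reduced.
```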

  20. Rapid separation method for {sup 237}Np and Pu isotopes in large soil samples

    Energy Technology Data Exchange (ETDEWEB)

    Maxwell, Sherrod L., E-mail: sherrod.maxwell@srs.go [Savannah River Nuclear Solutions, LLC, Building 735-B, Aiken, SC 29808 (United States); Culligan, Brian K.; Noyes, Gary W. [Savannah River Nuclear Solutions, LLC, Building 735-B, Aiken, SC 29808 (United States)

    2011-07-15

    A new rapid method for the determination of {sup 237}Np and Pu isotopes in soil and sediment samples has been developed at the Savannah River Site Environmental Lab (Aiken, SC, USA) that can be used for large soil samples. The new soil method utilizes an acid leaching method, iron/titanium hydroxide precipitation, a lanthanum fluoride soil matrix removal step, and a rapid column separation process with TEVA Resin. The large soil matrix is removed easily and rapidly using these two simple precipitations with high chemical recoveries and effective removal of interferences. Vacuum box technology and rapid flow rates are used to reduce analytical time.

  1. The problem of large samples. An activation analysis study of electronic waste material

    International Nuclear Information System (INIS)

    Segebade, C.; Goerner, W.; Bode, P.

    2007-01-01

Large-volume instrumental photon activation analysis (IPAA) was used for the investigation of shredded electronic waste material. Sample masses from 1 to 150 grams were analyzed to estimate the minimum sample size required to achieve a representativeness of the results that is satisfactory for a defined investigation task. Furthermore, the influence of irradiation and measurement parameters on the quality of the analytical results was studied. Finally, the analytical data obtained from IPAA and from instrumental neutron activation analysis (INAA), both carried out in a large-volume mode, were compared. Only some of the values were found to be in satisfactory agreement. (author)

  2. Sampling strategy for a large scale indoor radiation survey - a pilot project

    International Nuclear Information System (INIS)

    Strand, T.; Stranden, E.

    1986-01-01

    Optimisation of a stratified random sampling strategy for large scale indoor radiation surveys is discussed. It is based on the results from a small scale pilot project where variances in dose rates within different categories of houses were assessed. By selecting a predetermined precision level for the mean dose rate in a given region, the number of measurements needed can be optimised. The results of a pilot project in Norway are presented together with the development of the final sampling strategy for a planned large scale survey. (author)
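
Optimising a stratified design of this kind typically reduces to Neyman allocation, where the within-stratum variances from the pilot project set the per-stratum sample sizes. The house categories and dose-rate standard deviations below are hypothetical, not the Norwegian survey data:

```python
def neyman_allocation(n_total, sizes, stds):
    """Neyman allocation: n_h proportional to N_h * S_h, which minimizes
    the variance of the stratified mean for a fixed total sample size.
    (Simple rounding; small adjustments may be needed to hit n_total.)"""
    weights = [N * s for N, s in zip(sizes, stds)]
    total = sum(weights)
    return [round(n_total * w / total) for w in weights]

# Hypothetical house categories with pilot-study dose-rate SDs (nGy/h):
sizes = [5000, 3000, 2000]   # houses per category
stds = [20.0, 35.0, 50.0]    # within-category standard deviations
n_per_stratum = neyman_allocation(400, sizes, stds)
```

The high-variance categories get proportionally more measurements, which is exactly how pilot-project variances feed into the final sampling strategy.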

  3. Unbiased quantitative testing of conventional orthodontic beliefs.

    Science.gov (United States)

    Baumrind, S

    1998-03-01

This study used a preexisting database to test in hypothesis form the appropriateness of some common orthodontic beliefs concerning upper first molar displacement and changes in facial morphology associated with conventional full bonded/banded treatment in growing subjects. In an initial pass, the author used data from a stratified random sample of 48 subjects drawn retrospectively from the practice of a single, experienced orthodontist. This sample consisted of 4 subgroups of 12 subjects each: Class I nonextraction, Class I extraction, Class II nonextraction, and Class II extraction. The findings indicate that, relative to the facial profile, chin point did not, on average, displace anteriorly during treatment, either overall or in any subgroup. Relative to the facial profile, Point A became significantly less prominent during treatment, both overall and in each subgroup. The best estimate of the mean displacement of the upper molar cusp relative to superimposition on Anterior Cranial Base was in the mesial direction in each of the four subgroups. In only one extraction subject out of 24 did the cusp appear to be displaced distally. Mesial molar cusp displacement was significantly greater in the Class II extraction subgroup than in the Class II nonextraction subgroup. Relative to superimposition on anatomical "best fit" of maxillary structures, the findings for molar cusp displacement were similar, but even more dramatic. Mean mesial migration was highly significant in both the Class II nonextraction and Class II extraction subgroups. In no subject in the entire sample was distal displacement noted relative to this superimposition. Mean increase in anterior Total Face Height was significantly greater in the Class II extraction subgroup than in the Class II nonextraction subgroup. (This finding was contrary to the author's original expectation.) The generalizability of the findings from the initial pass to other treated growing subjects was then assessed by

  4. Procedure for plutonium analysis of large (100g) soil and sediment samples

    International Nuclear Information System (INIS)

    Meadows, J.W.T.; Schweiger, J.S.; Mendoza, B.; Stone, R.

    1975-01-01

A method for the complete dissolution of large soil or sediment samples is described. This method is in routine use at Lawrence Livermore Laboratory for the analysis of fallout levels of Pu in soils and sediments. Intercomparison with partial dissolution (leach) techniques shows the complete dissolution method to be superior for the determination of plutonium in a wide variety of environmental samples. (author)

  5. UNBIASED INCLINATION DISTRIBUTIONS FOR OBJECTS IN THE KUIPER BELT

    International Nuclear Information System (INIS)

    Gulbis, A. A. S.; Elliot, J. L.; Adams, E. R.; Benecchi, S. D.; Buie, M. W.; Trilling, D. E.; Wasserman, L. H.

    2010-01-01

Using data from the Deep Ecliptic Survey (DES), we investigate the inclination distributions of objects in the Kuiper Belt. We present a derivation for observational bias removal and use this procedure to generate unbiased inclination distributions for Kuiper Belt objects (KBOs) of different DES dynamical classes, with respect to the Kuiper Belt plane. Consistent with previous results, we find that the inclination distribution for all DES KBOs is well fit by the sum of two Gaussians, or a Gaussian plus a generalized Lorentzian, multiplied by sin i. Approximately 80% of KBOs are in the high-inclination grouping. We find that Classical object inclinations are well fit by sin i multiplied by the sum of two Gaussians, with roughly even distribution between Gaussians of widths 2.0 (+0.6/−0.5)° and 8.1 (+2.6/−2.1)°. Objects in different resonances exhibit different inclination distributions. The inclinations of Scattered objects are best matched by sin i multiplied by a single Gaussian that is centered at 19.1 (+3.9/−3.6)° with a width of 6.9 (+4.1/−2.7)°. Centaur inclinations peak just below 20°, with one exceptionally high-inclination object near 80°. The currently observed inclination distribution of the Centaurs is not dissimilar to that of the Scattered Extended KBOs and Jupiter-family comets, but is significantly different from the Classical and Resonant KBOs. While the sample sizes of some dynamical classes are still small, these results should begin to serve as a critical diagnostic for models of solar system evolution.
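
The fitted form, sin i times a sum of Gaussians, can be evaluated directly. The sketch below uses the abstract's best-fit Classical widths, but the 50/50 mixture weight and the evaluation grid are assumptions made for illustration:

```python
import math

def kbo_inclination_pdf(i_deg, w1=0.5, s1=2.0, s2=8.1):
    """Unnormalized double-Gaussian inclination model for the Classical
    objects: f(i) = sin(i) * [w1*G(i; s1) + (1-w1)*G(i; s2)],
    with the Gaussian widths s1, s2 in degrees."""
    g = lambda s: math.exp(-0.5 * (i_deg / s) ** 2)
    return math.sin(math.radians(i_deg)) * (w1 * g(s1) + (1 - w1) * g(s2))

# Normalize on a grid and locate the mode of the distribution:
grid = [x * 0.1 for x in range(0, 601)]        # 0 to 60 degrees
vals = [kbo_inclination_pdf(i) for i in grid]
norm = sum(vals) * 0.1                          # crude trapezoid-free integral
mode = grid[vals.index(max(vals))]              # near the wide component's width
```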

  6. Unbiased roughness measurements: the key to better etch performance

    Science.gov (United States)

    Liang, Andrew; Mack, Chris; Sirard, Stephen; Liang, Chen-wei; Yang, Liu; Jiang, Justin; Shamma, Nader; Wise, Rich; Yu, Jengyi; Hymes, Diane

    2018-03-01

    Edge placement error (EPE) has become an increasingly critical metric for enabling Moore's Law scaling. Stochastic variations, characterized for lines by line width roughness (LWR) and line edge roughness (LER), are dominant contributors to EPE and are known to increase with the introduction of EUV lithography. However, despite recommendations from ITRS, NIST, and SEMI standards, the industry has not agreed upon a methodology to quantify these properties; differing methodologies applied to the same image thus often yield different roughness measurements and conclusions. To standardize LWR and LER measurements, Fractilia has developed an unbiased measurement that uses the raw, unfiltered line scan to subtract out image noise and distortions. Using Fractilia's inverse linescan model (FILM) to guide development, we highlight the key influences of roughness metrology on plasma-based resist smoothing processes. Test wafers were deposited to represent a 5 nm node EUV logic stack. The patterning stack consists of a core Si target layer with spin-on carbon (SOC) as the hardmask and spin-on glass (SOG) as the cap. These wafers were exposed on an ASML NXE 3350B EUV scanner with an advanced chemically amplified resist (CAR), and then etched through a variety of plasma-based resist smoothing techniques using a Lam Kiyo conductor etch system. Dense line-and-space patterns on the etched samples were imaged with advanced Hitachi CD-SEMs, and the LER and LWR were measured with both Fractilia and an industry-standard roughness measurement package. By employing Fractilia to guide plasma-based etch development, we demonstrate that Fractilia produces accurate roughness measurements on resist, in contrast to the industry-standard package. These results highlight the importance of subtracting out SEM image noise to obtain shorter development cycle times and lower target-layer roughness.
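
    The core idea of an unbiased roughness measurement, removing the contribution of SEM image noise from the measured variance, can be illustrated with a toy quadrature calculation (this simple model and the function name are illustrative only, not Fractilia's actual FILM algorithm):

```python
import math

def unbiased_lwr(measured_3sigma_nm, noise_3sigma_nm):
    """Toy noise subtraction: uncorrelated SEM noise adds in quadrature to
    the true line-width variation, so an unbiased estimate removes the
    noise variance from the measured (biased) variance."""
    var = (measured_3sigma_nm / 3.0) ** 2 - (noise_3sigma_nm / 3.0) ** 2
    if var < 0:
        raise ValueError("noise estimate exceeds the measured roughness")
    return 3.0 * math.sqrt(var)

# e.g. a measured 3-sigma LWR of 5.0 nm with 3.0 nm of noise yields ~4.0 nm.
```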

  7. An unbiased stereological method for efficiently quantifying the innervation of the heart and other organs based on total length estimations

    DEFF Research Database (Denmark)

    Mühlfeld, Christian; Papadakis, Tamara; Krasteva, Gabriela

    2010-01-01

    Quantitative information about the innervation is essential to analyze the structure-function relationships of organs. So far, there has been no unbiased stereological tool for this purpose. This study presents a new unbiased and efficient method to quantify the total length of axons in a given reference volume, illustrated on the left ventricle of the mouse heart. The method is based on the following steps: 1) estimation of the reference volume; 2) randomization of location and orientation using appropriate sampling techniques; 3) counting of nerve fiber profiles hit by a defined test area within...
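
    A sketch of the kind of design-based estimator involved, assuming the classical stereological relation L_V = 2·Q_A for isotropic, uniform random sections (function name and figures are illustrative; the paper's exact protocol differs in detail):

```python
def total_axon_length(n_profiles, test_area, reference_volume):
    """Stereological length estimate: on isotropic uniform random sections,
    length density L_V = 2 * Q_A, where Q_A is the number of nerve-fiber
    profiles counted per unit test area; multiplying by the reference
    volume gives total length (units follow the inputs)."""
    q_a = n_profiles / test_area          # profiles per unit area
    length_density = 2.0 * q_a            # length per unit volume
    return length_density * reference_volume

# 50 profiles in 1e4 um^2 of test area, 1e6 um^3 reference volume:
print(total_axon_length(50, 1e4, 1e6))  # total axon length in um
```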

  8. Imaging a Large Sample with Selective Plane Illumination Microscopy Based on Multiple Fluorescent Microsphere Tracking

    Science.gov (United States)

    Ryu, Inkeon; Kim, Daekeun

    2018-04-01

    A typical selective plane illumination microscopy (SPIM) image size is limited by the field of view, a characteristic of the objective lens. If an image larger than the field of view is to be obtained, image stitching, which combines step-scanned images into a single panoramic image, is required. However, accurately registering the step-scanned images is very difficult because the SPIM system uses a customized sample mount in which uncertainties in the translational and rotational motions exist. In this paper, an image registration technique based on multiple fluorescent microsphere tracking is proposed, built on quantifying the constellations of, and measuring the distances between, at least two fluorescent microspheres embedded in the sample. Image stitching results are demonstrated for optically cleared large tissue with various staining methods. Compensation for the effect of the sample rotation that occurs during translational motion in the sample mount is also discussed.
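
    Once microsphere centroids have been matched between two overlapping tiles, the translation and rotation between them can be recovered with a least-squares rigid fit; a generic 2-D Kabsch sketch (a hypothetical helper, not the authors' exact procedure; bead matching is assumed already done):

```python
import numpy as np

def register_beads(ref_pts, mov_pts):
    """Least-squares rigid (rotation + translation) registration between
    matched microsphere centroids, via the 2-D Kabsch algorithm.
    Returns (R, t) such that mov_pts ~= ref_pts @ R.T + t."""
    ref_c, mov_c = ref_pts.mean(axis=0), mov_pts.mean(axis=0)
    H = (ref_pts - ref_c).T @ (mov_pts - mov_c)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mov_c - R @ ref_c
    return R, t
```

    Applying the recovered transform to one tile's coordinates brings both tiles into a common frame before stitching.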

  9. On the accuracy of protein determination in large biological samples by prompt gamma neutron activation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kasviki, K. [Institute of Nuclear Technology and Radiation Protection, NCSR 'Demokritos', Aghia Paraskevi, Attikis 15310 (Greece); Medical Physics Laboratory, Medical School, University of Ioannina, Ioannina 45110 (Greece); Stamatelatos, I.E. [Institute of Nuclear Technology and Radiation Protection, NCSR 'Demokritos', Aghia Paraskevi, Attikis 15310 (Greece)], E-mail: ion@ipta.demokritos.gr; Yannakopoulou, E. [Institute of Physical Chemistry, NCSR 'Demokritos', Aghia Paraskevi, Attikis 15310 (Greece); Papadopoulou, P. [Institute of Technology of Agricultural Products, NAGREF, Lycovrissi, Attikis 14123 (Greece); Kalef-Ezra, J. [Medical Physics Laboratory, Medical School, University of Ioannina, Ioannina 45110 (Greece)

    2007-10-15

    A prompt gamma neutron activation analysis (PGNAA) facility has been developed for the determination of nitrogen and thus total protein in large volume biological samples or the whole body of small animals. In the present work, the accuracy of nitrogen determination by PGNAA in phantoms of known composition as well as in four raw ground meat samples of about 1 kg mass was examined. Dumas combustion and Kjeldahl techniques were also used for the assessment of nitrogen concentration in the meat samples. No statistically significant differences were found between the concentrations assessed by the three techniques. The results of this work demonstrate the applicability of PGNAA for the assessment of total protein in biological samples of 0.25-1.5 kg mass, such as a meat sample or the body of a small animal, even in vivo, with an equivalent radiation dose of about 40 mSv.

  10. On the accuracy of protein determination in large biological samples by prompt gamma neutron activation analysis

    International Nuclear Information System (INIS)

    Kasviki, K.; Stamatelatos, I.E.; Yannakopoulou, E.; Papadopoulou, P.; Kalef-Ezra, J.

    2007-01-01

    A prompt gamma neutron activation analysis (PGNAA) facility has been developed for the determination of nitrogen and thus total protein in large volume biological samples or the whole body of small animals. In the present work, the accuracy of nitrogen determination by PGNAA in phantoms of known composition as well as in four raw ground meat samples of about 1 kg mass was examined. Dumas combustion and Kjeldahl techniques were also used for the assessment of nitrogen concentration in the meat samples. No statistically significant differences were found between the concentrations assessed by the three techniques. The results of this work demonstrate the applicability of PGNAA for the assessment of total protein in biological samples of 0.25-1.5 kg mass, such as a meat sample or the body of a small animal, even in vivo, with an equivalent radiation dose of about 40 mSv.

  11. Determination of 129I in large soil samples after alkaline wet disintegration

    International Nuclear Information System (INIS)

    Bunzl, K.; Kracke, W.

    1992-01-01

    Large soil samples (up to 500 g) can conveniently be disintegrated with hydrogen peroxide in a utility tank under alkaline conditions in order subsequently to determine 129I by neutron activation analysis. Interfering elements such as Br are removed before neutron irradiation to reduce the radiation exposure of the personnel. The results were verified by determining 129I also by the combustion method. (orig.)

  12. 17 CFR Appendix B to Part 420 - Sample Large Position Report

    Science.gov (United States)

    2010-04-01

    17 CFR Appendix B to Part 420, Sample Large Position Report (Department of the Treasury, Commodity and Securities Exchanges, 2010-04-01). The sample report form covers positions held outright and as collateral for financial derivatives and other securities transactions.

  13. Fast sampling from a Hidden Markov Model posterior for large data

    DEFF Research Database (Denmark)

    Bonnevie, Rasmus; Hansen, Lars Kai

    2014-01-01

    Hidden Markov Models are of interest in a broad set of applications including modern data driven systems involving very large data sets. However, approximate inference methods based on Bayesian averaging are precluded in such applications as each sampling step requires a full sweep over the data...

  14. Investigating sex differences in psychological predictors of snack intake among a large representative sample

    NARCIS (Netherlands)

    Adriaanse, M.A.; Evers, C.; Verhoeven, A.A.C.; de Ridder, D.T.D.

    It is often assumed that there are substantial sex differences in eating behaviour (e.g. women are more likely to be dieters or emotional eaters than men). The present study investigates this assumption in a large representative community sample while incorporating a comprehensive set of

  15. Software engineering the mixed model for genome-wide association studies on large samples

    Science.gov (United States)

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample siz...

  16. Is Business Failure Due to Lack of Effort? Empirical Evidence from a Large Administrative Sample

    NARCIS (Netherlands)

    Ejrnaes, M.; Hochguertel, S.

    2013-01-01

    Does insurance provision reduce entrepreneurs' effort to avoid business failure? We exploit unique features of the voluntary Danish unemployment insurance (UI) scheme, that is available to the self-employed. Using a large sample of self-employed individuals, we estimate the causal effect of

  17. Determinants of salivary evening alpha-amylase in a large sample free of psychopathology

    NARCIS (Netherlands)

    Veen, Gerthe; Giltay, Erik J.; Vreeburg, Sophie A.; Licht, Carmilla M. M.; Cobbaert, Christa M.; Zitman, Frans G.; Penninx, Brenda W. J. H.

    Objective: Recently, salivary alpha-amylase (sAA) has been proposed as a suitable index for sympathetic activity and dysregulation of the autonomic nervous system (ANS). Although determinants of sAA have been described, they have not been studied within the same study with a large sample size

  18. Psychometric Properties of the Penn State Worry Questionnaire for Children in a Large Clinical Sample

    Science.gov (United States)

    Pestle, Sarah L.; Chorpita, Bruce F.; Schiffman, Jason

    2008-01-01

    The Penn State Worry Questionnaire for Children (PSWQ-C; Chorpita, Tracey, Brown, Collica, & Barlow, 1997) is a 14-item self-report measure of worry in children and adolescents. Although the PSWQ-C has demonstrated favorable psychometric properties in small clinical and large community samples, this study represents the first psychometric…

  19. Experimental studies of unbiased gluon jets from $e^{+}e^{-}$ annihilations using the jet boost algorithm

    CERN Document Server

    Abbiendi, G.; Akesson, P.F.; Alexander, G.; Allison, John; Amaral, P.; Anagnostou, G.; Anderson, K.J.; Arcelli, S.; Asai, S.; Axen, D.; Azuelos, G.; Bailey, I.; Barberio, E.; Barlow, R.J.; Batley, R.J.; Bechtle, P.; Behnke, T.; Bell, Kenneth Watson; Bell, P.J.; Bella, G.; Bellerive, A.; Benelli, G.; Bethke, S.; Biebel, O.; Boeriu, O.; Bock, P.; Boutemeur, M.; Braibant, S.; Brigliadori, L.; Brown, Robert M.; Buesser, K.; Burckhart, H.J.; Campana, S.; Carnegie, R.K.; Caron, B.; Carter, A.A.; Carter, J.R.; Chang, C.Y.; Charlton, David G.; Csilling, A.; Cuffiani, M.; Dado, S.; De Roeck, A.; De Wolf, E.A.; Desch, K.; Dienes, B.; Donkers, M.; Dubbert, J.; Duchovni, E.; Duckeck, G.; Duerdoth, I.P.; Etzion, E.; Fabbri, F.; Feld, L.; Ferrari, P.; Fiedler, F.; Fleck, I.; Ford, M.; Frey, A.; Furtjes, A.; Gagnon, P.; Gary, John William; Gaycken, G.; Geich-Gimbel, C.; Giacomelli, G.; Giacomelli, P.; Giunta, Marina; Goldberg, J.; Gross, E.; Grunhaus, J.; Gruwe, M.; Gunther, P.O.; Gupta, A.; Hajdu, C.; Hamann, M.; Hanson, G.G.; Harder, K.; Harel, A.; Harin-Dirac, M.; Hauschild, M.; Hawkes, C.M.; Hawkings, R.; Hemingway, R.J.; Hensel, C.; Herten, G.; Heuer, R.D.; Hill, J.C.; Hoffman, Kara Dion; Horvath, D.; Igo-Kemenes, P.; Ishii, K.; Jeremie, H.; Jovanovic, P.; Junk, T.R.; Kanaya, N.; Kanzaki, J.; Karapetian, G.; Karlen, D.; Kawagoe, K.; Kawamoto, T.; Keeler, R.K.; Kellogg, R.G.; Kennedy, B.W.; Kim, D.H.; Klein, K.; Klier, A.; Kluth, S.; Kobayashi, T.; Kobel, M.; Komamiya, S.; Kormos, Laura L.; Kramer, T.; Krieger, P.; von Krogh, J.; Kruger, K.; Kuhl, T.; Kupper, M.; Lafferty, G.D.; Landsman, H.; Lanske, D.; Layter, J.G.; Leins, A.; Lellouch, D.; Letts, J.; Levinson, L.; Lillich, J.; Lloyd, S.L.; Loebinger, F.K.; Lu, J.; Ludwig, J.; Macpherson, A.; Mader, W.; Marcellini, S.; Martin, A.J.; Masetti, G.; Mashimo, T.; Mattig, Peter; McDonald, W.J.; McKenna, J.; McMahon, T.J.; McPherson, R.A.; Meijers, F.; Menges, W.; Merritt, F.S.; Mes, H.; Michelini, A.; Mihara, S.; Mikenberg, G.; Miller, D.J.; Moed, S.; Mohr, W.; Mori, T.; Mutter, A.; Nagai, K.; Nakamura, I.; Nanjo, H.; Neal, H.A.; Nisius, R.; O'Neale, S.W.; Oh, A.; Okpara, A.; Oreglia, M.J.; Orito, S.; Pahl, C.; Pasztor, G.; Pater, J.R.; Patrick, G.N.; Pilcher, J.E.; Pinfold, J.; Plane, David E.; Poli, B.; Polok, J.; Pooth, O.; Przybycien, M.; Quadt, A.; Rabbertz, K.; Rembser, C.; Renkel, P.; Rick, H.; Roney, J.M.; Rosati, S.; Rozen, Y.; Runge, K.; Sachs, K.; Saeki, T.; Sarkisyan, E.K.G.; Schaile, A.D.; Schaile, O.; Scharff-Hansen, P.; Schieck, J.; Schoerner-Sadenius, Thomas; Schroder, Matthias; Schumacher, M.; Schwick, C.; Scott, W.G.; Seuster, R.; Shears, T.G.; Shen, B.C.; Sherwood, P.; Siroli, G.; Skuja, A.; Smith, A.M.; Sobie, R.; Soldner-Rembold, S.; Spano, F.; Stahl, A.; Stephens, K.; Strom, David M.; Strohmer, R.; Tarem, S.; Tasevsky, M.; Taylor, R.J.; Teuscher, R.; Thomson, M.A.; Torrence, E.; Toya, D.; Tran, P.; Trigger, I.; Trocsanyi, Z.; Tsur, E.; Turner-Watson, M.F.; Ueda, I.; Ujvari, B.; Vollmer, C.F.; Vannerem, P.; Vertesi, R.; Verzocchi, M.; Voss, H.; Vossebeld, J.; Waller, D.; Ward, C.P.; Ward, D.R.; Warsinsky, M.; Watkins, P.M.; Watson, A.T.; Watson, N.K.; Wells, P.S.; Wengler, T.; Wermes, N.; Wetterling, D.; Wilson, G.W.; Wilson, J.A.; Wolf, G.; Wyatt, T.R.; Yamashita, S.; Zer-Zion, D.; Zivkovic, Lidija

    2004-01-01

    We present the first experimental results based on the jet boost algorithm, a technique to select unbiased samples of gluon jets in e+e- annihilations, i.e. gluon jets free of biases introduced by event selection or jet finding criteria. Our results are derived from hadronic Z0 decays observed with the OPAL detector at the LEP e+e- collider at CERN. First, we test the boost algorithm through studies with Herwig Monte Carlo events and find that it provides accurate measurements of the charged particle multiplicity distributions of unbiased gluon jets for jet energies larger than about 5 GeV, and of the jet particle energy spectra (fragmentation functions) for jet energies larger than about 14 GeV. Second, we apply the boost algorithm to our data to derive unbiased measurements of the gluon jet multiplicity distribution for energies between about 5 and 18 GeV, and of the gluon jet fragmentation function at 14 and 18 GeV. In conjunction with our earlier results at 40 GeV, we then test QCD calculations for the en...

  20. Statistics as Unbiased Estimators: Exploring the Teaching of Standard Deviation

    Science.gov (United States)

    Wasserman, Nicholas H.; Casey, Stephanie; Champion, Joe; Huey, Maryann

    2017-01-01

    This manuscript presents findings from a study about the knowledge for and planned teaching of standard deviation. We investigate how understanding variance as an unbiased (inferential) estimator--not just a descriptive statistic for the variation (spread) in data--is related to teachers' instruction regarding standard deviation, particularly…
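
    The distinction the study turns on, a descriptive statistic versus an unbiased estimator, comes down to the divisor in the variance formula (Bessel's correction):

```python
def variance(xs, unbiased=True):
    """Sample variance. Dividing by n - 1 (Bessel's correction) yields an
    unbiased estimator of the population variance; dividing by n merely
    describes the spread of the data at hand."""
    n = len(xs)
    mean = sum(xs) / n
    ss = sum((x - mean) ** 2 for x in xs)
    return ss / (n - 1) if unbiased else ss / n

print(variance([2, 4, 6], unbiased=True))   # → 4.0
print(variance([2, 4, 6], unbiased=False))  # ≈ 2.67
```

    The standard deviation taught descriptively uses the n divisor; the inferential version, intended to estimate a population parameter from a sample, uses n - 1.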

  1. Feasibility studies on large sample neutron activation analysis using a low power research reactor

    International Nuclear Information System (INIS)

    Gyampo, O.

    2008-06-01

    Instrumental neutron activation analysis (INAA) using the Ghana Research Reactor-1 (GHARR-1) can be applied directly to samples with masses in the gram range. Sample weights were in the range of 0.5 g to 5 g, so the representativity of the sample is improved, as well as the sensitivity. Samples were irradiated in a low-power research reactor. The correction for neutron self-shielding within the sample is determined from a measurement of the neutron flux depression just outside the sample. The correction for gamma-ray self-attenuation in the sample was performed via linear attenuation coefficients derived from transmission measurements. Quantitative and qualitative analysis of the data was done using gamma-ray spectrometry (HPGe detector). The results of this study of the possibilities of large sample NAA using a miniature neutron source reactor (MNSR) show clearly that the GHARR-1 at the National Nuclear Research Institute (NNRI) can be used for the analysis of samples of up to 5 g using the pneumatic transfer systems.
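
    The gamma-ray self-attenuation correction from a transmission measurement can be sketched as follows, assuming a uniform slab geometry (an idealization; the facility's actual geometry factors may differ):

```python
import math

def self_attenuation_factor(transmission):
    """For a uniform slab with measured transmission T = exp(-mu * t), the
    mean attenuation of gammas emitted uniformly inside the slab is
    f = (1 - T) / (-ln T); the measured count rate is divided by f."""
    mu_t = -math.log(transmission)      # optical thickness from transmission
    return (1.0 - transmission) / mu_t

# A thin (nearly transparent) sample needs almost no correction:
print(self_attenuation_factor(0.99))  # ≈ 0.995
```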

  2. Sample preparation for large-scale bioanalytical studies based on liquid chromatographic techniques.

    Science.gov (United States)

    Medvedovici, Andrei; Bacalum, Elena; David, Victor

    2018-01-01

    The quality of the analytical data obtained in large-scale, long-term bioanalytical studies based on liquid chromatography depends on a number of experimental factors, including the choice of sample preparation method. This review discusses this tedious part of bioanalytical studies as applied to large-scale sample sets, with liquid chromatography coupled to different detector types as the core analytical technique. The main sample preparation methods covered are protein precipitation, liquid-liquid extraction, solid-phase extraction, derivatization and their variants. They are discussed in terms of analytical performance, fields of application, and advantages and disadvantages. The cited literature mainly covers the analytical achievements of the last decade, although several earlier papers that have become more valuable over time are also included. Copyright © 2017 John Wiley & Sons, Ltd.

  3. A fast learning method for large scale and multi-class samples of SVM

    Science.gov (United States)

    Fan, Yu; Guo, Huiming

    2017-06-01

    A fast learning method for multi-class support vector machine (SVM) classification, based on a binary tree, is presented to address the low training efficiency of SVMs on large-scale multi-class samples. A bottom-up procedure builds the binary-tree hierarchy, and, following that hierarchy, a sub-classifier at each node learns from the corresponding samples. During learning, several class clusters are generated by a first clustering of the training samples. Central points are extracted from clusters that contain only one class. For clusters containing two classes, the numbers of clusters for the positive and negative samples are set according to their degree of mixture, a secondary clustering is performed, and central points are then extracted from the resulting sub-clusters. Sub-classifiers are obtained by learning from the reduced sample sets formed by the extracted central points. Simulation experiments show that this fast learning method, based on multi-level clustering, maintains high classification accuracy while greatly reducing the number of training samples and effectively improving learning efficiency.
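
    The sample-reduction step, replacing each class's training points with cluster centers before fitting the SVM, can be sketched as follows (a simplified illustration using scikit-learn; the paper's binary-tree construction and mixture-degree heuristics are not reproduced):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def reduced_sample_svm(X, y, clusters_per_class=10, **svc_kwargs):
    """Fit an SVM on k-means centroids instead of the raw points: each
    class is summarized by a small set of cluster centers, shrinking the
    training set that the SVM has to process."""
    centers, labels = [], []
    for label in np.unique(y):
        pts = X[y == label]
        k = min(clusters_per_class, len(pts))
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pts)
        centers.append(km.cluster_centers_)
        labels.extend([label] * k)
    return SVC(**svc_kwargs).fit(np.vstack(centers), np.array(labels))
```

    SVM training cost grows super-linearly with the number of samples, so summarizing each class by a few dozen centroids trades a little boundary precision for a large speed-up.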

  4. Fast concentration of dissolved forms of cesium radioisotopes from large seawater samples

    International Nuclear Information System (INIS)

    Jan Kamenik; Henrieta Dulaiova; Ferdinand Sebesta; Kamila St'astna; Czech Technical University, Prague

    2013-01-01

    The method developed for cesium concentration from large freshwater samples was tested and adapted for the analysis of cesium radionuclides in seawater. Concentration of dissolved forms of cesium from large seawater samples (about 100 L) was performed using the composite absorbers AMP-PAN and KNiFC-PAN, with ammonium molybdophosphate and potassium-nickel hexacyanoferrate(II) as the active components, respectively, and polyacrylonitrile as the binding polymer. A specially designed chromatography column with a bed volume (BV) of 25 mL allowed fast seawater flow rates (up to 1,200 BV h-1). Recovery yields were determined by ICP-MS analysis of stable cesium added to the seawater sample. Both absorbers proved usable for cesium concentration from large seawater samples. The KNiFC-PAN material was slightly more effective for acidified seawater (recovery yield around 93% at 700 BV h-1) and showed similar efficiency for natural seawater. The activity concentrations of 137Cs determined in seawater from the central Pacific Ocean were 1.5 ± 0.1 and 1.4 ± 0.1 Bq m-3 for an offshore (January 2012) and a coastal (February 2012) locality, respectively; 134Cs activities were below the detection limit. (author)
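
    Converting the measured counts of the concentrated sample back to an activity concentration, with the recovery correction from the stable-Cs ICP-MS tracer, follows the usual gamma-spectrometry bookkeeping (parameter names and the simple formula are illustrative, not the authors' exact calibration):

```python
def activity_concentration(net_counts, live_time_s, efficiency,
                           gamma_intensity, recovery, volume_m3):
    """Generic Bq/m^3 calculation: counts -> Bq via counting efficiency and
    gamma emission probability, then correction for the chemical recovery
    yield, and finally division by the processed seawater volume."""
    activity_bq = net_counts / (live_time_s * efficiency * gamma_intensity)
    return activity_bq / (recovery * volume_m3)
```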

  5. Development of Large Sample Neutron Activation Technique for New Applications in Thailand

    International Nuclear Information System (INIS)

    Laoharojanaphand, S.; Tippayakul, C.; Wonglee, S.; Channuie, J.

    2018-01-01

    The development of large sample neutron activation analysis (LSNAA) in Thailand is presented in this paper. The technique was first developed with rice as the test sample, using the Thai Research Reactor-1/Modification 1 (TRR-1/M1) as the neutron source. The first step was to select and characterize an appropriate irradiation facility. An out-of-core irradiation facility (position A4) was attempted first, and the results obtained there guided subsequent experiments with the thermal column facility. The thermal column was characterized with Cu wire to determine the spatial flux distribution with and without the rice sample: the flux depression was less than 30% without the sample and increased to about 60% with it. Flux monitors placed inside the rice sample were used to determine the average flux over the sample. The gamma self-shielding effect during measurement was corrected using Monte Carlo simulation; the ratio between the efficiencies of the volume source and a point source at each energy was calculated with the MCNPX code. The team adopted the k0-NAA methodology to calculate element concentrations; the k0-NAA program developed by the IAEA was set up to simulate the conditions of the irradiation and measurement facilities used in this research. Element concentrations in the bulk rice sample were then calculated taking the flux depression and gamma efficiency corrections into account. At present the results still show large discrepancies with the reference values, and further validation work will be performed to identify the sources of error. The LSNAA technique was also applied to the activation analysis of the IAEA archaeological mock-up; those results are provided in this report. (author)

  6. A Simple Sampling Method for Estimating the Accuracy of Large Scale Record Linkage Projects.

    Science.gov (United States)

    Boyd, James H; Guiver, Tenniel; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Anderson, Phil; Dickinson, Teresa

    2016-05-17

    Record linkage techniques allow different data collections to be brought together to provide a wider picture of the health status of individuals. Ensuring high linkage quality is important to guarantee the quality and integrity of research. Current methods for measuring linkage quality typically focus on precision (the proportion of accepted links that are correct matches), given the difficulty of measuring the proportion of false negatives. The aim of this work is to introduce and evaluate a sampling-based method to estimate both precision and recall following record linkage. In the sampling-based method, record-pairs from each threshold band (including those below the identified cut-off for acceptance) are sampled and clerically reviewed. These results are then extrapolated to the entire set of record-pairs, providing estimates of false positives and false negatives. The method was evaluated on a synthetically generated dataset in which the true match status (which records belonged to the same person) was known. The sampled estimates of linkage quality were close to the actual linkage quality metrics calculated for the whole synthetic dataset. The precision and recall measures for seven reviewers were very consistent, with little variation in the clerical assessment results (overall agreement by Fleiss' kappa was 0.601). This method offers a possible means of accurately estimating matching quality and refining linkages in population-level linkage studies. The sampling approach is especially important for large linkage projects, where the number of record pairs produced may be very large, often running into millions.
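
    The extrapolation step can be sketched in a few lines: record-pairs are stratified by match-score band, a reviewed sample from each band estimates its true-match fraction, and the counts are scaled back up (field names and numbers are illustrative):

```python
def estimate_linkage_quality(strata):
    """Estimate precision and recall from clerically reviewed samples.
    Each stratum is one score band: its total number of record-pairs, the
    number sampled for review, how many sampled pairs were true matches,
    and whether the band lies above the acceptance cut-off."""
    tp = fp = fn = 0.0
    for s in strata:
        scale = s["total_pairs"] / s["sampled"]         # expand sample to band
        true_matches = s["true_in_sample"] * scale
        if s["accepted"]:
            tp += true_matches
            fp += (s["sampled"] - s["true_in_sample"]) * scale
        else:
            fn += true_matches                          # links that were missed
    return tp / (tp + fp), tp / (tp + fn)

bands = [
    {"total_pairs": 1000, "sampled": 100, "true_in_sample": 95, "accepted": True},
    {"total_pairs": 2000, "sampled": 100, "true_in_sample": 5,  "accepted": False},
]
precision, recall = estimate_linkage_quality(bands)  # 0.95, ~0.905
```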

  7. Tracing the trajectory of skill learning with a very large sample of online game players.

    Science.gov (United States)

    Stafford, Tom; Dewar, Michael

    2014-02-01

    In the present study, we analyzed data from a very large sample (N = 854,064) of players of an online game involving rapid perception, decision making, and motor responding. Use of game data allowed us to connect, for the first time, rich details of training history with measures of performance from participants engaged for a sustained amount of time in effortful practice. We showed that lawful relations exist between practice amount and subsequent performance, and between practice spacing and subsequent performance. Our methodology allowed an in situ confirmation of results long established in the experimental literature on skill acquisition. Additionally, we showed that greater initial variation in performance is linked to higher subsequent performance, a result we link to the exploration/exploitation trade-off from the computational framework of reinforcement learning. We discuss the benefits and opportunities of behavioral data sets with very large sample sizes and suggest that this approach could be particularly fecund for studies of skill acquisition.
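
    The classic "lawful relation" between practice amount and performance is a power law, performance ≈ a · practice^b, which can be recovered by a straight-line fit in log-log space (a generic sketch, not the authors' exact model):

```python
import numpy as np

def fit_power_law(practice, performance):
    """Fit performance = a * practice**b by ordinary least squares on the
    log-log transformed data; returns the prefactor a and exponent b."""
    b, log_a = np.polyfit(np.log(practice), np.log(performance), 1)
    return np.exp(log_a), b

# Performance rising ~41% per doubling of practice implies b near 0.5:
a, b = fit_power_law([1, 2, 4, 8], [10.0, 14.1, 20.0, 28.3])
```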

  8. Development of digital gamma-activation autoradiography for analysis of samples of large area

    International Nuclear Information System (INIS)

    Kolotov, V.P.; Grozdov, D.S.; Dogadkin, N.N.; Korobkov, V.I.

    2011-01-01

    Gamma-activation autoradiography is a promising method for the screening detection of inclusions of precious metals in geochemical samples. Its characteristics allow the analysis of thin sections of large size (tens of cm²), which favourably distinguishes it among other methods for local analysis. At the same time, the activating field of the accelerator bremsstrahlung displays a sharp decrease in intensity with distance along the axis. A method for "equalizing" the activation dose during irradiation of large thin sections has therefore been developed. It is based on a hardware-software system comprising a device for moving the sample during irradiation, a program for computer modelling of the acquired activation dose for the chosen kinematics of the sample movement, and a program for pixel-by-pixel correction of the autoradiographic images. For the detection of inclusions of precious metals, a method for analysing the dynamics of the acquired dose during sample decay has been developed. It processes a time series of coaxial autoradiographic images pixel by pixel and generates secondary meta-images that can be interpreted, on the basis of half-lives, for the presence of inclusions of interest. The method was tested on copper-nickel polymetallic ores. The developed solutions considerably expand the possible applications of digital gamma-activation autoradiography. (orig.)
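
    Interpreting the meta-images by half-life reduces, in the simplest two-frame case, to inverting the exponential decay law (a minimal sketch; real data would fit the full time series per pixel):

```python
import math

def half_life_from_two_frames(a1, a2, dt_s):
    """Pixel-wise half-life from two autoradiographic frames recorded dt
    seconds apart: A(t) = A0 * exp(-lambda * t) gives
    T_half = dt * ln 2 / ln(a1 / a2)."""
    return dt_s * (math.log(2) / math.log(a1 / a2))

# Activity halving over the interval means the half-life equals the interval:
print(half_life_from_two_frames(100.0, 50.0, 3600.0))  # → 3600.0
```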

  9. A self-sampling method to obtain large volumes of undiluted cervicovaginal secretions.

    Science.gov (United States)

    Boskey, Elizabeth R; Moench, Thomas R; Hees, Paul S; Cone, Richard A

    2003-02-01

    Studies of vaginal physiology and pathophysiology sometimes require larger volumes of undiluted cervicovaginal secretions than can be obtained by current methods. A convenient method for self-sampling these secretions outside a clinical setting can facilitate such studies of reproductive health. The goal was to develop a vaginal self-sampling method for collecting large volumes of undiluted cervicovaginal secretions. A menstrual collection device (the Instead cup) was inserted briefly into the vagina to collect secretions that were then retrieved from the cup by centrifugation in a 50-ml conical tube. All 16 women asked to perform this procedure found it feasible and acceptable. Among 27 samples, an average of 0.5 g of secretions (range, 0.1-1.5 g) was collected. This is a rapid and convenient self-sampling method for obtaining relatively large volumes of undiluted cervicovaginal secretions. It should prove suitable for a wide range of assays, including those involving sexually transmitted diseases, microbicides, vaginal physiology, immunology, and pathophysiology.

  10. Development of digital gamma-activation autoradiography for analysis of samples of large area

    Energy Technology Data Exchange (ETDEWEB)

    Kolotov, V.P.; Grozdov, D.S.; Dogadkin, N.N.; Korobkov, V.I. [Russian Academy of Sciences, Moscow (Russian Federation). Vernadsky Inst. of Geochemistry and Analytical Chemistry

    2011-07-01

    Gamma-activation autoradiography is a promising method for the screening detection of inclusions of precious metals in geochemical samples. Its characteristics allow the analysis of thin sections of large size (tens of cm²), which favourably distinguishes it among other methods for local analysis. At the same time, the activating field of the accelerator bremsstrahlung displays a sharp decrease in intensity with distance along the axis. A method for "equalizing" the activation dose during irradiation of large thin sections has therefore been developed. It is based on a hardware-software system comprising a device for moving the sample during irradiation, a program for computer modelling of the acquired activation dose for the chosen kinematics of the sample movement, and a program for pixel-by-pixel correction of the autoradiographic images. For the detection of inclusions of precious metals, a method for analysing the dynamics of the acquired dose during sample decay has been developed. It processes a time series of coaxial autoradiographic images pixel by pixel and generates secondary meta-images that can be interpreted, on the basis of half-lives, for the presence of inclusions of interest. The method was tested on copper-nickel polymetallic ores. The developed solutions considerably expand the possible applications of digital gamma-activation autoradiography. (orig.)

  11. Large area synchrotron X-ray fluorescence mapping of biological samples

    International Nuclear Information System (INIS)

    Kempson, I.; Thierry, B.; Smith, E.; Gao, M.; De Jonge, M.

    2014-01-01

    Large area mapping of inorganic material in biological samples has suffered severely from prohibitively long acquisition times. With the advent of new detector technology we can now generate statistically relevant information for studying cell populations, inter-variability and bioinorganic chemistry in large specimens. We have been implementing ultrafast synchrotron-based XRF mapping afforded by the MAIA detector for large area mapping of biological material. For example, a 2.5 million pixel map can be acquired in 3 hours, compared to a typical synchrotron XRF set-up needing over 1 month of uninterrupted beamtime. Of particular focus to us is the fate of metals and nanoparticles in cells, 3D tissue models and animal tissues. The large area scanning has for the first time provided statistically significant information on sufficiently large numbers of cells to provide data on intercellular variability in uptake of nanoparticles. Techniques such as flow cytometry generally require analysis of thousands of cells for statistically meaningful comparison, due to the large degree of variability. Large area XRF now gives comparable information in a quantifiable manner. Furthermore, we can now image localised deposition of nanoparticles in tissues that would be highly improbable to 'find' by typical XRF imaging. In addition, the ultrafast nature also makes it viable to conduct 3D XRF tomography over large dimensions. This technology opens up new opportunities in biomonitoring and in understanding metal and nanoparticle fate ex vivo. Following from this is the extension to molecular imaging through specific antibody-targeted nanoparticles to label specific tissues and monitor cellular processes or biological consequences.

  12. Unbiased stereologic techniques for practical use in diagnostic histopathology

    DEFF Research Database (Denmark)

    Sørensen, Flemming Brandt

    1995-01-01

    Grading of malignancy by the examination of morphologic and cytologic details in histologic sections from malignant neoplasms is based exclusively on qualitative features, associated with significant subjectivity, and thus rather poor reproducibility. The traditional way of malignancy grading may ... by introducing quantitative techniques in the histopathologic discipline of malignancy grading. Unbiased stereologic methods, especially based on measurements of nuclear three-dimensional mean size, have during the last decade proved their value in this regard. In this survey, the methods are reviewed regarding ... of solid tumors. This new, unbiased attitude to malignancy grading is associated with excellent virtues, which ultimately may help the clinician in the choice of optimal treatment of the individual patient suffering from cancer. Stereologic methods are not solely applicable to the field of malignancy...

  13. Unbiased estimators for spatial distribution functions of classical fluids

    Science.gov (United States)

    Adib, Artur B.; Jarzynski, Christopher

    2005-01-01

    We use a statistical-mechanical identity, closely related to the familiar virial theorem, to derive unbiased estimators for spatial distribution functions of classical fluids. In particular, we obtain estimators for both the fluid density ρ(r) in the vicinity of a fixed solute and the pair correlation g(r) of a homogeneous classical fluid. We illustrate the utility of our estimators with numerical examples, which reveal advantages over traditional histogram-based methods of computing such distributions.
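    For contrast, the traditional histogram-based g(r) estimator that such unbiased estimators improve upon can be sketched as follows; this is a minimal illustration on an ideal-gas configuration (where g(r) ≈ 1), with box size, particle number, and binning chosen arbitrarily rather than taken from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Ideal-gas configuration in a periodic cubic box: g(r) should be ~1 everywhere.
    L, N = 10.0, 1000
    pos = rng.uniform(0.0, L, size=(N, 3))

    # Minimum-image pair distances (valid for r < L/2).
    diff = pos[:, None, :] - pos[None, :, :]
    diff -= L * np.round(diff / L)
    r = np.sqrt((diff ** 2).sum(-1))[np.triu_indices(N, k=1)]

    # Conventional histogram estimator: count pairs per spherical shell,
    # then normalize by the ideal-gas expectation for that shell.
    edges = np.linspace(0.1, L / 2, 25)
    counts, _ = np.histogram(r, bins=edges)
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    rho = N / L ** 3
    g = counts / (0.5 * N * rho * shell_vol)  # 0.5*N*rho*vol = expected unique pairs
    ```

    The histogram's bias and noise at small r (few pairs per shell) is exactly the regime where estimator choice matters.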

  14. Statistical characterization of a large geochemical database and effect of sample size

    Science.gov (United States)

    Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.

    2005-01-01

    The authors investigated statistical distributions for concentrations of chemical elements from the National Geochemical Survey (NGS) database of the U.S. Geological Survey. At the time of this study, the NGS data set encompassed 48,544 stream sediment and soil samples from the conterminous United States analyzed by ICP-AES following a 4-acid near-total digestion. This report includes 27 elements: Al, Ca, Fe, K, Mg, Na, P, Ti, Ba, Ce, Co, Cr, Cu, Ga, La, Li, Mn, Nb, Nd, Ni, Pb, Sc, Sr, Th, V, Y and Zn. The goal and challenge for the statistical overview was to delineate chemical distributions in a complex, heterogeneous data set spanning a large geographic range (the conterminous United States), and many different geological provinces and rock types. After declustering to create a uniform spatial sample distribution with 16,511 samples, histograms and quantile-quantile (Q-Q) plots were employed to delineate subpopulations that have coherent chemical and mineral affinities. Probability groupings are discerned by changes in slope (kinks) on the plots. Major rock-forming elements, e.g., Al, Ca, K and Na, tend to display linear segments on normal Q-Q plots. These segments can commonly be linked to petrologic or mineralogical associations. For example, linear segments on K and Na plots reflect dilution of clay minerals by quartz sand (low in K and Na). Minor and trace element relationships are best displayed on lognormal Q-Q plots. These sensitively reflect discrete relationships in subpopulations within the wide range of the data. For example, small but distinctly log-linear subpopulations for Pb, Cu, Zn and Ag are interpreted to represent ore-grade enrichment of naturally occurring minerals such as sulfides. None of the 27 chemical elements could pass the test for either normal or lognormal distribution on the declustered data set. Part of the reason relates to the presence of mixtures of subpopulations and outliers. Random samples of the data set with successively
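    The kinked lognormal Q-Q plots described above can be illustrated with a toy mixture; all distribution parameters below are invented for illustration (a lognormal background plus a small "ore-grade" subpopulation), not values from the NGS data:

    ```python
    import math
    import random
    from statistics import NormalDist

    random.seed(42)

    # Hypothetical mixture: 95% lognormal background, 5% enriched subpopulation.
    background = [random.lognormvariate(0.0, 0.3) for _ in range(1900)]
    enriched = [random.lognormvariate(3.0, 0.3) for _ in range(100)]
    data = sorted(background + enriched)
    n = len(data)

    # Lognormal Q-Q coordinates: theoretical normal quantile vs log(ordered datum).
    nd = NormalDist()
    qq = [(nd.inv_cdf((i + 0.5) / n), math.log(x)) for i, x in enumerate(data)]

    def slope(points):
        # Least-squares slope of a Q-Q segment; a change in slope marks a "kink".
        xs, ys = zip(*points)
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        return sum((x - mx) * (y - my) for x, y in points) / sum((x - mx) ** 2 for x in xs)

    low = slope(qq[: n // 2])          # background-dominated segment: gentle slope
    high = slope(qq[int(0.9 * n):])    # upper tail spanning the kink: much steeper
    ```

    A much steeper slope in the upper tail than in the body of the plot flags the enriched subpopulation, mirroring the interpretation used in the survey.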

  15. Specific Antibodies Reacting with SV40 Large T Antigen Mimotopes in Serum Samples of Healthy Subjects.

    Directory of Open Access Journals (Sweden)

    Mauro Tognon

    Simian Virus 40, experimentally assayed in vitro in different animal and human cells and in vivo in rodents, was classified as a small DNA tumor virus. In previous studies, many groups identified Simian Virus 40 sequences in healthy individuals and cancer patients using PCR techniques, whereas others failed to detect the viral sequences in human specimens. These conflicting results prompted us to develop a novel indirect ELISA with synthetic peptides, mimicking Simian Virus 40 capsid viral protein antigens, named mimotopes. This immunologic assay allowed us to investigate the presence of serum antibodies against Simian Virus 40 and to verify whether Simian Virus 40 is circulating in humans. In this investigation, two mimotopes from Simian Virus 40 large T antigen, the viral replication protein and oncoprotein, were employed to test human serum antibodies for specific reactions. This indirect ELISA with synthetic peptides from Simian Virus 40 large T antigen was used to assay a new collection of serum samples from healthy subjects. This novel assay revealed that serum antibodies against Simian Virus 40 large T antigen mimotopes are detectable, at low titer, in healthy subjects aged 18 to 65 years. The overall prevalence of reactivity with the two Simian Virus 40 large T antigen peptides was 20%. This new ELISA with two mimotopes of the early viral regions is able to detect Simian Virus 40 large T antigen-antibody responses in a specific manner.

  16. Higher-dimensional orbital-angular-momentum-based quantum key distribution with mutually unbiased bases

    CSIR Research Space (South Africa)

    Mafu, M

    2013-09-01

    We present an experimental study of higher-dimensional quantum key distribution protocols based on mutually unbiased bases, implemented by means of photons carrying orbital angular momentum. We perform (d + 1) mutually unbiased measurements in a...

  17. CO2 isotope analyses using large air samples collected on intercontinental flights by the CARIBIC Boeing 767

    NARCIS (Netherlands)

    Assonov, S.S.; Brenninkmeijer, C.A.M.; Koeppel, C.; Röckmann, T.

    2009-01-01

    Analytical details for 13C and 18O isotope analyses of atmospheric CO2 in large air samples are given. The large air samples of nominally 300 L were collected during the passenger aircraft-based atmospheric chemistry research project CARIBIC and analyzed for a large number of trace gases and

  18. Presence and significant determinants of cognitive impairment in a large sample of patients with multiple sclerosis.

    Directory of Open Access Journals (Sweden)

    Martina Borghi

    OBJECTIVES: To investigate the presence and the nature of cognitive impairment in a large sample of patients with Multiple Sclerosis (MS), and to identify clinical and demographic determinants of cognitive impairment in MS. METHODS: 303 patients with MS and 279 healthy controls were administered the Brief Repeatable Battery of Neuropsychological tests (BRB-N); measures of pre-morbid verbal competence and neuropsychiatric measures were also administered. RESULTS: Patients and healthy controls were matched for age, gender, education and pre-morbid verbal Intelligence Quotient. Patients presenting with cognitive impairment were 108/303 (35.6%). In the overall group of participants, the significant predictors of the most sensitive BRB-N scores were: presence of MS, age, education, and vocabulary. The significant predictors when considering MS patients only were: course of MS, age, education, vocabulary, and depression. Using logistic regression analyses, significant determinants of the presence of cognitive impairment in relapsing-remitting MS patients were: duration of illness (OR = 1.053, 95% CI = 1.010-1.097, p = 0.015), Expanded Disability Status Scale score (OR = 1.247, 95% CI = 1.024-1.517, p = 0.028), and vocabulary (OR = 0.960, 95% CI = 0.936-0.984, p = 0.001), while in the smaller group of progressive MS patients these predictors did not play a significant role in determining the cognitive outcome. CONCLUSIONS: Our results corroborate the evidence about the presence and the nature of cognitive impairment in a large sample of patients with MS. Furthermore, our findings identify significant clinical and demographic determinants of cognitive impairment in a large sample of MS patients for the first time. Implications for further research and clinical practice are discussed.
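    For readers unfamiliar with how such odds ratios and confidence intervals arise from logistic-regression output, a short sketch follows; the coefficient and standard error are back-calculated assumptions chosen to reproduce the reported duration-of-illness figures, not values taken from the study:

    ```python
    import math

    def odds_ratio_ci(b, se, z=1.96):
        """Odds ratio and 95% CI from a logistic-regression coefficient b with standard error se."""
        return math.exp(b), math.exp(b - z * se), math.exp(b + z * se)

    # Assumed (illustrative) coefficient/SE consistent with the reported
    # duration-of-illness effect: OR = 1.053, 95% CI 1.010-1.097.
    or_, lo, hi = odds_ratio_ci(0.0516, 0.0211)
    ```

    Exponentiating the coefficient and its Wald interval endpoints is the standard route from a fitted logit model to the ORs quoted in the abstract.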

  19. Superwind Outflows in Seyfert Galaxies? : Large-Scale Radio Maps of an Edge-On Sample

    Science.gov (United States)

    Colbert, E.; Gallimore, J.; Baum, S.; O'Dea, C.

    1995-03-01

    Large-scale galactic winds (superwinds) are commonly found flowing out of the nuclear region of ultraluminous infrared and powerful starburst galaxies. Stellar winds and supernovae from the nuclear starburst provide the energy to drive these superwinds. The outflowing gas escapes along the rotation axis, sweeping up and shock-heating clouds in the halo, which produces optical line emission, radio synchrotron emission, and X-rays. These features can most easily be studied in edge-on systems, so that the wind emission is not confused by that from the disk. We have begun a systematic search for superwind outflows in Seyfert galaxies. In an earlier optical emission-line survey, we found extended minor axis emission and/or double-peaked emission line profiles in ≳30% of the sample objects. We present here large-scale (6 cm VLA C-config) radio maps of 11 edge-on Seyfert galaxies, selected (without bias) from a distance-limited sample of 23 edge-on Seyferts. These data have been used to estimate the frequency of occurrence of superwinds. Preliminary results indicate that four (36%) of the 11 objects observed and six (26%) of the 23 objects in the distance-limited sample have extended radio emission oriented perpendicular to the galaxy disk. This emission may be produced by a galactic wind blowing out of the disk. Two (NGC 2992 and NGC 5506) of the nine objects for which we have both radio and optical data show good evidence for a galactic wind in both datasets. We suggest that galactic winds occur in ≳30% of all Seyferts. A goal of this work is to find a diagnostic that can be used to distinguish between large-scale outflows that are driven by starbursts and those that are driven by an AGN. The presence of starburst-driven superwinds in Seyferts, if established, would have important implications for the connection between starburst galaxies and AGN.

  20. Analysis of reflection-peak wavelengths of sampled fiber Bragg gratings with large chirp.

    Science.gov (United States)

    Zou, Xihua; Pan, Wei; Luo, Bin

    2008-09-10

    The reflection-peak wavelengths (RPWs) in the spectra of sampled fiber Bragg gratings with large chirp (SFBGs-LC) are theoretically investigated. Such RPWs are divided into two parts, the RPWs of equivalent uniform SFBGs (U-SFBGs) and the wavelength shift caused by the large chirp in the grating period (CGP). We propose a quasi-equivalent transform to deal with the CGP. That is, the CGP is transferred into quasi-equivalent phase shifts to directly derive the Fourier transform of the refractive index modulation. Then, in the case of both the direct and the inverse Talbot effect, the wavelength shift is obtained from the Fourier transform. Finally, the RPWs of SFBGs-LC can be achieved by combining the wavelength shift and the RPWs of equivalent U-SFBGs. Several simulations are shown to numerically confirm these predicted RPWs of SFBGs-LC.

  1. Sampling large landscapes with small-scale stratification-User's Manual

    Science.gov (United States)

    Bart, Jonathan

    2011-01-01

    This manual explains procedures for partitioning a large landscape into plots, assigning the plots to strata, and selecting plots in each stratum to be surveyed. These steps are referred to as the "sampling large landscapes (SLL) process." We assume that users of the manual have a moderate knowledge of ArcGIS and Microsoft® Excel. The manual is written for a single user but in many cases, some steps will be carried out by a biologist designing the survey and some steps will be carried out by a quantitative assistant. Thus, the manual essentially may be passed back and forth between these users. The SLL process primarily has been used to survey birds, and we refer to birds as subjects of the counts. The process, however, could be used to count any objects.
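    The stratify-and-select step of such a process can be sketched as follows; the strata names, plot counts, and proportional-allocation rule here are illustrative assumptions, not taken from the SLL manual:

    ```python
    import random

    random.seed(1)

    # Hypothetical landscape: 10,000 plots, each assigned to one of three strata.
    # The stratum rule below is a stand-in for a GIS-derived classification.
    plots = list(range(10_000))
    stratum_of = {p: ("wetland" if p % 10 == 0 else
                      "forest" if p % 3 == 0 else
                      "shrub")
                  for p in plots}

    # Group plot IDs by stratum.
    strata = {}
    for p, s in stratum_of.items():
        strata.setdefault(s, []).append(p)

    # Proportional allocation of a 300-plot survey across strata,
    # then simple random selection within each stratum.
    total = len(plots)
    sample = {s: random.sample(members, round(300 * len(members) / total))
              for s, members in strata.items()}
    ```

    Other allocation rules (equal, Neyman) drop in at the same point by changing how the per-stratum sample size is computed.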

  2. Gaussian vs. Bessel light-sheets: performance analysis in live large sample imaging

    Science.gov (United States)

    Reidt, Sascha L.; Correia, Ricardo B. C.; Donnachie, Mark; Weijer, Cornelis J.; MacDonald, Michael P.

    2017-08-01

    Lightsheet fluorescence microscopy (LSFM) has rapidly progressed in the past decade from an emerging technology into an established methodology. This progress has largely been driven by its suitability to developmental biology, where it is able to give excellent spatial-temporal resolution over relatively large fields of view with good contrast and low phototoxicity. In many respects it is superseding confocal microscopy. However, it is no magic bullet and still struggles to image deeply in more highly scattering samples. Many solutions to this challenge have been presented, including, Airy and Bessel illumination, 2-photon operation and deconvolution techniques. In this work, we show a comparison between a simple but effective Gaussian beam illumination and Bessel illumination for imaging in chicken embryos. Whilst Bessel illumination is shown to be of benefit when a greater depth of field is required, it is not possible to see any benefits for imaging into the highly scattering tissue of the chick embryo.

  3. Large sample neutron activation analysis: establishment at CDTN/CNEN, Brazil

    Energy Technology Data Exchange (ETDEWEB)

    Menezes, Maria Angela de B.C., E-mail: menezes@cdtn.b [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil); Jacimovic, Radojko, E-mail: radojko.jacimovic@ijs.s [Jozef Stefan Institute, Ljubljana (Slovenia). Dept. of Environmental Sciences. Group for Radiochemistry and Radioecology

    2011-07-01

    In order to improve the application of the neutron activation technique at CDTN/CNEN, large sample instrumental neutron activation analysis (LS-INAA) is being established under the IAEA BRA 14798 and FAPEMIG APQ-01259-09 projects. This procedure usually requires special facilities for the activation as well as for the detection. However, the TRIGA Mark I IPR R1 reactor at CDTN/CNEN has not been adapted for such irradiation, and the usual gamma spectrometry has been carried out. To start the establishment of the LS-INAA, a 5 g sample of the IAEA/Soil 7 reference material was analyzed by the k{sub 0}-standardized method. This paper addresses the detector efficiency over the volume source, using KayWin v2.23 and ANGLE V3.0 software. (author)

  4. A study of diabetes mellitus within a large sample of Australian twins

    DEFF Research Database (Denmark)

    Condon, Julianne; Shaw, Joanne E; Luciano, Michelle

    2008-01-01

    Twin studies of diabetes mellitus can help elucidate genetic and environmental factors in etiology and can provide valuable biological samples for testing functional hypotheses, for example using expression and methylation studies of discordant pairs. We searched the volunteer Australian Twin Registry (19,387 pairs) for twins with diabetes using disease checklists from nine different surveys conducted from 1980-2000. After follow-up questionnaires to the twins and their doctors to confirm diagnoses, we eventually identified 46 pairs where one or both had type 1 diabetes (T1D), 113 pairs with type 2 diabetes (T2D), 41 female pairs with gestational diabetes (GD), 5 pairs with impaired glucose tolerance (IGT) and one pair with MODY. Heritabilities of T1D, T2D and GD were all high, but our samples did not have the power to detect effects of shared environment unless they were very large...

  5. Sampling of finite elements for sparse recovery in large scale 3D electrical impedance tomography

    International Nuclear Information System (INIS)

    Javaherian, Ashkan; Moeller, Knut; Soleimani, Manuchehr

    2015-01-01

    This study proposes a method to improve performance of sparse recovery inverse solvers in 3D electrical impedance tomography (3D EIT), especially when the volume under study contains small-sized inclusions, e.g. 3D imaging of breast tumours. Initially, a quadratic regularized inverse solver is applied in a fast manner with a stopping threshold much greater than the optimum. Based on assuming a fixed level of sparsity for the conductivity field, finite elements are then sampled via applying a compressive sensing (CS) algorithm to the rough blurred estimation previously made by the quadratic solver. Finally, a sparse inverse solver is applied solely to the sampled finite elements, with the solution to the CS as its initial guess. The results show the great potential of the proposed CS-based sparse recovery in improving accuracy of sparse solution to the large-size 3D EIT. (paper)

  6. Psychometric Evaluation of the Thought–Action Fusion Scale in a Large Clinical Sample

    Science.gov (United States)

    Meyer, Joseph F.; Brown, Timothy A.

    2015-01-01

    This study examined the psychometric properties of the 19-item Thought–Action Fusion (TAF) Scale, a measure of maladaptive cognitive intrusions, in a large clinical sample (N = 700). An exploratory factor analysis (n = 300) yielded two interpretable factors: TAF Moral (TAF-M) and TAF Likelihood (TAF-L). A confirmatory bifactor analysis was conducted on the second portion of the sample (n = 400) to account for possible sources of item covariance using a general TAF factor (subsuming TAF-M) alongside the TAF-L domain-specific factor. The bifactor model provided an acceptable fit to the sample data. Results indicated that global TAF was more strongly associated with a measure of obsessive-compulsiveness than measures of general worry and depression, and the TAF-L dimension was more strongly related to obsessive-compulsiveness than depression. Overall, results support the bifactor structure of the TAF in a clinical sample and its close relationship to its neighboring obsessive-compulsiveness construct. PMID:22315482

  7. Psychometric evaluation of the thought-action fusion scale in a large clinical sample.

    Science.gov (United States)

    Meyer, Joseph F; Brown, Timothy A

    2013-12-01

    This study examined the psychometric properties of the 19-item Thought-Action Fusion (TAF) Scale, a measure of maladaptive cognitive intrusions, in a large clinical sample (N = 700). An exploratory factor analysis (n = 300) yielded two interpretable factors: TAF Moral (TAF-M) and TAF Likelihood (TAF-L). A confirmatory bifactor analysis was conducted on the second portion of the sample (n = 400) to account for possible sources of item covariance using a general TAF factor (subsuming TAF-M) alongside the TAF-L domain-specific factor. The bifactor model provided an acceptable fit to the sample data. Results indicated that global TAF was more strongly associated with a measure of obsessive-compulsiveness than measures of general worry and depression, and the TAF-L dimension was more strongly related to obsessive-compulsiveness than depression. Overall, results support the bifactor structure of the TAF in a clinical sample and its close relationship to its neighboring obsessive-compulsiveness construct.

  8. A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids

    Energy Technology Data Exchange (ETDEWEB)

    Berres, Anne Sabine [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Adhinarayanan, Vignesh [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Turton, Terece [Univ. of Texas, Austin, TX (United States); Feng, Wu [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Rogers, David Honegger [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-05-12

    Large simulation data requires a lot of time and computational resources to compute, store, analyze, visualize, and run user studies on. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance energy consumption and the cognitive value of visualizations of the resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline using regular sampling as an example. The reasons for this choice are twofold: a simple example reduces unnecessary complexity, as we know what to expect from the results, and it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test we ran, and we conducted user studies on Amazon Mechanical Turk (AMT) for a range of different results we produced through sampling.
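    A toy version of resampling unstructured data onto a regular grid can be sketched as follows; the grid resolution, point count, and cell-averaging rule are illustrative assumptions, not the pipeline's actual implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Unstructured "simulation" data: scattered 2-D points carrying a scalar field.
    pts = rng.uniform(0, 1, size=(5000, 2))
    vals = np.sin(2 * np.pi * pts[:, 0]) * np.cos(2 * np.pi * pts[:, 1])

    # Regular sampling: average all points falling in each cell of a 32x32 grid.
    res = 32
    ix = np.minimum((pts * res).astype(int), res - 1)
    grid_sum = np.zeros((res, res))
    grid_cnt = np.zeros((res, res))
    np.add.at(grid_sum, (ix[:, 0], ix[:, 1]), vals)
    np.add.at(grid_cnt, (ix[:, 0], ix[:, 1]), 1)
    grid = np.where(grid_cnt > 0, grid_sum / np.maximum(grid_cnt, 1), 0.0)
    ```

    The resulting regular array is far cheaper to store, visualize, and compare in user studies than the original scattered points, at the cost of the binning error introduced here.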

  9. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    Science.gov (United States)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
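    The multiple time step idea can be illustrated with a generic r-RESPA velocity Verlet scheme on a harmonic test system. This is a plain sketch of the standard impulse algorithm, not the isokinetic Nosé-Hoover variant the paper builds on, and all force constants and step sizes are arbitrary:

    ```python
    # Split the force into a stiff "fast" part (small inner step) and a soft
    # "slow" part (applied only as half-kicks at the large outer step).
    k_fast, k_slow, m = 100.0, 1.0, 1.0
    f_fast = lambda x: -k_fast * x
    f_slow = lambda x: -k_slow * x

    def respa_step(x, v, dt, n_inner):
        v += 0.5 * dt * f_slow(x) / m          # outer half-kick (slow force)
        h = dt / n_inner
        for _ in range(n_inner):               # inner velocity Verlet, fast force only
            v += 0.5 * h * f_fast(x) / m
            x += h * v
            v += 0.5 * h * f_fast(x) / m
        v += 0.5 * dt * f_slow(x) / m          # outer half-kick (slow force)
        return x, v

    x, v = 1.0, 0.0
    e0 = 0.5 * m * v * v + 0.5 * (k_fast + k_slow) * x * x
    for _ in range(1000):
        x, v = respa_step(x, v, dt=0.01, n_inner=5)
    e = 0.5 * m * v * v + 0.5 * (k_fast + k_slow) * x * x
    ```

    The resonance problem the paper addresses appears when the outer step approaches half the fast-force period; the isokinetic constraints it describes are what permit the much larger outer steps.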

  10. A hard-to-read font reduces the framing effect in a large sample.

    Science.gov (United States)

    Korn, Christoph W; Ries, Juliane; Schalk, Lennart; Oganian, Yulia; Saalbach, Henrik

    2018-04-01

    How can apparent decision biases, such as the framing effect, be reduced? Intriguing findings within recent years indicate that foreign language settings reduce framing effects, which has been explained in terms of deeper cognitive processing. Because hard-to-read fonts have been argued to trigger deeper cognitive processing, so-called cognitive disfluency, we tested whether hard-to-read fonts reduce framing effects. We found no reliable evidence for an effect of hard-to-read fonts on four framing scenarios in a laboratory study (final N = 158) and an online study (N = 271). However, in a preregistered online study with a rather large sample (N = 732), a hard-to-read font reduced the framing effect in the classic "Asian disease" scenario (in a one-sided test). This suggests that hard-to-read fonts can modulate decision biases, albeit with rather small effect sizes. Overall, our findings stress the importance of large samples for the reliability and replicability of modulations of decision biases.

  11. In-situ high resolution particle sampling by large time sequence inertial spectrometry

    International Nuclear Information System (INIS)

    Prodi, V.; Belosi, F.

    1990-09-01

    In situ sampling is always preferred, when possible, because of the artifacts that can arise when the aerosol has to flow through long sampling lines. On the other hand, the amount of possible losses can be calculated with some confidence only when the size distribution can be measured with sufficient precision and the losses are not too large. This makes it desirable to sample directly in the vicinity of the aerosol source or containment. High temperature sampling devices with a detailed aerodynamic separation are extremely useful for this purpose. Several measurements are possible with the inertial spectrometer (INSPEC), but not with cascade impactors or cyclones. INSPEC - INertial SPECtrometer - has been conceived to measure the size distribution of aerosols by separating the particles while airborne according to their size and collecting them on a filter. It consists of a channel of rectangular cross-section with a 90 degree bend. Clean air is drawn through the channel, with a thin aerosol sheath injected close to the inner wall. Due to the bend, the particles are separated according to their size, leaving the original streamline by a distance which is a function of particle inertia and resistance, i.e. of aerodynamic diameter. The filter collects all the particles of the same aerodynamic size at the same distance from the inlet, in a continuous distribution. INSPEC particle separation at high temperature (up to 800 °C) has been tested with zirconia particles as calibration aerosols. The feasibility study addressed resolution and time-sequence sampling capabilities at high temperature (700 °C).

  12. Neurocognitive impairment in a large sample of homeless adults with mental illness.

    Science.gov (United States)

    Stergiopoulos, V; Cusi, A; Bekele, T; Skosireva, A; Latimer, E; Schütz, C; Fernando, I; Rourke, S B

    2015-04-01

    This study examines neurocognitive functioning in a large, well-characterized sample of homeless adults with mental illness and assesses demographic and clinical factors associated with neurocognitive performance. A total of 1500 homeless adults with mental illness enrolled in the At Home Chez Soi study completed neuropsychological measures assessing speed of information processing, memory, and executive functioning. Sociodemographic and clinical data were also collected. Linear regression analyses were conducted to examine factors associated with neurocognitive performance. Approximately half of our sample met criteria for psychosis, major depressive disorder, and alcohol or substance use disorder, and nearly half had experienced severe traumatic brain injury. Overall, 72% of participants demonstrated cognitive impairment, including deficits in processing speed (48%), verbal learning (71%) and recall (67%), and executive functioning (38%). The overall statistical model explained 19.8% of the variance in the neurocognitive summary score, with reduced neurocognitive performance associated with older age, lower education, first language other than English or French, Black or Other ethnicity, and the presence of psychosis. Homeless adults with mental illness experience impairment in multiple neuropsychological domains. Much of the variance in our sample's cognitive performance remains unexplained, highlighting the need for further research in the mechanisms underlying cognitive impairment in this population. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  13. Sampling data summary for the ninth run of the Large Slurry Fed Melter

    International Nuclear Information System (INIS)

    Sabatino, D.M.

    1983-01-01

    The ninth experimental run of the Large Slurry Fed Melter (LSFM) was completed June 27, 1983, after 63 days of continuous operation. During the run, the various melter and off-gas streams were sampled and analyzed to determine melter material balances and to characterize off-gas emissions. Sampling methods and preliminary results were reported earlier. The emphasis was on the chemical analyses of the off-gas entrainment, deposits, and scrubber liquid. The significant sampling results from the run are summarized as follows. Flushing the Frit 165 with Frit 131 without bubbler agitation required 3 to 4.5 melter volumes. The off-gas cesium concentration during feeding was on the order of 36 to 56 μg Cs/scf. The cesium concentration in the melter plenum (based on air in-leakage only) was on the order of 110 to 210 μg Cs/scf. Using <1 micron as the cut point for semivolatile material, 60% of the chloride, 35% of the sodium, and less than 5% of the manganese and iron in the entrainment are present as semivolatiles. A material balance on the scrubber tank solids shows good agreement with entrainment data. An overall cesium balance using LSFM-9 data and the DWPF production rate indicates an emission of 0.11 mCi/yr of cesium from the DWPF off-gas. This is a factor of 27 less than the maximum allowable 3 mCi/yr.

  14. Prevalence and correlates of problematic smartphone use in a large random sample of Chinese undergraduates.

    Science.gov (United States)

    Long, Jiang; Liu, Tie-Qiao; Liao, Yan-Hui; Qi, Chang; He, Hao-Yu; Chen, Shu-Bao; Billieux, Joël

    2016-11-17

    Smartphones are becoming a daily necessity for most undergraduates in Mainland China. Because the present scenario of problematic smartphone use (PSU) is largely unexplored, in the current study we aimed to estimate the prevalence of PSU and to screen suitable predictors for PSU among Chinese undergraduates in the framework of the stress-coping theory. A sample of 1062 undergraduate smartphone users was recruited by means of the stratified cluster random sampling strategy between April and May 2015. The Problematic Cellular Phone Use Questionnaire was used to identify PSU. We evaluated five candidate risk factors for PSU by using logistic regression analysis while controlling for demographic characteristics and specific features of smartphone use. The prevalence of PSU among Chinese undergraduates was estimated to be 21.3%. The risk factors for PSU were majoring in the humanities, high monthly income from the family (≥1500 RMB), serious emotional symptoms, high perceived stress, and perfectionism-related factors (high doubts about actions, high parental expectations). PSU among undergraduates appears to be ubiquitous and thus constitutes a public health issue in Mainland China. Although further longitudinal studies are required to test whether PSU is a transient phenomenon or a chronic and progressive condition, our study successfully identified socio-demographic and psychological risk factors for PSU. These results, obtained from a random and thus representative sample of undergraduates, open up new avenues in terms of prevention and regulation policies.

  15. Quantum circuit implementation of cyclic mutually unbiased bases

    Energy Technology Data Exchange (ETDEWEB)

    Seyfarth, Ulrich; Dittmann, Niklas; Alber, Gernot [Institut fuer Angewandte Physik, Technische Universitaet Darmstadt, 64289 Darmstadt (Germany)

    2013-07-01

    Complete sets of mutually unbiased bases (MUBs) play an important role in the areas of quantum state tomography and quantum cryptography. Sets which can be generated cyclically may eliminate certain side-channel attacks. To profit from the advantages of these MUBs, we propose a method for deriving a quantum circuit that implements the generator of such a set in an experimental setup. For some dimensions this circuit is minimal. The presented method is in principle applicable to a larger set of operations and generalizes recently published results.
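The defining property behind such sets is easy to check numerically: two orthonormal bases are mutually unbiased when every cross-basis overlap satisfies |⟨a|b⟩|² = 1/d. A small numpy sketch for the simplest case d = 2, using the three Pauli eigenbases (an illustration of the MUB property only, not the circuit construction of the paper):

```python
import numpy as np

# The three Pauli eigenbases form a complete set of d + 1 = 3
# mutually unbiased bases in dimension d = 2.
d = 2
z_basis = np.eye(2, dtype=complex)                                 # columns: |0>, |1>
x_basis = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # X eigenvectors
y_basis = np.array([[1, 1], [1j, -1j]], dtype=complex) / np.sqrt(2)  # Y eigenvectors
bases = [z_basis, x_basis, y_basis]

# Unbiasedness: |<a|b>|^2 = 1/d for any vectors a, b from different bases.
for i in range(len(bases)):
    for j in range(i + 1, len(bases)):
        overlaps = np.abs(bases[i].conj().T @ bases[j]) ** 2
        assert np.allclose(overlaps, 1.0 / d)
print("all pairs mutually unbiased")
```

The same check scales to any dimension and is a convenient sanity test for candidate constructions.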

  16. Characteristic properties of Fibonacci-based mutually unbiased bases

    Energy Technology Data Exchange (ETDEWEB)

    Seyfarth, Ulrich; Alber, Gernot [Institut fuer Angewandte Physik, Technische Universitaet Darmstadt, 64289 Darmstadt (Germany); Ranade, Kedar [Institut fuer Quantenphysik, Universitaet Ulm, Albert-Einstein-Allee 11, 89069 Ulm (Germany)

    2012-07-01

    Complete sets of mutually unbiased bases (MUBs) offer interesting applications in quantum information processing, ranging from quantum cryptography to quantum state tomography. Different construction schemes provide different perspectives on these bases, which are typically also deeply connected to various mathematical research areas. In this talk we discuss characteristic properties resulting from a recently established connection between construction methods for cyclic MUBs and Fibonacci polynomials. Remarkably, this connection leads to construction methods that do not rely on mathematical properties of finite fields.

  17. On the mathematical foundations of mutually unbiased bases

    Science.gov (United States)

    Thas, Koen

    2018-02-01

    In order to describe a setting to handle Zauner's conjecture on mutually unbiased bases (MUBs) (stating that in C^d, a set of MUBs of the theoretical maximal size d + 1 exists only if d is a prime power), we pose some fundamental questions which naturally arise. Some of these questions have important consequences for the construction theory of (new) sets of maximal MUBs. Partial answers will be provided in particular cases; more specifically, we will analyze MUBs with associated operator groups that have nilpotence class 2, and consider MUBs of height 1. We will also confirm Zauner's conjecture for MUBs with associated finite nilpotent operator groups.

  18. Characterisation of large zooplankton sampled with two different gears during midwinter in Rijpfjorden, Svalbard

    Directory of Open Access Journals (Sweden)

    Błachowiak-Samołyk Katarzyna

    2017-12-01

    During a midwinter cruise north of 80°N to Rijpfjorden, Svalbard, the composition and vertical distribution of the zooplankton community were studied using two different samplers: (1) a vertically hauled multiple plankton sampler (MPS; mouth area 0.25 m², mesh size 200 μm) and (2) a horizontally towed Methot Isaacs Kidd trawl (MIK; mouth area 3.14 m², mesh size 1500 μm). Our results revealed substantially higher species diversity (49 taxa) than if a single sampler (MPS: 38 taxa, MIK: 28 taxa) had been used. The youngest stage present (CIII) of Calanus spp. (including C. finmarchicus and C. glacialis) was sampled exclusively by the MPS, and the frequency of CIV copepodites in MPS samples was double that in MIK samples. In contrast, catches of the CV-CVI copepodites of Calanus spp. were substantially higher in the MIK samples (3-fold and 5-fold higher for adult males and females, respectively). The MIK sampling clearly showed that the highest abundances of all three Thysanoessa spp. were in the upper layers, although there was a tendency for the larger-sized euphausiids to occur deeper. Consistent patterns in the vertical distributions of the large zooplankters (e.g. ctenophores, euphausiids) collected by the MPS and MIK samplers provided more complete data on their abundances and sizes than either net alone. Possible mechanisms contributing to the observed patterns of distribution, e.g. high abundances of both Calanus spp. and their predators (ctenophores and chaetognaths) in the upper water layers during midwinter, are discussed.

  19. The New Peabody Picture Vocabulary Test-III: An Illusion of Unbiased Assessment?

    Science.gov (United States)

    Stockman, Ida J

    2000-10-01

    This article examines whether changes in the ethnic minority composition of the standardization sample for the latest edition of the Peabody Picture Vocabulary Test (PPVT-III, Dunn & Dunn, 1997) can be used as the sole explanation for children's better test scores when compared to an earlier edition, the Peabody Picture Vocabulary Test-Revised (PPVT-R, Dunn & Dunn, 1981). Results from a comparative analysis of these two test editions suggest that other factors may explain improved performances. Among these factors are the number of words and age levels sampled, the types of words and pictures used, and characteristics of the standardization sample other than its ethnic minority composition. This analysis also raises questions regarding the usefulness of converting scores from one edition to the other and the type of criteria that could be used to evaluate whether the PPVT-III is an unbiased test of vocabulary for children from diverse cultural and linguistic backgrounds.

  20. Ultrasensitive multiplex optical quantification of bacteria in large samples of biofluids

    Science.gov (United States)

    Pazos-Perez, Nicolas; Pazos, Elena; Catala, Carme; Mir-Simon, Bernat; Gómez-de Pedro, Sara; Sagales, Juan; Villanueva, Carlos; Vila, Jordi; Soriano, Alex; García de Abajo, F. Javier; Alvarez-Puebla, Ramon A.

    2016-01-01

    Efficient treatments in bacterial infections require the fast and accurate recognition of pathogens, with concentrations as low as one per milliliter in the case of septicemia. Detecting and quantifying bacteria in such low concentrations is challenging and typically demands cultures of large samples of blood (~1 milliliter) extending over 24–72 hours. This delay seriously compromises the health of patients. Here we demonstrate a fast microorganism optical detection system for the exhaustive identification and quantification of pathogens in volumes of biofluids with clinical relevance (~1 milliliter) in minutes. We drive each type of bacteria to accumulate antibody-functionalized, SERS-labelled silver nanoparticles. Particle aggregation on the bacteria membranes renders dense arrays of inter-particle gaps in which the Raman signal is exponentially amplified by several orders of magnitude relative to the dispersed particles. This enables a multiplex identification of the microorganisms through their molecule-specific spectral fingerprints. PMID:27364357

  1. A Survey for Spectroscopic Binaries in a Large Sample of G Dwarfs

    Science.gov (United States)

    Udry, S.; Mayor, M.; Latham, D. W.; Stefanik, R. P.; Torres, G.; Mazeh, T.; Goldberg, D.; Andersen, J.; Nordstrom, B.

    For more than 5 years now, the radial velocities for a large sample of G dwarfs (3,347 stars) have been monitored in order to obtain an unequaled set of orbital parameters for solar-type stars (~400 orbits, up to now). This survey provides a considerable improvement on the classical systematic study by Duquennoy and Mayor (1991; DM91). The observational part of the survey has been carried out in the context of a collaboration between the Geneva Observatory on the two CORAVEL spectrometers for the southern sky and CfA at Oak Ridge and Whipple Observatories for the northern sky. As a first glance at these new results, we will address in this contribution a special aspect of the orbital eccentricity distribution, namely the disappearance of the void observed in DM91 for quasi-circular orbits with periods larger than 10 days.

  2. Automated, feature-based image alignment for high-resolution imaging mass spectrometry of large biological samples

    NARCIS (Netherlands)

    Broersen, A.; Liere, van R.; Altelaar, A.F.M.; Heeren, R.M.A.; McDonnell, L.A.

    2008-01-01

    High-resolution imaging mass spectrometry of large biological samples is the goal of several research groups. In mosaic imaging, the most common method, the large sample is divided into a mosaic of small areas that are then analyzed with high resolution. Here we present an automated alignment

  3. Toward Rapid Unattended X-ray Tomography of Large Planar Samples at 50-nm Resolution

    International Nuclear Information System (INIS)

    Rudati, J.; Tkachuk, A.; Gelb, J.; Hsu, G.; Feng, Y.; Pastrick, R.; Lyon, A.; Trapp, D.; Beetz, T.; Chen, S.; Hornberger, B.; Seshadri, S.; Kamath, S.; Zeng, X.; Feser, M.; Yun, W.; Pianetta, P.; Andrews, J.; Brennan, S.; Chu, Y. S.

    2009-01-01

    X-ray tomography at sub-50 nm resolution of small areas (∼15 μm × 15 μm) is routinely performed with both laboratory and synchrotron sources. Optics and detectors for laboratory systems have been optimized to approach the theoretical efficiency limit. Limited by the relatively low brightness of laboratory X-ray sources, exposure times for 3-D data sets at 50 nm resolution are still many hours, up to a full day. For bright synchrotron sources, however, the use of these optimized imaging systems results in extremely short exposure times, approaching live-camera speeds at the Advanced Photon Source at Argonne National Laboratory near Chicago in the US. These speeds make it possible to acquire a full tomographic dataset at 50 nm resolution in less than a minute of true X-ray exposure time. However, limits in the control and positioning system lead to large overhead that results in typical exposure times of ∼15 min currently. We present our work on the reduction and elimination of system overhead and toward complete automation of the data acquisition process. The enhancements underway primarily boost the scanning rate, sample positioning speed, and illumination homogeneity to the performance levels necessary for unattended tomography of large areas (many mm² in size). We present first results on this ongoing project.

  4. Pattern transfer on large samples using a sub-aperture reactive ion beam

    Energy Technology Data Exchange (ETDEWEB)

    Miessler, Andre; Mill, Agnes; Gerlach, Juergen W.; Arnold, Thomas [Leibniz-Institut fuer Oberflaechenmodifizierung (IOM), Permoserstrasse 15, D-04318 Leipzig (Germany)

    2011-07-01

    In comparison to sole Ar ion beam sputtering, Reactive Ion Beam Etching (RIBE) has the main advantage of increased selectivity between different kinds of materials, owing to chemical contributions during material removal. RIBE is therefore an excellent candidate for pattern transfer applications. The goal of the present study is to apply a sub-aperture reactive ion beam to pattern transfer on large fused silica samples. In this context, the etching behavior in the ion beam periphery plays a decisive role. Using CF₄ as the reactive gas, XPS measurements of the modified surface expose impurities such as Ni, Fe and Cr, which belong to chemically eroded material of the plasma pot, as well as an accumulation of carbon (up to 40 atomic percent) in the beam periphery. Substituting NF₃ for CF₄ as the reactive gas brings several benefits: more stable ion beam conditions, a reduction of the beam size down to a diameter of 5 mm, and a reduced amount of Ni, Fe and Cr contamination. However, the formation of a silicon nitride layer hampers the chemical contribution to the etching process. These side effects influence the transfer of trench structures on quartz by changing the selectivity due to the altered chemical reaction of the modified resist layer. With this in mind, we investigate pattern transfer on large fused silica plates using NF₃ sub-aperture RIBE.

  5. Detecting superior face recognition skills in a large sample of young British adults

    Directory of Open Access Journals (Sweden)

    Anna Katarzyna Bobak

    2016-09-01

    The Cambridge Face Memory Test Long Form (CFMT+ and Cambridge Face Perception Test (CFPT are typically used to assess the face processing ability of individuals who believe they have superior face recognition skills. Previous large-scale studies have presented norms for the CFPT but not the CFMT+. However, previous research has also highlighted the necessity for establishing country-specific norms for these tests, indicating that norming data is required for both tests using young British adults. The current study addressed this issue in 254 British participants. In addition to providing the first norm for performance on the CFMT+ in any large sample, we also report the first UK specific cut-off for superior face recognition on the CFPT. Further analyses identified a small advantage for females on both tests, and only small associations between objective face recognition skills and self-report measures. A secondary aim of the study was to examine the relationship between trait or social anxiety and face processing ability, and no associations were noted. The implications of these findings for the classification of super-recognisers are discussed.

  6. Biased and unbiased perceptual decision-making on vocal emotions.

    Science.gov (United States)

    Dricu, Mihai; Ceravolo, Leonardo; Grandjean, Didier; Frühholz, Sascha

    2017-11-24

    Perceptual decision-making on emotions involves gathering sensory information about the affective state of another person and forming a decision on the likelihood of a particular state. These perceptual decisions can be of varying complexity as determined by different contexts. We used functional magnetic resonance imaging and a region of interest approach to investigate the brain activation and functional connectivity behind two forms of perceptual decision-making. More complex unbiased decisions on affective voices recruited an extended bilateral network consisting of the posterior inferior frontal cortex, the orbitofrontal cortex, the amygdala, and voice-sensitive areas in the auditory cortex. Less complex biased decisions on affective voices distinctly recruited the right mid inferior frontal cortex, pointing to a functional distinction in this region following decisional requirements. Furthermore, task-induced neural connectivity revealed stronger connections between these frontal, auditory, and limbic regions during unbiased relative to biased decision-making on affective voices. Together, the data shows that different types of perceptual decision-making on auditory emotions have distinct patterns of activations and functional coupling that follow the decisional strategies and cognitive mechanisms involved during these perceptual decisions.

  7. Unbiased classification of spatial strategies in the Barnes maze.

    Science.gov (United States)

    Illouz, Tomer; Madar, Ravit; Clague, Charlotte; Griffioen, Kathleen J; Louzoun, Yoram; Okun, Eitan

    2016-11-01

    Spatial learning is one of the most widely studied cognitive domains in neuroscience. The Morris water maze and the Barnes maze are the most commonly used techniques to assess spatial learning and memory in rodents. Despite the fact that these tasks are well-validated paradigms for testing spatial learning abilities, manual categorization of performance into behavioral strategies is subject to individual interpretation, and thus to bias. We have previously described an unbiased machine-learning algorithm to classify spatial strategies in the Morris water maze. Here, we offer a support vector machine-based, automated, Barnes-maze unbiased strategy (BUNS) classification algorithm, as well as a cognitive score scale that can be used for memory acquisition, reversal training and probe trials. The BUNS algorithm can greatly benefit Barnes maze users as it provides a standardized method of strategy classification and cognitive scoring scale, which cannot be derived from typical Barnes maze data analysis. Freely available on the web at http://okunlab.wix.com/okunlab as a MATLAB application. Contact: eitan.okun@biu.ac.il. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
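As a rough illustration of the support-vector-machine idea underlying such classifiers, the sketch below trains scikit-learn's SVC on synthetic per-trial path features. The feature names, labels, and values are invented for illustration; this is not the authors' BUNS feature set or MATLAB code:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical per-trial path features (illustrative, not the BUNS features):
# [fraction of time near target, path length / ideal length, mean distance to target hole]
rng = np.random.default_rng(0)
direct_trials = rng.normal([0.8, 1.2, 0.2], 0.05, size=(40, 3))  # "direct" strategy
random_trials = rng.normal([0.3, 3.0, 0.5], 0.05, size=(40, 3))  # "random" strategy
X = np.vstack([direct_trials, random_trials])
y = np.array([0] * 40 + [1] * 40)  # 0 = direct, 1 = random

clf = SVC(kernel="rbf").fit(X, y)          # train the strategy classifier
new_trial = np.array([[0.75, 1.3, 0.25]])  # features of an unseen trial
print(clf.predict(new_trial))              # label 0: classified as "direct"
```

In practice each maze trial's trajectory would be reduced to such a feature vector, and the trained model assigns every trial a strategy label without manual judgment.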

  8. Characteristic Performance Evaluation of a new SAGe Well Detector for Small and Large Sample Geometries

    International Nuclear Information System (INIS)

    Adekola, A.S.; Colaresi, J.; Douwen, J.; Jaederstroem, H.; Mueller, W.F.; Yocum, K.M.; Carmichael, K.

    2015-01-01

    concentrations compared to Traditional Well detectors. The SAGe Well detectors are compatible with Marinelli beakers and compete very well with semi-planar and coaxial detectors for large samples in many applications. (authors)

  9. Characteristic Performance Evaluation of a new SAGe Well Detector for Small and Large Sample Geometries

    Energy Technology Data Exchange (ETDEWEB)

    Adekola, A.S.; Colaresi, J.; Douwen, J.; Jaederstroem, H.; Mueller, W.F.; Yocum, K.M.; Carmichael, K. [Canberra Industries Inc., 800 Research Parkway, Meriden, CT 06450 (United States)

    2015-07-01

    concentrations compared to Traditional Well detectors. The SAGe Well detectors are compatible with Marinelli beakers and compete very well with semi-planar and coaxial detectors for large samples in many applications. (authors)

  10. Sampling based uncertainty analysis of 10% hot leg break LOCA in large scale test facility

    International Nuclear Information System (INIS)

    Sengupta, Samiran; Kraina, V.; Dubey, S. K.; Rao, R. S.; Gupta, S. K.

    2010-01-01

    Sampling based uncertainty analysis was carried out to quantify the uncertainty in predictions of the best estimate code RELAP5/MOD3.2 for a thermal hydraulic test (10% hot leg break LOCA) performed in the Large Scale Test Facility (LSTF) as part of an IAEA coordinated research project. The nodalisation of the test facility was qualified at both the steady state and transient level by systematically applying the procedures of the uncertainty methodology based on accuracy extrapolation (UMAE); uncertainty analysis was carried out using the Latin hypercube sampling (LHS) method to evaluate the uncertainty for ten input parameters. Sixteen output parameters were selected for uncertainty evaluation, and the uncertainty band between the 5th and 95th percentiles of the output parameters was evaluated. It was observed that the uncertainty band for the primary pressure during two-phase blowdown is larger than that of the remaining period. Similarly, a larger uncertainty band is observed for the accumulator injection flow during the reflood phase. Importance analysis was also carried out and standard rank regression coefficients were computed to quantify the effect of each individual input parameter on the output parameters. It was observed that the break discharge coefficient is the most important uncertain parameter for the prediction of all the primary side parameters and that the steam generator (SG) relief pressure setting is the most important parameter in predicting the SG secondary pressure.
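The Latin hypercube scheme used here stratifies each input parameter's probability range into N equal-probability intervals, draws exactly one sample per interval, and shuffles the strata independently across parameters so the joint design covers the space evenly. A minimal numpy sketch (the parameter name and range below are hypothetical, not actual RELAP5 inputs):

```python
import numpy as np

def latin_hypercube(n_samples, n_params, rng):
    """Draw an n_samples x n_params Latin hypercube design on [0, 1)."""
    # One point per equal-probability stratum, jittered within the stratum.
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_params))) / n_samples
    # Shuffle the strata independently for each parameter (column).
    for j in range(n_params):
        rng.shuffle(u[:, j])
    return u

rng = np.random.default_rng(42)
design = latin_hypercube(100, 2, rng)
# Scale column 0 to a hypothetical break discharge coefficient range [0.6, 1.0].
cd = 0.6 + 0.4 * design[:, 0]
# LHS guarantee: each of the 100 strata of each parameter holds exactly one sample.
counts = np.bincount((design[:, 0] * 100).astype(int), minlength=100)
assert (counts == 1).all()
```

Each row of the design is one code run's input vector; the 5th/95th percentile bands quoted above are then read off the distribution of outputs across the runs.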

  11. Prevalence of learned grapheme-color pairings in a large online sample of synesthetes.

    Directory of Open Access Journals (Sweden)

    Nathan Witthoft

    In this paper we estimate the minimum prevalence of grapheme-color synesthetes with letter-color matches learned from an external stimulus, by analyzing a large sample of English-speaking grapheme-color synesthetes. We find that at least 6% of the total sample (400/6588 participants) learned many of their matches from a widely available colored letter toy. Among those born in the decade after the toy began to be manufactured, the proportion of synesthetes with learned letter-color pairings approaches 15% for some 5-year periods. Among those born 5 years or more before it was manufactured, none have colors learned from the toy. Analysis of the letter-color matching data suggests the only difference between synesthetes with matches to the toy and those without is exposure to the stimulus. These data indicate that learning of letter-color pairings from external contingencies can occur in a substantial fraction of synesthetes, and are consistent with the hypothesis that grapheme-color synesthesia is a kind of conditioned mental imagery.

  12. Large magnitude gridded ionization chamber for impurity identification in alpha emitting radioactive samples

    International Nuclear Information System (INIS)

    Santos, R.N. dos.

    1992-01-01

    This paper describes a large gridded ionization chamber with high resolution used in the identification of α-emitting radioactive samples. The chamber and the electrodes are described in terms of their geometry and dimensions, and the best results are listed accordingly. Several α-emitting radioactive samples were measured with a gas mixture of 90% argon plus 10% methane. We obtained α energy spectra with a resolution of about 22.14 keV, in agreement with the best results available in the literature. The α energy spectrum of ²³³U was obtained using the ionization chamber described in this work; the measured values matched the calibration curve of the chamber well. Many additional measurements using different kinds of detectors were successfully performed to confirm the experimental results, leading to the identification of some members of the ²³³U decay series. These results show the possibility of using this chamber for measurements of low-activity α contamination. (author)

  13. Large contribution of human papillomavirus in vaginal neoplastic lesions: a worldwide study in 597 samples.

    Science.gov (United States)

    Alemany, L; Saunier, M; Tinoco, L; Quirós, B; Alvarado-Cabrero, I; Alejo, M; Joura, E A; Maldonado, P; Klaustermeier, J; Salmerón, J; Bergeron, C; Petry, K U; Guimerà, N; Clavero, O; Murillo, R; Clavel, C; Wain, V; Geraets, D T; Jach, R; Cross, P; Carrilho, C; Molina, C; Shin, H R; Mandys, V; Nowakowski, A M; Vidal, A; Lombardi, L; Kitchener, H; Sica, A R; Magaña-León, C; Pawlita, M; Quint, W; Bravo, I G; Muñoz, N; de Sanjosé, S; Bosch, F X

    2014-11-01

    This work describes the human papillomavirus (HPV) prevalence and the HPV type distribution in a large series of vaginal intraepithelial neoplasia (VAIN) grades 2/3 and vaginal cancer worldwide. We analysed 189 VAIN 2/3 and 408 invasive vaginal cancer cases collected from 31 countries from 1986 to 2011. After histopathological evaluation of sectioned formalin-fixed paraffin-embedded samples, HPV DNA detection and typing was performed using the SPF-10/DNA enzyme immunoassay (DEIA)/LiPA25 system (version 1). A subset of 146 vaginal cancers was tested for p16(INK4a) expression, a cellular surrogate marker for HPV transformation. Prevalence ratios were estimated using multivariate Poisson regression with robust variance. HPV DNA was detected in 74% (95% confidence interval (CI): 70-78%) of invasive cancers and in 96% (95% CI: 92-98%) of VAIN 2/3. Among cancers, the highest detection rates were observed in the warty-basaloid subtype of squamous cell carcinoma and at younger ages. Concerning the type-specific distribution, HPV16 was the most frequently detected type in both precancerous and cancerous lesions (59%). p16(INK4a) overexpression was found in 87% of HPV DNA positive vaginal cancer cases. HPV was identified in a large proportion of invasive vaginal cancers and in almost all VAIN 2/3. HPV16 was the most common type detected. A large impact on reducing the burden of vaginal neoplastic lesions is expected among vaccinated cohorts. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Investigating sex differences in psychological predictors of snack intake among a large representative sample.

    Science.gov (United States)

    Adriaanse, Marieke A; Evers, Catharine; Verhoeven, Aukje A C; de Ridder, Denise T D

    2016-03-01

    It is often assumed that there are substantial sex differences in eating behaviour (e.g. women are more likely to be dieters or emotional eaters than men). The present study investigates this assumption in a large representative community sample while incorporating a comprehensive set of psychological eating-related variables. A community sample was employed to: (i) determine sex differences in (un)healthy snack consumption and psychological eating-related variables (e.g. emotional eating, intention to eat healthily); (ii) examine whether sex predicts energy intake from (un)healthy snacks over and above psychological variables; and (iii) investigate the relationship between psychological variables and snack intake for men and women separately. Snack consumption was assessed with a 7d snack diary; the psychological eating-related variables with questionnaires. Participants were members of an Internet survey panel that is based on a true probability sample of households in the Netherlands. Men and women (n 1292; 45 % male), with a mean age of 51·23 (sd 16·78) years and a mean BMI of 25·62 (sd 4·75) kg/m2. Results revealed that women consumed more healthy and less unhealthy snacks than men and they scored higher than men on emotional and restrained eating. Women also more often reported appearance and health-related concerns about their eating behaviour, but men and women did not differ with regard to external eating or their intentions to eat more healthily. The relationships between psychological eating-related variables and snack intake were similar for men and women, indicating that snack intake is predicted by the same variables for men and women. It is concluded that some small sex differences in psychological eating-related variables exist, but based on the present data there is no need for interventions aimed at promoting healthy eating to target different predictors according to sex.

  15. Assessing the Validity of Single-item Life Satisfaction Measures: Results from Three Large Samples

    Science.gov (United States)

    Cheung, Felix; Lucas, Richard E.

    2014-01-01

    Purpose The present paper assessed the validity of single-item life satisfaction measures by comparing single-item measures to the Satisfaction with Life Scale (SWLS) - a more psychometrically established measure. Methods Two large samples from Washington (N=13,064) and Oregon (N=2,277) recruited by the Behavioral Risk Factor Surveillance System (BRFSS) and a representative German sample (N=1,312) recruited by the Germany Socio-Economic Panel (GSOEP) were included in the present analyses. Single-item life satisfaction measures and the SWLS were correlated with theoretically relevant variables, such as demographics, subjective health, domain satisfaction, and affect. The correlations between the two life satisfaction measures and these variables were examined to assess the construct validity of single-item life satisfaction measures. Results Consistent across three samples, single-item life satisfaction measures demonstrated a substantial degree of criterion validity with the SWLS (zero-order r = 0.62 – 0.64; disattenuated r = 0.78 – 0.80). Patterns of statistical significance for correlations with theoretically relevant variables were the same across single-item measures and the SWLS. Single-item measures did not produce systematically different correlations compared to the SWLS (average difference = 0.001 – 0.005). The average absolute difference in the magnitudes of the correlations produced by single-item measures and the SWLS was very small (average absolute difference = 0.015–0.042). Conclusions Single-item life satisfaction measures performed very similarly compared to the multiple-item SWLS. Social scientists would get virtually identical answers to substantive questions regardless of which measure they use. PMID:24890827

  16. Assessing the validity of single-item life satisfaction measures: results from three large samples.

    Science.gov (United States)

    Cheung, Felix; Lucas, Richard E

    2014-12-01

    The present paper assessed the validity of single-item life satisfaction measures by comparing single-item measures to the Satisfaction with Life Scale (SWLS)-a more psychometrically established measure. Two large samples from Washington (N = 13,064) and Oregon (N = 2,277) recruited by the Behavioral Risk Factor Surveillance System and a representative German sample (N = 1,312) recruited by the Germany Socio-Economic Panel were included in the present analyses. Single-item life satisfaction measures and the SWLS were correlated with theoretically relevant variables, such as demographics, subjective health, domain satisfaction, and affect. The correlations between the two life satisfaction measures and these variables were examined to assess the construct validity of single-item life satisfaction measures. Consistent across three samples, single-item life satisfaction measures demonstrated a substantial degree of criterion validity with the SWLS (zero-order r = 0.62-0.64; disattenuated r = 0.78-0.80). Patterns of statistical significance for correlations with theoretically relevant variables were the same across single-item measures and the SWLS. Single-item measures did not produce systematically different correlations compared to the SWLS (average difference = 0.001-0.005). The average absolute difference in the magnitudes of the correlations produced by single-item measures and the SWLS was very small (average absolute difference = 0.015-0.042). Single-item life satisfaction measures performed very similarly compared to the multiple-item SWLS. Social scientists would get virtually identical answers to substantive questions regardless of which measure they use.
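The disattenuated correlations quoted in these two records come from the standard Spearman correction for attenuation, r_true = r_obs / sqrt(rel_x * rel_y), which divides the observed correlation by the square root of the product of the two measures' reliabilities. A quick sketch of the arithmetic (the reliability values below are assumed for illustration and are not reported in the abstracts):

```python
import math

def disattenuate(r_observed, rel_x, rel_y):
    """Spearman correction for attenuation: r_true = r_obs / sqrt(rel_x * rel_y)."""
    return r_observed / math.sqrt(rel_x * rel_y)

# Illustrative values only: observed r from the reported range, reliabilities assumed.
r_obs = 0.63        # zero-order correlation between single item and SWLS
rel_single = 0.70   # assumed reliability of the single-item measure
rel_swls = 0.87     # assumed reliability of the SWLS
print(round(disattenuate(r_obs, rel_single, rel_swls), 2))  # -> 0.81
```

With reliabilities in this neighborhood, an observed r of about 0.62-0.64 disattenuates to roughly the 0.78-0.80 range reported above.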

  17. Economic and Humanistic Burden of Osteoarthritis: A Systematic Review of Large Sample Studies.

    Science.gov (United States)

    Xie, Feng; Kovic, Bruno; Jin, Xuejing; He, Xiaoning; Wang, Mengxiao; Silvestre, Camila

    2016-11-01

    Osteoarthritis (OA) consumes a significant amount of healthcare resources, and impairs the health-related quality of life (HRQoL) of patients. Previous reviews have consistently found substantial variations in the costs of OA across studies and countries. The comparability between studies was poor and limited the detection of the true differences between these studies. To review large sample studies on measuring the economic and/or humanistic burden of OA published since May 2006. We searched MEDLINE and EMBASE databases using comprehensive search strategies to identify studies reporting economic burden and HRQoL of OA. We included large sample studies if they had a sample size ≥1000 and measured the cost and/or HRQoL of OA. Reviewers worked independently and in duplicate, performing a cross-check between groups to verify agreement. Within- and between-group consolidation was performed to resolve discrepancies, with outstanding discrepancies being resolved by an arbitrator. The Kappa statistic was reported to assess the agreement between the reviewers. All costs were adjusted in their original currency to year 2015 using published inflation rates for the country where the study was conducted, and then converted to 2015 US dollars. A total of 651 articles were screened by title and abstract, 94 were reviewed in full text, and 28 were included in the final review. The Kappa value was 0.794. Twenty studies reported direct costs and nine reported indirect costs. The total annual average direct costs varied from US$1442 to US$21,335, both in USA. The annual average indirect costs ranged from US$238 to US$29,935. Twelve studies measured HRQoL using various instruments. The Short Form 12 version 2 scores ranged from 35.0 to 51.3 for the physical component, and from 43.5 to 55.0 for the mental component. Health utilities varied from 0.30 for severe OA to 0.77 for mild OA. Per-patient OA costs are considerable and a patient's quality of life remains poor. Variations in

  18. Association between time perspective and organic food consumption in a large sample of adults.

    Science.gov (United States)

    Bénard, Marc; Baudry, Julia; Méjean, Caroline; Lairon, Denis; Giudici, Kelly Virecoulon; Etilé, Fabrice; Reach, Gérard; Hercberg, Serge; Kesse-Guyot, Emmanuelle; Péneau, Sandrine

    2018-01-05

    Organic food intake has risen in many countries during the past decades. Even though the motivations associated with such a choice have been studied, the psychological traits preceding these motivations have rarely been explored. Consideration of future consequences (CFC) represents the extent to which individuals consider future versus immediate consequences of their current behaviors. Consequently, a future-oriented personality may be an important characteristic of organic food consumers. The objective was to analyze the association between CFC and organic food consumption in a large sample of the adult general population. In 2014, a sample of 27,634 participants from the NutriNet-Santé cohort study completed the CFC questionnaire and an Organic-Food Frequency questionnaire. For each food group (17 groups), non-organic food consumers were compared to organic food consumers across quartiles of the CFC using multiple logistic regressions. Moreover, adjusted mean proportions of organic food intake out of total food intake were compared between quartiles of the CFC. Analyses were adjusted for socio-demographic, lifestyle and dietary characteristics. Participants with higher CFC were more likely to consume organic food (OR quartile 4 (Q4) vs. Q1 = 1.88, 95% CI: 1.62, 2.20). Overall, future-oriented participants were more likely to consume 14 food groups. The strongest associations were observed for starchy refined foods (OR = 1.78, 95% CI: 1.63, 1.94), and fruits and vegetables (OR = 1.74, 95% CI: 1.58, 1.92). The contribution of organic food intake to total food intake was 33% higher in Q4 than in Q1. More precisely, the contribution of organic food consumed was higher in Q4 for 16 food groups. The highest relative differences between Q4 and Q1 were observed for starchy refined foods (22%) and non-alcoholic beverages (21%). Seafood was the only food group without a significant difference. This study provides information on the personality of
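
    The quartile comparisons above are odds ratios from covariate-adjusted logistic regressions. As an unadjusted sketch of how an odds ratio and its Woolf 95% confidence interval come out of a 2×2 table, with hypothetical counts (not the study's data or its adjusted model):

```python
# Odds ratio (top vs. bottom CFC quartile) from a 2x2 table, with a
# Woolf 95% CI on the log scale. All counts are hypothetical.
import math

def odds_ratio_ci(a, b, c, d):
    """a/b: consumers/non-consumers in Q4; c/d: the same in Q1."""
    or_ = (a / b) / (c / d)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)        # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

or_, (lo, hi) = odds_ratio_ci(400, 300, 250, 350)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # → 1.87 1.5 2.33
```

    The study's reported OR of 1.88 additionally conditions on socio-demographic, lifestyle and dietary covariates, which a raw 2×2 table cannot reproduce.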

  19. Thermal neutron self-shielding correction factors for large sample instrumental neutron activation analysis using the MCNP code

    International Nuclear Information System (INIS)

    Tzika, F.; Stamatelatos, I.E.

    2004-01-01

    Thermal neutron self-shielding within large samples was studied using the Monte Carlo neutron transport code MCNP. The code enabled a three-dimensional modeling of the actual source and geometry configuration including reactor core, graphite pile and sample. Neutron flux self-shielding correction factors derived for a set of materials of interest for large sample neutron activation analysis are presented and evaluated. Simulations were experimentally verified by measurements performed using activation foils. The results of this study can be applied in order to determine neutron self-shielding factors of unknown samples from the thermal neutron fluxes measured at the surface of the sample
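
    For intuition about what such a correction factor represents, a standard textbook approximation for a purely absorbing slab averages the attenuated flux over the sample depth. This analytic sketch is an illustration only, not the three-dimensional MCNP transport model used in the study:

```python
# First-order thermal neutron self-shielding in a slab: the factor is
# the flux-weighted average of exp(-Sigma_a * x) across thickness t,
# i.e. (1 - exp(-tau)) / tau with optical thickness tau = Sigma_a * t.
# Textbook approximation for illustration; NOT the paper's MCNP model.
import math

def slab_self_shielding(sigma_a, t):
    """sigma_a: macroscopic absorption cross-section (cm^-1); t: thickness (cm)."""
    tau = sigma_a * t
    if tau == 0.0:
        return 1.0               # no absorption -> no self-shielding
    return (1.0 - math.exp(-tau)) / tau

for sigma in (0.01, 0.1, 1.0):
    print(round(slab_self_shielding(sigma, 1.0), 4))
# prints 0.995, 0.9516, 0.6321 -- optically thicker samples shield more
```

    Dividing a measured surface flux by such a factor recovers the sample-averaged flux, which is the spirit of the correction the paper derives by Monte Carlo for realistic geometries.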

  20. Unbiased reduced density matrices and electronic properties from full configuration interaction quantum Monte Carlo

    International Nuclear Information System (INIS)

    Overy, Catherine; Blunt, N. S.; Shepherd, James J.; Booth, George H.; Cleland, Deidre; Alavi, Ali

    2014-01-01

    Properties that are necessarily formulated within pure (symmetric) expectation values are difficult to calculate for projector quantum Monte Carlo approaches, but are critical in order to compute many of the important observable properties of electronic systems. Here, we investigate an approach for the sampling of unbiased reduced density matrices within the full configuration interaction quantum Monte Carlo dynamic, which requires only small computational overheads. This is achieved via an independent replica population of walkers in the dynamic, sampled alongside the original population. The resulting reduced density matrices are free from systematic error (beyond those present via constraints on the dynamic itself) and can be used to compute a variety of expectation values and properties, with rapid convergence to an exact limit. A quasi-variational energy estimate derived from these density matrices is proposed as an accurate alternative to the projected estimator for multiconfigurational wavefunctions, while its variational property could potentially lend itself to accurate extrapolation approaches in larger systems

  1. Unbiased water and methanol maser surveys of NGC 1333

    Energy Technology Data Exchange (ETDEWEB)

    Lyo, A-Ran; Kim, Jongsoo; Byun, Do-Young; Lee, Ho-Gyu, E-mail: arl@kasi.re.kr [Korea Astronomy and Space Science Institute, 776, Daedeokdae-ro Yuseong-gu, Daejeon 305-348 (Korea, Republic of)

    2014-11-01

    We present the results of unbiased 22 GHz H{sub 2}O water and 44 GHz class I CH{sub 3}OH methanol maser surveys in the central 7' × 10' area of NGC 1333 and two additional mapping observations of a 22 GHz water maser in a ∼3' × 3' area of the IRAS4A region. In the 22 GHz water maser survey of NGC 1333 with a sensitivity of σ ∼ 0.3 Jy, we confirmed the detection of masers toward H{sub 2}O(B) in the region of HH 7-11 and IRAS4B. We also detected new water masers located ∼20'' away in the western direction of IRAS4B or ∼25'' away in the southern direction of IRAS4A. We could not, however, find young stellar objects or molecular outflows associated with them. They showed two different velocity components of ∼0 and ∼16 km s{sup –1}, which are blue- and redshifted relative to the adopted systemic velocity of ∼7 km s{sup –1} for NGC 1333. They also showed time variabilities in both intensity and velocity from multi-epoch observations and an anti-correlation between the intensities of the blue- and redshifted velocity components. We suggest that the unidentified power source of these masers might be found in the earliest evolutionary stage of star formation, before the onset of molecular outflows. Finding this kind of water maser is only possible through an unbiased blind survey. In the 44 GHz methanol maser survey with a sensitivity of σ ∼ 0.5 Jy, we confirmed masers toward IRAS4A2 and the eastern shock region of IRAS2A. Both sources are also detected in 95 and 132 GHz methanol maser lines. In addition, we had new detections of methanol masers at 95 and 132 GHz toward IRAS4B. In terms of the isotropic luminosity, we detected methanol maser sources brighter than ∼5 × 10{sup 25} erg s{sup –1} from our unbiased survey.

  2. Spatio-temporal foreshock activity during stick-slip experiments of large rock samples

    Science.gov (United States)

    Tsujimura, Y.; Kawakata, H.; Fukuyama, E.; Yamashita, F.; Xu, S.; Mizoguchi, K.; Takizawa, S.; Hirano, S.

    2016-12-01

    Foreshock activity has sometimes been reported for large earthquakes, and has been roughly classified into the following two classes. For shallow intraplate earthquakes, foreshocks occurred in the vicinity of the mainshock hypocenter (e.g., Doi and Kawakata, 2012; 2013), while for interplate subduction earthquakes, foreshock hypocenters migrated toward the mainshock hypocenter (Kato et al., 2012; Yagi et al., 2014). To understand how foreshocks occur, it is useful to investigate the spatio-temporal activity of foreshocks in laboratory experiments under controlled conditions. We have conducted stick-slip experiments using a large-scale biaxial friction apparatus at NIED in Japan (e.g., Fukuyama et al., 2014). Our previous results showed that stick-slip events repeatedly occurred in a run, but only the later events were preceded by foreshocks. Kawakata et al. (2014) inferred that the gouge generated during the run was an important key to foreshock occurrence. In this study, we carried out stick-slip experiments on large rock samples whose interface (fault plane) is 1.5 m long and 0.5 m wide, after some preliminary runs to generate fault gouge on the interface. In the current experiments, we investigated the spatio-temporal activity of foreshocks, detecting them from waveform records of a 3D array of piezoelectric sensors. Our new results showed that more than three foreshocks (typically about twenty) occurred during each stick-slip event. Next, we estimated the hypocenter locations of the stick-slip events, and found that they were located near the end opposite to the loading point. In addition, we observed a migration of foreshock hypocenters toward the hypocenter of each stick-slip event. This suggests that the foreshock activity observed in our current experiments was similar to that for the interplate earthquakes in terms of the

  3. Study of a large rapid ashing apparatus and a rapid dry ashing method for biological samples and its application

    International Nuclear Information System (INIS)

    Jin Meisun; Wang Benli; Liu Wencang

    1988-04-01

    A large rapid-dry-ashing apparatus and a rapid ashing method for biological samples are described. The apparatus consists of a specially made ashing furnace, a gas supply system and a temperature-programming control cabinet. Ashing experiments with the apparatus demonstrated the following advantages: (1) high ashing speed and savings in electric energy; (2) a large number of samples can be ashed at a time; (3) the ashed sample is pure white (spotless), loose and easily soluble, with little residual char; (4) fresh samples can also be ashed directly. The apparatus is suitable for ashing large numbers of environmental samples containing trace elements at low radioactivity levels, as well as medical, food and agricultural research samples.

  4. Evaluation of Inflammatory Markers in a Large Sample of Obstructive Sleep Apnea Patients without Comorbidities

    Directory of Open Access Journals (Sweden)

    Izolde Bouloukaki

    2017-01-01

    Full Text Available Systemic inflammation is important in obstructive sleep apnea (OSA) pathophysiology and its comorbidity. We aimed to assess the levels of inflammatory biomarkers in a large sample of OSA patients and to investigate any correlation between these biomarkers and clinical and polysomnographic (PSG) parameters. This was a cross-sectional study in which 2983 patients who had undergone polysomnography for OSA diagnosis were recruited. Patients with known comorbidities were excluded. Included patients (n=1053) were grouped according to the apnea-hypopnea index (AHI) as mild, moderate, and severe. Patients with AHI < 5 served as controls. Demographics, PSG data, and levels of high-sensitivity C-reactive protein (hs-CRP), fibrinogen, erythrocyte sedimentation rate (ESR), and uric acid (UA) were measured and compared between groups. A significant difference was found between groups in hs-CRP, fibrinogen, and UA. All biomarkers were independently associated with OSA severity and gender (p<0.05). Females had increased levels of hs-CRP, fibrinogen, and ESR (p<0.001) compared to men. In contrast, UA levels were higher in men (p<0.001). Our results suggest that inflammatory markers significantly increase in patients with OSA without known comorbidities and correlate with OSA severity. These findings may have important implications regarding OSA diagnosis, monitoring, treatment, and prognosis. This trial is registered with ClinicalTrials.gov number NCT03070769.

  5. Replicability of time-varying connectivity patterns in large resting state fMRI samples.

    Science.gov (United States)

    Abrol, Anees; Damaraju, Eswar; Miller, Robyn L; Stephen, Julia M; Claus, Eric D; Mayer, Andrew R; Calhoun, Vince D

    2017-12-01

    The past few years have seen an emergence of approaches that leverage temporal changes in whole-brain patterns of functional connectivity (the chronnectome). In this chronnectome study, we investigate the replicability of the human brain's inter-regional coupling dynamics during rest by evaluating two different dynamic functional network connectivity (dFNC) analysis frameworks using 7 500 functional magnetic resonance imaging (fMRI) datasets. To quantify the extent to which the emergent functional connectivity (FC) patterns are reproducible, we characterize the temporal dynamics by deriving several summary measures across multiple large, independent age-matched samples. Reproducibility was demonstrated through the existence of basic connectivity patterns (FC states) amidst an ensemble of inter-regional connections. Furthermore, application of the methods to conservatively configured (statistically stationary, linear and Gaussian) surrogate datasets revealed that some of the studied state summary measures were indeed statistically significant and also suggested that this class of null model did not explain the fMRI data fully. This extensive testing of reproducibility of similarity statistics also suggests that the estimated FC states are robust against variation in data quality, analysis, grouping, and decomposition methods. We conclude that future investigations probing the functional and neurophysiological relevance of time-varying connectivity assume critical importance. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  6. Using Co-Occurrence to Evaluate Belief Coherence in a Large Non Clinical Sample

    Science.gov (United States)

    Pechey, Rachel; Halligan, Peter

    2012-01-01

    Much of the recent neuropsychological literature on false beliefs (delusions) has tended to focus on individual or single beliefs, with few studies actually investigating the relationship or co-occurrence between different types of co-existing beliefs. Quine and Ullian proposed the hypothesis that our beliefs form an interconnected web in which the beliefs that make up that system must somehow “cohere” with one another and avoid cognitive dissonance. As such, beliefs are unlikely to be encapsulated (i.e., exist in isolation from other beliefs). The aim of this preliminary study was to empirically evaluate the probability of belief co-occurrence as one indicator of coherence in a large sample of subjects involving three different thematic sets of beliefs (delusion-like, paranormal & religious, and societal/cultural). Results showed that the degree of belief co-endorsement between beliefs within thematic groupings was greater than random occurrence, lending support to Quine and Ullian’s coherentist account. Some associations, however, were relatively weak, providing for well-established examples of cognitive dissonance. PMID:23155383

  7. Explaining health care expenditure variation: large-sample evidence using linked survey and health administrative data.

    Science.gov (United States)

    Ellis, Randall P; Fiebig, Denzil G; Johar, Meliyanni; Jones, Glenn; Savage, Elizabeth

    2013-09-01

    Explaining individual, regional, and provider variation in health care spending is of enormous value to policymakers but is often hampered by the lack of individual level detail in universal public health systems because budgeted spending is often not attributable to specific individuals. Even rarer is self-reported survey information that helps explain this variation in large samples. In this paper, we link a cross-sectional survey of 267 188 Australians age 45 and over to a panel dataset of annual healthcare costs calculated from several years of hospital, medical and pharmaceutical records. We use this data to distinguish between cost variations due to health shocks and those that are intrinsic (fixed) to an individual over three years. We find that high fixed expenditures are positively associated with age, especially older males, poor health, obesity, smoking, cancer, stroke and heart conditions. Being foreign born, speaking a foreign language at home and low income are more strongly associated with higher time-varying expenditures, suggesting greater exposure to adverse health shocks. Copyright © 2013 John Wiley & Sons, Ltd.

  8. Using co-occurrence to evaluate belief coherence in a large non clinical sample.

    Directory of Open Access Journals (Sweden)

    Rachel Pechey

    Full Text Available Much of the recent neuropsychological literature on false beliefs (delusions) has tended to focus on individual or single beliefs, with few studies actually investigating the relationship or co-occurrence between different types of co-existing beliefs. Quine and Ullian proposed the hypothesis that our beliefs form an interconnected web in which the beliefs that make up that system must somehow "cohere" with one another and avoid cognitive dissonance. As such, beliefs are unlikely to be encapsulated (i.e., exist in isolation from other beliefs). The aim of this preliminary study was to empirically evaluate the probability of belief co-occurrence as one indicator of coherence in a large sample of subjects involving three different thematic sets of beliefs (delusion-like, paranormal & religious, and societal/cultural). Results showed that the degree of belief co-endorsement between beliefs within thematic groupings was greater than random occurrence, lending support to Quine and Ullian's coherentist account. Some associations, however, were relatively weak, providing for well-established examples of cognitive dissonance.

  9. CHRONICITY OF DEPRESSION AND MOLECULAR MARKERS IN A LARGE SAMPLE OF HAN CHINESE WOMEN.

    Science.gov (United States)

    Edwards, Alexis C; Aggen, Steven H; Cai, Na; Bigdeli, Tim B; Peterson, Roseann E; Docherty, Anna R; Webb, Bradley T; Bacanu, Silviu-Alin; Flint, Jonathan; Kendler, Kenneth S

    2016-04-25

    Major depressive disorder (MDD) has been associated with changes in mean telomere length and mitochondrial DNA (mtDNA) copy number. This study investigates if clinical features of MDD differentially impact these molecular markers. Data from a large, clinically ascertained sample of Han Chinese women with recurrent MDD were used to examine whether symptom presentation, severity, and comorbidity were related to salivary telomere length and/or mtDNA copy number (maximum N = 5,284 for both molecular and phenotypic data). Structural equation modeling revealed that duration of longest episode was positively associated with mtDNA copy number, while earlier age of onset of most severe episode and a history of dysthymia were associated with shorter telomeres. Other factors, such as symptom presentation, family history of depression, and other comorbid internalizing disorders, were not associated with these molecular markers. Chronicity of depressive symptoms is related to more pronounced telomere shortening and increased mtDNA copy number among individuals with a history of recurrent MDD. As these molecular markers have previously been implicated in physiological aging and morbidity, individuals who experience prolonged depressive symptoms are potentially at greater risk of adverse medical outcomes. © 2016 Wiley Periodicals, Inc.

  10. BROAD ABSORPTION LINE DISAPPEARANCE ON MULTI-YEAR TIMESCALES IN A LARGE QUASAR SAMPLE

    Energy Technology Data Exchange (ETDEWEB)

    Filiz Ak, N.; Brandt, W. N.; Schneider, D. P. [Department of Astronomy and Astrophysics, Pennsylvania State University, University Park, PA 16802 (United States); Hall, P. B. [Department of Physics and Astronomy, York University, 4700 Keele St., Toronto, Ontario M3J 1P3 (Canada); Anderson, S. F.; Gibson, R. R. [Astronomy Department, University of Washington, Seattle, WA 98195 (United States); Lundgren, B. F. [Department of Physics, Yale University, New Haven, CT 06511 (United States); Myers, A. D. [Department of Physics and Astronomy, University of Wyoming, Laramie, WY 82071 (United States); Petitjean, P. [Institut d' Astrophysique de Paris, Universite Paris 6, F-75014, Paris (France); Ross, Nicholas P. [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 92420 (United States); Shen Yue [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, MS-51, Cambridge, MA 02138 (United States); York, D. G. [Department of Astronomy and Astrophysics, and Enrico Fermi Institute, University of Chicago, 5640 S. Ellis Ave., Chicago, IL 60637 (United States); Bizyaev, D.; Brinkmann, J.; Malanushenko, E.; Oravetz, D. J.; Pan, K.; Simmons, A. E. [Apache Point Observatory, P.O. Box 59, Sunspot, NM 88349-0059 (United States); Weaver, B. A., E-mail: nfilizak@astro.psu.edu [Center for Cosmology and Particle Physics, New York University, New York, NY 10003 (United States)

    2012-10-01

    We present 21 examples of C IV broad absorption line (BAL) trough disappearance in 19 quasars selected from systematic multi-epoch observations of 582 bright BAL quasars (1.9 < z < 4.5) by the Sloan Digital Sky Survey-I/II (SDSS-I/II) and SDSS-III. The observations span 1.1-3.9 yr rest-frame timescales, longer than have been sampled in many previous BAL variability studies. On these timescales, ≈2.3% of C IV BAL troughs disappear and ≈3.3% of BAL quasars show a disappearing trough. These observed frequencies suggest that many C IV BAL absorbers spend on average at most a century along our line of sight to their quasar. Ten of the 19 BAL quasars showing C IV BAL disappearance have apparently transformed from BAL to non-BAL quasars; these are the first reported examples of such transformations. The BAL troughs that disappear tend to be those with small-to-moderate equivalent widths, relatively shallow depths, and high outflow velocities. Other non-disappearing C IV BALs in the nine objects having multiple troughs tend to weaken when one of them disappears, indicating a connection between the disappearing and non-disappearing troughs, even for velocity separations as large as 10,000-15,000 km s{sup -1}. We discuss possible origins of this connection including disk-wind rotation and changes in shielding gas.

  11. Pyroelectric photovoltaic spatial solitons in unbiased photorefractive crystals

    International Nuclear Information System (INIS)

    Jiang, Qichang; Su, Yanli; Ji, Xuanmang

    2012-01-01

    A new type of spatial soliton, the pyroelectric photovoltaic spatial soliton, based on the combination of the pyroelectric and photovoltaic effects, is predicted theoretically. It is shown that bright, dark and grey spatial solitons can exist in unbiased photovoltaic photorefractive crystals with an appreciable pyroelectric effect. In particular, a bright soliton can form in a self-defocusing photovoltaic crystal if the self-focusing pyroelectric effect is large enough. -- Highlights: ► A new type of spatial soliton, i.e. the pyroelectric photovoltaic spatial soliton, is predicted. ► Bright, dark and grey pyroelectric photovoltaic spatial solitons can form. ► The bright soliton can also exist in self-defocusing photovoltaic crystals.

  12. Unextendible Mutually Unbiased Bases (after Mandayam, Bandyopadhyay, Grassl and Wootters)

    Directory of Open Access Journals (Sweden)

    Koen Thas

    2016-11-01

    Full Text Available We consider questions posed in a recent paper of Mandayam et al. (2014 on the nature of “unextendible mutually unbiased bases.” We describe a conceptual framework to study these questions, using a connection proved by the author in Thas (2009 between the set of nonidentity generalized Pauli operators on the Hilbert space of N d-level quantum systems, d a prime, and the geometry of non-degenerate alternating bilinear forms of rank N over finite fields F d . We then supply alternative and short proofs of results obtained in Mandayam et al. (2014, as well as new general bounds for the problems considered in loc. cit. In this setting, we also solve Conjecture 1 of Mandayam et al. (2014 and speculate on variations of this conjecture.

  13. Rethinking economy-wide rebound measures: An unbiased proposal

    International Nuclear Information System (INIS)

    Guerra, Ana-Isabel; Sancho, Ferran

    2010-01-01

    Although it was first raised in the second half of the nineteenth century, the debate about possible rebound effects from energy efficiency improvements remains an open question in the economic literature. This paper contributes to the existing research on this issue by proposing an unbiased measure of economy-wide rebound effects. The novelty of this measure stems from the fact that not only actual energy savings but also potential energy savings are quantified under general equilibrium conditions. Our findings indicate that using engineering savings instead of general equilibrium potential savings biases economy-wide rebound effects downward and backfire effects upward. The discrepancies between the traditional indicator and our proposed measure are analysed in the context of the Spanish economy.
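
    The rebound effect is commonly written as R = 1 − (actual savings / potential savings), so the choice of denominator drives the bias the authors describe: engineering estimates of potential savings that understate the general-equilibrium (GE) potential push R downward. A toy numeric sketch with hypothetical figures:

```python
# Economy-wide rebound R = 1 - actual/potential energy savings.
# All numbers below are hypothetical, purely to illustrate the
# direction of the bias discussed in the paper.

def rebound(actual_savings, potential_savings):
    return 1.0 - actual_savings / potential_savings

actual = 60.0           # realized economy-wide savings
eng_potential = 80.0    # engineering estimate of potential savings
ge_potential = 100.0    # GE-consistent potential savings (larger)

print(rebound(actual, eng_potential))  # → 0.25 (biased low)
print(rebound(actual, ge_potential))   # → 0.4  (unbiased measure)
```

    With the same actual savings, the larger GE-consistent denominator yields the larger, unbiased rebound estimate; R > 1 would correspond to backfire.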

  14. Unbiased determination of polarized parton distributions and their uncertainties

    CERN Document Server

    Ball, Richard D.; Guffanti, Alberto; Nocera, Emanuele R.; Ridolfi, Giovanni; Rojo, Juan

    2013-01-01

    We present a determination of a set of polarized parton distributions (PDFs) of the nucleon, at next-to-leading order, from a global set of longitudinally polarized deep-inelastic scattering data: NNPDFpol1.0. The determination is based on the NNPDF methodology: a Monte Carlo approach, with neural networks used as unbiased interpolants, previously applied to the determination of unpolarized parton distributions, and designed to provide a faithful and statistically sound representation of PDF uncertainties. We present our dataset, its statistical features, and its Monte Carlo representation. We summarize the technique used to solve the polarized evolution equations and its benchmarking, and the method used to compute physical observables. We review the NNPDF methodology for parametrization and fitting of neural networks, the algorithm used to determine the optimal fit, and its adaptation to the polarized case. We finally present our set of polarized parton distributions. We discuss its statistical properties, ...

  15. A large-capacity sample-changer for automated gamma-ray spectroscopy

    International Nuclear Information System (INIS)

    Andeweg, A.H.

    1980-01-01

    An automatic sample-changer has been developed at the National Institute for Metallurgy for use in gamma-ray spectroscopy with a lithium-drifted germanium detector. The sample-changer features remote storage, which prevents cross-talk and reduces background. It has a capacity of 200 samples and a sample container that takes liquid or solid samples. The rotation and vibration of samples during counting ensure that powdered samples are compacted, and improve the precision and reproducibility of the counting geometry.

  16. Double sampling with multiple imputation to answer large sample meta-research questions: Introduction and illustration by evaluating adherence to two simple CONSORT guidelines

    Directory of Open Access Journals (Sweden)

    Patrice L. Capers

    2015-03-01

    Full Text Available BACKGROUND: Meta-research can involve manual retrieval and evaluation of research, which is resource intensive. Creation of high-throughput methods (e.g., search heuristics, crowdsourcing) has improved the feasibility of large meta-research questions, but possibly at the cost of accuracy. OBJECTIVE: To evaluate the use of double sampling combined with multiple imputation (DS+MI) to address meta-research questions, using as an example the adherence of PubMed entries to two simple Consolidated Standards of Reporting Trials (CONSORT) guidelines for titles and abstracts. METHODS: For the DS large sample, we retrieved all PubMed entries satisfying the filters: RCT; human; abstract available; and English language (n=322,107). For the DS subsample, we randomly sampled 500 entries from the large sample. The large sample was evaluated with a lower-rigor, higher-throughput (RLOTHI) method using search heuristics, while the subsample was evaluated using a higher-rigor, lower-throughput (RHITLO) human rating method. Multiple imputation of the missing-completely-at-random RHITLO data for the large sample was informed by: RHITLO data from the subsample; RLOTHI data from the large sample; whether a study was an RCT; and country and year of publication. RESULTS: The RHITLO and RLOTHI methods in the subsample largely agreed (phi coefficients: title=1.00, abstract=0.92). Compliance with abstract and title criteria has increased over time, with non-US countries improving more rapidly. DS+MI logistic regression estimates were more precise than subsample estimates (e.g., 95% CI for change in title and abstract compliance by year: subsample RHITLO 1.050-1.174 vs. DS+MI 1.082-1.151). As evidence of improved accuracy, DS+MI coefficient estimates were closer to RHITLO than the large-sample RLOTHI. CONCLUSIONS: Our results support our hypothesis that DS+MI would result in improved precision and accuracy. 
This method is flexible and may provide a practical way to examine large corpora of
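
    The phi coefficient used above to quantify RHITLO/RLOTHI agreement is the Pearson correlation of two binary variables and can be computed directly from a 2×2 table. A sketch with hypothetical counts (not the study's data):

```python
# Phi coefficient for agreement between two binary rating methods on
# the same entries (e.g., compliant / non-compliant calls from a
# high-rigor and a low-rigor rater). Counts are hypothetical; the
# study reports phi = 1.00 (titles) and 0.92 (abstracts).
import math

def phi_coefficient(n11, n10, n01, n00):
    """2x2 counts: n11 both yes, n10 yes/no, n01 no/yes, n00 both no."""
    num = n11 * n00 - n10 * n01
    den = math.sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return num / den

print(round(phi_coefficient(45, 5, 5, 445), 3))  # → 0.889
```

    A phi near 1 justifies imputing the expensive high-rigor ratings for the large sample from the cheap high-throughput ones, which is the core of the DS+MI design.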

  17. Association between genetic variation in a region on chromosome 11 and schizophrenia in large samples from Europe

    DEFF Research Database (Denmark)

    Rietschel, M; Mattheisen, M; Degenhardt, F

    2012-01-01

    the recruitment of very large samples of patients and controls (that is tens of thousands), or large, potentially more homogeneous samples that have been recruited from confined geographical areas using identical diagnostic criteria. Applying the latter strategy, we performed a genome-wide association study (GWAS...... between emotion regulation and cognition that is structurally and functionally abnormal in SCZ and bipolar disorder.Molecular Psychiatry advance online publication, 12 July 2011; doi:10.1038/mp.2011.80....

  18. Religion and the Unmaking of Prejudice toward Muslims: Evidence from a Large National Sample

    Science.gov (United States)

    Shaver, John H.; Troughton, Geoffrey; Sibley, Chris G.; Bulbulia, Joseph A.

    2016-01-01

    In the West, anti-Muslim sentiments are widespread. It has been theorized that inter-religious tensions fuel anti-Muslim prejudice, yet previous attempts to isolate sectarian motives have been inconclusive. Factors contributing to ambiguous results are: (1) failures to assess and adjust for multi-level denomination effects; (2) inattention to demographic covariates; (3) inadequate methods for comparing anti-Muslim prejudice relative to other minority group prejudices; and (4) ad hoc theories for the mechanisms that underpin prejudice and tolerance. Here we investigate anti-Muslim prejudice using a large national sample of non-Muslim New Zealanders (N = 13,955) who responded to the 2013 New Zealand Attitudes and Values Study. We address previous shortcomings by: (1) building Bayesian multivariate, multi-level regression models with denominations modeled as random effects; (2) including high-resolution demographic information that adjusts for factors known to influence prejudice; (3) simultaneously evaluating the relative strength of anti-Muslim prejudice by comparing it to anti-Arab prejudice and anti-immigrant prejudice within the same statistical model; and (4) testing predictions derived from the Evolutionary Lag Theory of religious prejudice and tolerance. This theory predicts that in countries such as New Zealand, with historically low levels of conflict, religion will tend to increase tolerance generally, and extend to minority religious groups. Results show that anti-Muslim and anti-Arab sentiments are confounded, widespread, and substantially higher than anti-immigrant sentiments. In support of the theory, the intensity of religious commitments was associated with a general increase in tolerance toward minority groups, including a poorly tolerated religious minority group: Muslims. Results clarify religion’s power to enhance tolerance in peaceful societies that are nevertheless afflicted by prejudice. PMID:26959976

  19. Personality traits and eating habits in a large sample of Estonians.

    Science.gov (United States)

    Mõttus, René; Realo, Anu; Allik, Jüri; Deary, Ian J; Esko, Tõnu; Metspalu, Andres

    2012-11-01

    Diet has health consequences, which makes knowing the psychological correlates of dietary habits important. Associations between dietary habits and personality traits were examined in a large sample of Estonians (N = 1,691) aged between 18 and 89 years. Dietary habits were measured using 11 items, which grouped into two factors reflecting (a) health-aware and (b) traditional dietary patterns. The health-aware diet factor was defined by eating more cereal and dairy products, fish, vegetables and fruits. The traditional diet factor was defined by eating more potatoes, meat and meat products, and bread. Personality was assessed by participants themselves and by people who knew them well. The questionnaire used was the NEO Personality Inventory-3, which measures the Five-Factor Model broad personality traits of Neuroticism, Extraversion, Openness, Agreeableness, and Conscientiousness, along with six facets for each trait. Gender, age and educational level were controlled for. Higher scores on the health-aware diet factor were associated with lower Neuroticism and higher Extraversion, Openness and Conscientiousness (effect sizes were modest: r = 0.11 to 0.17 in self-ratings, and r = 0.08 to 0.11 in informant-ratings, ps < 0.01 or lower). Higher scores on the traditional diet factor were related to lower levels of Openness (r = -0.14 and -0.13, p < 0.001, self- and informant-ratings, respectively). Endorsement of healthy and avoidance of traditional dietary items are associated with people's personality trait levels, especially higher Openness. The results may inform dietary interventions with respect to possible barriers to diet change.

  20. Religion and the Unmaking of Prejudice toward Muslims: Evidence from a Large National Sample.

    Science.gov (United States)

    Shaver, John H; Troughton, Geoffrey; Sibley, Chris G; Bulbulia, Joseph A

    2016-01-01

    In the West, anti-Muslim sentiments are widespread. It has been theorized that inter-religious tensions fuel anti-Muslim prejudice, yet previous attempts to isolate sectarian motives have been inconclusive. Factors contributing to ambiguous results are: (1) failures to assess and adjust for multi-level denomination effects; (2) inattention to demographic covariates; (3) inadequate methods for comparing anti-Muslim prejudice relative to other minority group prejudices; and (4) ad hoc theories for the mechanisms that underpin prejudice and tolerance. Here we investigate anti-Muslim prejudice using a large national sample of non-Muslim New Zealanders (N = 13,955) who responded to the 2013 New Zealand Attitudes and Values Study. We address previous shortcomings by: (1) building Bayesian multivariate, multi-level regression models with denominations modeled as random effects; (2) including high-resolution demographic information that adjusts for factors known to influence prejudice; (3) simultaneously evaluating the relative strength of anti-Muslim prejudice by comparing it to anti-Arab prejudice and anti-immigrant prejudice within the same statistical model; and (4) testing predictions derived from the Evolutionary Lag Theory of religious prejudice and tolerance. This theory predicts that in countries such as New Zealand, with historically low levels of conflict, religion will tend to increase tolerance generally, and extend to minority religious groups. Results show that anti-Muslim and anti-Arab sentiments are confounded, widespread, and substantially higher than anti-immigrant sentiments. In support of the theory, the intensity of religious commitments was associated with a general increase in tolerance toward minority groups, including a poorly tolerated religious minority group: Muslims. Results clarify religion's power to enhance tolerance in peaceful societies that are nevertheless afflicted by prejudice.

  1. Relationships between anhedonia, suicidal ideation and suicide attempts in a large sample of physicians.

    Directory of Open Access Journals (Sweden)

    Gwenolé Loas

    The relationships between anhedonia and suicidal ideation or suicide attempts were explored in a large sample of physicians using the interpersonal psychological theory of suicide. We tested two hypotheses: firstly, that there is a significant relationship between anhedonia and suicidality and, secondly, that anhedonia could mediate the relationships between suicidal ideation or suicide attempts and thwarted belongingness or perceived burdensomeness. In a cross-sectional study, 557 physicians filled out several questionnaires measuring suicide risk, depression, using the abridged version of the Beck Depression Inventory (BDI-13), and demographic and job-related information. Ratings of anhedonia, perceived burdensomeness and thwarted belongingness were then extracted from the BDI-13 and the other questionnaires. Significant relationships were found between anhedonia and suicidal ideation or suicide attempts, even when significant variables or covariates were taken into account and, in particular, depressive symptoms. Mediation analyses showed significant partial or complete mediations, where anhedonia mediated the relationships between suicidal ideation (lifetime or recent) and perceived burdensomeness or thwarted belongingness. For suicide attempts, complete mediation was found only between anhedonia and thwarted belongingness. When the different components of anhedonia were taken into account, dissatisfaction, not the loss of interest or work inhibition, had significant relationships with suicidal ideation, whereas work inhibition had significant relationships with suicide attempts. Anhedonia and its component of dissatisfaction could be a risk factor for suicidal ideation and could mediate the relationship between suicidal ideation and perceived burdensomeness or thwarted belongingness in physicians. Dissatisfaction, in particular in the workplace, may be explored as a strong predictor of suicidal ideation in physicians.

  2. The suicidality continuum in a large sample of obsessive-compulsive disorder (OCD) patients.

    Science.gov (United States)

    Velloso, P; Piccinato, C; Ferrão, Y; Aliende Perin, E; Cesar, R; Fontenelle, L; Hounie, A G; do Rosário, M C

    2016-10-01

    Obsessive-compulsive disorder (OCD) has a chronic course leading to huge impact on the patient's functioning. Suicidal thoughts and attempts are much more frequent in OCD subjects than previously thought. To empirically investigate whether the suicidal phenomena could be analyzed as a suicidality severity continuum and its association with obsessive-compulsive (OC) symptom dimensions and quality of life (QoL), in a large OCD sample. Cross-sectional study with 548 patients diagnosed with OCD according to the DSM-IV criteria, interviewed in the Brazilian OCD Consortium (C-TOC) sites. Patients were evaluated by OCD experts using standardized instruments including: Yale-Brown Obsessive-Compulsive Scale (YBOCS); Dimensional Yale-Brown Obsessive-Compulsive Scale (DYBOCS); Beck Depression and Anxiety Inventories; Structured Clinical Interview for DSM-IV (SCID); and the SF-36 QoL Health Survey. There were extremely high correlations between all the suicidal phenomena. OCD patients with suicidality had significantly lower QoL, higher severity in the "sexual/religious", "aggression" and "symmetry/ordering" OC symptom dimensions, higher BDI and BAI scores and a higher frequency of suicide attempts in a family member. In the regression analysis, the factors that most impacted suicidality were the sexual dimension severity, the SF-36 QoL Mental Health domain, the severity of depressive symptoms and a relative with an attempted suicide history. Suicidality could be analyzed as a severity continuum, and patients should be carefully monitored once they present with suicidal ideation. Lower QoL scores, higher scores on the sexual dimension and a family history of suicide attempts should be considered as risk factors for suicidality among OCD patients. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  3. Relationships between anhedonia, suicidal ideation and suicide attempts in a large sample of physicians

    Science.gov (United States)

    Lefebvre, Guillaume; Rotsaert, Marianne; Englert, Yvon

    2018-01-01

    Background The relationships between anhedonia and suicidal ideation or suicide attempts were explored in a large sample of physicians using the interpersonal psychological theory of suicide. We tested two hypotheses: firstly, that there is a significant relationship between anhedonia and suicidality and, secondly, that anhedonia could mediate the relationships between suicidal ideation or suicide attempts and thwarted belongingness or perceived burdensomeness. Methods In a cross-sectional study, 557 physicians filled out several questionnaires measuring suicide risk, depression, using the abridged version of the Beck Depression Inventory (BDI-13), and demographic and job-related information. Ratings of anhedonia, perceived burdensomeness and thwarted belongingness were then extracted from the BDI-13 and the other questionnaires. Results Significant relationships were found between anhedonia and suicidal ideation or suicide attempts, even when significant variables or covariates were taken into account and, in particular, depressive symptoms. Mediation analyses showed significant partial or complete mediations, where anhedonia mediated the relationships between suicidal ideation (lifetime or recent) and perceived burdensomeness or thwarted belongingness. For suicide attempts, complete mediation was found only between anhedonia and thwarted belongingness. When the different components of anhedonia were taken into account, dissatisfaction—not the loss of interest or work inhibition—had significant relationships with suicidal ideation, whereas work inhibition had significant relationships with suicide attempts. Conclusions Anhedonia and its component of dissatisfaction could be a risk factor for suicidal ideation and could mediate the relationship between suicidal ideation and perceived burdensomeness or thwarted belongingness in physicians. Dissatisfaction, in particular in the workplace, may be explored as a strong predictor of suicidal ideation in physicians

  4. Note: Design and development of wireless controlled aerosol sampling network for large scale aerosol dispersion experiments

    International Nuclear Information System (INIS)

    Gopalakrishnan, V.; Subramanian, V.; Baskaran, R.; Venkatraman, B.

    2015-01-01

    A wireless-based, custom-built aerosol sampling network was designed, developed, and implemented for environmental aerosol sampling. These aerosol sampling systems were used in a field measurement campaign in which sodium aerosol dispersion experiments were conducted as part of environmental impact studies related to the sodium-cooled fast reactor. The sampling network contains 40 aerosol sampling units, each with a custom-built sampling head and wireless control networking designed with Programmable System on Chip (PSoC™) and XBee Pro RF modules. The base station control is designed using the graphical programming language LabVIEW. The sampling network is programmed to operate at a preset time, and the running status of the samplers in the network is visualized from the base station. The system is developed in such a way that it can be used for any other environmental sampling system deployed over a wide area and uneven terrain, where manual operation is difficult due to the requirement of simultaneous operation and status logging.

  5. Note: Design and development of wireless controlled aerosol sampling network for large scale aerosol dispersion experiments

    Energy Technology Data Exchange (ETDEWEB)

    Gopalakrishnan, V.; Subramanian, V.; Baskaran, R.; Venkatraman, B. [Radiation Impact Assessment Section, Radiological Safety Division, Indira Gandhi Centre for Atomic Research, Kalpakkam 603 102 (India)

    2015-07-15

    A wireless-based, custom-built aerosol sampling network was designed, developed, and implemented for environmental aerosol sampling. These aerosol sampling systems were used in a field measurement campaign in which sodium aerosol dispersion experiments were conducted as part of environmental impact studies related to the sodium-cooled fast reactor. The sampling network contains 40 aerosol sampling units, each with a custom-built sampling head and wireless control networking designed with Programmable System on Chip (PSoC™) and XBee Pro RF modules. The base station control is designed using the graphical programming language LabVIEW. The sampling network is programmed to operate at a preset time, and the running status of the samplers in the network is visualized from the base station. The system is developed in such a way that it can be used for any other environmental sampling system deployed over a wide area and uneven terrain, where manual operation is difficult due to the requirement of simultaneous operation and status logging.

  6. Towards an unbiased comparison of CC, BCC, and FCC lattices in terms of prealiasing

    KAUST Repository

    Vad, Viktor

    2014-06-01

    In the literature on optimal regular volume sampling, the Body-Centered Cubic (BCC) lattice has been proven to be optimal for sampling spherically band-limited signals above the Nyquist limit. On the other hand, if the sampling frequency is below the Nyquist limit, the Face-Centered Cubic (FCC) lattice was demonstrated to be optimal in reducing the prealiasing effect. In this paper, we confirm that the FCC lattice is indeed optimal in this sense in a certain interval of the sampling frequency. By theoretically estimating the prealiasing error in a realistic range of the sampling frequency, we show that in other frequency intervals, the BCC lattice and even the traditional Cartesian Cubic (CC) lattice are expected to minimize the prealiasing. The BCC lattice is superior over the FCC lattice if the sampling frequency is not significantly below the Nyquist limit. Interestingly, if the original signal is drastically undersampled, the CC lattice is expected to provide the lowest prealiasing error. Additionally, we give a comprehensible clarification that the sampling efficiency of the FCC lattice is lower than that of the BCC lattice. Although this is a well-known fact, the exact percentage has been erroneously reported in the literature. Furthermore, for the sake of an unbiased comparison, we propose to rotate the Marschner-Lobb test signal such that an undue advantage is not given to either lattice. © 2014 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.
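    The efficiency ordering discussed above rests on how densely each lattice packs sites per unit cell. As a rough illustration (not the paper's Nyquist-matched comparison, which rescales each lattice to an equal band-limit), the hypothetical helper `lattice_points` below enumerates the sites of the three lattices at a common unit lattice constant:

```python
import itertools

def lattice_points(kind, n):
    """Sites of a CC, BCC, or FCC lattice inside an n x n x n block of
    unit cells (unit lattice constant)."""
    basis = {"cc":  [(0.0, 0.0, 0.0)],
             "bcc": [(0.0, 0.0, 0.0), (0.5, 0.5, 0.5)],
             "fcc": [(0.0, 0.0, 0.0), (0.5, 0.5, 0.0),
                     (0.5, 0.0, 0.5), (0.0, 0.5, 0.5)]}[kind]
    return [(i + dx, j + dy, k + dz)
            for i, j, k in itertools.product(range(n), repeat=3)
            for dx, dy, dz in basis]

# Site densities per unit cell: CC packs 1 site, BCC 2, FCC 4.
for kind in ("cc", "bcc", "fcc"):
    pts = lattice_points(kind, 4)
    print(kind, len(pts) / 4 ** 3)   # prints cc 1.0, bcc 2.0, fcc 4.0
```

    At equal lattice constant a CC cell carries 1 site, a BCC cell 2, and an FCC cell 4; the comparison in the paper normalizes these densities before measuring prealiasing, which is why the exact efficiency percentages matter.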

  7. Towards an unbiased comparison of CC, BCC, and FCC lattices in terms of prealiasing

    KAUST Repository

    Vad, Viktor; Csébfalvi, Balázs; Rautek, Peter; Gröller, Eduard M.

    2014-01-01

    In the literature on optimal regular volume sampling, the Body-Centered Cubic (BCC) lattice has been proven to be optimal for sampling spherically band-limited signals above the Nyquist limit. On the other hand, if the sampling frequency is below the Nyquist limit, the Face-Centered Cubic (FCC) lattice was demonstrated to be optimal in reducing the prealiasing effect. In this paper, we confirm that the FCC lattice is indeed optimal in this sense in a certain interval of the sampling frequency. By theoretically estimating the prealiasing error in a realistic range of the sampling frequency, we show that in other frequency intervals, the BCC lattice and even the traditional Cartesian Cubic (CC) lattice are expected to minimize the prealiasing. The BCC lattice is superior over the FCC lattice if the sampling frequency is not significantly below the Nyquist limit. Interestingly, if the original signal is drastically undersampled, the CC lattice is expected to provide the lowest prealiasing error. Additionally, we give a comprehensible clarification that the sampling efficiency of the FCC lattice is lower than that of the BCC lattice. Although this is a well-known fact, the exact percentage has been erroneously reported in the literature. Furthermore, for the sake of an unbiased comparison, we propose to rotate the Marschner-Lobb test signal such that an undue advantage is not given to either lattice. © 2014 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.

  8. Potential-Decomposition Strategy in Markov Chain Monte Carlo Sampling Algorithms

    International Nuclear Information System (INIS)

    Shangguan Danhua; Bao Jingdong

    2010-01-01

    We introduce the potential-decomposition strategy (PDS), which can be used in Markov chain Monte Carlo sampling algorithms. PDS can be designed to make particles move in a modified potential that favors diffusion in phase space; then, by rejecting some trial samples, the target distributions can be sampled in an unbiased manner. Furthermore, if the accepted trial samples are insufficient, they can be recycled as initial states to form more unbiased samples. This strategy can greatly improve efficiency when the original potential has multiple metastable states separated by large barriers. We apply PDS to the 2D Ising model and a double-well potential model with a large barrier, demonstrating in these two representative examples that convergence is accelerated by orders of magnitude.
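    The general recipe (sample a flattened potential that diffuses easily across barriers, then reject some visits so the original target distribution is recovered exactly) can be sketched as follows. This is a minimal illustration with a hypothetical double-well target and helper names (`pds_sample`, `U_mod`) of our own, not the authors' implementation:

```python
import math, random

def U(x):            # target potential: double well, minima at x = +/-1
    return (x * x - 1.0) ** 2

def U_mod(x):        # modified potential: same wells, barrier lowered 4x
    return U(x) / 4.0

def pds_sample(n_steps, seed=0):
    """Run Metropolis on the easy-to-explore modified potential, then
    accept each visited state into the target ensemble with probability
    exp(U_mod - U) <= 1, which is plain rejection sampling and hence
    leaves the target distribution unbiased."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, 0.5)
        if rng.random() < math.exp(min(0.0, U_mod(x) - U_mod(y))):
            x = y                                  # Metropolis step on U_mod
        if rng.random() < math.exp(U_mod(x) - U(x)):
            out.append(x)                          # correction into target
    return out

samples = pds_sample(50_000)
frac_right = sum(s > 0 for s in samples) / len(samples)
print(f"{len(samples)} unbiased samples; fraction in right well = {frac_right:.2f}")
```

    Because U >= 0 and U_mod = U/4 here, the correction probability exp(U_mod - U) = exp(-3U/4) never exceeds 1, so the second accept/reject is exact; the chain crosses the (lowered) barrier often, and both wells end up equally populated.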

  9. THE OPTICALLY UNBIASED GRB HOST (TOUGH) SURVEY. III. REDSHIFT DISTRIBUTION

    Energy Technology Data Exchange (ETDEWEB)

    Jakobsson, P.; Chapman, R.; Vreeswijk, P. M. [Centre for Astrophysics and Cosmology, Science Institute, University of Iceland, Dunhagi 5, 107 Reykjavik (Iceland); Hjorth, J.; Malesani, D.; Fynbo, J. P. U.; Milvang-Jensen, B. [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Maries Vej 30, 2100 Copenhagen (Denmark); Tanvir, N. R.; Starling, R. L. C. [Department of Physics and Astronomy, University of Leicester, University Road, Leicester LE1 7RH (United Kingdom); Letawe, G. [Departement d' Astrophysique, Geophysique et Oceanographie, ULg, Allee du 6 aout, 17-Bat. B5c B-4000 Liege (Sart-Tilman) (Belgium)

    2012-06-10

    We present 10 new gamma-ray burst (GRB) redshifts and another five redshift limits based on host galaxy spectroscopy obtained as part of a large program conducted at the Very Large Telescope (VLT). The redshifts span the range 0.345 ≤ z ≲ 2.54. Three of our measurements revise incorrect values from the literature. The homogeneous host sample researched here consists of 69 hosts that originally had a redshift completeness of 55% (with 38 out of 69 hosts having redshifts considered secure). Our project, including VLT/X-shooter observations reported elsewhere, increases this fraction to 77% (53/69), making the survey the most comprehensive in terms of redshift completeness of any sample to the full Swift depth, analyzed to date. We present the cumulative redshift distribution and derive a conservative, yet small, associated uncertainty. We constrain the fraction of Swift GRBs at high redshift to a maximum of 14% (5%) for z > 6 (z > 7). The mean redshift of the host sample is assessed to be ⟨z⟩ ≳ 2.2, with the 10 new redshifts reducing it significantly. Using this more complete sample, we confirm previous findings that the GRB rate at high redshift (z ≳ 3) appears to be in excess of predictions based on assumptions that it should follow conventional determinations of the star formation history of the universe, combined with an estimate of its likely metallicity dependence. This suggests that either star formation at high redshifts has been significantly underestimated, for example, due to a dominant contribution from faint, undetected galaxies, or that GRB production is enhanced in the conditions of early star formation, beyond that usually ascribed to lower metallicity.

  10. Size and shape characteristics of drumlins, derived from a large sample, and associated scaling laws

    Science.gov (United States)

    Clark, Chris D.; Hughes, Anna L. C.; Greenwood, Sarah L.; Spagnolo, Matteo; Ng, Felix S. L.

    2009-04-01

    Ice sheets flowing across a sedimentary bed usually produce a landscape of blister-like landforms streamlined in the direction of the ice flow, with each bump of the order of 10² to 10³ m in length and 10¹ m in relief. Such landforms, known as drumlins, have mystified investigators for over a hundred years. A satisfactory explanation for their formation, and thus an appreciation of their glaciological significance, has remained elusive. A recent advance has been in numerical modelling of the land-forming process. In anticipation of future modelling endeavours, this paper is motivated by the requirement for robust data on drumlin size and shape for model testing. From a systematic programme of drumlin mapping from digital elevation models and satellite images of Britain and Ireland, we used a geographic information system to compile a range of statistics on length L, width W, and elongation ratio E (where E = L/W) for a large sample. Mean L is found to be 629 m (n = 58,983), mean W is 209 m and mean E is 2.9 (n = 37,043). Most drumlins are between 250 and 1000 metres in length; between 120 and 300 metres in width; and between 1.7 and 4.1 times as long as they are wide. Analysis of such data and plots of drumlin width against length reveals some new insights. All frequency distributions are unimodal, from which we infer that the geomorphological label of 'drumlin' is fair in that this is a true single population of landforms, rather than an amalgam of different landform types. Drumlin size shows a clear minimum bound of around 100 m (horizontal). Maybe drumlins are generated at many scales and this is the minimum, or this value may be an indication of the fundamental scale of bump generation ('proto-drumlins') prior to them growing and elongating. A relationship between drumlin width and length is found (with r² = 0.48), approximately W = 7L^(1/2) when measured in metres. A surprising and sharply-defined line bounds the data cloud plotted in E-W
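    A quick arithmetic check (ours, not the authors') shows that the quoted scaling law and the quoted sample means are broadly consistent; the residual gap is expected, since the fit explains only r² = 0.48 of the variance and a ratio of means need not equal a mean of ratios:

```python
import math

# Reported sample means from the abstract (metres) and the fitted law W = 7*L**0.5
mean_L, mean_W, mean_E = 629.0, 209.0, 2.9

predicted_W = 7.0 * math.sqrt(mean_L)   # scaling law evaluated at the mean length
print(f"W(mean L) = {predicted_W:.0f} m vs. observed mean W = {mean_W:.0f} m")

# Elongation implied by the mean dimensions, vs. the reported mean E = 2.9:
print(f"mean L / mean W = {mean_L / mean_W:.2f}")
```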

  11. Prevalence of suicidal behaviour and associated factors in a large sample of Chinese adolescents.

    Science.gov (United States)

    Liu, X C; Chen, H; Liu, Z Z; Wang, J Y; Jia, C X

    2017-10-12

    Suicidal behaviour is prevalent among adolescents and is a significant predictor of future suicide attempts (SAs) and suicide death. Data on the prevalence and epidemiological characteristics of suicidal behaviour in Chinese adolescents are limited. This study aimed to examine the prevalence, characteristics and risk factors of suicidal behaviour, including suicidal thought (ST), suicide plan (SP) and SA, in a large sample of Chinese adolescents. This report represents the first wave data of an ongoing longitudinal study, the Shandong Adolescent Behavior and Health Cohort. Participants included 11 831 adolescent students from three counties of Shandong, China. The mean age of participants was 15.0 (s.d. = 1.5) and 51% were boys. In November-December 2015, participants completed a structured adolescent health questionnaire covering ST, SP and SA, characteristics of the most recent SA, demographics, substance use, hopelessness, impulsivity and internalising and externalising behavioural problems. The lifetime and last-year prevalence rates were 17.6 and 10.7% for ST in males, 23.5 and 14.7% for ST in females, 8.9 and 2.9% for SP in males, 10.7 and 3.8% for SP in females, 3.4 and 1.3% for SA in males, and 4.6 and 1.8% for SA in females, respectively. The mean age of first SA was 12-13 years. Stabbing/cutting was the most common method used to attempt suicide. Approximately 24% of male attempters and 16% of female attempters were medically treated. More than 70% of attempters had no preparatory action. Female gender, smoking, drinking, internalising and externalising problems, hopelessness, suicidal history of friends and acquaintances, poor family economic status and poor parental relationship were all significantly associated with increased risk of suicidal behaviour. Suicidal behaviour in Chinese adolescents is prevalent but less than that previously reported in Western peers. While females are more likely to attempt suicide, males are more likely to use lethal methods.

  12. Sample-based Attribute Selective AnDE for Large Data

    DEFF Research Database (Denmark)

    Chen, Shenglei; Martinez, Ana; Webb, Geoffrey

    2017-01-01

    More and more applications have come with large data sets in the past decade. However, existing algorithms cannot be guaranteed to scale well on large data. Averaged n-Dependence Estimators (AnDE) allows for flexible learning from out-of-core data, by varying the value of n (number of super parents). Henc...

  13. High levels of absorption in orientation-unbiased, radio-selected 3CR Active Galaxies

    Science.gov (United States)

    Wilkes, Belinda J.; Haas, Martin; Barthel, Peter; Leipski, Christian; Kuraszkiewicz, Joanna; Worrall, Diana; Birkinshaw, Mark; Willner, Steven P.

    2014-08-01

    A critical problem in understanding active galaxies (AGN) is the separation of intrinsic physical differences from observed differences that are due to orientation. Obscuration of the active nucleus is anisotropic and strongly frequency dependent, leading to complex selection effects for observations in most wavebands. These can only be quantified using a sample that is sufficiently unbiased to test orientation effects. Low-frequency radio emission is one way to select a close-to orientation-unbiased sample, albeit limited to the minority of AGN with strong radio emission. Recent Chandra, Spitzer and Herschel observations combined with multi-wavelength data for a complete sample of high-redshift (1 < z < 2) 3CR sources show that more than half the sample is significantly obscured, with ratios of unobscured : Compton-thin (22 < log N_H < 24.2) : Compton-thick (log N_H > 24.2) = 2.5:1.4:1 in these high-luminosity (log L(0.3-8 keV) ~ 44-46) sources. These ratios are consistent with current expectations based on modeling the Cosmic X-ray Background. A strong correlation with radio orientation constrains the geometry of the obscuring disk/torus to have a ~60 degree opening angle and ~12 degree Compton-thick cross-section. The deduced ~50% obscured fraction of the population contrasts with typical estimates of ~20% obscured in optically- and X-ray-selected high-luminosity samples. Once the primary nuclear emission is obscured, AGN X-ray spectra are frequently dominated by unobscured non-nuclear or scattered nuclear emission which cannot be distinguished from direct nuclear emission with a lower obscuration level unless high quality data are available. As a result, both the level of obscuration and the estimated intrinsic luminosities of highly-obscured AGN are likely to be significantly (×10-1000) underestimated for 25-50% of the population. This may explain the lower obscured fractions reported for optical and X-ray samples, which have no independent measure of the AGN luminosity. Correcting AGN samples for these underestimated luminosities would result in
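    The quoted ~60 degree opening angle and ~50% obscured fraction are mutually consistent under the simplest geometric reading, in which the unobscured region is a double cone of half-opening angle θ about the radio axis and solid-angle fractions follow from cos θ. A sketch of that arithmetic (our simplification, not the paper's modelling):

```python
import math

def obscured_fraction(half_opening_deg):
    """Fraction of the sky, as seen from the nucleus, covered by a torus
    whose clear double cone has the given half-opening angle from the axis.
    The unobscured solid-angle fraction of a double cone is 1 - cos(theta),
    so the obscured fraction is cos(theta)."""
    return math.cos(math.radians(half_opening_deg))

print(f"60 deg opening -> {obscured_fraction(60.0):.0%} obscured")  # prints 50%
```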

  14. Nutritional status and dental caries in a large sample of 4- and 5 ...

    African Journals Online (AJOL)

    Background. Evidence from studies involving small samples of children in Africa, India and South America suggests a higher dental caries rate in malnourished children. A comparison was done to evaluate wasting and stunting and their association with dental caries in four samples of South African children. Design.

  15. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat

  16. Mutually Unbiased Maximally Entangled Bases for the Bipartite System C^d ⊗ C^{dk}

    Science.gov (United States)

    Nan, Hua; Tao, Yuan-Hong; Wang, Tian-Jiao; Zhang, Jun

    2016-10-01

    The construction of maximally entangled bases for the bipartite system C^d ⊗ C^d is discussed firstly, and some mutually unbiased bases with maximally entangled bases are given, where 2 ≤ d ≤ 5. Moreover, we study a systematic way of constructing mutually unbiased maximally entangled bases for the bipartite system C^d ⊗ C^{dk}.

  17. Unbiased methods for removing systematics from galaxy clustering measurements

    Science.gov (United States)

    Elsner, Franz; Leistedt, Boris; Peiris, Hiranya V.

    2016-02-01

    Measuring the angular clustering of galaxies as a function of redshift is a powerful method for extracting information from the three-dimensional galaxy distribution. The precision of such measurements will dramatically increase with ongoing and future wide-field galaxy surveys. However, these are also increasingly sensitive to observational and astrophysical contaminants. Here, we study the statistical properties of three methods proposed for controlling such systematics - template subtraction, basic mode projection, and extended mode projection - all of which make use of externally supplied template maps, designed to characterize and capture the spatial variations of potential systematic effects. Based on a detailed mathematical analysis, and in agreement with simulations, we find that the template subtraction method in its original formulation returns biased estimates of the galaxy angular clustering. We derive closed-form expressions that should be used to correct results for this shortcoming. Turning to the basic mode projection algorithm, we prove it to be free of any bias, whereas we conclude that results computed with extended mode projection are biased. Within a simplified setup, we derive analytical expressions for the bias and discuss the options for correcting it in more realistic configurations. Common to all three methods is an increased estimator variance induced by the cleaning process, albeit at different levels. These results enable unbiased high-precision clustering measurements in the presence of spatially varying systematics, an essential step towards realizing the full potential of current and planned galaxy surveys.
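    Basic mode projection can be pictured as removing from the data vector whatever lies in the span of the supplied template maps, i.e. applying the projector I - T(TᵀT)⁻¹Tᵀ. The sketch below uses a synthetic one-dimensional "map" and a hypothetical `project_out` helper; real pipelines apply the same idea with inverse-covariance weighting on the sphere:

```python
import numpy as np

def project_out(data, templates):
    """Basic mode projection: remove the component of `data` lying in the
    span of the template maps, d_clean = (I - T (T^T T)^-1 T^T) d."""
    T = np.column_stack(templates)
    coeffs, *_ = np.linalg.lstsq(T, data, rcond=None)
    return data - T @ coeffs

rng = np.random.default_rng(1)
n = 512
true_signal = rng.standard_normal(n)                  # stand-in "clustering" map
template = np.sin(np.linspace(0.0, 4.0 * np.pi, n))   # known systematics map
observed = true_signal + 3.0 * template               # contaminated observation

cleaned = project_out(observed, [template])
print("residual overlap with template:", abs(cleaned @ template))
```

    The residual overlap with the template is zero to machine precision, which is the sense in which the projected estimator is immune to the contaminant, at the price of the inflated estimator variance the abstract notes.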

  18. Unbiased determination of polarized parton distributions and their uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Ball, Richard D. [Tait Institute, University of Edinburgh, JCMB, KB, Mayfield Rd, Edinburgh EH9 3JZ, Scotland (United Kingdom); Forte, Stefano, E-mail: forte@mi.infn.it [Dipartimento di Fisica, Università di Milano and INFN, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Guffanti, Alberto [The Niels Bohr International Academy and Discovery Center, The Niels Bohr Institute, Blegdamsvej 17, DK-2100 Copenhagen (Denmark); Nocera, Emanuele R. [Dipartimento di Fisica, Università di Milano and INFN, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Ridolfi, Giovanni [Dipartimento di Fisica, Università di Genova and INFN, Sezione di Genova, Genova (Italy); Rojo, Juan [PH Department, TH Unit, CERN, CH-1211 Geneva 23 (Switzerland)

    2013-09-01

    We present a determination of a set of polarized parton distributions (PDFs) of the nucleon, at next-to-leading order, from a global set of longitudinally polarized deep-inelastic scattering data: NNPDFpol1.0. The determination is based on the NNPDF methodology: a Monte Carlo approach, with neural networks used as unbiased interpolants, previously applied to the determination of unpolarized parton distributions, and designed to provide a faithful and statistically sound representation of PDF uncertainties. We present our dataset, its statistical features, and its Monte Carlo representation. We summarize the technique used to solve the polarized evolution equations and its benchmarking, and the method used to compute physical observables. We review the NNPDF methodology for parametrization and fitting of neural networks, the algorithm used to determine the optimal fit, and its adaptation to the polarized case. We finally present our set of polarized parton distributions. We discuss its statistical properties, test for its stability upon various modifications of the fitting procedure, and compare it to other recent polarized parton sets, and in particular obtain predictions for polarized first moments of PDFs based on it. We find that the uncertainties on the gluon, and to a lesser extent the strange PDF, were substantially underestimated in previous determinations.
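    The Monte Carlo uncertainty propagation at the heart of the NNPDF methodology can be illustrated in miniature: generate pseudodata replicas by fluctuating the data within its errors, fit each replica, and read central values and uncertainties off the replica ensemble. Here a straight-line fit stands in for the neural-network parametrization, and all names and numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: measurements y_i with known uncorrelated errors sigma_i.
x = np.linspace(0.0, 1.0, 20)
sigma = np.full(x.size, 0.1)
y = 2.0 * x + 1.0 + rng.normal(0.0, sigma)

# Monte Carlo replicas: fluctuate the data within its errors, refit each
# replica, and take mean/std over the ensemble as value/uncertainty.
n_rep = 1000
fits = np.array([np.polyfit(x, y + rng.normal(0.0, sigma), 1)
                 for _ in range(n_rep)])

slope_mean, slope_std = fits[:, 0].mean(), fits[:, 0].std()
print(f"slope = {slope_mean:.3f} +/- {slope_std:.3f}")
```

    The replica spread reproduces the analytic error propagation for this linear toy problem; in the NNPDF fits the same ensemble logic delivers PDF uncertainties without assuming a functional form.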

  19. Unbiased determination of polarized parton distributions and their uncertainties

    International Nuclear Information System (INIS)

    Ball, Richard D.; Forte, Stefano; Guffanti, Alberto; Nocera, Emanuele R.; Ridolfi, Giovanni; Rojo, Juan

    2013-01-01

We present a determination of a set of polarized parton distributions (PDFs) of the nucleon, at next-to-leading order, from a global set of longitudinally polarized deep-inelastic scattering data: NNPDFpol1.0. The determination is based on the NNPDF methodology: a Monte Carlo approach, with neural networks used as unbiased interpolants, previously applied to the determination of unpolarized parton distributions, and designed to provide a faithful and statistically sound representation of PDF uncertainties. We present our dataset, its statistical features, and its Monte Carlo representation. We summarize the technique used to solve the polarized evolution equations and its benchmarking, and the method used to compute physical observables. We review the NNPDF methodology for parametrization and fitting of neural networks, the algorithm used to determine the optimal fit, and its adaptation to the polarized case. We finally present our set of polarized parton distributions. We discuss its statistical properties, test for its stability upon various modifications of the fitting procedure, and compare it to other recent polarized parton sets, and in particular obtain predictions for polarized first moments of PDFs based on it. We find that the uncertainties on the gluon, and to a lesser extent the strange PDF, were substantially underestimated in previous determinations.

  20. Mutually unbiased bases and trinary operator sets for N qutrits

    International Nuclear Information System (INIS)

    Lawrence, Jay

    2004-01-01

A complete orthonormal basis of N-qutrit unitary operators drawn from the Pauli group consists of the identity and 9^N - 1 traceless operators. The traceless ones partition into 3^N + 1 maximally commuting subsets (MCSs) of 3^N - 1 operators each, whose joint eigenbases are mutually unbiased. We prove that Pauli factor groups of order 3^N are isomorphic to all MCSs and show how this result applies in specific cases. For two qutrits, the 80 traceless operators partition into 10 MCSs. We prove that 4 of the corresponding basis sets must be separable, while 6 must be totally entangled (and Bell-like). For three qutrits, 728 operators partition into 28 MCSs with less rigid structure, allowing for the coexistence of separable, partially entangled, and totally entangled (GHZ-like) bases. However, a minimum of 16 GHZ-like bases must occur. Every basis state is described by an N-digit trinary number consisting of the eigenvalues of N observables constructed from the corresponding MCS.
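As a concrete illustration of the unbiasedness condition the abstract relies on, the sketch below builds the standard d + 1 mutually unbiased bases for a single qutrit (d = 3, a prime) and checks that |⟨u|v⟩|² = 1/d for vectors drawn from different bases. This is the textbook prime-dimension construction, not the operator-partition construction of the paper, and all names below are invented for the sketch.

```python
import numpy as np

d = 3  # qutrit dimension (prime)
omega = np.exp(2j * np.pi / d)

# d + 1 mutually unbiased bases for prime d: the computational basis
# plus d Fourier-like bases with vectors v_{a,k}[x] = w^(a*x^2 + k*x)/sqrt(d)
bases = [np.eye(d, dtype=complex)]
for a in range(d):
    B = np.array([[omega**(a * x * x + k * x) / np.sqrt(d) for x in range(d)]
                  for k in range(d)])
    bases.append(B)  # rows are basis vectors

# Unbiasedness check: |<u|v>|^2 == 1/d for vectors from different bases.
for i in range(len(bases)):
    for j in range(i + 1, len(bases)):
        overlaps = np.abs(bases[i].conj() @ bases[j].T) ** 2
        assert np.allclose(overlaps, 1.0 / d)
print(f"{len(bases)} bases, all pairwise unbiased")
```

The quadratic phase a·x² is what makes the Fourier-like bases unbiased with respect to each other; its inner products reduce to the generalized quadratic Gauss sums mentioned in the SU(2) record below.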

  1. SU2 nonstandard bases: the case of mutually unbiased bases

    International Nuclear Information System (INIS)

    Olivier, Albouy; Kibler, Maurice R.

    2007-02-01

This paper deals with bases in a finite-dimensional Hilbert space. Such a space can be realized as a subspace of the representation space of SU(2) corresponding to an irreducible representation of SU(2). The representation theory of SU(2) is reconsidered via the use of two truncated deformed oscillators. This leads to replacing the familiar scheme {j^2, j_z} by a scheme {j^2, v_{ra}}, where the two-parameter operator v_{ra} is defined in the universal enveloping algebra of the Lie algebra su(2). The eigenvectors of the commuting set of operators {j^2, v_{ra}} are adapted to a tower of chains SO(3) ⊃ C_{2j+1} (2j ∈ N*), where C_{2j+1} is the cyclic group of order 2j + 1. In the case where 2j + 1 is prime, the corresponding eigenvectors generate a complete set of mutually unbiased bases. Some useful relations on generalized quadratic Gauss sums are given in three appendices. (authors)

  2. Estimating unbiased economies of scale of HIV prevention projects: a case study of Avahan.

    Science.gov (United States)

    Lépine, Aurélia; Vassall, Anna; Chandrashekar, Sudha; Blanc, Elodie; Le Nestour, Alexis

    2015-04-01

Governments and donors are investing considerable resources in HIV prevention in order to scale up these services rapidly. Given the current economic climate, providers of HIV prevention services increasingly need to demonstrate that these investments offer good 'value for money'. One of the primary routes to achieve efficiency is to take advantage of economies of scale (a reduction in the average cost of a health service as provision scales up), yet empirical evidence on economies of scale is scarce. Methodologically, the estimation of economies of scale is hampered by several statistical issues that prevent causal inference and make the estimation complex. In order to estimate unbiased economies of scale when scaling up HIV prevention services, we apply our analysis to one of the few HIV prevention programmes globally delivered at a large scale: the Indian Avahan initiative. We costed the project by collecting data from the 138 Avahan NGOs and the supporting partners in the first four years of its scale-up, between 2004 and 2007. We develop a parsimonious empirical model and apply system Generalized Method of Moments (GMM) and fixed-effects Instrumental Variable (IV) estimators to estimate unbiased economies of scale. At the programme level, we find that, after controlling for the endogeneity of scale, the scale-up of Avahan has generated high economies of scale. Our findings suggest that average cost reductions per person reached are achievable when scaling up HIV prevention in low- and middle-income countries. Copyright © 2015 Elsevier Ltd. All rights reserved.
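For readers unfamiliar with the scale-elasticity reading of economies of scale, the sketch below fits ln(total cost) = α + β·ln(scale) on synthetic data; β < 1 implies that average cost per person reached falls as scale grows. The data and coefficients are invented, and the sketch deliberately stops at naive OLS, which is precisely the biased benchmark the paper improves on with system GMM and fixed-effects IV when scale is endogenous.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic NGO-level data (illustrative only): total cost follows
# ln(TC) = alpha + beta * ln(scale) + noise, with beta < 1, i.e. the
# average cost per person reached falls as scale grows.
n = 500
ln_scale = rng.normal(8.0, 1.0, n)                    # log persons reached
ln_tc = 2.0 + 0.7 * ln_scale + rng.normal(0, 0.3, n)  # log total cost

# Naive OLS estimate of the scale elasticity beta.
X = np.column_stack([np.ones(n), ln_scale])
beta_hat = np.linalg.lstsq(X, ln_tc, rcond=None)[0][1]
print(f"estimated scale elasticity: {beta_hat:.2f}")

# beta_hat < 1 implies TC/Q declines with Q (economies of scale). OLS is
# biased when scale is endogenous; the paper's GMM/IV estimators address
# that, which this sketch does not attempt.
```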

  3. Application of Conventional and K0-Based Internal Monostandard NAA Using Reactor Neutrons for Compositional Analysis of Large Samples

    International Nuclear Information System (INIS)

    Reddy, A.V.R.; Acharya, R.; Swain, K. K.; Pujari, P.K.

    2018-01-01

Large sample neutron activation analysis (LSNAA) work was carried out for samples of coal, uranium ore, stainless steel, ancient and new clay potteries, dross and a clay pottery replica from Peru using low-flux, highly thermalized irradiation sites. Large as well as non-standard geometry samples (1 g - 0.5 kg) were irradiated using the thermal column (TC) facility of the Apsara reactor as well as the graphite reflector position of the critical facility (CF) at Bhabha Atomic Research Centre, Mumbai. Small size (10 - 500 mg) samples were also irradiated at the core position of the Apsara reactor, the pneumatic carrier facility (PCF) of the Dhruva reactor and the pneumatic fast transfer facility (PFTS) of the KAMINI reactor. Irradiation positions were characterized using an indium flux monitor for TC and CF, whereas multiple monitors were used at the other positions. Radioactive assay was carried out using high-resolution gamma-ray spectrometry. The k0-based internal monostandard NAA (IM-NAA) method was used to determine elemental concentration ratios with respect to Na in coal and uranium ore samples, Sc in pottery samples and Fe in stainless steel. In-situ relative detection efficiency for each irradiated sample was obtained using γ rays of activation products in the required energy range. Representative sample sizes were arrived at for coal and uranium ore from the plots of La/Na ratios as a function of the mass of the sample. For the stainless steel sample of SS 304L, the absolute concentrations were calculated from concentration ratios by a mass balance approach since all the major elements (Fe, Cr, Ni and Mn) were amenable to NAA. Concentration ratios obtained by IM-NAA were used for a provenance study of 30 clay potteries obtained from excavated Buddhist sites of AP, India. The La to Ce concentration ratios were used for preliminary grouping, and concentration ratios of 15 elements with respect to Sc were used in statistical cluster analysis for confirmation of grouping. Concentrations of Au and Ag were determined in not so

  4. Elemental mapping of large samples by external ion beam analysis with sub-millimeter resolution and its applications

    Science.gov (United States)

    Silva, T. F.; Rodrigues, C. L.; Added, N.; Rizzutto, M. A.; Tabacniks, M. H.; Mangiarotti, A.; Curado, J. F.; Aguirre, F. R.; Aguero, N. F.; Allegro, P. R. P.; Campos, P. H. O. V.; Restrepo, J. M.; Trindade, G. F.; Antonio, M. R.; Assis, R. F.; Leite, A. R.

    2018-05-01

The elemental mapping of large areas using ion beam techniques is a desired capability for several scientific communities, involved in topics ranging from geoscience to cultural heritage. Usually, the constraints for large-area mapping are not met in setups employing micro- and nano-probes implemented all over the world. A novel setup for mapping large-sized samples in an external beam was recently built at the University of São Paulo, employing a broad MeV-proton probe with sub-millimeter dimension, coupled to a high-precision, large-range XYZ robotic stage (60 cm range on all axes and precision of 5 μm ensured by optical sensors). An important issue in large-area mapping is how to deal with the irregularities of the sample's surface, which may introduce artifacts in the images due to the variation of the measuring conditions. In our setup, we implemented an automatic system based on machine vision to correct the position of the sample to compensate for its surface irregularities. As an additional benefit, a 3D digital reconstruction of the scanned surface can also be obtained. Using this new and unique setup, we have produced large-area elemental maps of ceramics, stones, fossils, and other sorts of samples.

  5. Unbiased in-depth characterization of CEX fractions from a stressed monoclonal antibody by mass spectrometry.

    Science.gov (United States)

    Griaud, François; Denefeld, Blandine; Lang, Manuel; Hensinger, Héloïse; Haberl, Peter; Berg, Matthias

    2017-07-01

Characterization of charge-based variants by mass spectrometry (MS) is required for the analytical development of a new biologic entity and its marketing approval by health authorities. However, standard peak-based data analysis approaches are time-consuming and biased toward the detection, identification, and quantification of main variants only. The aim of this study was to characterize in-depth acidic and basic species of a stressed IgG1 monoclonal antibody using comprehensive and unbiased MS data evaluation tools. Fractions collected from cation exchange (CEX) chromatography were analyzed as intact, after reduction of disulfide bridges, and after proteolytic cleavage using Lys-C. Data of both intact and reduced samples were evaluated consistently using a time-resolved deconvolution algorithm. Peptide mapping data were processed simultaneously, quantified and compared in a systematic manner for all MS signals and fractions. Differences observed between the fractions were then further characterized and assigned. Time-resolved deconvolution enhanced pattern visualization and data interpretation of main and minor modifications in 3-dimensional maps across CEX fractions. Relative quantification of all MS signals across CEX fractions before peptide assignment enabled the detection of fraction-specific chemical modifications at abundances below 1%. Acidic fractions were shown to be heterogeneous, containing antibody fragments, glycated as well as deamidated forms of the heavy and light chains. In contrast, the basic fractions contained mainly modifications of the C-terminus and pyroglutamate formation at the N-terminus of the heavy chain. Systematic data evaluation was performed to investigate multiple data sets and comprehensively extract main and minor differences between each CEX fraction in an unbiased manner.

  6. Large-scale prospective T cell function assays in shipped, unfrozen blood samples

    DEFF Research Database (Denmark)

    Hadley, David; Cheung, Roy K; Becker, Dorothy J

    2014-01-01

…for measuring core T cell functions. The Trial to Reduce Insulin-dependent diabetes mellitus in the Genetically at Risk (TRIGR) type 1 diabetes prevention trial used consecutive measurements of T cell proliferative responses in prospectively collected fresh heparinized blood samples shipped by courier within North America. In this article, we report on the quality control implications of this simple and pragmatic shipping practice and the interpretation of positive- and negative-control analytes in our assay. We used polyclonal and postvaccination responses in 4,919 samples to analyze the development of T cell immunocompetence. We have found that the vast majority of the samples were viable up to 3 days from the blood draw, yet meaningful responses were found in a proportion of those with longer travel times. Furthermore, the shipping time of uncooled samples significantly decreased both the viabilities…

  7. Human blood RNA stabilization in samples collected and transported for a large biobank

    Science.gov (United States)

    2012-01-01

Background: The Norwegian Mother and Child Cohort Study (MoBa) is a nation-wide population-based pregnancy cohort initiated in 1999, comprising more than 108,000 pregnancies recruited between 1999 and 2008. In this study we evaluated the feasibility of integrating RNA analyses into existing MoBa protocols. We compared two different blood RNA collection tube systems – the PAXgene™ Blood RNA system and the Tempus™ Blood RNA system – and assessed the effects of suboptimal blood volumes in collection tubes and of transportation of blood samples by standard mail. Endpoints to characterize the samples were RNA quality and yield, and the RNA transcript stability of selected genes. Findings: High-quality RNA could be extracted from blood samples stabilized with both PAXgene and Tempus tubes. The RNA yields obtained from the blood samples collected in Tempus tubes were consistently higher than from PAXgene tubes. Higher RNA yields were obtained from cord blood (3-4 times) compared to adult blood with both types of tubes. Transportation of samples by standard mail had moderate effects on RNA quality and RNA transcript stability; the overall RNA quality of the transported samples was high. Some unexplained changes in gene expression were noted, which seemed to correlate with suboptimal blood volumes collected in the tubes. Temperature variations during transportation may also be of some importance. Conclusions: Our results strongly suggest that special collection tubes are necessary for RNA stabilization and they should be used for establishing new biobanks. We also show that the 50,000 samples collected in the MoBa biobank provide RNA of high quality and in sufficient amounts to allow gene expression analyses for studying the association of disease with altered patterns of gene expression. PMID:22988904

  8. Examining gray matter structure associated with academic performance in a large sample of Chinese high school students

    OpenAIRE

    Song Wang; Ming Zhou; Taolin Chen; Xun Yang; Guangxiang Chen; Meiyun Wang; Qiyong Gong

    2017-01-01

Achievement in school is crucial for students to be able to pursue successful careers and lead happy lives in the future. Although many psychological attributes have been found to be associated with academic performance, the neural substrates of academic performance remain largely unknown. Here, we investigated the relationship between brain structure and academic performance in a large sample of high school students via structural magnetic resonance imaging (S-MRI) using voxel-based morphometry…

  9. Large-volume injection of sample diluents not miscible with the mobile phase as an alternative approach in sample preparation for bioanalysis: an application for fenspiride bioequivalence.

    Science.gov (United States)

    Medvedovici, Andrei; Udrescu, Stefan; Albu, Florin; Tache, Florentin; David, Victor

    2011-09-01

Liquid-liquid extraction of target compounds from biological matrices followed by the injection of a large volume from the organic layer into the chromatographic column operated under reversed-phase (RP) conditions would successfully combine the selectivity and the straightforward character of the procedure in order to enhance sensitivity, compared with the usual approach involving solvent evaporation and residue re-dissolution. Large-volume injection of samples in diluents that are not miscible with the mobile phase was recently introduced in chromatographic practice. The risk of random errors produced during the manipulation of samples is also substantially reduced. A bioanalytical method designed for the bioequivalence of fenspiride containing pharmaceutical formulations was based on a sample preparation procedure involving extraction of the target analyte and the internal standard (trimetazidine) from alkalinized plasma samples in 1-octanol. A volume of 75 µl from the octanol layer was directly injected on a Zorbax SB C18 Rapid Resolution, 50 mm length × 4.6 mm internal diameter × 1.8 µm particle size column, with the RP separation being carried out under gradient elution conditions. Detection was made through positive ESI and MS/MS. Aspects related to method development and validation are discussed. The bioanalytical method was successfully applied to assess bioequivalence of a modified release pharmaceutical formulation containing 80 mg fenspiride hydrochloride during two different studies carried out as single-dose administration under fasting and fed conditions (four arms), and multiple-dose administration, respectively. The quality attributes assigned to the bioanalytical method, as resulting from its application to the bioequivalence studies, are highlighted and fully demonstrate that sample preparation based on large-volume injection of immiscible diluents has an increased potential for application in bioanalysis.

  10. Social class and (un)ethical behavior : A framework, with evidence from a large population sample

    NARCIS (Netherlands)

    Trautmann, S.T.; van de Kuilen, G.; Zeckhauser, R.J.

    2013-01-01

    Differences in ethical behavior between members of the upper and lower classes have been at the center of civic debates in recent years. In this article, we present a framework for understanding how class affects ethical standards and behaviors. We apply the framework using data from a large Dutch

  11. Consistent associations between measures of psychological stress and CMV antibody levels in a large occupational sample

    NARCIS (Netherlands)

    Rector, J.L.; Dowd, J.B.; Loerbroks, A.; Burns, V.E.; Moss, P.A.; Jarczok, M.N.; Stalder, T.; Hoffman, K.; Fischer, J.E.; Bosch, J.A.

    2014-01-01

    Cytomegalovirus (CMV) is a herpes virus that has been implicated in biological aging and impaired health. Evidence, largely accrued from small-scale studies involving select populations, suggests that stress may promote non-clinical reactivation of this virus. However, absent is evidence from larger

  12. Water pollution screening by large-volume injection of aqueous samples and application to GC/MS analysis of a river Elbe sample

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, S.; Efer, J.; Engewald, W. [Leipzig Univ. (Germany). Inst. fuer Analytische Chemie

    1997-03-01

The large-volume sampling of aqueous samples in a programmed temperature vaporizer (PTV) injector was used successfully for the target and non-target analysis of real samples. In this still rarely applied method, e.g., 1 mL of the water sample to be analyzed is slowly injected directly into the PTV. The vaporized water is eliminated through the split vent. The analytes are concentrated onto an adsorbent inside the insert and subsequently thermally desorbed. The capability of the method is demonstrated using a sample from the river Elbe. By means of coupling this method with a mass selective detector in SIM mode (target analysis), the method allows the determination of pollutants at concentrations down to 0.01 μg/L. Furthermore, PTV enrichment is an effective and time-saving method for non-target analysis in SCAN mode. In a sample from the river Elbe over 20 compounds were identified. (orig.) With 3 figs., 2 tabs.

  13. Large-sample neutron activation analysis in mass balance and nutritional studies

    NARCIS (Netherlands)

    van de Wiel, A.; Blaauw, Menno

    2018-01-01

    Low concentrations of elements in food can be measured with various techniques, mostly in small samples (mg). These techniques provide only reliable data when the element is distributed homogeneously in the material to be analysed either naturally or after a homogenisation procedure. When this is

  14. Large scale inference in the Infinite Relational Model: Gibbs sampling is not enough

    DEFF Research Database (Denmark)

    Albers, Kristoffer Jon; Moth, Andreas Leon Aagard; Mørup, Morten

    2013-01-01

We find that Gibbs sampling can be computationally scaled to handle millions of nodes and billions of links. Investigating the behavior of the Gibbs sampler for different sizes of networks, we find that the mixing ability decreases drastically with the network size, clearly indicating a need…

  15. Predictive Value of Callous-Unemotional Traits in a Large Community Sample

    Science.gov (United States)

    Moran, Paul; Rowe, Richard; Flach, Clare; Briskman, Jacqueline; Ford, Tamsin; Maughan, Barbara; Scott, Stephen; Goodman, Robert

    2009-01-01

    Objective: Callous-unemotional (CU) traits in children and adolescents are increasingly recognized as a distinctive dimension of prognostic importance in clinical samples. Nevertheless, comparatively little is known about the longitudinal effects of these personality traits on the mental health of young people from the general population. Using a…

  16. Evaluating hypotheses in geolocation on a very large sample of Twitter

    DEFF Research Database (Denmark)

    Salehi, Bahar; Søgaard, Anders

    2017-01-01

Recent work in geolocation has made several hypotheses about what linguistic markers are relevant to detect where people write from. In this paper, we examine six hypotheses against a corpus consisting of all geo-tagged tweets from the US, or whose geo-tags could be inferred, in a 19% sample of Twitter…

  17. CASP10-BCL::Fold efficiently samples topologies of large proteins.

    Science.gov (United States)

    Heinze, Sten; Putnam, Daniel K; Fischer, Axel W; Kohlmann, Tim; Weiner, Brian E; Meiler, Jens

    2015-03-01

During CASP10 in summer 2012, we tested BCL::Fold for prediction of free modeling (FM) and template-based modeling (TBM) targets. BCL::Fold assembles the tertiary structure of a protein from predicted secondary structure elements (SSEs) omitting more flexible loop regions early on. This approach enables the sampling of conformational space for larger proteins with more complex topologies. In preparation of CASP11, we analyzed the quality of CASP10 models throughout the prediction pipeline to understand BCL::Fold's ability to sample the native topology, identify native-like models by scoring and/or clustering approaches, and our ability to add loop regions and side chains to initial SSE-only models. The standout observation is that BCL::Fold sampled topologies with a GDT_TS score > 33% for 12 of 18 and with a topology score > 0.8 for 11 of 18 test cases de novo. Despite the sampling success of BCL::Fold, significant challenges still exist in clustering and loop generation stages of the pipeline. The clustering approach employed for model selection often failed to identify the most native-like assembly of SSEs for further refinement and submission. It was also observed that for some β-strand proteins model refinement failed as β-strands were not properly aligned to form hydrogen bonds, removing otherwise accurate models from the pool. Further, BCL::Fold frequently samples non-natural topologies that require loop regions to pass through the center of the protein. © 2015 Wiley Periodicals, Inc.

  18. Accelerating inference for diffusions observed with measurement error and large sample sizes using approximate Bayesian computation

    DEFF Research Database (Denmark)

    Picchini, Umberto; Forman, Julie Lyng

    2016-01-01

In recent years, dynamical modelling has been provided with a range of breakthrough methods to perform exact Bayesian inference. However, it is often computationally unfeasible to apply exact statistical methodologies in the context of large data sets and complex models. This paper considers a nonlinear stochastic differential equation model observed with correlated measurement errors and an application to protein folding modelling. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for model parameters within reasonable time constraints. The ABC algorithm … applications. A simulation study is conducted to compare our strategy with exact Bayesian inference, the latter being two orders of magnitude slower than ABC-MCMC for the considered set-up. Finally, the ABC algorithm is applied to large-size protein data. The suggested methodology is fairly general…
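The core ABC idea behind this record can be shown in a few lines. The sketch below is plain rejection ABC (not the paper's ABC-MCMC) on a toy discretised Ornstein-Uhlenbeck process observed with measurement error; the model, prior, summary statistics and tolerance are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: x_t = x_{t-1}*(1 - theta*dt) + sigma*sqrt(dt)*w_t,
# observed with additive Gaussian measurement error.
def simulate(theta, n=200, dt=0.1, sigma=0.5, obs_sd=0.2):
    w = rng.normal(size=n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = x[t - 1] * (1.0 - theta * dt) + sigma * np.sqrt(dt) * w[t]
    return x + obs_sd * rng.normal(size=n)  # measurement error

def summary(y):
    # cheap summary statistics: lag-1 autocorrelation and variance
    return np.array([np.corrcoef(y[:-1], y[1:])[0, 1], y.var()])

s_obs = summary(simulate(theta=1.0))        # "observed" data, true theta = 1

# Rejection ABC: keep prior draws whose simulated summaries land
# within tolerance eps of the observed summaries.
eps, accepted = 0.15, []
for _ in range(2000):
    theta = rng.uniform(0.0, 3.0)           # draw from a flat prior
    if np.linalg.norm(summary(simulate(theta)) - s_obs) < eps:
        accepted.append(theta)

print(f"accepted {len(accepted)} draws, posterior mean ~ {np.mean(accepted):.2f}")
```

ABC-MCMC replaces the independent prior draws with a Markov chain over theta, which is what makes the approach viable for the expensive SDE simulations the paper targets.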

  19. Sample preparation and analysis of large 238PuO2 and ThO2 spheres

    International Nuclear Information System (INIS)

    Wise, R.L.; Selle, J.E.

    1975-01-01

A program was initiated to determine the density gradient across a large spherical 238PuO2 sample produced by vacuum hot pressing. Due to the high thermal output of the ceramic, a thin section was necessary to prevent overheating of the plastic mount. Techniques were developed for cross-sectioning, mounting, grinding, and polishing of the sample. The polished samples were then analyzed on a quantitative image analyzer to determine the density as a function of location across the sphere. The techniques for indexing, analyzing, and reducing the data are described. Typical results obtained on a ThO2 simulant sphere are given.

  20. Sampling design in large-scale vegetation studies: Do not sacrifice ecological thinking to statistical purism!

    Czech Academy of Sciences Publication Activity Database

    Roleček, J.; Chytrý, M.; Hájek, Michal; Lvončík, S.; Tichý, L.

    2007-01-01

    Roč. 42, - (2007), s. 199-208 ISSN 1211-9520 R&D Projects: GA AV ČR IAA6163303; GA ČR(CZ) GA206/05/0020 Grant - others:GA AV ČR(CZ) KJB601630504 Institutional research plan: CEZ:AV0Z60050516 Keywords : Ecological methodology * Large-scale vegetation patterns * Macroecology Subject RIV: EF - Botanics Impact factor: 1.133, year: 2007

  1. Problematic Social Media Use: Results from a Large-Scale Nationally Representative Adolescent Sample.

    Directory of Open Access Journals (Sweden)

    Fanni Bányai

Full Text Available Despite social media use being one of the most popular activities among adolescents, prevalence estimates among teenage samples of social media (problematic) use are lacking in the field. The present study surveyed a nationally representative Hungarian sample comprising 5,961 adolescents as part of the European School Survey Project on Alcohol and Other Drugs (ESPAD). Using the Bergen Social Media Addiction Scale (BSMAS) and based on latent profile analysis, 4.5% of the adolescents belonged to the at-risk group, and reported low self-esteem, high level of depression symptoms, and elevated social media use. Results also demonstrated that BSMAS has appropriate psychometric properties. It is concluded that adolescents at-risk of problematic social media use should be targeted by school-based prevention and intervention programs.

  2. Memory-Optimized Software Synthesis from Dataflow Program Graphs with Large Size Data Samples

    Directory of Open Access Journals (Sweden)

    Hyunok Oh

    2003-05-01

Full Text Available In multimedia and graphics applications, data samples of nonprimitive type require a significant amount of buffer memory. This paper addresses the problem of minimizing the buffer memory requirement for such applications in embedded software synthesis from graphical dataflow programs based on the synchronous dataflow (SDF) model with the given execution order of nodes. We propose a memory minimization technique that separates global memory buffers from local pointer buffers: the global buffers store live data samples and the local buffers store the pointers to the global buffer entries. The proposed algorithm reduces memory by 67% for a JPEG encoder and 40% for an H.263 encoder compared with unshared versions, and by 22% compared with the previous sharing algorithm for the H.263 encoder. Through extensive buffer sharing optimization, we believe that automatic software synthesis from dataflow program graphs achieves code quality comparable to manually optimized code in terms of memory requirements.
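The separation of global sample buffers from local pointer buffers described above can be sketched in a few lines. The pool API below is invented for illustration and is not the synthesis tool's interface: large samples live once in a global pool, SDF edges carry only indices into it, and a released slot is reused by the next sample instead of allocating a new buffer.

```python
# Minimal sketch: one global pool stores live data samples, while
# per-edge local buffers hold only pointers (indices) into the pool,
# so large samples are never copied between nodes.
class BufferPool:
    def __init__(self):
        self.slots = []          # global buffers: live data samples
        self.free = []           # recycled slot indices

    def put(self, sample):
        if self.free:            # reuse a dead slot if one exists
            i = self.free.pop()
            self.slots[i] = sample
        else:
            i = len(self.slots)
            self.slots.append(sample)
        return i                 # local buffers store this index only

    def release(self, i):
        self.slots[i] = None     # sample is dead; slot becomes shareable
        self.free.append(i)

pool = BufferPool()
edge = []                          # local pointer buffer on an SDF edge
edge.append(pool.put([0] * 1024))  # "large" sample enters the pool
i = edge.pop(0)
frame = pool.slots[i]              # consumer reads via the pointer
pool.release(i)
j = pool.put([1] * 1024)           # next sample reuses the same slot
print(j, len(pool.slots))          # → 0 1 (slot reused, pool did not grow)
```

The memory saving in the paper comes from scheduling-aware sharing of such slots across a whole SDF graph; the sketch only shows the pointer indirection that makes the sharing possible.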

  3. Problematic Social Media Use: Results from a Large-Scale Nationally Representative Adolescent Sample.

    Science.gov (United States)

    Bányai, Fanni; Zsila, Ágnes; Király, Orsolya; Maraz, Aniko; Elekes, Zsuzsanna; Griffiths, Mark D; Andreassen, Cecilie Schou; Demetrovics, Zsolt

    2017-01-01

    Despite social media use being one of the most popular activities among adolescents, prevalence estimates among teenage samples of social media (problematic) use are lacking in the field. The present study surveyed a nationally representative Hungarian sample comprising 5,961 adolescents as part of the European School Survey Project on Alcohol and Other Drugs (ESPAD). Using the Bergen Social Media Addiction Scale (BSMAS) and based on latent profile analysis, 4.5% of the adolescents belonged to the at-risk group, and reported low self-esteem, high level of depression symptoms, and elevated social media use. Results also demonstrated that BSMAS has appropriate psychometric properties. It is concluded that adolescents at-risk of problematic social media use should be targeted by school-based prevention and intervention programs.

  4. Sampling methods and non-destructive examination techniques for large radioactive waste packages

    International Nuclear Information System (INIS)

    Green, T.H.; Smith, D.L.; Burgoyne, K.E.; Maxwell, D.J.; Norris, G.H.; Billington, D.M.; Pipe, R.G.; Smith, J.E.; Inman, C.M.

    1992-01-01

Progress is reported on work undertaken to evaluate quality checking methods for radioactive wastes. A sampling rig was designed, fabricated and used to develop techniques for the destructive sampling of cemented simulant waste using remotely operated equipment. An engineered system for the containment of cooling water was designed and manufactured and successfully demonstrated with the drum and coring equipment mounted in both vertical and horizontal orientations. The preferred in-cell orientation was found to be with the drum and coring machinery mounted in a horizontal position. Small powdered samples can be taken from cemented homogeneous waste cores using a hollow drill/vacuum section technique, with the preferred subsampling technique being to discard the outer 10 mm layer to obtain a representative sample of the cement core. Cement blends can be dissolved using fusion techniques and the resulting solutions are stable to gelling for periods in excess of one year. Although hydrochloric acid and nitric acid are promising solvents for dissolution of cement blends, the resultant solutions tend to form silicic acid gels. An estimate of the beta-emitter content of cemented waste packages can be obtained by a combination of non-destructive and destructive techniques. The errors will probably be in excess of ±60% at the 95% confidence level. Real-time X-ray video-imaging techniques have been used to analyse drums of uncompressed, hand-compressed, in-drum compacted and high-force compacted (i.e. supercompacted) simulant waste. The results have confirmed the applicability of this technique for NDT of low-level waste. 8 refs., 12 figs., 3 tabs

  5. Does Decision Quality (Always) Increase with the Size of Information Samples? Some Vicissitudes in Applying the Law of Large Numbers

    Science.gov (United States)

    Fiedler, Klaus; Kareev, Yaakov

    2006-01-01

    Adaptive decision making requires that contingencies between decision options and their relative assets be assessed accurately and quickly. The present research addresses the challenging notion that contingencies may be more visible from small than from large samples of observations. An algorithmic account for such a seemingly paradoxical effect…
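
    The "more visible from small samples" claim can be illustrated with a toy simulation (an illustrative sketch with invented outcome probabilities, not the authors' procedure): a decision maker who samples outcomes of two options sees far more extreme sample contingencies when the samples are small, even though small and large samples are centred on the same true contingency.

```python
import random
import statistics

random.seed(42)

# True success probabilities of two decision options (hypothetical values).
P_A, P_B = 0.7, 0.5           # option A is truly better
TRUE_DELTA = P_A - P_B        # true contingency = 0.2

def sample_contingency(n):
    """Observed difference in success rates from n draws per option."""
    a = sum(random.random() < P_A for _ in range(n)) / n
    b = sum(random.random() < P_B for _ in range(n)) / n
    return a - b

small = [sample_contingency(5) for _ in range(10_000)]
large = [sample_contingency(50) for _ in range(10_000)]

# Both estimators are unbiased, but small samples spread much more widely,
# so a decision maker sees amplified (more "visible") contingencies.
print(statistics.pstdev(small), statistics.pstdev(large))
```

    The standard deviation of the small-sample contingency is roughly √10 times that of the large-sample one, which is the amplification effect the abstract alludes to.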

  6. Calibration of UFBC counters and their performance in the assay of large mass plutonium samples

    International Nuclear Information System (INIS)

    Verrecchia, G.P.D.; Smith, B.G.R.; Cranston, R.

    1991-01-01

    This paper reports on the cross-calibration of four Universal Fast Breeder reactor assembly coincidence (UFBC) counters using multi-can containers of plutonium oxide powders with masses between 2 and 12 kg of plutonium, together with a parametric study of the sensitivity of the detector response to the positioning of the material, or its removal and substitution with empty cans. The paper also reports on the performance of the UFBC for routine measurements on large-mass, multi-can containers of plutonium oxide powders and compares the results with experience previously obtained in the measurement of fast-reactor-type fuel assemblies in the mass range 2 to 16 kg of plutonium

  7. Maternal bereavement and childhood asthma-analyses in two large samples of Swedish children.

    Directory of Open Access Journals (Sweden)

    Fang Fang

    Full Text Available Prenatal factors such as prenatal psychological stress might influence the development of childhood asthma. We assessed the association between maternal bereavement shortly before and during pregnancy, as a proxy for prenatal stress, and the risk of childhood asthma in the offspring, based on two samples of children aged 1-4 years (n = 426,334) and 7-12 years (n = 493,813) assembled from the Swedish Medical Birth Register. Exposure was maternal bereavement of a close relative from one year before pregnancy to child birth. An asthma event was defined as a hospital contact for asthma or at least two dispensations of inhaled corticosteroids or montelukast. In the younger sample we calculated hazard ratios (HRs) of a first-ever asthma event using Cox models, and in the older sample odds ratios (ORs) of an asthma attack during the past 12 months using logistic regression. Compared to unexposed boys, exposed boys seemed to have a weakly higher risk of a first-ever asthma event at 1-4 years (HR: 1.09; 95% confidence interval [CI]: 0.98, 1.22) as well as an asthma attack during the past 12 months at 7-12 years (OR: 1.10; 95% CI: 0.96, 1.24). No association was suggested for girls. Boys exposed during the second trimester had a significantly higher risk of an asthma event at 1-4 years (HR: 1.55; 95% CI: 1.19, 2.02) and an asthma attack at 7-12 years if the deceased was an older child (OR: 1.58; 95% CI: 1.11, 2.25). The associations tended to be stronger if the bereavement was due to a traumatic death rather than a natural death, but the difference was not statistically significant. Our results showed some evidence for a positive association between prenatal stress and childhood asthma among boys but not girls.
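
    Odds ratios of the kind reported above can be computed from a 2×2 exposure-outcome table, with the confidence interval from Woolf's logit method (a generic epidemiological sketch with invented counts, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf (logit-based) confidence interval from a 2x2 table:
    a = exposed cases,   b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 30/70 cases among exposed, 20/80 among unexposed.
or_, lo, hi = odds_ratio_ci(30, 70, 20, 80)
print(f"OR = {or_:.2f} (95% CI: {lo:.2f}, {hi:.2f})")
```

    A CI that spans 1.0 (as in the HR of 1.09 with CI 0.98-1.22 above) is conventionally read as not statistically significant at the 5% level.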

  8. Test in a beam of large-area Micromegas chambers for sampling calorimetry

    CERN Document Server

    Adloff, C.; Dalmaz, A.; Drancourt, C.; Gaglione, R.; Geffroy, N.; Jacquemier, J.; Karyotakis, Y.; Koletsou, I.; Peltier, F.; Samarati, J.; Vouters, G.

    2014-06-11

    Application of Micromegas for sampling calorimetry puts specific constraints on the design and performance of this gaseous detector. In particular, uniform and linear response, low noise and stability against high ionisation density deposits are prerequisites to achieving good energy resolution. A Micromegas-based hadronic calorimeter was proposed for an application at a future linear collider experiment and three technologically advanced prototypes of 1$\\times$1 m$^{2}$ were constructed. Their merits relative to the above-mentioned criteria are discussed on the basis of measurements performed at the CERN SPS test-beam facility.

  9. Smoking and intention to quit among a large sample of black sexual and gender minorities.

    Science.gov (United States)

    Jordan, Jenna N; Everett, Kevin D; Ge, Bin; McElroy, Jane A

    2015-01-01

    The purpose of this study is to more completely quantify smoking and intention to quit from a sample of sexual and gender minority (SGM) Black individuals (N = 639) through analysis of data collected at Pride festivals and online. Frequencies described demographic characteristics; chi-square analyses were used to compare tobacco-related variables. Black SGM smokers were more likely to be trying to quit smoking than White SGM smokers. However, Black SGM individuals were less likely than White SGM individuals to become former smokers. The results of this study indicate that smoking behaviors may be heavily influenced by race after accounting for SGM status.

  10. Methods of pre-concentration of radionuclides from large volume samples

    International Nuclear Information System (INIS)

    Olahova, K.; Matel, L.; Rosskopfova, O.

    2006-01-01

    The development of radioanalytical methods for low-level radionuclides in environmental samples is presented. In particular, emphasis is placed on the introduction of extraction chromatography as a tool for improving the quality of results as well as reducing the analysis time. However, the advantageous application of extraction chromatography often depends on the effective use of suitable preconcentration techniques, such as co-precipitation, to reduce the amount of matrix components which accompany the analytes of interest. On-going investigations in this field relevant to the determination of environmental levels of actinides and 90Sr are discussed. (authors)

  11. Monitoring a large number of pesticides and transformation products in water samples from Spain and Italy.

    Science.gov (United States)

    Rousis, Nikolaos I; Bade, Richard; Bijlsma, Lubertus; Zuccato, Ettore; Sancho, Juan V; Hernandez, Felix; Castiglioni, Sara

    2017-07-01

    Assessing the presence of pesticides in environmental waters is particularly challenging because of the huge number of substances used which may end up in the environment. Furthermore, the occurrence of pesticide transformation products (TPs) and/or metabolites makes this task even harder. Most studies dealing with the determination of pesticides in water include only a small number of analytes and in many cases no TPs. The present study applied a screening method for the determination of a large number of pesticides and TPs in wastewater (WW) and surface water (SW) from Spain and Italy. Liquid chromatography coupled to high-resolution mass spectrometry (HRMS) was used to screen a database of 450 pesticides and TPs. Detection and identification were based on specific criteria, i.e. mass accuracy, fragmentation, and comparison of retention times when reference standards were available, or a retention time prediction model when standards were not available. Seventeen pesticides and TPs from different classes (fungicides, herbicides and insecticides) were found in WW in Italy and Spain, and twelve in SW. Generally, in both countries more compounds were detected in effluent WW than in influent WW, and in SW than WW. This might be due to the analytical sensitivity in the different matrices, but also to the presence of multiple sources of pollution. HRMS proved a good screening tool to determine a large number of substances in water and identify some priority compounds for further quantitative analysis. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Reinforced dynamics for enhanced sampling in large atomic and molecular systems

    Science.gov (United States)

    Zhang, Linfeng; Wang, Han; E, Weinan

    2018-03-01

    A new approach for efficiently exploring the configuration space and computing the free energy of large atomic and molecular systems is proposed, motivated by an analogy with reinforcement learning. There are two major components in this new approach. Like metadynamics, it allows for an efficient exploration of the configuration space by adding an adaptively computed biasing potential to the original dynamics. Like deep reinforcement learning, this biasing potential is trained on the fly using deep neural networks, with data collected judiciously from the exploration and an uncertainty indicator from the neural network model playing the role of the reward function. Parameterization using neural networks makes it feasible to handle cases with a large set of collective variables. This has the potential advantage that selecting precisely the right set of collective variables has now become less critical for capturing the structural transformations of the system. The method is illustrated by studying the full-atom explicit solvent models of alanine dipeptide and tripeptide, as well as the system of a polyalanine-10 molecule with 20 collective variables.

  13. Estimating Unbiased Land Cover Change Areas In The Colombian Amazon Using Landsat Time Series And Statistical Inference Methods

    Science.gov (United States)

    Arevalo, P. A.; Olofsson, P.; Woodcock, C. E.

    2017-12-01

    Unbiased estimation of the areas of conversion between land categories ("activity data") and their uncertainty is crucial for providing more robust calculations of carbon emissions to the atmosphere, as well as their removals. This is particularly important for the REDD+ mechanism of the UNFCCC, where economic compensation is tied to the magnitude and direction of such fluxes. Dense time series of Landsat data and statistical protocols are becoming an integral part of forest monitoring efforts, but there are relatively few studies in the tropics focused on using these methods to advance operational MRV (Monitoring, Reporting and Verification) systems. We present the results of a prototype methodology for continuous monitoring and unbiased estimation of activity data that is compliant with the IPCC Approach 3 for representation of land. We used a break detection algorithm (Continuous Change Detection and Classification, CCDC) to fit pixel-level temporal segments to time series of Landsat data in the Colombian Amazon. The segments were classified using a Random Forest classifier to obtain annual maps of land categories between 2001 and 2016. Using these maps, a biannual stratified sampling approach was implemented and unbiased stratified estimators were constructed to calculate area estimates with confidence intervals for each of the stable and change classes. Our results provide evidence of a decrease in primary forest as a result of conversion to pasture, as well as an increase in secondary forest as pastures are abandoned and the forest is allowed to regenerate. Estimating the areas of other land transitions proved challenging because of their very small mapped areas compared to stable classes like forest, which corresponds to almost 90% of the study area. Implications for remote sensing data processing, sample allocation and uncertainty reduction are also discussed.
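
    The stratified estimator behind such area estimates can be sketched with the standard formulas for stratified random sampling (a minimal illustration with made-up strata weights and reference counts, not the study's data):

```python
import math

def stratified_area_estimate(strata):
    """Unbiased area-proportion estimate and its standard error for one target
    class under stratified random sampling.

    strata: list of (W_h, n_h, y_h) where
      W_h -- mapped area proportion of stratum h (weights sum to 1)
      n_h -- reference sample size in stratum h
      y_h -- sample units in stratum h whose reference label is the target class
    """
    p_hat = sum(W * y / n for W, n, y in strata)
    var = sum(W**2 * (y / n) * (1 - y / n) / (n - 1) for W, n, y in strata)
    return p_hat, math.sqrt(var)

# Hypothetical map: a 'stable forest' stratum (90% of the area) and a
# 'change' stratum (10%), each with 100 reference samples.
strata = [(0.9, 100, 2), (0.1, 100, 80)]
p_hat, se = stratified_area_estimate(strata)
print(f"change area proportion: {p_hat:.3f} +/- {1.96 * se:.3f} (95% CI)")
```

    Multiplying the estimated proportion and its CI by the total study area converts them into hectares, which is how map-bias-corrected activity data are typically reported.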

  14. Patient-reported causes of heart failure in a large European sample

    DEFF Research Database (Denmark)

    Timmermans, Ivy; Denollet, Johan; Pedersen, Susanne S.

    2018-01-01

    ), psychosocial (35%, mainly (work-related) stress), and natural causes (32%, mainly heredity). There were socio-demographic, clinical and psychological group differences between the various categories, and large discrepancies between prevalence of physical risk factors according to medical records and patient...... distress (OR = 1.54, 95% CI = 0.94–2.51, p = 0.09), and behavioral causes and a less threatening view of heart failure (OR = 0.64, 95% CI = 0.40–1.01, p = 0.06). Conclusion: European patients most frequently reported comorbidities, smoking, stress, and heredity as heart failure causes, but their causal......Background: Patients diagnosed with chronic diseases develop perceptions about their disease and its causes, which may influence health behavior and emotional well-being. This is the first study to examine patient-reported causes and their correlates in patients with heart failure. Methods...

  15. Comprehensive metabolic characterization of serum osteocalcin action in a large non-diabetic sample.

    Directory of Open Access Journals (Sweden)

    Lukas Entenmann

    Full Text Available Recent research suggested a metabolic implication of osteocalcin (OCN in e.g. insulin sensitivity or steroid production. We used an untargeted metabolomics approach by analyzing plasma and urine samples of 931 participants using mass spectrometry to reveal further metabolic actions of OCN. Several detected relations between OCN and metabolites were strongly linked to renal function, however, a number of associations remained significant after adjustment for renal function. Intermediates of proline catabolism were associated with OCN reflecting the implication in bone metabolism. The association to kynurenine points towards a pro-inflammatory state with increasing OCN. Inverse relations with intermediates of branch-chained amino acid metabolism suggest a link to energy metabolism. Finally, urinary surrogate markers of smoking highlight its adverse effect on OCN metabolism. In conclusion, the present study provides a read-out of metabolic actions of OCN. However, most of the associations were weak arguing for a limited role of OCN in whole-body metabolism.

  16. Comprehensive metabolic characterization of serum osteocalcin action in a large non-diabetic sample.

    Science.gov (United States)

    Entenmann, Lukas; Pietzner, Maik; Artati, Anna; Hannemann, Anke; Henning, Ann-Kristin; Kastenmüller, Gabi; Völzke, Henry; Nauck, Matthias; Adamski, Jerzy; Wallaschofski, Henri; Friedrich, Nele

    2017-01-01

    Recent research suggested a metabolic implication of osteocalcin (OCN) in e.g. insulin sensitivity or steroid production. We used an untargeted metabolomics approach by analyzing plasma and urine samples of 931 participants using mass spectrometry to reveal further metabolic actions of OCN. Several detected relations between OCN and metabolites were strongly linked to renal function, however, a number of associations remained significant after adjustment for renal function. Intermediates of proline catabolism were associated with OCN reflecting the implication in bone metabolism. The association to kynurenine points towards a pro-inflammatory state with increasing OCN. Inverse relations with intermediates of branch-chained amino acid metabolism suggest a link to energy metabolism. Finally, urinary surrogate markers of smoking highlight its adverse effect on OCN metabolism. In conclusion, the present study provides a read-out of metabolic actions of OCN. However, most of the associations were weak arguing for a limited role of OCN in whole-body metabolism.

  17. Insights into a spatially embedded social network from a large-scale snowball sample

    Science.gov (United States)

    Illenberger, J.; Kowald, M.; Axhausen, K. W.; Nagel, K.

    2011-12-01

    Much research has been conducted to obtain insights into the basic laws governing human travel behaviour. While the traditional travel survey has long been the main source of travel data, recent approaches that use GPS data, mobile phone data, or the circulation of bank notes as proxies for human travel behaviour are promising. The present study proposes a further source of such proxy data: the social network. We collect data using an innovative snowball sampling technique to obtain details on the structure of a leisure-contacts network. We analyse the network with respect to its topology, the individuals' characteristics, and its spatial structure. We further show that multiplying the functions describing the spatial distribution of leisure contacts and the frequency of physical contacts yields a trip distribution that is consistent with data from the Swiss travel survey.

  18. The presentation and preliminary validation of KIWEST using a large sample of Norwegian university staff.

    Science.gov (United States)

    Innstrand, Siw Tone; Christensen, Marit; Undebakke, Kirsti Godal; Svarva, Kyrre

    2015-12-01

    The aim of the present paper is to present and validate the Knowledge-Intensive Work Environment Survey Target (KIWEST), a questionnaire developed for assessing psychosocial factors among people in knowledge-intensive work environments. The construct validity and reliability of the measurement model were tested on a representative sample of 3066 academic and administrative staff working at one of the largest universities in Norway. Confirmatory factor analysis provided initial support for the convergent validity and internal consistency of the 30-construct KIWEST measurement model. However, discriminant validity tests indicated that some of the constructs might overlap to some degree. Overall, the KIWEST measure showed promising psychometric properties as a psychosocial work environment measure. © 2015 the Nordic Societies of Public Health.

  19. Unbiased Strain-Typing of Arbovirus Directly from Mosquitoes Using Nanopore Sequencing: A Field-forward Biosurveillance Protocol.

    Science.gov (United States)

    Russell, Joseph A; Campos, Brittany; Stone, Jennifer; Blosser, Erik M; Burkett-Cadena, Nathan; Jacobs, Jonathan L

    2018-04-03

    The future of infectious disease surveillance and outbreak response is trending towards smaller hand-held solutions for point-of-need pathogen detection. Here, samples of Culex cedecei mosquitoes collected in Southern Florida, USA were tested for Venezuelan Equine Encephalitis Virus (VEEV), a previously-weaponized arthropod-borne RNA-virus capable of causing acute and fatal encephalitis in animal and human hosts. A single 20-mosquito pool tested positive for VEEV by quantitative reverse transcription polymerase chain reaction (RT-qPCR) on the Biomeme two3. The virus-positive sample was subjected to unbiased metatranscriptome sequencing on the Oxford Nanopore MinION and shown to contain Everglades Virus (EVEV), an alphavirus in the VEEV serocomplex. Our results demonstrate, for the first time, the use of unbiased sequence-based detection and subtyping of a high-consequence biothreat pathogen directly from an environmental sample using field-forward protocols. The development and validation of methods designed for field-based diagnostic metagenomics and pathogen discovery, such as those suitable for use in mobile "pocket laboratories", will address a growing demand for public health teams to carry out their mission where it is most urgent: at the point-of-need.

  20. Strategies and equipment for sampling suspended sediment and associated toxic chemicals in large rivers - with emphasis on the Mississippi River

    Science.gov (United States)

    Meade, R.H.; Stevens, H.H.

    1990-01-01

    A Lagrangian strategy for sampling large rivers, which was developed and tested in the Orinoco and Amazon Rivers of South America during the early 1980s, is now being applied to the study of toxic chemicals in the Mississippi River. A series of 15-20 cross-sections of the Mississippi mainstem and its principal tributaries is sampled by boat in downstream sequence, beginning upriver of St. Louis and concluding downriver of New Orleans 3 weeks later. The timing of the downstream sampling sequence approximates the travel time of the river water. Samples at each cross-section are discharge-weighted to provide concentrations of dissolved and suspended constituents that are converted to fluxes. Water-sediment mixtures are collected from 10-40 equally spaced points across the river width by sequential depth integration at a uniform vertical transit rate. Essential equipment includes (i) a hydraulic winch, for sensitive control of vertical transit rates, and (ii) a collapsible-bag sampler, which allows integrated samples to be collected at all depths in the river. A section is usually sampled in 4-8 h, for a total sample recovery of 100-120 l. Sampled concentrations of suspended silt and clay are reproducible within 3%.
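
    The discharge-weighting described above amounts to a simple flux computation over the sampled verticals (a schematic sketch with invented numbers; concentrations in mg/L multiplied by discharge in m³/s give flux directly in g/s):

```python
def discharge_weighted(concs_mg_per_L, discharges_m3_per_s):
    """Constituent flux (g/s) and discharge-weighted mean concentration (mg/L)
    from per-vertical concentrations and the discharge each vertical represents."""
    flux = sum(c * q for c, q in zip(concs_mg_per_L, discharges_m3_per_s))
    q_total = sum(discharges_m3_per_s)
    return flux, flux / q_total

# Three verticals across a hypothetical cross-section:
flux, c_mean = discharge_weighted([10.0, 20.0, 30.0], [100.0, 200.0, 100.0])
print(flux, c_mean)   # 8000.0 g/s, 20.0 mg/L
```

    The discharge-weighted mean (20.0 mg/L here) is what the cross-section samples are designed to estimate; an unweighted mean of the three concentrations would misrepresent the river's load.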

  1. Ice nucleating particles from a large-scale sampling network: insight into geographic and temporal variability

    Science.gov (United States)

    Schrod, Jann; Weber, Daniel; Thomson, Erik S.; Pöhlker, Christopher; Saturno, Jorge; Artaxo, Paulo; Curtius, Joachim; Bingemer, Heinz

    2017-04-01

    The number concentration of ice nucleating particles (INP) is an important yet under-quantified atmospheric parameter. The temporal and geographic extent of observations worldwide remains relatively small, with many regions of the world (even whole continents and oceans) almost completely unrepresented by observational data. Measurements at pristine sites are particularly rare, but all the more valuable because such observations are necessary to estimate the pre-industrial baseline of aerosol- and cloud-related parameters that are needed to better understand the climate system and forecast future scenarios. As a partner of BACCHUS, in September 2014 we began to operate an INP measurement network of four sampling stations with a global geographic distribution. The stations are located at unique sites reaching from the Arctic to the equator: the Amazonian Tall Tower Observatory (ATTO) in Brazil, the Observatoire Volcanologique et Sismologique on the island of Martinique in the Caribbean Sea, the Zeppelin Observatory at Svalbard in the Norwegian Arctic, and the Taunus Observatory near Frankfurt, Germany. Since 2014, samples have been collected regularly by electrostatic precipitation of aerosol particles onto silicon substrates. The INP on the substrates are activated and analyzed in the isothermal static diffusion chamber FRIDGE at temperatures between -20°C and -30°C and relative humidities with respect to ice from 115 to 135%. Here we present data from the years 2015 and 2016 from this novel INP network and from selected campaign-based measurements at remote sites, including the Mt. Kenya GAW station. Acknowledgements: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) project BACCHUS under grant agreement No 603445 and the Deutsche Forschungsgemeinschaft (DFG) under the Research Unit FOR 1525 (INUIT).

  2. The Brief Negative Symptom Scale (BNSS): Independent validation in a large sample of Italian patients with schizophrenia.

    Science.gov (United States)

    Mucci, A; Galderisi, S; Merlotti, E; Rossi, A; Rocca, P; Bucci, P; Piegari, G; Chieffi, M; Vignapiano, A; Maj, M

    2015-07-01

    The Brief Negative Symptom Scale (BNSS) was developed to address the main limitations of the existing scales for the assessment of negative symptoms of schizophrenia. The initial validation of the scale by the group involved in its development demonstrated good convergent and discriminant validity, and a factor structure confirming the two domains of negative symptoms (reduced emotional/verbal expression and anhedonia/asociality/avolition). However, only relatively small samples of patients with schizophrenia were investigated. Further independent validation in large clinical samples might be instrumental to the broad diffusion of the scale in clinical research. The present study aimed to examine the BNSS inter-rater reliability, convergent/discriminant validity and factor structure in a large Italian sample of outpatients with schizophrenia. Our results confirmed the excellent inter-rater reliability of the BNSS (the intraclass correlation coefficient ranged from 0.81 to 0.98 for individual items and was 0.98 for the total score). The convergent validity measures had r values from 0.62 to 0.77, while the divergent validity measures had r values from 0.20 to 0.28 in the main sample (n=912) and in a subsample without clinically significant levels of depression and extrapyramidal symptoms (n=496). The BNSS factor structure was supported in both groups. The study confirms that the BNSS is a promising measure for quantifying negative symptoms of schizophrenia in large multicenter clinical studies. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  3. "Best Practices in Using Large, Complex Samples: The Importance of Using Appropriate Weights and Design Effect Compensation"

    Directory of Open Access Journals (Sweden)

    Jason W. Osborne

    2011-09-01

    Full Text Available Large surveys often use probability sampling in order to obtain representative samples, and these data sets are valuable tools for researchers in all areas of science. Yet many researchers are not formally prepared to appropriately utilize these resources. Indeed, users of one popular dataset were generally found not to have modeled their analyses to take account of the complex sample (Johnson & Elliott, 1998), even when publishing in highly regarded journals. It is well known that failure to appropriately model the complex sample can substantially bias the results of the analysis. Examples presented in this paper highlight the risk of errors of inference and mis-estimation of parameters that arises from failure to analyze these data sets appropriately.
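
    One concrete consequence of unequal sampling weights can be illustrated with Kish's approximate design effect, DEFF ≈ 1 + cv²(w), equivalently an effective sample size n_eff = (Σw)²/Σw² (an illustrative sketch with made-up data, not the procedure of any particular survey):

```python
def weighted_mean(values, weights):
    """Survey-weighted mean: each respondent counts in proportion to weight."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

def kish_effective_n(weights):
    """Kish's approximation: n_eff = (sum w)^2 / sum w^2.
    Equal weights give n_eff = n; variable weights give n_eff < n,
    so unweighted standard errors are too small."""
    return sum(weights) ** 2 / sum(w * w for w in weights)

# Toy data: four respondents with unequal probability-of-selection weights.
values = [10.0, 10.0, 20.0, 30.0]
weights = [1.0, 1.0, 2.0, 4.0]

print(weighted_mean(values, weights))   # 22.5, vs. unweighted mean 17.5
print(kish_effective_n(weights))        # about 2.9 of the nominal n = 4
```

    Ignoring the weights here biases the point estimate (17.5 vs. 22.5), and ignoring the design effect overstates the precision, which is exactly the error of inference the paper warns about.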

  4. Large-scale Samples Irradiation Facility at the IBR-2 Reactor in Dubna

    CERN Document Server

    Cheplakov, A P; Golubyh, S M; Kaskanov, G Ya; Kulagin, E N; Kukhtin, V V; Luschikov, V I; Shabalin, E P; León-Florián, E; Leroy, C

    1998-01-01

    The irradiation facility at the beam line no.3 of the IBR-2 reactor of the Frank Laboratory for Neutron Physics is described. The facility is aimed at irradiation studies of various objects with area up to 800 cm$^2$ both at cryogenic and ambient temperatures. The energy spectra of neutrons are reconstructed by the method of threshold detector activation. The neutron fluence and $\\gamma$ dose rates are measured by means of alanine and thermoluminescent dosimeters. The boron carbide and lead filters or $(n/\\gamma)$ converter provide beams of different ratio of doses induced by neutrons and photons. For the lead filter, the flux of fast neutrons with energy more than 0.1 MeV is $1.4 \\cdot 10^{10}$ \\fln and the neutron dose is about 96\\% of the total radiation dose. For the $(n/\\gamma)$ converter, the $\\gamma$ dose rate is $\\sim$500 Gy h$^{-1}$ which is about 85\\% of the total dose. The radiation hardness tests of GaAs electronics and materials for the ATLAS detector to be put into operation at the Large Hadron ...

  5. Automated flow cytometric analysis across large numbers of samples and cell types.

    Science.gov (United States)

    Chen, Xiaoyi; Hasan, Milena; Libri, Valentina; Urrutia, Alejandra; Beitz, Benoît; Rouilly, Vincent; Duffy, Darragh; Patin, Étienne; Chalmond, Bernard; Rogge, Lars; Quintana-Murci, Lluis; Albert, Matthew L; Schwikowski, Benno

    2015-04-01

    Multi-parametric flow cytometry is a key technology for characterization of immune cell phenotypes. However, robust high-dimensional post-analytic strategies for automated data analysis in large numbers of donors are still lacking. Here, we report a computational pipeline, called FlowGM, which minimizes operator input, is insensitive to compensation settings, and can be adapted to different analytic panels. A Gaussian Mixture Model (GMM)-based approach was utilized for initial clustering, with the number of clusters determined using Bayesian Information Criterion. Meta-clustering in a reference donor permitted automated identification of 24 cell types across four panels. Cluster labels were integrated into FCS files, thus permitting comparisons to manual gating. Cell numbers and coefficient of variation (CV) were similar between FlowGM and conventional gating for lymphocyte populations, but notably FlowGM provided improved discrimination of "hard-to-gate" monocyte and dendritic cell (DC) subsets. FlowGM thus provides rapid high-dimensional analysis of cell phenotypes and is amenable to cohort studies. Copyright © 2015. Published by Elsevier Inc.
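
    The model-selection idea described (Gaussian mixture clustering with the number of clusters chosen by the Bayesian Information Criterion) can be sketched in one dimension with a small EM implementation (an illustrative toy with synthetic data, not FlowGM itself, which operates on multi-parametric compensated cytometry data):

```python
import math
import random

random.seed(0)

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def fit_gmm_1d(data, k, iters=100):
    """EM for a k-component 1-D Gaussian mixture; returns the log-likelihood."""
    data = sorted(data)
    n = len(data)
    mean = sum(data) / n
    total_var = sum((x - mean) ** 2 for x in data) / n
    # Initialise means at evenly spaced quantiles, shared variance, equal weights.
    mu = [data[int((j + 0.5) * n / k)] for j in range(k)]
    var = [max(total_var, 1e-6)] * k
    pi = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in data:
            dens = [pi[j] * normal_pdf(x, mu[j], var[j]) for j in range(k)]
            s = sum(dens) or 1e-300
            resp.append([d / s for d in dens])
        # M-step: re-estimate weights, means, variances (with a variance floor).
        for j in range(k):
            nj = sum(r[j] for r in resp) + 1e-12
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var[j] = max(1e-6, sum(r[j] * (x - mu[j]) ** 2
                                   for r, x in zip(resp, data)) / nj)
            pi[j] = nj / n
    return sum(math.log(sum(pi[j] * normal_pdf(x, mu[j], var[j])
                            for j in range(k)) + 1e-300) for x in data)

def bic(loglik, k, n):
    # A 1-D k-component mixture has 3k - 1 free parameters; lower BIC is better.
    return (3 * k - 1) * math.log(n) - 2.0 * loglik

# Two well-separated synthetic "cell populations".
data = ([random.gauss(0.0, 0.5) for _ in range(100)]
        + [random.gauss(5.0, 0.5) for _ in range(100)])
scores = {k: bic(fit_gmm_1d(data, k), k, len(data)) for k in (1, 2, 3)}
best_k = min(scores, key=scores.get)
print(best_k)
```

    BIC trades goodness of fit against model complexity, so it should recover the two underlying populations here; FlowGM applies the same principle in higher dimensions with meta-clustering across donors.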

  6. Predicting violence and recidivism in a large sample of males on probation or parole.

    Science.gov (United States)

    Prell, Lettie; Vitacco, Michael J; Zavodny, Denis

    This study evaluated the utility of items and scales from the Iowa Violence and Victimization Instrument in a sample of 1,961 males from the state of Iowa who were on probation or released from prison to parole supervision. This is the first study to examine the potential of the Iowa Violence and Victimization Instrument to predict criminal offenses. The males were followed for 30 months immediately following their admission to probation or parole. AUC analyses indicated fair-to-good predictive power for the Iowa Violence and Victimization Instrument for charges of violence and victimization, but chance-level predictive power for drug offenses. Notably, both scales of the instrument performed equally well at the 30-month follow-up. Items on the Iowa Violence and Victimization Instrument not only predicted violence, but are straightforward to score. Violence management strategies are discussed as they relate to the current findings, including the potential to expand the measure to other jurisdictions and populations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Quantitative Examination of a Large Sample of Supra-Arcade Downflows in Eruptive Solar Flares

    Science.gov (United States)

    Savage, Sabrina L.; McKenzie, David E.

    2011-01-01

    Sunward-flowing voids above post-coronal-mass-ejection flare arcades were first discovered using the soft X-ray telescope aboard Yohkoh and have since been observed with TRACE (extreme ultraviolet (EUV)), SOHO/LASCO (white light), SOHO/SUMER (EUV spectra), and Hinode/XRT (soft X-rays). Supra-arcade downflow (SAD) observations suggest that they are the cross-sections of thin flux tubes retracting from a reconnection site high in the corona. Supra-arcade downflowing loops (SADLs) have also been observed under similar circumstances and are theorized to be SADs viewed from a perpendicular angle. Although previous studies have focused on dark flows because they are easier to detect and complementary spectral data analysis reveals their magnetic nature, the signal intensity of the flows actually ranges from dark to bright. This implies that newly reconnected coronal loops can contain a range of hot plasma densities. Previous studies have presented detailed SAD observations for a small number of flares. In this paper, we present a substantial catalog of flares exhibiting SADs and SADLs. We have applied semiautomatic detection software to several of these events to detect and track individual downflows, thereby providing statistically significant samples of parameters such as velocity, acceleration, area, magnetic flux, shrinkage energy, and reconnection rate. We discuss these measurements (particularly the unexpected result that the speeds are an order of magnitude slower than the assumed Alfvén speed), how they were obtained, and their potential impact on reconnection models.

  8. An empirical investigation of incompleteness in a large clinical sample of obsessive compulsive disorder.

    Science.gov (United States)

    Sibrava, Nicholas J; Boisseau, Christina L; Eisen, Jane L; Mancebo, Maria C; Rasmussen, Steven A

    2016-08-01

    Obsessive Compulsive Disorder (OCD) is a disorder with heterogeneous clinical presentations. To advance our understanding of this heterogeneity we investigated the prevalence and clinical features associated with incompleteness (INC), a putative underlying core feature of OCD. We predicted INC would be prominent in individuals with OCD and associated with greater severity and impairment. We examined the impact of INC in 307 adults with primary OCD. Participants with clinically significant INC (22.8% of the sample) had significantly greater OCD severity, greater rates of comorbidity, poorer ratings of functioning, lower quality of life, and higher rates of unemployment and disability. Participants with clinically significant INC were also more likely to be diagnosed with OCPD and to endorse symmetry/exactness obsessions and ordering/arranging compulsions than those who reported low INC. Our findings provide evidence that INC is associated with greater severity, comorbidity, and impairment, highlighting the need for improved assessment and treatment of INC in OCD. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Analysis of Three Compounds in Flos Farfarae by Capillary Electrophoresis with Large-Volume Sample Stacking

    Directory of Open Access Journals (Sweden)

    Hai-xia Yu

    2017-01-01

    Full Text Available The aim of this study was to develop a method combining online concentration with high-efficiency capillary electrophoresis separation to analyze and detect three compounds (rutin, hyperoside, and chlorogenic acid) in Flos Farfarae. In order to obtain good resolution and enrichment, several parameters, such as the choice of running buffer, pH and concentration of the running buffer, organic modifier, temperature, and separation voltage, were investigated. The optimized conditions were as follows: a buffer of 40 mM NaH2PO4-40 mM Borax-30% v/v methanol (pH 9.0); hydrodynamic sample injection for up to 4 s at 0.5 psi; and an applied voltage of 20 kV. A diode-array detector was used, with the detection wavelength set at 364 nm. Based on peak area, marked improvements in selectivity and sensitivity were observed, and about 14-, 26-, and 5-fold enrichments of rutin, hyperoside, and chlorogenic acid were achieved, respectively. This method was successfully applied to determine the three compounds in Flos Farfarae. The calibration curves of peak response versus concentration were linear from 20 to 400 µg/mL, 16.5 to 330 µg/mL, and 25 to 500 µg/mL, respectively, with regression coefficients of 0.9998, 0.9999, and 0.9991.
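
    Calibration curves like those above are ordinary least-squares fits of peak response against concentration. A minimal sketch with hypothetical calibration points (the record does not give the measured peak areas, so the numbers below are invented for illustration):

    ```python
    def linear_fit(x, y):
        """Ordinary least-squares fit y = a*x + b; returns (slope, intercept, r^2)."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        a = sxy / sxx
        b = my - a * mx
        ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
        ss_tot = sum((yi - my) ** 2 for yi in y)
        return a, b, 1 - ss_res / ss_tot

    # Hypothetical calibration points for one analyte: concentration (ug/mL)
    # versus measured peak area.
    conc = [20, 50, 100, 200, 400]
    area = [4.1, 10.2, 20.5, 40.9, 81.6]
    slope, intercept, r2 = linear_fit(conc, area)
    ```

    The regression coefficient (r²) reported for each analyte is the figure that validates linearity over the stated concentration range.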

  10. Adverse Childhood Environment: Relationship With Sexual Risk Behaviors and Marital Status in a Large American Sample.

    Science.gov (United States)

    Anderson, Kermyt G

    2017-01-01

    A substantial theoretical and empirical literature suggests that stressful events in childhood influence the timing and patterning of subsequent sexual and reproductive behaviors. Stressful childhood environments have been predicted to produce a life history strategy in which adults are oriented more toward short-term mating behaviors and less toward behaviors consistent with longevity. This article tests the hypothesis that adverse childhood environment will predict adult outcomes in two areas: risky sexual behavior (engagement in sexual risk behavior or having taken an HIV test) and marital status (currently married vs. never married, divorced, or a member of an unmarried couple). Data come from the Behavioral Risk Factor Surveillance System. The sample contains 17,530 men and 23,978 women aged 18-54 years living in 13 U.S. states plus the District of Columbia. Adverse childhood environment is assessed through 11 retrospective measures of childhood environment, including having grown up with someone who was depressed or mentally ill, who was an alcoholic, who used or abused drugs, or who served time in prison; whether one's parents divorced in childhood; and two scales measuring childhood exposure to violence and to sexual trauma. The results indicate that adverse childhood environment is associated with increased likelihood of engaging in sexual risk behaviors or taking an HIV test, and increased likelihood of being in an unmarried couple or divorced/separated, for both men and women. The predictions are supported by the data, lending further support to the hypothesis that childhood environments influence adult reproductive strategy.

  11. A Principal Component Analysis of Galaxy Properties from a Large, Gas-Selected Sample

    Directory of Open Access Journals (Sweden)

    Yu-Yen Chang

    2012-01-01

    concluded that this is in conflict with the CDM model. Considering the importance of the issue, we reinvestigate the problem using principal component analysis on a fivefold larger sample and additional near-infrared data. We use databases from the Arecibo Legacy Fast Arecibo L-band Feed Array Survey for the gas properties, the Sloan Digital Sky Survey for the optical properties, and the Two Micron All Sky Survey for the near-infrared properties. We confirm that the parameters are indeed correlated, with a single physical parameter explaining 83% of the variance. When color (g-i) is included, the first component still dominates but a second principal component develops. In addition, the near-infrared color (i-J) shows an obvious second principal component that might provide evidence of the complex old star formation. Based on our data, we suggest that it is premature to pronounce the failure of the CDM model, and this motivates more theoretical work.
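
    The principal component analysis described above can be sketched in a few lines. This is a minimal illustration on synthetic data in which five observables are driven mostly by one latent quantity, mimicking the finding that a single parameter dominates the variance; the data are invented, not the survey catalogs used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in for five galaxy observables, each driven mostly by a
    # single latent quantity plus independent noise (invented data).
    latent = rng.normal(size=500)
    X = np.column_stack([latent + 0.2 * rng.normal(size=500) for _ in range(5)])

    # Standardize each observable, then diagonalize the correlation matrix.
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
    explained = eigvals[::-1] / eigvals.sum()  # variance fractions, descending
    ```

    With correlations this strong, the first component accounts for well over 90% of the variance, analogous to the 83% reported for the real galaxy sample.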

  12. Collision energy alteration during mass spectrometric acquisition is essential to ensure unbiased metabolomic analysis

    CSIR Research Space (South Africa)

    Madala, NE

    2012-08-01

    Full Text Available Metabolomics entails identification and quantification of all metabolites within a biological system with a given physiological status; as such, it should be unbiased. A variety of techniques are used to measure the metabolite content of living...

  13. Efficient inference of population size histories and locus-specific mutation rates from large-sample genomic variation data.

    Science.gov (United States)

    Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S

    2015-02-01

    With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.
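
    The piecewise-exponential demographic models mentioned above can be encoded directly as a function of time. The sketch below uses invented epoch parameters (not estimates from the paper): recent exponential growth by a factor of 100 over the last 100 generations, and constant size before that.

    ```python
    import math

    def pop_size(t, epochs):
        """Piecewise-exponential N(t), with t measured backward from the present
        in generations. `epochs` is a list of (t_start, N_start, rate) tuples
        sorted by t_start; within an epoch,
        N(t) = N_start * exp(-rate * (t - t_start))."""
        for t_start, n_start, rate in reversed(epochs):
            if t >= t_start:
                return n_start * math.exp(-rate * (t - t_start))
        raise ValueError("t must be >= the earliest epoch start")

    # Invented epochs: growth from N = 1e4 to N = 1e6 over the last
    # 100 generations, constant size earlier.
    growth_rate = math.log(100) / 100
    epochs = [(0.0, 1e6, growth_rate), (100.0, 1e4, 0.0)]
    ```

    In an actual inference, these epoch parameters would be the quantities optimized against the observed sample allele frequency spectrum.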

  14. Does higher education hone cognitive functioning and learning efficacy? Findings from a large and diverse sample

    Science.gov (United States)

    Guerra-Carrillo, Belén; Katovich, Kiefer

    2017-01-01

    Attending school is a multifaceted experience. Students are not only exposed to new knowledge but are also immersed in a structured environment in which they need to respond flexibly in accordance with changing task goals, keep relevant information in mind, and constantly tackle novel problems. To quantify the cumulative effect of this experience, we examined retrospectively and prospectively, the relationships between educational attainment and both cognitive performance and learning. We analyzed data from 196,388 subscribers to an online cognitive training program. These subscribers, ages 15–60, had completed eight behavioral assessments of executive functioning and reasoning at least once. Controlling for multiple demographic and engagement variables, we found that higher levels of education predicted better performance across the full age range, and modulated performance in some cognitive domains more than others (e.g., reasoning vs. processing speed). Differences were moderate for Bachelor’s degree vs. High School (d = 0.51), and large between Ph.D. vs. Some High School (d = 0.80). Further, the ages of peak cognitive performance for each educational category closely followed the typical range of ages at graduation. This result is consistent with a cumulative effect of recent educational experiences, as well as a decrement in performance as completion of schooling becomes more distant. To begin to characterize the directionality of the relationship between educational attainment and cognitive performance, we conducted a prospective longitudinal analysis. For a subset of 69,202 subscribers who had completed 100 days of cognitive training, we tested whether the degree of novel learning was associated with their level of education. Higher educational attainment predicted bigger gains, but the differences were small (d = 0.04–0.37). Altogether, these results point to the long-lasting trace of an effect of prior cognitive challenges but suggest that new

  15. Does higher education hone cognitive functioning and learning efficacy? Findings from a large and diverse sample.

    Science.gov (United States)

    Guerra-Carrillo, Belén; Katovich, Kiefer; Bunge, Silvia A

    2017-01-01

    Attending school is a multifaceted experience. Students are not only exposed to new knowledge but are also immersed in a structured environment in which they need to respond flexibly in accordance with changing task goals, keep relevant information in mind, and constantly tackle novel problems. To quantify the cumulative effect of this experience, we examined retrospectively and prospectively, the relationships between educational attainment and both cognitive performance and learning. We analyzed data from 196,388 subscribers to an online cognitive training program. These subscribers, ages 15-60, had completed eight behavioral assessments of executive functioning and reasoning at least once. Controlling for multiple demographic and engagement variables, we found that higher levels of education predicted better performance across the full age range, and modulated performance in some cognitive domains more than others (e.g., reasoning vs. processing speed). Differences were moderate for Bachelor's degree vs. High School (d = 0.51), and large between Ph.D. vs. Some High School (d = 0.80). Further, the ages of peak cognitive performance for each educational category closely followed the typical range of ages at graduation. This result is consistent with a cumulative effect of recent educational experiences, as well as a decrement in performance as completion of schooling becomes more distant. To begin to characterize the directionality of the relationship between educational attainment and cognitive performance, we conducted a prospective longitudinal analysis. For a subset of 69,202 subscribers who had completed 100 days of cognitive training, we tested whether the degree of novel learning was associated with their level of education. Higher educational attainment predicted bigger gains, but the differences were small (d = 0.04-0.37). Altogether, these results point to the long-lasting trace of an effect of prior cognitive challenges but suggest that new learning

  16. BROAD ABSORPTION LINE VARIABILITY ON MULTI-YEAR TIMESCALES IN A LARGE QUASAR SAMPLE

    Energy Technology Data Exchange (ETDEWEB)

    Filiz Ak, N.; Brandt, W. N.; Schneider, D. P. [Department of Astronomy and Astrophysics, Pennsylvania State University, University Park, PA 16802 (United States); Hall, P. B. [Department of Physics and Astronomy, York University, 4700 Keele St., Toronto, Ontario, M3J 1P3 (Canada); Anderson, S. F. [Astronomy Department, University of Washington, Seattle, WA 98195 (United States); Hamann, F. [Department of Astronomy, University of Florida, Gainesville, FL 32611-2055 (United States); Lundgren, B. F. [Department of Astronomy, University of Wisconsin, Madison, WI 53706 (United States); Myers, Adam D. [Department of Physics and Astronomy, University of Wyoming, Laramie, WY 82071 (United States); Pâris, I. [Departamento de Astronomía, Universidad de Chile, Casilla 36-D, Santiago (Chile); Petitjean, P. [Universite Paris 6, Institut d' Astrophysique de Paris, 75014, Paris (France); Ross, Nicholas P. [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 92420 (United States); Shen, Yue [Harvard-Smithsonian Center for Astrophysics, 60 Garden St., MS-51, Cambridge, MA 02138 (United States); York, Don, E-mail: nfilizak@astro.psu.edu [The University of Chicago, Department of Astronomy and Astrophysics, Chicago, IL 60637 (United States)

    2013-11-10

    We present a detailed investigation of the variability of 428 C IV and 235 Si IV broad absorption line (BAL) troughs identified in multi-epoch observations of 291 quasars by the Sloan Digital Sky Survey-I/II/III. These observations primarily sample rest-frame timescales of 1-3.7 yr over which significant rearrangement of the BAL wind is expected. We derive a number of observational results on, e.g., the frequency of BAL variability, the velocity range over which BAL variability occurs, the primary observed form of BAL-trough variability, the dependence of BAL variability upon timescale, the frequency of BAL strengthening versus weakening, correlations between BAL variability and BAL-trough profiles, relations between C IV and Si IV BAL variability, coordinated multi-trough variability, and BAL variations as a function of quasar properties. We assess implications of these observational results for quasar winds. Our results support models where most BAL absorption is formed within an order of magnitude of the wind-launching radius, although a significant minority of BAL troughs may arise on larger scales. We estimate an average lifetime for a BAL trough along our line of sight of a few thousand years. BAL disappearance and emergence events appear to be extremes of general BAL variability, rather than being qualitatively distinct phenomena. We derive the parameters of a random-walk model for BAL EW variability, finding that this model can acceptably describe some key aspects of EW variability. The coordinated trough variability of BAL quasars with multiple troughs suggests that changes in 'shielding gas' may play a significant role in driving general BAL variability.

  17. Racialized risk environments in a large sample of people who inject drugs in the United States.

    Science.gov (United States)

    Cooper, Hannah L F; Linton, Sabriya; Kelley, Mary E; Ross, Zev; Wolfe, Mary E; Chen, Yen-Tyng; Zlotorzynska, Maria; Hunter-Jones, Josalin; Friedman, Samuel R; Des Jarlais, Don; Semaan, Salaam; Tempalski, Barbara; DiNenno, Elizabeth; Broz, Dita; Wejnert, Cyprian; Paz-Bailey, Gabriela

    2016-01-01

    Substantial racial/ethnic disparities exist in HIV infection among people who inject drugs (PWID) in many countries. To strengthen efforts to understand the causes of disparities in HIV-related outcomes and eliminate them, we expand the "Risk Environment Model" to encompass the construct "racialized risk environments," and investigate whether PWID risk environments in the United States are racialized. Specifically, we investigate whether black and Latino PWID are more likely than white PWID to live in places that create vulnerability to adverse HIV-related outcomes. As part of the Centers for Disease Control and Prevention's National HIV Behavioral Surveillance, 9170 PWID were sampled from 19 metropolitan statistical areas (MSAs) in 2009. Self-reported data were used to ascertain PWID race/ethnicity. Using Census data and other administrative sources, we characterized features of PWID risk environments at four geographic scales (i.e., ZIP codes, counties, MSAs, and states). Means for each feature of the risk environment were computed for each racial/ethnic group of PWID, and were compared across racial/ethnic groups. Almost universally across measures, black PWID were more likely than white PWID to live in environments associated with vulnerability to adverse HIV-related outcomes. Compared to white PWID, black PWID lived in ZIP codes with higher poverty rates and worse spatial access to substance abuse treatment and in counties with higher violent crime rates. Black PWID were less likely to live in states with laws facilitating sterile syringe access (e.g., laws permitting over-the-counter syringe sales). Latino/white differences in risk environments emerged at the MSA level (e.g., Latino PWID lived in MSAs with higher drug-related arrest rates). PWID risk environments in the US are racialized. Future research should explore the implications of this racialization for racial/ethnic disparities in HIV-related outcomes, using appropriate methods. Copyright © 2015

  18. Core belief content examined in a large sample of patients using online cognitive behaviour therapy.

    Science.gov (United States)

    Millings, Abigail; Carnelley, Katherine B

    2015-11-01

    Computerised cognitive behavioural therapy provides a unique opportunity to collect and analyse data regarding the idiosyncratic content of people's core beliefs about the self, others and the world. 'Beating the Blues' users recorded a core belief derived through the downward arrow technique. Core beliefs from 1813 mental health patients were coded into 10 categories. The most common were global self-evaluation, attachment, and competence. Women were more likely, and men less likely (than chance), to provide an attachment-related core belief; men were more likely, and women less likely, to provide a self-competence-related core belief. This may be linked to gender differences in sources of self-esteem. Those who were suffering from anxiety were more likely to provide power- and control-themed core beliefs and less likely to provide attachment core beliefs than chance. Finally, those who had thoughts of suicide in the preceding week reported fewer competence-themed core beliefs and more global self-evaluation (e.g., 'I am useless') core beliefs than chance. Concurrent symptom level was not available. The sample was not nationally representative, and featured programme completers only. Men and women may focus on different core beliefs in the context of CBT. Those suffering anxiety may need a therapeutic focus on power and control. A complete rejection of the self (not just within one domain, such as competence) may be linked to thoughts of suicide. Future research should examine how individual differences and symptom severity influence core beliefs. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  19. Methodology for Quantitative Analysis of Large Liquid Samples with Prompt Gamma Neutron Activation Analysis using Am-Be Source

    International Nuclear Information System (INIS)

    Idiri, Z.; Mazrou, H.; Beddek, S.; Amokrane, A.

    2009-01-01

    An optimized set-up for prompt gamma neutron activation analysis (PGNAA) with an Am-Be source is described and used for the analysis of large liquid samples. A methodology for quantitative analysis is proposed: it consists of normalizing the prompt gamma count rates with thermal neutron flux measurements carried out with a He-3 detector and gamma attenuation factors calculated using MCNP-5. Both relative and absolute methods are considered. This methodology is then applied to the determination of cadmium in industrial phosphoric acid. The same sample is then analyzed by the inductively coupled plasma (ICP) method. Our results are in good agreement with those obtained with the ICP method.
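
    The relative quantification idea, correcting each count rate for the measured thermal flux and the computed attenuation factor before scaling against a reference of known content, can be sketched as follows. The function and its arguments are illustrative only, not the authors' exact formulation.

    ```python
    def element_mass_relative(rate, rate_ref, mass_ref, flux, flux_ref, att, att_ref):
        """Relative PGNAA quantification (sketch): correct each prompt gamma
        count rate for the thermal neutron flux seen by the sample and for
        gamma attenuation, then scale against a reference sample of known mass."""
        corrected = rate / (flux * att)
        corrected_ref = rate_ref / (flux_ref * att_ref)
        return mass_ref * corrected / corrected_ref

    # Toy numbers: twice the corrected count rate of a 1 g reference -> 2 g.
    mass = element_mass_relative(200.0, 100.0, 1.0, 1.0, 1.0, 1.0, 1.0)
    ```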

  20. Prevalence of overweight and obesity in a large clinical sample of children with autism.

    Science.gov (United States)

    Broder-Fingert, Sarabeth; Brazauskas, Karissa; Lindgren, Kristen; Iannuzzi, Dorothea; Van Cleave, Jeanne

    2014-01-01

    Overweight and obesity are major pediatric public health problems in the United States; however, limited data exist on the prevalence and correlates of overnutrition in children with autism. Through a large integrated health care system's patient database, we identified 6672 children ages 2 to 20 years with an assigned ICD-9 code of autism (299.0), Asperger syndrome (299.8), and control subjects from 2008 to 2011 who had at least 1 weight and height recorded in the same visit. We calculated age-adjusted, sex-adjusted body mass index and classified children as overweight (body mass index 85th to 95th percentile) or obese (≥ 95th percentile). We used multinomial logistic regression to compare the odds of overweight and obesity between groups. We then used logistic regression to evaluate factors associated with overweight and obesity in children with autism, including demographic and clinical characteristics. Compared to control subjects, children with autism and Asperger syndrome had significantly higher odds of overweight (odds ratio, 95% confidence interval: autism 2.24, 1.74-2.88; Asperger syndrome 1.49, 1.12-1.97) and obesity (autism 4.83, 3.85-6.06; Asperger syndrome 5.69, 4.50-7.21). Among children with autism, we found a higher odds of obesity in older children (aged 12-15 years 1.87, 1.33-2.63; aged 16-20 years 1.94, 1.39-2.71) compared to children aged 6 to 11 years. We also found higher odds of overweight and obesity in those with public insurance (overweight 1.54, 1.25-1.89; obese 1.16, 1.02-1.40) and with co-occurring sleep disorder (obese 1.23, 1.00-1.53). Children with autism and Asperger syndrome had significantly higher odds of overweight and obesity than control subjects. Older age, public insurance, and co-occurring sleep disorder were associated with overweight or obesity in this population. Copyright © 2014 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.
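
    Odds ratios like those reported above compare the odds of an outcome between groups; for a single binary exposure they reduce to the cross-product of a 2x2 table, with a Woolf-type confidence interval on the log scale. A minimal sketch with invented counts (not the study's data):

    ```python
    import math

    def odds_ratio(a, b, c, d):
        """Odds ratio and Woolf 95% CI for a 2x2 table:
        a = exposed cases, b = exposed non-cases,
        c = unexposed cases, d = unexposed non-cases."""
        est = (a * d) / (b * c)
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of the log odds ratio
        lo = math.exp(math.log(est) - 1.96 * se)
        hi = math.exp(math.log(est) + 1.96 * se)
        return est, lo, hi

    # Invented counts: obese vs. non-obese among children with and without
    # a diagnosis.
    est, lo, hi = odds_ratio(180, 820, 300, 6500)
    ```

    The study's multivariable estimates additionally adjust for covariates via logistic regression, but the interpretation of the resulting odds ratios and confidence intervals is the same.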

  1. The Pemberton Happiness Index: Validation of the Universal Portuguese version in a large Brazilian sample.

    Science.gov (United States)

    Paiva, Bianca Sakamoto Ribeiro; de Camargos, Mayara Goulart; Demarzo, Marcelo Marcos Piva; Hervás, Gonzalo; Vázquez, Carmelo; Paiva, Carlos Eduardo

    2016-09-01

    The Pemberton Happiness Index (PHI) is a recently developed integrative measure of well-being that includes components of hedonic, eudaimonic, social, and experienced well-being. The PHI has been validated in several languages, but not in Portuguese. Our aim was to cross-culturally adapt the Universal Portuguese version of the PHI and to assess its psychometric properties in a sample of the Brazilian population using online surveys. An expert committee evaluated 2 versions of the PHI previously translated into Portuguese by the original authors using a standardized form for assessment of semantic/idiomatic, cultural, and conceptual equivalence. A pretesting was conducted employing cognitive debriefing methods. In sequence, the expert committee evaluated all the documents and reached a final Universal Portuguese PHI version. For the evaluation of the psychometric properties, the data were collected using online surveys in a cross-sectional study. The study population included healthcare professionals and users of the social network site Facebook from several Brazilian geographic areas. In addition to the PHI, participants completed the Satisfaction with Life Scale (SWLS), Diener and Emmons' Positive and Negative Experience Scale (PNES), Psychological Well-being Scale (PWS), and the Subjective Happiness Scale (SHS). Internal consistency, convergent validity, known-group validity, and test-retest reliability were evaluated. Satisfaction with the previous day was correlated with the 10 items assessing experienced well-being using the Cramer V test. Additionally, a cut-off value of PHI to identify a "happy individual" was defined using receiver-operating characteristic (ROC) curve methodology. Data from 1035 Brazilian participants were analyzed (health professionals = 180; Facebook users = 855). Regarding reliability results, the internal consistency (Cronbach alpha = 0.890 and 0.914) and test-retest (intraclass correlation coefficient = 0.814) were both considered
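
    Internal consistency figures such as the Cronbach alpha values above are computed from the item-level variances and the variance of the total score. A minimal sketch with made-up ratings (not the study's data):

    ```python
    def cronbach_alpha(items):
        """Cronbach's alpha. `items` is a list of per-item score lists,
        one entry per respondent, all the same length."""
        k = len(items)
        n = len(items[0])

        def var(xs):  # sample variance (n - 1 denominator)
            m = sum(xs) / len(xs)
            return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

        totals = [sum(item[i] for item in items) for i in range(n)]
        return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))

    # Made-up ratings: three items scored by four respondents.
    items = [[4, 2, 3, 5], [3, 2, 4, 5], [4, 1, 3, 4]]
    alpha = cronbach_alpha(items)
    ```

    Values near 0.9, as reported for the PHI, indicate that the items covary strongly relative to their individual noise.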

  2. Similar brain activation during false belief tasks in a large sample of adults with and without autism.

    Science.gov (United States)

    Dufour, Nicholas; Redcay, Elizabeth; Young, Liane; Mavros, Penelope L; Moran, Joseph M; Triantafyllou, Christina; Gabrieli, John D E; Saxe, Rebecca

    2013-01-01

    Reading about another person's beliefs engages 'Theory of Mind' processes and elicits highly reliable brain activation across individuals and experimental paradigms. Using functional magnetic resonance imaging, we examined activation during a story task designed to elicit Theory of Mind processing in a very large sample of neurotypical (N = 462) individuals, and a group of high-functioning individuals with autism spectrum disorders (N = 31), using both region-of-interest and whole-brain analyses. This large sample allowed us to investigate group differences in brain activation to Theory of Mind tasks with unusually high sensitivity. There were no differences between neurotypical participants and those diagnosed with autism spectrum disorder. These results imply that the social cognitive impairments typical of autism spectrum disorder can occur without measurable changes in the size, location or response magnitude of activity during explicit Theory of Mind tasks administered to adults.

  3. Similar brain activation during false belief tasks in a large sample of adults with and without autism.

    Directory of Open Access Journals (Sweden)

    Nicholas Dufour

    Full Text Available Reading about another person's beliefs engages 'Theory of Mind' processes and elicits highly reliable brain activation across individuals and experimental paradigms. Using functional magnetic resonance imaging, we examined activation during a story task designed to elicit Theory of Mind processing in a very large sample of neurotypical (N = 462) individuals, and a group of high-functioning individuals with autism spectrum disorders (N = 31), using both region-of-interest and whole-brain analyses. This large sample allowed us to investigate group differences in brain activation to Theory of Mind tasks with unusually high sensitivity. There were no differences between neurotypical participants and those diagnosed with autism spectrum disorder. These results imply that the social cognitive impairments typical of autism spectrum disorder can occur without measurable changes in the size, location or response magnitude of activity during explicit Theory of Mind tasks administered to adults.

  4. CORRELATION ANALYSIS OF A LARGE SAMPLE OF NARROW-LINE SEYFERT 1 GALAXIES: LINKING CENTRAL ENGINE AND HOST PROPERTIES

    International Nuclear Information System (INIS)

    Xu Dawei; Komossa, S.; Wang Jing; Yuan Weimin; Zhou Hongyan; Lu Honglin; Li Cheng; Grupe, Dirk

    2012-01-01

    We present a statistical study of a large, homogeneously analyzed sample of narrow-line Seyfert 1 (NLS1) galaxies, accompanied by a comparison sample of broad-line Seyfert 1 (BLS1) galaxies. Optical emission-line and continuum properties are subjected to correlation analyses, in order to identify the main drivers of the correlation space of active galactic nuclei (AGNs), and of NLS1 galaxies in particular. For the first time, we have established the density of the narrow-line region as a key parameter in Eigenvector 1 space, as important as the Eddington ratio L/L_Edd. This is important because it links the properties of the central engine with the properties of the host galaxy, i.e., the interstellar medium (ISM). We also confirm previously found correlations involving the line width of Hβ and the strength of the Fe II and [O III] λ5007 emission lines, and we confirm the important role played by L/L_Edd in driving the properties of NLS1 galaxies. A spatial correlation analysis shows that large-scale environments of the BLS1 and NLS1 galaxies of our sample are similar. If mergers are rare in our sample, accretion-driven winds, on the one hand, or bar-driven inflows, on the other hand, may account for the strong dependence of Eigenvector 1 on ISM density.

  5. Method for the radioimmunoassay of large numbers of samples using quantitative autoradiography of multiple-well plates

    International Nuclear Information System (INIS)

    Luner, S.J.

    1978-01-01

    A double antibody assay for thyroxine using ¹²⁵I as label was carried out on 10-μl samples in Microtiter V-plates. After an additional centrifugation to compact the precipitates, the plates were placed in contact with x-ray film overnight and the spots were scanned. In the 20 to 160 ng/ml range, the average coefficient of variation for thyroxine concentration determined on the basis of film spot optical density was 11 percent, compared to 4.8 percent obtained using a standard gamma counter. Eliminating the need for each sample to spend on the order of 1 min in a crystal well detector makes the method convenient for large-scale applications involving more than 3000 samples per day.

  6. Development and application of spatial and temporal statistical methods for unbiased wildlife sampling

    NARCIS (Netherlands)

    Khaemba, W.M.

    2000-01-01

    Current methods of obtaining information on wildlife populations are based on monitoring programmes using periodic surveys. In most cases aerial techniques are applied. Reported numbers are, however, often biased and imprecise, making it difficult to use this information for management

  7. A topological analysis of large-scale structure, studied using the CMASS sample of SDSS-III

    International Nuclear Information System (INIS)

    Parihar, Prachi; Gott, J. Richard III; Vogeley, Michael S.; Choi, Yun-Young; Kim, Juhan; Kim, Sungsoo S.; Speare, Robert; Brownstein, Joel R.; Brinkmann, J.

    2014-01-01

    We study the three-dimensional genus topology of large-scale structure using the northern region of the CMASS Data Release 10 (DR10) sample of the SDSS-III Baryon Oscillation Spectroscopic Survey. We select galaxies with redshift 0.452 < z < 0.625 and with a stellar mass M_stellar > 10^11.56 M_☉. We study the topology at two smoothing lengths: R_G = 21 h⁻¹ Mpc and R_G = 34 h⁻¹ Mpc. The genus topology studied at the R_G = 21 h⁻¹ Mpc scale results in the highest genus amplitude observed to date. The CMASS sample yields a genus curve that is characteristic of one produced by Gaussian random phase initial conditions. The data thus support the standard model of inflation where random quantum fluctuations in the early universe produced Gaussian random phase initial conditions. Modest deviations in the observed genus from random phase are as expected from shot noise effects and the nonlinear evolution of structure. We suggest the use of a fitting formula motivated by perturbation theory to characterize the shift and asymmetries in the observed genus curve with a single parameter. We construct 54 mock SDSS CMASS surveys along the past light cone from the Horizon Run 3 (HR3) N-body simulations, where gravitationally bound dark matter subhalos are identified as the sites of galaxy formation. We study the genus topology of the HR3 mock surveys with the same geometry and sampling density as the observational sample and find the observed genus topology to be consistent with ΛCDM as simulated by the HR3 mock samples. We conclude that the topology of the large-scale structure in the SDSS CMASS sample is consistent with cosmological models having primordial Gaussian density fluctuations growing in accordance with general relativity to form galaxies in massive dark matter halos.
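
    For context, the genus curve expected from Gaussian random phase initial conditions has a known analytic form. Per unit volume, as a function of the density threshold ν (in units of the standard deviation of the smoothed field),

    ```latex
    g(\nu) \;=\; A\,(1-\nu^{2})\,e^{-\nu^{2}/2},
    \qquad
    A \;=\; \frac{1}{(2\pi)^{2}}\left(\frac{\langle k^{2}\rangle}{3}\right)^{3/2},
    ```

    where ⟨k²⟩ is the variance of the wavenumber of the smoothed power spectrum. Departures of the observed genus curve from this symmetric W-shaped form are what the shift and asymmetry parameters mentioned above are designed to quantify.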

  8. Unbiased total electron content (UTEC), their fluctuations, and correlation with seismic activity over Japan

    Science.gov (United States)

    Cornely, Pierre-Richard; Hughes, John

    2018-02-01

    Earthquakes are among the most dangerous events that occur on earth, and many scientists have been investigating the underlying processes that take place before earthquakes occur. These investigations are fueling efforts towards developing both single- and multiple-parameter earthquake forecasting methods based on earthquake precursors. One potential earthquake precursor parameter that has received significant attention within the last few years is the ionospheric total electron content (TEC). Despite its growing popularity as an earthquake precursor, TEC has been under great scrutiny because of the underlying biases associated with the process of acquiring and processing TEC data. Future work in the field will need to demonstrate our ability to acquire TEC data with the least amount of bias possible, thereby preserving the integrity of the data. This paper describes a process for removing biases using raw TEC data from the standard RINEX files obtained from any global positioning satellite system. The process is based on developing unbiased TEC (UTEC) data and a model that can be more adaptable to serving as a precursor signal for earthquake forecasting. The model was used during the days and hours leading to the earthquake off the coast of Tohoku, Japan on March 11, 2011 with interesting results. The model takes advantage of the large amount of data available from the GPS Earth Observation Network of Japan to display near real-time UTEC data as the earthquake approaches and for a period of time after the earthquake occurred.

  9. Unbiased estimators of coincidence and correlation in non-analogous Monte Carlo particle transport

    International Nuclear Information System (INIS)

    Szieberth, M.; Kloosterman, J.L.

    2014-01-01

    Highlights: • The history splitting method was developed for non-Boltzmann Monte Carlo estimators. • The method allows variance reduction for pulse-height and higher moment estimators. • It works in highly multiplicative problems but Russian roulette has to be replaced. • Estimation of higher moments allows the simulation of neutron noise measurements. • Biased sampling of fission helps the effective simulation of neutron noise methods. - Abstract: The conventional non-analogous Monte Carlo methods are optimized to preserve the mean value of the distributions. Therefore, they are not suited to non-Boltzmann problems such as the estimation of coincidences or correlations. This paper presents a general method called history splitting for the non-analogous estimation of such quantities. The basic principle of the method is that a non-analogous particle history can be interpreted as a collection of analogous histories with different weights according to the probability of their realization. Calculations with a simple Monte Carlo program for a pulse-height-type estimator prove that the method is feasible and provides unbiased estimation. Different variance reduction techniques have been tried with the method and Russian roulette turned out to be ineffective in high multiplicity systems. An alternative history control method is applied instead. Simulation results of an auto-correlation (Rossi-α) measurement show that even the reconstruction of the higher moments is possible with the history splitting method, which makes the simulation of neutron noise measurements feasible
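    The core idea stated above, that one non-analogous history stands for the whole family of analogous histories it could have realized, each weighted by the probability of its realization, can be illustrated on a toy problem where a particle either deposits one energy unit per collision or is absorbed. This is only a schematic illustration of the principle, not the paper's estimator:

```python
import random

def analog_pulse_height(p_abs, n_max, n_hist, rng):
    """Analog MC pulse-height tally: score the number of deposits per history."""
    tally = [0.0] * (n_max + 1)
    for _ in range(n_hist):
        deposits = 0
        for _ in range(n_max):
            if rng.random() < p_abs:
                break  # absorbed: history ends
            deposits += 1
        tally[deposits] += 1.0 / n_hist
    return tally

def split_pulse_height(p_abs, n_max):
    """History splitting: unfold one forced-survival history into every
    analogous history it represents, scoring each with the probability of
    its realization (deterministic in this trivially simple model)."""
    tally = [0.0] * (n_max + 1)
    for deposits in range(n_max):
        tally[deposits] += (1.0 - p_abs) ** deposits * p_abs  # absorbed after 'deposits' survivals
    tally[n_max] += (1.0 - p_abs) ** n_max                    # survived all collisions
    return tally
```

    The splitting tally equals the analog expectation exactly, which is the sense in which such an estimator is unbiased for the full pulse-height distribution rather than only for its mean.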

  10. Sampling in schools and large institutional buildings: Implications for regulations, exposure and management of lead and copper.

    Science.gov (United States)

    Doré, Evelyne; Deshommes, Elise; Andrews, Robert C; Nour, Shokoufeh; Prévost, Michèle

    2018-04-21

    Legacy lead and copper components are ubiquitous in plumbing of large buildings including schools that serve children most vulnerable to lead exposure. Lead and copper samples must be collected after varying stagnation times and interpreted in reference to different thresholds. A total of 130 outlets (fountains, bathroom and kitchen taps) were sampled for dissolved and particulate lead as well as copper. Sampling was conducted at 8 schools and 3 institutional (non-residential) buildings served by municipal water of varying corrosivity, with and without corrosion control (CC), and without a lead service line. Samples included first draw following overnight stagnation (>8 h), partially (30 s) and fully (5 min) flushed samples, and first draw after 30 min of stagnation (30MS). Total lead concentrations in first draw samples after overnight stagnation varied widely from 0.07 to 19.9 μg Pb/L (median: 1.7 μg Pb/L) for large buildings served with non-corrosive water. Higher concentrations were observed in schools with corrosive water without CC (0.9-201 μg Pb/L, median: 14.3 μg Pb/L), while levels in schools with CC ranged from 0.2 to 45.1 μg Pb/L (median: 2.1 μg Pb/L). Partial flushing (30 s) and full flushing (5 min) reduced concentrations by 88% and 92%, respectively, for corrosive waters without CC. Lead concentrations in samples collected after 30 min of stagnation were 45% lower than values in first draw samples collected after overnight stagnation. Concentrations of particulate Pb varied widely (≥0.02-846 μg Pb/L) and were found to be the cause of very high total Pb concentrations in the 2% of samples exceeding 50 μg Pb/L. Pb levels across outlets within the same building varied widely (up to 1000X), especially in corrosive water (0.85-851 μg Pb/L after 30MS), confirming the need to sample at each outlet to identify high-risk taps. Based on the much higher concentrations observed in first draw samples, even after a short stagnation, the first 250 mL should be discarded unless no sources

  11. Soil Characterization by Large Scale Sampling of Soil Mixed with Buried Construction Debris at a Former Uranium Fuel Fabrication Facility

    International Nuclear Information System (INIS)

    Nardi, A.J.; Lamantia, L.

    2009-01-01

    Recent soil excavation activities on a site identified the presence of buried uranium-contaminated building construction debris. The site was previously the location of a low enriched uranium fuel fabrication facility. This resulted in the collection of excavated materials from the two locations where contaminated subsurface debris was identified. The excavated material was temporarily stored in two piles on the site until a determination could be made as to the appropriate disposition of the material. Characterization of the excavated material was undertaken in a manner that involved the collection of large scale samples of the excavated material in 1 cubic meter Super Sacks. Twenty bags were filled with excavated material that consisted of the mixture of both the construction debris and the associated soil. In order to obtain information on the level of activity associated with the construction debris, ten additional bags were filled with construction debris that had been separated, to the extent possible, from the associated soil. Radiological surveys were conducted of the resulting bags of collected materials and the soil associated with the waste mixture. The 30 large samples, collected as bags, were counted using an In-Situ Object Counting System (ISOCS) unit to determine the average concentration of U-235 present in each bag. The soil fraction was sampled by the collection of 40 samples of soil for analysis in an on-site laboratory. A fraction of these samples were also sent to an off-site laboratory for additional analysis. This project provided the necessary soil characterization information to allow consideration of alternate options for disposition of the material. The identified contaminant was verified to be low enriched uranium. Concentrations of uranium in the waste were found to be lower than the calculated site-specific derived concentration guideline levels (DCGLs) but higher than the NRC's screening values. The methods and results are presented.

  12. SyPRID sampler: A large-volume, high-resolution, autonomous, deep-ocean precision plankton sampling system

    Science.gov (United States)

    Billings, Andrew; Kaiser, Carl; Young, Craig M.; Hiebert, Laurel S.; Cole, Eli; Wagner, Jamie K. S.; Van Dover, Cindy Lee

    2017-03-01

    The current standard for large-volume (thousands of cubic meters) zooplankton sampling in the deep sea is the MOCNESS, a system of multiple opening-closing nets, typically lowered to within 50 m of the seabed and towed obliquely to the surface to obtain low-spatial-resolution samples that integrate across 10 s of meters of water depth. The SyPRID (Sentry Precision Robotic Impeller Driven) sampler is an innovative, deep-rated (6000 m) plankton sampler that partners with the Sentry Autonomous Underwater Vehicle (AUV) to obtain paired, large-volume plankton samples at specified depths and survey lines to within 1.5 m of the seabed and with simultaneous collection of sensor data. SyPRID uses a perforated Ultra-High-Molecular-Weight (UHMW) plastic tube to support a fine mesh net within an outer carbon composite tube (tube-within-a-tube design), with an axial flow pump located aft of the capture filter. The pump facilitates flow through the system and reduces or possibly eliminates the bow wave at the mouth opening. The cod end, a hollow truncated cone, is also made of UHMW plastic and includes a collection volume designed to provide an area where zooplankton can collect, out of the high flow region. SyPRID attaches as a saddle-pack to the Sentry vehicle. Sentry itself is configured with a flight control system that enables autonomous survey paths to low altitudes. In its verification deployment at the Blake Ridge Seep (2160 m) on the US Atlantic Margin, SyPRID was operated for 6 h at an altitude of 5 m. It recovered plankton samples, including delicate living larvae, from the near-bottom stratum that is seldom sampled by a typical MOCNESS tow. The prototype SyPRID and its next generations will enable studies of plankton or other particulate distributions associated with localized physico-chemical strata in the water column or above patchy habitats on the seafloor.

  13. An examination of the RCMAS-2 scores across gender, ethnic background, and age in a large Asian school sample.

    Science.gov (United States)

    Ang, Rebecca P; Lowe, Patricia A; Yusof, Noradlin

    2011-12-01

    The present study investigated the factor structure, reliability, convergent and discriminant validity, and U.S. norms of the Revised Children's Manifest Anxiety Scale, Second Edition (RCMAS-2; C. R. Reynolds & B. O. Richmond, 2008a) scores in a Singapore sample of 1,618 school-age children and adolescents. Although there were small statistically significant differences in the average RCMAS-2 T scores found across various demographic groupings, on the whole, the U.S. norms appear adequate for use in the Asian Singapore sample. Results from item bias analyses suggested that biased items detected had small effects and were counterbalanced across gender and ethnicity, and hence, their relative impact on test score variation appears to be minimal. Results of factor analyses on the RCMAS-2 scores supported the presence of a large general anxiety factor, the Total Anxiety factor, and the 5-factor structure found in U.S. samples was replicated. Both the large general anxiety factor and the 5-factor solution were invariant across gender and ethnic background. Internal consistency estimates ranged from adequate to good, and 2-week test-retest reliability estimates were comparable to previous studies. Evidence providing support for convergent and discriminant validity of the RCMAS-2 scores was also found. Taken together, findings provide additional cross-cultural evidence of the appropriateness and usefulness of the RCMAS-2 as a measure of anxiety in Asian Singaporean school-age children and adolescents.

  14. Evaluation of bacterial motility from non-Gaussianity of finite-sample trajectories using the large deviation principle

    International Nuclear Information System (INIS)

    Hanasaki, Itsuo; Kawano, Satoyuki

    2013-01-01

    Motility of bacteria is usually recognized in the trajectory data and compared with Brownian motion, but the diffusion coefficient is insufficient to evaluate it. In this paper, we propose a method based on the large deviation principle. We show that it can be used to evaluate the non-Gaussian characteristics of model Escherichia coli motions and to distinguish combinations of the mean running duration and running speed that lead to the same diffusion coefficient. Our proposed method does not require chemical stimuli to induce the chemotaxis in a specific direction, and it is applicable to various types of self-propelling motions for which no a priori information of, for example, threshold parameters for run and tumble or head/tail direction is available. We also address the issue of the finite-sample effect on the large deviation quantities, but we propose to make use of it to characterize the nature of motility. (paper)
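    The large deviation quantities referred to above can be estimated from finite trajectory samples via the scaled cumulant generating function (SCGF), λ(k) = (1/Δt) log E[exp(k Δx)]; for pure Brownian motion this is the parabola D k², so departures from a parabola signal non-Gaussian motility. A minimal sketch under these standard definitions (not the authors' specific estimator):

```python
import math

def scgf_estimate(displacements, k, dt):
    """Finite-sample estimate of the scaled cumulant generating function
    lambda(k) = (1/dt) * log E[exp(k * dx)] from one-interval displacements.
    For Brownian motion with diffusion coefficient D, lambda(k) = D * k**2."""
    mean_exp = sum(math.exp(k * dx) for dx in displacements) / len(displacements)
    return math.log(mean_exp) / dt
```

    Note the finite-sample caveat raised in the abstract: for large |k| the estimate is dominated by the few largest displacements, which is precisely the sensitivity the authors propose to turn into a diagnostic of motility.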

  15. 99Mo Yield Using Large Sample Mass of MoO3 for Sustainable Production of 99Mo

    Science.gov (United States)

    Tsukada, Kazuaki; Nagai, Yasuki; Hashimoto, Kazuyuki; Kawabata, Masako; Minato, Futoshi; Saeki, Hideya; Motoishi, Shoji; Itoh, Masatoshi

    2018-04-01

    A neutron source from the C(d,n) reaction has the unique capability of producing medical radioisotopes such as 99Mo with a minimum level of radioactive waste. Precise data on the neutron flux are crucial to determine the best conditions for obtaining the maximum yield of 99Mo. The measured yield of 99Mo produced by the 100Mo(n,2n)99Mo reaction from a large sample mass of MoO3 agrees well with the numerical result estimated with the latest neutron data, which are a factor of two larger than the other existing data. This result establishes an important finding for the domestic production of 99Mo: approximately 50% of the demand for 99Mo in Japan could be met using a 100 g 100MoO3 sample mass with a single accelerator of 40 MeV, 2 mA deuteron beams.
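    The 99Mo yield from neutron irradiation of 100Mo follows the standard thin-target activation formula A(t) = N σ Φ (1 - e^{-λt}). A generic sketch of that formula; the numeric inputs below are placeholders, not the paper's measured values:

```python
import math

def activation_activity_bq(n_target_atoms, flux_per_cm2_s, sigma_cm2,
                           half_life_s, t_irradiation_s):
    """Activity (Bq) of the product nuclide after irradiation, from the
    saturation formula A = N * sigma * phi * (1 - exp(-lambda * t))."""
    decay_const = math.log(2.0) / half_life_s
    production_rate = n_target_atoms * flux_per_cm2_s * sigma_cm2  # reactions/s
    return production_rate * (1.0 - math.exp(-decay_const * t_irradiation_s))
```

    For irradiation times long compared with the 66 h half-life of 99Mo, the activity saturates at the production rate N σ Φ, which is why accurate neutron flux data are decisive for the yield estimate.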

  16. Product-selective blot: a technique for measuring enzyme activities in large numbers of samples and in native electrophoresis gels

    International Nuclear Information System (INIS)

    Thompson, G.A.; Davies, H.M.; McDonald, N.

    1985-01-01

    A method termed product-selective blotting has been developed for screening large numbers of samples for enzyme activity. The technique is particularly well suited to detection of enzymes in native electrophoresis gels. The principle of the method was demonstrated by blotting samples from glutaminase or glutamate synthase reactions into an agarose gel embedded with ion-exchange resin under conditions favoring binding of product (glutamate) over substrates and other substances in the reaction mixture. After washes to remove these unbound substances, the product was measured using either fluorometric staining or radiometric techniques. Glutaminase activity in native electrophoresis gels was visualized by a related procedure in which substrates and products from reactions run in the electrophoresis gel were blotted directly into a resin-containing image gel. Considering the selective-binding materials available for use in the image gel, along with the possible detection systems, this method has potentially broad application

  17. Neutron activation analysis of archaeological artifacts using the conventional relative method: a realistic approach for analysis of large samples

    International Nuclear Information System (INIS)

    Bedregal, P.S.; Mendoza, A.; Montoya, E.H.; Cohen, I.M.; Universidad Tecnologica Nacional, Buenos Aires; Baltuano, O.

    2012-01-01

    A new approach for analysis of entire potsherds of archaeological interest by INAA, using the conventional relative method, is described. The analytical method proposed involves, primarily, the preparation of replicates of the original archaeological pottery with well known chemical composition (standard), destined to be irradiated simultaneously with the original object (sample) in a well thermalized external neutron beam of the RP-10 reactor. The basic advantage of this proposal is to avoid the need to perform the complicated corrections required when dealing with large samples, due to neutron self-shielding, neutron self-thermalization and gamma-ray attenuation. In addition, in contrast with other methods, the main advantages are the possibility of evaluating the uncertainty of the results and, fundamentally, of validating the overall methodology. (author)
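    In the conventional relative method that the authors retain, the sample and a standard of known composition are irradiated together, and the concentration follows from the ratio of their decay-corrected specific activities. A generic sketch of that ratio (the textbook formula, not the paper's replica-standard protocol itself):

```python
import math

def concentration_by_relative_method(c_standard, counts_sample, counts_standard,
                                     mass_sample, mass_standard,
                                     t_decay_sample_s, t_decay_standard_s,
                                     half_life_s):
    """Conventional relative method of INAA: element concentration in the
    sample from the ratio of decay-corrected specific count rates of a
    co-irradiated sample and standard."""
    lam = math.log(2.0) / half_life_s
    specific_sample = counts_sample * math.exp(lam * t_decay_sample_s) / mass_sample
    specific_standard = counts_standard * math.exp(lam * t_decay_standard_s) / mass_standard
    return c_standard * specific_sample / specific_standard
```

    Irradiating the replica standard simultaneously with the entire potsherd, as proposed above, makes the large-sample correction factors (self-shielding, self-thermalization, attenuation) cancel in this ratio.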

  18. Oxalic acid as a liquid dosimeter for absorbed dose measurement in large-scale of sample solution

    International Nuclear Information System (INIS)

    Biramontri, S.; Dechburam, S.; Vitittheeranon, A.; Wanitsuksombut, W.; Thongmitr, W.

    1999-01-01

    This study shows the feasibility of applying a 2.5 mM aqueous oxalic acid solution, with a spectrophotometric analysis method, for absorbed dose measurement from 1 to 10 kGy in a large-scale sample solution. The optimum wavelength of 220 nm was selected. The stability of the dosimeter response over 25 days was better than 1% for unirradiated and within ±2% for irradiated solutions. The reproducibility within the same batch was within 1%. The variation of the dosimeter response between batches was also studied. (author)

  19. Increased body mass index predicts severity of asthma symptoms but not objective asthma traits in a large sample of asthmatics

    DEFF Research Database (Denmark)

    Bildstrup, Line; Backer, Vibeke; Thomsen, Simon Francis

    2015-01-01

    AIM: To examine the relationship between body mass index (BMI) and different indicators of asthma severity in a large community-based sample of Danish adolescents and adults. METHODS: A total of 1186 subjects, 14-44 years of age, who in a screening questionnaire had reported a history of airway symptoms suggestive of asthma and/or allergy, or who were taking any medication for these conditions, were clinically examined. All participants were interviewed about respiratory symptoms, and furthermore height, weight, skin test reactivity, lung function, and airway responsiveness were measured.

  20. Solid-Phase Extraction and Large-Volume Sample Stacking-Capillary Electrophoresis for Determination of Tetracycline Residues in Milk

    Directory of Open Access Journals (Sweden)

    Gabriela Islas

    2018-01-01

    Full Text Available Solid-phase extraction in combination with large-volume sample stacking-capillary electrophoresis (SPE-LVSS-CE was applied to measure chlortetracycline, doxycycline, oxytetracycline, and tetracycline in milk samples. Under optimal conditions, the proposed method had a linear range of 29 to 200 µg·L−1, with limits of detection ranging from 18.6 to 23.8 µg·L−1 with inter- and intraday repeatabilities < 10% (as a relative standard deviation in all cases. The enrichment factors obtained were from 50.33 to 70.85 for all the TCs compared with a conventional capillary zone electrophoresis (CZE. This method is adequate to analyze tetracyclines below the most restrictive established maximum residue limits. The proposed method was employed in the analysis of 15 milk samples from different brands. Two of the tested samples were positive for the presence of oxytetracycline with concentrations of 95 and 126 µg·L−1. SPE-LVSS-CE is a robust, easy, and efficient strategy for online preconcentration of tetracycline residues in complex matrices.

  1. Characterizing the zenithal night sky brightness in large territories: how many samples per square kilometre are needed?

    Science.gov (United States)

    Bará, Salvador

    2018-01-01

    A recurring question arises when trying to characterize, by means of measurements or theoretical calculations, the zenithal night sky brightness throughout a large territory: how many samples per square kilometre are needed? The optimum sampling distance should allow reconstructing, with sufficient accuracy, the continuous zenithal brightness map across the whole region, whilst at the same time avoiding unnecessary and redundant oversampling. This paper attempts to provide some tentative answers to this issue, using two complementary tools: the luminance structure function and the Nyquist-Shannon spatial sampling theorem. The analysis of several regions of the world, based on the data from the New world atlas of artificial night sky brightness, suggests that, as a rule of thumb, about one measurement per square kilometre could be sufficient for determining the zenithal night sky brightness of artificial origin at any point in a region to within ±0.1 mag_V arcsec^-2 (in the root-mean-square sense) of its true value in the Johnson-Cousins V band. The exact reconstruction of the zenithal night sky brightness maps from samples taken at the Nyquist rate seems to be considerably more demanding.
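    Of the two tools named above, the luminance structure function is the one that can be estimated directly from a set of field measurements: the mean squared brightness difference between pairs of points, binned by their separation. A brute-force sketch (O(n²) over pairs, adequate for survey-sized point sets):

```python
import math

def structure_function(positions, values, r_bins):
    """Empirical second-order structure function: mean squared difference of
    'values' over point pairs whose separation falls in [r_bins[b], r_bins[b+1]).
    positions are (x, y) tuples, e.g. in km; values e.g. in mag/arcsec^2."""
    sums = [0.0] * (len(r_bins) - 1)
    counts = [0] * (len(r_bins) - 1)
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            r = math.hypot(positions[i][0] - positions[j][0],
                           positions[i][1] - positions[j][1])
            for b in range(len(r_bins) - 1):
                if r_bins[b] <= r < r_bins[b + 1]:
                    sums[b] += (values[i] - values[j]) ** 2
                    counts[b] += 1
                    break
    return [s / c if c else float('nan') for s, c in zip(sums, counts)]
```

    Roughly speaking, the separation at which this function reaches the squared tolerance quoted above, (0.1 mag)², bounds the largest admissible sampling distance for the region.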

  2. Analysis of plant hormones by microemulsion electrokinetic capillary chromatography coupled with on-line large volume sample stacking.

    Science.gov (United States)

    Chen, Zongbao; Lin, Zian; Zhang, Lin; Cai, Yan; Zhang, Lan

    2012-04-07

    A novel method of microemulsion electrokinetic capillary chromatography (MEEKC) coupled with on-line large volume sample stacking was developed for the analysis of six plant hormones including indole-3-acetic acid, indole-3-butyric acid, indole-3-propionic acid, 1-naphthaleneacetic acid, abscisic acid and salicylic acid. Baseline separation of six plant hormones was achieved within 10 min by using the microemulsion background electrolyte containing a 97.2% (w/w) 10 mM borate buffer at pH 9.2, 1.0% (w/w) ethyl acetate as oil droplets, 0.6% (w/w) sodium dodecyl sulphate as surfactant and 1.2% (w/w) 1-butanol as cosurfactant. In addition, an on-line concentration method based on a large volume sample stacking technique and multiple wavelength detection was adopted for improving the detection sensitivity in order to determine trace level hormones in a real sample. The optimal method provided about 50-100 fold increase in detection sensitivity compared with a single MEEKC method, and the detection limits (S/N = 3) were between 0.005 and 0.02 μg mL(-1). The proposed method was simple, rapid and sensitive and could be applied to the determination of six plant hormones in spiked water samples, tobacco leaves and 1-naphthylacetic acid in leaf fertilizer. The recoveries ranged from 76.0% to 119.1%, and good reproducibilities were obtained with relative standard deviations (RSDs) less than 6.6%.

  3. A large sample of Kohonen selected E+A (post-starburst) galaxies from the Sloan Digital Sky Survey

    Science.gov (United States)

    Meusinger, H.; Brünecke, J.; Schalldach, P.; in der Au, A.

    2017-01-01

    Context. The galaxy population in the contemporary Universe is characterised by a clear bimodality, blue galaxies with significant ongoing star formation and red galaxies with only a little. The migration between the blue and the red cloud of galaxies is an issue of active research. Post-starburst (PSB) galaxies are thought to be observed in the short-lived transition phase. Aims: We aim to create a large sample of local PSB galaxies from the Sloan Digital Sky Survey (SDSS) to study their characteristic properties, particularly morphological features indicative of gravitational distortions and indications for active galactic nuclei (AGNs). Another aim is to present a tool set for an efficient search in a large database of SDSS spectra based on Kohonen self-organising maps (SOMs). Methods: We computed a huge Kohonen SOM for ∼10^6 spectra from SDSS data release 7. The SOM is made fully available, in combination with an interactive user interface, for the astronomical community. We selected a large sample of PSB galaxies taking advantage of the clustering behaviour of the SOM. The morphologies of both PSB galaxies and randomly selected galaxies from a comparison sample in SDSS Stripe 82 (S82) were inspected on deep co-added SDSS images to search for indications of gravitational distortions. We used the Portsmouth galaxy property computations to study the evolutionary stage of the PSB galaxies and archival multi-wavelength data to search for hidden AGNs. Results: We compiled a catalogue of 2665 low-redshift PSB galaxies with EW(Hδ) > 3 Å, located between the red sequence and the blue cloud, in agreement with the idea that PSB galaxies represent the transitioning phase between actively and passively evolving galaxies. The relative frequency of distorted PSB galaxies is at least 57% for EW(Hδ) > 5 Å, significantly higher than in the comparison sample. The search for AGNs based on conventional selection criteria in the radio and MIR results in a low AGN fraction of ∼2-3%. We confirm an MIR excess in the mean SED of

  4. Direct metagenomic detection of viral pathogens in nasal and fecal specimens using an unbiased high-throughput sequencing approach.

    Directory of Open Access Journals (Sweden)

    Shota Nakamura

    Full Text Available With the severe acute respiratory syndrome epidemic of 2003 and renewed attention on avian influenza viral pandemics, new surveillance systems are needed for the earlier detection of emerging infectious diseases. We applied a "next-generation" parallel sequencing platform for viral detection in nasopharyngeal and fecal samples collected during seasonal influenza virus (Flu) infections and norovirus outbreaks from 2005 to 2007 in Osaka, Japan. Random RT-PCR was performed to amplify RNA extracted from 0.1-0.25 ml of nasopharyngeal aspirates (N = 3) and fecal specimens (N = 5), and more than 10 μg of cDNA was synthesized. Unbiased high-throughput sequencing of these 8 samples yielded 15,298-32,335 (average 24,738) reads in a single 7.5 h run. In nasopharyngeal samples, although whole genome analysis was not available because the majority (>90%) of reads were host genome-derived, 20-460 Flu reads were detected, which was sufficient for subtype identification. In fecal samples, bacteria and host cells were removed by centrifugation, resulting in a gain of 484-15,260 reads of norovirus sequence (78-98% of the whole genome was covered), except for one specimen that was under-detectable by RT-PCR. These results suggest that our unbiased high-throughput sequencing approach is useful for directly detecting pathogenic viruses without advance genetic information. Although its cost and technological availability make it unlikely that this system will very soon be the diagnostic standard worldwide, this system could be useful for the earlier discovery of novel emerging viruses and bioterrorism agents, which are difficult to detect with conventional procedures.

  5. A Note on the Large Sample Properties of Estimators Based on Generalized Linear Models for Correlated Pseudo-observations

    DEFF Research Database (Denmark)

    Jacobsen, Martin; Martinussen, Torben

    2016-01-01

    Pseudo-values have proven very useful in censored data analysis in complex settings such as multi-state models. They were originally suggested by Andersen et al., Biometrika, 90, 2003, 335, who also suggested estimating standard errors using classical generalized estimating equation results. These results were studied more formally in Graw et al., Lifetime Data Anal., 15, 2009, 241, which derived some key results based on a second-order von Mises expansion. However, results concerning large sample properties of estimates based on regression models for pseudo-values still seem unclear. In this paper, we study these large sample properties in the simple setting of survival probabilities and show that the estimating function can be written as a U-statistic of second order, giving rise to an additional term that does not vanish asymptotically. We further show that previously advocated standard error
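    For readers unfamiliar with the construction: the i-th pseudo-observation replaces a possibly censored response by θ̂_i = n·θ̂ - (n-1)·θ̂^(-i), where θ̂ is the estimator (e.g. a Kaplan-Meier survival probability at a fixed time) on the full sample and θ̂^(-i) the same estimator with subject i left out; the pseudo-values are then regressed via a generalized linear model. A minimal sketch of the jackknife step only:

```python
def pseudo_observations(data, estimator):
    """Jackknife pseudo-values: theta_i = n*theta_hat - (n-1)*theta_hat_minus_i.
    'estimator' is any real-valued functional of the sample; in survival
    settings it would be, e.g., a Kaplan-Meier probability at a fixed time."""
    n = len(data)
    theta_full = estimator(data)
    return [n * theta_full - (n - 1) * estimator(data[:i] + data[i + 1:])
            for i in range(n)]
```

    For the sample mean, the pseudo-values reduce to the observations themselves, a standard sanity check; the paper's contribution concerns the second-order U-statistic structure of the resulting estimating function, which this sketch does not address.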

  6. Lack of association between digit ratio (2D:4D) and assertiveness: replication in a large sample.

    Science.gov (United States)

    Voracek, Martin

    2009-12-01

    Findings regarding within-sex associations of digit ratio (2D:4D), a putative pointer to long-lasting effects of prenatal androgen action, and sexually differentiated personality traits have generally been inconsistent or unreplicable, suggesting that effects in this domain, if any, are likely small. In contrast to evidence from Wilson's important 1983 study, a forerunner of modern 2D:4D research, two recent studies in 2005 and 2008 by Freeman, et al. and Hampson, et al. showed assertiveness, a presumably male-typed personality trait, was not associated with 2D:4D; however, these studies were clearly statistically underpowered. Hence this study examined this question anew, based on a large sample of 491 men and 627 women. Assertiveness was only modestly sexually differentiated, favoring men, and a positive correlate of age and education and a negative correlate of weight and Body Mass Index among women, but not men. Replicating the two prior studies, 2D:4D was throughout unrelated to assertiveness scores. This null finding was preserved with controls for correlates of assertiveness, also in nonparametric analysis and with tests for curvilinear relations. Discussed are implications of this specific null finding, now replicated in a large sample, for studies of 2D:4D and personality in general and novel research approaches to proceed in this field.

  7. Diversity in the stellar velocity dispersion profiles of a large sample of brightest cluster galaxies z ≤ 0.3

    Science.gov (United States)

    Loubser, S. I.; Hoekstra, H.; Babul, A.; O'Sullivan, E.

    2018-06-01

    We analyse spatially resolved deep optical spectroscopy of brightest cluster galaxies (BCGs) located in 32 massive clusters with redshifts of 0.05 ≤ z ≤ 0.30 to investigate their velocity dispersion profiles. We compare these measurements to those of other massive early-type galaxies, as well as central group galaxies, where relevant. This unique, large sample extends to the most extreme of massive galaxies, spanning M_K between -25.7 and -27.8 mag, and host cluster halo mass M_500 up to 1.7 × 10^15 M_⊙. To compare the kinematic properties between brightest group and cluster members, we analyse similar spatially resolved long-slit spectroscopy for 23 nearby brightest group galaxies (BGGs) from the Complete Local-Volume Groups Sample. We find a surprisingly large variety in velocity dispersion slopes for BCGs, with a significantly larger fraction of positive slopes, unique compared to other (non-central) early-type galaxies as well as the majority of the brightest members of the groups. We find that the velocity dispersion slopes of the BCGs and BGGs correlate with the luminosity of the galaxies, and we quantify this correlation. It is not clear whether the full diversity in velocity dispersion slopes that we see is reproduced in simulations.

  8. Gasoline prices, gasoline consumption, and new-vehicle fuel economy: Evidence for a large sample of countries

    International Nuclear Information System (INIS)

    Burke, Paul J.; Nishitateno, Shuhei

    2013-01-01

    Countries differ considerably in terms of the price drivers pay for gasoline. This paper uses data for 132 countries for the period 1995–2008 to investigate the implications of these differences for the consumption of gasoline for road transport. To address the potential for simultaneity bias, we use both a country's oil reserves and the international crude oil price as instruments for a country's average gasoline pump price. We obtain estimates of the long-run price elasticity of gasoline demand of between − 0.2 and − 0.5. Using newly available data for a sub-sample of 43 countries, we also find that higher gasoline prices induce consumers to substitute to vehicles that are more fuel-efficient, with an estimated elasticity of + 0.2. Despite the small size of our elasticity estimates, there is considerable scope for low-price countries to achieve gasoline savings and vehicle fuel economy improvements via reducing gasoline subsidies and/or increasing gasoline taxes. - Highlights: ► We estimate the determinants of gasoline demand and new-vehicle fuel economy. ► Estimates are for a large sample of countries for the period 1995–2008. ► We instrument for gasoline prices using oil reserves and the world crude oil price. ► Gasoline demand and fuel economy are inelastic with respect to the gasoline price. ► Large energy efficiency gains are possible via higher gasoline prices
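    The elasticities quoted above are slopes in logs; the simplest, non-instrumented version is an OLS regression of log consumption on log price. A sketch of that baseline, deliberately omitting the oil-reserve/crude-price instrumenting step the authors use against simultaneity bias:

```python
import math

def log_log_elasticity(prices, quantities):
    """OLS slope of log(quantity) on log(price): a naive demand elasticity.
    A value of -0.3 means a 1% price rise lowers quantity by about 0.3%."""
    lx = [math.log(p) for p in prices]
    ly = [math.log(q) for q in quantities]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    s_xy = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    s_xx = sum((a - mx) ** 2 for a in lx)
    return s_xy / s_xx
```

    Without instrumenting, this slope is biased whenever prices respond to demand; the paper's two-stage approach replaces price with its projection on oil reserves and the world crude price before computing the slope.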

  9. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

    Directory of Open Access Journals (Sweden)

    Simon Boitard

    2016-03-01

    Full Text Available Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
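    In its most basic rejection form, the approximate Bayesian computation machinery used above reduces to: draw parameters from the prior, simulate data, and keep the draws whose summary statistics land close to the observed ones. PopSizeABC itself uses richer statistics (folded SFS, binned LD) and a more efficient scheme; this is only the skeleton:

```python
def abc_rejection(observed_stats, prior_sampler, simulator, distance,
                  n_draws, epsilon):
    """Minimal ABC rejection sampler: the accepted draws approximate the
    posterior over parameters given the observed summary statistics."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()
        if distance(simulator(theta), observed_stats) <= epsilon:
            accepted.append(theta)
    return accepted
```

    The choice of summary statistics is the crux: the abstract's point is that the folded allele frequency spectrum and binned zygotic LD are both informative about past population sizes and computable from unphased, unpolarized SNP data.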

  10. High-throughput genotyping assay for the large-scale genetic characterization of Cryptosporidium parasites from human and bovine samples.

    Science.gov (United States)

    Abal-Fabeiro, J L; Maside, X; Llovo, J; Bello, X; Torres, M; Treviño, M; Moldes, L; Muñoz, A; Carracedo, A; Bartolomé, C

    2014-04-01

    The epidemiological study of human cryptosporidiosis requires the characterization of species and subtypes involved in human disease in large sample collections. Molecular genotyping is costly and time-consuming, making the implementation of low-cost, highly efficient technologies increasingly necessary. Here, we designed a protocol based on MALDI-TOF mass spectrometry for the high-throughput genotyping of a panel of 55 single nucleotide variants (SNVs) selected as markers for the identification of common gp60 subtypes of four Cryptosporidium species that infect humans. The method was applied to a panel of 608 human and 63 bovine isolates and the results were compared with control samples typed by Sanger sequencing. The method allowed the identification of species in 610 specimens (90·9%) and gp60 subtype in 605 (90·2%). It displayed excellent performance, with sensitivity and specificity values of 87·3 and 98·0%, respectively. Up to nine genotypes from four different Cryptosporidium species (C. hominis, C. parvum, C. meleagridis and C. felis) were detected in humans; the most common ones were C. hominis subtype Ib, and C. parvum IIa (61·3 and 28·3%, respectively). 96·5% of the bovine samples were typed as IIa. The method performs as well as the widely used Sanger sequencing and is more cost-effective and less time consuming.
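The sensitivity and specificity figures quoted (87·3% and 98·0%) are the standard 2x2 classification metrics; as a reminder of the arithmetic (the counts below are invented for illustration and are not the study's data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts chosen only to reproduce the quoted percentages.
sens, spec = sensitivity_specificity(tp=96, fn=14, tn=49, fp=1)
print(round(sens, 3), round(spec, 2))   # → 0.873 0.98
```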

  11. Sleep habits, insomnia, and daytime sleepiness in a large and healthy community-based sample of New Zealanders.

    Science.gov (United States)

    Wilsmore, Bradley R; Grunstein, Ronald R; Fransen, Marlene; Woodward, Mark; Norton, Robyn; Ameratunga, Shanthi

    2013-06-15

To determine the relationship between sleep complaints, primary insomnia, excessive daytime sleepiness, and lifestyle factors in a large community-based sample. Cross-sectional study. Blood donor sites in New Zealand. 22,389 individuals aged 16-84 years volunteering to donate blood. N/A. A comprehensive self-administered questionnaire including personal demographics and validated questions assessing sleep disorders (snoring, apnea), sleep complaints (sleep quantity, sleep dissatisfaction), insomnia symptoms, excessive daytime sleepiness, mood, and lifestyle factors such as work patterns, smoking, alcohol, and illicit substance use. Additionally, direct measurements of height and weight were obtained. One in three participants reported a sleep complaint. Excessive daytime sleepiness (even in this healthy sample) was associated with insomnia (odds ratio [OR] 1.75, 95% confidence interval [CI] 1.50 to 2.05), depression (OR 2.01, CI 1.74 to 2.32), and sleep disordered breathing (OR 1.92, CI 1.59 to 2.32). Long work hours, alcohol dependence, and rotating work shifts also increase the risk of daytime sleepiness. Even in this relatively young, healthy, non-clinical sample, sleep complaints and primary insomnia with subsequent excess daytime sleepiness were common. There were clear associations between many personal and lifestyle factors-such as depression, long work hours, alcohol dependence, and rotating shift work-and sleep problems or excessive daytime sleepiness.

  12. Ridge regression estimator: combining unbiased and ordinary ridge regression methods of estimation

    Directory of Open Access Journals (Sweden)

    Sharad Damodar Gore

    2009-10-01

    Full Text Available Statistical literature has several methods for coping with multicollinearity. This paper introduces a new shrinkage estimator, called modified unbiased ridge (MUR. This estimator is obtained from unbiased ridge regression (URR in the same way that ordinary ridge regression (ORR is obtained from ordinary least squares (OLS. Properties of MUR are derived. Results on its matrix mean squared error (MMSE are obtained. MUR is compared with ORR and URR in terms of MMSE. These results are illustrated with an example based on data generated by Hoerl and Kennard (1975.
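The relationship the abstract describes — ORR as a shrinkage modification of OLS — is easy to see in code. A minimal sketch of ordinary ridge regression under multicollinearity (MUR and URR themselves involve additional prior-information terms that are not reproduced here):

```python
import numpy as np

def ridge(X, y, k):
    """Ordinary ridge regression: beta = (X'X + kI)^{-1} X'y (k = 0 gives OLS)."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

# Collinear design: two nearly identical predictors destabilize OLS.
rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
X = np.column_stack([x1, x1 + rng.normal(scale=0.01, size=200)])
y = X @ np.array([1.0, 1.0]) + rng.normal(scale=0.1, size=200)

b_ols = ridge(X, y, 0.0)   # unstable: large opposite-sign coefficients possible
b_orr = ridge(X, y, 1.0)   # shrinkage toward zero stabilizes the estimate
```

The shrinkage is what makes ORR biased but lower-variance than OLS; in the eigenbasis of X'X every coefficient coordinate is scaled down, so the ridge solution never has a larger norm than the OLS solution.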

  13. Hierarchical thinking in network biology: the unbiased modularization of biochemical networks.

    Science.gov (United States)

    Papin, Jason A; Reed, Jennifer L; Palsson, Bernhard O

    2004-12-01

    As reconstructed biochemical reaction networks continue to grow in size and scope, there is a growing need to describe the functional modules within them. Such modules facilitate the study of biological processes by deconstructing complex biological networks into conceptually simple entities. The definition of network modules is often based on intuitive reasoning. As an alternative, methods are being developed for defining biochemical network modules in an unbiased fashion. These unbiased network modules are mathematically derived from the structure of the whole network under consideration.

  14. Comparison of blood RNA isolation methods from samples stabilized in Tempus tubes and stored at a large human biobank.

    Science.gov (United States)

    Aarem, Jeanette; Brunborg, Gunnar; Aas, Kaja K; Harbak, Kari; Taipale, Miia M; Magnus, Per; Knudsen, Gun Peggy; Duale, Nur

    2016-09-01

    More than 50,000 adult and cord blood samples were collected in Tempus tubes and stored at the Norwegian Institute of Public Health Biobank for future use. In this study, we systematically evaluated and compared five blood-RNA isolation protocols: three blood-RNA isolation protocols optimized for simultaneous isolation of all blood-RNA species (MagMAX RNA Isolation Kit, both manual and semi-automated protocols; and Norgen Preserved Blood RNA kit I); and two protocols optimized for large RNAs only (Tempus Spin RNA, and Tempus 6-port isolation kit). We estimated the following parameters: RNA quality, RNA yield, processing time, cost per sample, and RNA transcript stability of six selected mRNAs and 13 miRNAs using real-time qPCR. Whole blood samples from adults (n = 59 tubes) and umbilical cord blood (n = 18 tubes) samples collected in Tempus tubes were analyzed. High-quality blood-RNAs with average RIN-values above seven were extracted using all five RNA isolation protocols. The transcript levels of the six selected genes showed minimal variation between the five protocols. Unexplained differences within the transcript levels of the 13 miRNA were observed; however, the 13 miRNAs had similar expression direction and they were within the same order of magnitude. Some differences in the RNA processing time and cost were noted. Sufficient amounts of high-quality RNA were obtained using all five protocols, and the Tempus blood RNA system therefore seems not to be dependent on one specific RNA isolation method.

  15. Media Use and Source Trust among Muslims in Seven Countries: Results of a Large Random Sample Survey

    Directory of Open Access Journals (Sweden)

    Steven R. Corman

    2013-12-01

Full Text Available Despite the perceived importance of media in the spread of and resistance against Islamist extremism, little is known about how Muslims use different kinds of media to get information about religious issues, and what sources they trust when doing so. This paper reports the results of a large, random sample survey among Muslims in seven countries in Southeast Asia, West Africa and Western Europe, which helps fill this gap. Results show a diverse set of profiles of media use and source trust that differ by country, with overall low trust in mediated sources of information. Based on these findings, we conclude that mass media is still the most common source of religious information for Muslims, but that trust in mediated information is low overall. This suggests that media are probably best used to persuade opinion leaders, who will then carry anti-extremist messages through more personal means.

  16. Spearman's "law of diminishing returns" and the role of test reliability investigated in a large sample of Danish military draftees

    DEFF Research Database (Denmark)

    Teasdale, Thomas William; Hartmann, P.

    2005-01-01

The present article investigates Spearman's "Law of Diminishing Returns" (SLODR), which hypothesizes that the g saturation of cognitive tests is lower for high-ability subjects than for low-ability subjects. This hypothesis was tested in a large sample of Danish military draftees (N = 6757) who were representative of the young adult male population, aged 18-19, and tested with a group-administered intelligence test comprising four subtests. The aim of the study was twofold. The first was to reproduce previous SLODR findings by the present authors; this was done by replicating the earlier analyses. The second was to examine whether differences in reliability could account for the difference in g saturation across ability groups. The results showed that the reliability was larger for the High ability group, thereby not explaining the present findings.

  17. EFFECTS OF LONG-TERM ALENDRONATE TREATMENT ON A LARGE SAMPLE OF PEDIATRIC PATIENTS WITH OSTEOGENESIS IMPERFECTA.

    Science.gov (United States)

    Lv, Fang; Liu, Yi; Xu, Xiaojie; Wang, Jianyi; Ma, Doudou; Jiang, Yan; Wang, Ou; Xia, Weibo; Xing, Xiaoping; Yu, Wei; Li, Mei

    2016-12-01

Osteogenesis imperfecta (OI) is a group of inherited diseases characterized by reduced bone mass, recurrent bone fractures, and progressive bone deformities. Here, we evaluate the efficacy and safety of long-term treatment with alendronate in a large sample of Chinese children and adolescents with OI. In this prospective study, a total of 91 children and adolescents with OI were included. The patients received 3 years' treatment with 70 mg alendronate weekly and 500 mg calcium daily. During the treatment, fracture incidence, bone mineral density (BMD), and serum levels of the bone turnover biomarkers (alkaline phosphatase [ALP] and cross-linked C-telopeptide of type I collagen [β-CTX]) were evaluated. Linear growth speed and parameters of safety were also measured. After 3 years of treatment, the mean annual fracture incidence decreased from 1.2 ± 0.8 to 0.2 ± 0.3. Abbreviations: OI = osteogenesis imperfecta; PTH = parathyroid hormone.

  18. Unbiased identification of patients with disorders of sex development.

    Directory of Open Access Journals (Sweden)

    David A Hanauer

Full Text Available Disorders of sex development (DSD) represent a collection of rare diseases that generate substantial controversy regarding best practices for diagnosis and treatment. A significant barrier preventing a better understanding of how patients with these conditions should be evaluated and treated, especially from a psychological standpoint, is the lack of systematic and standardized approaches to identify cases for study inclusion. Common approaches include "hand-picked" subjects already known to the practice, which could introduce bias. We implemented an informatics-based approach to identify patients with DSD from electronic health records (EHRs) at three large, academic children's hospitals. The informatics approach involved comprehensively searching EHRs at each hospital using a combination of structured billing codes as an initial filtering strategy followed by keywords applied to the free text clinical documentation. The informatics approach was implemented to replicate the functionality of an EHR search engine (EMERSE) available at one of the hospitals. At the two hospitals that did not have EMERSE, we compared case ascertainment using the informatics method to traditional approaches employed for identifying subjects. Potential cases identified using all approaches were manually reviewed by experts in DSD to verify eligibility criteria. At the two institutions where both the informatics and traditional approaches were applied, the informatics approach identified substantially higher numbers of potential study subjects. The traditional approaches yielded 14 and 28 patients with DSD, respectively; the informatics approach yielded 226 and 77 patients, respectively. The informatics approach missed only a few cases that the traditional approaches identified, largely because those cases were known to the study team, but patient data were not in the particular children's hospital EHR. 
The use of informatics approaches to search electronic documentation
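The two-stage search the authors describe — structured billing codes as a coarse filter, then keywords over the free-text notes — can be sketched generically. The record layout, field names, and the example ICD code are assumptions for illustration; this is not EMERSE's actual interface:

```python
def find_candidates(records, billing_codes, keywords):
    """Stage 1: structured billing-code filter; stage 2: keyword screen of
    free-text notes. Returns candidate record ids for manual expert review."""
    hits = []
    for r in records:
        if billing_codes & set(r["codes"]):          # coarse structured filter
            text = r["note"].lower()
            if any(k in text for k in keywords):     # free-text refinement
                hits.append(r["id"])
    return hits

# Toy records (invented); code "752.7" stands in for a relevant billing code.
records = [
    {"id": 1, "codes": ["752.7"], "note": "Evaluation for ambiguous genitalia."},
    {"id": 2, "codes": ["V70.0"], "note": "Routine well-child visit."},
    {"id": 3, "codes": ["752.7"], "note": "Follow-up; no relevant findings."},
]
print(find_candidates(records, {"752.7"}, ["ambiguous genitalia", "gonadal dysgenesis"]))
# → [1]
```

The design mirrors the paper's rationale: billing codes alone over-select (record 3), keywords alone miss unstructured coding variation, and the conjunction narrows the pool that experts must review by hand.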

  19. Next generation sensing platforms for extended deployments in large-scale, multidisciplinary, adaptive sampling and observational networks

    Science.gov (United States)

    Cross, J. N.; Meinig, C.; Mordy, C. W.; Lawrence-Slavas, N.; Cokelet, E. D.; Jenkins, R.; Tabisola, H. M.; Stabeno, P. J.

    2016-12-01

New autonomous sensors have dramatically increased the resolution and accuracy of oceanographic data collection, enabling rapid sampling over extremely fine scales. Innovative new autonomous platforms like floats, gliders, drones, and crawling moorings leverage the full potential of these new sensors by extending spatiotemporal reach across varied environments. During 2015 and 2016, The Innovative Technology for Arctic Exploration Program at the Pacific Marine Environmental Laboratory tested several new types of fully autonomous platforms with increased speed, durability, and power and payload capacity designed to deliver cutting-edge ecosystem assessment sensors to remote or inaccessible environments. The Expendable Ice-Tracking (EXIT) float developed by the NOAA Pacific Marine Environmental Laboratory (PMEL) is moored near the bottom during the ice-free season and released on an autonomous timer beneath the ice during the following winter. The float collects a rapid profile during ascent, and continues to collect critical, poorly-accessible under-ice data until melt, when data is transmitted via satellite. The autonomous Oculus sub-surface glider developed by the University of Washington and PMEL has a large power and payload capacity and an enhanced buoyancy engine. This 'coastal truck' is designed for the rapid water column ascent required by optical imaging systems. The Saildrone is a solar and wind powered ocean unmanned surface vessel (USV) developed by Saildrone, Inc. in partnership with PMEL. This large-payload (200 lbs), fast (1-7 kts), durable (46 kts winds) platform was equipped with 15 sensors designed for ecosystem assessment during 2016, including passive and active acoustic systems specially redesigned for autonomous vehicle deployments. The sensors deployed on these platforms achieved rigorous accuracy and precision standards. These innovative platforms provide new sampling capabilities and cost efficiencies in high-resolution sensor deployment.

  20. Examining gray matter structure associated with academic performance in a large sample of Chinese high school students.

    Science.gov (United States)

    Wang, Song; Zhou, Ming; Chen, Taolin; Yang, Xun; Chen, Guangxiang; Wang, Meiyun; Gong, Qiyong

    2017-04-18

Achievement in school is crucial for students to be able to pursue successful careers and lead happy lives in the future. Although many psychological attributes have been found to be associated with academic performance, the neural substrates of academic performance remain largely unknown. Here, we investigated the relationship between brain structure and academic performance in a large sample of high school students via structural magnetic resonance imaging (S-MRI) using a voxel-based morphometry (VBM) approach. The whole-brain regression analyses showed that higher academic performance was related to greater regional gray matter density (rGMD) of the left dorsolateral prefrontal cortex (DLPFC), which is considered a neural center at the intersection of cognitive and non-cognitive functions. Furthermore, mediation analyses suggested that general intelligence partially mediated the impact of the left DLPFC density on academic performance. These results persisted even after adjusting for the effect of family socioeconomic status (SES). In short, our findings reveal a potential neuroanatomical marker for academic performance and highlight the role of general intelligence in explaining the relationship between brain structure and academic performance.

  1. Examining the interrater reliability of the Hare Psychopathy Checklist-Revised across a large sample of trained raters.

    Science.gov (United States)

    Blais, Julie; Forth, Adelle E; Hare, Robert D

    2017-06-01

    The goal of the current study was to assess the interrater reliability of the Psychopathy Checklist-Revised (PCL-R) among a large sample of trained raters (N = 280). All raters completed PCL-R training at some point between 1989 and 2012 and subsequently provided complete coding for the same 6 practice cases. Overall, 3 major conclusions can be drawn from the results: (a) reliability of individual PCL-R items largely fell below any appropriate standards while the estimates for Total PCL-R scores and factor scores were good (but not excellent); (b) the cases representing individuals with high psychopathy scores showed better reliability than did the cases of individuals in the moderate to low PCL-R score range; and (c) there was a high degree of variability among raters; however, rater specific differences had no consistent effect on scoring the PCL-R. Therefore, despite low reliability estimates for individual items, Total scores and factor scores can be reliably scored among trained raters. We temper these conclusions by noting that scoring standardized videotaped case studies does not allow the rater to interact directly with the offender. Real-world PCL-R assessments typically involve a face-to-face interview and much more extensive collateral information. We offer recommendations for new web-based training procedures. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  2. Statistical process control charts for attribute data involving very large sample sizes: a review of problems and solutions.

    Science.gov (United States)

    Mohammed, Mohammed A; Panesar, Jagdeep S; Laney, David B; Wilson, Richard

    2013-04-01

    The use of statistical process control (SPC) charts in healthcare is increasing. The primary purpose of SPC is to distinguish between common-cause variation which is attributable to the underlying process, and special-cause variation which is extrinsic to the underlying process. This is important because improvement under common-cause variation requires action on the process, whereas special-cause variation merits an investigation to first find the cause. Nonetheless, when dealing with attribute or count data (eg, number of emergency admissions) involving very large sample sizes, traditional SPC charts often produce tight control limits with most of the data points appearing outside the control limits. This can give a false impression of common and special-cause variation, and potentially misguide the user into taking the wrong actions. Given the growing availability of large datasets from routinely collected databases in healthcare, there is a need to present a review of this problem (which arises because traditional attribute charts only consider within-subgroup variation) and its solutions (which consider within and between-subgroup variation), which involve the use of the well-established measurements chart and the more recently developed attribute charts based on Laney's innovative approach. We close by making some suggestions for practice.
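Laney's correction keeps the classic p-chart structure but inflates the limits by σ_z, the between-subgroup variation estimated from moving ranges of the z-scores. A condensed sketch of the published method (my own implementation, not code from the paper):

```python
import numpy as np

def laney_p_prime_limits(counts, sizes):
    """Laney p'-chart: classic p-chart limits inflated by sigma_z, the
    between-subgroup variation estimated from moving ranges of z-scores."""
    counts = np.asarray(counts, float)
    sizes = np.asarray(sizes, float)
    p = counts / sizes
    pbar = counts.sum() / sizes.sum()
    sigma_pi = np.sqrt(pbar * (1 - pbar) / sizes)   # within-subgroup sd only
    z = (p - pbar) / sigma_pi
    sigma_z = np.abs(np.diff(z)).mean() / 1.128     # moving-range estimate
    ucl = pbar + 3 * sigma_pi * sigma_z
    lcl = pbar - 3 * sigma_pi * sigma_z
    return pbar, np.clip(lcl, 0, 1), np.clip(ucl, 0, 1)

# Overdispersed counts with very large subgroups: sigma_z >> 1, so the
# p'-limits are much wider than the (too tight) classic p-chart limits.
counts = [520, 480, 610, 390, 550, 450]
sizes = [10000] * 6
pbar, lcl, ucl = laney_p_prime_limits(counts, sizes)
```

When there is no between-subgroup variation, σ_z ≈ 1 and the p′-chart reduces to the ordinary p-chart, which is why the correction is safe to apply by default.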

  3. A direct observation method for auditing large urban centers using stratified sampling, mobile GIS technology and virtual environments.

    Science.gov (United States)

    Lafontaine, Sean J V; Sawada, M; Kristjansson, Elizabeth

    2017-02-16

With the expansion and growth of research on neighbourhood characteristics, there is an increased need for direct observational field audits. Herein, we introduce a novel direct observational audit method and systematic social observation instrument (SSOI) for efficiently assessing neighbourhood aesthetics over large urban areas. Our audit method uses spatial random sampling stratified by residential zoning and incorporates both mobile geographic information systems technology and virtual environments. The reliability of our method was tested in two ways: first, in 15 Ottawa neighbourhoods, we compared results at audited locations over two subsequent years; and second, we audited every residential block (167 blocks) in one neighbourhood and compared the distribution of SSOI aesthetics index scores with results from the randomly audited locations. Finally, we present interrater reliability and consistency results on all observed items. The observed neighbourhood average aesthetics index score estimated from four or five stratified random audit locations is sufficient to characterize the average neighbourhood aesthetics. The SSOI was internally consistent and demonstrated good to excellent interrater reliability. At the neighbourhood level, aesthetics is positively related to SES and physical activity and negatively correlated with BMI. The proposed approach to direct neighbourhood auditing performs sufficiently well and has the advantage of financial and temporal efficiency when auditing a large city.
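The core of the audit design — spatial random sampling stratified by residential zoning — reduces to drawing a fixed number of locations at random within each stratum. A generic sketch (field names and zoning labels are invented for illustration):

```python
import random

def stratified_sample(frame, stratum_of, n_per_stratum, seed=42):
    """Draw a fixed number of audit locations at random from each stratum
    (here: residential zoning class) of the sampling frame."""
    rng = random.Random(seed)
    strata = {}
    for loc in frame:
        strata.setdefault(stratum_of(loc), []).append(loc)
    return {s: rng.sample(locs, min(n_per_stratum, len(locs)))
            for s, locs in strata.items()}

# Toy frame: block ids tagged with an invented zoning class.
frame = [{"block": i, "zone": "R1" if i % 2 else "R2"} for i in range(20)]
sample = stratified_sample(frame, lambda loc: loc["zone"], 5)
```

Stratifying by zoning guarantees every residential class is represented in the audit, rather than leaving coverage of rare zones to chance as simple random sampling would.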

  4. Reliability and statistical power analysis of cortical and subcortical FreeSurfer metrics in a large sample of healthy elderly.

    Science.gov (United States)

    Liem, Franziskus; Mérillat, Susan; Bezzola, Ladina; Hirsiger, Sarah; Philipp, Michel; Madhyastha, Tara; Jäncke, Lutz

    2015-03-01

    FreeSurfer is a tool to quantify cortical and subcortical brain anatomy automatically and noninvasively. Previous studies have reported reliability and statistical power analyses in relatively small samples or only selected one aspect of brain anatomy. Here, we investigated reliability and statistical power of cortical thickness, surface area, volume, and the volume of subcortical structures in a large sample (N=189) of healthy elderly subjects (64+ years). Reliability (intraclass correlation coefficient) of cortical and subcortical parameters is generally high (cortical: ICCs>0.87, subcortical: ICCs>0.95). Surface-based smoothing increases reliability of cortical thickness maps, while it decreases reliability of cortical surface area and volume. Nevertheless, statistical power of all measures benefits from smoothing. When aiming to detect a 10% difference between groups, the number of subjects required to test effects with sufficient power over the entire cortex varies between cortical measures (cortical thickness: N=39, surface area: N=21, volume: N=81; 10mm smoothing, power=0.8, α=0.05). For subcortical regions this number is between 16 and 76 subjects, depending on the region. We also demonstrate the advantage of within-subject designs over between-subject designs. Furthermore, we publicly provide a tool that allows researchers to perform a priori power analysis and sensitivity analysis to help evaluate previously published studies and to design future studies with sufficient statistical power. Copyright © 2014 Elsevier Inc. All rights reserved.
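The kind of a priori power calculation the released tool performs can be approximated with the standard normal-approximation formula for a two-group comparison of a standardized mean difference. A sketch of the generic formula only (the authors' tool additionally accounts for smoothing, region, and vertex-wise multiple comparisons, which this does not):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, power=0.8, alpha=0.05):
    """Normal-approximation sample size per group to detect a standardized
    mean difference d in a two-sided, two-group comparison."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)   # critical value for the test
    z_beta = nd.inv_cdf(power)            # quantile for the target power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(n_per_group(1.0), n_per_group(0.5))   # → 16 63
```

The quadratic dependence on 1/d is why the subject counts quoted above differ so much between measures: a measure where a 10% group difference is a large standardized effect needs far fewer subjects than one where it is small.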

  5. The Depression Anxiety Stress Scales (DASS): normative data and latent structure in a large non-clinical sample.

    Science.gov (United States)

    Crawford, John R; Henry, Julie D

    2003-06-01

To provide UK normative data for the Depression Anxiety and Stress Scale (DASS) and test its convergent, discriminant and construct validity. Cross-sectional, correlational and confirmatory factor analysis (CFA). The DASS was administered to a non-clinical sample, broadly representative of the general adult UK population (N = 1,771) in terms of demographic variables. Competing models of the latent structure of the DASS were derived from theoretical and empirical sources and evaluated using confirmatory factor analysis. Correlational analysis was used to determine the influence of demographic variables on DASS scores. The convergent and discriminant validity of the measure was examined through correlating the measure with two other measures of depression and anxiety (the HADS and the sAD), and a measure of positive and negative affectivity (the PANAS). The best fitting model (CFI =.93) of the latent structure of the DASS consisted of three correlated factors corresponding to the depression, anxiety and stress scales with correlated error permitted between items comprising the DASS subscales. Demographic variables had only very modest influences on DASS scores. The reliability of the DASS was excellent, and the measure possessed adequate convergent and discriminant validity. Conclusions: The DASS is a reliable and valid measure of the constructs it was intended to assess. The utility of this measure for UK clinicians is enhanced by the provision of large sample normative data.

  6. Post-traumatic stress syndrome in a large sample of older adults: determinants and quality of life.

    Science.gov (United States)

    Lamoureux-Lamarche, Catherine; Vasiliadis, Helen-Maria; Préville, Michel; Berbiche, Djamal

    2016-01-01

The aims of this study were to assess the determinants of post-traumatic stress syndrome (PTSS) and its association with quality of life in a sample of older adults consulting in primary care practices. Data used came from a large sample of 1765 community-dwelling older adults who were waiting to receive health services in primary care clinics in the province of Quebec. PTSS was measured with the PTSS scale. Socio-demographic and clinical characteristics were used as potential determinants of PTSS. Quality of life was measured with the EuroQol-5D-3L (EQ-5D-3L) EQ-Visual Analog Scale and the Satisfaction With Your Life Scale. Multivariate logistic and linear regression models were used to study the presence of PTSS and different measures of health-related quality of life and life satisfaction as a function of study variables. The six-month prevalence of PTSS was 11.0%. PTSS was associated with age, marital status, number of chronic disorders and the presence of an anxiety disorder. PTSS was also associated with the EQ-5D-3L and the Satisfaction with Your Life Scale. PTSS is prevalent in patients consulting in primary care practices. Primary care physicians should be aware that PTSS is also associated with a decrease in quality of life, which can further negatively impact health status.

  7. Thinking about dying and trying and intending to die: results on suicidal behavior from a large Web-based sample.

    Science.gov (United States)

    de Araújo, Rafael M F; Mazzochi, Leonardo; Lara, Diogo R; Ottoni, Gustavo L

    2015-03-01

    Suicide is an important worldwide public health problem. The aim of this study was to characterize risk factors of suicidal behavior using a large Web-based sample. The data were collected by the Brazilian Internet Study on Temperament and Psychopathology (BRAINSTEP) from November 2010 to July 2011. Suicidal behavior was assessed by an instrument based on the Suicidal Behaviors Questionnaire. The final sample consisted of 48,569 volunteers (25.9% men) with a mean ± SD age of 30.7 ± 10.1 years. More than 60% of the sample reported having had at least a passing thought of killing themselves, and 6.8% of subjects had previously attempted suicide (64% unplanned). The demographic features with the highest risk of attempting suicide were female gender (OR = 1.82, 95% CI = 1.65 to 2.00); elementary school as highest education level completed (OR = 2.84, 95% CI = 2.48 to 3.25); being unable to work (OR = 5.32, 95% CI = 4.15 to 6.81); having no religion (OR = 2.08, 95% CI = 1.90 to 2.29); and, only for female participants, being married (OR = 1.19, 95% CI = 1.08 to 1.32) or divorced (OR = 1.66, 95% CI = 1.41 to 1.96). A family history of a suicide attempt and of a completed suicide showed the same increment in the risk of suicidal behavior. The higher the number of suicide attempts, the higher was the real intention to die (P < .05). Those who really wanted to die reported more emptiness/loneliness (OR = 1.58, 95% CI = 1.35 to 1.85), disconnection (OR = 1.54, 95% CI = 1.30 to 1.81), and hopelessness (OR = 1.74, 95% CI = 1.49 to 2.03), but their methods were not different from the methods of those who did not mean to die. This large Web survey confirmed results from previous studies on suicidal behavior and pointed out the relevance of the number of previous suicide attempts and of a positive family history, even for a noncompleted suicide, as important risk factors. © Copyright 2015 Physicians Postgraduate Press, Inc.
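The odds ratios with 95% confidence intervals reported above follow the standard Wald construction from a 2×2 table; for reference (the counts below are invented for illustration, not the survey's data):

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a/b = cases/non-cases among the exposed, c/d = among the unexposed."""
    or_ = (a * d) / (b * c)
    half = z * sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    return or_, exp(log(or_) - half), exp(log(or_) + half)

# Invented counts for illustration only.
or_, lo, hi = odds_ratio_ci(a=20, b=80, c=10, d=90)
print(round(or_, 2))   # → 2.25
```

The interval is symmetric on the log scale, which is why published CIs such as "OR = 1.82, 95% CI 1.65 to 2.00" are asymmetric around the point estimate on the natural scale.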

  8. Reconstruction of high-dimensional states entangled in orbital angular momentum using mutually unbiased measurements

    CSIR Research Space (South Africa)

    Giovannini, D

    2013-06-01

Full Text Available: CLEO: QELS_Fundamental Science, San Jose, California, United States, 9-14 June 2013. Reconstruction of High-Dimensional States Entangled in Orbital Angular Momentum Using Mutually Unbiased Measurements. D. Giovannini, J. Romero, J. Leach, et al.

  9. Encoding mutually unbiased bases in orbital angular momentum for quantum key distribution

    CSIR Research Space (South Africa)

    Dudley, Angela L

    2013-07-01

    Full Text Available We encode mutually unbiased bases (MUBs) using the higher-dimensional orbital angular momentum (OAM) degree of freedom associated with optical fields. We illustrate how these states are encoded with the use of a spatial light modulator (SLM). We...

  10. An Unbiased Survey of 500 Nearby Stars for Debris Disks: A JCMT Legacy Program

    NARCIS (Netherlands)

    Matthews, B.C.; Greaves, J.S.; Holland, W.S.; Wyatt, M.C.; Barlow, M.J.; Bastien, P.; Beichman, C.A.; Biggs, A.; Butner, H.M.; Dent, W.R.F.; Francesco, J. Di; Dominik, C.; Fissel, L.; Friberg, P.; Gibb, A.G.; Halpern, M.; Ivison, R.J.; Jayawardhana, R.; Jenness, T.; Johnstone, D.; Kavelaars, J.J.; Marshall, J.L.; Phillips, N.; Schieven, G.; Snellen, I.A.G.; Walker, H.J.; Ward-Thompson, D.; Weferling, B.; White, G.J.; Yates, J.; Zhu, M.; Craigon, A.

    2007-01-01

    We present the scientific motivation and observing plan for an upcoming detection survey for debris disks using the James Clerk Maxwell Telescope. The SCUBA-2 Unbiased Nearby Stars (SUNS) survey will observe 500 nearby main-sequence and subgiant stars (100 of each of the A, F, G, K, and M spectral

  11. An asymptotically unbiased minimum density power divergence estimator for the Pareto-tail index

    DEFF Research Database (Denmark)

    Dierckx, Goedele; Goegebeur, Yuri; Guillou, Armelle

    2013-01-01

    We introduce a robust and asymptotically unbiased estimator for the tail index of Pareto-type distributions. The estimator is obtained by fitting the extended Pareto distribution to the relative excesses over a high threshold with the minimum density power divergence criterion. Consistency...

  12. Automated and unbiased classification of chemical profiles from fungi using high performance liquid chromatography

    DEFF Research Database (Denmark)

    Hansen, Michael Edberg; Andersen, Birgitte; Smedsgaard, Jørn

    2005-01-01

    In this paper we present a method for unbiased/unsupervised classification and identification of closely related fungi, using chemical analysis of secondary metabolite profiles created by HPLC with UV diode array detection. For two chromatographic data matrices a vector of locally aligned full sp...

  13. Automated and unbiased image analyses as tools in phenotypic classification of small-spored Alternaria species

    DEFF Research Database (Denmark)

    Andersen, Birgitte; Hansen, Michael Edberg; Smedsgaard, Jørn

    2005-01-01

    often has been broadly applied to various morphologically and chemically distinct groups of isolates from different hosts. The purpose of this study was to develop and evaluate automated and unbiased image analysis systems that will analyze different phenotypic characters and facilitate testing...

  14. Unbiased stereological methods used for the quantitative evaluation of guided bone regeneration

    DEFF Research Database (Denmark)

    Aaboe, Else Merete; Pinholt, E M; Schou, S

    1998-01-01

    The present study describes the use of unbiased stereological methods for the quantitative evaluation of the amount of regenerated bone. Using the principle of guided bone regeneration the amount of regenerated bone after placement of degradable or non-degradable membranes covering defects...

  15. The Effects of Organizational Justice on Positive Organizational Behavior: Evidence from a Large-Sample Survey and a Situational Experiment

    Science.gov (United States)

    Pan, Xiaofu; Chen, Mengyan; Hao, Zhichao; Bi, Wenfen

    2018-01-01

    Employees' positive organizational behavior (POB) not only promotes organizational function but also improves individual and organizational performance. As an important concept in organizational research, organizational justice is thought to be a universal predictor of employee and organizational outcomes. The current set of two studies examined the effects of organizational justice (OJ) on employees' POB using a large-sample survey and a situational experiment. In study 1, a total of 2,566 employees from 45 manufacturing enterprises completed paper-and-pencil questionnaires assessing OJ and POB. In study 2, 747 employees were randomly sampled to participate in a situational experiment with a 2 × 2 between-subjects design. They were asked to read one of four situational stories, to imagine that the situation happened to the person in the story or to themselves, and then to report how the person (or they) would have felt and what the person (or they) would subsequently have done. The results of study 1 suggested that OJ was correlated with employees' POB and was a positive predictor of it. The results of study 2 suggested that OJ had significant effects on POB and on negative organizational behavior (NOB). Procedural justice accounted for significantly more variance in employees' POB than distributive justice. Distributive justice and procedural justice influenced POB and NOB differently in both effectiveness and direction. The effect of OJ on POB was greater than its effect on NOB. In addition, path analysis indicated that the direct effect of OJ on POB was smaller than its indirect effect; thus, intermediary variables are likely at work between them. PMID:29375434

  16. The Effects of Organizational Justice on Positive Organizational Behavior: Evidence from a Large-Sample Survey and a Situational Experiment.

    Science.gov (United States)

    Pan, Xiaofu; Chen, Mengyan; Hao, Zhichao; Bi, Wenfen

    2017-01-01

    Employees' positive organizational behavior (POB) not only promotes organizational function but also improves individual and organizational performance. As an important concept in organizational research, organizational justice is thought to be a universal predictor of employee and organizational outcomes. The current set of two studies examined the effects of organizational justice (OJ) on employees' POB using a large-sample survey and a situational experiment. In study 1, a total of 2,566 employees from 45 manufacturing enterprises completed paper-and-pencil questionnaires assessing OJ and POB. In study 2, 747 employees were randomly sampled to participate in a situational experiment with a 2 × 2 between-subjects design. They were asked to read one of four situational stories, to imagine that the situation happened to the person in the story or to themselves, and then to report how the person (or they) would have felt and what the person (or they) would subsequently have done. The results of study 1 suggested that OJ was correlated with employees' POB and was a positive predictor of it. The results of study 2 suggested that OJ had significant effects on POB and on negative organizational behavior (NOB). Procedural justice accounted for significantly more variance in employees' POB than distributive justice. Distributive justice and procedural justice influenced POB and NOB differently in both effectiveness and direction. The effect of OJ on POB was greater than its effect on NOB. In addition, path analysis indicated that the direct effect of OJ on POB was smaller than its indirect effect; thus, intermediary variables are likely at work between them.

  17. Large-volume constant-concentration sampling technique coupling with surface-enhanced Raman spectroscopy for rapid on-site gas analysis.

    Science.gov (United States)

    Zhang, Zhuomin; Zhan, Yisen; Huang, Yichun; Li, Gongke

    2017-08-05

    In this work, a portable large-volume constant-concentration (LVCC) sampling technique coupled with surface-enhanced Raman spectroscopy (SERS) was developed for rapid on-site gas analysis based on suitable derivatization methods. The LVCC sampling technique consisted mainly of a specially designed sampling cell, comprising a rigid sample container and a flexible sampling bag, and an absorption-derivatization module with a portable pump and a gas flowmeter. The technique allowed a large, adjustable and well-controlled sampling volume, which kept the concentration of the gas target in the headspace phase constant during the entire sampling process and made the sampling result more representative. Moreover, absorption and derivatization of the gas target during LVCC sampling were efficiently merged into one step, using bromine-thiourea for ethylene and OPA-NH₄⁺ for SO₂, which made the technique readily compatible with subsequent SERS analysis. Finally, a new LVCC sampling-SERS method was developed and successfully applied to the rapid analysis of trace ethylene and SO₂ from fruits. Trace ethylene and SO₂ from real fruit samples could be accurately quantified by this method, and SERS confirmed that their concentrations fluctuated only slightly during the entire LVCC sampling process. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Adolescent Internet Abuse: A Study on the Role of Attachment to Parents and Peers in a Large Community Sample.

    Science.gov (United States)

    Ballarotto, Giulia; Volpi, Barbara; Marzilli, Eleonora; Tambelli, Renata

    2018-01-01

    Adolescents are the main users of new technologies, and their main purpose of use is social interaction. Although new technologies help teenagers address their developmental tasks, recent studies have shown that they may also be an obstacle to growth. Research shows that teenagers with Internet addiction experience lower quality in their relationships with parents and more individual difficulties. However, limited research is available on the role played by adolescents' attachment to parents and peers, considering their psychological profiles. In a large community sample of adolescents (N = 1105), we evaluated Internet use/abuse, adolescents' attachment to parents and peers, and their psychological profiles. Hierarchical regression analyses were conducted to verify the influence of parental and peer attachment on Internet use/abuse, considering the moderating effect of adolescents' psychopathological risk. Results showed that adolescents' attachment to parents had a significant effect on Internet use. Adolescents' psychopathological risk had a moderating effect on the relationship between attachment to mothers and Internet use. Our study shows that further research is needed, taking into account both individual and family variables.
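    The hierarchical regression with a moderation (interaction) term described above can be sketched generically. Everything below is invented for illustration (variable names, effect sizes, simulated data); it shows only the standard two-step pattern: fit the main effects, add the product term, and F-test the gain in explained variance.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1105
X = rng.normal(size=n)   # attachment to parents (standardized, invented)
Z = rng.normal(size=n)   # psychopathological risk (standardized, invented)
Y = -0.3 * X + 0.2 * Z + 0.15 * X * Z + rng.normal(size=n)  # internet use score

def fit_rss(design, y):
    """Ordinary least squares; return the residual sum of squares."""
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ beta
    return float(resid @ resid)

ones = np.ones(n)
rss_main = fit_rss(np.column_stack([ones, X, Z]), Y)          # step 1: main effects
rss_full = fit_rss(np.column_stack([ones, X, Z, X * Z]), Y)   # step 2: + interaction

# F-test for the variance explained by the moderation (interaction) term.
F = (rss_main - rss_full) / (rss_full / (n - 4))
# F well above the ~3.85 critical value indicates significant moderation.
```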

  19. Adolescent Internet Abuse: A Study on the Role of Attachment to Parents and Peers in a Large Community Sample

    Directory of Open Access Journals (Sweden)

    Giulia Ballarotto

    2018-01-01

    Adolescents are the main users of new technologies, and their main purpose of use is social interaction. Although new technologies help teenagers address their developmental tasks, recent studies have shown that they may also be an obstacle to growth. Research shows that teenagers with Internet addiction experience lower quality in their relationships with parents and more individual difficulties. However, limited research is available on the role played by adolescents' attachment to parents and peers, considering their psychological profiles. In a large community sample of adolescents (N = 1105), we evaluated Internet use/abuse, adolescents' attachment to parents and peers, and their psychological profiles. Hierarchical regression analyses were conducted to verify the influence of parental and peer attachment on Internet use/abuse, considering the moderating effect of adolescents' psychopathological risk. Results showed that adolescents' attachment to parents had a significant effect on Internet use. Adolescents' psychopathological risk had a moderating effect on the relationship between attachment to mothers and Internet use. Our study shows that further research is needed, taking into account both individual and family variables.

  20. Examining gray matter structures associated with individual differences in global life satisfaction in a large sample of young adults.

    Science.gov (United States)

    Kong, Feng; Ding, Ke; Yang, Zetian; Dang, Xiaobin; Hu, Siyuan; Song, Yiying; Liu, Jia

    2015-07-01

    Although much attention has been directed towards life satisfaction, which refers to an individual's general cognitive evaluation of his or her life as a whole, little is known about the neural basis underlying global life satisfaction. In this study, we used voxel-based morphometry to investigate the structural neural correlates of life satisfaction in a large sample of young healthy adults (n = 299). We showed that individuals' life satisfaction was positively correlated with regional gray matter volume (rGMV) in the right parahippocampal gyrus (PHG), and negatively correlated with rGMV in the left precuneus and left ventromedial prefrontal cortex. This pattern of results remained significant even after controlling for general positive and negative affect, suggesting unique structural correlates of life satisfaction. Furthermore, we found that self-esteem partially mediated the association between PHG volume and life satisfaction, as well as that between precuneus volume and global life satisfaction. Taken together, we provide the first evidence for the structural neural basis of life satisfaction, and highlight that self-esteem might play a crucial role in cultivating an individual's life satisfaction. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
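    The partial-mediation result rests on an indirect-effect (a × b) decomposition, commonly tested with a percentile bootstrap. The sketch below is synthetic and hypothetical; the variable names, effect sizes, and data are invented and do not reproduce the study's analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 299
X = rng.normal(size=n)                        # brain volume measure (z-scored, invented)
M = 0.5 * X + rng.normal(size=n)              # mediator: self-esteem
Y = 0.4 * M + 0.2 * X + rng.normal(size=n)    # outcome: life satisfaction

def indirect_effect(x, m, y):
    """a = slope of M on X; b = partial slope of Y on M, controlling for X."""
    a = np.cov(x, m)[0, 1] / np.var(x, ddof=1)
    design = np.column_stack([np.ones_like(x), m, x])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return a * beta[1]

# Percentile bootstrap of the indirect effect a*b.
boot = np.array([indirect_effect(X[idx], M[idx], Y[idx])
                 for idx in (rng.integers(0, n, size=n) for _ in range(2000))])
lo, hi = np.percentile(boot, [2.5, 97.5])
# A 95% CI excluding zero supports a mediated (indirect) path;
# a nonzero direct effect alongside it corresponds to *partial* mediation.
```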

  1. Psychological Predictors of Seeking Help from Mental Health Practitioners among a Large Sample of Polish Young Adults

    Directory of Open Access Journals (Sweden)

    Lidia Perenc

    2016-10-01

    Although the literature contains a substantial number of studies on the relationship between psychological factors and attitudes towards seeking professional psychological help, the role of some determinants remains unexplored, especially among Polish young adults. The present study investigated diversity among a large cohort of Polish university students in attitudes towards help-seeking and the regulative roles of gender, level of university education, health locus of control, and sense of coherence. The total sample comprised 1706 participants who completed the following measures: Attitude Toward Seeking Professional Psychological Help Scale-SF, Multidimensional Health Locus of Control Scale, and Orientation to Life Questionnaire (SOC-29). They were recruited from various university faculties and courses by random selection. The findings revealed that, among socio-demographic variables, female gender moderately, and graduate-level university study strongly, predicted attitudes towards seeking help. Internal locus of control and all domains of sense of coherence were significantly correlated with help-seeking attitude scores. Attitudes towards psychological help-seeking are thus significantly related to female gender, graduate university education, internal health locus of control, and sense of coherence. Further research must be performed in Poland to validate these results in different age and social groups.

  2. Examining gray matter structures associated with individual differences in global life satisfaction in a large sample of young adults

    Science.gov (United States)

    Kong, Feng; Ding, Ke; Yang, Zetian; Dang, Xiaobin; Hu, Siyuan; Song, Yiying

    2015-01-01

    Although much attention has been directed towards life satisfaction, which refers to an individual's general cognitive evaluation of his or her life as a whole, little is known about the neural basis underlying global life satisfaction. In this study, we used voxel-based morphometry to investigate the structural neural correlates of life satisfaction in a large sample of young healthy adults (n = 299). We showed that individuals' life satisfaction was positively correlated with regional gray matter volume (rGMV) in the right parahippocampal gyrus (PHG), and negatively correlated with rGMV in the left precuneus and left ventromedial prefrontal cortex. This pattern of results remained significant even after controlling for general positive and negative affect, suggesting unique structural correlates of life satisfaction. Furthermore, we found that self-esteem partially mediated the association between PHG volume and life satisfaction, as well as that between precuneus volume and global life satisfaction. Taken together, we provide the first evidence for the structural neural basis of life satisfaction, and highlight that self-esteem might play a crucial role in cultivating an individual's life satisfaction. PMID:25406366

  3. The Kinematics of the Permitted C ii λ 6578 Line in a Large Sample of Planetary Nebulae

    Energy Technology Data Exchange (ETDEWEB)

    Richer, Michael G.; Suárez, Genaro; López, José Alberto; García Díaz, María Teresa, E-mail: richer@astrosen.unam.mx, E-mail: gsuarez@astro.unam.mx, E-mail: jal@astrosen.unam.mx, E-mail: tere@astro.unam.mx [Instituto de Astronomía, Universidad Nacional Autónoma de México, Ensenada, Baja California (Mexico)

    2017-03-01

    We present spectroscopic observations of the C ii λ6578 permitted line for 83 lines of sight in 76 planetary nebulae at high spectral resolution, most of them obtained with the Manchester Echelle Spectrograph on the 2.1 m telescope at the Observatorio Astronómico Nacional on the Sierra San Pedro Mártir. We study the kinematics of the C ii λ6578 permitted line with respect to other permitted and collisionally excited lines. Statistically, we find that the kinematics of the C ii λ6578 line are not those expected if this line arises from the recombination of C²⁺ ions or the fluorescence of C⁺ ions in ionization equilibrium in a chemically homogeneous nebular plasma; instead, its kinematics are those appropriate for a volume more internal than expected. The planetary nebulae in this sample have well-defined morphology and are restricted to a limited range in Hα line widths (no large values) compared to their counterparts in the Milky Way bulge; both these features could be interpreted as the result of young nebular shells, an inference that is also supported by nebular modeling. Concerning the long-standing discrepancy between chemical abundances inferred from permitted and collisionally excited emission lines in photoionized nebulae, our results imply that multiple plasma components occur commonly in planetary nebulae.

  4. Neuronal correlates of the five factor model (FFM) of human personality: Multimodal imaging in a large healthy sample.

    Science.gov (United States)

    Bjørnebekk, Astrid; Fjell, Anders M; Walhovd, Kristine B; Grydeland, Håkon; Torgersen, Svenn; Westlye, Lars T

    2013-01-15

    Advances in neuroimaging techniques have recently provided a glimpse into the neurobiology of complex traits of human personality. Whereas some intriguing findings have connected aspects of personality to variations in brain morphology, the relations are complex and our current understanding is incomplete. We therefore aimed to provide a comprehensive investigation of brain-personality relations using a multimodal neuroimaging approach in a large sample comprising 265 healthy individuals. The NEO Personality Inventory was used to measure core aspects of human personality, and imaging phenotypes included measures of total and regional brain volumes, regional cortical thickness and arealization, and diffusion tensor imaging indices of white matter (WM) microstructure. Neuroticism was the trait most clearly linked to brain structure: higher neuroticism, including facets reflecting anxiety, depression, and vulnerability to stress, was associated with smaller total brain volume, widespread decreases in WM microstructure, and smaller frontotemporal surface area. Higher scores on extraversion were associated with a thinner inferior frontal gyrus, and conscientiousness was negatively associated with arealization of the temporoparietal junction. No reliable associations were found between brain structure and either agreeableness or openness. The results provide novel evidence of associations between brain structure and variations in human personality, and corroborate previous findings of a consistent neuroanatomical basis of negative emotionality. Copyright © 2012 Elsevier Inc. All rights reserved.

  5. Metabolic fingerprinting of fresh lymphoma samples used to discriminate between follicular and diffuse large B-cell lymphomas.

    Science.gov (United States)

    Barba, Ignasi; Sanz, Carolina; Barbera, Angels; Tapia, Gustavo; Mate, José-Luis; Garcia-Dorado, David; Ribera, Josep-Maria; Oriol, Albert

    2009-11-01

    To investigate whether proton nuclear magnetic resonance (¹H NMR) spectroscopy-based metabolic profiling was able to differentiate follicular lymphoma (FL) from diffuse large B-cell lymphoma (DLBCL), and to study which metabolites were responsible for the differences. High-resolution ¹H NMR spectra were obtained from fresh samples of lymph node biopsies collected consecutively at one center (14 FL and 17 DLBCL). Spectra were processed using pattern-recognition methods. Discriminant models were able to differentiate between the two tumor types with 86% sensitivity and 76% specificity; the metabolites that contributed most to the discrimination were a relative increase of alanine in DLBCL and a relative increase of taurine in FL. Metabolic models had a significant but weak correlation with Ki67 expression (r² = 0.42; p = 0.002). We have shown that it is possible to differentiate between FL and DLBCL based on their NMR metabolic profiles. This approach may potentially be applicable as a noninvasive tool for diagnosis and treatment follow-up in the clinical setting using conventional magnetic resonance systems.
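    The pattern-recognition step can be illustrated with a from-scratch Fisher linear discriminant on synthetic "spectra". The bin indices, effect sizes, and class sizes below are invented; the study's actual preprocessing and discriminant models differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n_fl, n_dlbcl, n_bins = 14, 17, 40
FL = rng.normal(0.0, 1.0, size=(n_fl, n_bins))       # synthetic binned spectra
DLBCL = rng.normal(0.0, 1.0, size=(n_dlbcl, n_bins))
DLBCL[:, 5] += 1.5    # pretend bin 5 is alanine, elevated in DLBCL
FL[:, 12] += 1.5      # pretend bin 12 is taurine, elevated in FL

X = np.vstack([FL, DLBCL])
y = np.array([0] * n_fl + [1] * n_dlbcl)  # 1 = DLBCL ("positive" class)

# Fisher LDA: w = Sw^-1 (mu1 - mu0); classify by projecting onto w.
mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
Sw = (np.cov(X[y == 0].T) * (n_fl - 1)
      + np.cov(X[y == 1].T) * (n_dlbcl - 1))         # pooled within-class scatter
w = np.linalg.solve(Sw + 0.1 * np.eye(n_bins), mu1 - mu0)  # small ridge: p > n
scores = X @ w
threshold = (scores[y == 0].mean() + scores[y == 1].mean()) / 2
pred = (scores > threshold).astype(int)

sensitivity = (pred[y == 1] == 1).mean()   # DLBCL correctly called DLBCL
specificity = (pred[y == 0] == 0).mean()   # FL correctly called FL
```

    These are training-set figures; with 31 samples in 40 dimensions, honest performance would need cross-validation, as overfitting is severe in this regime.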

  6. Large area gridded ionisation chamber and electrostatic precipitator. Application to low-level alphaspectrometry of environmental air samples

    International Nuclear Information System (INIS)

    Hoetzl, H.; Winkler, R.

    1978-01-01

    A high-resolution, parallel-plate Frisch grid ionisation chamber with an effective area of 300 cm² and a large-area electrostatic precipitator were developed and applied to direct alpha-particle spectrometry of airborne dust. The aerosols were deposited on circular tin-plate dishes of 300 cm² by the electrostatic precipitator, which was constructed for continuous operation at an air flow rate of 2 m³/h. The collection efficiency is found to be 0.78 for the natural Rn- and Tn-daughter products. Using an argon-methane mixture (P-10 gas) at atmospheric pressure, the resolution of the detector system is 22 keV FWHM at 5.15 MeV. The integral background is typically 15.7 counts/h between 4 and 6 MeV. After sampling for one week and decay of short-lived natural activity, the sensitivity of the procedure for long-lived alpha-emitters is about 0.1 fCi/m³, based on 3σ of the background as the detection limit and a 1000 min counting time. (Auth.)
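    The quoted sensitivity of roughly 0.1 fCi/m³ can be approximately reproduced from the reported figures. The sketch below assumes a 2π counting geometry factor of 0.5 (not stated in the abstract) in addition to the 0.78 collection efficiency; only the background rate, counting time, flow rate, and one-week sampling period come from the record.

```python
import math

background_rate = 15.7 / 60.0      # counts per minute in the 4-6 MeV window
t_count = 1000.0                   # counting time, minutes
b_counts = background_rate * t_count
detect_counts = 3.0 * math.sqrt(b_counts)      # "3 sigma of background" criterion

eff = 0.78 * 0.5                   # collection efficiency x assumed 2pi geometry
activity_bq = detect_counts / (eff * t_count * 60.0)   # decays/s at the limit

volume_m3 = 2.0 * 24.0 * 7.0       # one week of sampling at 2 m^3/h
limit_fci_per_m3 = activity_bq / volume_m3 / 3.7e-5    # 1 fCi = 3.7e-5 Bq
# limit_fci_per_m3 comes out near the ~0.1 fCi/m^3 quoted in the abstract
```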

  7. Optimism and self-esteem are related to sleep. Results from a large community-based sample.

    Science.gov (United States)

    Lemola, Sakari; Räikkönen, Katri; Gomez, Veronica; Allemand, Mathias

    2013-12-01

    There is evidence that positive personality characteristics, such as optimism and self-esteem, are important for health. Less is known about possible determinants of positive personality characteristics. The aim was to test the relationship of optimism and self-esteem with insomnia symptoms and sleep duration. Sleep parameters, optimism, and self-esteem were assessed by self-report in a community-based sample of 1,805 adults aged between 30 and 84 years in the USA. Moderation of the relation between sleep and positive characteristics by gender and age, as well as potential confounding of the association by depressive disorder, was tested. Individuals with insomnia symptoms scored lower on optimism and self-esteem largely independent of age and sex, controlling for symptoms of depression and sleep duration. Short sleep duration was related to lower optimism and self-esteem when compared to sleeping 7-8 h, controlling for depressive symptoms. Long sleep duration (>9 h) was also related to low optimism and self-esteem independent of age and sex. Good and sufficient sleep is associated with positive personality characteristics. This relationship is independent of the association between poor sleep and depression.

  8. Psychometric support of the school climate measure in a large, diverse sample of adolescents: a replication and extension.

    Science.gov (United States)

    Zullig, Keith J; Collins, Rani; Ghani, Nadia; Patton, Jon M; Scott Huebner, E; Ajamie, Jean

    2014-02-01

    The School Climate Measure (SCM) was developed and validated in 2010 in response to a dearth of psychometrically sound school climate instruments. This study sought to further validate the SCM in a large, diverse sample of Arizona public school adolescents (N = 20,953). Four SCM domains (positive student-teacher relationships, academic support, order and discipline, and physical environment) were available for the analysis. Confirmatory factor analysis and structural equation modeling were used to establish construct validity, and criterion-related validity was assessed via selected Youth Risk Behavior Survey (YRBS) school safety items and self-reported grade point average (GPA). Analyses confirmed that the 4 SCM school climate domains explained approximately 63% of the variance (factor loading range .45-.92). Structural equation models fit the data well: χ² = 14,325 (df = 293, p < .001), comparative fit index (CFI) = .951, Tucker-Lewis index (TLI) = .952, root mean square error of approximation (RMSEA) = .05. The goodness-of-fit index was .940. Coefficient alphas ranged from .82 to .93. Analyses of variance with post hoc comparisons suggested that the SCM domains related in the hypothesized directions to the school safety items and GPA. The additional evidence supports the validity and reliability of the SCM. Measures such as the SCM can facilitate data-driven decisions and may be incorporated into evidence-based processes designed to improve student outcomes. © 2014, American School Health Association.
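    As a consistency check, the reported RMSEA follows from the reported χ², degrees of freedom, and sample size via the standard formula RMSEA = √(max(χ² − df, 0) / (df · (N − 1))):

```python
import math

chi2, df, N = 14325.0, 293, 20953   # values reported for the SCM model
rmsea = math.sqrt(max(chi2 - df, 0.0) / (df * (N - 1)))
print(round(rmsea, 3))   # 0.048, consistent with the reported .05
```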

  9. Does Shyness Vary According to Attained Social Roles? Trends Across Age Groups in a Large British Sample.

    Science.gov (United States)

    Van Zalk, Nejra; Lamb, Michael E; Jason Rentfrow, Peter

    2017-12-01

    The current study investigated (a) how a composite measure of shyness comprising introversion and neuroticism relates to other well-known constructs involving social fears, and (b) whether mean levels of shyness vary for men and women depending on the adoption of various social roles. Study 1 used a sample of 211 UK participants aged 17-70 (64% female; mean age = 47.90). Study 2 used data from a large cross-sectional data set of UK participants aged 17-70 (N = 552,663; 64% female; mean age = 34.19 years). Study 1 showed that shyness measured as a composite of introversion and neuroticism was highly correlated with other constructs involving social fears. Study 2 indicated that, controlling for various sociodemographic variables, females had higher, and males lower, levels of shyness. Men and women in employment had the lowest shyness levels; within occupations, those working in unskilled jobs had the highest and people working in sales the lowest levels of shyness. Participants in relationships had lower levels of shyness than those not in relationships, but parenthood was not associated with shyness. Mean levels of shyness are thus likely to vary according to adopted social roles, gender, and age. © 2016 Wiley Periodicals, Inc.

  10. Evaluation of single and two-stage adaptive sampling designs for estimation of density and abundance of freshwater mussels in a large river

    Science.gov (United States)

    Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.

    2011-01-01

    Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.
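    The kind of simulation described above can be sketched in a stripped-down form: quadrat counts are drawn as Poisson (random placement) or negative-binomial (clustered) with the same mean, and only simple random sampling of quadrats is evaluated. The adaptive and two-stage designs of the study are not implemented here, and all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

def performance(mean_density, clustered, n_quadrats=50, n_total=10_000, reps=2000):
    """CV of the density estimate and species-detection probability under
    simple random sampling of quadrats from a simulated population."""
    if clustered:
        k = 0.2  # small negative-binomial size parameter -> patchy counts
        pop = rng.negative_binomial(k, k / (k + mean_density), size=n_total)
    else:
        pop = rng.poisson(mean_density, size=n_total)
    means, detected = [], 0
    for _ in range(reps):
        sample = rng.choice(pop, size=n_quadrats, replace=False)
        means.append(sample.mean())
        detected += sample.sum() > 0
    means = np.array(means)
    return means.std() / means.mean(), detected / reps

cv_rand, p_rand = performance(0.2, clustered=False)
cv_clust, p_clust = performance(0.2, clustered=True)
# At equal mean density, clustering inflates the CV of the density estimate
# and can reduce the chance of detecting the species at all, which is why
# adaptive designs that oversample near occupied quadrats become attractive.
```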

  11. Characteristics of Beverage Consumption Habits among a Large Sample of French Adults: Associations with Total Water and Energy Intakes

    Directory of Open Access Journals (Sweden)

    Fabien Szabo de Edelenyi

    2016-10-01

    Background: Adequate hydration is a key factor for the correct functioning of both cognitive and physical processes. In France, public health recommendations about adequate total water intake (TWI) state only that fluid intake should be sufficient, with particular attention paid to hydration for seniors, especially during heatwave periods. The objective of this study was to calculate the total amount of water coming from food and beverages and to analyse characteristics of consumption in participants from a large French national cohort. Methods: TWI, as well as the contribution of food and beverages to TWI, was assessed among 94,939 adult participants in the Nutrinet-Santé cohort (78% women; mean age 42.9 (SE 0.04)) using three 24-h dietary records at baseline. Statistical differences in water intakes across age groups, seasons and days of the week were assessed. Results: The mean TWI was 2.3 L (SE 4.7) for men and 2.1 L (SE 2.4) for women. A majority of the sample did comply with the European Food Safety Authority (EFSA) adequate intake recommendation, especially women. Mean total energy intake (EI) was 1884 kcal/day (SE 1.5): 2250 kcal/day (SE 3.6) for men and 1783 kcal/day (SE 1.5) for women. The contribution of beverages to total EI was 8.3%. Water was the most consumed beverage, followed by hot beverages. The variety score, defined as the number of different categories of beverages consumed during the three 24-h records out of a maximum of 8, was positively correlated with TWI (r = 0.4) and with EI (r = 0.2), suggesting that beverage variety is an indicator of higher consumption of food and drinks. We found differences in beverage consumption and water intakes according to age and seasonality. Conclusions: The present study gives an overview of water intake characteristics in a large population of French adults. TWI was found to be globally in line with public health recommendations.

  12. CONNECTING GRBs AND ULIRGs: A SENSITIVE, UNBIASED SURVEY FOR RADIO EMISSION FROM GAMMA-RAY BURST HOST GALAXIES AT 0 < z < 2.5

    Energy Technology Data Exchange (ETDEWEB)

    Perley, D. A. [Department of Astronomy, California Institute of Technology, MC 249-17, 1200 East California Boulevard, Pasadena, CA 91125 (United States); Perley, R. A. [National Radio Astronomy Observatory, P.O. Box O, Socorro, NM 87801 (United States); Hjorth, J.; Malesani, D. [Dark Cosmology Centre, Niels Bohr Institute, DK-2100 Copenhagen (Denmark); Michałowski, M. J. [Scottish Universities Physics Alliance, Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh, EH9 3HJ (United Kingdom); Cenko, S. B. [NASA/Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Jakobsson, P. [Centre for Astrophysics and Cosmology, Science Institute, University of Iceland, Dunhagi 5, 107 Reykjavík (Iceland); Krühler, T. [European Southern Observatory, Alonso de Córdova 3107, Vitacura, Casilla 19001, Santiago 19 (Chile); Levan, A. J. [Department of Physics, University of Warwick, Coventry CV4 7AL (United Kingdom); Tanvir, N. R., E-mail: dperley@astro.caltech.edu [Department of Physics and Astronomy, University of Leicester, Leicester LE1 7RH (United Kingdom)

    2015-03-10

    Luminous infrared galaxies and submillimeter galaxies contribute significantly to stellar mass assembly and provide an important test of the connection between the gamma-ray burst (GRB) rate and that of overall cosmic star formation. We present sensitive 3 GHz radio observations, using the Karl G. Jansky Very Large Array, of 32 uniformly selected GRB host galaxies spanning the redshift range 0 < z < 2.5, providing the first fully dust- and sample-unbiased measurement of the fraction of GRBs originating from the universe's most bolometrically luminous galaxies. Four galaxies are detected, with inferred radio star formation rates (SFRs) ranging between 50 and 300 M_⊙ yr⁻¹. Three of the four detections correspond to events consistent with being optically obscured 'dark' bursts. Our overall detection fraction implies that between 9% and 23% of GRBs at 0.5 < z < 2.5 occur in galaxies with S_3GHz > 10 μJy, corresponding to SFR > 50 M_⊙ yr⁻¹ at z ∼ 1 or >250 M_⊙ yr⁻¹ at z ∼ 2. Similar galaxies contribute approximately 10%-30% of all cosmic star formation, so our results are consistent with a GRB rate that is not strongly biased with respect to the total SFR of a galaxy. However, all four radio-detected hosts have stellar masses significantly lower than IR/submillimeter-selected field galaxies of similar luminosities. We suggest that the GRB rate may be suppressed in metal-rich environments but independently enhanced in intense starbursts, producing a strong efficiency dependence on mass but little net dependence on bulk galaxy SFR.

  13. Practical recipes for the model order reduction, dynamical simulation and compressive sampling of large-scale open quantum systems

    Energy Technology Data Exchange (ETDEWEB)

    Sidles, John A; Jacky, Jonathan P [Department of Orthopaedics and Sports Medicine, Box 356500, School of Medicine, University of Washington, Seattle, WA, 98195 (United States); Garbini, Joseph L; Malcomb, Joseph R; Williamson, Austin M [Department of Mechanical Engineering, University of Washington, Seattle, WA 98195 (United States); Harrell, Lee E [Department of Physics, US Military Academy, West Point, NY 10996 (United States); Hero, Alfred O [Department of Electrical Engineering, University of Michigan, MI 49931 (United States); Norman, Anthony G [Department of Bioengineering, University of Washington, Seattle, WA 98195 (United States)], E-mail: sidles@u.washington.edu

    2009-06-15

    Practical recipes are presented for simulating high-temperature and nonequilibrium quantum spin systems that are continuously measured and controlled. The notion of a spin system is broadly conceived, in order to encompass macroscopic test masses as the limiting case of large-j spins. The simulation technique has three stages: first the deliberate introduction of noise into the simulation, then the conversion of that noise into an equivalent continuous measurement and control process, and finally, projection of the trajectory onto state-space manifolds having reduced dimensionality and possessing a Kaehler potential of multilinear algebraic form. These state-spaces can be regarded as ruled algebraic varieties upon which a projective quantum model order reduction (MOR) is performed. The Riemannian sectional curvature of ruled Kaehlerian varieties is analyzed, and proved to be non-positive upon all sections that contain a rule. These manifolds are shown to contain Slater determinants as a special case and their identity with Grassmannian varieties is demonstrated. The resulting simulation formalism is used to construct a positive P-representation for the thermal density matrix. Single-spin detection by magnetic resonance force microscopy (MRFM) is simulated, and the data statistics are shown to be those of a random telegraph signal with additive white noise. Larger-scale spin-dust models are simulated, having no spatial symmetry and no spatial ordering; the high-fidelity projection of numerically computed quantum trajectories onto low dimensionality Kaehler state-space manifolds is demonstrated. The reconstruction of quantum trajectories from sparse random projections is demonstrated, the onset of Donoho-Stodden breakdown at the Candes-Tao sparsity limit is observed, a deterministic construction for sampling matrices is given and methods for quantum state optimization by Dantzig selection are given.

  14. Practical recipes for the model order reduction, dynamical simulation and compressive sampling of large-scale open quantum systems

    International Nuclear Information System (INIS)

    Sidles, John A; Jacky, Jonathan P; Garbini, Joseph L; Malcomb, Joseph R; Williamson, Austin M; Harrell, Lee E; Hero, Alfred O; Norman, Anthony G

    2009-01-01

    Practical recipes are presented for simulating high-temperature and nonequilibrium quantum spin systems that are continuously measured and controlled. The notion of a spin system is broadly conceived, in order to encompass macroscopic test masses as the limiting case of large-j spins. The simulation technique has three stages: first the deliberate introduction of noise into the simulation, then the conversion of that noise into an equivalent continuous measurement and control process, and finally, projection of the trajectory onto state-space manifolds having reduced dimensionality and possessing a Kaehler potential of multilinear algebraic form. These state-spaces can be regarded as ruled algebraic varieties upon which a projective quantum model order reduction (MOR) is performed. The Riemannian sectional curvature of ruled Kaehlerian varieties is analyzed, and proved to be non-positive upon all sections that contain a rule. These manifolds are shown to contain Slater determinants as a special case and their identity with Grassmannian varieties is demonstrated. The resulting simulation formalism is used to construct a positive P-representation for the thermal density matrix. Single-spin detection by magnetic resonance force microscopy (MRFM) is simulated, and the data statistics are shown to be those of a random telegraph signal with additive white noise. Larger-scale spin-dust models are simulated, having no spatial symmetry and no spatial ordering; the high-fidelity projection of numerically computed quantum trajectories onto low dimensionality Kaehler state-space manifolds is demonstrated. The reconstruction of quantum trajectories from sparse random projections is demonstrated, the onset of Donoho-Stodden breakdown at the Candes-Tao sparsity limit is observed, a deterministic construction for sampling matrices is given and methods for quantum state optimization by Dantzig selection are given.

  15. Practical recipes for the model order reduction, dynamical simulation and compressive sampling of large-scale open quantum systems

    Science.gov (United States)

    Sidles, John A.; Garbini, Joseph L.; Harrell, Lee E.; Hero, Alfred O.; Jacky, Jonathan P.; Malcomb, Joseph R.; Norman, Anthony G.; Williamson, Austin M.

    2009-06-01

    Practical recipes are presented for simulating high-temperature and nonequilibrium quantum spin systems that are continuously measured and controlled. The notion of a spin system is broadly conceived, in order to encompass macroscopic test masses as the limiting case of large-j spins. The simulation technique has three stages: first the deliberate introduction of noise into the simulation, then the conversion of that noise into an equivalent continuous measurement and control process, and finally, projection of the trajectory onto state-space manifolds having reduced dimensionality and possessing a Kähler potential of multilinear algebraic form. These state-spaces can be regarded as ruled algebraic varieties upon which a projective quantum model order reduction (MOR) is performed. The Riemannian sectional curvature of ruled Kählerian varieties is analyzed, and proved to be non-positive upon all sections that contain a rule. These manifolds are shown to contain Slater determinants as a special case and their identity with Grassmannian varieties is demonstrated. The resulting simulation formalism is used to construct a positive P-representation for the thermal density matrix. Single-spin detection by magnetic resonance force microscopy (MRFM) is simulated, and the data statistics are shown to be those of a random telegraph signal with additive white noise. Larger-scale spin-dust models are simulated, having no spatial symmetry and no spatial ordering; the high-fidelity projection of numerically computed quantum trajectories onto low dimensionality Kähler state-space manifolds is demonstrated. The reconstruction of quantum trajectories from sparse random projections is demonstrated, the onset of Donoho-Stodden breakdown at the Candès-Tao sparsity limit is observed, a deterministic construction for sampling matrices is given and methods for quantum state optimization by Dantzig selection are given.
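    The reconstruction of trajectories from sparse random projections described in this abstract rests on compressive sampling. A minimal classical analogue, offered as an illustration only (the dimensions, random seed and LP solver below are arbitrary choices, not the paper's setup), is basis-pursuit recovery of a sparse vector from random projections via linear programming:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 60, 30, 4                       # ambient dim, projections, sparsity
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)  # random projection matrix
y = A @ x                                 # m measurements of the n-dim signal

# basis pursuit: minimize ||x||_1 subject to Ax = y, posed as an LP
# with the standard split x = u - v, u >= 0, v >= 0
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=[(0, None)] * (2 * n))
x_rec = res.x[:n] - res.x[n:]
# with m well above the sparsity threshold, x_rec reproduces x
```

    Pushing k upward at fixed m eventually produces the recovery breakdown at the sparsity limit that the abstract refers to.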

  16. Sleep in a large, multi-university sample of college students: sleep problem prevalence, sex differences, and mental health correlates.

    Science.gov (United States)

    Becker, Stephen P; Jarrett, Matthew A; Luebbe, Aaron M; Garner, Annie A; Burns, G Leonard; Kofler, Michael J

    2018-04-01

    To (1) describe sleep problems in a large, multi-university sample of college students; (2) evaluate sex differences; and (3) examine the unique associations of mental health symptoms (i.e., anxiety, depression, attention-deficit/hyperactivity disorder inattention [ADHD-IN], ADHD hyperactivity-impulsivity [ADHD-HI]) in relation to sleep problems. 7,626 students (70% female; 81% White) ages 18-29 years (M=19.14, SD=1.42) from six universities completed measures assessing mental health symptoms and the Pittsburgh Sleep Quality Index (PSQI). A substantial minority of students endorsed sleep problems across specific sleep components. Specifically, 27% described their sleep quality as poor, 36% reported obtaining less than 7 hours of sleep per night, and 43% reported that it takes >30 minutes to fall asleep at least once per week. 62% of participants met cut-off criteria for poor sleep, though rates differed between females (64%) and males (57%). In structural regression models, both anxiety and depression symptoms were uniquely associated with disruptions in most PSQI sleep component domains. However, anxiety (but not depression) symptoms were uniquely associated with more sleep disturbances and sleep medication use, whereas depression (but not anxiety) symptoms were uniquely associated with increased daytime dysfunction. ADHD-IN symptoms were uniquely associated with poorer sleep quality and increased daytime dysfunction, whereas ADHD-HI symptoms were uniquely associated with more sleep disturbances and less daytime dysfunction. Lastly, ADHD-IN, anxiety, and depression symptoms were each independently associated with poor sleep status. This study documents a high prevalence of poor sleep among college students, some sex differences, and distinct patterns of mental health symptoms in relation to sleep problems. Copyright © 2018. Published by Elsevier Inc.

  17. Mind-Body Practice and Body Weight Status in a Large Population-Based Sample of Adults.

    Science.gov (United States)

    Camilleri, Géraldine M; Méjean, Caroline; Bellisle, France; Hercberg, Serge; Péneau, Sandrine

    2016-04-01

    In industrialized countries characterized by a high prevalence of obesity and chronic stress, mind-body practices such as yoga or meditation may facilitate body weight control. However, virtually no data are available to ascertain whether practicing mind-body techniques is associated with weight status. The purpose of this study is to examine the relationship between the practice of mind-body techniques and weight status in a large population-based sample of adults. A total of 61,704 individuals aged ≥18 years participating in the NutriNet-Santé study (2009-2014) were included in this cross-sectional analysis conducted in 2014. Data on mind-body practices were collected, as well as self-reported weight and height. The association between the practice of mind-body techniques and weight status was assessed using multiple linear and multinomial logistic regression models adjusted for sociodemographic, lifestyle, and dietary factors. After adjusting for sociodemographic and lifestyle factors, regular users of mind-body techniques were less likely to be overweight (OR=0.68, 95% CI=0.63, 0.74) or obese (OR=0.55, 95% CI=0.50, 0.61) than never users. In addition, regular users had a lower BMI than never users (-3.19%, 95% CI=-3.71, -2.68). These data provide novel information about an inverse relationship between mind-body practice and weight status. If causal links were demonstrated in further prospective studies, such practice could be fostered in obesity prevention and treatment. Copyright © 2016 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  18. Mutually unbiased coarse-grained measurements of two or more phase-space variables

    Science.gov (United States)

    Paul, E. C.; Walborn, S. P.; Tasca, D. S.; Rudnicki, Łukasz

    2018-05-01

    Mutual unbiasedness of the eigenstates of phase-space operators—such as position and momentum, or their standard coarse-grained versions—exists only in the limiting case of infinite squeezing. In Phys. Rev. Lett. 120, 040403 (2018), 10.1103/PhysRevLett.120.040403, it was shown that mutual unbiasedness can be recovered for periodic coarse graining of these two operators. Here we investigate mutual unbiasedness of coarse-grained measurements for more than two phase-space variables. We show that mutual unbiasedness can be recovered between periodic coarse graining of any two nonparallel phase-space operators. We illustrate these results through optics experiments, using the fractional Fourier transform to prepare and measure mutually unbiased phase-space variables. The differences between two and three mutually unbiased measurements are discussed. Our results contribute to bridging the gap between continuous and discrete quantum mechanics, and they could be useful in quantum-information protocols.

  19. An Unbiased Distance-based Outlier Detection Approach for High-dimensional Data

    DEFF Research Database (Denmark)

    Nguyen, Hoang Vu; Gopalkrishnan, Vivekanand; Assent, Ira

    2011-01-01

    … than a global property. Different from existing approaches, it is not grid-based and is dimensionality-unbiased. Thus, its performance is impervious to grid resolution as well as the curse of dimensionality. In addition, our approach ranks the outliers, allowing users to select the number of desired outliers, thus mitigating the issue of a high false alarm rate. Extensive empirical studies on real datasets show that our approach efficiently and effectively detects outliers, even in high-dimensional spaces.

  20. Monofunctional stealth nanoparticle for unbiased single molecule tracking inside living cells.

    Science.gov (United States)

    Lisse, Domenik; Richter, Christian P; Drees, Christoph; Birkholz, Oliver; You, Changjiang; Rampazzo, Enrico; Piehler, Jacob

    2014-01-01

    On the basis of a protein cage scaffold, we have systematically explored intracellular application of nanoparticles for single molecule studies and discovered that recognition by the autophagy machinery plays a key role for rapid metabolism in the cytosol. Intracellular stealth nanoparticles were achieved by heavy surface PEGylation. By combination with a generic approach for nanoparticle monofunctionalization, efficient labeling of intracellular proteins with high fidelity was accomplished, allowing unbiased long-term tracking of proteins in the outer mitochondrial membrane.

  1. The significance of Sampling Design on Inference: An Analysis of Binary Outcome Model of Children’s Schooling Using Indonesian Large Multi-stage Sampling Data

    OpenAIRE

    Ekki Syamsulhakim

    2008-01-01

    This paper aims to exercise a rather recent trend in applied microeconometrics, namely the effect of sampling design on statistical inference, especially on binary outcome models. Much theoretical research in econometrics has shown the inappropriateness of applying i.i.d.-assumed statistical analysis to non-i.i.d. data. These studies have provided proofs showing that applying i.i.d.-assumed analysis to non-i.i.d. observations would result in inflated standard errors, which could make the esti...
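    The design effect at issue here can be illustrated numerically. The sketch below (cluster counts, cluster sizes and the intra-cluster correlation are all invented for illustration, not taken from the Indonesian data) simulates two-stage cluster-correlated observations and compares the i.i.d.-assumed standard error of the sample mean with the true, design-based one:

```python
import numpy as np

rng = np.random.default_rng(0)
G, m, reps = 200, 25, 2000              # clusters, members per cluster, MC reps
means = np.empty(reps)
for r in range(reps):
    u = rng.normal(size=(G, 1))         # shared cluster effect, variance 1
    e = rng.normal(size=(G, m))         # idiosyncratic error, variance 1
    means[r] = (u + e).mean()           # sample mean over all G*m observations

true_se = means.std()                           # Monte Carlo truth
naive_se = np.sqrt(2.0 / (G * m))               # i.i.d.-assumed formula
design_se = np.sqrt(1.0 / G + 1.0 / (G * m))    # accounts for clustering
# the i.i.d. formula (~0.02) badly understates the true sampling
# variability (~0.07); the design-based formula matches the simulation
```

    In other words, the correct design-based standard error is inflated relative to the i.i.d. formula, which is why ignoring the multi-stage design distorts inference.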

  2. Large-volume constant-concentration sampling technique coupling with surface-enhanced Raman spectroscopy for rapid on-site gas analysis

    Science.gov (United States)

    Zhang, Zhuomin; Zhan, Yisen; Huang, Yichun; Li, Gongke

    2017-08-01

    In this work, a portable large-volume constant-concentration (LVCC) sampling technique coupled with surface-enhanced Raman spectroscopy (SERS) was developed for rapid on-site gas analysis based on suitable derivatization methods. The LVCC sampling technique mainly consisted of a specially designed sampling cell, comprising a rigid sample container and a flexible sampling bag, and an absorption-derivatization module with a portable pump and a gas flowmeter. The LVCC sampling technique allowed a large, alterable and well-controlled sampling volume, which kept the concentration of the gas target in the headspace phase constant during the entire sampling process and made the sampling result more representative. Moreover, absorption and derivatization of the gas target during the LVCC sampling process were efficiently merged into one step, using the bromine-thiourea and OPA-NH4+ strategies for ethylene and SO2 respectively, which made the LVCC sampling technique conveniently adaptable to subsequent SERS analysis. Finally, a new LVCC sampling-SERS method was developed and successfully applied to the rapid analysis of trace ethylene and SO2 from fruits. Trace ethylene and SO2 from real fruit samples could be actually and accurately quantified by this method. The concentration fluctuations of ethylene and SO2 during the entire LVCC sampling process were proved to be minor, and recoveries from real samples were achieved in the ranges of 95.0-101% and 97.0-104%, respectively. It is expected that the portable LVCC sampling technique will pave the way for rapid on-site analysis of accurate concentrations of trace gas targets from real samples by SERS.

  3. Losing the rose tinted glasses: neural substrates of unbiased belief updating in depression

    Directory of Open Access Journals (Sweden)

    Garrett, Neil

    2014-08-01

    Recent evidence suggests that a state of good mental health is associated with biased processing of information that supports a positively skewed view of the future. Depression, on the other hand, is associated with unbiased processing of such information. Here, we use brain imaging in conjunction with a belief update task administered to clinically depressed patients and healthy controls to characterize brain activity that supports unbiased belief updating in clinically depressed individuals. Our results reveal that unbiased belief updating in depression is mediated by strong neural coding of estimation errors in response to both good news (in left inferior frontal gyrus and bilateral superior frontal gyrus) and bad news (in right inferior parietal lobule and right inferior frontal gyrus) regarding the future. In contrast, intact mental health was linked to a relatively attenuated neural coding of bad news about the future. These findings identify a neural substrate mediating the breakdown of biased updating in Major Depressive Disorder, which may be essential for mental health.

  4. About mutually unbiased bases in even and odd prime power dimensions

    Science.gov (United States)

    Durt, Thomas

    2005-06-01

    Mutually unbiased bases generalize the X, Y and Z qubit bases. They possess numerous applications in quantum information science. It is well known that in prime power dimensions N = pm (with p prime and m a positive integer), there exists a maximal set of N + 1 mutually unbiased bases. In the present paper, we derive an explicit expression for those bases, in terms of the (operations of the) associated finite field (Galois division ring) of N elements. This expression is shown to be equivalent to the expressions previously obtained by Ivanovic (1981 J. Phys. A: Math. Gen. 14 3241) in odd prime dimensions, and Wootters and Fields (1989 Ann. Phys. 191 363) in odd prime power dimensions. In even prime power dimensions, we derive a new explicit expression for the mutually unbiased bases. The new ingredients of our approach are, basically, the following: we provide a simple expression of the generalized Pauli group in terms of the additive characters of the field, and we derive an exact groupal composition law between the elements of the commuting subsets of the generalized Pauli group, renormalized by a well-chosen phase-factor.
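    For odd prime dimensions, the Ivanovic-type construction referenced in this abstract can be written down directly: basis j has vectors with entries ω^(j·l² + k·l)/√p, alongside the computational basis. A brief numerical sketch (the dimension p = 5 is an arbitrary choice) that builds the p + 1 bases and verifies mutual unbiasedness:

```python
import numpy as np

p = 5                                   # any odd prime works here
omega = np.exp(2j * np.pi / p)

bases = [np.eye(p, dtype=complex)]      # computational basis
for j in range(p):
    # column k of B is the k-th vector of basis j
    B = np.array([[omega ** ((j * l * l + k * l) % p) for k in range(p)]
                  for l in range(p)]) / np.sqrt(p)
    bases.append(B)

# orthonormal within each basis; |<a|b>|^2 = 1/p across different bases
for i, Bi in enumerate(bases):
    assert np.allclose(Bi.conj().T @ Bi, np.eye(p))
    for Bj in bases[i + 1:]:
        assert np.allclose(np.abs(Bi.conj().T @ Bj) ** 2, 1.0 / p)
```

    The cross-basis condition follows from the magnitude √p of the quadratic Gauss sum, which is why the construction is specific to odd primes; the even prime power case requires the Galois-field machinery the paper develops.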

  5. About mutually unbiased bases in even and odd prime power dimensions

    International Nuclear Information System (INIS)

    Durt, Thomas

    2005-01-01

    Mutually unbiased bases generalize the X, Y and Z qubit bases. They possess numerous applications in quantum information science. It is well known that in prime power dimensions N = pm (with p prime and m a positive integer), there exists a maximal set of N + 1 mutually unbiased bases. In the present paper, we derive an explicit expression for those bases, in terms of the (operations of the) associated finite field (Galois division ring) of N elements. This expression is shown to be equivalent to the expressions previously obtained by Ivanovic (1981 J. Phys. A: Math. Gen. 14 3241) in odd prime dimensions, and Wootters and Fields (1989 Ann. Phys. 191 363) in odd prime power dimensions. In even prime power dimensions, we derive a new explicit expression for the mutually unbiased bases. The new ingredients of our approach are, basically, the following: we provide a simple expression of the generalized Pauli group in terms of the additive characters of the field, and we derive an exact groupal composition law between the elements of the commuting subsets of the generalized Pauli group, renormalized by a well-chosen phase-factor.

  6. Building unbiased estimators from non-Gaussian likelihoods with application to shear estimation

    International Nuclear Information System (INIS)

    Madhavacheril, Mathew S.; Sehgal, Neelima; McDonald, Patrick; Slosar, Anže

    2015-01-01

    We develop a general framework for generating estimators of a given quantity which are unbiased to a given order in the difference between the true value of the underlying quantity and the fiducial position in theory space around which we expand the likelihood. We apply this formalism to rederive the optimal quadratic estimator and show how the replacement of the second derivative matrix with the Fisher matrix is a generic way of creating an unbiased estimator (assuming choice of the fiducial model is independent of data). Next we apply the approach to estimation of shear lensing, closely following the work of Bernstein and Armstrong (2014). Our first order estimator reduces to their estimator in the limit of zero shear, but it also naturally allows for the case of non-constant shear and the easy calculation of correlation functions or power spectra using standard methods. Both our first-order estimator and Bernstein and Armstrong's estimator exhibit a bias which is quadratic in true shear. Our third-order estimator is, at least in the realm of the toy problem of Bernstein and Armstrong, unbiased to 0.1% in relative shear errors Δg/g for shears up to |g|=0.2.
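    The first-order construction can be sketched in a one-parameter toy model that is far simpler than the shear problem (an exponential rate estimated around a fiducial value; all numbers are invented): normalizing the score by the Fisher information at the fiducial yields an estimator that is unbiased to first order in the offset, with a residual bias quadratic in it, mirroring the quadratic bias in true shear noted above.

```python
import numpy as np

rng = np.random.default_rng(1)
lam_true, lam_fid, n = 2.2, 2.0, 200_000

x = rng.exponential(1.0 / lam_true, size=n)   # Exp(rate = lam_true)
score = 1.0 / lam_fid - x                     # d lnL / d lam at the fiducial
fisher = 1.0 / lam_fid ** 2                   # E[-d^2 lnL / d lam^2]
est = lam_fid + score / fisher                # first-order estimator, per sample

bias = est.mean() - lam_true
# analytically, bias -> -(lam_true - lam_fid)^2 / lam_fid = -0.02 as n grows:
# no term linear in the offset survives, only the quadratic remainder
```
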

  7. airGR: an R-package suitable for large sample hydrology presenting a suite of lumped hydrological models

    Science.gov (United States)

    Thirel, G.; Delaigue, O.; Coron, L.; Perrin, C.; Andreassian, V.

    2016-12-01

    large sample hydrology experiments.

  8. Overweight and Obesity: Prevalence and Correlates in a Large Clinical Sample of Children with Autism Spectrum Disorder

    Science.gov (United States)

    Zuckerman, Katharine E.; Hill, Alison P.; Guion, Kimberly; Voltolina, Lisa; Fombonne, Eric

    2014-01-01

    Autism Spectrum Disorders (ASDs) and childhood obesity (OBY) are rising public health concerns. This study aimed to evaluate the prevalence of overweight (OWT) and OBY in a sample of 376 Oregon children with ASD, and to assess correlates of OWT and OBY in this sample. We used descriptive statistics, bivariate, and focused multivariate analyses to…

  9. Neutron multicounter detector for investigation of content and spatial distribution of fission materials in large volume samples

    International Nuclear Information System (INIS)

    Swiderska-Kowalczyk, M.; Starosta, W.; Zoltowski, T.

    1998-01-01

    The experimental device is a neutron coincidence well counter. It can be applied for passive assay of fissile materials, especially plutonium-bearing ones. It consists of a set of 3He tubes placed inside a polyethylene moderator; outputs from the tubes, first processed by preamplifier/amplifier/discriminator circuits, are then analysed using a neutron correlator connected to a PC, and correlation techniques implemented in software. Such a neutron counter allows for determination of plutonium mass (240Pu effective mass) in nonmultiplying samples of fairly large volume (up to 0.14 m3). For determination of the neutron source distribution inside the sample, heuristic methods based on hierarchical cluster analysis are applied. As input parameters, the amplitudes and phases of the two-dimensional Fourier transformation of the count profile matrices, for known point source distributions and for the examined samples, are taken. Such matrices are collected by scanning the sample with the detection head. During the clustering process, count profiles for unknown samples are fitted into dendrograms using the criterion of 'proximity' of the examined sample profile to the standard sample profiles. The distribution of neutron sources in an examined sample is then evaluated on the basis of comparison with standard source distributions. (author)
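    The heuristic classification step can be sketched as follows, with invented count-profile matrices standing in for real detector scans (the profile names, grid size and count rates are all hypothetical): low-order 2-D Fourier amplitudes and phases serve as features for hierarchical clustering, and the unknown sample is assigned to the cluster of the most similar reference distribution.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
profiles = {
    "point_center": rng.poisson(20, size=(8, 8)).astype(float),
    "point_edge":   rng.poisson(12, size=(8, 8)).astype(float),
    "uniform":      rng.poisson(16, size=(8, 8)).astype(float),
    "unknown":      rng.poisson(15, size=(8, 8)).astype(float),
}

def fft_features(m):
    # amplitudes and phases of the low-order 2-D Fourier components
    f = np.fft.fft2(m)[:3, :3].ravel()
    return np.concatenate([np.abs(f), np.angle(f)])

names = list(profiles)
X = np.array([fft_features(profiles[n]) for n in names])
Z = linkage(X, method="average")              # hierarchical (agglomerative)
labels = fcluster(Z, t=2, criterion="maxclust")
# the "unknown" row receives the label of its nearest reference cluster
```
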

  10. Fluidic sampling

    International Nuclear Information System (INIS)

    Houck, E.D.

    1992-01-01

    This paper covers the development of the fluidic sampler and its testing in a fluidic transfer system. The major findings of this paper are as follows. Fluidic jet samplers can dependably produce unbiased samples of acceptable volume. The fluidic transfer system with a fluidic sampler in-line will transfer water to a net lift of 37.2--39.9 feet at an average rate of 0.02--0.05 gpm (77--192 cc/min). The fluidic sample system circulation rate compares very favorably with the normal 0.016--0.026 gpm (60--100 cc/min) circulation rate that is commonly produced for this lift and solution with the jet-assisted airlift sample system normally used at ICPP. The volume of the sample taken with a fluidic sampler depends on the motive pressure to the fluidic sampler, the sample bottle size and the fluidic sampler jet characteristics. The fluidic sampler should be supplied with fluid at a motive pressure of 140--150 percent of the peak vacuum-producing motive pressure for the jet in the sampler. Fluidic transfer systems should be operated by emptying a full pumping chamber to nearly empty or empty during the pumping cycle; this maximizes the solution transfer rate.

  11. Arctic-HYCOS: a Large Sample observing system for estimating freshwater fluxes in the drainage basin of the Arctic Ocean

    Science.gov (United States)

    Pietroniro, Al; Korhonen, Johanna; Looser, Ulrich; Hardardóttir, Jórunn; Johnsrud, Morten; Vuglinsky, Valery; Gustafsson, David; Lins, Harry F.; Conaway, Jeffrey S.; Lammers, Richard; Stewart, Bruce; Abrate, Tommaso; Pilon, Paul; Sighomnou, Daniel; Arheimer, Berit

    2015-04-01

    The Arctic region is an important regulating component of the global climate system and has experienced considerable change during recent decades. More than 10% of the world's river-runoff flows to the Arctic Ocean, and there is evidence of changes in its fresh-water balance. However, about 30% of the Arctic basin is still ungauged, with differing monitoring practices and data availability among the countries in the region. A consistent system for monitoring and sharing of hydrological information throughout the Arctic region is thus of highest interest for further studies and monitoring of the freshwater flux to the Arctic Ocean. The purpose of the Arctic-HYCOS project is to allow for collection and sharing of hydrological data. A preliminary set of 616 stations was identified with long-term daily discharge data available, and around 250 of these already provide data online in near real time. This large sample will be used in the following scientific analyses: 1) to evaluate freshwater flux to the Arctic Ocean and Seas, 2) to monitor changes and enhance understanding of the hydrological regime and 3) to estimate flows in ungauged regions and develop models for enhanced hydrological prediction in the Arctic region. The project is intended as a component of the WMO (World Meteorological Organization) WHYCOS (World Hydrological Cycle Observing System) initiative, covering the area of the expansive transnational Arctic basin with participation from Canada, Denmark, Finland, Iceland, Norway, the Russian Federation, Sweden and the United States of America. The overall objective is to regularly collect, manage and share high-quality data from a defined basic network of hydrological stations in the Arctic basin. The project focuses on collecting data on discharge and possibly sediment transport and temperature. Data should be provisional in near-real time if available, whereas time-series of historical data should be provided once quality assurance has been completed.

  12. Application of k0-based internal monostandard NAA for large sample analysis of clay pottery. As a part of inter comparison exercise

    International Nuclear Information System (INIS)

    Acharya, R.; Dasari, K.B.; Pujari, P.K.; Swain, K.K.; Shinde, A.D.; Reddy, A.V.R.

    2014-01-01

    As a part of an inter-comparison exercise of an IAEA Coordinated Research Project on large sample neutron activation analysis, a large-size, nonstandard-geometry pottery replica (obtained from Peru) was analyzed by k0-based internal monostandard neutron activation analysis (IM-NAA). Two large sub samples (0.40 and 0.25 kg) were irradiated at the graphite reflector position of the AHWR Critical Facility at BARC, Trombay, Mumbai, India. Small samples (100-200 mg) were also analyzed by IM-NAA for comparison purposes. Radioactive assay was carried out using a 40% relative efficiency HPGe detector. To examine the homogeneity of the sample, counting was also carried out using an X-Z rotary scanning unit. The in situ relative detection efficiency was evaluated using gamma rays of the activation products in the irradiated sample in the energy range of 122-2,754 keV. Elemental concentration ratios with respect to Na of small (100 mg) as well as large (15 and 400 g) samples were used to check the homogeneity of the samples. Concentration ratios of 18 elements, namely K, Sc, Cr, Mn, Fe, Co, Zn, As, Rb, Cs, La, Ce, Sm, Eu, Yb, Lu, Hf and Th, with respect to Na (the internal monostandard) were calculated using IM-NAA. Absolute concentrations were arrived at for both large and small samples using the Na concentration obtained from the relative method of NAA. The percentage combined uncertainties at the ±1 s confidence limit on the determined values were in the range of 3-9%. Two IAEA reference materials, SL-1 and SL-3, were analyzed by IM-NAA to evaluate the accuracy of the method. (author)
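    The internal-monostandard bookkeeping can be sketched schematically (this is not the paper's exact expressions, and every number below is invented): each element's gamma peak area is normalized by its in-situ relative detection efficiency at the gamma energy, its k0 factor, and a saturation/decay timing factor, and then ratioed against the same quantity for the monostandard (Na); an absolute concentration follows by multiplying with the Na concentration from a relative method.

```python
def conc_ratio(peak_x, eff_x, k0_x, timing_x,
               peak_m, eff_m, k0_m, timing_m):
    """Schematic c_x / c_m: efficiency-, k0- and timing-normalized
    peak area of element x over that of the monostandard m."""
    norm_x = peak_x / (eff_x * k0_x * timing_x)
    norm_m = peak_m / (eff_m * k0_m * timing_m)
    return norm_x / norm_m

# hypothetical element-to-Na ratio (all inputs invented for illustration)
ratio = conc_ratio(1.2e4, 0.80, 1.2e-2, 0.90,
                   5.0e5, 1.00, 4.7e-2, 0.95)
c_na = 8.5e3          # mg/kg, invented Na concentration
c_x = ratio * c_na    # schematic absolute concentration of element x
```

    The key practical point matches the abstract: because everything is expressed relative to an internal monostandard measured in the same geometry, the absolute detector efficiency and sample-size effects largely cancel.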

  13. Algorithm for computing significance levels using the Kolmogorov-Smirnov statistic and valid for both large and small samples

    Energy Technology Data Exchange (ETDEWEB)

    Kurtz, S.E.; Fields, D.E.

    1983-10-01

    The KSTEST code presented here is designed to perform the Kolmogorov-Smirnov one-sample test. The code may be used as a stand-alone program or the principal subroutines may be excerpted and used to service other programs. The Kolmogorov-Smirnov one-sample test is a nonparametric goodness-of-fit test. A number of codes to perform this test are in existence, but they suffer from the inability to provide meaningful results in the case of small sample sizes (number of values less than or equal to 80). The KSTEST code overcomes this inadequacy by using two distinct algorithms. If the sample size is greater than 80, an asymptotic series developed by Smirnov is evaluated. If the sample size is 80 or less, a table of values generated by Birnbaum is referenced. Valid results can be obtained from KSTEST when the sample contains from 3 to 300 data points. The program was developed on a Digital Equipment Corporation PDP-10 computer using the FORTRAN-10 language. The code size is approximately 450 card images and the typical CPU execution time is 0.19 s.
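    The large-sample branch of the two-regime design is easy to reproduce. Below is a sketch of the one-sample Kolmogorov-Smirnov statistic together with Smirnov's asymptotic series (appropriate, as the abstract notes, only for larger samples; a small-sample implementation would instead consult Birnbaum's tabulated values):

```python
import math

def ks_statistic(sample, cdf):
    """One-sample Kolmogorov-Smirnov statistic D_n = sup |F_n(x) - F(x)|."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        # empirical CDF jumps from i/n to (i+1)/n at x
        d = max(d, f - i / n, (i + 1) / n - f)
    return d

def smirnov_pvalue(d, n, terms=100):
    """Asymptotic significance level via Smirnov's series:
    P(D_n > d) ~ 2 * sum_{k>=1} (-1)^(k-1) * exp(-2 k^2 n d^2)."""
    s = sum((-1) ** (k - 1) * math.exp(-2.0 * (k ** 2) * n * d * d)
            for k in range(1, terms + 1))
    return max(0.0, min(1.0, 2.0 * s))

# a grid sample placed "perfectly" under U(0,1): tiny D, p-value near 1
sample = [(i + 0.5) / 100 for i in range(100)]
d = ks_statistic(sample, lambda x: x)
p = smirnov_pvalue(d, len(sample))
```
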

  14. Remote sensing data with the conditional latin hypercube sampling and geostatistical approach to delineate landscape changes induced by large chronological physical disturbances.

    Science.gov (United States)

    Lin, Yu-Pin; Chu, Hone-Jay; Wang, Cheng-Long; Yu, Hsiao-Hsuan; Wang, Yung-Chieh

    2009-01-01

    This study applies variogram analyses of normalized difference vegetation index (NDVI) images derived from SPOT HRV images obtained before and after the Chi-Chi earthquake in the Chenyulan watershed, Taiwan, as well as images after four large typhoons, to delineate the spatial patterns, spatial structures and spatial variability of landscapes caused by these large disturbances. The conditional Latin hypercube sampling approach was applied to select samples from multiple NDVI images. Kriging and sequential Gaussian simulation with sufficient samples were then used to generate maps of the NDVI images. The variogram analysis of the NDVI images demonstrates that the spatial patterns of disturbed landscapes were successfully delineated in the study areas. The high-magnitude Chi-Chi earthquake created spatial landscape variations in the study area. After the earthquake, the cumulative impacts of typhoons on landscape patterns depended on the magnitudes and paths of the typhoons, but were not always evident in the spatiotemporal variability of landscapes in the study area. The statistics and spatial structures of multiple NDVI images were captured by 3,000 samples from the 62,500 grid cells of the NDVI images. Kriging and sequential Gaussian simulation with the 3,000 samples effectively reproduced the spatial patterns of the NDVI images. Overall, the proposed approach, which integrates conditional Latin hypercube sampling, variograms, kriging and sequential Gaussian simulation on remotely sensed images, efficiently monitors, samples and maps the effects of large chronological disturbances on the spatial characteristics of landscape changes, including spatial variability and heterogeneity.
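    The empirical variogram underlying such analyses is straightforward to compute on a gridded image. A minimal isotropic sketch (white-noise stand-in data and invented sizes, not the SPOT imagery): γ(h) = 0.5 · mean[(z(s) − z(s+h))²] over all pixel pairs separated by lag h along the grid axes.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=(50, 50))          # stand-in for an NDVI image

def empirical_variogram(z, max_lag):
    lags = np.arange(1, max_lag + 1)
    gamma = np.empty(len(lags))
    for i, h in enumerate(lags):
        # pixel pairs offset by h along rows and along columns
        dr = z[h:, :] - z[:-h, :]
        dc = z[:, h:] - z[:, :-h]
        diffs = np.concatenate([dr.ravel(), dc.ravel()])
        gamma[i] = 0.5 * np.mean(diffs ** 2)
    return lags, gamma

lags, gamma = empirical_variogram(z, max_lag=10)
# spatially uncorrelated data give a flat variogram near the field
# variance (~1 here); disturbance-induced structure would instead show
# a rising curve toward a sill
```
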

  15. Large sample NAA of a pottery replica utilizing thermal neutron flux at AHWR critical facility and X-Z rotary scanning unit

    International Nuclear Information System (INIS)

    Acharya, R.; Dasari, K.B.; Pujari, P.K.; Swain, K.K.; Shinde, A.D.; Reddy, A.V.R.

    2013-01-01

    Large sample neutron activation analysis (LSNAA) of a clay pottery replica from Peru was carried out using the low-neutron-flux graphite reflector position of the Advanced Heavy Water Reactor (AHWR) critical facility. This work was taken up as part of an inter-comparison exercise under the IAEA CRP on LSNAA of archaeological objects. The irradiated large-size sample, placed on an X-Z rotary scanning unit, was assayed using a 40% relative efficiency HPGe detector. The k0-based internal monostandard NAA (IM-NAA), in conjunction with in situ relative detection efficiency, was used to calculate concentration ratios of 12 elements with respect to Na. Analyses of both small and large size samples were carried out to check homogeneity and to arrive at absolute concentrations. (author)

  16. Large-scale prospective T cell function assays in shipped, unfrozen blood samples: experiences from the multicenter TRIGR trial.

    Science.gov (United States)

    Hadley, David; Cheung, Roy K; Becker, Dorothy J; Girgis, Rose; Palmer, Jerry P; Cuthbertson, David; Krischer, Jeffrey P; Dosch, Hans-Michael

    2014-02-01

    Broad consensus assigns T lymphocytes fundamental roles in inflammatory, infectious, and autoimmune diseases. However, clinical investigations have lacked fully characterized and validated procedures, equivalent to those of widely practiced biochemical tests with established clinical roles, for measuring core T cell functions. The Trial to Reduce Insulin-dependent diabetes mellitus in the Genetically at Risk (TRIGR) type 1 diabetes prevention trial used consecutive measurements of T cell proliferative responses in prospectively collected fresh heparinized blood samples shipped by courier within North America. In this article, we report on the quality control implications of this simple and pragmatic shipping practice and the interpretation of positive- and negative-control analytes in our assay. We used polyclonal and postvaccination responses in 4,919 samples to analyze the development of T cell immunocompetence. We have found that the vast majority of the samples were viable up to 3 days from the blood draw, yet meaningful responses were found in a proportion of those with longer travel times. Furthermore, the shipping time of uncooled samples significantly decreased both the viabilities of the samples and the unstimulated cell counts in the viable samples. Also, subject age was significantly associated with the number of unstimulated cells and T cell proliferation to positive activators. Finally, we observed a pattern of statistically significant increases in T cell responses to tetanus toxin around the timing of infant vaccinations. This assay platform and shipping protocol satisfy the criteria for robust and reproducible long-term measurements of human T cell function, comparable to those of established blood biochemical tests. We present a stable technology for prospective disease-relevant T cell analysis in immunological diseases, vaccination medicine, and measurement of herd immunity.

  17. Examination of Sex Differences in a Large Sample of Young Children with Autism Spectrum Disorder and Typical Development

    Science.gov (United States)

    Reinhardt, Vanessa P.; Wetherby, Amy M.; Schatschneider, Christopher; Lord, Catherine

    2015-01-01

    Despite consistent and substantive research documenting a large male to female ratio in Autism Spectrum Disorder (ASD), only a modest body of research exists examining sex differences in characteristics. This study examined sex differences in developmental functioning and early social communication in children with ASD as compared to children with…

  18. Automatic Trip Detection with the Dutch Mobile Mobility Panel: Towards Reliable Multiple-Week Trip Registration for Large Samples

    NARCIS (Netherlands)

    Thomas, Tom; Geurs, Karst T.; Koolwaaij, Johan; Bijlsma, Marcel E.

    2018-01-01

    This paper examines the accuracy of trip and mode choice detection of the last wave of the Dutch Mobile Mobility Panel, a large-scale three-year, smartphone-based travel survey. Departure and arrival times, origins, destinations, modes, and travel purposes were recorded during a four week period in

  19. An Unbiased Unscented Transform Based Kalman Filter for 3D Radar

    Institute of Scientific and Technical Information of China (English)

    WANG Guohong; XIU Jianjuan; HE You

    2004-01-01

    As a derivative-free alternative to the Extended Kalman filter (EKF) in the framework of state estimation, the Unscented Kalman filter (UKF) has potential applications in nonlinear filtering. By noting the fact that the unscented transform is generally biased when converting the radar measurements from spherical coordinates into Cartesian coordinates, a new filtering algorithm for 3D radar, called Unbiased unscented Kalman filter (UUKF), is proposed. The new algorithm is validated by Monte Carlo simulation runs. Simulation results show that the UUKF is more effective than the UKF, EKF and the Converted measurement Kalman filter (CMKF).
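The bias this record refers to arises whenever noisy angular measurements are converted to Cartesian coordinates: for Gaussian angle noise, E[cos(θ + w)] = cos(θ)·exp(-σ²/2), so the naive conversion is shrunk toward the origin. A minimal 2-D Monte Carlo sketch of this effect and of the classical multiplicative debiasing factor (a simplification, not the authors' 3-D UUKF algorithm; all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
r, az, sig = 1000.0, np.pi / 4, 0.05        # true range, azimuth, angle noise (rad)

# naive polar-to-Cartesian conversion of noisy azimuth measurements is biased
az_meas = az + sig * rng.normal(size=200_000)
x_naive = r * np.cos(az_meas)

# dividing by lambda = exp(-sigma^2 / 2) removes the multiplicative bias
lam = np.exp(-sig**2 / 2)
x_debiased = x_naive / lam

err_naive = abs(x_naive.mean() - r * np.cos(az))
err_debiased = abs(x_debiased.mean() - r * np.cos(az))
```

At 1000 m range the naive conversion is off by nearly a metre on average even with modest angle noise, which is why unbiased conversion (or an unbiased unscented transform, as in the record) matters for tracking filters.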

  20. Towards an unbiased, full-sky clustering search with IceCube in real time

    Energy Technology Data Exchange (ETDEWEB)

    Bernardini, Elisa; Franckowiak, Anna; Kintscher, Thomas; Kowalski, Marek; Stasik, Alexander [DESY, Zeuthen (Germany); Collaboration: IceCube-Collaboration

    2016-07-01

    The IceCube neutrino observatory is a 1 km³ detector for Cherenkov light in the ice at the South Pole. Having observed the presence of a diffuse astrophysical neutrino flux, static point-source searches have so far come up empty-handed. Thus, transient and variable objects emerge as promising, detectable source candidates. An unbiased, full-sky clustering search, run in real time, can find neutrino events with close temporal and spatial proximity. The most significant of these clusters serve as alerts to third-party observatories in order to obtain a complete picture of cosmic accelerators. The talk showcases the status and prospects of this project.

  1. Ambient air sampling for radioactive air contaminants at Los Alamos National Laboratory: A large research and development facility

    International Nuclear Information System (INIS)

    Eberhart, C.F.

    1998-01-01

    This paper describes the ambient air sampling program for collection, analysis, and reporting of radioactive air contaminants in and around Los Alamos National Laboratory (LANL). Particulate matter and water vapor are sampled continuously at more than 50 sites. These samples are collected every two weeks and then analyzed for tritium, gross alpha, gross beta, and gamma-ray radiation. The alpha, beta, and gamma measurements are used to detect unexpected radionuclide releases. Quarterly composites are analyzed for isotopes of uranium (234U, 235U, 238U), plutonium (238Pu, 239/240Pu), and americium (241Am). All of the data are stored in a relational database with hard copies as the official records. Data used to determine environmental concentrations are validated and verified before being used in any calculations. This evaluation demonstrates that the sampling and analysis process can detect tritium, uranium, plutonium, and americium at levels much less than one percent of the public dose limit of 10 millirems. The isotopic results also indicate that, except for tritium, off-site concentrations of radionuclides potentially released from LANL are similar to typical background measurements

  2. No significant association of the 5' end of neuregulin 1 and schizophrenia in a large Danish sample

    DEFF Research Database (Denmark)

    Ingason, Andrés; Søeby, Karen; Timm, Sally

    2006-01-01

    schizophrenia patients. We found that the at-risk haplotype initially reported in the Icelandic population was not found in significant excess (OR = 1.4, p = 0.12). The haplotype structure in the Danish sample was similar to that reported in other Caucasian populations and highly different from...

  3. Self-Esteem Development across the Life Span: A Longitudinal Study with a Large Sample from Germany

    Science.gov (United States)

    Orth, Ulrich; Maes, Jürgen; Schmitt, Manfred

    2015-01-01

    The authors examined the development of self-esteem across the life span. Data came from a German longitudinal study with 3 assessments across 4 years of a sample of 2,509 individuals ages 14 to 89 years. The self-esteem measure used showed strong measurement invariance across assessments and birth cohorts. Latent growth curve analyses indicated…

  4. Determination of environmental levels of 239,240Pu, 241Am, 137Cs, and 90Sr in large volume sea water samples

    International Nuclear Information System (INIS)

    Sutton, D.C.; Calderon, G.; Rosa, W.

    1976-06-01

    A method is reported for the determination of environmental levels of 239,240Pu and 241Am in approximately 60-liter samples of seawater. 137Cs and 90Sr were also separated and determined from the same samples. The samples were collected at the sea surface and at various depths in the oceans through the facilities of the Woods Hole Oceanographic Institution. Plutonium and americium were separated from the seawater by iron hydroxide scavenging, then treated with a mixture of nitric, hydrochloric, and perchloric acids. A series of anion exchange separations were used to remove interferences and purify plutonium and americium; each was then electroplated on platinum disks and measured by solid-state alpha particle spectrometry. The overall chemical yields averaged 62 ± 9 and 69 ± 14 percent for the 236Pu and 243Am tracers, respectively. Following the iron hydroxide scavenge of the transuranics, cesium was removed from the acidified seawater matrix by adsorption onto ammonium phosphomolybdate. Cesium carrier and 137Cs isolation was effected by ion exchange, and precipitations were made using chloroplatinic acid. The samples were weighed to determine overall chemical yield, then beta counted. Cesium recoveries averaged 75 ± 5 percent. After cesium was removed from the seawater matrix, the samples were neutralized with sodium hydroxide and ammonium carbonate was added to precipitate the 85Sr tracer and the mixed alkaline earth carbonates. Strontium was separated as the nitrate and scavenged by chromate and hydroxide precipitations. Yttrium-90 was allowed to build up for two weeks, then milked and precipitated as the oxalate, weighed, and beta counted. The overall chemical yields of the 85Sr tracer averaged 84 ± 16 percent. The recovery of the yttrium oxalate precipitates averaged 96 ± 3 percent

  5. Empirical Bayes Estimation of Semi-parametric Hierarchical Mixture Models for Unbiased Characterization of Polygenic Disease Architectures

    Directory of Open Access Journals (Sweden)

    Jo Nishino

    2018-04-01

    Genome-wide association studies (GWAS) suggest that the genetic architecture of complex diseases consists of unexpectedly numerous variants with small effect sizes. However, the polygenic architectures of many diseases have not been well characterized due to the lack of simple and fast methods for unbiased estimation of the underlying proportion of disease-associated variants and their effect-size distribution. Applying empirical Bayes estimation of semi-parametric hierarchical mixture models to GWAS summary statistics, we confirmed that schizophrenia was extremely polygenic [~40% of independent genome-wide SNPs are risk variants, most with odds ratio (OR) below 1.03], whereas rheumatoid arthritis was less polygenic (~4 to 8% risk variants, a significant portion reaching OR = 1.05 to 1.1). For rheumatoid arthritis, stratified estimations revealed that expression quantitative trait loci in blood explained a large share of the genetic variance, and that low- and high-frequency derived alleles were prone to be risk and protective, respectively, suggesting a predominance of deleterious-risk and advantageous-protective mutations. Despite genetic correlation, effect-size distributions for schizophrenia and bipolar disorder differed across allele frequency. These analyses distinguished disease polygenic architectures and provided clues to etiological differences in complex diseases.
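The hierarchical-mixture idea can be illustrated in drastically simplified form with a two-group normal mixture on GWAS z-scores, z ~ (1 - π)·N(0, 1) + π·N(0, 1 + τ²), fitted by EM. This is a sketch of the general approach only, not the authors' semi-parametric method; the function names and simulated data are illustrative.

```python
import numpy as np

def normal_pdf(z, var):
    return np.exp(-z**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def em_two_groups(z, n_iter=500):
    """EM for z ~ (1 - pi) N(0, 1) + pi N(0, 1 + tau2): estimates the
    proportion pi of associated variants and their effect-size variance tau2."""
    pi, tau2 = 0.1, 1.0
    for _ in range(n_iter):
        f0 = normal_pdf(z, 1.0)
        f1 = normal_pdf(z, 1.0 + tau2)
        w = pi * f1 / (pi * f1 + (1 - pi) * f0)            # P(associated | z)
        pi = w.mean()                                       # M-step: mixing weight
        tau2 = max((w * z**2).sum() / w.sum() - 1.0, 1e-6)  # M-step: variance
    return pi, tau2

# simulated summary statistics: 30% associated SNPs with tau2 = 4
rng = np.random.default_rng(2)
assoc = rng.uniform(size=100_000) < 0.3
z = rng.normal(size=100_000) * np.where(assoc, np.sqrt(5.0), 1.0)
pi_hat, tau2_hat = em_two_groups(z)
```

The posterior weights w are what make the estimate "unbiased" in spirit: no significance threshold is applied, so weakly associated variants contribute fractionally instead of being discarded.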

  6. Conditional sampling technique to test the applicability of the Taylor hypothesis for the large-scale coherent structures

    Science.gov (United States)

    Hussain, A. K. M. F.

    1980-01-01

    Comparisons of the distributions of large-scale structures in turbulent flow with distributions based on time-dependent signals from stationary probes and the Taylor hypothesis are presented. The study investigated the near field of a 7.62 cm circular air jet at a Reynolds number of 32,000, in which coherent structures were induced through small-amplitude controlled excitation and stable vortex pairing in the jet column mode. Hot-wire and X-wire anemometry were employed to establish phase-averaged spatial distributions of longitudinal and lateral velocities, coherent Reynolds stress and vorticity, background turbulent intensities, streamlines and pseudo-stream functions. The Taylor hypothesis was used to calculate spatial distributions of the phase-averaged properties; the results indicate that using the local time-average velocity or streamwise velocity produces large distortions.

  7. Practical recipes for the model order reduction, dynamical simulation, and compressive sampling of large-scale open quantum systems

    OpenAIRE

    Sidles, John A.; Garbini, Joseph L.; Harrell, Lee E.; Hero, Alfred O.; Jacky, Jonathan P.; Malcomb, Joseph R.; Norman, Anthony G.; Williamson, Austin M.

    2008-01-01

    This article presents numerical recipes for simulating high-temperature and non-equilibrium quantum spin systems that are continuously measured and controlled. The notion of a spin system is broadly conceived, in order to encompass macroscopic test masses as the limiting case of large-j spins. The simulation technique has three stages: first the deliberate introduction of noise into the simulation, then the conversion of that noise into an equivalent continuous measurement and control process...

  8. HIV Risk Behaviors in the U.S. Transgender Population: Prevalence and Predictors in a Large Internet Sample

    Science.gov (United States)

    Feldman, Jamie; Romine, Rebecca Swinburne; Bockting, Walter O.

    2014-01-01

    To study the influence of gender on HIV risk, a sample of the U.S. transgender population (N = 1,229) was recruited via the Internet. HIV risk and prevalence were lower than reported in prior studies of localized, urban samples, but higher than the overall U.S. population. Findings suggest that gender nonconformity alone does not itself result in markedly higher HIV risk. Sex with nontransgender men emerged as the strongest independent predictor of unsafe sex for both male-to-female (MtF) and female-to-male (FtM) participants. These sexual relationships constitute a process that may either affirm or problematize gender identity and sexual orientation, with different emphases for MtFs and FtMs, respectively. PMID:25022491

  9. A Large-Sample Test of a Semi-Automated Clavicle Search Engine to Assist Skeletal Identification by Radiograph Comparison.

    Science.gov (United States)

    D'Alonzo, Susan S; Guyomarc'h, Pierre; Byrd, John E; Stephan, Carl N

    2017-01-01

    In 2014, a morphometric capability to search chest radiograph databases by quantified clavicle shape was published to assist skeletal identification. Here, we extend the validation tests conducted by increasing the search universe 18-fold, from 409 to 7361 individuals to determine whether there is any associated decrease in performance under these more challenging circumstances. The number of trials and analysts were also increased, respectively, from 17 to 30 skeletons, and two to four examiners. Elliptical Fourier analysis was conducted on clavicles from each skeleton by each analyst (shadowgrams trimmed from scratch in every instance) and compared to the search universe. Correctly matching individuals were found in shortlists of 10% of the sample 70% of the time. This rate is similar to, although slightly lower than, rates previously found for much smaller samples (80%). Accuracy and reliability are thereby maintained, even when the comparison system is challenged by much larger search universes. © 2016 American Academy of Forensic Sciences.
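The shape-search principle behind such a system, comparing outlines by Fourier coefficients made invariant to position, scale, rotation, and starting point, can be sketched with plain complex Fourier descriptors. The study itself uses elliptical Fourier analysis, but the matching logic is analogous; the contours and function names below are illustrative.

```python
import numpy as np

def fourier_descriptors(contour, k=10):
    """Shape signature from the first k Fourier harmonics of a closed contour:
    invariant to translation (mean removed), to rotation and starting point
    (magnitudes only), and to scale (normalised by the largest magnitude)."""
    z = contour[:, 0] + 1j * contour[:, 1]
    c = np.fft.fft(z - z.mean())
    mag = np.abs(np.r_[c[1:k + 1], c[-k:]])   # lowest positive and negative harmonics
    return mag / mag.max()

def shape_distance(a, b, k=10):
    return np.linalg.norm(fourier_descriptors(a, k) - fourier_descriptors(b, k))

th = 2 * np.pi * np.arange(256) / 256
circle = np.c_[np.cos(th), np.sin(th)]
same = 3.0 * circle + np.array([5.0, 2.0])     # scaled and shifted copy
ellipse = np.c_[2 * np.cos(th), np.sin(th)]
```

Ranking a database by `shape_distance` to a query outline yields exactly the kind of shortlist the record evaluates: the correct individual should appear within the top few percent of candidates.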

  10. Variability of Hormonal Stress Markers and Stress Responses in a Large Cross-Sectional Sample of Elephant Seals

    Science.gov (United States)

    2014-09-30

    of the hypothalamic-pituitary-adrenal (HPA) and hypothalamic-pituitary-thyroid (HPT) axes across multiple matrices. APPROACH Task 1 – Natural...performance. Hair samples will be collected from the anterior back region of seals for determination of cortisol as a measure of chronic stress...and 5 juveniles. Task 3 – TSH challenges: Thyroid hormones (thyroxine, T4, and triiodothyronine, T3) are released from the thyroid gland and are

  11. Suicidal ideation while incarcerated : prevalence and correlates in a large sample of male prisoners in Flanders, Belgium

    OpenAIRE

    Favril, Louis; Vander Laenen, Freya; Vandeviver, Christophe; Audenaert, Kurt

    2017-01-01

    Prisoners constitute a high-risk group for suicide. As an early stage in the pathway leading to suicide, suicidal ideation represents an important target for prevention, yet research on this topic is scarce in general prison populations. Using a cross-sectional survey design, correlates of suicidal ideation while incarcerated were examined in a sample of 1203 male prisoners, randomly selected from 15 Flemish prisons. Overall, a lifetime history of suicidal ideation and attempts was endorsed b...

  12. Point process models for spatio-temporal distance sampling data from a large-scale survey of blue whales

    KAUST Repository

    Yuan, Yuan; Bachl, Fabian E.; Lindgren, Finn; Borchers, David L.; Illian, Janine B.; Buckland, Stephen T.; Rue, Haavard; Gerrodette, Tim

    2017-01-01

    Distance sampling is a widely used method for estimating wildlife population abundance. The fact that conventional distance sampling methods are partly design-based constrains the spatial resolution at which animal density can be estimated using these methods. Estimates are usually obtained at survey stratum level. For an endangered species such as the blue whale, it is desirable to estimate density and abundance at a finer spatial scale than stratum. Temporal variation in the spatial structure is also important. We formulate the process generating distance sampling data as a thinned spatial point process and propose model-based inference using a spatial log-Gaussian Cox process. The method adopts a flexible stochastic partial differential equation (SPDE) approach to model spatial structure in density that is not accounted for by explanatory variables, and integrated nested Laplace approximation (INLA) for Bayesian inference. It allows simultaneous fitting of detection and density models and permits prediction of density at an arbitrarily fine scale. We estimate blue whale density in the Eastern Tropical Pacific Ocean from thirteen shipboard surveys conducted over 22 years. We find that higher blue whale density is associated with colder sea surface temperatures in space, and although there is some positive association between density and mean annual temperature, our estimates are consistent with no trend in density across years. Our analysis also indicates that there is substantial spatially structured variation in density that is not explained by available covariates.
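The "thinned spatial point process" formulation can be illustrated with a minimal simulation: animals form a Poisson point process, detection thins it with a half-normal function of perpendicular distance, and the average detection probability corrects the naive density estimate. This is a toy sketch, not the paper's SPDE/INLA machinery; all parameter values are arbitrary, and in practice the detection scale is estimated from the observed distances rather than known.

```python
import numpy as np
from math import erf, sqrt, pi

rng = np.random.default_rng(3)
lam, sigma, w, L = 50.0, 0.3, 1.0, 10.0   # density, detection scale, strip half-width, transect length

# animals: homogeneous Poisson point process on the surveyed strip
n = rng.poisson(lam * L * 2 * w)
y = rng.uniform(-w, w, n)                 # perpendicular distances from the trackline

# distance sampling = thinning: detect with half-normal probability g(d)
g = np.exp(-y**2 / (2 * sigma**2))
detected = rng.uniform(size=n) < g

# average detection probability over the strip (here computed from the known
# sigma; a real analysis fits g to the observed distances)
p_bar = sigma * sqrt(pi / 2) / w * erf(w / (sigma * sqrt(2)))
lam_hat = detected.sum() / (L * 2 * w * p_bar)
```

Replacing the constant `lam` with a log-Gaussian random field is what turns this toy model into the log-Gaussian Cox process of the record, allowing density to vary smoothly in space rather than per stratum.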

  13. Point process models for spatio-temporal distance sampling data from a large-scale survey of blue whales

    KAUST Repository

    Yuan, Yuan

    2017-12-28

    Distance sampling is a widely used method for estimating wildlife population abundance. The fact that conventional distance sampling methods are partly design-based constrains the spatial resolution at which animal density can be estimated using these methods. Estimates are usually obtained at survey stratum level. For an endangered species such as the blue whale, it is desirable to estimate density and abundance at a finer spatial scale than stratum. Temporal variation in the spatial structure is also important. We formulate the process generating distance sampling data as a thinned spatial point process and propose model-based inference using a spatial log-Gaussian Cox process. The method adopts a flexible stochastic partial differential equation (SPDE) approach to model spatial structure in density that is not accounted for by explanatory variables, and integrated nested Laplace approximation (INLA) for Bayesian inference. It allows simultaneous fitting of detection and density models and permits prediction of density at an arbitrarily fine scale. We estimate blue whale density in the Eastern Tropical Pacific Ocean from thirteen shipboard surveys conducted over 22 years. We find that higher blue whale density is associated with colder sea surface temperatures in space, and although there is some positive association between density and mean annual temperature, our estimates are consistent with no trend in density across years. Our analysis also indicates that there is substantial spatially structured variation in density that is not explained by available covariates.

  14. A large sample of Kohonen-selected SDSS quasars with weak emission lines: selection effects and statistical properties

    Science.gov (United States)

    Meusinger, H.; Balafkan, N.

    2014-08-01

    Aims: A tiny fraction of the quasar population shows remarkably weak emission lines. Several hypotheses have been developed, but the weak-line quasar (WLQ) phenomenon remains puzzling. The aim of this study was to create a sizeable sample of WLQs and WLQ-like objects and to evaluate various properties of this sample. Methods: We performed a search for WLQs in the spectroscopic data from the Sloan Digital Sky Survey Data Release 7 based on Kohonen self-organising maps for nearly 10^5 quasar spectra. The final sample consists of 365 quasars in the redshift range z = 0.6-4.2 (mean z = 1.50 ± 0.45) and includes in particular a subsample of 46 WLQs with small Mg II equivalent widths. Particular attention was paid to selection effects. Results: The WLQs have, on average, significantly higher luminosities, Eddington ratios, and accretion rates. About half of the excess comes from a selection bias, but an intrinsic excess remains, probably caused primarily by higher accretion rates. The spectral energy distribution shows a bluer continuum at rest-frame wavelengths ≳1500 Å. The variability in the optical and UV is relatively low, even taking the variability-luminosity anti-correlation into account. The percentage of radio-detected quasars and of core-dominant radio sources is significantly higher than for the control sample, whereas the mean radio-loudness is lower. Conclusions: The properties of our WLQ sample can be consistently understood assuming that it consists of a mix of quasars at the beginning of a stage of increased accretion activity and of beamed radio-quiet quasars. The higher luminosities and Eddington ratios, in combination with a bluer spectral energy distribution, can be explained by hotter continua, i.e. higher accretion rates. If quasar activity consists of subphases with different accretion rates, a change towards a higher rate is probably accompanied by an only slow development of the broad-line region. The composite WLQ spectrum can be reasonably matched by the
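A Kohonen self-organising map of the kind used for this selection can be sketched in a few lines: each map unit holds a prototype vector that is pulled toward presented samples, with a neighbourhood that shrinks over training so nearby units end up coding similar spectra. This is a minimal 1-D illustration with synthetic 2-D data, not the authors' pipeline; the clusters merely stand in for "ordinary" versus "unusual" spectra.

```python
import numpy as np

def train_som(data, n_units=16, epochs=30, lr=0.5, sigma0=4.0, seed=0):
    """Minimal 1-D Kohonen self-organising map (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    w = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    units = np.arange(n_units)
    for e in range(epochs):
        a = lr * (1.0 - e / epochs)                # decaying learning rate
        s = max(sigma0 * (1.0 - e / epochs), 0.5)  # shrinking neighbourhood
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))  # best-matching unit
            h = np.exp(-((units - bmu) ** 2) / (2 * s**2))
            w += a * h[:, None] * (x - w)
    return w

rng = np.random.default_rng(1)
data = np.r_[rng.normal(0, 0.5, (200, 2)), rng.normal(8, 0.5, (200, 2))]
proto = train_som(data)
```

After training, objects mapping to sparsely populated units are candidate outliers; in the record, WLQ candidates were drawn from such regions of the map of SDSS spectra.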

  15. Method for Determination of Neptunium in Large-Sized Urine Samples Using Manganese Dioxide Coprecipitation and 242Pu as Yield Tracer

    DEFF Research Database (Denmark)

    Qiao, Jixin; Hou, Xiaolin; Roos, Per

    2013-01-01

    A novel method for bioassay of large volumes of human urine samples using manganese dioxide coprecipitation for preconcentration was developed for rapid determination of 237Np. 242Pu was utilized as a nonisotopic tracer to monitor the chemical yield of 237Np. A sequential injection extraction chr...... and rapid analysis of neptunium contamination level for emergency preparedness....

  16. On Matrix Sampling and Imputation of Context Questionnaires with Implications for the Generation of Plausible Values in Large-Scale Assessments

    Science.gov (United States)

    Kaplan, David; Su, Dan

    2016-01-01

    This article presents findings on the consequences of matrix sampling of context questionnaires for the generation of plausible values in large-scale assessments. Three studies are conducted. Study 1 uses data from PISA 2012 to examine several different forms of missing data imputation within the chained equations framework: predictive mean…

  17. Construct validity of the Groningen Frailty Indicator established in a large sample of home-dwelling elderly persons : Evidence of stability across age and gender

    NARCIS (Netherlands)

    Peters, L. L.; Boter, H.; Burgerhof, J. G. M.; Slaets, J. P. J.; Buskens, E.

    Background: The primary objective of the present study was to evaluate the validity of the Groningen Frailty Indicator (GFI) in a sample of Dutch elderly persons participating in LifeLines, a large population-based cohort study. Additional aims were to assess differences between frail and non-frail

  18. Large Country-Lot Quality Assurance Sampling : A New Method for Rapid Monitoring and Evaluation of Health, Nutrition and Population Programs at Sub-National Levels

    OpenAIRE

    Hedt, Bethany L.; Olives, Casey; Pagano, Marcello; Valadez, Joseph J.

    2008-01-01

    Sampling theory facilitates the development of economical, effective and rapid measurement of a population. While national policy makers value survey results measuring indicators representative of a large area (a country, state or province), measurement in smaller areas produces information useful for managers at the local level. It is often not possible to disaggregate a national survey to obt...

  19. Evidence from a Large Sample on the Effects of Group Size and Decision-Making Time on Performance in a Marketing Simulation Game

    Science.gov (United States)

    Treen, Emily; Atanasova, Christina; Pitt, Leyland; Johnson, Michael

    2016-01-01

    Marketing instructors using simulation games as a way of inducing some realism into a marketing course are faced with many dilemmas. Two important quandaries are the optimal size of groups and how much of the students' time should ideally be devoted to the game. Using evidence from a very large sample of teams playing a simulation game, the study…

  20. Effect of NaOH on large-volume sample stacking of haloacetic acids in capillary zone electrophoresis with a low-pH buffer.

    Science.gov (United States)

    Tu, Chuanhong; Zhu, Lingyan; Ang, Chay Hoon; Lee, Hian Kee

    2003-06-01

    Large-volume sample stacking (LVSS) is an effective on-capillary sample concentration method in capillary zone electrophoresis, which can be applied to samples in a low-conductivity matrix. NaOH solution is commonly used to back-extract acidic compounds from organic solvent during sample pretreatment. The effect of NaOH as the sample matrix on LVSS of haloacetic acids was investigated in this study. It was found that the presence of NaOH in the sample did not compromise, but rather helped, the sample-stacking performance if a low-pH background electrolyte (BGE) was used. The sensitivity enhancement factor was higher than when the sample was dissolved in pure water or diluted BGE. Compared with conventional injection (0.4% of the capillary volume), 97-120-fold sensitivity enhancement in terms of peak height was obtained, without deterioration of the separation, with an injection amount equal to 20% of the capillary volume. This method was applied to determine haloacetic acids in tap water in combination with liquid-liquid extraction and back-extraction into NaOH solution. Limits of detection at sub-ppb levels were obtained for real samples with direct UV detection.

  1. Directed transport in a periodic tube driven by asymmetric unbiased forces coexisting with spatially modulated noises

    International Nuclear Information System (INIS)

    Li Fengguo; Ai Baoquan

    2011-01-01

    Graphical abstract: The current J as a function of the phase shift φ and ε at a = 1/2π, b = 0.5/2π, k_BT = 0.5, α = 0.1, and F_0 = 0.5. Highlights: → Unbiased forces and spatially modulated white noises affect the current. → In the adiabatic limit, the analytical expression of the directed current is obtained. → Their competition will induce current reversals. → For negative asymmetric parameters of the force, there exists an optimum parameter. → The current increases monotonically for positive asymmetric parameters. - Abstract: Transport of Brownian particles in a symmetrically periodic tube is investigated in the presence of asymmetric unbiased external forces and spatially modulated Gaussian white noises. In the adiabatic limit, we obtain the analytical expression of the directed current. It is found that the temporal asymmetry can break thermodynamic equilibrium and induce a net current. The competition between the temporal asymmetry of the force and the phase shift between the noise modulation and the tube shape induces some peculiar phenomena, for example, current reversals. The current changes with the phase shift in the form of a sine function. For negative asymmetric parameters of the force, there exists an optimum parameter at which the current takes its maximum value, whereas the current increases monotonically for positive asymmetric parameters.
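The mechanism, a zero-mean ("unbiased") but temporally asymmetric force rectifying Brownian motion, can be sketched with an overdamped Langevin simulation in a 1-D periodic potential. This is a simplification of the paper's 2-D tube geometry with noise modulation; the potential and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
kT, dt, period = 0.5, 1e-3, 1.0
F0, duty = 0.5, 0.25          # amplitude and temporal asymmetry of the drive

def drive(t):
    """Zero-mean, temporally asymmetric force: +F0/duty for a fraction `duty`
    of each period, -F0/(1 - duty) for the rest (time average is exactly 0)."""
    return F0 / duty if (t / period) % 1.0 < duty else -F0 / (1.0 - duty)

# overdamped Langevin dynamics in the periodic potential V(x) = 0.5 sin(2 pi x)
x, t = 0.0, 0.0
for _ in range(50_000):
    f = drive(t) - np.pi * np.cos(2 * np.pi * x)   # drive - dV/dx
    x += f * dt + np.sqrt(2 * kT * dt) * rng.normal()
    t += dt
# a systematic drift of x over many periods signals a net current even though
# the driving force averages to zero
```

Because the strong short kick and the weak long kick interact differently with the potential barriers, the particle's displacement does not average out, which is the ratchet effect the record analyses in the adiabatic limit.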

  2. The Herschel/HIFI unbiased spectral survey of the solar-mass protostar IRAS16293

    Science.gov (United States)

    Bottinelli, S.; Caux, E.; Cecarelli, C.; Kahane, C.

    2012-03-01

    Unbiased spectral surveys are powerful tools to study the chemistry and physics of star-forming regions, because they provide a complete census of the molecular content and the observed lines probe the physical structure of the source. While unbiased surveys at the millimeter and sub-millimeter wavelengths observable from ground-based telescopes have previously been performed towards several high-mass protostars, very little data exist on low-mass protostars, with only one such ground-based survey carried out towards this kind of object. However, since low-mass protostars are believed to resemble our own Sun's progenitor, the information provided by spectral surveys is crucial in order to uncover the birth mechanisms of low-mass stars and hence of our Sun. To help fill this gap in our understanding, we carried out an almost complete spectral survey towards the solar-type protostar IRAS16293-2422 with the HIFI instrument onboard Herschel. The observations covered a range of about 700 GHz, in which a few hundred lines were detected at more than 3σ confidence and identified. All the detected lines that were free from obvious blending effects were fitted with Gaussians to estimate their basic kinematic properties. Contrary to what is observed in the millimeter range, no lines from complex organic molecules have been observed. In this work, we characterize the different components of IRAS16293-2422 (a known binary, at least) by analyzing the numerous emission and absorption lines identified.

  3. Exposure to childhood adversity and deficits in emotion recognition: results from a large, population-based sample.

    Science.gov (United States)

    Dunn, Erin C; Crawford, Katherine M; Soare, Thomas W; Button, Katherine S; Raffeld, Miriam R; Smith, Andrew D A C; Penton-Voak, Ian S; Munafò, Marcus R

    2018-03-07

    Emotion recognition skills are essential for social communication. Deficits in these skills have been implicated in mental disorders. Prior studies of clinical and high-risk samples have consistently shown that children exposed to adversity are more likely than their unexposed peers to have emotion recognition skills deficits. However, only one population-based study has examined this association. We analyzed data from children participating in the Avon Longitudinal Study of Parents and Children, a prospective birth cohort (n = 6,506). We examined the association between eight adversities, assessed repeatedly from birth to age 8 (caregiver physical or emotional abuse; sexual or physical abuse; maternal psychopathology; one adult in the household; family instability; financial stress; parent legal problems; neighborhood disadvantage) and the ability to recognize facial displays of emotion measured using the faces subtest of the Diagnostic Assessment of Non-Verbal Accuracy (DANVA) at age 8.5 years. In addition to examining the role of exposure (vs. nonexposure) to each type of adversity, we also evaluated the role of the timing, duration, and recency of each adversity using a Least Angle Regression variable selection procedure. Over three-quarters of the sample experienced at least one adversity. We found no evidence to support an association between emotion recognition deficits and previous exposure to adversity, either in terms of total lifetime exposure, timing, duration, or recency, or when stratifying by sex. Results from the largest population-based sample suggest that even extreme forms of adversity are unrelated to emotion recognition deficits as measured by the DANVA, suggesting the possible immutability of emotion recognition in the general population. These findings emphasize the importance of population-based studies to generate generalizable results. © 2018 Association for Child and Adolescent Mental Health.

  4. Random Tagging Genotyping by Sequencing (rtGBS), an Unbiased Approach to Locate Restriction Enzyme Sites across the Target Genome.

    Directory of Open Access Journals (Sweden)

    Elena Hilario

    Full Text Available Genotyping by sequencing (GBS) is a restriction enzyme based targeted approach developed to reduce the genome complexity and discover genetic markers when a priori sequence information is unavailable. Sufficient coverage at each locus is essential to distinguish heterozygous from homozygous sites accurately. The number of GBS samples able to be pooled in one sequencing lane is limited by the number of restriction sites present in the genome and the read depth required at each site per sample for accurate calling of single-nucleotide polymorphisms. Loci bias was observed using a slight modification of the Elshire et al. method: some restriction enzyme sites were represented in higher proportions while others were poorly represented or absent. This bias could be due to the quality of genomic DNA, the endonuclease and ligase reaction efficiency, the distance between restriction sites, the preferential amplification of small library restriction fragments, or bias towards cluster formation of small amplicons during the sequencing process. To overcome these issues, we have developed a GBS method based on randomly tagging genomic DNA (rtGBS). By randomly landing on the genome, we can, with less bias, find restriction sites that are far apart, and undetected by the standard GBS (stdGBS) method. The study comprises two types of biological replicates: six different kiwifruit plants and two independent DNA extractions per plant; and three types of technical replicates: four samples of each DNA extraction, stdGBS vs. rtGBS methods, and two independent library amplifications, each sequenced in separate lanes. A statistically significant unbiased distribution of restriction fragment size by rtGBS showed that this method targeted 49% (39,145) of BamH I sites shared with the reference genome, compared to only 14% (11,513) by stdGBS.
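To make the role of restriction sites concrete, here is a minimal sketch that locates BamHI recognition sites (GGATCC) in a sequence and computes the resulting fragment lengths. The `restriction_sites` and `fragment_lengths` helpers are illustrative only, and the cut is simplified to the motif start rather than the enzyme's actual cut position; widely spaced sites yield the large fragments that stdGBS tends to under-sample:

```python
def restriction_sites(seq, motif="GGATCC"):
    """0-based start positions of each motif occurrence (BamHI site)."""
    sites, i = [], seq.find(motif)
    while i != -1:
        sites.append(i)
        i = seq.find(motif, i + 1)
    return sites

def fragment_lengths(seq, motif="GGATCC"):
    """Fragment lengths when cutting at each site start (simplified:
    the enzyme's exact cut offset within the motif is ignored)."""
    bounds = [0] + restriction_sites(seq, motif) + [len(seq)]
    return [b - a for a, b in zip(bounds, bounds[1:])]

toy = "AAAA" + "GGATCC" + "T" * 20 + "GGATCC" + "CCGG"
sites = restriction_sites(toy)        # [4, 30]
frags = fragment_lengths(toy)         # [4, 26, 10]
```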

  5. Empirical support for DSM-IV schizoaffective disorder: clinical and cognitive validators from a large patient sample.

    Science.gov (United States)

    DeRosse, Pamela; Burdick, Katherine E; Lencz, Todd; Siris, Samuel G; Malhotra, Anil K

    2013-01-01

    The diagnosis of schizoaffective disorder has long maintained an uncertain status in psychiatric nosology. Studies comparing clinical and biological features of patients with schizoaffective disorder to patients with related disorders [e.g., schizophrenia and bipolar disorder] can provide an evidence base for judging the validity of the diagnostic category. However, because most prior studies of schizoaffective disorder have only evaluated differences between groups at a static timepoint, it is unclear how these disorders may be related when the entire illness course is taken into consideration. We ascertained a large cohort [N = 993] of psychiatric patients with a range of psychotic diagnoses including schizophrenia with no history of major affective episodes [SZ-; N = 371], schizophrenia with a superimposed mood syndrome [SZ+; N = 224], schizoaffective disorder [SAD; N = 129] and bipolar I disorder with psychotic features [BPD+; N = 269]. Using cross-sectional data we designed key clinical and neurocognitive dependent measures that allowed us to test longitudinal hypotheses about the differences between these diagnostic entities. Large differences between diagnostic groups on several demographic and clinical variables were observed. Most notably, groups differed on a putative measure of cognitive decline. Specifically, the SAD group demonstrated significantly greater post-onset cognitive decline compared to the BPD+ group, with the SZ- and SZ+ groups both exhibiting levels of decline intermediate to BPD+ and SAD. These results suggest that schizoaffective disorder may possess distinct features. Contrary to earlier formulations, schizoaffective disorder may be a more severe form of illness.

  6. Systematic sampling of discrete and continuous populations: sample selection and the choice of estimator

    Science.gov (United States)

    Harry T. Valentine; David L. R. Affleck; Timothy G. Gregoire

    2009-01-01

    Systematic sampling is easy, efficient, and widely used, though it is not generally recognized that a systematic sample may be drawn from the population of interest with or without restrictions on randomization. The restrictions or the lack of them determine which estimators are unbiased, when using the sampling design as the basis for inference. We describe the...
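The point about randomization can be made concrete: with a single random start, the expansion estimator of the population total is design-unbiased over the random start, even though any one draw can miss the true total. A minimal sketch (illustrative; not the specific estimators described by the authors):

```python
import random

def systematic_sample(population, k, rng=random):
    """Draw a systematic sample: every k-th unit after a random start."""
    start = rng.randrange(k)
    return population[start::k]

def expansion_total(sample, k):
    """Expansion estimator of the population total: k times the sample
    sum. Design-unbiased over the random start."""
    return k * sum(sample)

pop = list(range(1, 101))                          # true total = 5050
one_draw = expansion_total(systematic_sample(pop, 10, random.Random(1)), 10)

# Unbiasedness: averaging the estimator over all k possible starts
# recovers the true total exactly.
all_starts = [expansion_total(pop[r::10], 10) for r in range(10)]
mean_est = sum(all_starts) / len(all_starts)       # 5050.0
```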

  7. Heterogeneous asymmetric recombinase polymerase amplification (haRPA) for rapid hygiene control of large-volume water samples.

    Science.gov (United States)

    Elsäßer, Dennis; Ho, Johannes; Niessner, Reinhard; Tiehm, Andreas; Seidel, Michael

    2018-04-01

    Hygiene of drinking water is periodically controlled by cultivation and enumeration of indicator bacteria. Rapid and comprehensive measurements of emerging pathogens are of increasing interest to improve drinking water safety. In this study, the feasibility of detecting bacteriophage PhiX174 as a potential indicator for virus contamination in large volumes of water is demonstrated. Three consecutive concentration methods (continuous ultrafiltration, monolithic adsorption filtration, and centrifugal ultrafiltration) were combined to concentrate phages stepwise from 1250 L of drinking water into 1 mL. Heterogeneous asymmetric recombinase polymerase amplification (haRPA) was applied as the rapid detection method. Field measurements were conducted to test the developed system for online hygiene monitoring under realistic conditions. We could show that this system allows the detection of artificial contamination with bacteriophage PhiX174 in drinking water pipelines. Copyright © 2018 Elsevier Inc. All rights reserved.
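The three concentration steps take 1250 L down to 1 mL, an overall volumetric concentration factor of 1.25 × 10⁶. A sketch with hypothetical per-step output volumes (only the endpoints are given in the abstract; the intermediate values below are assumptions for illustration):

```python
def concentration_factor(initial_litres, final_litres):
    """Overall volumetric concentration factor across sequential steps."""
    return initial_litres / final_litres

# Endpoints from the abstract: 1250 L of drinking water into 1 mL
overall = concentration_factor(1250.0, 0.001)      # 1.25e6

# Hypothetical output volumes after each step (illustrative only):
# continuous ultrafiltration, monolithic adsorption filtration,
# centrifugal ultrafiltration
step_volumes = [1250.0, 0.5, 0.04, 0.001]          # litres
step_factors = [a / b for a, b in zip(step_volumes, step_volumes[1:])]
```

The per-step factors multiply to the overall factor, which is why a chain of modest concentration stages can bridge six orders of magnitude.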

  8. Empirical support for DSM-IV schizoaffective disorder: clinical and cognitive validators from a large patient sample.

    Directory of Open Access Journals (Sweden)

    Pamela DeRosse

    Full Text Available The diagnosis of schizoaffective disorder has long maintained an uncertain status in psychiatric nosology. Studies comparing clinical and biological features of patients with schizoaffective disorder to patients with related disorders [e.g., schizophrenia and bipolar disorder] can provide an evidence base for judging the validity of the diagnostic category. However, because most prior studies of schizoaffective disorder have only evaluated differences between groups at a static timepoint, it is unclear how these disorders may be related when the entire illness course is taken into consideration. We ascertained a large cohort [N = 993] of psychiatric patients with a range of psychotic diagnoses including schizophrenia with no history of major affective episodes [SZ-; N = 371], schizophrenia with a superimposed mood syndrome [SZ+; N = 224], schizoaffective disorder [SAD; N = 129] and bipolar I disorder with psychotic features [BPD+; N = 269]. Using cross-sectional data we designed key clinical and neurocognitive dependent measures that allowed us to test longitudinal hypotheses about the differences between these diagnostic entities. Large differences between diagnostic groups on several demographic and clinical variables were observed. Most notably, groups differed on a putative measure of cognitive decline. Specifically, the SAD group demonstrated significantly greater post-onset cognitive decline compared to the BPD+ group, with the SZ- and SZ+ groups both exhibiting levels of decline intermediate to BPD+ and SAD. These results suggest that schizoaffective disorder may possess distinct features. Contrary to earlier formulations, schizoaffective disorder may be a more severe form of illness.

  9. The Development and Validation of the Bergen–Yale Sex Addiction Scale With a Large National Sample

    Science.gov (United States)

    Andreassen, Cecilie S.; Pallesen, Ståle; Griffiths, Mark D.; Torsheim, Torbjørn; Sinha, Rajita

    2018-01-01

    The view that problematic excessive sexual behavior (“sex addiction”) is a form of behavioral addiction has gained more credence in recent years, but there is still considerable controversy regarding operationalization of the concept. Furthermore, most previous studies have relied on small clinical samples. The present study presents a new method for assessing sex addiction—the Bergen–Yale Sex Addiction Scale (BYSAS)—based on established addiction components (i.e., salience/craving, mood modification, tolerance, withdrawal, conflict/problems, and relapse/loss of control). Using a cross-sectional survey, the BYSAS was administered to a broad national sample of 23,533 Norwegian adults [aged 16–88 years; mean (± SD) age = 35.8 ± 13.3 years], together with validated measures of the Big Five personality traits, narcissism, self-esteem, and a measure of sexual addictive behavior. Both an exploratory and a confirmatory factor analysis (RMSEA = 0.046, CFI = 0.998, TLI = 0.996) supported a one-factor solution, although a local dependence between two items (Items 1 and 2) was detected. Furthermore, the scale had good internal consistency (Cronbach's α = 0.83). The BYSAS correlated significantly with the reference scale (r = 0.52), and demonstrated similar patterns of convergent and discriminant validity. The BYSAS was positively related to extroversion, neuroticism, intellect/imagination, and narcissism, and negatively related to conscientiousness, agreeableness, and self-esteem. High scores on the BYSAS were more prevalent among those who were men, single, of younger age, and with higher education. The BYSAS is a brief, and psychometrically reliable and valid measure for assessing sex addiction. However, further validation of the BYSAS is needed in other countries and contexts. PMID:29568277
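The internal-consistency figure reported above (Cronbach's α = 0.83) follows the standard formula α = k/(k − 1) · (1 − Σ item variances / variance of the total score). A minimal sketch of the computation on toy data, not the study's:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var_sum = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var_sum / total_var)

# Toy data: three perfectly correlated items -> alpha = 1.0
perfect = np.outer(np.arange(5.0), np.ones(3))
alpha = cronbach_alpha(perfect)
```

Values near 0.83, as reported for the BYSAS, indicate that the items covary strongly enough to be summed into a single scale score.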

  10. The Development and Validation of the Bergen–Yale Sex Addiction Scale With a Large National Sample

    Directory of Open Access Journals (Sweden)

    Cecilie S. Andreassen

    2018-03-01

    Full Text Available The view that problematic excessive sexual behavior (“sex addiction”) is a form of behavioral addiction has gained more credence in recent years, but there is still considerable controversy regarding operationalization of the concept. Furthermore, most previous studies have relied on small clinical samples. The present study presents a new method for assessing sex addiction—the Bergen–Yale Sex Addiction Scale (BYSAS)—based on established addiction components (i.e., salience/craving, mood modification, tolerance, withdrawal, conflict/problems, and relapse/loss of control). Using a cross-sectional survey, the BYSAS was administered to a broad national sample of 23,533 Norwegian adults [aged 16–88 years; mean (± SD) age = 35.8 ± 13.3 years], together with validated measures of the Big Five personality traits, narcissism, self-esteem, and a measure of sexual addictive behavior. Both an exploratory and a confirmatory factor analysis (RMSEA = 0.046, CFI = 0.998, TLI = 0.996) supported a one-factor solution, although a local dependence between two items (Items 1 and 2) was detected. Furthermore, the scale had good internal consistency (Cronbach's α = 0.83). The BYSAS correlated significantly with the reference scale (r = 0.52), and demonstrated similar patterns of convergent and discriminant validity. The BYSAS was positively related to extroversion, neuroticism, intellect/imagination, and narcissism, and negatively related to conscientiousness, agreeableness, and self-esteem. High scores on the BYSAS were more prevalent among those who were men, single, of younger age, and with higher education. The BYSAS is a brief, and psychometrically reliable and valid measure for assessing sex addiction. However, further validation of the BYSAS is needed in other countries and contexts.

  11. The Development and Validation of the Bergen-Yale Sex Addiction Scale With a Large National Sample.

    Science.gov (United States)

    Andreassen, Cecilie S; Pallesen, Ståle; Griffiths, Mark D; Torsheim, Torbjørn; Sinha, Rajita

    2018-01-01

    The view that problematic excessive sexual behavior ("sex addiction") is a form of behavioral addiction has gained more credence in recent years, but there is still considerable controversy regarding operationalization of the concept. Furthermore, most previous studies have relied on small clinical samples. The present study presents a new method for assessing sex addiction-the Bergen-Yale Sex Addiction Scale (BYSAS)-based on established addiction components (i.e., salience/craving, mood modification, tolerance, withdrawal, conflict/problems, and relapse/loss of control). Using a cross-sectional survey, the BYSAS was administered to a broad national sample of 23,533 Norwegian adults [aged 16-88 years; mean (± SD) age = 35.8 ± 13.3 years], together with validated measures of the Big Five personality traits, narcissism, self-esteem, and a measure of sexual addictive behavior. Both an exploratory and a confirmatory factor analysis (RMSEA = 0.046, CFI = 0.998, TLI = 0.996) supported a one-factor solution, although a local dependence between two items (Items 1 and 2) was detected. Furthermore, the scale had good internal consistency (Cronbach's α = 0.83). The BYSAS correlated significantly with the reference scale (r = 0.52), and demonstrated similar patterns of convergent and discriminant validity. The BYSAS was positively related to extroversion, neuroticism, intellect/imagination, and narcissism, and negatively related to conscientiousness, agreeableness, and self-esteem. High scores on the BYSAS were more prevalent among those who were men, single, of younger age, and with higher education. The BYSAS is a brief, and psychometrically reliable and valid measure for assessing sex addiction. However, further validation of the BYSAS is needed in other countries and contexts.

  12. Clinical presentation of hyperthyroidism in a large representative sample of outpatients in France: relationships with age, aetiology and hormonal parameters.

    Science.gov (United States)

    Goichot, B; Caron, Ph; Landron, F; Bouée, S

    2016-03-01

    Signs and symptoms of thyrotoxicosis are not specific, and thyroid function tests are frequently prescribed to recognize such thyroid dysfunction. Ultrasensitive assays of thyroid-stimulating hormone (TSH) allow early diagnosis and identification of mild hyperthyroidism (generally designated as 'subclinical'). The aim of this study was to re-evaluate the clinical picture of thyrotoxicosis in the context of the current widespread use of ultrasensitive TSH assays. Prospective descriptive cohort. The clinical presentation of 1572 patients with a recent diagnosis of hyperthyroidism was recorded (symptoms, hormonal evaluation and treatment). A total of 1240 (78·9%) patients were women, mean age 48 ± 17 years. Subclinical hyperthyroidism (SCHT) was present in 86 patients (10·4%). Symptoms of thyrotoxicosis were, in decreasing order of frequency: palpitations, weakness, heat-related signs and disturbed sleep. A total of 64·9% of patients had lost weight. Signs and symptoms were more frequent in Graves' disease and in young patients, and were partially related to biochemical severity. Symptoms were less frequent in elderly patients except for cardiac manifestations (atrial fibrillation). Most patients with SCHT had one or several signs or symptoms of thyrotoxicosis. This study confirms that elderly patients have fewer symptoms of thyrotoxicosis than younger subjects but are at increased risk of cardiac complications. Our results show that most patients with 'subclinical' HT in fact have signs or symptoms of thyrotoxicosis. © 2015 John Wiley & Sons Ltd.

  13. DS86 neutron dose. Monte Carlo analysis for depth profile of {sup 152}Eu activity in a large stone sample

    Energy Technology Data Exchange (ETDEWEB)

    Endo, Satoru; Hoshi, Masaharu; Takada, Jun [Hiroshima Univ. (Japan). Research Inst. for Radiation Biology and Medicine; Iwatani, Kazuo; Oka, Takamitsu; Shizuma, Kiyoshi; Imanaka, Tetsuji; Fujita, Shoichiro; Hasai, Hiromi

    1999-06-01

    The depth profile of {sup 152}Eu activity induced in a large granite stone pillar by Hiroshima atomic bomb neutrons was calculated by a Monte Carlo N-Particle Transport Code (MCNP). The pillar was on the Motoyasu Bridge, located at a distance of 132 m (WSW) from the hypocenter. It was a square column with a horizontal sectional size of 82.5 cm x 82.5 cm and height of 179 cm. Twenty-one cells from the north to south surface at the central height of the column were specified for the calculation and {sup 152}Eu activities for each cell were calculated. The incident neutron spectrum was assumed to be the angular fluence data of the Dosimetry System 1986 (DS86). The angular dependence of the spectrum was taken into account by dividing the whole solid angle into twenty-six directions. The calculated depth profile of specific activity did not agree with the measured profile. A discrepancy was found in the absolute values at each depth with a mean multiplication factor of 0.58 and also in the shape of the relative profile. The results indicated that a reassessment of the neutron energy spectrum in DS86 is required for correct dose estimation. (author)

  14. Reconciling disparate prevalence rates of PTSD in large samples of US male Vietnam veterans and their controls

    Directory of Open Access Journals (Sweden)

    Gottesman Irving I

    2006-05-01

    Full Text Available Abstract Background Two large independent studies funded by the US government have assessed the impact of the Vietnam War on the prevalence of PTSD in US veterans. The National Vietnam Veterans Readjustment Study (NVVRS) estimated the current PTSD prevalence to be 15.2% while the Vietnam Experience Study (VES) estimated the prevalence to be 2.2%. We compared alternative criteria for estimating the prevalence of PTSD using the NVVRS and VES public use data sets collected more than 10 years after the United States withdrew troops from Vietnam. Methods We applied uniform diagnostic procedures to the male veterans from the NVVRS and VES to estimate PTSD prevalences based on varying criteria including one-month and lifetime prevalence estimates, combat and non-combat prevalence estimates, and prevalence estimates using both single and multiple indicator models. Results Using a narrow and specific set of criteria, we derived current prevalence estimates for combat-related PTSD of 2.5% and 2.9% for the VES and the NVVRS, respectively. Using a more broad and sensitive set of criteria, we derived current prevalence estimates for combat-related PTSD of 12.2% and 15.8% for the VES and NVVRS, respectively. Conclusion When comparable methods were applied to available data we reconciled disparate results and estimated similar current prevalences for both narrow and broad definitions of combat-related diagnoses of PTSD.

  15. Nanoscale Synaptic Membrane Mimetic Allows Unbiased High Throughput Screen That Targets Binding Sites for Alzheimer's-Associated Aβ Oligomers.

    Directory of Open Access Journals (Sweden)

    Kyle C Wilcox

    Full Text Available Despite their value as sources of therapeutic drug targets, membrane proteomes are largely inaccessible to high-throughput screening (HTS) tools designed for soluble proteins. An important example comprises the membrane proteins that bind amyloid β oligomers (AβOs). AβOs are neurotoxic ligands thought to instigate the synapse damage that leads to Alzheimer's dementia. At present, the identities of initial AβO binding sites are highly uncertain, largely because of extensive protein-protein interactions that occur following attachment of AβOs to surface membranes. Here, we show that AβO binding sites can be obtained in a state suitable for unbiased HTS by encapsulating the solubilized synaptic membrane proteome into nanoscale lipid bilayers (Nanodiscs). This method gives a soluble membrane protein library (SMPL)--a collection of individualized synaptic proteins in a soluble state. Proteins within SMPL Nanodiscs showed enzymatic and ligand binding activity consistent with conformational integrity. AβOs were found to bind SMPL Nanodiscs with high affinity and specificity, with binding dependent on intact synaptic membrane proteins, and selective for the higher molecular weight oligomers known to accumulate at synapses. Combining SMPL Nanodiscs with a mix-incubate-read chemiluminescence assay provided a solution-based HTS platform to discover antagonists of AβO binding. Screening a library of 2700 drug-like compounds and natural products yielded one compound that potently reduced AβO binding to SMPL Nanodiscs, synaptosomes, and synapses in nerve cell cultures. Although not a therapeutic candidate, this small molecule inhibitor of synaptic AβO binding will provide a useful experimental antagonist for future mechanistic studies of AβOs in Alzheimer's model systems. Overall, results provide proof of concept for using SMPLs in high throughput screening for AβO binding antagonists, and illustrate in general how a SMPL Nanodisc system can facilitate drug discovery.

  16. Nanoscale Synaptic Membrane Mimetic Allows Unbiased High Throughput Screen That Targets Binding Sites for Alzheimer’s-Associated Aβ Oligomers

    Science.gov (United States)

    Wilcox, Kyle C.; Marunde, Matthew R.; Das, Aditi; Velasco, Pauline T.; Kuhns, Benjamin D.; Marty, Michael T.; Jiang, Haoming; Luan, Chi-Hao; Sligar, Stephen G.; Klein, William L.

    2015-01-01

    Despite their value as sources of therapeutic drug targets, membrane proteomes are largely inaccessible to high-throughput screening (HTS) tools designed for soluble proteins. An important example comprises the membrane proteins that bind amyloid β oligomers (AβOs). AβOs are neurotoxic ligands thought to instigate the synapse damage that leads to Alzheimer’s dementia. At present, the identities of initial AβO binding sites are highly uncertain, largely because of extensive protein-protein interactions that occur following attachment of AβOs to surface membranes. Here, we show that AβO binding sites can be obtained in a state suitable for unbiased HTS by encapsulating the solubilized synaptic membrane proteome into nanoscale lipid bilayers (Nanodiscs). This method gives a soluble membrane protein library (SMPL)—a collection of individualized synaptic proteins in a soluble state. Proteins within SMPL Nanodiscs showed enzymatic and ligand binding activity consistent with conformational integrity. AβOs were found to bind SMPL Nanodiscs with high affinity and specificity, with binding dependent on intact synaptic membrane proteins, and selective for the higher molecular weight oligomers known to accumulate at synapses. Combining SMPL Nanodiscs with a mix-incubate-read chemiluminescence assay provided a solution-based HTS platform to discover antagonists of AβO binding. Screening a library of 2700 drug-like compounds and natural products yielded one compound that potently reduced AβO binding to SMPL Nanodiscs, synaptosomes, and synapses in nerve cell cultures. Although not a therapeutic candidate, this small molecule inhibitor of synaptic AβO binding will provide a useful experimental antagonist for future mechanistic studies of AβOs in Alzheimer’s model systems. Overall, results provide proof of concept for using SMPLs in high throughput screening for AβO binding antagonists, and illustrate in general how a SMPL Nanodisc system can facilitate drug discovery.

  17. Nanoscale Synaptic Membrane Mimetic Allows Unbiased High Throughput Screen That Targets Binding Sites for Alzheimer's-Associated Aβ Oligomers.

    Science.gov (United States)

    Wilcox, Kyle C; Marunde, Matthew R; Das, Aditi; Velasco, Pauline T; Kuhns, Benjamin D; Marty, Michael T; Jiang, Haoming; Luan, Chi-Hao; Sligar, Stephen G; Klein, William L

    2015-01-01

    Despite their value as sources of therapeutic drug targets, membrane proteomes are largely inaccessible to high-throughput screening (HTS) tools designed for soluble proteins. An important example comprises the membrane proteins that bind amyloid β oligomers (AβOs). AβOs are neurotoxic ligands thought to instigate the synapse damage that leads to Alzheimer's dementia. At present, the identities of initial AβO binding sites are highly uncertain, largely because of extensive protein-protein interactions that occur following attachment of AβOs to surface membranes. Here, we show that AβO binding sites can be obtained in a state suitable for unbiased HTS by encapsulating the solubilized synaptic membrane proteome into nanoscale lipid bilayers (Nanodiscs). This method gives a soluble membrane protein library (SMPL)--a collection of individualized synaptic proteins in a soluble state. Proteins within SMPL Nanodiscs showed enzymatic and ligand binding activity consistent with conformational integrity. AβOs were found to bind SMPL Nanodiscs with high affinity and specificity, with binding dependent on intact synaptic membrane proteins, and selective for the higher molecular weight oligomers known to accumulate at synapses. Combining SMPL Nanodiscs with a mix-incubate-read chemiluminescence assay provided a solution-based HTS platform to discover antagonists of AβO binding. Screening a library of 2700 drug-like compounds and natural products yielded one compound that potently reduced AβO binding to SMPL Nanodiscs, synaptosomes, and synapses in nerve cell cultures. Although not a therapeutic candidate, this small molecule inhibitor of synaptic AβO binding will provide a useful experimental antagonist for future mechanistic studies of AβOs in Alzheimer's model systems. Overall, results provide proof of concept for using SMPLs in high throughput screening for AβO binding antagonists, and illustrate in general how a SMPL Nanodisc system can facilitate drug discovery

  18. Robust and efficient direct multiplex amplification method for large-scale DNA detection of blood samples on FTA cards

    International Nuclear Information System (INIS)

    Jiang Bowei; Xiang Fawei; Zhao Xingchun; Wang Lihua; Fan Chunhai

    2013-01-01

    Deoxyribonucleic acid (DNA) damage arising from radiation has become widespread along with the development of nuclear weapons and the wide clinical application of computed tomography (CT) scans and nuclear medicine. All ionizing radiations (X-rays, γ-rays, alpha particles, etc.) and ultraviolet (UV) radiation lead to DNA damage. Polymerase chain reaction (PCR) is one of the most widely used techniques for detecting DNA damage, as amplification stops at the site of the damage. Improvements to enhance the efficiency of PCR are always required and remain a great challenge. Here we establish a multiplex PCR assay system (MPAS) that serves as a robust and efficient method for direct detection of target DNA sequences in genomic DNA. The system was established by adding a combination of PCR enhancers to standard PCR buffer. The performance of MPAS was demonstrated by carrying out direct PCR amplification on 1.2 mm human blood punches using commercially available primer sets that include multiple primer pairs. The optimized PCR system produced high-quality genotyping results with no indication of inhibitory effects and led to a full-profile success rate of 98.13%. Our studies demonstrate that the MPAS provides an efficient and robust method for obtaining sensitive, reliable and reproducible PCR results from human blood samples. (authors)

  19. Self-esteem development across the life span: a longitudinal study with a large sample from Germany.

    Science.gov (United States)

    Orth, Ulrich; Maes, Jürgen; Schmitt, Manfred

    2015-02-01

    The authors examined the development of self-esteem across the life span. Data came from a German longitudinal study with 3 assessments across 4 years of a sample of 2,509 individuals ages 14 to 89 years. The self-esteem measure used showed strong measurement invariance across assessments and birth cohorts. Latent growth curve analyses indicated that self-esteem follows a quadratic trajectory across the life span, increasing during adolescence, young adulthood, and middle adulthood, reaching a peak at age 60 years, and then declining in old age. No cohort effects on average levels of self-esteem or on the shape of the trajectory were found. Moreover, the trajectory did not differ across gender, level of education, or for individuals who had lived continuously in West versus East Germany (i.e., the 2 parts of Germany that had been separate states from 1949 to 1990). However, the results suggested that employment status, household income, and satisfaction in the domains of work, relationships, and health contribute to a more positive life span trajectory of self-esteem. The findings have significant implications, because they call attention to developmental stages in which individuals may be vulnerable because of low self-esteem (such as adolescence and old age) and to factors that predict successful versus problematic developmental trajectories. PsycINFO Database Record (c) 2015 APA, all rights reserved.
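A quadratic trajectory y = c₂x² + c₁x + c₀ peaks at x = −c₁/(2c₂). Fitting synthetic mean values constructed to peak at age 60 (illustrative numbers, not the study's data or its latent growth curve model) recovers that vertex:

```python
import numpy as np

ages = np.arange(14, 90, dtype=float)
# Synthetic mean self-esteem following a quadratic with its vertex at
# age 60 (illustrative values only)
means = 4.0 - 0.002 * (ages - 60.0) ** 2

c2, c1, c0 = np.polyfit(ages, means, 2)   # fit y = c2*x**2 + c1*x + c0
peak_age = -c1 / (2.0 * c2)               # ~60
```

The same vertex formula applies to the fixed-effect quadratic coefficients of a latent growth curve, which is how a "peak at age 60" can be read off the fitted trajectory.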

  20. Structural validity and reliability of the Positive and Negative Affect Schedule (PANAS): evidence from a large Brazilian community sample.

    Science.gov (United States)

    Carvalho, Hudson W de; Andreoli, Sérgio B; Lara, Diogo R; Patrick, Christopher J; Quintana, Maria Inês; Bressan, Rodrigo A; Melo, Marcelo F de; Mari, Jair de J; Jorge, Miguel R

    2013-01-01

    Positive and negative affect are the two psychobiological-dispositional dimensions reflecting proneness to positive and negative activation that influence the extent to which individuals experience life events as joyful or as distressful. The Positive and Negative Affect Schedule (PANAS) is a structured questionnaire that provides independent indexes of positive and negative affect. This study aimed to validate a Brazilian interview-version of the PANAS by means of factor and internal consistency analysis. A representative community sample of 3,728 individuals residing in the cities of São Paulo and Rio de Janeiro, Brazil, voluntarily completed the PANAS. Exploratory structural equation model analysis was based on maximum likelihood estimation and reliability was calculated via Cronbach's alpha coefficient. Our results provide support for the hypothesis that the PANAS reliably measures two distinct dimensions of positive and negative affect. The structure and reliability of the Brazilian version of the PANAS are consistent with those of its original version. Taken together, these results attest the validity of the Brazilian adaptation of the instrument.

  1. Indoor NO₂ sampling in a large university campus in Benin City, southern Nigeria, using Palmes diffusion tubes

    International Nuclear Information System (INIS)

    Ukpebor, E.E.; Sadiku, Y.T.; Ahonkhai, S.I.

    2005-01-01

    Monitoring of NO₂ in different indoor environments (without cooking and with cooking using different fuels) was done. Palmes diffusion tubes were used for the monitoring. The sampling duration was two weeks. The highest NO₂ concentration of 38.61 ppb (73.74 µg/m³) was measured in the room where cooking was done with a gas burner. This was followed by the room with firewood cooking, where the concentration was 36.75 ppb (70.19 µg/m³), and the lowest concentration of 24.05 ppb (46.80 µg/m³) was noted in the room where a kerosene stove was used for cooking. It is significant that the WHO annual average guideline value of 40 µg/m³ was exceeded in all the rooms where cooking was done. Levels obtained in this study, therefore, suggest a need for precautionary mitigation. However, the outdoor concentration of NO₂ was almost the same as that obtained indoors in the rooms without cooking. This suggests high penetration indoors of outdoor NO₂. A background level of 3.40 ppb (6.49 µg/m³) was established for the environment in Ugbowo, Benin City, Nigeria. (author)

  2. Problematic internet use and problematic online gaming are not the same: findings from a large nationally representative adolescent sample.

    Science.gov (United States)

    Király, Orsolya; Griffiths, Mark D; Urbán, Róbert; Farkas, Judit; Kökönyei, Gyöngyi; Elekes, Zsuzsanna; Tamás, Domokos; Demetrovics, Zsolt

    2014-12-01

    There is an ongoing debate in the literature whether problematic Internet use (PIU) and problematic online gaming (POG) are two distinct conceptual and nosological entities or whether they are the same. The present study contributes to this question by examining the interrelationship and the overlap between PIU and POG in terms of sex, school achievement, time spent using the Internet and/or online gaming, psychological well-being, and preferred online activities. Questionnaires assessing these variables were administered to a nationally representative sample of adolescent gamers (N=2,073; Mage=16.4 years, SD=0.87; 68.4% male). Data showed that Internet use was a common activity among adolescents, while online gaming was engaged in by a considerably smaller group. Similarly, more adolescents met the criteria for PIU than for POG, and a small group of adolescents showed symptoms of both problem behaviors. The most notable difference between the two problem behaviors was in terms of sex. POG was much more strongly associated with being male. Self-esteem had low effect sizes on both behaviors, while depressive symptoms were associated with both PIU and POG, affecting PIU slightly more. In terms of preferred online activities, PIU was positively associated with online gaming, online chatting, and social networking, while POG was only associated with online gaming. Based on our findings, POG appears to be a conceptually different behavior from PIU, and therefore the data support the notion that Internet Addiction Disorder and Internet Gaming Disorder are separate nosological entities.

  3. Who art thou? Personality predictors of artistic preferences in a large UK sample: the importance of openness.

    Science.gov (United States)

    Chamorro-Premuzic, Tomas; Reimers, Stian; Hsu, Anne; Ahmetoglu, Gorkan

    2009-08-01

    The present study examined individual differences in artistic preferences in a sample of 91,692 participants (60% women and 40% men), aged 13-90 years. Participants completed a Big Five personality inventory (Goldberg, 1999) and provided preference ratings for 24 different paintings corresponding to cubism, renaissance, impressionism, and Japanese art, which loaded on to a latent factor of overall art preferences. As expected, the personality trait openness to experience was the strongest and only consistent personality correlate of artistic preferences, affecting both overall and specific preferences, as well as visits to galleries, and artistic (rather than scientific) self-perception. Overall preferences were also positively influenced by age and visits to art galleries, and to a lesser degree, by artistic self-perception and conscientiousness (negatively). As for specific styles, after overall preferences were accounted for, more agreeable, more conscientious and less open individuals reported higher preference levels for impressionist, younger and more extraverted participants showed higher levels of preference for cubism (as did males), and younger participants, as well as males, reported higher levels of preferences for renaissance. Limitations and recommendations for future research are discussed.

  4. Superposition Enhanced Nested Sampling

    Directory of Open Access Journals (Sweden)

    Stefano Martiniani

    2014-08-01

    Full Text Available The theoretical analysis of many problems in physics, astronomy, and applied mathematics requires an efficient numerical exploration of multimodal parameter spaces that exhibit broken ergodicity. Monte Carlo methods are widely used to deal with these classes of problems, but such simulations suffer from a ubiquitous sampling problem: The probability of sampling a particular state is proportional to its entropic weight. Devising an algorithm capable of sampling efficiently the full phase space is a long-standing problem. Here, we report a new hybrid method for the exploration of multimodal parameter spaces exhibiting broken ergodicity. Superposition enhanced nested sampling combines the strengths of global optimization with the unbiased or athermal sampling of nested sampling, greatly enhancing its efficiency with no additional parameters. We report extensive tests of this new approach for atomic clusters that are known to have energy landscapes for which conventional sampling schemes suffer from broken ergodicity. We also introduce a novel parallelization algorithm for nested sampling.
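Nested sampling itself can be illustrated on a toy problem: estimating the evidence Z = ∫ L(θ)π(θ) dθ for a standard normal likelihood under a uniform prior on [-10, 10], where the analytic answer is ln Z ≈ ln(1/20) ≈ -3.0. This is a hedged minimal sketch using naive rejection sampling to replace the worst live point; real implementations, including the superposition-enhanced variant described above, use far more sophisticated constrained moves:

```python
import math
import random

import numpy as np

def log_like(x: float) -> float:
    """Standard normal log-likelihood."""
    return -0.5 * x * x - 0.5 * math.log(2 * math.pi)

def nested_sampling(n_live: int = 200, n_iter: int = 1000, seed: int = 0) -> float:
    """Log-evidence estimate for a N(0,1) likelihood, uniform prior on [-10, 10]."""
    rng = random.Random(seed)
    live = [rng.uniform(-10.0, 10.0) for _ in range(n_live)]
    log_z = -math.inf
    # ln(X_{i} - X_{i+1}) for prior-mass shells X_i ~ exp(-i / n_live)
    log_shell = math.log(1.0 - math.exp(-1.0 / n_live))
    for i in range(n_iter):
        worst = min(range(n_live), key=lambda j: log_like(live[j]))
        l_star = log_like(live[worst])
        log_z = np.logaddexp(log_z, log_shell - i / n_live + l_star)
        # Replace the worst live point with a prior draw above the threshold
        while True:
            cand = rng.uniform(-10.0, 10.0)
            if log_like(cand) > l_star:
                live[worst] = cand
                break
    # Add the contribution of the remaining prior mass via the live points
    log_rest = math.log(sum(math.exp(log_like(x)) for x in live) / n_live)
    return float(np.logaddexp(log_z, -n_iter / n_live + log_rest))

print(f"ln Z estimate: {nested_sampling():.2f} (analytic: {math.log(0.05):.2f})")
```

The rejection step is the part that breaks down for multimodal landscapes with broken ergodicity, which is the failure mode the superposition-enhanced method targets.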

  5. On the aspiration characteristics of large-diameter, thin-walled aerosol sampling probes at yaw orientations with respect to the wind

    International Nuclear Information System (INIS)

    Vincent, J.H.; Mark, D.; Smith, T.A.; Stevens, D.C.; Marshall, M.

    1986-01-01

    Experiments were carried out in a large wind tunnel to investigate the aspiration efficiencies of thin-walled aerosol sampling probes of large diameter (up to 50 mm) at orientations with respect to the wind direction ranging from 0 to 180 degrees. Sampling conditions ranged from sub- to super-isokinetic. The experiments employed test dusts of close-graded fused alumina and were conducted under conditions of controlled freestream turbulence. For orientations up to and including 90 degrees, the results were qualitatively and quantitatively consistent with a new physical model which takes account of the fact that the sampled air not only diverges or converges (depending on the relationship between wind speed and sampling velocity) but also turns to pass through the plane of the sampling orifice. The previously published results of Durham and Lundgren (1980) and Davies and Subari (1982) for smaller probes were also in good agreement with the new model. The model breaks down, however, for orientations greater than 90 degrees due to the increasing effect of particle impaction onto the blunt leading edge of the probe body. For the probe facing directly away from the wind (180 degree orientation), aspiration efficiency is dominated almost entirely by this effect. (author)

  6. Quantitative Prediction of Beef Quality Using Visible and NIR Spectroscopy with Large Data Samples Under Industry Conditions

    Science.gov (United States)

    Qiao, T.; Ren, J.; Craigie, C.; Zabalza, J.; Maltin, Ch.; Marshall, S.

    2015-03-01

    It is well known that the eating quality of beef has a significant influence on the repurchase behavior of consumers. There are several key factors that affect the perception of quality, including color, tenderness, juiciness, and flavor. To support consumer repurchase choices, there is a need for an objective measurement of quality that could be applied to meat prior to its sale. Objective approaches such as offered by spectral technologies may be useful, but the analytical algorithms used remain to be optimized. For visible and near infrared (VISNIR) spectroscopy, Partial Least Squares Regression (PLSR) is a widely used technique for meat related quality modeling and prediction. In this paper, a Support Vector Machine (SVM) based machine learning approach is presented to predict beef eating quality traits. Although SVM has been successfully used in various disciplines, it has not been applied extensively to the analysis of meat quality parameters. To this end, the performance of PLSR and SVM as tools for the analysis of meat tenderness is evaluated, using a large dataset acquired under industrial conditions. The spectral dataset was collected using VISNIR spectroscopy with the wavelength ranging from 350 to 1800 nm on 234 beef M. longissimus thoracis steaks from heifers, steers, and young bulls. As the dimensionality with the VISNIR data is very high (over 1600 spectral bands), the Principal Component Analysis (PCA) technique was applied for feature extraction and data reduction. The extracted principal components (less than 100) were then used for data modeling and prediction. The prediction results showed that SVM has a greater potential to predict beef eating quality than PLSR, especially for the prediction of tenderness. The influence of animal gender on beef quality prediction was also investigated, and it was found that beef quality traits were predicted most accurately in beef from young bulls.
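The modelling pipeline described (standardize, reduce more than 1600 bands to fewer than 100 principal components, then regress with an SVM) can be sketched with scikit-learn. The spectra below are random stand-ins for the VISNIR data, so the fitted numbers are illustrative only:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Synthetic stand-in: 234 "steaks" x 1600 "spectral bands" (hypothetical data)
X = rng.normal(size=(234, 1600))
# Toy "tenderness" target loosely tied to the first bands
y = X[:, :50].mean(axis=1) + 0.1 * rng.normal(size=234)

# Standardize, project onto 50 principal components, then fit an SVM regressor
model = make_pipeline(StandardScaler(), PCA(n_components=50), SVR(kernel="rbf", C=10.0))
model.fit(X[:200], y[:200])
preds = model.predict(X[200:])
print("held-out predictions:", preds[:3])
```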

  7. Maternal intrusiveness, family financial means, and anxiety across childhood in a large multiphase sample of community youth

    Science.gov (United States)

    Cooper-Vince, Christine E.; Pincus, Donna B.; Comer, Jonathan S.

    2013-01-01

    Intrusive parenting has been positively associated with child anxiety, although examinations of this relationship to date have been largely confined to middle to upper middle class families and have rarely used longitudinal designs. With several leading interventions for child anxiety emphasizing the reduction of parental intrusiveness, it is critical to determine whether the links between parental intrusiveness and child anxiety broadly apply to families of all financial means, and whether parental intrusiveness prospectively predicts the development of child anxiety. This study employed latent growth curve analysis to evaluate the interactive effects of maternal intrusiveness and financial means on the developmental trajectory of child anxiety from 1st grade to age 15 in 1,121 children (50.7% male) and their parents from the NICHD SECCYD. The overall model was found to provide good fit, revealing that early maternal intrusiveness and financial means did not impact individual trajectories of change in child anxiety, which were stable from 1st to 5th grade and then decreased from 5th grade to age 15. Cross-sectional analyses also examined whether family financial means moderated contemporaneous relationships between maternal intrusiveness and child anxiety in 3rd and 5th grades. The relationship between maternal intrusiveness and child anxiety was moderated by family financial means for 1st graders, with stronger links found among children of lower family financial means, but not for 3rd and 5th graders. Neither maternal intrusiveness nor financial means in 1st grade predicted subsequent changes in anxiety across childhood. Findings help elucidate for whom and when maternal intrusiveness has the greatest link with child anxiety and can inform targeted treatment efforts. PMID:23929005

  8. Liver enzyme abnormalities in taking traditional herbal medicine in Korea: A retrospective large sample cohort study of musculoskeletal disorder patients.

    Science.gov (United States)

    Lee, Jinho; Shin, Joon-Shik; Kim, Me-Riong; Byun, Jang-Hoon; Lee, Seung-Yeol; Shin, Ye-Sle; Kim, Hyejin; Byung Park, Ki; Shin, Byung-Cheul; Lee, Myeong Soo; Ha, In-Hyuk

    2015-07-01

    The objective of this study is to report the incidence of liver injury from herbal medicine in musculoskeletal disease patients as large-scale studies are scarce. Considering that herbal medicine is frequently used in patients irrespective of liver function in Korea, we investigated the prevalence of liver injury by liver function test results in musculoskeletal disease patients. Of 32675 inpatients taking herbal medicine at 7 locations of a Korean medicine hospital between 2005 and 2013, we screened for liver injury in 6894 patients with liver function tests (LFTs) at admission and discharge. LFTs included t-bilirubin, AST, ALT, and ALP. Liver injury at discharge was assessed by LFT result classifications at admission (liver injury, liver function abnormality, and normal liver function). In analyses for risk factors of liver injury at discharge, we adjusted for age, sex, length of stay, conventional medicine intake, HBs antigen/antibody, and liver function at admission. A total 354 patients (prevalence 5.1%) had liver injury at admission, and 217 (3.1%) at discharge. Of the 354 patients with liver injury at admission, only 9 showed a clinically significant increase after herbal medicine intake, and 225 returned to within normal range or showed significant liver function recovery. Out of 4769 patients with normal liver function at admission, 27 (0.6%) had liver injury at discharge. In multivariate analyses for risk factors, younger age, liver function abnormality at admission, and HBs antigen positive were associated with injury at discharge. The prevalence of liver injury in patients with normal liver function taking herbal medicine for musculoskeletal disease was low, and herbal medicine did not exacerbate liver injury in most patients with injury prior to intake. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  9. Undergraduate student drinking and related harms at an Australian university: web-based survey of a large random sample

    Directory of Open Access Journals (Sweden)

    Hallett Jonathan

    2012-01-01

    Full Text Available Abstract Background There is considerable interest in university student hazardous drinking among the media and policy makers. However, there have been no population-based studies in Australia to date. We sought to estimate the prevalence and correlates of hazardous drinking and secondhand effects among undergraduates at a Western Australian university. Method We invited 13,000 randomly selected undergraduate students from a commuter university in Australia to participate in an online survey of university drinking. Responses were received from 7,237 students (56%), who served as participants in this study. Results Ninety percent had consumed alcohol in the last 12 months and 34% met criteria for hazardous drinking (AUDIT score ≥ 8 and greater than 6 standard drinks in one sitting in the previous month). Men and Australian/New Zealand residents had significantly increased odds (OR: 2.1; 95% CI: 1.9-2.3; OR: 5.2; 95% CI: 4.4-6.2) of being categorised as dependent (AUDIT score 20 or over) than women and non-residents. In the previous 4 weeks, 13% of students had been insulted or humiliated and 6% had been pushed, hit or otherwise assaulted by others who were drinking. One percent of respondents had experienced sexual assault in this time period. Conclusions Half of men and over a third of women were drinking at hazardous levels and a relatively large proportion of students were negatively affected by their own and other students' drinking. There is a need for intervention to reduce hazardous drinking early in university participation. Trial registration ACTRN12608000104358

  10. Long-term resource variation and group size: A large-sample field test of the Resource Dispersion Hypothesis

    Directory of Open Access Journals (Sweden)

    Morecroft Michael D

    2001-07-01

    Full Text Available Abstract Background The Resource Dispersion Hypothesis (RDH) proposes a mechanism for the passive formation of social groups where resources are dispersed, even in the absence of any benefits of group living per se. Despite supportive modelling, it lacks empirical testing. The RDH predicts that, rather than Territory Size (TS) increasing monotonically with Group Size (GS) to account for increasing metabolic needs, TS is constrained by the dispersion of resource patches, whereas GS is independently limited by their richness. We conducted multiple-year tests of these predictions using data from the long-term study of badgers Meles meles in Wytham Woods, England. The study has long failed to identify direct benefits from group living and, consequently, alternative explanations for their large group sizes have been sought. Results TS was not consistently related to resource dispersion, nor was GS consistently related to resource richness. Results differed according to data groupings and whether territories were mapped using minimum convex polygons or traditional methods. Habitats differed significantly in resource availability, but there was also evidence that food resources may be spatially aggregated within habitat types as well as between them. Conclusions This is, we believe, the largest ever test of the RDH and builds on the long-term project that initiated part of the thinking behind the hypothesis. Support for predictions was mixed and depended on year and the method used to map territory borders. We suggest that within-habitat patchiness, as well as model assumptions, should be further investigated for improved tests of the RDH in the future.

  11. Social communication and emotion difficulties and second to fourth digit ratio in a large community-based sample.

    Science.gov (United States)

    Barona, Manuela; Kothari, Radha; Skuse, David; Micali, Nadia

    2015-01-01

    Recent research investigating the extreme male brain theory of autism spectrum disorders (ASD) has drawn attention to the possibility that autistic type social difficulties may be associated with high prenatal testosterone exposure. This study aims to investigate the association between social communication and emotion recognition difficulties and second to fourth digit ratio (2D:4D) and circulating maternal testosterone during pregnancy in a large community-based cohort: the Avon Longitudinal Study of Parents and Children (ALSPAC). A secondary aim is to investigate possible gender differences in the associations. Data on social communication (Social and Communication Disorders Checklist, N = 7165), emotion recognition (emotional triangles, N = 5844, and diagnostics analysis of non-verbal accuracy, N = 7488) and 2D:4D (second to fourth digit ratio, N = 7159) were collected in childhood and early adolescence from questionnaires and face-to-face assessments. Complete data was available on 3515 children. Maternal circulating testosterone during pregnancy was available in a subsample of 89 children. Males had lower 2D:4D ratios than females [t(3513) = -9.775, p emotion recognition, and the lowest 10% of 2D:4D ratios. A significant association was found between maternal circulating testosterone and left hand 2D:4D [OR = 1.65, 95% CI 1.1-2.4, p < 0.01]. Previous findings on the association between 2D:4D and social communication difficulties were not confirmed. A novel association between an extreme measure of 2D:4D in males suggests threshold effects and warrants replication.

  12. Gaussian mixture modeling of hemispheric lateralization for language in a large sample of healthy individuals balanced for handedness.

    Science.gov (United States)

    Mazoyer, Bernard; Zago, Laure; Jobard, Gaël; Crivello, Fabrice; Joliot, Marc; Perchey, Guy; Mellet, Emmanuel; Petit, Laurent; Tzourio-Mazoyer, Nathalie

    2014-01-01

    Hemispheric lateralization for language production and its relationships with manual preference and manual preference strength were studied in a sample of 297 subjects, including 153 left-handers (LH). A hemispheric functional lateralization index (HFLI) for language was derived from fMRI acquired during a covert sentence generation task as compared with a covert word list recitation. The multimodal HFLI distribution was optimally modeled using a mixture of 3 and 4 Gaussian functions in right-handers (RH) and LH, respectively. Gaussian function parameters helped to define 3 types of language hemispheric lateralization, namely "Typical" (left hemisphere dominance with clear positive HFLI values, 88% of RH, 78% of LH), "Ambilateral" (no dominant hemisphere with HFLI values close to 0, 12% of RH, 15% of LH) and "Strongly-atypical" (right-hemisphere dominance with clear negative HFLI values, 7% of LH). Concordance between dominant hemispheres for hand and for language did not exceed chance level, and most of the association between handedness and language lateralization was explained by the fact that all Strongly-atypical individuals were left-handed. Similarly, most of the relationship between language lateralization and manual preference strength was explained by the fact that Strongly-atypical individuals exhibited a strong preference for their left hand. These results indicate that concordance of hemispheric dominance for hand and for language occurs barely above the chance level, except in a group of rare individuals (less than 1% in the general population) who exhibit strong right hemisphere dominance for both language and their preferred hand. They call for a revisit of models hypothesizing common determinants for handedness and for language dominance.
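The mixture-modeling step in this record can be sketched with scikit-learn's GaussianMixture: fit one- to four-component models to a one-dimensional lateralization index and select the component count by an information criterion (BIC here, one common choice; the study's own model-selection details may differ). The HFLI values below are simulated, not the study's data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic lateralization indices (hypothetical values): a dominant "typical"
# left-lateralized mode, an "ambilateral" mode near 0, and a rare negative mode
hfli = np.concatenate([
    rng.normal(60, 15, 240),   # typical (left dominance, positive HFLI)
    rng.normal(0, 8, 45),      # ambilateral (HFLI close to 0)
    rng.normal(-55, 12, 12),   # strongly atypical (right dominance)
]).reshape(-1, 1)

# Fit mixtures with 1-4 Gaussian components and keep the lowest BIC
fits = {k: GaussianMixture(n_components=k, random_state=0).fit(hfli) for k in (1, 2, 3, 4)}
best_k = min(fits, key=lambda k: fits[k].bic(hfli))
print("components selected by BIC:", best_k)
```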

  13. Association between Childhood Obesity and Metabolic Syndrome: Evidence from a Large Sample of Chinese Children and Adolescents

    Science.gov (United States)

    Chen, Fangfang; Shan, Xiaoyi; Cheng, Hong; Hou, Dongqing; Zhao, Xiaoyuan; Wang, Tianyou; Zhao, Di

    2012-01-01

    Data about metabolic syndrome (MetS) in children is limited in China. We aimed to assess the prevalence of MetS related components, and their association with obesity. Data were collected as part of a representative study on MetS among 19,593 children, aged 6–18 years old in Beijing. General obesity was assessed by body mass index (BMI) and central obesity by waist circumference. Finger capillary blood tests were used to assess triglyceride (TG), total cholesterol (TC) and impaired fasting glucose (IFG). Vein blood samples were collected from a subsample of 3814 children aged 10–18 years to classify MetS. MetS was defined according to the International Diabetes Federation 2007 definition. The associations between MetS related components and the degree and type of obesity were tested using logistic regression models. The prevalence of overweight, obesity, high blood pressure, elevated TG, TC and IFG were 13.6%, 5.8%, 8.5%, 8.8%, 1.2% and 2.5%, respectively. Compared with normal weight children, overweight and obese children were more likely to have other MetS related components. In the subsample of 3814 children aged 10–18 years, the prevalence of MetS was much higher in obese subjects than in their normal weight counterparts (27.6% vs. 0.2%). Children with both general and central obesity had the highest prevalence of MetS. Compared with normal weight children, overweight and obese children were more likely to have MetS (overweight: OR = 67.33, 95%CI = 21.32–212.61; obesity: OR = 249.99, 95% CI = 79.51–785.98). Prevalence of MetS related components has reached high level among Beijing children who were overweight or obese. The association between metabolic disorders and obesity was strong. PMID:23082159
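Odds ratios like those reported here are the exponentiated coefficients of a logistic regression. A hedged sketch on simulated data (the prevalences are illustrative, not the study's), using a large C to effectively disable scikit-learn's default L2 regularization:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
# Hypothetical data: a binary obesity indicator and a binary MetS outcome
obese = rng.integers(0, 2, 2000)
p_mets = np.where(obese == 1, 0.28, 0.02)  # illustrative prevalences only
mets = rng.random(2000) < p_mets

# Near-unregularized fit; exp(coefficient) is the odds ratio for obesity
fit = LogisticRegression(C=1e6).fit(obese.reshape(-1, 1), mets)
odds_ratio = float(np.exp(fit.coef_[0, 0]))
print(f"estimated odds ratio for obesity: {odds_ratio:.1f}")
```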

  14. Problematic Internet Use and Problematic Online Gaming Are Not the Same: Findings from a Large Nationally Representative Adolescent Sample

    Science.gov (United States)

    Griffiths, Mark D.; Urbán, Róbert; Farkas, Judit; Kökönyei, Gyöngyi; Elekes, Zsuzsanna; Tamás, Domokos; Demetrovics, Zsolt

    2014-01-01

    Abstract There is an ongoing debate in the literature whether problematic Internet use (PIU) and problematic online gaming (POG) are two distinct conceptual and nosological entities or whether they are the same. The present study contributes to this question by examining the interrelationship and the overlap between PIU and POG in terms of sex, school achievement, time spent using the Internet and/or online gaming, psychological well-being, and preferred online activities. Questionnaires assessing these variables were administered to a nationally representative sample of adolescent gamers (N=2,073; Mage=16.4 years, SD=0.87; 68.4% male). Data showed that Internet use was a common activity among adolescents, while online gaming was engaged in by a considerably smaller group. Similarly, more adolescents met the criteria for PIU than for POG, and a small group of adolescents showed symptoms of both problem behaviors. The most notable difference between the two problem behaviors was in terms of sex. POG was much more strongly associated with being male. Self-esteem had low effect sizes on both behaviors, while depressive symptoms were associated with both PIU and POG, affecting PIU slightly more. In terms of preferred online activities, PIU was positively associated with online gaming, online chatting, and social networking, while POG was only associated with online gaming. Based on our findings, POG appears to be a conceptually different behavior from PIU, and therefore the data support the notion that Internet Addiction Disorder and Internet Gaming Disorder are separate nosological entities. PMID:25415659

  15. Gaussian mixture modeling of hemispheric lateralization for language in a large sample of healthy individuals balanced for handedness.

    Directory of Open Access Journals (Sweden)

    Bernard Mazoyer

    Full Text Available Hemispheric lateralization for language production and its relationships with manual preference and manual preference strength were studied in a sample of 297 subjects, including 153 left-handers (LH). A hemispheric functional lateralization index (HFLI) for language was derived from fMRI acquired during a covert sentence generation task as compared with a covert word list recitation. The multimodal HFLI distribution was optimally modeled using a mixture of 3 and 4 Gaussian functions in right-handers (RH) and LH, respectively. Gaussian function parameters helped to define 3 types of language hemispheric lateralization, namely "Typical" (left hemisphere dominance with clear positive HFLI values, 88% of RH, 78% of LH), "Ambilateral" (no dominant hemisphere with HFLI values close to 0, 12% of RH, 15% of LH) and "Strongly-atypical" (right-hemisphere dominance with clear negative HFLI values, 7% of LH). Concordance between dominant hemispheres for hand and for language did not exceed chance level, and most of the association between handedness and language lateralization was explained by the fact that all Strongly-atypical individuals were left-handed. Similarly, most of the relationship between language lateralization and manual preference strength was explained by the fact that Strongly-atypical individuals exhibited a strong preference for their left hand. These results indicate that concordance of hemispheric dominance for hand and for language occurs barely above the chance level, except in a group of rare individuals (less than 1% in the general population) who exhibit strong right hemisphere dominance for both language and their preferred hand. They call for a revisit of models hypothesizing common determinants for handedness and for language dominance.

  16. Comparison of Two Methods for Estimating the Sampling-Related Uncertainty of Satellite Rainfall Averages Based on a Large Radar Data Set

    Science.gov (United States)

    Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.

    2002-01-01

    The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.
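The non-parametric side of such an assessment can be sketched directly: subsample a rain-rate series at a regular interval, compare each phase-shifted subsample mean against the full-record mean, and report the relative RMS error as the sampling uncertainty. The series below is synthetic (smoothed noise), not the radar data:

```python
import numpy as np

def sampling_uncertainty(rain: np.ndarray, every: int) -> float:
    """Relative RMS error of estimating the full-record mean from
    regular samples taken every `every` steps, over all phase offsets."""
    truth = rain.mean()
    errs = [rain[o::every].mean() - truth for o in range(every)]
    return float(np.sqrt(np.mean(np.square(errs))) / truth)

rng = np.random.default_rng(2)
# Toy hourly rain-rate record (synthetic, smoothed, strictly positive)
rain = np.maximum(np.convolve(rng.normal(size=720), np.ones(6) / 6, mode="same"), 0) + 0.01

for hours in (1, 3, 6, 12):
    print(f"sampling every {hours} h -> relative RMS error {sampling_uncertainty(rain, hours):.3f}")
```

Sampling every step reproduces the truth exactly (zero error), and the error grows as the sampling interval coarsens relative to the field's variability, which is the behavior the scaling law in the study summarizes.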

  17. $b$-Tagging and Large Radius Jet Modelling in a $g\\rightarrow b\\bar{b}$ rich sample at ATLAS

    CERN Document Server

    Jiang, Zihao; The ATLAS collaboration

    2016-01-01

    Studies of b-tagging performance and jet properties in double b-tagged, large radius jets from sqrt(s)=8 TeV pp collisions recorded by the ATLAS detector at the LHC are presented. The double b-tag requirement yields a sample rich in high pT jets originating from the g->bb process. Using this sample, the performance of b-tagging and modelling of jet substructure variables at small b-quark angular separation is probed.

  18. Comprehensive biostatistical analysis of CpG island methylator phenotype in colorectal cancer using a large population-based sample.

    Directory of Open Access Journals (Sweden)

    Katsuhiko Nosho

    Full Text Available The CpG island methylator phenotype (CIMP) is a distinct phenotype associated with microsatellite instability (MSI) and BRAF mutation in colon cancer. Recent investigations have selected 5 promoters (CACNA1G, IGF2, NEUROG1, RUNX3 and SOCS1) as surrogate markers for CIMP-high. However, no study has comprehensively evaluated an expanded set of methylation markers (including these 5 markers) using a large number of tumors, or deciphered the complex clinical and molecular associations with CIMP-high determined by the validated marker panel. METHODOLOGY/PRINCIPAL FINDINGS: DNA methylation at 16 CpG islands [the above 5 plus CDKN2A (p16), CHFR, CRABP1, HIC1, IGFBP3, MGMT, MINT1, MINT31, MLH1, p14 (CDKN2A/ARF) and WRN] was quantified in 904 colorectal cancers by real-time PCR (MethyLight). In unsupervised hierarchical clustering analysis, the 5 markers (CACNA1G, IGF2, NEUROG1, RUNX3 and SOCS1), CDKN2A, CRABP1, MINT31, MLH1, p14 and WRN were generally clustered with each other and with MSI and BRAF mutation. KRAS mutation was not clustered with any methylation marker, suggesting its association with a random methylation pattern in CIMP-low tumors. Utilizing the validated CIMP marker panel (including the 5 markers), multivariate logistic regression demonstrated that CIMP-high was independently associated with older age, proximal location, poor differentiation, MSI-high, BRAF mutation, and inversely with LINE-1 hypomethylation and beta-catenin (CTNNB1) activation. Mucinous feature, signet ring cells, and p53-negativity were associated with CIMP-high in only univariate analysis. In stratified analyses, the relations of CIMP-high with poor differentiation, KRAS mutation and LINE-1 hypomethylation significantly differed according to MSI status. Our study provides valuable data for standardization of the use of CIMP-high-specific methylation markers. CIMP-high is independently associated with clinical and key molecular features in colorectal cancer. Our data also
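Unsupervised hierarchical clustering of methylation markers, as used in this record, can be sketched with SciPy: cluster marker columns by correlation distance so that co-methylated markers (a CIMP-like block) group together. The marker calls below are simulated, not the study's data:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(4)
# Hypothetical binary methylation calls for 300 tumors x 6 markers:
# markers 0-2 are co-methylated (a CIMP-like block), markers 3-5 independent
cimp = rng.random(300) < 0.3
block = [cimp ^ (rng.random(300) < 0.1) for _ in range(3)]   # 10% flip noise
indep = [rng.random(300) < 0.3 for _ in range(3)]
calls = np.column_stack(block + indep).astype(float)

# Cluster markers (columns) by correlation distance, average linkage
dist = pdist(calls.T, metric="correlation")
tree = linkage(dist, method="average")
groups = fcluster(tree, t=2, criterion="maxclust")
print("marker cluster labels:", groups)
```

Cutting the tree into two clusters keeps the co-methylated trio together, mirroring how the CIMP markers clustered with each other (and with MSI and BRAF mutation) in the study.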

  19. An examination of smoking behavior and opinions about smoke-free environments in a large sample of sexual and gender minority community members.

    Science.gov (United States)

    McElroy, Jane A; Everett, Kevin D; Zaniletti, Isabella

    2011-06-01

    The purpose of this study is to more completely quantify smoking rate and support for smoke-free policies in private and public environments from a large sample of self-identified sexual and gender minority (SGM) populations. A targeted sampling strategy recruited participants from 4 Missouri Pride Festivals and online surveys targeted to SGM populations during the summer of 2008. A 24-item survey gathered information on gender and sexual orientation, smoking status, and questions assessing behaviors and preferences related to smoke-free policies. The project recruited participants through Pride Festivals (n = 2,676) and Web-based surveys (n = 231) representing numerous sexual and gender orientations and the racial composition of the state of Missouri. Differences were found between the Pride Festivals sample and the Web-based sample, including smoking rates, with current smoking for the Web-based sample (22%) significantly less than the Pride Festivals sample (37%; p times more likely to be current smokers compared with the study's heterosexual group (n = 436; p = .005). Statistically fewer SGM racial minorities (33%) are current smokers compared with SGM Whites (37%; p = .04). Support and preferences for public and private smoke-free environments were generally low in the SGM population. The strategic targeting method achieved a large and diverse sample. The findings of high rates of smoking coupled with generally low levels of support for smoke-free public policies in the SGM community highlight the need for additional research to inform programmatic attempts to reduce tobacco use and increase support for smoke-free environments.

  20. MZDASoft: a software architecture that enables large-scale comparison of protein expression levels over multiple samples based on liquid chromatography/tandem mass spectrometry.

    Science.gov (United States)

    Ghanat Bari, Mehrab; Ramirez, Nelson; Wang, Zhiwei; Zhang, Jianqiu Michelle

    2015-10-15

    Without accurate peak linking/alignment, only the expression levels of a small percentage of proteins can be compared across multiple samples in Liquid Chromatography/Mass Spectrometry/Tandem Mass Spectrometry (LC/MS/MS) due to the selective nature of tandem MS peptide identification. This greatly hampers biomedical research that aims at finding biomarkers for disease diagnosis, treatment, and the understanding of disease mechanisms. A recent algorithm, PeakLink, has allowed the accurate linking of LC/MS peaks without tandem MS identifications to their corresponding ones with identifications across multiple samples collected from different instruments, tissues and labs, which greatly enhanced the ability of comparing proteins. However, PeakLink cannot be implemented practically for large numbers of samples based on existing software architectures, because it requires access to peak elution profiles from multiple LC/MS/MS samples simultaneously. We propose a new architecture based on parallel processing, which extracts LC/MS peak features, and saves them in database files to enable the implementation of PeakLink for multiple samples. The software has been deployed in High-Performance Computing (HPC) environments. The core part of the software, MZDASoft Parallel Peak Extractor (PPE), can be downloaded with a user and developer's guide, and it can be run on HPC centers directly. The quantification applications, MZDASoft TandemQuant and MZDASoft PeakLink, are written in Matlab, which are compiled with a Matlab runtime compiler. A sample script that incorporates all necessary processing steps of MZDASoft for LC/MS/MS quantification in a parallel processing environment is available. The project webpage is http://compgenomics.utsa.edu/zgroup/MZDASoft. The proposed architecture enables the implementation of PeakLink for multiple samples. Significantly more (100%-500%) proteins can be compared over multiple samples with better quantification accuracy in test cases. 

  1. Estimating Unbiased Treatment Effects in Education Using a Regression Discontinuity Design

    Directory of Open Access Journals (Sweden)

    William C. Smith

    2014-08-01

    Full Text Available The ability of regression discontinuity (RD) designs to provide an unbiased treatment effect while overcoming the ethical concerns that plague Randomized Control Trials (RCTs) makes them a valuable and useful approach in education evaluation. RD is the only explicitly recognized quasi-experimental approach identified by the Institute of Education Sciences to meet the prerequisites of a causal relationship. Unfortunately, the statistical complexity of the RD design has limited its application in education research. This article provides a less technical introduction to RD for education researchers and practitioners. Using visual analysis to aid conceptual understanding, the article walks readers through the essential steps of a Sharp RD design using hypothetical, but realistic, district intervention data and provides additional resources for further exploration.
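    A sharp RD estimate of this kind can be sketched with a local linear fit on each side of the cutoff, taking the jump between the two intercepts. The example below is a minimal illustration on simulated district data, not the article's procedure; the cutoff, bandwidth and true effect size are arbitrary assumptions.

```python
import numpy as np

def sharp_rd_effect(x, y, cutoff, bandwidth):
    # Sharp RD: fit a local linear regression on each side of the cutoff
    # and take the jump between the two intercepts at the cutoff.
    left = (x >= cutoff - bandwidth) & (x < cutoff)
    right = (x >= cutoff) & (x <= cutoff + bandwidth)
    slope_l, icept_l = np.polyfit(x[left] - cutoff, y[left], 1)
    slope_r, icept_r = np.polyfit(x[right] - cutoff, y[right], 1)
    return icept_r - icept_l

# Hypothetical district data: treatment assigned when score >= 50,
# with a true treatment effect of 5 built into the simulation.
rng = np.random.default_rng(0)
score = rng.uniform(0, 100, 5000)
treated = score >= 50.0
outcome = 0.2 * score + 5.0 * treated + rng.normal(0, 1, 5000)

effect = sharp_rd_effect(score, outcome, cutoff=50.0, bandwidth=10.0)
```

    With enough observations near the cutoff, the estimate recovers the built-in effect; in practice bandwidth choice and functional form deserve the careful treatment the article describes.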

  2. Unbiased free energy estimates in fast nonequilibrium transformations using Gaussian mixtures

    International Nuclear Information System (INIS)

    Procacci, Piero

    2015-01-01

    In this paper, we present an improved method for obtaining unbiased estimates of the free energy difference between two thermodynamic states using the work distribution measured in nonequilibrium driven experiments connecting these states. The method is based on the assumption that any observed work distribution is given by a mixture of Gaussian distributions, whose normal components are identical in either direction of the nonequilibrium process, with weights regulated by the Crooks theorem. Using the prototypical example for the driven unfolding/folding of deca-alanine, we show that the predicted behavior of the forward and reverse work distributions, assuming a combination of only two Gaussian components with Crooks derived weights, explains surprisingly well the striking asymmetry in the observed distributions at fast pulling speeds. The proposed methodology opens the way for a perfectly parallel implementation of Jarzynski-based free energy calculations in complex systems
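    The single-Gaussian limit that underlies the mixture assumption can be checked numerically: for a Gaussian work distribution, the second-cumulant expression ΔF = ⟨W⟩ − var(W)/(2kT) coincides with the Jarzynski exponential average. The sketch below uses arbitrary illustrative numbers, not the deca-alanine data from the paper.

```python
import numpy as np

kT = 0.6                             # illustrative thermal energy (arbitrary units)
mu, sigma = 25.0, 1.2                # assumed Gaussian work distribution
dF_true = mu - sigma**2 / (2 * kT)   # exact free energy for a Gaussian P(W)

rng = np.random.default_rng(1)
W = rng.normal(mu, sigma, 200_000)   # simulated nonequilibrium work values

# Jarzynski estimator: dF = -kT ln<exp(-W/kT)>
dF_jarzynski = -kT * np.log(np.mean(np.exp(-W / kT)))
# Gaussian (second-cumulant) estimator, applied per mixture component
dF_gaussian = W.mean() - W.var() / (2 * kT)
```

    For strongly asymmetric, fast-pulling regimes the single-Gaussian form fails, which is exactly where the Crooks-weighted mixture of Gaussians proposed in the paper becomes necessary.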

  3. High Efficient THz Emission From Unbiased and Biased Semiconductor Nanowires Fabricated Using Electron Beam Lithography

    Energy Technology Data Exchange (ETDEWEB)

    Balci, Soner; Czaplewski, David A.; Jung, Il Woong; Kim, Ju-Hyung; Hatami, Fariba; Kung, Patrick; Kim, Seongsin Margaret

    2017-07-01

    Besides having precise control over structural features, such as vertical alignment and uniform distribution, by fabricating the wires via e-beam lithography and an etching process, we also investigated the THz emission from these fabricated nanowires when a DC bias voltage is applied. To be able to apply a voltage bias, an interdigitated gold (Au) electrode was patterned on the high-quality InGaAs epilayer grown on an InP substrate by molecular beam epitaxy. Afterwards, perfectly vertically aligned and uniformly distributed nanowires were fabricated in between the electrodes of this interdigitated pattern so that we could apply a voltage bias to improve the THz emission. As a result, we achieved an enhancement of the emitted THz radiation by ~four times, about a 12 dB increase in power ratio at 0.25 THz with a DC biased electric field compared with unbiased NWs.

  4. Prediction of Complex Human Traits Using the Genomic Best Linear Unbiased Predictor

    DEFF Research Database (Denmark)

    de los Campos, Gustavo; Vazquez, Ana I; Fernando, Rohan

    2013-01-01

    Despite important advances from Genome Wide Association Studies (GWAS), for most complex human traits and diseases, a sizable proportion of genetic variance remains unexplained and prediction accuracy (PA) is usually low. Evidence suggests that PA can be improved using Whole-Genome Regression (WGR......) models where phenotypes are regressed on hundreds of thousands of variants simultaneously. The Genomic Best Linear Unbiased Prediction (G-BLUP, a ridge-regression type method) is a commonly used WGR method and has shown good predictive performance when applied to plant and animal breeding populations....... However, breeding and human populations differ greatly in a number of factors that can affect the predictive performance of G-BLUP. Using theory, simulations, and real data analysis, we study the performance of G-BLUP when applied to data from related and unrelated human subjects. Under perfect linkage...

  5. Unbiased, complete solar charging of a neutral flow battery by a single Si photocathode

    DEFF Research Database (Denmark)

    Wedege, Kristina; Bae, Dowon; Dražević, Emil

    2018-01-01

    Solar redox flow batteries have attracted attention as a possible integrated technology for simultaneous conversion and storage of solar energy. In this work, we review current efforts to design aqueous solar flow batteries in terms of battery electrolyte capacity, solar conversion efficiency...... and depth of solar charge. From a materials cost and design perspective, a simple, cost-efficient, aqueous solar redox flow battery will most likely incorporate only one semiconductor, and we demonstrate here a system where a single photocathode is accurately matched to the redox couples to allow...... for a complete solar charge. The single TiO2 protected Si photocathode with a catalytic Pt layer can fully solar charge a neutral TEMPO-sulfate/ferricyanide battery with a cell voltage of 0.35 V. An unbiased solar conversion efficiency of 1.6% is obtained and this system represents a new strategy in solar RFBs...

  6. Bipartite entangled stabilizer mutually unbiased bases as maximum cliques of Cayley graphs

    Science.gov (United States)

    van Dam, Wim; Howard, Mark

    2011-07-01

    We examine the existence and structure of particular sets of mutually unbiased bases (MUBs) in bipartite qudit systems. In contrast to well-known power-of-prime MUB constructions, we restrict ourselves to using maximally entangled stabilizer states as MUB vectors. Consequently, these bipartite entangled stabilizer MUBs (BES MUBs) provide no local information, but are sufficient and minimal for decomposing a wide variety of interesting operators including (mixtures of) Jamiołkowski states, entanglement witnesses, and more. The problem of finding such BES MUBs can be mapped, in a natural way, to that of finding maximum cliques in a family of Cayley graphs. Some relationships with known power-of-prime MUB constructions are discussed, and observables for BES MUBs are given explicitly in terms of Pauli operators.

  7. Bipartite entangled stabilizer mutually unbiased bases as maximum cliques of Cayley graphs

    International Nuclear Information System (INIS)

    Dam, Wim van; Howard, Mark

    2011-01-01

    We examine the existence and structure of particular sets of mutually unbiased bases (MUBs) in bipartite qudit systems. In contrast to well-known power-of-prime MUB constructions, we restrict ourselves to using maximally entangled stabilizer states as MUB vectors. Consequently, these bipartite entangled stabilizer MUBs (BES MUBs) provide no local information, but are sufficient and minimal for decomposing a wide variety of interesting operators including (mixtures of) Jamiolkowski states, entanglement witnesses, and more. The problem of finding such BES MUBs can be mapped, in a natural way, to that of finding maximum cliques in a family of Cayley graphs. Some relationships with known power-of-prime MUB constructions are discussed, and observables for BES MUBs are given explicitly in terms of Pauli operators.

  8. Effects of large volume injection of aliphatic alcohols as sample diluents on the retention of low hydrophobic solutes in reversed-phase liquid chromatography.

    Science.gov (United States)

    David, Victor; Galaon, Toma; Aboul-Enein, Hassan Y

    2014-01-03

    Recent studies showed that injection of large volumes of hydrophobic solvents used as sample diluents can be applied in reversed-phase liquid chromatography (RP-LC). This study reports systematic research focused on the influence of a series of aliphatic alcohols (from methanol to 1-octanol) on the retention process in RP-LC when large volumes of sample are injected on the column. Several model analytes with low hydrophobic character were studied in the RP-LC process, for mobile phases containing methanol or acetonitrile as organic modifiers in different proportions with the aqueous component. It was found that, starting with 1-butanol, the aliphatic alcohols can be used as sample solvents and can be injected in high volumes, but they may influence the retention factor and peak shape of the dissolved solutes. The dependence of the retention factor of the studied analytes on the injection volume of these alcohols is linear, its value decreasing as the sample volume is increased. The retention process when injecting up to 200 μL of the higher alcohols also depends on the content of the organic modifier (methanol or acetonitrile) in the mobile phase. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Best linear unbiased prediction of genomic breeding values using a trait-specific marker-derived relationship matrix.

    Directory of Open Access Journals (Sweden)

    Zhe Zhang

    2010-09-01

    Full Text Available With the availability of high density whole-genome single nucleotide polymorphism chips, genomic selection has become a promising method to estimate genetic merit with potentially high accuracy for animal, plant and aquaculture species of economic importance. With markers covering the entire genome, genetic merit of genotyped individuals can be predicted directly within the framework of mixed model equations, by using a matrix of relationships among individuals that is derived from the markers. Here we extend that approach by deriving a marker-based relationship matrix specifically for the trait of interest. In the framework of mixed model equations, a new best linear unbiased prediction (BLUP) method including a trait-specific relationship matrix (TA) was presented and termed TABLUP. The TA matrix was constructed on the basis of marker genotypes and their weights in relation to the trait of interest. A simulation study with 1,000 individuals as the training population and five successive generations as the candidate population was carried out to validate the proposed method. The proposed TABLUP method outperformed the ridge regression BLUP (RRBLUP) and BLUP with the realized relationship matrix (GBLUP). It performed slightly worse than BayesB, with an accuracy of 0.79 in the standard scenario. The proposed TABLUP method is an improvement of the RRBLUP and GBLUP methods. It might be equivalent to the BayesB method, but it has additional benefits like the calculation of accuracies for individual breeding values. The results also showed that the TA-matrix performs better in predicting ability than the classical numerator relationship matrix and the realized relationship matrix, which are derived solely from pedigree or markers without regard to the trait. This is because the TA-matrix not only accounts for the Mendelian sampling term, but also puts greater emphasis on those markers that explain more of the genetic variance in the trait.
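    The GBLUP baseline that TABLUP is compared against can be sketched in a few lines: build a marker-derived relationship matrix (here VanRaden's G) and solve the ridge-type mixed-model equations. Population sizes, the variance ratio and the simulated genetic architecture below are arbitrary assumptions for illustration, not the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 200, 500                          # individuals, markers (illustrative)
p = rng.uniform(0.1, 0.9, m)             # assumed allele frequencies
M = rng.binomial(2, p, size=(n, m)).astype(float)   # genotypes coded 0/1/2

Z = M - 2 * p                            # centre by twice the allele frequency
G = Z @ Z.T / np.sum(2 * p * (1 - p))    # VanRaden genomic relationship matrix

beta = rng.normal(0, 0.1, m)             # simulated marker effects
g = M @ beta                             # true genetic values
y = g + rng.normal(0, 1.0, n)            # phenotype = genetic value + noise
y = y - y.mean()

lam = 1.0                                # assumed residual/genetic variance ratio
u_hat = G @ np.linalg.solve(G + lam * np.eye(n), y)  # GBLUP breeding values

accuracy = np.corrcoef(u_hat, g)[0, 1]   # correlation with true genetic values
```

    TABLUP replaces the unweighted G above with a trait-specific matrix in which markers are weighted by their estimated contribution to genetic variance; the solving step is unchanged.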

  10. Detecting the Land-Cover Changes Induced by Large-Physical Disturbances Using Landscape Metrics, Spatial Sampling, Simulation and Spatial Analysis

    Directory of Open Access Journals (Sweden)

    Hone-Jay Chu

    2009-08-01

    Full Text Available The objectives of the study are to integrate conditional Latin Hypercube Sampling (cLHS), sequential Gaussian simulation (SGS) and spatial analysis of remotely sensed images to monitor the effects of large chronological disturbances on the spatial characteristics of landscape changes, including spatial heterogeneity and variability. The multiple NDVI images demonstrate that spatial patterns of disturbed landscapes were successfully delineated by spatial analyses such as the variogram, Moran's I and landscape metrics in the study area. The hybrid method delineates the spatial patterns and spatial variability of landscapes caused by these large disturbances. The cLHS approach is applied to select samples from Normalized Difference Vegetation Index (NDVI) images derived from SPOT HRV images in the Chenyulan watershed of Taiwan, and then SGS with sufficient samples is used to generate maps of NDVI images. Finally, the simulated NDVI maps are verified using indices such as the correlation coefficient and mean absolute error (MAE). Therefore, the statistics and spatial structures of multiple NDVI images present a very robust behavior, which advocates the use of the index for the quantification of landscape spatial patterns and land cover change. In addition, the results transferred by Open Geospatial techniques can be accessed from web-based and end-user applications for watershed management.

  11. Solid phase extraction of large volume of water and beverage samples to improve detection limits for GC-MS analysis of bisphenol A and four other bisphenols.

    Science.gov (United States)

    Cao, Xu-Liang; Popovic, Svetlana

    2018-01-01

    Solid phase extraction (SPE) of large volumes of water and beverage products was investigated for the GC-MS analysis of bisphenol A (BPA), bisphenol AF (BPAF), bisphenol F (BPF), bisphenol E (BPE), and bisphenol B (BPB). While absolute recoveries of the method were improved for water and some beverage products (e.g. diet cola, iced tea), breakthrough may also have occurred during SPE of 200 mL of other beverages (e.g. BPF in cola). Improvements in method detection limits were observed with the analysis of large sample volumes for all bisphenols at ppt (pg/g) to sub-ppt levels. This improvement was found to be proportional to sample volumes for water and beverage products with less interferences and noise levels around the analytes. Matrix effects and interferences were observed during SPE of larger volumes (100 and 200 mL) of the beverage products, and affected the accurate analysis of BPF. This improved method was used to analyse bisphenols in various beverage samples, and only BPA was detected, with levels ranging from 0.022 to 0.030 ng/g for products in PET bottles, and 0.085 to 0.32 ng/g for products in cans.

  12. Implicit and explicit anti-fat bias among a large sample of medical doctors by BMI, race/ethnicity and gender.

    Directory of Open Access Journals (Sweden)

    Janice A Sabin

    Full Text Available Overweight patients report weight discrimination in health care settings and subsequent avoidance of routine preventive health care. The purpose of this study was to examine implicit and explicit attitudes about weight among a large group of medical doctors (MDs) to determine the pervasiveness of negative attitudes about weight among MDs. Test-takers voluntarily accessed a public Web site, known as Project Implicit®, and opted to complete the Weight Implicit Association Test (IAT; N = 359,261). A sub-sample identified their highest level of education as MD (N = 2,284). Among the MDs, 55% were female, 78% reported their race as white, and 62% had a normal range BMI. This large sample of test-takers showed strong implicit anti-fat bias (Cohen's d = 1.0). MDs, on average, also showed strong implicit anti-fat bias (Cohen's d = 0.93). All test-takers and the MD sub-sample reported a strong preference for thin people rather than fat people, i.e., a strong explicit anti-fat bias. We conclude that strong implicit and explicit anti-fat bias is as pervasive among MDs as it is among the general public. An important area for future research is to investigate the association between providers' implicit and explicit attitudes about weight, patient reports of weight discrimination in health care, and quality of care delivered to overweight patients.

  13. Aldehyde-Selective Wacker-Type Oxidation of Unbiased Alkenes Enabled by a Nitrite Co-Catalyst

    KAUST Repository

    Wickens, Zachary K.; Morandi, Bill; Grubbs, Robert H.

    2013-01-01

    Breaking the rules: Reversal of the high Markovnikov selectivity of Wacker-type oxidations was accomplished using a nitrite co-catalyst. Unbiased aliphatic alkenes can be oxidized with high yield and aldehyde selectivity, and several functional groups are tolerated. 18O-labeling experiments indicate that the aldehydic O atom is derived from the nitrite salt.

  14. Aldehyde-Selective Wacker-Type Oxidation of Unbiased Alkenes Enabled by a Nitrite Co-Catalyst

    KAUST Repository

    Wickens, Zachary K.

    2013-09-13

    Breaking the rules: Reversal of the high Markovnikov selectivity of Wacker-type oxidations was accomplished using a nitrite co-catalyst. Unbiased aliphatic alkenes can be oxidized with high yield and aldehyde selectivity, and several functional groups are tolerated. 18O-labeling experiments indicate that the aldehydic O atom is derived from the nitrite salt.

  15. Unbiased minimum variance estimator of a matrix exponential function. Application to Boltzmann/Bateman coupled equations solving

    International Nuclear Information System (INIS)

    Dumonteil, E.; Diop, C. M.

    2009-01-01

    This paper derives an unbiased minimum variance estimator (UMVE) of a matrix exponential function of a normal mean. The result is then used to propose a reference scheme for solving the coupled Boltzmann/Bateman equations with Monte Carlo transport codes. The last section presents numerical results on a simple example. (authors)

  16. Practical characterization of large networks using neighborhood information

    KAUST Repository

    Wang, Pinghui

    2018-02-14

    Characterizing large complex networks such as online social networks through node querying is a challenging task. Network service providers often impose severe constraints on the query rate, hence limiting the sample size to a small fraction of the total network of interest. Various ad hoc subgraph sampling methods have been proposed, but many of them give biased estimates and no theoretical basis on the accuracy. In this work, we focus on developing sampling methods for large networks where querying a node also reveals partial structural information about its neighbors. Our methods are optimized for NoSQL graph databases (if the database can be accessed directly), or utilize Web APIs available on most major large networks for graph sampling. We show that our sampling method has provable convergence guarantees on being an unbiased estimator, and it is more accurate than state-of-the-art methods. We also explore methods to uncover shortest paths between a subset of nodes and detect high degree nodes by sampling only a small fraction of the network of interest. Our results demonstrate that utilizing neighborhood information yields methods that are two orders of magnitude faster than state-of-the-art methods.
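    The core idea behind unbiased estimation from node queries, which the paper builds on with neighborhood information, can be illustrated on a toy graph: a simple random walk visits nodes with probability proportional to their degree, and re-weighting each visit by 1/degree cancels that bias. The graph below is a made-up example, not the paper's method or datasets.

```python
import random

random.seed(3)

# Toy undirected graph: node -> neighbour list (what one node query reveals).
graph = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3, 4], 3: [0, 2], 4: [2]}
true_mean_degree = sum(len(v) for v in graph.values()) / len(graph)  # 2.4

# A simple random walk visits nodes in proportion to degree; weighting
# each visited node by 1/degree removes the bias toward high-degree nodes.
node, inv_deg = 0, []
for _ in range(200_000):
    inv_deg.append(1.0 / len(graph[node]))
    node = random.choice(graph[node])

est_mean_degree = len(inv_deg) / sum(inv_deg)   # harmonic-mean correction
```

    A naive average of visited degrees would instead converge to the degree-weighted mean (about 2.83 here), which is exactly the kind of bias the re-weighting corrects.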

  17. A large replication study and meta-analysis in European samples provides further support for association of AHI1 markers with schizophrenia

    DEFF Research Database (Denmark)

    Ingason, Andrés; Giegling, Ina; Cichon, Sven

    2010-01-01

    The Abelson helper integration site 1 (AHI1) gene locus on chromosome 6q23 is among a group of candidate loci for schizophrenia susceptibility that were initially identified by linkage followed by linkage disequilibrium mapping, and subsequent replication of the association in an independent sample....... Here, we present results of a replication study of AHI1 locus markers, previously implicated in schizophrenia, in a large European sample (in total 3907 affected and 7429 controls). Furthermore, we perform a meta-analysis of the implicated markers in 4496 affected and 18,920 controls. Both...... as the neighbouring phosphodiesterase 7B (PDE7B)-may be considered candidates for involvement in the genetic aetiology of schizophrenia....

  18. Dedicated Tool for Irradiation and Electrical Measurement of Large Surface Samples on the Beamline of a 2.5 Mev Pelletron Electron Accelerator: Application to Solar Cells

    OpenAIRE

    Lefèvre Jérémie; Le Houedec Patrice; Losco Jérôme; Cavani Olivier; Boizot Bruno

    2017-01-01

    We designed a tool allowing irradiation of large samples over a surface of A5 size by means of a 2.5 MeV Pelletron electron accelerator. In situ electrical measurements (I-V, conductivity, etc.) can also be performed, in the dark or under illumination, to study radiation effects in materials. Irradiations and electrical measurements are achievable over a temperature range from 100 K to 300 K. The setup was initially developed to test real-size triple junction solar cells at low t...

  19. Detailed deposition density maps constructed by large-scale soil sampling for gamma-ray emitting radioactive nuclides from the Fukushima Dai-ichi Nuclear Power Plant accident.

    Science.gov (United States)

    Saito, Kimiaki; Tanihata, Isao; Fujiwara, Mamoru; Saito, Takashi; Shimoura, Susumu; Otsuka, Takaharu; Onda, Yuichi; Hoshi, Masaharu; Ikeuchi, Yoshihiro; Takahashi, Fumiaki; Kinouchi, Nobuyuki; Saegusa, Jun; Seki, Akiyuki; Takemiya, Hiroshi; Shibata, Tokushi

    2015-01-01

    Soil deposition density maps of gamma-ray emitting radioactive nuclides from the Fukushima Dai-ichi Nuclear Power Plant (NPP) accident were constructed on the basis of results from large-scale soil sampling. In total, 10,915 soil samples were collected at 2168 locations. Gamma rays emitted from the samples were measured by Ge detectors and analyzed using a reliable unified method. The determined radioactivity was corrected to that of June 14, 2011 by considering the intrinsic decay constant of each nuclide. Finally, the deposition maps were created for (134)Cs, (137)Cs, (131)I, (129m)Te and (110m)Ag. The radioactivity ratio of (134)Cs to (137)Cs was almost constant at 0.91 regardless of the soil sampling location. The radioactivity ratios of (131)I and (129m)Te to (137)Cs were relatively high in the regions south of the Fukushima NPP site. Effective doses for 50 y after the accident were evaluated for external and inhalation exposures due to the observed radioactive nuclides. The radiation doses from radioactive cesium were found to be much higher than those from the other radioactive nuclides. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
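    The decay correction described above, referring all measured activities back to a common reference date, is a one-line exponential computation given a nuclide's half-life. A minimal sketch, using an approximate (134)Cs half-life of ~754 days; the activity and elapsed time are illustrative.

```python
import math

def decay_correct(activity_bq, days_since_reference, half_life_days):
    # Refer a measured activity back to the reference date, assuming
    # simple exponential decay of a single nuclide: A0 = A * exp(lambda * dt).
    lam = math.log(2) / half_life_days
    return activity_bq * math.exp(lam * days_since_reference)

# A sample measured at 1000 Bq, 100 days after the reference date,
# corrected using an approximate Cs-134 half-life of ~754 days.
corrected = decay_correct(1000.0, 100.0, 754.0)
```

    After exactly one half-life the correction doubles the measured activity, which makes the function easy to sanity-check.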

  20. An unbiased expression screen for synaptogenic proteins identifies the LRRTM protein family as synaptic organizers.

    Science.gov (United States)

    Linhoff, Michael W; Laurén, Juha; Cassidy, Robert M; Dobie, Frederick A; Takahashi, Hideto; Nygaard, Haakon B; Airaksinen, Matti S; Strittmatter, Stephen M; Craig, Ann Marie

    2009-03-12

    Delineating the molecular basis of synapse development is crucial for understanding brain function. Cocultures of neurons with transfected fibroblasts have demonstrated the synapse-promoting activity of candidate molecules. Here, we performed an unbiased expression screen for synaptogenic proteins in the coculture assay using custom-made cDNA libraries. Reisolation of NGL-3/LRRC4B and neuroligin-2 accounts for a minority of positive clones, indicating that current understanding of mammalian synaptogenic proteins is incomplete. We identify LRRTM1 as a transmembrane protein that induces presynaptic differentiation in contacting axons. All four LRRTM family members exhibit synaptogenic activity, LRRTMs localize to excitatory synapses, and artificially induced clustering of LRRTMs mediates postsynaptic differentiation. We generate LRRTM1(-/-) mice and reveal altered distribution of the vesicular glutamate transporter VGLUT1, confirming an in vivo synaptic function. These results suggest a prevalence of LRR domain proteins in trans-synaptic signaling and provide a cellular basis for the reported linkage of LRRTM1 to handedness and schizophrenia.

  1. The role of fire in UK peatland and moorland management: the need for informed, unbiased debate.

    Science.gov (United States)

    Davies, G Matt; Kettridge, Nicholas; Stoof, Cathelijne R; Gray, Alan; Ascoli, Davide; Fernandes, Paulo M; Marrs, Rob; Allen, Katherine A; Doerr, Stefan H; Clay, Gareth D; McMorrow, Julia; Vandvik, Vigdis

    2016-06-05

    Fire has been used for centuries to generate and manage some of the UK's cultural landscapes. Despite its complex role in the ecology of UK peatlands and moorlands, there has been a trend of simplifying the narrative around burning to present it as a solely ecologically damaging practice. That fire modifies peatland characteristics at a range of scales is clearly understood. Whether these changes are perceived as positive or negative depends upon how trade-offs are made between ecosystem services and the spatial and temporal scales of concern. Here we explore the complex interactions and trade-offs in peatland fire management, evaluating the benefits and costs of managed fire as they are currently understood. We highlight the need for (i) distinguishing between the impacts of fires occurring with differing severity and frequency, and (ii) improved characterization of ecosystem health that incorporates the response and recovery of peatlands to fire. We also explore how recent research has been contextualized within both scientific publications and the wider media and how this can influence non-specialist perceptions. We emphasize the need for an informed, unbiased debate on fire as an ecological management tool that is separated from other aspects of moorland management and from political and economic opinions. This article is part of the themed issue 'The interaction of fire and mankind'. © 2016 The Authors.

  2. Multifractals embedded in short time series: An unbiased estimation of probability moment

    Science.gov (United States)

    Qiu, Lu; Yang, Tianguang; Yin, Yanhua; Gu, Changgui; Yang, Huijie

    2016-12-01

    An exact estimation of probability moments is the basis for several essential concepts, such as multifractals, the Tsallis entropy, and the transfer entropy. By means of approximation theory we propose a new method called factorial-moment-based estimation of probability moments. Theoretical prediction and computational results show that it can provide an unbiased estimation of the probability moments of continuous order. Calculations on a probability redistribution model verify that it can extract exactly multifractal behaviors from several hundred recordings. Its power in monitoring the evolution of scaling behaviors is exemplified by two empirical cases, i.e., the gait time series for fast, normal, and slow trials of a healthy volunteer, and the closing price series for the Shanghai stock market. Using short time series of several hundred points, a comparison with well-established tools shows significant performance advantages over the other methods. The factorial-moment-based estimation can evaluate correctly the scaling behaviors in a scale range about three generations wider than the multifractal detrended fluctuation analysis and the basic estimation. The estimation of the partition function given by the wavelet transform modulus maxima has unacceptable fluctuations. Besides the scaling invariance focused on in the present paper, the proposed factorial moment of continuous order can find various uses, such as finding nonextensive behaviors of a complex system and reconstructing the causality relationship network between elements of a complex system.
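    The bias that factorial moments remove is easiest to see in the simplest integer-order case: for a binomial count n out of N, the plug-in estimate (n/N)^q of the probability moment p^q is biased, while the factorial-moment estimate n(n-1).../(N(N-1)...) is exactly unbiased. A minimal check for q = 2, with arbitrary illustrative parameters (not the paper's continuous-order construction):

```python
import numpy as np

rng = np.random.default_rng(4)
N, p, trials = 50, 0.3, 200_000          # illustrative parameters
n = rng.binomial(N, p, trials)           # box counts over many realizations

biased = np.mean((n / N) ** 2)                   # plug-in estimate of p**2
unbiased = np.mean(n * (n - 1)) / (N * (N - 1))  # factorial-moment estimate

# E[(n/N)^2] = p^2 + p(1-p)/N, so the plug-in estimate overshoots p^2,
# while E[n(n-1)] = N(N-1)p^2 makes the factorial-moment estimate unbiased.
```
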

  3. Mutually orthogonal Latin squares from the inner products of vectors in mutually unbiased bases

    International Nuclear Information System (INIS)

    Hall, Joanne L; Rao, Asha

    2010-01-01

    Mutually unbiased bases (MUBs) are important in quantum information theory. While constructions of complete sets of d + 1 MUBs in C^d are known when d is a prime power, it is unknown if such complete sets exist in non-prime power dimensions. It has been conjectured that complete sets of MUBs only exist in C^d if a maximal set of mutually orthogonal Latin squares (MOLS) of side length d also exists. There are several constructions (Roy and Scott 2007 J. Math. Phys. 48 072110; Paterek, Dakic and Brukner 2009 Phys. Rev. A 79 012109) of complete sets of MUBs from specific types of MOLS, which use Galois fields to construct the vectors of the MUBs. In this paper, two known constructions of MUBs (Alltop 1980 IEEE Trans. Inf. Theory 26 350-354; Wootters and Fields 1989 Ann. Phys. 191 363-381), both of which use polynomials over a Galois field, are used to construct complete sets of MOLS in the odd prime case. The MOLS come from the inner products of pairs of vectors in the MUBs.

  4. Unbiased estimation of the liver volume by the Cavalieri principle using magnetic resonance images

    International Nuclear Information System (INIS)

    Sahin, Buenyamin; Emirzeoglu, Mehmet; Uzun, Ahmet; Incesu, Luetfi; Bek, Yueksel; Bilgic, Sait; Kaplan, Sueleyman

    2003-01-01

    Objective: It is often useful to know the exact volume of the liver, such as when monitoring the effects of a disease, treatment, dieting regime, training program or surgical application. Some non-invasive methodologies have been previously described to estimate the volume of the liver. However, these preliminary techniques need special software or skilled performers, and they are not ideal for daily use in clinical practice. Here, we describe a simple, accurate and practical technique for estimating liver volume without changing the routine magnetic resonance imaging scanning procedure. Materials and methods: In this study, five normal livers, obtained from cadavers, were scanned by a 0.5 T MR machine in horizontal and sagittal planes. Consecutive sections of 10 mm thickness were used to estimate the whole volume of the liver by means of the Cavalieri principle. The volume estimations were done by three different performers to evaluate reproducibility. Results: There were no statistically significant differences between the performers' estimates and the real liver volumes (P>0.05). There was also a high correlation between the performers' estimates and the real liver volume (r=0.993). Conclusion: We conclude that the combination of MR imaging with the Cavalieri principle is a non-invasive, direct and unbiased technique that can be safely applied to estimate liver volume with a very moderate workload per individual.
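    The Cavalieri estimator used here is simply the section spacing times the summed cross-sectional areas. A minimal sketch checked against a shape of known volume (a synthetic sphere, not the cadaver data):

    ```python
    import math

    def cavalieri_volume(section_areas, thickness):
        """Cavalieri estimator: volume = section spacing times the sum of
        the cross-sectional areas measured on consecutive parallel sections."""
        return thickness * sum(section_areas)

    # Synthetic check: 1 cm sections through a sphere of radius 5 cm,
    # with areas sampled at section midpoints z = -4.5, -3.5, ..., 4.5.
    r, d = 5.0, 1.0
    areas = [math.pi * (r * r - z * z) for z in (i + 0.5 for i in range(-5, 5))]
    est = cavalieri_volume(areas, d)       # within ~1% of the true volume
    true = 4.0 / 3.0 * math.pi * r ** 3    # about 523.6 cm^3
    ```

    In practice the section areas would come from point counting or planimetry on the MR slices; the estimator itself is unchanged.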

  5. Unraveling cognitive traits using the Morris water maze unbiased strategy classification (MUST-C) algorithm.

    Science.gov (United States)

    Illouz, Tomer; Madar, Ravit; Louzon, Yoram; Griffioen, Kathleen J; Okun, Eitan

    2016-02-01

    The assessment of spatial cognitive learning in rodents is a central approach in neuroscience, as it enables one to assess and quantify the effects of treatments and genetic manipulations from a broad perspective. Although the Morris water maze (MWM) is a well-validated paradigm for testing spatial learning abilities, manual categorization of performance in the MWM into behavioral strategies is subject to individual interpretation, and thus to biases. Here we offer a support vector machine (SVM)-based, automated, MWM unbiased strategy classification (MUST-C) algorithm, as well as a cognitive score scale. This model was examined and validated by analyzing data obtained from five MWM experiments with changing platform sizes, revealing a limitation in the spatial capacity of the hippocampus. We have further employed this algorithm to extract novel mechanistic insights on the impact of members of the Toll-like receptor pathway on cognitive spatial learning and memory. The MUST-C algorithm can greatly benefit MWM users as it provides a standardized method of strategy classification as well as a cognitive scoring scale, which cannot be derived from typical analysis of MWM data. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  6. Intramolecular Hydroamination of Unbiased and Functionalized Primary Aminoalkenes Catalyzed by a Rhodium Aminophosphine Complex

    Science.gov (United States)

    Julian, Lisa D.; Hartwig, John F.

    2010-01-01

    We report a rhodium catalyst that exhibits high reactivity for the hydroamination of primary aminoalkenes that are unbiased toward cyclization and that possess functional groups that would not be tolerated in hydroaminations catalyzed by more electrophilic systems. This catalyst contains an unusual diaminophosphine ligand that binds to rhodium in a κ3-P,O,P mode. The reactions catalyzed by this complex typically proceed at mild temperatures (room temperature to 70 °C), occur with primary aminoalkenes lacking substituents on the alkyl chain that bias the system toward cyclization, occur with primary aminoalkenes containing chloride, ester, ether, enolizable ketone, nitrile, and unprotected alcohol functionality, and occur with primary aminoalkenes containing internal olefins. Mechanistic data imply that these reactions occur with a turnover-limiting step that is different from that of reactions catalyzed by late transition metal complexes of Pd, Pt, and Ir. This change in the turnover-limiting step and resulting high activity of the catalyst stem from favorable relative rates for protonolysis of the M-C bond to release the hydroamination product vs reversion of the aminoalkyl intermediate to regenerate the acyclic precursor. Probes for the origin of the reactivity of the rhodium complex of L1 imply that the aminophosphine groups lead to these favorable rates by effects beyond steric demands and simple electron donation to the metal center. PMID:20839807

  7. Development of an unbiased statistical method for the analysis of unigenic evolution

    Directory of Open Access Journals (Sweden)

    Shilton Brian H

    2006-03-01

    Full Text Available Abstract Background Unigenic evolution is a powerful genetic strategy involving random mutagenesis of a single gene product to delineate functionally important domains of a protein. This method involves selection of variants of the protein which retain function, followed by statistical analysis comparing expected and observed mutation frequencies of each residue. Resultant mutability indices for each residue are averaged across a specified window of codons to identify hypomutable regions of the protein. As originally described, the effect of changes to the length of this averaging window was not fully elucidated. In addition, it was unclear when sufficient functional variants had been examined to conclude that residues conserved in all variants have important functional roles. Results We demonstrate that the length of averaging window dramatically affects identification of individual hypomutable regions and delineation of region boundaries. Accordingly, we devised a region-independent chi-square analysis that eliminates loss of information incurred during window averaging and removes the arbitrary assignment of window length. We also present a method to estimate the probability that conserved residues have not been mutated simply by chance. In addition, we describe an improved estimation of the expected mutation frequency. Conclusion Overall, these methods significantly extend the analysis of unigenic evolution data over existing methods to allow comprehensive, unbiased identification of domains and possibly even individual residues that are essential for protein function.
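    The region-independent chi-square analysis can be sketched as a comparison of observed and expected per-residue mutation counts, together with the probability that a conserved residue escaped mutation purely by chance. All counts below are hypothetical:

    ```python
    def chi_square(observed, expected):
        """Pearson chi-square statistic comparing observed per-residue
        mutation counts with those expected under random mutagenesis."""
        return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

    def p_unmutated_by_chance(p_residue, n_mutations):
        """Probability that a residue escapes all n mutations by chance,
        used to judge whether full conservation is meaningful."""
        return (1.0 - p_residue) ** n_mutations

    # Hypothetical counts: 50 mutations over 5 residues, expected uniform.
    observed = [1, 14, 12, 13, 10]     # residue 1 looks hypomutable
    expected = [10.0] * 5
    stat = chi_square(observed, expected)      # 11.0
    significant = stat > 9.488                 # 5% critical value, 4 d.o.f.
    p_escape = p_unmutated_by_chance(0.2, 50)  # ~1.4e-5: unlikely by chance
    ```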

  8. SU(2) nonstandard bases: the case of mutually unbiased bases

    Energy Technology Data Exchange (ETDEWEB)

    Albouy, Olivier; Kibler, Maurice R. [Universite de Lyon, Institut de Physique Nucleaire de Lyon, Universite Lyon, CNRS/IN2P3, 43 bd du 11 novembre 1918, F-69622 Villeurbanne Cedex (France)

    2007-02-15

    This paper deals with bases in a finite-dimensional Hilbert space. Such a space can be realized as a subspace of the representation space of SU(2) corresponding to an irreducible representation of SU(2). The representation theory of SU(2) is reconsidered via the use of two truncated deformed oscillators. This leads to replacing the familiar scheme [j², j_z] by a scheme [j², v_ra], where the two-parameter operator v_ra is defined in the universal enveloping algebra of the Lie algebra su(2). The eigenvectors of the commuting set of operators [j², v_ra] are adapted to a tower of chains SO(3) ⊃ C_{2j+1} (2j ∈ N*), where C_{2j+1} is the cyclic group of order 2j + 1. In the case where 2j + 1 is prime, the corresponding eigenvectors generate a complete set of mutually unbiased bases. Some useful relations on generalized quadratic Gauss sums are presented in three appendices. (authors)
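    The unbiasedness property at stake is easy to check numerically. Below is a sketch, not the paper's oscillator construction, of the standard quadratic-phase family of bases in an odd prime dimension (the prime-dimension case of the Wootters-Fields construction), verifying via generalized Gauss sums that every overlap between vectors of different bases has squared modulus 1/d:

    ```python
    import cmath

    def mub_bases(p):
        """For an odd prime p, build p quadratic-phase bases; together with
        the computational basis they form a complete set of p + 1 MUBs.
        Basis a, vector b has components w**(a*k^2 + b*k) / sqrt(p)."""
        w = cmath.exp(2j * cmath.pi / p)
        return [[[w ** ((a * k * k + b * k) % p) / p ** 0.5 for k in range(p)]
                 for b in range(p)]
                for a in range(p)]

    def inner(u, v):
        """Hermitian inner product <u|v>."""
        return sum(x.conjugate() * y for x, y in zip(u, v))

    p = 5
    bases = mub_bases(p)
    # unbiasedness across bases: |<u|v>|^2 = 1/p (a quadratic Gauss sum
    # of magnitude sqrt(p) lies behind this)
    overlap = abs(inner(bases[0][0], bases[1][3])) ** 2   # = 1/5
    ```

    Within a single basis the same inner product gives orthonormality, since the quadratic phase cancels and only a geometric sum remains.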

  9. The role of fire in UK peatland and moorland management: the need for informed, unbiased debate

    Science.gov (United States)

    Davies, G. Matt; Kettridge, Nicholas; Stoof, Cathelijne R.; Gray, Alan; Ascoli, Davide; Fernandes, Paulo M.; Marrs, Rob; Clay, Gareth D.; McMorrow, Julia; Vandvik, Vigdis

    2016-01-01

    Fire has been used for centuries to generate and manage some of the UK's cultural landscapes. Despite its complex role in the ecology of UK peatlands and moorlands, there has been a trend towards simplifying the narrative around burning to present it as a solely ecologically damaging practice. That fire modifies peatland characteristics at a range of scales is clearly understood. Whether these changes are perceived as positive or negative depends upon how trade-offs are made between ecosystem services and the spatial and temporal scales of concern. Here we explore the complex interactions and trade-offs in peatland fire management, evaluating the benefits and costs of managed fire as they are currently understood. We highlight the need for (i) distinguishing between the impacts of fires occurring with differing severity and frequency, and (ii) improved characterization of ecosystem health that incorporates the response and recovery of peatlands to fire. We also explore how recent research has been contextualized within both scientific publications and the wider media and how this can influence non-specialist perceptions. We emphasize the need for an informed, unbiased debate on fire as an ecological management tool that is separated from other aspects of moorland management and from political and economic opinions. This article is part of the themed issue ‘The interaction of fire and mankind’. PMID:27216512

  10. The Sex, Age, and Me study: recruitment and sampling for a large mixed-methods study of sexual health and relationships in an older Australian population.

    Science.gov (United States)

    Lyons, Anthony; Heywood, Wendy; Fileborn, Bianca; Minichiello, Victor; Barrett, Catherine; Brown, Graham; Hinchliff, Sharron; Malta, Sue; Crameri, Pauline

    2017-09-01

    Older people are often excluded from large studies of sexual health, as it is assumed that they are not having sex or are reluctant to talk about sensitive topics, and that they are therefore difficult to recruit. We outline the sampling and recruitment strategies from a recent study of sexual health and relationships among older people. Sex, Age and Me was a nationwide Australian study that examined the sexual health, relationship patterns, safer-sex practices and STI knowledge of Australians aged 60 years and over. The study used a mixed-methods approach to establish baseline levels of knowledge and to develop deeper insights into older adults' understandings and practices relating to sexual health. Data collection took place in 2015, with 2137 participants completing a quantitative survey and 53 participating in one-on-one semi-structured interviews. As the feasibility of this type of study had been largely untested until now, we provide detailed information on the study's recruitment strategies and methods. We also compare key characteristics of our sample with national estimates to assess its degree of representativeness. This study provides evidence to challenge the assumption that older people will not take part in sexual health-related research and details a novel and successful way to recruit participants in this area.

  11. Further examination of embedded performance validity indicators for the Conners' Continuous Performance Test and Brief Test of Attention in a large outpatient clinical sample.

    Science.gov (United States)

    Sharland, Michael J; Waring, Stephen C; Johnson, Brian P; Taran, Allise M; Rusin, Travis A; Pattock, Andrew M; Palcher, Jeanette A

    2018-01-01

    Assessing test performance validity is a standard clinical practice and although studies have examined the utility of cognitive/memory measures, few have examined attention measures as indicators of performance validity beyond the Reliable Digit Span. The current study further investigates the classification probability of embedded Performance Validity Tests (PVTs) within the Brief Test of Attention (BTA) and the Conners' Continuous Performance Test (CPT-II), in a large clinical sample. This was a retrospective study of 615 patients consecutively referred for comprehensive outpatient neuropsychological evaluation. Non-credible performance was defined two ways: failure on one or more PVTs and failure on two or more PVTs. Classification probability of the BTA and CPT-II into non-credible groups was assessed. Sensitivity, specificity, positive predictive value, and negative predictive value were derived to identify clinically relevant cut-off scores. When using failure on two or more PVTs as the indicator for non-credible responding compared to failure on one or more PVTs, highest classification probability, or area under the curve (AUC), was achieved by the BTA (AUC = .87 vs. .79). CPT-II Omission, Commission, and Total Errors exhibited higher classification probability as well. Overall, these findings corroborate previous findings, extending them to a large clinical sample. BTA and CPT-II are useful embedded performance validity indicators within a clinical battery but should not be used in isolation without other performance validity indicators.
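    The cut-off statistics reported in this record derive from a 2x2 confusion matrix. A minimal sketch with hypothetical counts (not the study's data) showing how sensitivity, specificity, PPV, and NPV fall out of the four cells:

    ```python
    def validity_metrics(tp, fp, tn, fn):
        """Classification statistics used to choose a PVT cut-off score:
        sensitivity, specificity, positive and negative predictive value."""
        return {
            "sensitivity": tp / (tp + fn),   # non-credible cases flagged
            "specificity": tn / (tn + fp),   # credible cases passed
            "ppv": tp / (tp + fp),           # flagged cases truly non-credible
            "npv": tn / (tn + fn),           # passed cases truly credible
        }

    # Hypothetical cut-off applied to 615 evaluations, 60 non-credible:
    m = validity_metrics(tp=45, fp=30, tn=525, fn=15)
    ```

    In practice one sweeps the cut-off over the score range and picks the value that best balances these four quantities for the intended clinical use.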

  12. Association between subjective actual sleep duration, subjective sleep need, age, body mass index, and gender in a large sample of young adults.

    Science.gov (United States)

    Kalak, Nadeem; Brand, Serge; Beck, Johannes; Holsboer-Trachsler, Edith; Wollmer, M Axel

    2015-01-01

    Poor sleep is a major health concern, and there is evidence that young adults are at increased risk of suffering from poor sleep. There is also evidence that sleep duration can vary as a function of gender and body mass index (BMI). We sought to replicate these findings in a large sample of young adults, and also tested the hypothesis that a smaller gap between subjective sleep duration and subjective sleep need is associated with a greater feeling of being restored. A total of 2,929 university students (mean age 23.24±3.13 years, 69.1% female) took part in an Internet-based survey. They answered questions related to demographics and subjective sleep patterns. We found no gender differences in subjective sleep duration, subjective sleep need, BMI, age, or feeling of being restored. Nonlinear associations were observed between subjective sleep duration, BMI, and feeling of being restored. Moreover, a larger discrepancy between subjective actual sleep duration and subjective sleep need was associated with a lower feeling of being restored. The present pattern of results from a large sample of young adults suggests that males and females do not differ with respect to subjective sleep duration, BMI, or feeling of being restored. Moreover, nonlinear correlations seemed to provide a more accurate reflection of the relationship between subjective sleep and demographic variables.

  13. Enantioselective column coupled electrophoresis employing large bore capillaries hyphenated with tandem mass spectrometry for ultra-trace determination of chiral compounds in complex real samples.

    Science.gov (United States)

    Piešťanský, Juraj; Maráková, Katarína; Kovaľ, Marián; Havránek, Emil; Mikuš, Peter

    2015-12-01

    A new multidimensional analytical approach for the ultra-trace determination of target chiral compounds in unpretreated complex real samples was developed in this work. The proposed analytical system provided high orthogonality due to on-line combination of three different methods (separation mechanisms), i.e. (1) isotachophoresis (ITP), (2) chiral capillary zone electrophoresis (chiral CZE), and (3) triple quadrupole mass spectrometry (QqQ MS). The ITP step, performed in a large bore capillary (800 μm), was utilized for the effective sample pretreatment (preconcentration and matrix clean-up) in a large injection volume (1-10 μL) enabling to obtain as low as ca. 80 pg/mL limits of detection for the target enantiomers in urine matrices. In the chiral CZE step, the different chiral selectors (neutral, ionizable, and permanently charged cyclodextrins) and buffer systems were tested in terms of enantioselectivity and influence on the MS detection response. The performance parameters of the optimized ITP - chiral CZE-QqQ MS method were evaluated according to the FDA guidance for bioanalytical method validation. Successful validation and application (enantioselective monitoring of renally eliminated pheniramine and its metabolite in human urine) highlighted great potential of this chiral approach in advanced enantioselective biomedical applications. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Unbiased metabolite profiling by liquid chromatography-quadrupole time-of-flight mass spectrometry and multivariate data analysis for herbal authentication: classification of seven Lonicera species flower buds.

    Science.gov (United States)

    Gao, Wen; Yang, Hua; Qi, Lian-Wen; Liu, E-Hu; Ren, Mei-Ting; Yan, Yu-Ting; Chen, Jun; Li, Ping

    2012-07-06

    Plant-based medicines are becoming increasingly popular around the world. Authentication of herbal raw materials is important to ensure their safety and efficacy. Some herbs belonging to closely related species but differing in medicinal properties are difficult to identify because of similar morphological and microscopic characteristics. Chromatographic fingerprinting is an alternative method to distinguish them. Existing approaches do not allow a comprehensive analysis for herbal authentication. We have now developed a strategy consisting of (1) full metabolic profiling of herbal medicines by rapid resolution liquid chromatography (RRLC) combined with quadrupole time-of-flight mass spectrometry (QTOF MS), (2) global analysis of non-targeted compounds by a molecular feature extraction algorithm, (3) multivariate statistical analysis for classification and prediction, and (4) marker compound characterization. This approach has provided a fast and unbiased comparative multivariate analysis of the metabolite composition of 33 batches of samples covering seven Lonicera species. Individual metabolic profiles are performed at the level of molecular fragments without prior structural assignment. In the entire set, the obtained classifier for seven Lonicera species flower buds showed good prediction performance, and a total of 82 statistically different components were rapidly obtained by the strategy. The elemental compositions of discriminative metabolites were characterized by the accurate mass measurement of the pseudomolecular ions, and their chemical types were assigned by the MS/MS spectra. The high-resolution, comprehensive and unbiased strategy for metabolite data analysis presented here is powerful and opens a new direction for authentication in herbal analysis. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Large deviations in the presence of cooperativity and slow dynamics

    Science.gov (United States)

    Whitelam, Stephen

    2018-06-01

    We study simple models of intermittency, involving switching between two states, within the dynamical large-deviation formalism. Singularities appear in the formalism when switching is cooperative or when its basic time scale diverges. In the first case the unbiased trajectory distribution undergoes a symmetry breaking, leading to a change in shape of the large-deviation rate function for a particular dynamical observable. In the second case the symmetry of the unbiased trajectory distribution remains unbroken. Comparison of these models suggests that singularities of the dynamical large-deviation formalism can signal the dynamical equivalent of an equilibrium phase transition but do not necessarily do so.
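    Two-state switching models of this kind have a closed-form large-deviation description: for a counting observable such as the number of switches, the scaled cumulant generating function is the largest eigenvalue of the s-tilted 2x2 Markov generator. A minimal sketch for a non-cooperative switcher (smooth rate function, none of the singularities the abstract discusses), with hypothetical rates:

    ```python
    import math

    def scgf(s, g1, g2):
        """Scaled cumulant generating function theta(s) for the number of
        switches in a two-state Markov process with rates g1 (1->2) and
        g2 (2->1): the largest eigenvalue of the tilted generator
        [[-g1, g2*exp(-s)], [g1*exp(-s), -g2]]."""
        disc = (g1 - g2) ** 2 + 4 * g1 * g2 * math.exp(-2 * s)
        return 0.5 * (-(g1 + g2) + math.sqrt(disc))

    g1 = g2 = 1.0
    theta0 = scgf(0.0, g1, g2)   # = 0: probability conservation at s = 0
    # mean switching rate k = -theta'(0) = 2*g1*g2/(g1+g2) = 1.0 here,
    # estimated by a central finite difference
    mean_rate = -(scgf(1e-6, g1, g2) - scgf(-1e-6, g1, g2)) / 2e-6
    ```

    The rate function for the switch count then follows from theta(s) by Legendre transform; for symmetric rates theta(s) reduces to exp(-s) - 1, the Poisson form.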

  16. Statistical searches for microlensing events in large, non-uniformly sampled time-domain surveys: A test using palomar transient factory data

    Energy Technology Data Exchange (ETDEWEB)

    Price-Whelan, Adrian M.; Agüeros, Marcel A. [Department of Astronomy, Columbia University, 550 W 120th Street, New York, NY 10027 (United States); Fournier, Amanda P. [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93106 (United States); Street, Rachel [Las Cumbres Observatory Global Telescope Network, Inc., 6740 Cortona Drive, Suite 102, Santa Barbara, CA 93117 (United States); Ofek, Eran O. [Benoziyo Center for Astrophysics, Weizmann Institute of Science, 76100 Rehovot (Israel); Covey, Kevin R. [Lowell Observatory, 1400 West Mars Hill Road, Flagstaff, AZ 86001 (United States); Levitan, David; Sesar, Branimir [Division of Physics, Mathematics, and Astronomy, California Institute of Technology, Pasadena, CA 91125 (United States); Laher, Russ R.; Surace, Jason, E-mail: adrn@astro.columbia.edu [Spitzer Science Center, California Institute of Technology, Mail Stop 314-6, Pasadena, CA 91125 (United States)

    2014-01-20

    Many photometric time-domain surveys are driven by specific goals, such as searches for supernovae or transiting exoplanets, which set the cadence with which fields are re-imaged. In the case of the Palomar Transient Factory (PTF), several sub-surveys are conducted in parallel, leading to non-uniform sampling over its ∼20,000 deg² footprint. While the median 7.26 deg² PTF field has been imaged ∼40 times in the R band, ∼2300 deg² have been observed >100 times. We use PTF data to study the trade-off of searching for microlensing events in a survey whose footprint is much larger than that of typical microlensing searches but whose time sampling is far from optimal. To examine the probability that microlensing events can be recovered in these data, we test statistics used on uniformly sampled data to identify variables and transients. We find that the von Neumann ratio performs best for identifying simulated microlensing events in our data. We develop a selection method using this statistic and apply it to data from fields with >10 R-band observations (1.1 × 10⁹ light curves), uncovering three candidate microlensing events. We lack simultaneous, multi-color photometry to confirm these as microlensing events. However, their number is consistent with predictions for the event rate in the PTF footprint over the survey's three years of operations, as estimated from near-field microlensing models. This work can help constrain all-sky event rate predictions and tests microlensing signal recovery in large data sets, which will be useful to future time-domain surveys, such as that planned with the Large Synoptic Survey Telescope.
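    The von Neumann ratio used for the selection is easy to sketch: it compares the mean-square successive difference of a light curve with its variance. A hypothetical illustration on synthetic magnitudes, with a Lorentzian-shaped brightening standing in for a microlensing bump (not PTF data or the paper's exact selection):

    ```python
    import random

    def von_neumann_ratio(mags):
        """Ratio of the mean-square successive difference to the variance.
        Uncorrelated noise gives values near 2; a smooth, correlated
        excursion such as a microlensing bump pushes the ratio toward 0."""
        n = len(mags)
        mean = sum(mags) / n
        mssd = sum((mags[i + 1] - mags[i]) ** 2 for i in range(n - 1)) / (n - 1)
        var = sum((m - mean) ** 2 for m in mags) / (n - 1)
        return mssd / var

    random.seed(0)
    noise = [random.gauss(15.0, 0.05) for _ in range(200)]
    # smooth 0.5 mag brightening (magnitudes decrease) on the same noise
    bump = [m - 0.5 / (1.0 + ((i - 100) / 20.0) ** 2)
            for i, m in enumerate(noise)]
    eta_noise = von_neumann_ratio(noise)   # near 2
    eta_bump = von_neumann_ratio(bump)     # well below 2
    ```

    A survey pipeline would compute this ratio per light curve and flag the low-ratio tail for follow-up vetting.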

  17. Unbiased determination of the proton structure function F2p with faithful uncertainty estimation

    International Nuclear Information System (INIS)

    Del Debbio, Luigi; Forte, Stefano; Latorre, Jose I.; Rojo, Joan; Piccione, Andrea

    2005-01-01

    We construct a parametrization of the deep-inelastic structure function of the proton, F₂(x, Q²), based on all available experimental information from charged-lepton deep-inelastic scattering experiments. The parametrization effectively provides a bias-free determination of the probability measure in the space of structure functions, which retains information on experimental errors and correlations. The result is obtained in the form of a Monte Carlo sample of neural networks trained on an ensemble of replicas of the experimental data. We discuss in detail the techniques required for the construction of bias-free parametrizations of large amounts of structure function data, in view of future applications to the determination of parton distributions based on the same method. (author)
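    The replica idea behind this approach can be illustrated without neural networks: generate Gaussian replicas of the data, fit each replica, and read the uncertainty off the spread of the fits. A toy sketch with a straight-line model standing in for the neural-network ensemble (all numbers hypothetical, uncorrelated errors only):

    ```python
    import random

    def fit_line(xs, ys):
        """Ordinary least-squares slope and intercept."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        slope = sxy / sxx
        return slope, my - slope * mx

    random.seed(7)
    xs = [0.1 * i for i in range(1, 11)]
    data = [2.0 * x + 1.0 for x in xs]   # central experimental values
    err = 0.05                           # uncorrelated 1-sigma errors

    # ensemble of artificial replicas of the data, one fit per replica
    slopes = []
    for _ in range(500):
        rep = [y + random.gauss(0.0, err) for y in data]
        slopes.append(fit_line(xs, rep)[0])

    mean_slope = sum(slopes) / len(slopes)
    sd_slope = (sum((s - mean_slope) ** 2 for s in slopes)
                / len(slopes)) ** 0.5    # propagated uncertainty
    ```

    In the full method each replica also carries the correlated systematics, and the "fit" is a trained neural network, but the error propagation logic is the same: statistics over the ensemble replace analytic error formulas.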

  18. A propidium monoazide–quantitative PCR method for the detection and quantification of viable Enterococcus faecalis in large-volume samples of marine waters

    KAUST Repository

    Salam, Khaled W.; El-Fadel, Mutasem E.; Barbour, Elie K.; Saikaly, Pascal

    2014-01-01

    The development of rapid detection assays of cell viability is essential for monitoring the microbiological quality of water systems. Coupling propidium monoazide with quantitative PCR (PMA-qPCR) has been successfully applied in different studies for the detection and quantification of viable cells in small-volume samples (0.25-1.00 mL), but it has not been evaluated sufficiently in marine environments or in large-volume samples. In this study, we successfully integrated blue light-emitting diodes for photoactivating PMA and membrane filtration into the PMA-qPCR assay for the rapid detection and quantification of viable Enterococcus faecalis cells in 10-mL samples of marine waters. The assay was optimized in phosphate-buffered saline and seawater, reducing the qPCR signal of heat-killed E. faecalis cells by 4 log10 and 3 log10 units, respectively. Results suggest that high total dissolved solid concentration (32 g/L) in seawater can reduce PMA activity. Optimal PMA-qPCR standard curves with a 6-log dynamic range and detection limit of 10² cells/mL were generated for quantifying viable E. faecalis cells in marine waters. The developed assay was compared with the standard membrane filter (MF) method by quantifying viable E. faecalis cells in seawater samples exposed to solar radiation. The results of the developed PMA-qPCR assay did not match that of the standard MF method. This difference in the results reflects the different physiological states of E. faecalis cells in seawater. In conclusion, the developed assay is a rapid (∼5 h) method for the quantification of viable E. faecalis cells in marine recreational waters, which should be further improved and tested in different seawater settings. © 2014 Springer-Verlag Berlin Heidelberg.
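    Quantification in qPCR assays of this kind rests on a linear standard curve relating threshold cycle (Ct) to log concentration. A minimal sketch with hypothetical Ct values on a 6-log dilution series; a slope of -3.32 corresponds to 100% amplification efficiency, and none of these numbers are from the paper:

    ```python
    def fit_standard_curve(log10_conc, ct):
        """Linear qPCR standard curve: Ct = a * log10(concentration) + b."""
        n = len(ct)
        mx, my = sum(log10_conc) / n, sum(ct) / n
        sxx = sum((x - mx) ** 2 for x in log10_conc)
        sxy = sum((x - mx) * (y - my) for x, y in zip(log10_conc, ct))
        a = sxy / sxx
        return a, my - a * mx

    def quantify(ct_value, a, b):
        """Invert the standard curve to get cells/mL from a measured Ct."""
        return 10 ** ((ct_value - b) / a)

    # Hypothetical 6-log dilution series at perfect efficiency (slope -3.32):
    logs = [2, 3, 4, 5, 6, 7]                     # 10^2 .. 10^7 cells/mL
    cts = [38.0 - 3.32 * (x - 2) for x in logs]   # Ct 38.0 at the low end
    a, b = fit_standard_curve(logs, cts)
    cells = quantify(31.36, a, b)                 # unknown sample's Ct
    ```

    In a viability assay the same curve is applied to PMA-treated samples, so only DNA from membrane-intact cells contributes to the quantified signal.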

  19. A propidium monoazide–quantitative PCR method for the detection and quantification of viable Enterococcus faecalis in large-volume samples of marine waters

    KAUST Repository

    Salam, Khaled W.

    2014-08-23

    The development of rapid detection assays of cell viability is essential for monitoring the microbiological quality of water systems. Coupling propidium monoazide with quantitative PCR (PMA-qPCR) has been successfully applied in different studies for the detection and quantification of viable cells in small-volume samples (0.25-1.00 mL), but it has not been evaluated sufficiently in marine environments or in large-volume samples. In this study, we successfully integrated blue light-emitting diodes for photoactivating PMA and membrane filtration into the PMA-qPCR assay for the rapid detection and quantification of viable Enterococcus faecalis cells in 10-mL samples of marine waters. The assay was optimized in phosphate-buffered saline and seawater, reducing the qPCR signal of heat-killed E. faecalis cells by 4 log10 and 3 log10 units, respectively. Results suggest that high total dissolved solid concentration (32 g/L) in seawater can reduce PMA activity. Optimal PMA-qPCR standard curves with a 6-log dynamic range and detection limit of 10² cells/mL were generated for quantifying viable E. faecalis cells in marine waters. The developed assay was compared with the standard membrane filter (MF) method by quantifying viable E. faecalis cells in seawater samples exposed to solar radiation. The results of the developed PMA-qPCR assay did not match that of the standard MF method. This difference in the results reflects the different physiological states of E. faecalis cells in seawater. In conclusion, the developed assay is a rapid (∼5 h) method for the quantification of viable E. faecalis cells in marine recreational waters, which should be further improved and tested in different seawater settings. © 2014 Springer-Verlag Berlin Heidelberg.

  20. A large-scale investigation of the quality of groundwater in six major districts of Central India during the 2010-2011 sampling campaign.

    Science.gov (United States)

    Khare, Peeyush

    2017-09-01

    This paper investigates the groundwater quality in six major districts of Madhya Pradesh in central India, namely, Balaghat, Chhindwara, Dhar, Jhabua, Mandla, and Seoni during the 2010-2011 sampling campaign, and discusses improvements made in the supplied water quality between the years 2011 and 2017. Groundwater is the main source of water for a combined rural population of over 7 million in these districts. Its contamination could have a huge impact on public health. We analyzed the data collected from a large-scale water sampling campaign carried out by the Public Health Engineering Department (PHED), Government of Madhya Pradesh between 2010 and 2011, during which all rural tube wells and dug wells were sampled in these six districts. Eight hundred thirty-one dug wells and 47,606 tube wells were sampled in total and were analyzed for turbidity, hardness, iron, nitrate, fluoride, chloride, and sulfate ion concentrations. Our study found water in 21 out of the 228 dug wells in Chhindwara district unfit for drinking due to fluoride contamination, while all dug wells in Balaghat had fluoride within the permissible limit. Twenty-six of the 56 dug wells and 4825 of the 9390 tube wells in Dhar district exceeded the permissible limit for nitrate, while 100% of dug wells in Balaghat, Seoni, and Chhindwara had low levels of nitrate. Twenty-four of the 228 dug wells and 1669 of 6790 tube wells in Chhindwara had high iron concentration. The median pH value in both dug wells and tube wells varied between 6 and 8 in all six districts. Still, a significant number of tube wells exceeded a pH of 8.5, especially in Mandla and Seoni districts. In conclusion, this study shows that parts of inhabited rural Madhya Pradesh were potentially exposed to contaminated subsurface water during 2010-2011. The analysis has been correlated with rural health survey results wherever available to estimate the visible impact. We further highlight that the quality of the supplied drinking water has enormously improved between 2011 and 2017.

  1. Unbiased and non-supervised learning methods for disruption prediction at JET

    International Nuclear Information System (INIS)

    Murari, A.; Vega, J.; Ratta, G.A.; Vagliasindi, G.; Johnson, M.F.; Hong, S.H.

    2009-01-01

    The importance of predicting the occurrence of disruptions is going to increase significantly in the next generation of tokamak devices. The expected energy content of ITER plasmas, for example, is such that disruptions could have a significant detrimental impact on various parts of the device, ranging from erosion of plasma-facing components to structural damage. Early detection of disruptions is therefore needed with ever-increasing urgency. In this paper, the results of a series of methods to predict disruptions at JET are reported. The main objective of the investigation is to determine how early before a disruption acceptable predictions can be made on the basis of the raw data, keeping the number of 'ad hoc' hypotheses to a minimum. Therefore, the chosen learning techniques have the common characteristic of requiring a minimum number of assumptions. Classification and Regression Trees (CART) is a supervised yet completely unbiased and nonlinear method, since it simply constructs the best classification tree by working directly on the input data. A series of unsupervised techniques, mainly K-means and hierarchical clustering, have also been tested to investigate to what extent they can autonomously distinguish between disruptive and non-disruptive groups of discharges. All these independent methods indicate that, in general, prediction with a success rate above 80% can be achieved no earlier than 180 ms before the disruption. The agreement between various completely independent methods increases the confidence in the results, which are also confirmed by a visual inspection of the data performed with pseudo Grand Tour algorithms.
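    Of the unsupervised techniques mentioned, K-means is the simplest to sketch: alternate between assigning points to the nearest centroid and moving each centroid to its cluster mean. The toy below clusters a hypothetical one-dimensional precursor feature into two groups, standing in for disruptive versus non-disruptive discharges (illustrative values, not JET data):

    ```python
    def kmeans_1d(values, k, iters=50):
        """Plain K-means on scalar features: assign each point to the
        nearest centroid, then move each centroid to its cluster mean."""
        # spread the initial centroids across the sorted data
        centroids = sorted(values)[::max(1, len(values) // k)][:k]
        clusters = [[] for _ in range(k)]
        for _ in range(iters):
            clusters = [[] for _ in range(k)]
            for v in values:
                j = min(range(k), key=lambda c: abs(v - centroids[c]))
                clusters[j].append(v)
            centroids = [sum(c) / len(c) if c else centroids[j]
                         for j, c in enumerate(clusters)]
        return centroids, clusters

    # hypothetical normalized precursor amplitudes: low for safe discharges,
    # high for discharges heading toward disruption
    vals = [0.10, 0.20, 0.15, 0.12, 0.18, 0.25,
            2.0, 2.2, 1.9, 2.1, 2.3, 2.05]
    centroids, clusters = kmeans_1d(vals, 2)
    ```

    With a clean separation like this the two recovered centroids sit at the group means; real diagnostic data would of course be multi-dimensional and far noisier.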

  2. The relationship between the Five-Factor Model personality traits and peptic ulcer disease in a large population-based adult sample.

    Science.gov (United States)

    Realo, Anu; Teras, Andero; Kööts-Ausmees, Liisi; Esko, Tõnu; Metspalu, Andres; Allik, Jüri

    2015-12-01

The current study examined the relationship between the Five-Factor Model personality traits and physician-confirmed peptic ulcer disease (PUD) diagnosis in a large population-based adult sample, controlling for the relevant behavioral and sociodemographic factors. Personality traits were assessed by participants themselves and by knowledgeable informants using the NEO Personality Inventory-3 (NEO PI-3). When controlling for age, sex, education, and cigarette smoking, only one of the five NEO PI-3 domain scales - higher Neuroticism - and two facet scales - lower A1: Trust and higher C1: Competence - made a small, yet significant contribution to predicting PUD. These are the personality traits that are associated with the diagnosis of PUD at a particular point in time. Further prospective studies with a longitudinal design and multiple assessments would be needed to fully understand if the FFM personality traits serve as risk factors for the development of PUD. © 2015 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  3. Large area gridded ionisation chamber and electrostatic precipitator and their application to low-level alpha-spectrometry of environmental air samples

    International Nuclear Information System (INIS)

    Hoetzl, H.; Winkler, R.

    1977-01-01

A high-resolution, parallel plate Frisch grid ionization chamber with an effective area of 3000 cm², and a large area electrostatic precipitator were developed and applied to direct alpha spectrometry of air dust. Using an argon-methane mixture (P-10 gas) at atmospheric pressure the resolution of the detector system is 22 keV FWHM at 5 MeV. After sampling for one week and decay of short-lived natural activity, the sensitivity of the procedure for long-lived alpha emitters is about 0.1 fCi/m³ taking 3σ of background as the detection limit with 1000 min counting time. (author)
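The "3σ of background" detection criterion above follows directly from Poisson counting statistics: the standard deviation of a background count is its square root, and a sample run is treated as a detection only when it exceeds background by more than three of those standard deviations. The counts below are hypothetical, chosen only to illustrate the arithmetic.

```python
import math

# Hypothetical background run: Poisson statistics, so sigma = sqrt(counts).
background_counts = 400          # counts in a 1000 min background measurement
sigma_b = math.sqrt(background_counts)
critical_counts = 3 * sigma_b    # the "3 sigma of background" criterion

# A sample run is a detection only if its excess over background beats that level.
sample_counts = 470
detected = (sample_counts - background_counts) > critical_counts
```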

  4. An Example of an Improvable Rao-Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator.

    Science.gov (United States)

    Galili, Tal; Meilijson, Isaac

    2016-01-02

    The Rao-Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a "better" one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao-Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao-Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated. [Received December 2014. Revised September 2015.].
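The classical Rao-Blackwell step in the uniform-distribution setting can be simulated directly. For X₁,…,Xₙ ~ U(0, θ), the crude unbiased estimator 2X₁ is conditioned on the sufficient statistic max(X), giving E[2X₁ | max] = (n+1)/n · max, which is unbiased with far smaller variance. This sketch shows only that basic improvement, not the paper's further construction involving an incomplete minimal sufficient statistic.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 1.0, 10, 20000
X = rng.uniform(0.0, theta, size=(reps, n))

crude = 2.0 * X[:, 0]             # crude unbiased estimator: E[2*X1] = theta
m = X.max(axis=1)                 # sufficient statistic for theta
rb = (n + 1) / n * m              # E[2*X1 | max(X)]: the Rao-Blackwell improvement

# Both estimators are unbiased, but conditioning slashes the variance
# (theoretically from theta^2/3 down to theta^2/(n(n+2))).
```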

  5. Absorption and folding of melittin onto lipid bilayer membranes via unbiased atomic detail microsecond molecular dynamics simulation.

    Science.gov (United States)

    Chen, Charles H; Wiedman, Gregory; Khan, Ayesha; Ulmschneider, Martin B

    2014-09-01

    Unbiased molecular simulation is a powerful tool to study the atomic details driving functional structural changes or folding pathways of highly fluid systems, which present great challenges experimentally. Here we apply unbiased long-timescale molecular dynamics simulation to study the ab initio folding and partitioning of melittin, a template amphiphilic membrane active peptide. The simulations reveal that the peptide binds strongly to the lipid bilayer in an unstructured configuration. Interfacial folding results in a localized bilayer deformation. Akin to purely hydrophobic transmembrane segments the surface bound native helical conformer is highly resistant against thermal denaturation. Circular dichroism spectroscopy experiments confirm the strong binding and thermostability of the peptide. The study highlights the utility of molecular dynamics simulations for studying transient mechanisms in fluid lipid bilayer systems. This article is part of a Special Issue entitled: Interfacially Active Peptides and Proteins. Guest Editors: William C. Wimley and Kalina Hristova. Copyright © 2014. Published by Elsevier B.V.

  6. Unbiased stereological estimation of d-dimensional volume in Rn from an isotropic random slice through a fixed point

    DEFF Research Database (Denmark)

    Jensen, Eva B. Vedel; Kiêu, K

    1994-01-01

    Unbiased stereological estimators of d-dimensional volume in R(n) are derived, based on information from an isotropic random r-slice through a specified point. The content of the slice can be subsampled by means of a spatial grid. The estimators depend only on spatial distances. As a fundamental ...... lemma, an explicit formula for the probability that an isotropic random r-slice in R(n) through 0 hits a fixed point in R(n) is given....

  7. An unbiased method to build benchmarking sets for ligand-based virtual screening and its application to GPCRs.

    Science.gov (United States)

    Xia, Jie; Jin, Hongwei; Liu, Zhenming; Zhang, Liangren; Wang, Xiang Simon

    2014-05-27

Benchmarking data sets have become common in recent years for the purpose of virtual screening, though the main focus has been placed on structure-based virtual screening (SBVS) approaches. Due to the lack of crystal structures, there is a great need for unbiased benchmarking sets to evaluate various ligand-based virtual screening (LBVS) methods for important drug targets such as G protein-coupled receptors (GPCRs). To date these ready-to-apply data sets for LBVS are fairly limited, and the direct usage of benchmarking sets designed for SBVS could introduce biases into the evaluation of LBVS. Herein, we propose an unbiased method to build benchmarking sets for LBVS and validate it on a multitude of GPCR targets. To be more specific, our method can (1) ensure chemical diversity of ligands, (2) maintain the physicochemical similarity between ligands and decoys, (3) make the decoys dissimilar in chemical topology to all ligands to avoid false negatives, and (4) maximize spatial random distribution of ligands and decoys. We evaluated the quality of our Unbiased Ligand Set (ULS) and Unbiased Decoy Set (UDS) using three common LBVS approaches, with Leave-One-Out (LOO) Cross-Validation (CV) and a metric of average AUC of the ROC curves. Our method has greatly reduced the "artificial enrichment" and "analogue bias" of a published GPCR benchmarking set, i.e., GPCR Ligand Library (GLL)/GPCR Decoy Database (GDD). In addition, we addressed an important issue about the ratio of decoys per ligand and found that for a range of 30 to 100 it does not affect the quality of the benchmarking set, so we kept the original ratio of 39 from the GLL/GDD.
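The AUC metric used above has a convenient rank interpretation: it equals the Mann-Whitney probability that a randomly chosen ligand is scored above a randomly chosen decoy. A minimal sketch, with hypothetical similarity scores standing in for a real LBVS method's output:

```python
def roc_auc(ligand_scores, decoy_scores):
    """AUC as the Mann-Whitney probability that a ligand outscores a decoy."""
    wins = 0.0
    for lig in ligand_scores:
        for dec in decoy_scores:
            if lig > dec:
                wins += 1.0
            elif lig == dec:
                wins += 0.5    # ties count as half a win
    return wins / (len(ligand_scores) * len(decoy_scores))

# Hypothetical scores: 3 ligands, 4 decoys
auc = roc_auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2, 0.1])
```

An AUC of 0.5 means the screen ranks ligands no better than chance; the benchmarking-set design above aims to keep that baseline honest by removing artificial enrichment.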

  8. Validation of the MOS Social Support Survey 6-item (MOS-SSS-6) measure with two large population-based samples of Australian women.

    Science.gov (United States)

    Holden, Libby; Lee, Christina; Hockey, Richard; Ware, Robert S; Dobson, Annette J

    2014-12-01

    This study aimed to validate a 6-item 1-factor global measure of social support developed from the Medical Outcomes Study Social Support Survey (MOS-SSS) for use in large epidemiological studies. Data were obtained from two large population-based samples of participants in the Australian Longitudinal Study on Women's Health. The two cohorts were aged 53-58 and 28-33 years at data collection (N = 10,616 and 8,977, respectively). Items selected for the 6-item 1-factor measure were derived from the factor structure obtained from unpublished work using an earlier wave of data from one of these cohorts. Descriptive statistics, including polychoric correlations, were used to describe the abbreviated scale. Cronbach's alpha was used to assess internal consistency and confirmatory factor analysis to assess scale validity. Concurrent validity was assessed using correlations between the new 6-item version and established 19-item version, and other concurrent variables. In both cohorts, the new 6-item 1-factor measure showed strong internal consistency and scale reliability. It had excellent goodness-of-fit indices, similar to those of the established 19-item measure. Both versions correlated similarly with concurrent measures. The 6-item 1-factor MOS-SSS measures global functional social support with fewer items than the established 19-item measure.
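The internal-consistency statistic used above, Cronbach's alpha, is computed as α = k/(k−1) · (1 − Σ item variances / variance of the total score). A sketch on simulated single-factor data (the latent-trait model and noise level are assumptions for illustration, not the MOS-SSS data):

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) matrix of item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_variances / total_variance)

# Simulated 6-item, 1-factor scale: each item = latent trait + noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 1))
items = latent + 0.8 * rng.normal(size=(500, 6))
alpha = cronbach_alpha(items)
```

With this noise level alpha lands around 0.9, the kind of "strong internal consistency" the abstract reports for the 6-item measure.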

  9. Unbiased Scanning Method and Data Banking Approach Using Ultra-High Performance Liquid Chromatography Coupled with High-Resolution Mass Spectrometry for Quantitative Comparison of Metabolite Exposure in Plasma across Species Analyzed at Different Dates.

    Science.gov (United States)

    Gao, Hongying; Deng, Shibing; Obach, R Scott

    2015-12-01

    An unbiased scanning methodology using ultra high-performance liquid chromatography coupled with high-resolution mass spectrometry was used to bank data and plasma samples for comparing the data generated at different dates. This method was applied to bank the data generated earlier in animal samples and then to compare the exposure to metabolites in animal versus human for safety assessment. With neither authentic standards nor prior knowledge of the identities and structures of metabolites, full scans for precursor ions and all ion fragments (AIF) were employed with a generic gradient LC method to analyze plasma samples at positive and negative polarity, respectively. In a total of 22 tested drugs and metabolites, 21 analytes were detected using this unbiased scanning method except that naproxen was not detected due to low sensitivity at negative polarity and interference at positive polarity; and 4'- or 5-hydroxy diclofenac was not separated by a generic UPLC method. Statistical analysis of the peak area ratios of the analytes versus the internal standard in five repetitive analyses over approximately 1 year demonstrated that the analysis variation was significantly different from sample instability. The confidence limits for comparing the exposure using peak area ratio of metabolites in animal plasma versus human plasma measured over approximately 1 year apart were comparable to the analysis undertaken side by side on the same days. These statistical analysis results showed it was feasible to compare data generated at different dates with neither authentic standards nor prior knowledge of the analytes.

  10. Optimum method to determine radioactivity in large tracts of land. In-situ gamma spectroscopy or sampling followed by laboratory measurement

    International Nuclear Information System (INIS)

    Bronson, Frazier

    2008-01-01

    In the process of decommissioning contaminated facilities, and in the conduct of normal operations involving radioactive material, it is frequently required to show that large areas of land are not contaminated, or if contaminated that the amount is below an acceptable level. However, it is quite rare for the radioactivity in the soil to be uniformly distributed. Rather it is generally in a few isolated and probably unknown locations. One way to ascertain the status of the land concentration is to take soil samples for subsequent measurement in the laboratory. Another way is to use in-situ gamma spectroscopy. In both cases, the non-uniform distribution of radioactivity can greatly compromise the accuracy of the assay, and makes uncertainty estimates much more complicated than simple propagation of counting statistics. This paper examines the process of determining the best way to estimate the activity on the tract of land, and gives quantitative estimates of measurement uncertainty for various conditions of radioactivity. When the distribution of radioactivity in the soil is not homogeneous, the sampling uncertainty is likely to be larger than the in-situ measurement uncertainty. (author)
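The claim that sampling uncertainty dominates under non-uniform contamination is easy to demonstrate with a toy Monte Carlo: when all the activity sits in a few hotspot cells, most small soil-sampling campaigns miss every hotspot and report zero. All numbers below are hypothetical.

```python
import random

random.seed(0)

# Hypothetical tract: 10,000 cells, all activity concentrated in 5 hotspot cells.
field = [0.0] * 10_000
for cell in random.sample(range(10_000), 5):
    field[cell] = 1000.0
true_mean = sum(field) / len(field)          # 0.5 in arbitrary units

# Soil-sampling protocol: average of 20 randomly chosen cells, repeated
# many times to see the spread of the estimate.
sample_means = []
for _ in range(2000):
    picks = random.sample(range(10_000), 20)
    sample_means.append(sum(field[i] for i in picks) / 20)

# Fraction of campaigns that miss every hotspot and report zero activity.
frac_zero = sum(m == 0.0 for m in sample_means) / len(sample_means)
```

An in-situ gamma measurement, by contrast, averages over the whole field of view, which is why it can carry smaller uncertainty in exactly this situation.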

  11. The short-form version of the Depression Anxiety Stress Scales (DASS-21): construct validity and normative data in a large non-clinical sample.

    Science.gov (United States)

    Henry, Julie D; Crawford, John R

    2005-06-01

To test the construct validity of the short-form version of the Depression Anxiety Stress Scales (DASS-21), and in particular, to assess whether stress as indexed by this measure is synonymous with negative affectivity (NA) or whether it represents a related, but distinct, construct. To provide normative data for the general adult population. Cross-sectional, correlational and confirmatory factor analysis (CFA). The DASS-21 was administered to a non-clinical sample, broadly representative of the general adult UK population (N = 1,794). Competing models of the latent structure of the DASS-21 were evaluated using CFA. The model with optimal fit (RCFI = 0.94) had a quadripartite structure, and consisted of a general factor of psychological distress plus orthogonal specific factors of depression, anxiety, and stress. This model was a significantly better fit than a competing model that tested the possibility that the Stress scale simply measures NA. The DASS-21 subscales can validly be used to measure the dimensions of depression, anxiety, and stress. However, each of these subscales also taps a more general dimension of psychological distress or NA. The utility of the measure is enhanced by the provision of normative data based on a large sample.

  12. Comparison of Health Risks and Changes in Risks over Time Among a Sample of Lesbian, Gay, Bisexual, and Heterosexual Employees at a Large Firm.

    Science.gov (United States)

    Mitchell, Rebecca J; Ozminkowski, Ronald J

    2017-04-01

    The objective of this study was to estimate the prevalence of health risk factors by sexual orientation over a 4-year period within a sample of employees from a large firm. Propensity score-weighted generalized linear regression models were used to estimate the proportion of employees at high risk for health problems in each year and over time, controlling for many factors. Analyses were conducted with 6 study samples based on sex and sexual orientation. Rates of smoking, stress, and certain other health risk factors were higher for lesbian, gay, and bisexual (LGB) employees compared with rates of these risks among straight employees. Lesbian, gay, and straight employees successfully reduced risk levels in many areas. Significant reductions were realized for the proportion at risk for high stress and low life satisfaction among gay and lesbian employees, and for the proportion of smokers among gay males. Comparing changes over time for sexual orientation groups versus other employee groups showed that improvements and reductions in risk levels for most health risk factors examined occurred at similar rates among individuals employed by this firm, regardless of sexual orientation. These results can help improve understanding of LGB health and provide information on where to focus workplace health promotion efforts to meet the health needs of LGB employees.

  13. Does developmental timing of exposure to child maltreatment predict memory performance in adulthood? Results from a large, population-based sample.

    Science.gov (United States)

    Dunn, Erin C; Busso, Daniel S; Raffeld, Miriam R; Smoller, Jordan W; Nelson, Charles A; Doyle, Alysa E; Luk, Gigi

    2016-01-01

    Although maltreatment is a known risk factor for multiple adverse outcomes across the lifespan, its effects on cognitive development, especially memory, are poorly understood. Using data from a large, nationally representative sample of young adults (Add Health), we examined the effects of physical and sexual abuse on working and short-term memory in adulthood. We examined the association between exposure to maltreatment as well as its timing of first onset after adjusting for covariates. Of our sample, 16.50% of respondents were exposed to physical abuse and 4.36% to sexual abuse by age 17. An analysis comparing unexposed respondents to those exposed to physical or sexual abuse did not yield any significant differences in adult memory performance. However, two developmental time periods emerged as important for shaping memory following exposure to sexual abuse, but in opposite ways. Relative to non-exposed respondents, those exposed to sexual abuse during early childhood (ages 3-5), had better number recall and those first exposed during adolescence (ages 14-17) had worse number recall. However, other variables, including socioeconomic status, played a larger role (than maltreatment) on working and short-term memory. We conclude that a simple examination of "exposed" versus "unexposed" respondents may obscure potentially important within-group differences that are revealed by examining the effects of age at onset to maltreatment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Cross-validation of the Dot Counting Test in a large sample of credible and non-credible patients referred for neuropsychological testing.

    Science.gov (United States)

    McCaul, Courtney; Boone, Kyle B; Ermshar, Annette; Cottingham, Maria; Victor, Tara L; Ziegler, Elizabeth; Zeller, Michelle A; Wright, Matthew

    2018-01-18

To cross-validate the Dot Counting Test in a large neuropsychological sample. Dot Counting Test scores were compared in credible (n = 142) and non-credible (n = 335) neuropsychology referrals. Non-credible patients scored significantly higher than credible patients on all Dot Counting Test scores. While the original E-score cut-off of ≥17 achieved excellent specificity (96.5%), it was associated with mediocre sensitivity (52.8%). However, the cut-off could be substantially lowered to ≥13.80, while still maintaining adequate specificity (≥90%), and raising sensitivity to 70.0%. Examination of non-credible subgroups revealed that Dot Counting Test sensitivity in feigned mild traumatic brain injury (mTBI) was 55.8%, whereas sensitivity was 90.6% in patients with non-credible cognitive dysfunction in the context of claimed psychosis, and 81.0% in patients with non-credible cognitive performance in depression or severe TBI. Thus, the Dot Counting Test may have a particular role in detection of non-credible cognitive symptoms in claimed psychiatric disorders. As an alternative to the E-score, failure on ≥1 cut-offs applied to individual Dot Counting Test scores (≥6.0″ for mean grouped dot counting time, ≥10.0″ for mean ungrouped dot counting time, and ≥4 errors) occurred in 11.3% of the credible sample, while nearly two-thirds (63.6%) of the non-credible sample failed one or more of these cut-offs. An E-score cut-off of 13.80, or failure on ≥1 individual score cut-offs, resulted in few false positive identifications in credible patients and achieved high sensitivity (64.0-70.0%), and therefore appears appropriate for use in identifying neurocognitive performance invalidity.
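The sensitivity/specificity trade-off driving the cut-off choice above is simple to compute: sensitivity is the fraction of non-credible patients at or above the cut-off ("failing"), specificity the fraction of credible patients below it. The E-scores below are hypothetical, not data from the study.

```python
def sens_spec(noncredible_scores, credible_scores, cutoff):
    """Failure = score at or above the cutoff (higher scores = worse performance)."""
    sensitivity = sum(s >= cutoff for s in noncredible_scores) / len(noncredible_scores)
    specificity = sum(s < cutoff for s in credible_scores) / len(credible_scores)
    return sensitivity, specificity

# Hypothetical E-scores for five non-credible and five credible patients,
# evaluated at the study's revised cut-off of 13.80.
sens, spec = sens_spec([12, 14, 16, 18, 20], [5, 8, 10, 12, 14], 13.80)
```

Lowering the cut-off raises sensitivity at the cost of specificity, which is exactly the movement from ≥17 to ≥13.80 described in the abstract.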

  15. Brief report: accuracy and response time for the recognition of facial emotions in a large sample of children with autism spectrum disorders.

    Science.gov (United States)

    Fink, Elian; de Rosnay, Marc; Wierda, Marlies; Koot, Hans M; Begeer, Sander

    2014-09-01

The empirical literature has presented inconsistent evidence for deficits in the recognition of basic emotion expressions in children with autism spectrum disorders (ASD), which may be due to the reliance on relatively small sample sizes. Additionally, it is proposed that although children with ASD may correctly identify emotion expressions, they rely on more deliberate, more time-consuming strategies in order to accurately recognize emotion expressions when compared to typically developing children. In the current study, we examine both emotion recognition accuracy and response time in a large sample of children, and explore the moderating influence of verbal ability on these findings. The sample consisted of 86 children with ASD (M age = 10.65) and 114 typically developing children (M age = 10.32) between 7 and 13 years of age. All children completed a pre-test (emotion word-word matching) and a test phase consisting of basic emotion recognition, whereby they were required to match a target emotion expression to the correct emotion word; accuracy and response time were recorded. Verbal IQ was controlled for in the analyses. We found no evidence of a systematic deficit in emotion recognition accuracy or response time for children with ASD, controlling for verbal ability. However, when controlling for children's accuracy in word-word matching, children with ASD had significantly lower emotion recognition accuracy when compared to typically developing children. The findings suggest that the social impairments observed in children with ASD are not the result of marked deficits in basic emotion recognition accuracy or longer response times. However, children with ASD may be relying on other perceptual skills (such as advanced word-word matching) to complete emotion recognition tasks at a similar level as typically developing children.

  16. Zirconia coated stir bar sorptive extraction combined with large volume sample stacking capillary electrophoresis-indirect ultraviolet detection for the determination of chemical warfare agent degradation products in water samples.

    Science.gov (United States)

    Li, Pingjing; Hu, Bin; Li, Xiaoyong

    2012-07-20

    In this study, a sensitive, selective and reliable analytical method by combining zirconia (ZrO₂) coated stir bar sorptive extraction (SBSE) with large volume sample stacking capillary electrophoresis-indirect ultraviolet (LVSS-CE/indirect UV) was developed for the direct analysis of chemical warfare agent degradation products of alkyl alkylphosphonic acids (AAPAs) (including ethyl methylphosphonic acid (EMPA) and pinacolyl methylphosphonate (PMPA)) and methylphosphonic acid (MPA) in environmental waters. ZrO₂ coated stir bar was prepared by adhering nanometer-sized ZrO₂ particles onto the surface of stir bar with commercial PDMS sol as adhesion agent. Due to the high affinity of ZrO₂ to the electronegative phosphonate group, ZrO₂ coated stir bars could selectively extract the strongly polar AAPAs and MPA. After systematically optimizing the extraction conditions of ZrO₂-SBSE, the analytical performance of ZrO₂-SBSE-CE/indirect UV and ZrO₂-SBSE-LVSS-CE/indirect UV was assessed. The limits of detection (LODs, at a signal-to-noise ratio of 3) obtained by ZrO₂-SBSE-CE/indirect UV were 13.4-15.9 μg/L for PMPA, EMPA and MPA. The relative standard deviations (RSDs, n=7, c=200 μg/L) of the corrected peak area for the target analytes were in the range of 6.4-8.8%. Enhancement factors (EFs) in terms of LODs were found to be from 112- to 145-fold. By combining ZrO₂ coating SBSE with LVSS as a dual preconcentration strategy, the EFs were magnified up to 1583-fold, and the LODs of ZrO₂-SBSE-LVSS-CE/indirect UV were 1.4, 1.2 and 3.1 μg/L for PMPA, EMPA, and MPA, respectively. The RSDs (n=7, c=20 μg/L) were found to be in the range of 9.0-11.8%. The developed ZrO₂-SBSE-LVSS-CE/indirect UV method has been successfully applied to the analysis of PMPA, EMPA, and MPA in different environmental water samples, and the recoveries for the spiked water samples were found to be in the range of 93.8-105.3%. Copyright © 2012 Elsevier B.V. All rights reserved.

  17. Sampling-based approaches to improve estimation of mortality among patient dropouts: experience from a large PEPFAR-funded program in Western Kenya.

    Directory of Open Access Journals (Sweden)

    Constantin T Yiannoutsos

Full Text Available Monitoring and evaluation (M&E) of HIV care and treatment programs is impacted by losses to follow-up (LTFU) in the patient population. The severity of this effect is undeniable but its extent unknown. Tracing all lost patients addresses this but census methods are not feasible in programs involving rapid scale-up of HIV treatment in the developing world. Sampling-based approaches and statistical adjustment are the only scalable methods permitting accurate estimation of M&E indices. In a large antiretroviral therapy (ART) program in western Kenya, we assessed the impact of LTFU on estimating patient mortality among 8,977 adult clients of whom 3,624 were LTFU. Overall, dropouts were more likely male (36.8% versus 33.7%; p = 0.003), and younger than non-dropouts (35.3 versus 35.7 years old; p = 0.020), with lower median CD4 count at enrollment (160 versus 189 cells/ml; p < 0.001) and WHO stage 3-4 disease (47.5% versus 41.1%; p < 0.001). Urban clinic clients were 75.0% of non-dropouts but 70.3% of dropouts (p < 0.001). Of the 3,624 dropouts, 1,143 were sought and 621 had their vital status ascertained. Statistical techniques were used to adjust mortality estimates based on information obtained from located LTFU patients. Observed mortality estimates one year after enrollment were 1.7% (95% CI 1.3%-2.0%), revised to 2.8% (2.3%-3.1%) when deaths discovered through outreach were added, and adjusted to 9.2% (7.8%-10.6%) and 9.9% (8.4%-11.5%) through statistical modeling, depending on the method used. The estimates 12 months after ART initiation were 1.7% (1.3%-2.2%), 3.4% (2.9%-4.0%), 10.5% (8.7%-12.3%) and 10.7% (8.9%-12.6%), respectively. CONCLUSIONS/SIGNIFICANCE: Assessment of the impact of LTFU is critical in program M&E, as estimated mortality based on passive monitoring may underestimate true mortality by up to 80%. This bias can be ameliorated by tracing a sample of dropouts and statistically adjusting the mortality estimates to properly evaluate and guide large programs.
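The core of the sampling-based adjustment can be sketched very simply: the death rate observed in the traced sample of dropouts is extrapolated to all dropouts and folded back into the cohort-level estimate. The study itself used more elaborate statistical modeling; the cohort numbers below are hypothetical.

```python
def adjusted_mortality(n_total, deaths_observed, n_dropouts, n_traced, deaths_traced):
    """Extrapolate deaths found in a traced sample of dropouts to all dropouts."""
    dropout_death_rate = deaths_traced / n_traced
    return (deaths_observed + dropout_death_rate * n_dropouts) / n_total

# Hypothetical cohort: passive monitoring sees only 1.7% mortality, but tracing
# 100 of the 400 dropouts uncovers 20 additional deaths among the lost.
naive = 17 / 1000
adjusted = adjusted_mortality(1000, 17, 400, 100, 20)
```

Here the adjusted estimate is several times the passively observed one, the same direction and rough magnitude of correction reported in the abstract.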

  18. Cross-sectional association between ZIP code-level gentrification and homelessness among a large community-based sample of people who inject drugs in 19 US cities.

    Science.gov (United States)

    Linton, Sabriya L; Cooper, Hannah Lf; Kelley, Mary E; Karnes, Conny C; Ross, Zev; Wolfe, Mary E; Friedman, Samuel R; Jarlais, Don Des; Semaan, Salaam; Tempalski, Barbara; Sionean, Catlainn; DiNenno, Elizabeth; Wejnert, Cyprian; Paz-Bailey, Gabriela

    2017-06-20

    Housing instability has been associated with poor health outcomes among people who inject drugs (PWID). This study investigates the associations of local-level housing and economic conditions with homelessness among a large sample of PWID, which is an underexplored topic to date. PWID in this cross-sectional study were recruited from 19 large cities in the USA as part of National HIV Behavioral Surveillance. PWID provided self-reported information on demographics, behaviours and life events. Homelessness was defined as residing on the street, in a shelter, in a single room occupancy hotel, or in a car or temporarily residing with friends or relatives any time in the past year. Data on county-level rental housing unaffordability and demand for assisted housing units, and ZIP code-level gentrification (eg, index of percent increases in non-Hispanic white residents, household income, gross rent from 1990 to 2009) and economic deprivation were collected from the US Census Bureau and Department of Housing and Urban Development. Multilevel models evaluated the associations of local economic and housing characteristics with homelessness. Sixty percent (5394/8992) of the participants reported homelessness in the past year. The multivariable model demonstrated that PWID living in ZIP codes with higher levels of gentrification had higher odds of homelessness in the past year (gentrification: adjusted OR=1.11, 95% CI=1.04 to 1.17). Additional research is needed to determine the mechanisms through which gentrification increases homelessness among PWID to develop appropriate community-level interventions. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  19. Moving into a new era of periodontal genetic studies: relevance of large case-control samples using severe phenotypes for genome-wide association studies.

    Science.gov (United States)

    Vaithilingam, R D; Safii, S H; Baharuddin, N A; Ng, C C; Cheong, S C; Bartold, P M; Schaefer, A S; Loos, B G

    2014-12-01

Studies to elucidate the role of genetics as a risk factor for periodontal disease have gone through various phases. In the majority of cases, the initial 'hypothesis-dependent' candidate-gene polymorphism studies did not report valid genetic risk loci. Following a large-scale replication study, these initially positive results are believed to be caused by type 1 errors. However, susceptibility genes, such as CDKN2BAS (Cyclin Dependent KiNase 2B AntiSense RNA; alias ANRIL [ANtisense RNA In the INK4 Locus]), glycosyltransferase 6 domain containing 1 (GLT6D1) and cyclooxygenase 2 (COX2), have been reported as conclusive risk loci of periodontitis. The search for genetic risk factors accelerated with the advent of 'hypothesis-free' genome-wide association studies (GWAS). However, despite many different GWAS being performed for almost all human diseases, only three GWAS on periodontitis have been published - one reported genome-wide association of GLT6D1 with aggressive periodontitis (a severe phenotype of periodontitis), whereas the remaining two, which were performed on patients with chronic periodontitis, were not able to find significant associations. This review discusses the problems faced and the lessons learned from the search for genetic risk variants of periodontitis. Current and future strategies for identifying genetic variance in periodontitis, and the importance of planning a well-designed genetic study with large and sufficiently powered case-control samples of severe phenotypes, are also discussed. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  20. Association between the prevalence of depression and age in a large representative German sample of people aged 53 to 80 years.

    Science.gov (United States)

    Wild, Beate; Herzog, Wolfgang; Schellberg, Dieter; Lechner, Sabine; Niehoff, Doro; Brenner, Hermann; Rothenbacher, Dietrich; Stegmaier, Christa; Raum, Elke

    2012-04-01

    The aim of the study was to determine the association between the prevalence of clinically significant depression and age in a large representative sample of elderly German people. In the second follow-up (2005-2007) of the ESTHER cohort study, the 15-item geriatric depression scale (GDS-15) as well as a sociodemographic and clinical questionnaire were administered to a representative sample of 8270 people of ages 53 to 80 years. The prevalence of clinically significant depression was estimated using a GDS cut-off score of 5/6. Prevalence rates were estimated for the different age categories. Association between depression and age was analyzed using logistic regression, adjusted for gender, co-morbid medical disorders, education, marital status, physical activity, smoking, self-perceived cognitive impairment, and anti-depressive medication. Of the participants, 7878 (95.3%) completed more than twelve GDS items and were included in the study. The prevalence of clinically significant depression was 16.0% (95%CI = [15.2; 16.6]). The function of depression prevalence dependent on age group showed a U-shaped pattern (53-59: 21.0%, CI = [18.9; 23.3]; 60-64: 17.7%, CI = [15.7; 19.7]; 65-69: 12.6%, CI = [11.2; 14.0]; 70-74: 14.4%, CI = [12.6; 16.0]; 75-80: 17.1%, CI = [14.9; 19.4]). Adjusted odds ratios showed that the cha