WorldWideScience

Sample records for times larger sample

  1. Imaging samples larger than the field of view: the SLS experience

    Science.gov (United States)

    Vogiatzis Oikonomidis, Ioannis; Lovric, Goran; Cremona, Tiziana P.; Arcadu, Filippo; Patera, Alessandra; Schittny, Johannes C.; Stampanoni, Marco

    2017-06-01

    Volumetric datasets with micrometer spatial and sub-second temporal resolutions are nowadays routinely acquired using synchrotron X-ray tomographic microscopy (SRXTM). Although SRXTM technology allows the examination of multiple samples with short scan times, many specimens are larger than the field-of-view (FOV) provided by the detector. The extension of the FOV in the direction perpendicular to the rotation axis remains non-trivial. We present a method that can efficiently increase the FOV by merging volumetric datasets obtained by region-of-interest tomographies at different 3D positions of the sample, with a minimal amount of artefacts and the ability to handle large amounts of data. The method has been successfully applied to the three-dimensional imaging of a small number of mouse lung acini of intact animals, where pixel sizes down to the micrometer range and short exposure times are required.
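
    The stitching idea above can be sketched as a weighted blend of two overlapping sub-volumes. This is a minimal illustration and not the SLS pipeline: the overlap size, constant-intensity phantom, and linear weight ramp are all assumptions.

```python
import numpy as np

def blend_volumes(vol_a, vol_b, overlap):
    """Merge two sub-volumes sharing `overlap` voxels along axis 0,
    using a linear weight ramp across the overlap to suppress seam artefacts."""
    w = np.linspace(1.0, 0.0, overlap)[:, None, None]  # weight for vol_a in the overlap
    blended = w * vol_a[-overlap:] + (1.0 - w) * vol_b[:overlap]
    return np.concatenate([vol_a[:-overlap], blended, vol_b[overlap:]], axis=0)

# two noisy sub-volumes of a constant-intensity phantom (hypothetical data)
rng = np.random.default_rng(0)
a = np.full((40, 8, 8), 5.0) + 0.01 * rng.standard_normal((40, 8, 8))
b = np.full((40, 8, 8), 5.0) + 0.01 * rng.standard_normal((40, 8, 8))
merged = blend_volumes(a, b, overlap=10)
print(merged.shape)  # (70, 8, 8)
```

    Real region-of-interest tomograms would additionally need registration of the sub-volumes before blending; here perfect alignment is assumed.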

  2. Larger Neural Responses Produce BOLD Signals That Begin Earlier in Time

    Directory of Open Access Journals (Sweden)

    Serena eThompson

    2014-06-01

    Functional MRI analyses commonly rely on the assumption that the temporal dynamics of hemodynamic response functions (HRFs) are independent of the amplitude of the neural signals that give rise to them. The validity of this assumption is particularly important for techniques that use fMRI to resolve sub-second timing distinctions between responses, in order to make inferences about the ordering of neural processes. Whether or not the detailed shape of the HRF is independent of neural response amplitude remains an open question, however. We performed experiments in which we measured responses in primary visual cortex (V1) to large, contrast-reversing checkerboards at a range of contrast levels, which should produce varying amounts of neural activity. Ten subjects (ages 22-52) were studied in each of two experiments using 3 Tesla scanners. We used rapid, 250 ms temporal sampling (repetition time, or TR) and both short and long inter-stimulus interval (ISI) stimulus presentations. We tested for a systematic relationship between the onset of the HRF and its amplitude across conditions, and found a strong negative correlation between the two measures when stimuli were separated in time (the long- and medium-ISI experiments), but not in the short-ISI experiment. Thus, stimuli that produce larger neural responses, as indexed by HRF amplitude, also produced HRFs with shorter onsets. The relationship between amplitude and latency was strongest in voxels with the lowest mean-normalized variance (i.e., parenchymal voxels). The onset differences observed in the longer-ISI experiments are likely attributable to mechanisms of neurovascular coupling, since they are substantially larger than reported differences in the onset of action potentials in V1 as a function of response amplitude.
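
    The reported amplitude-latency relationship can be illustrated with synthetic HRFs. The gamma-shaped response, the 10%-of-peak onset criterion, and all numerical values below are hypothetical; the sketch only shows how a negative onset-amplitude correlation would be quantified.

```python
import numpy as np

def hrf(t, amp, onset):
    # simplistic gamma-shaped HRF, purely illustrative
    s = np.clip(t - onset, 0, None)
    return amp * (s ** 5) * np.exp(-s) / 120.0

t = np.linspace(0, 30, 1201)            # 25 ms grid, finer than the 250 ms TR in the study
amps = np.array([0.5, 1.0, 2.0, 4.0])   # response amplitudes across contrast levels (toy)
onsets = np.array([2.0, 1.8, 1.6, 1.4]) # larger responses start earlier (toy numbers)

measured_onset = []
for a, o in zip(amps, onsets):
    y = hrf(t, a, o)
    thresh = 0.1 * y.max()              # onset = first 10%-of-peak crossing
    measured_onset.append(t[np.argmax(y >= thresh)])

r = np.corrcoef(amps, measured_onset)[0, 1]
print(round(r, 2))  # strongly negative
```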

  3. Optoelectronic time-domain characterization of a 100 GHz sampling oscilloscope

    International Nuclear Information System (INIS)

    Füser, H; Baaske, K; Kuhlmann, K; Judaschke, R; Pierz, K; Bieler, M; Eichstädt, S; Elster, C

    2012-01-01

    We have carried out an optoelectronic measurement of the impulse response of an ultrafast sampling oscilloscope with a nominal bandwidth of 100 GHz within a time window of approximately 100 ps. Our experimental technique also considers frequency components above the cut-off frequency of higher-order modes of the 1.0 mm coaxial line, which is shown to be important for the specification of the impulse response of ultrafast sampling oscilloscopes. Additionally, we have measured the reflection coefficient of the sampling head induced by the mismatch of the sampling circuit and the coaxial connector, which is larger than 0.5 at certain frequencies. The uncertainty analysis has been performed using the Monte Carlo method of Supplement 1 to the 'Guide to the Expression of Uncertainty in Measurement', and correlations in the estimated impulse response have been determined. Our measurements extend previous work dealing with the characterization of 70 GHz oscilloscopes and the measurement of 100 GHz oscilloscopes up to the cut-off frequency of higher-order modes.

  4. Correction of Sample-Time Error for Time-Interleaved Sampling System Using Cubic Spline Interpolation

    Directory of Open Access Journals (Sweden)

    Qin Guo-jie

    2014-08-01

    Sample-time errors can greatly degrade the dynamic range of a time-interleaved sampling system. In this paper, a novel correction technique employing cubic spline interpolation is proposed for inter-channel sample-time error compensation. The cubic spline interpolation compensation filter is developed in the form of a finite-impulse-response (FIR) filter structure, and the method for computing the interpolation filter coefficients is derived. A 4 GS/s two-channel, time-interleaved ADC prototype system has been implemented to evaluate the performance of the technique. The experimental results show that the correction technique effectively attenuates the spurious tones and improves the dynamic performance of the system.
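
    The correction idea can be sketched by resampling the skewed channel onto the ideal time grid with a cubic spline. This direct-spline sketch simplifies the paper's FIR-structured compensation filter; the tone frequency and the 8 ps skew are assumed values.

```python
import numpy as np
from scipy.interpolate import CubicSpline

fs = 4.0e9    # aggregate sample rate of the two-channel system (4 GS/s)
f0 = 97.0e6   # hypothetical test tone
skew = 8e-12  # hypothetical sample-time error of channel B (8 ps)
n = np.arange(4096)
t_ideal = n / fs

# interleaved acquisition: even samples from channel A, odd samples from channel B (skewed)
t_actual = t_ideal.copy()
t_actual[1::2] += skew
x = np.sin(2 * np.pi * f0 * t_actual)

# correction: resample channel B back onto the ideal grid with a cubic spline
cs = CubicSpline(t_actual[1::2], x[1::2])
x_corr = x.copy()
x_corr[1::2] = cs(t_ideal[1::2])

ref = np.sin(2 * np.pi * f0 * t_ideal)
err_before = np.max(np.abs(x - ref))
err_after = np.max(np.abs(x_corr - ref))
print(err_after < err_before)
```

    The residual error after correction is dominated by the spline's interpolation error, which is orders of magnitude below the skew-induced error for this oversampling ratio.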

  5. Practical reporting times for environmental samples

    International Nuclear Information System (INIS)

    Bayne, C.K.; Schmoyer, D.D.; Jenkins, R.A.

    1993-02-01

    Preanalytical holding times for environmental samples are specified because chemical and physical characteristics may change between sampling and chemical analysis. For example, the Federal Register prescribes a preanalytical holding time of 14 days for volatile organic compounds in soil stored at 4 °C. The American Society for Testing and Materials (ASTM) uses a more technical definition: the preanalytical holding time is the day on which the analyte concentration for an environmental sample falls below the lower 99% confidence interval on the analyte concentration at day zero. This study reviews various holding-time definitions and suggests a new preanalytical holding-time approach using acceptable error rates for measuring an environmental analyte. This practical reporting time (PRT) approach has been applied to nineteen volatile organic compounds and four explosives in three environmental soil samples. A PRT nomograph of error rates has been developed to estimate the consequences of missing a preanalytical holding time. This nomograph can be applied to a large class of analytes with concentrations that decay linearly or exponentially with time, regardless of sample matrices and storage conditions.
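
    The ASTM-style definition can be illustrated for a first-order (exponential) decay model: the holding time is when the decayed concentration falls to the lower 99% confidence bound on the day-zero concentration. All numbers below are hypothetical.

```python
import math

# toy first-order decay: C(t) = C0 * exp(-k * t)
C0 = 100.0         # measured concentration at day 0 (arbitrary units)
k = 0.02           # hypothetical decay constant (1/day)
lower_99ci = 92.0  # hypothetical lower 99% CI bound on the day-0 concentration

# first day the decayed concentration drops below the bound:
# C0 * exp(-k * t) = lower_99ci  =>  t = ln(C0 / lower_99ci) / k
holding_time = math.log(C0 / lower_99ci) / k
print(round(holding_time, 1))  # days
```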

  6. Practical reporting times for environmental samples

    Energy Technology Data Exchange (ETDEWEB)

    Bayne, C.K.; Schmoyer, D.D.; Jenkins, R.A.

    1993-02-01

    Preanalytical holding times for environmental samples are specified because chemical and physical characteristics may change between sampling and chemical analysis. For example, the Federal Register prescribes a preanalytical holding time of 14 days for volatile organic compounds in soil stored at 4 °C. The American Society for Testing and Materials (ASTM) uses a more technical definition: the preanalytical holding time is the day on which the analyte concentration for an environmental sample falls below the lower 99% confidence interval on the analyte concentration at day zero. This study reviews various holding-time definitions and suggests a new preanalytical holding-time approach using acceptable error rates for measuring an environmental analyte. This practical reporting time (PRT) approach has been applied to nineteen volatile organic compounds and four explosives in three environmental soil samples. A PRT nomograph of error rates has been developed to estimate the consequences of missing a preanalytical holding time. This nomograph can be applied to a large class of analytes with concentrations that decay linearly or exponentially with time, regardless of sample matrices and storage conditions.

  7. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    Science.gov (United States)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods to collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and, by extension, metadynamics, thus allowing simulations based on these methods to use similarly large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.

  8. Larger miliolids of the Late Cretaceous and Paleogene seen through space and time

    Directory of Open Access Journals (Sweden)

    Vlasta Ćosović

    2002-12-01

    Spatial and temporal occurrences of the larger (complex) miliolids are discussed to shed more light on their biostratigraphy and the distribution of paleobiogeographic provinces. Seven genera and 47 species from the Late Cretaceous to Oligocene inhabited shallow marine settings in the Indo-Pacific, Tethyan and Caribbean regions. Of all genera, only four (Idalina, Periloculina, Pseudolacazina, Lacazina) were widespread throughout the Tethys in the Late Cretaceous and Paleogene. A single occurrence of Lacazina was recorded further to the east (Moluccas). To date, the Late Cretaceous genus Adrahentina is known only from Spain. The newly appearing Eocene genera were Fabularia and Lacazinella. Fabularia reached high species diversity in the Central and Western Tethys and also occurred as a unique genus in the Caribbean realm. Conversely, during the same period, Lacazinella spread along the southern border of the Neo-Tethys, reaching New Guinea. On the Adriatic-Dinaric Carbonate Platform, larger miliolids occurred from the Late Cretaceous to the Cuisian, showing the same biostratigraphic trends and distribution as contemporaneous larger miliolids from the Tethys.

  9. Time-Efficiency of Sorting Chironomidae Surface-Floating Pupal Exuviae Samples from Urban Trout Streams in Northeast Minnesota, USA

    Directory of Open Access Journals (Sweden)

    Alyssa M Anderson

    2012-10-01

    Collections of Chironomidae surface-floating pupal exuviae (SFPE) provide an effective means of assessing water quality in streams. Although not widely used in the United States, the technique is not new and has been shown to be more cost-efficient than traditional dip-net sampling techniques in organically enriched streams in an urban landscape. The intent of this research was to document the efficiency of sorting SFPE samples relative to dip-net samples in trout streams whose catchments vary in amount of urbanization and impervious surface. Both SFPE and dip-net samples were collected from 17 sample sites located on 12 trout streams in Duluth, MN, USA. We quantified the time needed to sort subsamples of 100 macroinvertebrates from dip-net samples, and of fewer or more than 100 chironomid exuviae from SFPE samples. For larger SFPE samples, the time required to subsample up to 300 exuviae was also recorded. The average time to sort subsamples of 100 specimens was 22.5 minutes for SFPE samples, compared to 32.7 minutes for 100 macroinvertebrates in dip-net samples. The average time to sort up to 300 exuviae was 37.7 minutes. These results indicate that sorting SFPE samples is more time-efficient than traditional dip-net techniques in trout streams with varying catchment characteristics. doi: 10.5324/fn.v31i0.1380. Published online: 17 October 2012.

  10. Low-sampling-rate ultra-wideband digital receiver using equivalent-time sampling

    KAUST Repository

    Ballal, Tarig

    2014-09-01

    In this paper, we propose an all-digital scheme for ultra-wideband symbol detection. In the proposed scheme, the received symbols are sampled many times below the Nyquist rate. It is shown that when the number of symbol repetitions, P, is co-prime with the symbol duration given in Nyquist samples, the receiver can sample the received data P times below the Nyquist rate, without loss of fidelity. The proposed scheme is applied to perform channel estimation and binary pulse position modulation (BPPM) detection. Results are presented for two receivers operating at two different sampling rates that are 10 and 20 times below the Nyquist rate. The feasibility of the proposed scheme is demonstrated in different scenarios, with reasonable bit error rates obtained in most of the cases.
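
    The co-primality condition can be verified in a few lines: if a symbol lasts N Nyquist samples and the receiver takes one sample every P Nyquist slots across P repetitions of the symbol, gcd(N, P) = 1 guarantees that every intra-symbol position is eventually visited. N = 16 and P = 5 are arbitrary illustrative values, not parameters from the paper.

```python
import math

N = 16  # symbol duration in Nyquist samples (hypothetical)
P = 5   # symbol repetitions; sampling rate is P times below the Nyquist rate
assert math.gcd(N, P) == 1

# sampling every P-th Nyquist slot across P repetitions of the symbol:
positions = sorted((k * P) % N for k in range(N))
print(positions == list(range(N)))  # True: each intra-symbol position is hit exactly once
```

    This is exactly why the scheme loses no fidelity: the slow samples, re-indexed modulo the symbol length, reassemble a full Nyquist-rate view of the symbol.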

  11. Low-sampling-rate ultra-wideband digital receiver using equivalent-time sampling

    KAUST Repository

    Ballal, Tarig; Al-Naffouri, Tareq Y.

    2014-01-01

    In this paper, we propose an all-digital scheme for ultra-wideband symbol detection. In the proposed scheme, the received symbols are sampled many times below the Nyquist rate. It is shown that when the number of symbol repetitions, P, is co-prime with the symbol duration given in Nyquist samples, the receiver can sample the received data P times below the Nyquist rate, without loss of fidelity. The proposed scheme is applied to perform channel estimation and binary pulse position modulation (BPPM) detection. Results are presented for two receivers operating at two different sampling rates that are 10 and 20 times below the Nyquist rate. The feasibility of the proposed scheme is demonstrated in different scenarios, with reasonable bit error rates obtained in most of the cases.

  12. Dynamics of Transformation from Platinum Icosahedral Nanoparticles to Larger FCC Crystal at Millisecond Time Resolution

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Wenpei [Univ. of Illinois at Urbana-Champaign, IL (United States). Dept. of Materials Science and Engineering and Fredrick Seitz Materials Research Lab.; Wu, Jianbo [Univ. of Illinois at Urbana-Champaign, IL (United States). Dept. of Materials Science and Engineering, Fredrick Seitz Materials Research Lab. and Dept. of Chemical and Biomolecular Engineering; Shanghai Jiao Tong Univ. (China). School of Materials Science and Engineering; Yoon, Aram [Univ. of Illinois at Urbana-Champaign, IL (United States). Dept. of Materials Science and Engineering and Fredrick Seitz Materials Research Lab.; Lu, Ping [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Qi, Liang [Univ. of Michigan, Ann Arbor, MI (United States). Dept. of Materials Science and Engineering; Wen, Jianguo [Argonne National Lab. (ANL), Argonne, IL (United States). Center for Nanoscale Materials and Electron Microscopy Center; Miller, Dean J. [Argonne National Lab. (ANL), Argonne, IL (United States). Center for Nanoscale Materials and Electron Microscopy Center; Mabon, James C. [Univ. of Illinois at Urbana-Champaign, IL (United States). Fredrick Seitz Materials Research Lab.; Wilson, William L. [Univ. of Illinois at Urbana-Champaign, IL (United States). Dept. of Materials Science and Engineering and Fredrick Seitz Materials Research Lab.; Yang, Hong [Univ. of Illinois at Urbana-Champaign, IL (United States). Dept. of Chemical and Biomolecular Engineering; Zuo, Jian-Min [Univ. of Illinois at Urbana-Champaign, IL (United States). Dept. of Materials Science and Engineering and Fredrick Seitz Materials Research Lab.

    2017-12-08

    Atomic motion at grain boundaries is essential to microstructure development, growth and stability of catalysts and other nanostructured materials. However, boundary atomic motion is often too fast to observe in a conventional transmission electron microscope (TEM) and too slow for ultrafast electron microscopy. We report on the entire transformation process of strained Pt icosahedral nanoparticles (ICNPs) into larger FCC crystals, captured at 2.5 ms time resolution using a fast electron camera. Results show slow diffusive dislocation motion at nm/s inside ICNPs and fast surface transformation at μm/s. By characterizing nanoparticle strain, we show that the fast transformation is driven by inhomogeneous surface stress, while interaction with pre-existing defects slowed the transformation front inside the nanoparticles. Particle coalescence, assisted by oxygen-induced surface migration at T ≥ 300 °C, also played a critical role. Thus, by studying the transformation of Pt ICNPs at high temporal and spatial resolution, we obtain critical insights into the transformation mechanisms of strained Pt nanoparticles.

  13. Detecting chaos in irregularly sampled time series.

    Science.gov (United States)

    Kulp, C W

    2013-09-01

    Recently, Wiebe and Virgin [Chaos 22, 013136 (2012)] developed an algorithm which detects chaos by analyzing a time series' power spectrum, computed using the Discrete Fourier Transform (DFT). Their algorithm, like other time series characterization algorithms, requires that the time series be regularly sampled. Real-world data, however, are often irregularly sampled, thus making the detection of chaotic behavior difficult or impossible with those methods. In this paper, a characterization algorithm is presented which effectively detects chaos in irregularly sampled time series. The work presented here is a modification of Wiebe and Virgin's algorithm and uses the Lomb-Scargle Periodogram (LSP) to compute a series' power spectrum instead of the DFT. The DFT is not appropriate for irregularly sampled time series, but the LSP is capable of computing the frequency content of irregularly sampled data. Furthermore, a new method of analyzing the power spectrum is developed, which can be useful for differentiating between chaotic and non-chaotic behavior. The new characterization algorithm is successfully applied to irregularly sampled data generated by a model as well as data consisting of observations of variable stars.
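
    The LSP step can be sketched with SciPy's lombscargle, which accepts arbitrary sample times. The signal, noise level, and frequency grid below are illustrative; the sketch only recovers the dominant frequency of an irregularly sampled sinusoid and does not implement the paper's chaos-detection statistic.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 100, 400))  # irregular sample times (seconds)
f_true = 0.7                           # Hz, hypothetical
x = np.sin(2 * np.pi * f_true * t) + 0.2 * rng.standard_normal(t.size)

freqs = np.linspace(0.05, 2.0, 2000)   # trial frequencies in Hz
# lombscargle expects angular frequencies; center the data first
pgram = lombscargle(t, x - x.mean(), 2 * np.pi * freqs)
f_peak = freqs[np.argmax(pgram)]
print(round(f_peak, 2))
```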

  14. Why borrowers pay premiums to larger lenders: Empirical evidence from sovereign syndicated loans

    OpenAIRE

    Hallak, Issam

    2002-01-01

    All other terms being equal (e.g. seniority), syndicated loan contracts provide larger lending compensations (in percentage points) to institutions funding larger amounts. This paper explores empirically the motivation for such a price design on a sample of sovereign syndicated loans in the period 1990-1997. I find strong evidence that a larger premium is associated with higher renegotiation probability and information asymmetries. It hardly has any impact on the number of lenders though. Thi...

  15. Is Parental Involvement Lower at Larger Schools?

    Science.gov (United States)

    Walsh, Patrick

    2010-01-01

    Parents who volunteer, or who lobby for improvements in school quality, are generally seen as providing a school-wide public good. If so, straightforward public-good theory predicts that free-riding will reduce average involvement at larger schools. This study uses longitudinal data to follow families over time, as their children move from middle…

  16. Importance Sampling for Stochastic Timed Automata

    DEFF Research Database (Denmark)

    Jegourel, Cyrille; Larsen, Kim Guldstrand; Legay, Axel

    2016-01-01

    We present an importance sampling framework that combines symbolic analysis and simulation to estimate the probability of rare reachability properties in stochastic timed automata. By means of symbolic exploration, our framework first identifies states that cannot reach the goal. A state-wise cha…

  17. Sample Size Estimation for Negative Binomial Regression Comparing Rates of Recurrent Events with Unequal Follow-Up Time.

    Science.gov (United States)

    Tang, Yongqiang

    2015-01-01

    A sample size formula is derived for negative binomial regression for the analysis of recurrent events, in which subjects can have unequal follow-up time. We obtain sharp lower and upper bounds on the required size, which are easy to compute. The upper bound is generally only slightly larger than the required size, and hence can be used to approximate the sample size. The lower and upper size bounds can be decomposed into two terms. The first term relies on the mean number of events in each group, and the second term depends on two factors that measure, respectively, the extent of between-subject variability in event rates and follow-up time. Simulation studies are conducted to assess the performance of the proposed method. An application of our formulae to a multiple sclerosis trial is provided.
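
    A common variance-based approximation (not necessarily the exact formula derived in this paper) treats the log rate ratio as approximately normal, with per-subject variance combining a Poisson term 1/(t·r) and the negative binomial dispersion k. The rates, dispersion, and follow-up below are hypothetical.

```python
import math
from statistics import NormalDist

def nb_sample_size(rate1, rate2, dispersion, follow_up, alpha=0.05, power=0.9):
    """Per-group sample size for comparing two negative binomial event rates,
    using Var(log RR) ~ (1/n) * [(1/(t*r1) + k) + (1/(t*r2) + k)].
    A standard textbook-style approximation, not the paper's bounds."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    var_unit = (1.0 / (follow_up * rate1) + dispersion) + (1.0 / (follow_up * rate2) + dispersion)
    effect = math.log(rate2 / rate1)
    return math.ceil((z_a + z_b) ** 2 * var_unit / effect ** 2)

n = nb_sample_size(rate1=1.0, rate2=0.7, dispersion=0.5, follow_up=2.0)
print(n)  # subjects per group
```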

  18. Time-Scale and Time-Frequency Analyses of Irregularly Sampled Astronomical Time Series

    Directory of Open Access Journals (Sweden)

    S. Roques

    2005-09-01

    We evaluate the quality of spectral restoration in the case of irregularly sampled signals in astronomy. We study in detail a time-scale method leading to a global wavelet spectrum comparable to the Fourier periodogram, and a time-frequency matching pursuit allowing us to identify the frequencies and to control the error propagation. In both cases, the signals are first resampled with a linear interpolation. Both results are compared with those obtained using Lomb's periodogram and using the weighted wavelet Z-transform developed in astronomy for unevenly sampled variable-star observations. These approaches are applied to simulations and to light variations of four variable stars. This leads to the conclusion that the matching pursuit is more efficient for recovering the spectral content of a pulsating star, even with a preliminary resampling. In particular, the results are almost independent of the quality of the initial irregular sampling.

  19. Sample-interpolation timing: an optimized technique for the digital measurement of time of flight for γ rays and neutrons at relatively low sampling rates

    International Nuclear Information System (INIS)

    Aspinall, M D; Joyce, M J; Mackin, R O; Jarrah, Z; Boston, A J; Nolan, P J; Peyton, A J; Hawkes, N P

    2009-01-01

    A unique digital time pick-off method, known as sample-interpolation timing (SIT), is described. This method demonstrates the possibility of improved timing resolution for the digital measurement of time of flight compared with digital replica-analogue time pick-off methods for signals sampled at relatively low rates. Three analogue timing methods have been replicated in the digital domain (leading-edge, crossover and constant-fraction timing) for pulse data sampled at 8 GSa/s. Events arising from the 7Li(p,n)7Be reaction have been detected with an EJ-301 organic liquid scintillator and recorded with a fast digital sampling oscilloscope. Sample-interpolation timing was developed solely for the digital domain and thus performs more efficiently on digital signals than analogue time pick-off methods replicated digitally, especially for fast signals sampled at rates that current affordable and portable devices can achieve. Sample interpolation can be applied to any analogue timing method replicated digitally and thus also has the potential to exploit the generic capabilities of analogue techniques with the benefits of operating in the digital domain. A threshold in sampling rate with respect to the signal pulse width is observed, beyond which further improvements in timing resolution are not attained. This advance is relevant to many applications in which time-of-flight measurement is essential.
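
    One of the replicated analogue methods, constant-fraction timing with sub-sample interpolation of the zero crossing, can be sketched as follows. This is a generic digital CFD with linear interpolation, not the authors' SIT algorithm; the Gaussian pulse shape, fraction, and delay are assumed values.

```python
import numpy as np

def cfd_time(pulse, dt, fraction=0.4, delay_samples=3):
    """Constant-fraction discrimination: form fraction*pulse - delayed(pulse)
    and locate its zero crossing with linear sub-sample interpolation."""
    delayed = np.roll(pulse, delay_samples)
    delayed[:delay_samples] = 0.0
    bipolar = fraction * pulse - delayed
    i = int(np.argmax(bipolar))        # peak of the positive lobe
    while bipolar[i] > 0:              # walk forward to the zero crossing
        i += 1
    # linear interpolation between samples i-1 and i for sub-sample precision
    frac = bipolar[i - 1] / (bipolar[i - 1] - bipolar[i])
    return (i - 1 + frac) * dt

dt = 0.125  # ns per sample (8 GSa/s, as in the paper)
t = np.arange(0, 40, dt)
pulse = np.exp(-((t - 10.0) ** 2) / 2.0)  # toy detector pulse centred at 10 ns
print(round(cfd_time(pulse, dt), 2))      # crossing time in ns
```

    The interpolation step is what recovers timing resolution finer than the sampling period; SIT generalizes this idea for purely digital operation.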

  20. A novel atmospheric tritium sampling system

    Science.gov (United States)

    Qin, Lailai; Xia, Zhenghai; Gu, Shaozhong; Zhang, Dongxun; Bao, Guangliang; Han, Xingbo; Ma, Yuhua; Deng, Ke; Liu, Jiayu; Zhang, Qin; Ma, Zhaowei; Yang, Guo; Liu, Wei; Liu, Guimin

    2018-06-01

    The health hazard of tritium depends on its chemical form, so sampling the different chemical forms of tritium simultaneously is important. Here a novel atmospheric tritium sampling system (TS-212) was developed to collect tritiated water (HTO), tritiated hydrogen (HT) and tritiated methane (CH3T) simultaneously. It consists of an air inlet system, three parallel sampling channels, a hydrogen supply module, a methane supply module and a remote control system. It operates at air flow rates of 1 L/min to 5 L/min, with the catalyst furnace at 200 °C for HT sampling and at 400 °C for CH3T sampling. Conversion rates of both HT and CH3T to HTO were larger than 99%. The collection efficiency of the two-stage trap sets for HTO was larger than 96% over a 12 h working period without blockage. The overall collection efficiencies of the TS-212 are therefore larger than 95% for the different chemical forms of environmental tritium. In addition, the remote control system makes sampling more automated, reducing the operator's workload. Based on the performance parameters described above, the TS-212 can be used to sample atmospheric tritium in its different chemical forms.

  1. Sampled-data and discrete-time H2 optimal control

    NARCIS (Netherlands)

    Trentelman, Harry L.; Stoorvogel, Anton A.

    1993-01-01

    This paper deals with the sampled-data H2 optimal control problem. Given a linear time-invariant continuous-time system, the problem of minimizing the H2 performance over all sampled-data controllers with a fixed sampling period can be reduced to a pure discrete-time H2 optimal control problem. This

  2. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions.

    Science.gov (United States)

    Tao, Guohua; Miller, William H

    2011-07-14

    An efficient time-dependent importance sampling method is developed for the Monte Carlo calculation of time correlation functions via the initial value representation (IVR) of semiclassical (SC) theory. A prefactor-free time-dependent sampling function weights the importance of a trajectory based on the magnitude of its contribution to the time correlation function, and global trial moves are used to facilitate efficient sampling of the phase space of initial conditions. The method can be generally applied to sampling rare events efficiently while avoiding becoming trapped in a local region of the phase space. Results presented in the paper for two system-bath models demonstrate the efficiency of this new importance sampling method for full SC-IVR calculations.

  3. Larger men have larger prostates: Detection bias in epidemiologic studies of obesity and prostate cancer risk.

    Science.gov (United States)

    Rundle, Andrew; Wang, Yun; Sadasivan, Sudha; Chitale, Dhananjay A; Gupta, Nilesh S; Tang, Deliang; Rybicki, Benjamin A

    2017-06-01

    Obesity is associated with risk of aggressive prostate cancer (PCa), but not with overall PCa risk. However, obese men have larger prostates, which may lower biopsy accuracy and cause a systematic bias toward the null in epidemiologic studies of overall risk. Within a cohort of 6692 men followed up after a biopsy or transurethral resection of the prostate (TURP) with benign findings, a nested case-control study was conducted of 495 prostate cancer cases and controls matched on age, race, follow-up duration, biopsy versus TURP, and procedure date. Data on body mass index and prostate volume at the time of the initial procedure were abstracted from medical records. Prior to consideration of differences in prostate volume, overweight (OR = 1.41; 95%CI 1.01, 1.97) and obese status (OR = 1.59; 95%CI 1.09, 2.33) at the time of the original benign biopsy or TURP were associated with PCa incidence during follow-up. Prostate volume did not significantly moderate the association between body size and PCa; however, it did act as an inverse confounder: adjustment for prostate volume increased the effect size for overweight by 22% (adjusted OR = 1.52; 95%CI 1.08, 2.14) and for obese status by 23% (adjusted OR = 1.77; 95%CI 1.20, 2.62). Larger prostate volume at the time of the original benign biopsy or TURP was inversely associated with PCa incidence during follow-up (OR = 0.92 per 10 cc difference in volume; 95%CI 0.88, 0.97). In analyses that stratified case-control pairs by tumor aggressiveness of the case, prostate volume acted as an inverse confounder in analyses of non-aggressive PCa but not in analyses of aggressive PCa. In studies of obesity and PCa, differences in prostate volume cause a bias toward the null, particularly in analyses of non-aggressive PCa. A pervasive underestimation of the association between obesity and overall PCa risk may exist in the literature. © 2017 Wiley Periodicals, Inc.

  4. Sampling returns for realized variance calculations: tick time or transaction time?

    NARCIS (Netherlands)

    Griffin, J.E.; Oomen, R.C.A.

    2008-01-01

    This article introduces a new model for transaction prices in the presence of market microstructure noise in order to study the properties of the price process on two different time scales, namely, transaction time where prices are sampled with every transaction and tick time where prices are

  5. Accurate Sample Time Reconstruction of Inertial FIFO Data

    Directory of Open Access Journals (Sweden)

    Sebastian Stieber

    2017-12-01

    In the context of modern cyber-physical systems, the accuracy of the underlying sensor data plays an increasingly important role in sensor data fusion and feature extraction. The raw events of multiple sensors have to be aligned in time to enable high-quality sensor fusion results. However, the growing number of simultaneously connected sensor devices makes energy-saving data acquisition and processing more and more difficult. Hence, most modern sensors offer a first-in-first-out (FIFO) interface to store multiple data samples and to relax timing constraints when handling multiple sensor devices. However, using the FIFO interface increases the negative influence of individual clock drifts (introduced by fabrication inaccuracies, temperature changes and wear-out effects) on the reconstruction of the sampling times. Furthermore, additional timing offset errors due to communication and software latencies increase with a growing number of sensor devices. In this article, we present an approach for accurate sample-time reconstruction, independent of the actual clock drift, with the help of an internal sensor timer. Such timers are already available in modern sensors manufactured in micro-electromechanical systems (MEMS) technology. The presented approach focuses on calculating accurate time stamps using the sensor FIFO interface in a forward-only processing manner as a robust and energy-saving solution. The proposed algorithm is able to lower the overall standard deviation of reconstructed sampling periods below 40 μs, while run-time savings of up to 42% are achieved compared to single-sample acquisition.
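
    The forward-only reconstruction idea can be sketched as follows: each FIFO read supplies the device-timer stamp of its last sample, and the mean period between successive reads replaces the nominal output data rate, so clock drift cancels out. The data layout and all numbers are hypothetical, not the paper's implementation.

```python
# Reconstruct per-sample timestamps from FIFO batch reads using the sensor's
# internal timer instead of trusting the nominal output data rate.
def reconstruct_timestamps(batches, nominal_period):
    """batches: list of (device_ticks_of_last_sample, n_samples) per FIFO read.
    Returns one timestamp per sample, in device-timer units, assuming the
    samples in each batch are equally spaced and end at the batch's tick stamp."""
    timestamps = []
    prev_tick = None
    for tick, n in batches:
        if prev_tick is None:
            period = nominal_period            # first batch: fall back to nominal rate
        else:
            period = (tick - prev_tick) / n    # actual mean period, drift included
        timestamps.extend(tick - (n - 1 - i) * period for i in range(n))
        prev_tick = tick
    return timestamps

# sensor clock runs 2% fast: true period 980 ticks instead of the nominal 1000
batches = [(980 * 4, 4), (980 * 8, 4)]  # ticks, 4 samples per FIFO read
ts = reconstruct_timestamps(batches, nominal_period=1000)
print(ts[-1] - ts[-2])  # recovered inter-sample period of the second batch
```

    Because each period estimate uses only already-received tick stamps, the scheme stays forward-only and needs no buffering beyond the previous batch's stamp.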

  6. Determination of selenium in food matrices by replicate sample neutron activation analysis

    International Nuclear Information System (INIS)

    Ventura, M.G.; Freitas, M.C.; Ventura, M.G.; Pacheco, A.M.G.

    2009-01-01

The replicate sample instrumental neutron activation method was optimized and used for the determination of selenium in foodstuffs. The method was reliable, yielding accurate results. Lower detection limits were obtained after each successive irradiation. Different irradiation conditions were used depending on the type of sample. For samples with higher selenium contents (meat, fish, eggs), the measured selenium in the first replicate was in all cases larger than the detection limit, but better accuracy was obtained with a larger number of replicates (2-3 replicates). For samples with extremely low selenium contents (vegetable samples), at least seven replicates were necessary to obtain a concentration value two times larger than the detection limit. (author)
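The replicate trade-off can be illustrated with simple counting statistics. This sketch is not the paper's procedure: it uses a Currie-style detection-limit approximation and assumed count rates to show why summing replicates (the signal grows as n while the background noise grows only as sqrt(n)) eventually lifts a weak selenium signal above the detection limit.

```python
def detection_limit(background_counts):
    # Currie-style approximation: L_D ≈ 2.71 + 4.65 * sqrt(background)
    return 2.71 + 4.65 * background_counts ** 0.5

signal_per_replicate = 60.0       # net Se counts per irradiation (assumed)
background_per_replicate = 400.0  # background counts per irradiation (assumed)

for n in (1, 3, 7):
    signal = n * signal_per_replicate                      # grows linearly with n
    limit = detection_limit(n * background_per_replicate)  # noise grows as sqrt(n)
    print(n, "detected" if signal > limit else "below detection limit")
```

With these assumed rates a single irradiation leaves the signal below the detection limit, while summing replicates brings it above, mirroring the paper's observation that low-content samples need several replicates.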

  7. Optimal time points sampling in pathway modelling.

    Science.gov (United States)

    Hu, Shiyan

    2004-01-01

Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as the related parameter estimation. However, few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models from only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way so as to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulty of selecting good initial values or from getting stuck in local optima, problems that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
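A minimal version of variance-minimizing time-point selection can be written down directly for a one-parameter model. The sketch below is an illustration under stated assumptions, not the paper's quantum-inspired algorithm: it brute-forces the three sampling times that maximize the Fisher information for the decay rate k in y(t) = A*exp(-k*t), which is equivalent to minimizing the asymptotic variance of the estimate.

```python
import math
from itertools import combinations

A, k = 1.0, 0.5                                  # assumed model parameters
candidates = [0.5 * i for i in range(1, 21)]     # candidate times 0.5 .. 10.0

def fisher_info(times):
    # For y(t) = A*exp(-k*t) with unit-variance Gaussian noise, the Fisher
    # information for k is the sum of squared sensitivities dy/dk = -A*t*e^(-k*t).
    return sum((A * t * math.exp(-k * t)) ** 2 for t in times)

# Brute-force the 3-point design that minimizes Var(k_hat) = 1 / I(k).
best = max(combinations(candidates, 3), key=fisher_info)
print(best)   # the chosen times cluster near t = 1/k, where sensitivity peaks
```

The exhaustive search is only feasible for tiny designs; the paper's evolutionary algorithm targets the same objective when the search space is too large to enumerate.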

  8. Digital timing: sampling frequency, anti-aliasing filter and signal interpolation filter dependence on timing resolution

    International Nuclear Information System (INIS)

    Cho, Sanghee; Grazioso, Ron; Zhang Nan; Aykac, Mehmet; Schmand, Matthias

    2011-01-01

The main focus of our study is to investigate how the performance of digital timing methods is affected by the sampling rate, anti-aliasing and signal interpolation filters. We used the Nyquist sampling theorem to address some basic questions, such as: What is the minimum sampling frequency? How accurate will the signal interpolation be? How do we validate the timing measurements? The preferred sampling rate would be as low as possible, considering the high cost and power consumption of high-speed analog-to-digital converters. However, when the sampling rate is too low, the aliasing effect produces artifacts in the timing resolution estimates: the shape of the timing profile is distorted and the FWHM values of the profile fluctuate as the source location changes. Anti-aliasing filters are required in this case to avoid the artifacts, but the timing is degraded as a result. When the sampling rate is marginally above the Nyquist rate, proper signal interpolation is important. A sharp roll-off (higher-order) filter is required to separate the baseband signal from its replicates to avoid aliasing, but in return the computational cost will be higher. We demonstrate the analysis through a digital timing study using fast LSO scintillation crystals as used in time-of-flight PET scanners. From the study, we observed no significant timing resolution degradation down to a 1.3 GHz sampling frequency, at which the computational requirement for the signal interpolation is reasonably low. A so-called sliding test is proposed as a validation tool that checks for constant timing-resolution behavior of a given timing pick-off method, regardless of source location. Lastly, a performance comparison for several digital timing methods is also shown.
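The aliasing artifact the authors guard against is easy to reproduce. The following toy example (illustrative frequencies, not the detector signals) shows that, sampled below the Nyquist rate, two different tones produce identical samples, so no interpolation filter applied afterwards can tell them apart.

```python
import math

fs = 1.2          # Hz sampling rate, below the 1.8 Hz Nyquist rate for 0.9 Hz
n_samples = 12

# a 0.9 Hz tone and the 0.3 Hz tone it aliases to at this sampling rate
tone = [math.sin(2 * math.pi * 0.9 * n / fs) for n in range(n_samples)]
alias = [-math.sin(2 * math.pi * 0.3 * n / fs) for n in range(n_samples)]

max_diff = max(abs(a - b) for a, b in zip(tone, alias))
print(max_diff)   # numerically zero: every sample of the two tones agrees
```

This is why an anti-aliasing filter must act before the ADC: once the high-frequency content has folded down onto the baseband, the ambiguity cannot be undone digitally.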

  9. Adaptive Sampling of Time Series During Remote Exploration

    Science.gov (United States)

    Thompson, David R.

    2012-01-01

This work deals with the challenge of online adaptive data collection in a time series. A remote sensor or explorer agent adapts its rate of data collection in order to track anomalous events while obeying constraints on time and power. This problem is challenging because the agent has limited visibility (all its datapoints lie in the past) and limited control (it can only decide when to collect its next datapoint). This problem is treated from an information-theoretic perspective, fitting a probabilistic model to collected data and optimizing the future sampling strategy to maximize information gain. The performance characteristics of stationary and nonstationary Gaussian process models are compared. Self-throttling sensors could benefit environmental sensor networks and monitoring as well as robotic exploration. Explorer agents can improve performance by adjusting their data collection rate, preserving scarce power or bandwidth resources during uninteresting times while fully covering anomalous events of interest. For example, a remote earthquake sensor could conserve power by limiting its measurements during normal conditions and increasing its cadence during rare earthquake events. A similar capability could improve sensor platforms traversing a fixed trajectory, such as an exploration rover transect or a deep space flyby. These agents can adapt observation times to improve sample coverage during moments of rapid change. An adaptive sampling approach couples sensor autonomy, instrument interpretation, and sampling. The challenge is addressed as an active learning problem, which already has extensive theoretical treatment in the statistics and machine learning literature. A statistical Gaussian process (GP) model is employed to guide sample decisions that maximize information gain. Nonstationary (e.g., time-varying) covariance relationships permit the system to represent and track local anomalies, in contrast with current GP approaches. Most common GP models
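The information-gain criterion can be sketched with a stationary GP. This is a hedged illustration, not the flight software: the kernel, noise level, and data are assumed, and for a Gaussian posterior the information gain of a candidate sample grows with its predictive variance, so the agent samples where the model is least certain.

```python
import numpy as np

def rbf(a, b, length=1.0):
    # squared-exponential covariance between two sets of times
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

t_obs = np.array([0.0, 1.0, 2.0, 3.0])   # times already sampled
noise = 1e-4                              # assumed observation noise

K = rbf(t_obs, t_obs) + noise * np.eye(len(t_obs))
t_cand = np.linspace(3.1, 8.0, 50)        # candidate future sample times
k_star = rbf(t_cand, t_obs)

# GP posterior (predictive) variance at each candidate time; note it
# depends only on the sampling times, not on the observed values
var = 1.0 - np.sum(k_star @ np.linalg.inv(K) * k_star, axis=1)

# the entropy of a Gaussian grows with its variance, so the most
# informative next sample is where the model is least certain
t_next = t_cand[np.argmax(var)]
print(t_next)
```

With a stationary kernel the variance simply grows with distance from past data, so the agent drifts toward the farthest candidate; the article's nonstationary covariances are what let the criterion instead concentrate samples around local anomalies.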

  10. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    Science.gov (United States)

    Almutairy, Meznah; Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.
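The two sampling schemes are easy to state in code. The toy example below (assumed k, window size, and sequence) keeps every step-th k-mer position for fixed sampling and, for minimizer sampling, the position of the lexicographically smallest k-mer in each window of w consecutive k-mers; on most inputs the minimizer scheme retains more positions, matching the observation that fixed sampling yields the smaller index.

```python
def kmers(seq, k):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def fixed_sampling(seq, k, step):
    # keep every step-th k-mer position
    return {i for i in range(0, len(seq) - k + 1, step)}

def minimizer_sampling(seq, k, w):
    # for every window of w consecutive k-mers, keep the position of the
    # lexicographically smallest one (ties broken by leftmost position)
    km = kmers(seq, k)
    picked = set()
    for start in range(len(km) - w + 1):
        window = km[start:start + w]
        picked.add(start + min(range(w), key=lambda j: (window[j], j)))
    return picked

seq = "ACGTACGGTCAGTACCGATTACA"   # toy sequence, not the human genome
fixed = fixed_sampling(seq, k=5, step=4)
mins = minimizer_sampling(seq, k=5, w=4)
print(len(fixed), len(mins))   # minimizer sampling keeps more positions here
```

Minimizer sampling's advantage is that a query can apply the same window rule to sample its own k-mers, whereas with fixed sampling every query k-mer must be looked up; the paper's point is that this saving is outweighed by the larger number of shared occurrences to process.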

  13. When the proton becomes larger

    CERN Multimedia

    CERN Bulletin

    2011-01-01

    The TOTEM experiment at the LHC has just confirmed that, at high energy, protons behave as if they were becoming larger. In more technical terms, their total cross-section – a parameter linked to the proton-proton interaction probability – increases with energy. This phenomenon, expected from previous measurements performed at much lower energy, has now been confirmed for the first time at the LHC’s unprecedented energy.   One arm of a TOTEM T2 detector during its installation at interaction point 5. A composite particle like the proton is a complex system that in no way resembles a static Lego construction: sub-components move inside and interactions keep the whole thing together, but in a very dynamic way. This partly explains why even the very common proton can still be hiding secrets about its nature, decades after its discovery. One way of studying the inner properties of protons is to observe how they interact with each other, which, in technical terms, i...

  14. Identification of continuous-time systems from samples of input ...

    Indian Academy of Sciences (India)

Abstract. This paper presents an introductory survey of the methods that have been developed for identification of continuous-time systems from samples of input–output data. The two basic approaches may be described as (i) the indirect method, where first a discrete-time model is estimated from the sampled data and then ...

  15. Algae viability over time in a ballast water sample

    Science.gov (United States)

    Gollasch, Stephan; David, Matej

    2018-03-01

The biology of vessels' ballast water needs to be analysed for several reasons, one of these being performance tests of ballast water management systems. This analysis includes a viability assessment of phytoplankton. To overcome the logistical problems of getting algae sample processing gear on board a vessel to document algae viability, samples may be transported to land-based laboratories. Concerns were raised about how the storage conditions of the sample may impact algae viability over time and what the most appropriate storage conditions are. Here we answer these questions with a long-term algae viability study with daily sample analysis using Pulse-Amplitude Modulated (PAM) fluorometry. The sample was analysed over 79 days. We tested different storage conditions: fridge and room temperature, with and without light. It seems that during the first two weeks of the experiment the viability remained almost unchanged, with a slight downward trend. In the period before the sample was split, a slightly stronger downward viability trend was observed, which continued at a similar rate towards the end of the experiment. After the sample was split, the strongest viability reduction was measured for the sample stored without light at room temperature. We concluded that the storage conditions, especially temperature and light exposure, have a stronger impact on algae viability than the storage duration, and that inappropriate storage conditions reduce algal viability. A sample storage time of up to two weeks in a dark and cool environment has little influence on organism viability. This indicates that a two-week interval between sample collection on board a vessel and the viability measurement in a land-based laboratory may not be critical.

  16. Low-sampling-rate ultra-wideband channel estimation using equivalent-time sampling

    KAUST Repository

    Ballal, Tarig

    2014-09-01

In this paper, a low-sampling-rate scheme for ultra-wideband channel estimation is proposed. The scheme exploits multiple observations generated by transmitting multiple pulses. In the proposed scheme, P pulses are transmitted to produce channel impulse response estimates at a desired sampling rate, while the ADC samples at a rate that is P times slower. To avoid loss of fidelity, the number of sampling periods (based on the desired rate) in the inter-pulse interval is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this case, and to achieve good overall channel estimation performance without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. It is shown that this estimator is related to the Bayesian linear minimum mean squared error (LMMSE) estimator. Channel estimation performance of the proposed sub-sampling scheme combined with the new estimator is assessed in simulation. The results show that a large reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in almost all cases, while in the high SNR regime it also outperforms the LMMSE estimator. In addition to channel estimation, a synchronization method is also proposed that utilizes the same pulse sequence used for channel estimation. © 2014 IEEE.
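The co-prime condition can be illustrated with modular arithmetic. In this toy sketch (units and names are assumptions, not the paper's notation), times are measured in desired-rate sampling periods: the slow ADC ticks every P periods and pulses repeat every N periods, so the sample offset inside the k-th pulse is (-kN) mod P, and all P phases are visited exactly when gcd(P, N) = 1.

```python
from math import gcd

def phases_covered(P, N, num_pulses):
    """Which sample offsets (phases) inside the pulse window get visited.
    Times are in units of the desired sampling period: the slow ADC ticks
    every P periods, and pulses start every N periods."""
    phases = set()
    for pulse in range(num_pulses):
        start = pulse * N
        # ADC ticks land on multiples of P; their offset relative to the
        # pulse start is congruent to (-start) mod P
        phases.add((-start) % P)
    return phases

for P, N in ((5, 7), (5, 10)):
    tag = "co-prime" if gcd(P, N) == 1 else "not co-prime"
    print(P, N, tag, sorted(phases_covered(P, N, num_pulses=P)))
```

With P = 5 and N = 7, all five phases appear after five pulses, so interleaving the slow samples reconstructs the full-rate response; with N = 10, which shares a factor with P, the same phase is revisited forever. This is why clock drift that shifts pulse locations can break the scheme, motivating the paper's BDU-based estimator.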

  17. On the sample transport time of a pneumatic transfer system

    International Nuclear Information System (INIS)

    Kondo, Yoshihide

    1983-01-01

The counts accumulated in the measuring system are affected by variations in the sample transport time in cyclic activation experiments with a mechanical sample transfer system. With the pneumatic transfer system that has been set up, the transport time varies with factors such as the form, size and weight of the sample, the pneumatic pressure, and so on. Understanding the relationships between the transport time and these factors is essential for experiments with this transfer system. (author)

  18. Ecological tolerances of Miocene larger benthic foraminifera from Indonesia

    Science.gov (United States)

    Novak, Vibor; Renema, Willem

    2018-01-01

    To provide a comprehensive palaeoenvironmental reconstruction based on larger benthic foraminifera (LBF), a quantitative analysis of their assemblage composition is needed. Besides microfacies analysis which includes environmental preferences of foraminiferal taxa, statistical analyses should also be employed. Therefore, detrended correspondence analysis and cluster analysis were performed on relative abundance data of identified LBF assemblages deposited in mixed carbonate-siliciclastic (MCS) systems and blue-water (BW) settings. Studied MCS system localities include ten sections from the central part of the Kutai Basin in East Kalimantan, ranging from late Burdigalian to Serravallian age. The BW samples were collected from eleven sections of the Bulu Formation on Central Java, dated as Serravallian. Results from detrended correspondence analysis reveal significant differences between these two environmental settings. Cluster analysis produced five clusters of samples; clusters 1 and 2 comprise dominantly MCS samples, clusters 3 and 4 with dominance of BW samples, and cluster 5 showing a mixed composition with both MCS and BW samples. The results of cluster analysis were afterwards subjected to indicator species analysis resulting in the interpretation that generated three groups among LBF taxa: typical assemblage indicators, regularly occurring taxa and rare taxa. By interpreting the results of detrended correspondence analysis, cluster analysis and indicator species analysis, along with environmental preferences of identified LBF taxa, a palaeoenvironmental model is proposed for the distribution of LBF in Miocene MCS systems and adjacent BW settings of Indonesia.

  19. Adaptive control of theophylline therapy: importance of blood sampling times.

    Science.gov (United States)

    D'Argenio, D Z; Khakmahd, K

    1983-10-01

    A two-observation protocol for estimating theophylline clearance during a constant-rate intravenous infusion is used to examine the importance of blood sampling schedules with regard to the information content of resulting concentration data. Guided by a theory for calculating maximally informative sample times, population simulations are used to assess the effect of specific sampling times on the precision of resulting clearance estimates and subsequent predictions of theophylline plasma concentrations. The simulations incorporated noise terms for intersubject variability, dosing errors, sample collection errors, and assay error. Clearance was estimated using Chiou's method, least squares, and a Bayesian estimation procedure. The results of these simulations suggest that clinically significant estimation and prediction errors may result when using the above two-point protocol for estimating theophylline clearance if the time separating the two blood samples is less than one population mean elimination half-life.
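The estimation problem itself is compact. The sketch below is illustrative and is not one of the paper's three estimators: it fits clearance by least squares to two noiseless samples of the one-compartment constant-infusion model C(t) = (R/CL)(1 - exp(-CL*t/V)), with the infusion rate, volume of distribution, and sampling times all assumed values.

```python
import math

R = 40.0   # mg/h infusion rate (assumed)
V = 35.0   # L volume of distribution (assumed known for this sketch)

def conc(t, cl):
    # one-compartment model during a constant-rate IV infusion
    return (R / cl) * (1.0 - math.exp(-(cl / V) * t))

true_cl = 2.8                       # L/h, the value we try to recover
t1, t2 = 2.0, 10.0                  # blood sampling times (h), well separated
data = [(t1, conc(t1, true_cl)), (t2, conc(t2, true_cl))]

def sse(cl):
    # sum of squared errors between the model and the two samples
    return sum((c - conc(t, cl)) ** 2 for t, c in data)

grid = [1.0 + 0.01 * i for i in range(400)]   # candidate clearances, L/h
cl_hat = min(grid, key=sse)
print(cl_hat)
```

Moving t1 and t2 close together makes the objective nearly flat in CL once noise is added, which mirrors the paper's finding that two samples separated by less than a mean elimination half-life yield imprecise clearance estimates.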

  20. Using forbidden ordinal patterns to detect determinism in irregularly sampled time series.

    Science.gov (United States)

    Kulp, C W; Chobot, J M; Niskala, B J; Needhammer, C J

    2016-02-01

    It is known that when symbolizing a time series into ordinal patterns using the Bandt-Pompe (BP) methodology, there will be ordinal patterns called forbidden patterns that do not occur in a deterministic series. The existence of forbidden patterns can be used to identify deterministic dynamics. In this paper, the ability to use forbidden patterns to detect determinism in irregularly sampled time series is tested on data generated from a continuous model system. The study is done in three parts. First, the effects of sampling time on the number of forbidden patterns are studied on regularly sampled time series. The next two parts focus on two types of irregular-sampling, missing data and timing jitter. It is shown that forbidden patterns can be used to detect determinism in irregularly sampled time series for low degrees of sampling irregularity (as defined in the paper). In addition, comments are made about the appropriateness of using the BP methodology to symbolize irregularly sampled time series.
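The Bandt-Pompe symbolization and the forbidden-pattern count fit in a few lines. The sketch below is illustrative (the seed, series length, and pattern order are assumptions): for the fully chaotic logistic map, two consecutive decreases are impossible, so the decreasing order-3 pattern never appears, while a random series of the same length would typically visit all six patterns.

```python
from itertools import permutations

def ordinal_pattern(window):
    # indices of the window sorted by value; e.g. a strictly decreasing
    # window of length 3 yields the pattern (2, 1, 0)
    return tuple(sorted(range(len(window)), key=lambda i: window[i]))

def observed_patterns(series, order):
    seen = set()
    for i in range(len(series) - order + 1):
        seen.add(ordinal_pattern(series[i:i + order]))
    return seen

# deterministic test signal: the fully chaotic logistic map x -> 4x(1-x)
x, logistic = 0.4, []
for _ in range(2000):
    x = 4.0 * x * (1.0 - x)
    logistic.append(x)

order = 3
pats = observed_patterns(logistic, order)
forbidden = len(list(permutations(range(order)))) - len(pats)
print(forbidden)   # the decreasing pattern (2, 1, 0) never occurs here
```

The presence of at least one such missing pattern is the determinism signature the paper tests; its contribution is showing how robust this signature remains when the series is sampled irregularly.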

  1. Possible Evolution of the Pulsar Braking Index from Larger than Three to About One

    Energy Technology Data Exchange (ETDEWEB)

    Tong, H. [School of Physics and Electronic Engineering, Guangzhou University, 510006 Guangzhou (China); Kou, F. F., E-mail: htong_2005@163.com [Xinjiang Astronomical Observatory, Chinese Academy of Sciences, Urumqi, Xinjiang 830011 (China)

    2017-03-10

The coupled evolution of pulsar rotation and inclination angle in the wind braking model is calculated. The oblique pulsar tends to align. The pulsar alignment affects its spin-down behavior. As a pulsar evolves from the magneto-dipole-radiation-dominated case to the particle-wind-dominated case, the braking index first increases and then decreases. At early times, the braking index may be larger than three. During the long period that follows, the braking index is always smaller than three. The minimum braking index is about one. This can explain the simultaneous existence of braking indices larger than three and low braking indices. The pulsar braking index is expected to evolve from larger than three to about one. The general trend is for the pulsar braking index to evolve from the Crab-like case to the Vela-like case.

  3. Method for Hot Real-Time Sampling of Gasification Products

    Energy Technology Data Exchange (ETDEWEB)

    Pomeroy, Marc D [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-29

The Thermochemical Process Development Unit (TCPDU) at the National Renewable Energy Laboratory (NREL) is a highly instrumented half-ton/day pilot-scale plant capable of demonstrating industrially relevant thermochemical technologies for lignocellulosic biomass conversion, including gasification. Gasification creates primarily syngas (a mixture of hydrogen and carbon monoxide) that can be utilized with synthesis catalysts to form transportation fuels and other valuable chemicals. Biomass-derived gasification products are a very complex mixture of chemical components that typically contain sulfur and nitrogen species that can act as catalyst poisons for tar reforming and synthesis catalysts. Hot online sampling techniques, such as Molecular Beam Mass Spectrometry (MBMS), and gas chromatographs with sulfur- and nitrogen-specific detectors can provide real-time analysis, yielding operational indicators of performance. Sampling typically requires coated sampling lines to minimize trace-sulfur interactions with steel surfaces. Other materials used inline have also shown conversion of sulfur species into new components and must be minimized. Residence time within the sampling lines must also be kept to a minimum to limit further reaction chemistry. Solids from ash and char contribute to plugging and must be filtered at temperature. Experience at NREL has revealed several key factors to consider when designing and installing an analytical sampling system for biomass gasification products: minimizing sampling distance, filtering effectively as close to the source as possible, proper line sizing, proper line materials or coatings, even heating of all components, minimizing pressure drops, and additional filtering or traps after pressure drops.

  4. MASSIVE GALAXIES ARE LARGER IN DENSE ENVIRONMENTS: ENVIRONMENTAL DEPENDENCE OF MASS–SIZE RELATION OF EARLY-TYPE GALAXIES

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Yongmin; Im, Myungshin; Kim, Jae-Woo, E-mail: yymx2@astro.snu.ac.kr, E-mail: mim@astro.snu.ac.kr [Center for the Exploration of the Origin of the Universe (CEOU), Astronomy Program, Department of Physics and Astronomy, Seoul National University, 599 Gwanak-ro, Gwanak-gu, Seoul, 151-742 (Korea, Republic of)

    2017-01-01

Under the Λ cold dark matter (ΛCDM) cosmological models, massive galaxies are expected to be larger in denser environments through frequent hierarchical mergers with other galaxies. Yet, observational studies of low-redshift early-type galaxies have shown no such trend, standing as a puzzle to solve during the past decade. We analyzed 73,116 early-type galaxies at 0.1 ≤ z < 0.15, adopting a robust nonparametric size measurement technique and extending the analysis to many massive galaxies. We find for the first time that local early-type galaxies heavier than 10^11.2 M_⊙ show a clear environmental dependence in the mass–size relation, in such a way that galaxies are as much as 20%–40% larger in the densest environments than in underdense environments. Splitting the sample into the brightest cluster galaxies (BCGs) and non-BCGs does not affect the result. This result agrees with the ΛCDM cosmological simulations and suggests that mergers played a significant role in the growth of massive galaxies in dense environments as expected in theory.

  5. Insights into explosion dynamics at Stromboli in 2009 from ash samples collected in real-time

    Science.gov (United States)

    Taddeucci, J.; Lautze, N.; Andronico, D.; D'Auria, L.; Niemeijer, A.; Houghton, B.; Scarlato, P.

    2012-04-01

Rapid characterization of tephra during explosive eruptions can provide valuable insights into eruptive mechanisms, complementing other monitoring systems. Here we present a perspective on Stromboli's conduit processes obtained by linking ash textures to geophysical estimates of eruption parameters for observed explosions. A three-day campaign at Stromboli was undertaken by Italy's Istituto Nazionale di Geofisica e Vulcanologia (INGV) in October 2009. At this time activity was moderately intense, with an average of 4 to 5 explosions (both ash-rich and ash-poor) per hour at each of the SW and NE vents. A total of fifteen ash samples were collected in real time. We used binocular and scanning electron microscopes to analyze the components, grain size and morphology distributions, and surface chemistry of ash particles within eight selected samples. In addition, the INGV monitoring network provided visual, thermal, and seismic information on the explosions that generated the sampled ash. In each sample, the proportion of fluidal, glassy sideromelane (as opposed to blocky, microcrystalline tachylite plus lithics), the degree of "chemical freshness" (as opposed to chemical alteration), and the average size of particles appear to correlate directly with the maximum height and the seismic amplitude of the corresponding explosion, and inversely with the amount of ash erupted, as estimated from monitoring videos. These observations suggest that more violent explosions (i.e., those driven by the release of larger and more pressurized gas volumes) produce ash via the fragmentation of hotter, more fluid magma, while weaker ones mostly erupt ash-sized particles derived from the fragmentation of colder magma and incorporation of conduit wall debris. The formation of fluidal ash particles (up to Pele's hairs) requires aerodynamic deformation of a relatively low-viscosity magma, in agreement with the strong acceleration imposed upon fragmented magma clots by the rapid expansion of

  6. Transformation-cost time-series method for analyzing irregularly sampled data.

    Science.gov (United States)

    Ozken, Ibrahim; Eroglu, Deniz; Stemler, Thomas; Marwan, Norbert; Bagci, G Baris; Kurths, Jürgen

    2015-06-01

Irregular sampling of data sets is one of the challenges often encountered in time-series analysis, since traditional methods cannot be applied and the frequently used interpolation approach can corrupt the data and bias the subsequent analysis. Here we present the TrAnsformation-Cost Time-Series (TACTS) method, which allows us to analyze irregularly sampled data sets without degrading the quality of the data set. Instead of using interpolation, we consider time-series segments and determine how close they are to each other by determining the cost needed to transform one segment into the following one. Using a limited set of operations (with associated costs) to transform the time-series segments, we determine a new time series: our transformation-cost time series. This cost time series is regularly sampled and can be analyzed using standard methods. While our main interest is the analysis of paleoclimate data, we develop our method using numerical examples such as the logistic map and the Rössler oscillator. The numerical data allow us to test the stability of our method against noise and for different irregular samplings. In addition, we provide guidance on how to choose the associated costs based on the time series at hand. The usefulness of the TACTS method is demonstrated using speleothem data from the Secret Cave in Borneo, which is a good proxy for paleoclimatic variability in monsoon activity around the maritime continent.
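A stripped-down version of the segment-to-segment cost can illustrate the idea. The sketch below is not the published TACTS cost function: it matches points in order and charges assumed weights for shifting a point in time, changing its value, and adding or deleting unmatched points.

```python
def transform_cost(seg_a, seg_b, lambda_t=1.0, lambda_v=1.0, lambda_0=2.0):
    """Cost to transform one segment into the next; each segment is a
    list of (time, value) points with times relative to the segment start.
    The weights lambda_t, lambda_v, lambda_0 are illustrative."""
    n = min(len(seg_a), len(seg_b))
    cost = 0.0
    # shift-in-time and change-in-value costs for points matched in order
    for (ta, va), (tb, vb) in zip(seg_a[:n], seg_b[:n]):
        cost += lambda_t * abs(ta - tb) + lambda_v * abs(va - vb)
    # add/delete cost for unmatched points
    cost += lambda_0 * abs(len(seg_a) - len(seg_b))
    return cost

# toy irregularly sampled series cut into two fixed-duration segments
seg1 = [(0.0, 1.0), (0.7, 1.2), (1.1, 0.9)]
seg2 = [(0.0, 1.1), (0.4, 1.0)]   # times already relative to segment start
print(transform_cost(seg1, seg2))
```

Applying such a cost to every pair of consecutive segments yields a regularly sampled cost series, which is what makes standard regularly-sampled analysis tools applicable again.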

  8. Sampling and Timing: A Task for the Environmental Process

    NARCIS (Netherlands)

    Hilderink, G.H.; Broenink, Johannes F.

    2003-01-01

    Sampling and timing are considered a responsibility of the environment of the controller software. In this paper we illustrate a concept whereby an environmental process and multi-way events play an important role in applying timing to untimed CSP software architectures. We use this timing concept

  9. Target Tracking of a Linear Time Invariant System under Irregular Sampling

    Directory of Open Access Journals (Sweden)

    Jin Xue-Bo

    2012-11-01

    Full Text Available Due to event-triggered sampling, or the aim of reducing data storage, many tracking applications encounter irregular sampling times. By calculating the matrix exponential using an inverse Laplace transform, this paper transforms the irregular-sampling tracking problem into one of tracking a system with time-varying parameters. The developed method, based on the standard Kalman filter, is used to track a target on a simulated trajectory and in video tracking. Simulation results show that good estimation performance is obtained even when the measurement sampling times are highly irregular.
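
    The core idea above, discretizing a continuous-time LTI model per measurement interval via the matrix exponential and then running an ordinary Kalman filter, can be sketched as follows. This is a minimal illustration under assumed dynamics (a constant-velocity target) with made-up noise levels, not the paper's implementation; the `Q * dt` process-noise scaling is a common simplification.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def kalman_irregular(times, zs, A, H, Q, R, x0, P0):
        """Kalman filter for a continuous-time LTI model x' = A x,
        discretized per interval: F_k = expm(A * dt_k)."""
        x, P = x0.copy(), P0.copy()
        estimates = []
        t_prev = times[0]
        for t, z in zip(times, zs):
            dt = t - t_prev
            F = expm(A * dt)                  # time-varying transition matrix
            x = F @ x                          # predict
            P = F @ P @ F.T + Q * max(dt, 1e-12)
            S = H @ P @ H.T + R                # measurement update
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(len(x)) - K @ H) @ P
            estimates.append(x.copy())
            t_prev = t
        return np.array(estimates)

    # constant-velocity target observed at irregular times
    rng = np.random.default_rng(0)
    times = np.cumsum(rng.uniform(0.1, 1.0, 200))      # irregular sampling
    true_pos = 2.0 * times + 5.0                       # velocity 2, offset 5
    zs = (true_pos + rng.normal(0, 0.5, times.size)).reshape(-1, 1)
    A = np.array([[0.0, 1.0], [0.0, 0.0]])             # position/velocity model
    H = np.array([[1.0, 0.0]])                         # observe position only
    est = kalman_irregular(times, zs, A, H, Q=np.eye(2) * 1e-3,
                           R=np.array([[0.25]]),
                           x0=np.array([0.0, 0.0]), P0=np.eye(2) * 10.0)
    print(est[-1])   # final [position, velocity] estimate
    ```

    Because `expm(A * dt)` is recomputed per interval, the same filter handles any sampling pattern without interpolating the measurements.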

  10. Assessing the precision of a time-sampling-based study among GPs: balancing sample size and measurement frequency.

    Science.gov (United States)

    van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald

    2017-12-04

    Our research is based on a time-sampling technique, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In that study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The sample size required for this method is important for health workforce planners to know if they want to apply it to target groups who are hard to reach or if fewer resources are available. For this time-sampling method, however, standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and the frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for various numbers of GPs included in the dataset and for various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 to 3 h as the number of GPs increased from one to 50. Beyond that point, precision continued to increase, but with diminishing returns for each additional GP. Likewise, the analyses showed how the number of participants required decreases if more measurements per participant are taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires only 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the
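
    The two variance sources the abstract distinguishes, between-participant fluctuation and within-participant measurement fluctuation, can be illustrated with a toy Monte Carlo. Everything here is assumed for illustration (the 112-hour "waking week" frame, the 45 ± 10 h workload distribution, the Bernoulli response model); it is not the study's actual model, but it reproduces the qualitative trade-off between sample size and measurement frequency.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    WEEK_HOURS = 112  # assumed sampling frame: 16 waking hours x 7 days

    def ci_half_width(n_gps, n_meas, n_sim=2000):
        """Monte Carlo 95% CI half-width for the mean weekly working hours,
        combining between-GP variation and within-GP measurement noise."""
        true_hours = rng.normal(45, 10, size=(n_sim, n_gps)).clip(10, 90)
        p_work = true_hours / WEEK_HOURS           # prob. a probe finds GP working
        frac = rng.binomial(n_meas, p_work) / n_meas   # sampled working fraction
        est = frac.mean(axis=1) * WEEK_HOURS           # estimated mean hours
        return 1.96 * est.std()

    # more GPs, or more probes per GP, both narrow the interval
    for n_gps in (10, 50, 300):
        print(n_gps, round(ci_half_width(n_gps, n_meas=56), 2))  # 56 ~ one SMS per 3-h slot
    ```

    The diminishing returns noted in the abstract appear here too: the half-width scales roughly with 1/sqrt(n_gps), so each additional participant buys less precision.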

  11. Non-Cartesian MRI scan time reduction through sparse sampling

    NARCIS (Netherlands)

    Wajer, F.T.A.W.

    2001-01-01

    Non-Cartesian MRI Scan-Time Reduction through Sparse Sampling Magnetic resonance imaging (MRI) signals are measured in the Fourier domain, also called k-space. Samples of the MRI signal cannot be taken at will, but lie along k-space trajectories determined by the magnetic field gradients. MRI

  12. In-Sample Confidence Bands and Out-of-Sample Forecast Bands for Time-Varying Parameters in Observation Driven Models

    NARCIS (Netherlands)

    Blasques, F.; Koopman, S.J.; Lasak, K.A.; Lucas, A.

    2016-01-01

    We study the performances of alternative methods for calculating in-sample confidence and out-of-sample forecast bands for time-varying parameters. The in-sample bands reflect parameter uncertainty, while the out-of-sample bands reflect not only parameter uncertainty, but also innovation

  13. Eigenvalue sensitivity of sampled time systems operating in closed loop

    Science.gov (United States)

    Bernal, Dionisio

    2018-05-01

    The use of feedback to create closed-loop eigenstructures with high sensitivity has received some attention in the Structural Health Monitoring field. Although practical implementation is necessarily digital, and thus in sampled time, work thus far has centered on the continuous-time framework, both in design and in checking performance. It is shown in this paper that the performance in discrete time, at typical sampling rates, can differ notably from that anticipated in the continuous-time formulation, and that discrepancies can be particularly large in the real part of the eigenvalue sensitivities; one consequence is significant error in the (linear) estimate of the level of damage at which closed-loop stability is lost. As one anticipates, explicit consideration of the sampling rate poses no special difficulties in the closed-loop eigenstructure design, and the relevant expressions are developed in the paper, including a formula for the efficient evaluation of the derivative of the matrix exponential based on the theory of complex perturbations. The paper presents an easily reproduced numerical example showing the level of error that can result when the discrete-time implementation of the controller is not considered.
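
    The complex-perturbation trick mentioned for differentiating the matrix exponential can be demonstrated in a few lines. This is a generic complex-step sketch, not the paper's formula: because `expm` is analytic, the directional (Fréchet) derivative of `expm(A + s*B)` at `s = 0` is recovered from the imaginary part of a tiny purely imaginary step, with no subtractive cancellation. The matrices here are arbitrary stand-ins.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def dexpm_complex_step(A, B, h=1e-20):
        """Derivative of expm(A + s*B) w.r.t. s at s = 0 via a complex step.
        Unlike finite differences, the step h can be tiny without round-off loss."""
        return expm(A + 1j * h * B).imag / h

    rng = np.random.default_rng(2)
    A = rng.normal(size=(4, 4))        # nominal system matrix (illustrative)
    B = rng.normal(size=(4, 4))        # perturbation direction (illustrative)

    D_cs = dexpm_complex_step(A, B)

    # cross-check against a central finite difference
    eps = 1e-6
    D_fd = (expm(A + eps * B) - expm(A - eps * B)) / (2 * eps)
    print(np.max(np.abs(D_cs - D_fd)))
    ```

    The two estimates agree to finite-difference accuracy, while the complex-step version is immune to the cancellation that limits how small `eps` may be.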

  14. Weighted statistical parameters for irregularly sampled time series

    Science.gov (United States)

    Rimoldini, Lorenzo

    2014-01-01

    Unevenly spaced time series are common in astronomy because of the day-night cycle, weather conditions, dependence on the source position in the sky, allocated telescope time and corrupt measurements, for example, or inherent to the scanning law of satellites like Hipparcos and the forthcoming Gaia. Irregular sampling often causes clumps of measurements and gaps with no data which can severely disrupt the values of estimators. This paper aims at improving the accuracy of common statistical parameters when linear interpolation (in time or phase) can be considered an acceptable approximation of a deterministic signal. A pragmatic solution is formulated in terms of a simple weighting scheme, adapting to the sampling density and noise level, applicable to large data volumes at minimal computational cost. Tests on time series from the Hipparcos periodic catalogue led to significant improvements in the overall accuracy and precision of the estimators with respect to the unweighted counterparts and those weighted by inverse-squared uncertainties. Automated classification procedures employing statistical parameters weighted by the suggested scheme confirmed the benefits of the improved input attributes. The classification of eclipsing binaries, Mira, RR Lyrae, Delta Cephei and Alpha2 Canum Venaticorum stars employing exclusively weighted descriptive statistics achieved an overall accuracy of 92 per cent, about 6 per cent higher than with unweighted estimators.
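
    A minimal sketch of density-adaptive weighting in the spirit of the scheme described above (the published scheme also adapts to noise level; this shows only the sampling-density part, with trapezoidal interval weights as an assumed concrete choice). Clumped samples share the interval they cover and are therefore down-weighted.

    ```python
    import numpy as np

    def interval_weights(t):
        """Trapezoidal weights: each sample represents half the gap to each
        neighbour, so clumps of measurements are down-weighted."""
        t = np.asarray(t, dtype=float)
        dt = np.diff(t)
        w = np.empty_like(t)
        w[0] = dt[0] / 2
        w[-1] = dt[-1] / 2
        w[1:-1] = (dt[:-1] + dt[1:]) / 2
        return w / w.sum()

    rng = np.random.default_rng(3)
    # heavily clumped sampling of one period of a sine (true time average = 0)
    t = np.sort(np.concatenate([rng.uniform(0.0, 0.5, 80),        # dense clump
                                rng.uniform(0.5, 2 * np.pi, 20)]))
    x = np.sin(t)
    w = interval_weights(t)
    unweighted = x.mean()          # biased toward the clump
    weighted = np.sum(w * x)       # approximates the true time average
    print(unweighted, weighted)
    ```

    The unweighted mean is pulled toward the over-sampled interval, while the interval-weighted mean stays near the true time average at essentially no extra computational cost.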

  15. Enhanced conformational sampling using enveloping distribution sampling.

    Science.gov (United States)

    Lin, Zhixiong; van Gunsteren, Wilfred F

    2013-10-14

    Lessening the problem of insufficient conformational sampling in biomolecular simulations remains a major challenge in computational biochemistry. In this article, an application of the method of enveloping distribution sampling (EDS) is proposed that addresses this challenge, and its sampling efficiency is demonstrated in simulations of a hexa-β-peptide whose conformational equilibrium encompasses two different helical folds, i.e., a right-handed 2.7(10∕12)-helix and a left-handed 3(14)-helix, separated by a high energy barrier. Standard MD simulations of this peptide using the GROMOS 53A6 force field did not reach convergence of the free enthalpy difference between the two helices even after 500 ns of simulation time. The use of soft-core non-bonded interactions in the centre of the peptide did enhance the number of transitions between the helices, but at the same time led to the neglect of relevant helical configurations. In simulations of a two-state EDS reference Hamiltonian that envelops both the physical peptide and the soft-core peptide, sampling of the conformational space of the physical peptide ensures that physically relevant conformations can be visited, while sampling of the conformational space of the soft-core peptide helps to enhance the transitions between the two helices. The EDS simulations sampled many more transitions between the two helices and showed much faster convergence of the relative free enthalpy of the two helices compared with the standard MD simulations, with only a slightly larger computational effort to determine optimized EDS parameters. Combined with various methods to smoothen the potential energy surface, the proposed EDS application will be a powerful technique to enhance the sampling efficiency in biomolecular simulations.
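
    The EDS reference Hamiltonian has a compact closed form, V_R = -(1/(beta*s)) ln Σᵢ exp(-beta*s*(Vᵢ - Eᵢ)), and its barrier-lowering effect can be seen on a toy 1D two-well example. The harmonic wells, the values of beta and s, and the zero energy offsets below are all assumptions for illustration, not the peptide system of the paper.

    ```python
    import numpy as np

    def eds_potential(x, beta, s, potentials, offsets):
        """Multi-state EDS reference: V_R = -ln(sum_i exp(-beta*s*(V_i - E_i))) / (beta*s)."""
        terms = np.array([-beta * s * (V(x) - E) for V, E in zip(potentials, offsets)])
        m = terms.max(axis=0)                       # log-sum-exp for stability
        return -(m + np.log(np.exp(terms - m).sum(axis=0))) / (beta * s)

    V1 = lambda x: 0.5 * (x + 2.0) ** 2             # end state A: well at x = -2
    V2 = lambda x: 0.5 * (x - 2.0) ** 2             # end state B: well at x = +2
    x = np.linspace(-4, 4, 401)
    beta = 1.0
    v_sharp = eds_potential(x, beta, 1.0, (V1, V2), (0.0, 0.0))   # s = 1
    v_smooth = eds_potential(x, beta, 0.1, (V1, V2), (0.0, 0.0))  # s << 1
    barrier_sharp = v_sharp[200] - v_sharp.min()    # value at x = 0 vs minimum
    barrier_smooth = v_smooth[200] - v_smooth.min()
    print(barrier_sharp, barrier_smooth)
    ```

    With s = 1 the envelope follows min(V1, V2) and keeps a barrier between the wells; lowering the smoothness parameter s flattens the barrier, which is why sampling on the EDS reference visits both states far more often.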

  16. Evaluation of Legionella real-time PCR against traditional culture for routine and public health testing of water samples.

    Science.gov (United States)

    Collins, S; Stevenson, D; Walker, J; Bennett, A

    2017-06-01

    To evaluate the usefulness of Legionella qPCR alongside traditional culture for the enumeration of Legionella from water samples, as part of both routine and public health investigation testing. Routine water samples (n = 2002) and samples from public health investigations (n = 215) were analysed by culture and qPCR for Legionella spp., Legionella pneumophila and L. pneumophila sg-1. A negative qPCR result was highly predictive of a negative culture result for all water systems (negative predictive values, NPV, from 97·4 to 100%). Positive predictive values (PPV) were lower (0-50%). Results from qPCR were generally higher than those from culture, with average log10 differences of 1·1 for Legionella spp. and 1·2 for L. pneumophila. Alert and action levels of 1000 and 10 000 GU per litre, respectively, are proposed for Legionella qPCR for hot and cold water systems (HCWS). The use of qPCR significantly reduced the time to results for public health investigations by rapidly identifying potential sources and ruling out others, thus enabling a more rapid and efficient response. The high NPV of qPCR supports its use to rapidly screen out negative samples without culture. Quantitative PCR will be a valuable tool for both routine and public health testing. This study generated comparative data on >2000 water samples by qPCR and culture. Action and alert levels have been recommended that could enable duty holders to interpret qPCR results to facilitate timely Legionella control and public health protection. © 2017 Crown copyright. Journal of Applied Microbiology © 2017 The Society for Applied Microbiology.
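
    The NPV/PPV pattern in the abstract (a negative qPCR reliably rules culture out, a positive does not rule it in) is just confusion-matrix arithmetic. The counts below are entirely hypothetical, chosen only to mimic the reported pattern; they are not the study's data.

    ```python
    def predictive_values(tp, fp, fn, tn):
        """NPV = TN/(TN+FN): how often a negative qPCR means a negative culture.
        PPV = TP/(TP+FP): how often a positive qPCR is culture-confirmed."""
        npv = tn / (tn + fn)
        ppv = tp / (tp + fp)
        return npv, ppv

    # hypothetical counts for ~2000 samples: qPCR sensitive but over-calls
    # relative to culture (qPCR also detects non-culturable cells)
    npv, ppv = predictive_values(tp=90, fp=360, fn=2, tn=1548)
    print(f"NPV={npv:.3f}  PPV={ppv:.3f}")
    ```

    With these made-up counts the NPV exceeds 99% while the PPV is only 20%, which is why a negative qPCR can screen samples out of culture but a positive still needs confirmation.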

  17. Degradation of hydrocarbons in soil samples analyzed within accepted analytical holding times

    International Nuclear Information System (INIS)

    Jackson, J.; Thomey, N.; Dietlein, L.F.

    1992-01-01

    Samples which are collected in conjunction with subsurface investigations at leaking petroleum storage tank sites and petroleum refineries are routinely analyzed for benzene, toluene, ethylbenzene, xylenes (BTEX), and total petroleum hydrocarbons (TPH). Water samples are preserved by the addition of hydrochloric acid and maintained at four degrees centigrade prior to analysis to prevent bacterial degradation of hydrocarbons. Chemical preservation is not presently performed on soil samples; instead, the samples are cooled and maintained at four degrees centigrade. This study was done to measure the degree of degradation of hydrocarbons in soil samples which are analyzed within accepted holding times. Soil samples were collected and representative subsamples were prepared from the initial sample. Subsamples were analyzed in triplicate for BTEX and TPH throughout the length of the approved holding times to measure the extent of sample constituent degradation prior to analysis. Findings imply that for sandy soils, BTEX and TPH concentrations can be highly dependent upon the length of time which elapses between sample collection and analysis.

  18. A confirmatory holding time study for purgeable VOCs in water samples

    International Nuclear Information System (INIS)

    West, O.R.; Bayne, C.K.; Siegrist, R.L.; Holden, W.H.; Bottrell, D.W.

    1996-01-01

    Analyte stability during pre-analytical storage is essential to the accurate quantification of contaminants in environmental samples. This is particularly true for volatile organic compounds (VOCs), which can easily volatilize and/or degrade during sample storage. Recognizing this, regulatory agencies require that water samples be collected in vials without headspace and stored at 4 degrees C, and that analyses be conducted within 14 days, even if samples are acid-preserved. Since the selection of a 14-day holding time was largely arbitrary, the appropriateness of this requirement must be re-evaluated. The goal of the study described here was to provide regulatory agencies with the necessary data to extend the maximum holding time for properly preserved VOC water samples to 28 days.

  19. Demonstrating the value of larger ensembles in forecasting physical systems

    Directory of Open Access Journals (Sweden)

    Reason L. Machete

    2016-12-01

    Full Text Available Ensemble simulation propagates a collection of initial states forward in time in a Monte Carlo fashion. Depending on the fidelity of the model and the properties of the initial ensemble, the goal of ensemble simulation can range from merely quantifying variations in the sensitivity of the model all the way to providing actionable probability forecasts of the future. Whatever the goal, success depends on the properties of the ensemble, and there is a longstanding discussion in meteorology as to the size of initial-condition ensemble most appropriate for Numerical Weather Prediction. In terms of resource allocation: how is one to divide finite computing resources between model complexity, ensemble size, data assimilation and other components of the forecast system? One wishes to avoid undersampling the information available from the model's dynamics, yet one also wishes to use the highest fidelity model available. Arguably, a higher fidelity model can better exploit a larger ensemble; nevertheless it is often suggested that a relatively small ensemble, say ~16 members, is sufficient and that larger ensembles are not an effective investment of resources. This claim is shown to be dubious when the goal is probabilistic forecasting, even in settings where the forecast model is informative but imperfect. Probability forecasts for a ‘simple’ physical system are evaluated at different lead times; ensembles of up to 256 members are considered. The pure density estimation context (where ensemble members are drawn from the same underlying distribution as the target) differs from the forecasting context, where one is given a high-fidelity (but imperfect) model. In the forecasting context, the information provided by additional members depends also on the fidelity of the model, the ensemble formation scheme (data assimilation), the ensemble interpretation and the nature of the observational noise. The effect of increasing the ensemble size is quantified by
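
    A toy calculation, not the paper's system, shows why small ensembles limit probabilistic forecasts even before model error enters: a probability estimated from N members carries sampling error of order sqrt(p(1-p)/N). The event probability and forecast counts below are arbitrary assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def prob_error(n_members, n_forecasts=4000, p_true=0.3):
        """Mean absolute error of event probabilities estimated by counting
        how many of n_members ensemble members produce the event."""
        hits = rng.binomial(n_members, p_true, size=n_forecasts)
        return np.mean(np.abs(hits / n_members - p_true))

    for n in (16, 64, 256):
        print(n, round(prob_error(n), 4))
    ```

    Going from 16 to 256 members cuts the probability error by roughly a factor of four (the 1/sqrt(N) scaling), which is the density-estimation floor on top of which model fidelity, data assimilation and observational noise act in the forecasting context.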

  20. The significance of sampling time in therapeutic drug monitoring of clozapine

    DEFF Research Database (Denmark)

    Jakobsen, M I; Larsen, J R; Svensson, C K

    2017-01-01

    OBJECTIVE: Therapeutic drug monitoring (TDM) of clozapine is standardized to 12-h postdose samplings. In clinical settings, sampling time often deviates from this time point, although the importance of the deviation is unknown. To this end, serum concentrations (s-) of clozapine and its metabolite N-desmethyl-clozapine (norclozapine) were measured at 12 ± 1 and 2 h postdose. METHOD: Forty-six patients with a diagnosis of schizophrenia, and on stable clozapine treatment, were enrolled for hourly, venous blood sampling at 10-14 h postdose. RESULTS: Minor changes in median percentage values were...

  1. The 'Natural Laboratory', a tool for deciphering growth, lifetime and population dynamics in larger benthic foraminifera

    Science.gov (United States)

    Hohenegger, Johann

    2015-04-01

    The shells of symbiont-bearing larger benthic foraminifera (LBF) represent the response to physiological requirements under given environmental conditions. All compartments of the shell, such as chambers and chamberlets, accommodate the growth of the cell protoplasm and are adaptations for housing photosymbiotic algae. Investigations of the biology of LBF have been predominantly based on laboratory studies; the lifetime of LBF under natural conditions is still unclear. LBF that can build >100 chambers during their lifetime are thought to live at least one year under natural conditions. This is supported by studies on the population dynamics of eulittoral foraminifera. In species characterized by a single, time-restricted reproduction period, the mean size of specimens increases during the lifetime while the number of individuals decreases. This becomes more complex when two or more reproduction times occur within a one-year cycle, leading to a mixture of abundant small individuals with few large specimens throughout the year, while mean size remains more or less constant. This mixture is typical for most sublittoral megalospheric (gamonts or schizonts) LBF. Nothing is known about the lifetime of agamonts, the diploid asexually reproducing generation; in all hyaline LBF it is thought to be significantly longer than 1 year, based on the large size and the mean chamber building rate of the gamonts/schizonts. Observations of LBF under natural conditions have not yet been performed in the deeper sublittoral. This reflects the difficulties posed by intense hydrodynamics, which hinder deploying technical equipment for studies in the natural environment. Therefore, growth, lifetime and reproduction of sublittoral LBF under natural conditions can be studied using the so-called 'natural laboratory' in comparison with laboratory investigations. The best sampling method in the upper sublittoral, from 5 to 70 m depth, is by SCUBA diving. Irregular

  2. Comparison between smaller ruptured intracranial aneurysm and larger un-ruptured intracranial aneurysm: gene expression profile analysis.

    Science.gov (United States)

    Li, Hao; Li, Haowen; Yue, Haiyan; Wang, Wen; Yu, Lanbing; Wang, Shuo; Cao, Yong; Zhao, Jizong

    2017-07-01

    As it grows in size, an intracranial aneurysm (IA) becomes prone to rupture. In this study, we compared two extreme groups of IAs, ruptured IAs (RIAs) smaller than 10 mm and un-ruptured IAs (UIAs) larger than 10 mm, to investigate the genes involved in the facilitation and prevention of IA rupture. The aneurysmal walls of 6 smaller saccular RIAs (size smaller than 10 mm), 6 larger saccular UIAs (size larger than 10 mm) and 12 paired control arteries were obtained during surgery. The transcription profiles of these samples were studied by microarray analysis. RT-qPCR was used to confirm the expression of the genes of interest. In addition, functional group analysis of the differentially expressed genes was performed. In smaller RIAs and larger UIAs, 101 and 179 genes, respectively, were significantly over-expressed. Functional group analysis demonstrated that the up-regulated genes in smaller RIAs mainly participated in the cellular response to metal ions and inorganic substances, while most of the up-regulated genes in larger UIAs were involved in inflammation and extracellular matrix (ECM) organization. Moreover, compared with control arteries, inflammation was up-regulated and muscle-related biological processes were down-regulated in both smaller RIAs and larger UIAs. The genes involved in the cellular response to metal ions and inorganic substances may facilitate the rupture of IAs, while the healing process, involving inflammation and ECM organization, may protect IAs from rupture.

  3. Statistical searches for microlensing events in large, non-uniformly sampled time-domain surveys: A test using palomar transient factory data

    Energy Technology Data Exchange (ETDEWEB)

    Price-Whelan, Adrian M.; Agüeros, Marcel A. [Department of Astronomy, Columbia University, 550 W 120th Street, New York, NY 10027 (United States); Fournier, Amanda P. [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93106 (United States); Street, Rachel [Las Cumbres Observatory Global Telescope Network, Inc., 6740 Cortona Drive, Suite 102, Santa Barbara, CA 93117 (United States); Ofek, Eran O. [Benoziyo Center for Astrophysics, Weizmann Institute of Science, 76100 Rehovot (Israel); Covey, Kevin R. [Lowell Observatory, 1400 West Mars Hill Road, Flagstaff, AZ 86001 (United States); Levitan, David; Sesar, Branimir [Division of Physics, Mathematics, and Astronomy, California Institute of Technology, Pasadena, CA 91125 (United States); Laher, Russ R.; Surace, Jason, E-mail: adrn@astro.columbia.edu [Spitzer Science Center, California Institute of Technology, Mail Stop 314-6, Pasadena, CA 91125 (United States)

    2014-01-20

    Many photometric time-domain surveys are driven by specific goals, such as searches for supernovae or transiting exoplanets, which set the cadence with which fields are re-imaged. In the case of the Palomar Transient Factory (PTF), several sub-surveys are conducted in parallel, leading to non-uniform sampling over its ∼20,000 deg² footprint. While the median 7.26 deg² PTF field has been imaged ∼40 times in the R band, ∼2300 deg² have been observed >100 times. We use PTF data to study the trade-off between searching for microlensing events in a survey whose footprint is much larger than that of typical microlensing searches, but with far-from-optimal time sampling. To examine the probability that microlensing events can be recovered in these data, we test statistics used on uniformly sampled data to identify variables and transients. We find that the von Neumann ratio performs best for identifying simulated microlensing events in our data. We develop a selection method using this statistic and apply it to data from fields with >10 R-band observations (1.1 × 10⁹ light curves), uncovering three candidate microlensing events. We lack simultaneous, multi-color photometry to confirm these as microlensing events. However, their number is consistent with predictions for the event rate in the PTF footprint over the survey's three years of operations, as estimated from near-field microlensing models. This work can help constrain all-sky event rate predictions and tests microlensing signal recovery in large data sets, which will be useful to future time-domain surveys, such as that planned with the Large Synoptic Survey Telescope.
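
    The von Neumann ratio the abstract singles out is simple to compute: the mean squared successive difference divided by the variance. It is near 2 for uncorrelated noise and well below 2 for a smooth, correlated signal such as a microlensing bump. The sketch below uses the standard Paczyński point-lens magnification curve as the smooth signal; the impact parameter, cadence and noise levels are illustrative assumptions.

    ```python
    import numpy as np

    def von_neumann_ratio(x):
        """eta = mean squared successive difference / sample variance.
        ~2 for white noise; << 2 for smooth, correlated variation."""
        x = np.asarray(x, dtype=float)
        num = np.sum(np.diff(x) ** 2) / (x.size - 1)
        den = np.var(x, ddof=1)
        return num / den

    rng = np.random.default_rng(5)
    t = np.linspace(-3, 3, 300)                 # time in Einstein-crossing units
    noise = rng.normal(0.0, 1.0, t.size)        # a flat, noisy light curve

    # Paczynski magnification A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4)),
    # with assumed impact parameter u0 = 0.3 and small photometric noise
    u = np.sqrt(0.3 ** 2 + t ** 2)
    bump = (u ** 2 + 2) / (u * np.sqrt(u ** 2 + 4)) + rng.normal(0, 0.02, t.size)

    print(von_neumann_ratio(noise), von_neumann_ratio(bump))
    ```

    Ranking light curves by this statistic pushes smooth, transient-like variations to the top without assuming uniform sampling, which is what makes it usable on the PTF cadences.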

  4. Distribution of living larger benthic foraminifera in littoral environments of the United Arab Emirates

    Science.gov (United States)

    Fiorini, Flavia; Lokier, Stephen W.

    2015-04-01

    The distribution of larger benthic foraminifera in Recent littoral environments of the United Arab Emirates (Abu Dhabi and Western regions) was investigated with the aim of understanding the response of these foraminifera to an increase in water salinity. For this purpose, 100 sediment samples from nearshore shelf, beach-front, channel, lagoon, and intertidal environments were collected. Sampling was undertaken at water depths shallower than 15 m, in water with a temperature of 22 to 35˚C, a salinity ranging from 40 to 60‰ and a pH of 8. Samples were stained with rose Bengal at the moment of collection in order to identify living specimens. The most abundant epiphytic larger benthic foraminifera in the studied area were Peneroplis pertusus and P. planatus, with less common Spirolina areatina, S. aciculata and Sorites marginalis. Living specimens of the above-mentioned species with normal test growth were particularly abundant in the nearshore shelf and lagoonal samples collected on seaweed. Dead specimens were concentrated in the coarser sediments of the beach-front, probably transported from nearby environments. Shallow coastal ponds are located in the upper intertidal zone, have a maximum salinity of 60‰ and contain abundant detached seagrass. Samples collected from these ponds possess a living foraminiferal assemblage dominated by Peneroplis pertusus and P. planatus. High percentages (up to 50% of the stained assemblage) of Peneroplis presented abnormalities in test growth, such as multiple apertures of reduced size, deformation of the general shape of the test, irregular suture lines and abnormal coiling. The high percentage of abnormal tests reflects natural environmental stress, mainly caused by high and variable salinity. The exclusive presence of living epiphytic species suggests that epiphytic foraminifera may be transported into the ponds together with seagrass and continue to live there. This hypothesis is supported by

  5. Sampling and examination methods used for TMI-2 samples

    International Nuclear Information System (INIS)

    Marley, A.W.; Akers, D.W.; McIsaac, C.V.

    1988-01-01

    The purpose of this paper is to summarize the sampling and examination techniques that were used in the collection and analysis of TMI-2 samples. Samples ranging from auxiliary building air to core debris were collected and analyzed. Handling of the larger samples, and many of the smaller ones, had to be done remotely, and many standard laboratory analytical techniques were modified to accommodate the extremely high radiation fields associated with these samples. The TMI-2 samples presented unique problems with the sampling and laboratory analysis of previously molten fuel debris. 14 refs., 8 figs

  6. Sampling methods for rumen microbial counts by Real-Time PCR techniques

    Directory of Open Access Journals (Sweden)

    S. Puppo

    2010-02-01

    Full Text Available Fresh rumen samples were withdrawn from 4 cannulated buffalo females fed a fibrous diet in order to quantify bacteria concentration in the rumen by Real-Time PCR techniques. To obtain DNA of a good quality from whole rumen fluid, eight (M1-M8) different pre-filtration methods (cheese cloths, glass-fibre and nylon filters) in combination with various centrifugation speeds (1000, 5000 and 14,000 rpm) were tested. Genomic DNA extraction was performed on either fresh or frozen samples (-20°C). The quantitative bacteria analysis was realized according to the Real-Time PCR procedure for Butyrivibrio fibrisolvens reported in the literature. M5 proved the best sampling procedure, allowing a suitable genomic DNA to be obtained. No differences were revealed between fresh and frozen samples.

  7. Random sampling of evolution time space and Fourier transform processing

    International Nuclear Information System (INIS)

    Kazimierczuk, Krzysztof; Zawadzka, Anna; Kozminski, Wiktor; Zhukov, Igor

    2006-01-01

    The application of the Fourier transform for processing 3D NMR spectra with random sampling of the evolution-time space is presented. The 2D FT is calculated for pairs of frequencies, instead of the conventional sequence of one-dimensional transforms. Signal-to-noise ratios and linewidths for different random distributions were investigated by simulations and experiments. The experimental examples include 3D HNCA, HNCACB and ¹⁵N-edited NOESY-HSQC spectra of a ¹³C,¹⁵N-labeled ubiquitin sample. The obtained results revealed the general applicability of the proposed method and a significant improvement of resolution in comparison with conventional spectra recorded in the same time.
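
    The key point, that a Fourier sum can be evaluated directly on arbitrary (randomly sampled) time points rather than a regular grid, can be shown in one dimension. This is a generic nonuniform-DFT sketch, not the paper's 2D pairs-of-frequencies implementation; the signal frequency and sampling window are assumed values.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # randomly sampled evolution times instead of a regular grid
    t = np.sort(rng.uniform(0.0, 1.0, 128))
    f_true = 25.0                                  # assumed signal frequency (Hz)
    signal = np.exp(2j * np.pi * f_true * t)

    freqs = np.arange(0.0, 60.0, 0.5)
    # explicit Fourier sum evaluated directly at each trial frequency
    spectrum = np.abs(np.array([(signal * np.exp(-2j * np.pi * f * t)).sum()
                                for f in freqs]))
    print(freqs[np.argmax(spectrum)])
    ```

    The peak lands at the true frequency even though no FFT grid exists; the price of random sampling is an elevated noise floor (of order sqrt(N)) rather than aliasing artefacts.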

  8. The effect of short-range spatial variability on soil sampling uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Perk, Marcel van der [Department of Physical Geography, Utrecht University, P.O. Box 80115, 3508 TC Utrecht (Netherlands)], E-mail: m.vanderperk@geo.uu.nl; De Zorzi, Paolo; Barbizzi, Sabrina; Belli, Maria [Agenzia per la Protezione dell' Ambiente e per i Servizi Tecnici (APAT), Servizio Laboratori, Misure ed Attivita di Campo, Via di Castel Romano, 100-00128 Roma (Italy); Fajgelj, Ales; Sansone, Umberto [International Atomic Energy Agency (IAEA), Agency' s Laboratories Seibersdorf, A-1400 Vienna (Austria); Jeran, Zvonka; Jacimovic, Radojko [Jozef Stefan Institute, Jamova 39, 1000 Ljubljana (Slovenia)

    2008-11-15

    This paper aims to quantify the soil sampling uncertainty arising from the short-range spatial variability of elemental concentrations in the topsoils of agricultural, semi-natural, and contaminated environments. For the agricultural site, the relative standard sampling uncertainty ranges between 1% and 5.5%. For the semi-natural area, the sampling uncertainties are 2-4 times larger than in the agricultural area. The contaminated site exhibited significant short-range spatial variability in elemental composition, which resulted in sampling uncertainties of 20-30%.

  10. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions. II. A simplified implementation.

    Science.gov (United States)

    Tao, Guohua; Miller, William H

    2012-09-28

    An efficient time-dependent (TD) Monte Carlo (MC) importance sampling method has recently been developed [G. Tao and W. H. Miller, J. Chem. Phys. 135, 024104 (2011)] for the evaluation of time correlation functions using the semiclassical (SC) initial value representation (IVR) methodology. In this TD-SC-IVR method, the MC sampling uses information from both the time-evolved phase points and their initial values, and only the "important" trajectories are sampled frequently. Even though the TD-SC-IVR was shown in some benchmark examples to be much more efficient than the traditional time-independent sampling method (which uses only initial conditions), the calculation of the SC prefactor, which is computationally expensive, especially for large systems, is still required for accepted trajectories. In the present work, we present an approximate implementation of the TD-SC-IVR method that is completely prefactor-free; it gives the time correlation function as a classical-like magnitude function multiplied by a phase function. Application of this approach to flux-flux correlation functions (which yield reaction rate constants) for the benchmark H + H2 system shows very good agreement with exact quantum results. Limitations of the approximate approach are also discussed.

  11. Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats

    Science.gov (United States)

    Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.

    2012-01-01

    This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCPs from moving boats on three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements for different sampling times, assuming uncorrelated flow fields, is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
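
    The error model itself is in the paper; under the abstract's stated assumption of an uncorrelated sampled flow field, the scaling of the random discharge error with exposure time can be sketched as follows (the numbers are illustrative, not from the study):

    ```python
    import numpy as np

    def discharge_standard_error(sigma_q, sampling_interval, exposure_time):
        """Standard error of the mean discharge for an uncorrelated sampled
        flow field: sigma_q / sqrt(N), with N samples in the exposure time."""
        n_samples = exposure_time / sampling_interval
        return sigma_q / np.sqrt(n_samples)

    # Doubling the exposure time reduces the random error by a factor sqrt(2)
    e1 = discharge_standard_error(sigma_q=5.0, sampling_interval=1.0, exposure_time=100.0)
    e2 = discharge_standard_error(sigma_q=5.0, sampling_interval=1.0, exposure_time=200.0)
    print(round(e1 / e2, 3))  # → 1.414
    ```

    This sqrt(N) behaviour is what lets the approach translate a target discharge uncertainty into a required exposure time.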

  12. Editorial Commentary: The Larger Holes or Larger Number of Holes We Drill in the Coracoid, the Weaker the Coracoid Becomes.

    Science.gov (United States)

    Brady, Paul

    2016-06-01

    The larger the holes, or the greater the number of holes, we drill in the coracoid, the weaker the coracoid becomes. Minimizing bone holes (in both size and number) is therefore required to lower the risk of coracoid process fracture in patients in whom transosseous shoulder acromioclavicular joint reconstruction is indicated. A single 2.4-mm-diameter tunnel drilled through both the clavicle and the coracoid lowers the risk of fracture, but the risk cannot be entirely eliminated. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  13. Comparison of correlation analysis techniques for irregularly sampled time series

    Directory of Open Access Journals (Sweden)

    K. Rehfeld

    2011-06-01

    Full Text Available Geoscientific measurements often provide time series with irregular time sampling, requiring either data reconstruction (interpolation) or sophisticated methods to handle irregular sampling. We compare the linear interpolation technique with different approaches for analyzing the correlation functions and persistence of irregularly sampled time series, such as Lomb-Scargle Fourier transformation and kernel-based methods. In a thorough benchmark test we investigate the performance of these techniques.

    All methods have comparable root mean square errors (RMSEs) for low skewness of the inter-observation time distribution. For high skewness, i.e. very irregular data, interpolation bias and RMSE increase strongly. In the analysis of highly irregular time series, we find a 40 % lower RMSE for the lag-1 autocorrelation function (ACF) for the Gaussian kernel method than for the linear interpolation scheme. For the cross correlation function (CCF) the RMSE is then lower by 60 %. The application of the Lomb-Scargle technique gave results comparable to the kernel methods in the univariate case, but poorer results in the bivariate case. Especially the high-frequency components of the signal, where classical methods show a strong bias in ACF and CCF magnitude, are preserved when using the kernel methods.

    We illustrate the performance of the interpolation and Gaussian kernel methods by applying both to paleo-data from four locations, reflecting late Holocene Asian monsoon variability as derived from speleothem δ18O measurements. Cross correlation results are similar for both methods, which we attribute to the long time scales of the common variability. The persistence time (memory) is strongly overestimated when using the standard, interpolation-based approach. Hence, the Gaussian kernel is a reliable and more robust estimator with significant advantages compared to other techniques, and is suitable for large-scale application to paleo-data.
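
    A minimal sketch of the kernel idea the benchmark evaluates: the correlation at a given lag is estimated by weighting all observation pairs with a Gaussian kernel on their time difference, so no interpolation onto a regular grid is needed (the function and parameter values below are illustrative, not the authors' code):

    ```python
    import numpy as np

    def gaussian_kernel_correlation(tx, x, ty, y, lag, h):
        """Kernel-based correlation estimate at a given lag for irregularly
        sampled series: pairs whose time difference is close to the lag get
        large Gaussian weights; the weighted product of the standardized
        values estimates the correlation."""
        x = (x - x.mean()) / x.std()
        y = (y - y.mean()) / y.std()
        # All pairwise time differences, shifted by the requested lag
        dt = ty[None, :] - tx[:, None] - lag
        w = np.exp(-dt**2 / (2.0 * h**2))
        return float(np.sum(w * np.outer(x, y)) / np.sum(w))

    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0, 100, 200))   # irregular observation times
    s = np.sin(0.3 * t)
    r0 = gaussian_kernel_correlation(t, s, t, s, lag=0.0, h=0.5)
    print(round(r0, 2))  # close to 1 at zero lag
    ```

    Applying the same weighting to pairs drawn from two different series gives the CCF estimate at each lag.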

  14. Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering

    Science.gov (United States)

    Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki

    2018-03-01

    We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
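
    The abstract does not give the estimator details, so here is a generic 1-D illustration of multiple importance sampling with the balance heuristic, combining a uniform strategy (a stand-in for stratified sampling) with an importance strategy; the densities and sample counts are assumptions for the example, not the paper's shader implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def balance_heuristic_mis(f, n1, n2):
        """Estimate the integral of f over [0, 1] by combining two sampling
        strategies with balance-heuristic weights w_i = n_i p_i / sum_k n_k p_k."""
        # Strategy 1: uniform samples, pdf p1(x) = 1
        x1 = rng.uniform(0.0, 1.0, n1)
        # Strategy 2: pdf p2(x) = 3x^2, sampled by inverse-CDF transform
        x2 = rng.uniform(0.0, 1.0, n2) ** (1.0 / 3.0)
        p1 = lambda x: np.ones_like(x)
        p2 = lambda x: 3.0 * x**2
        def w(x, pa, na):  # balance heuristic weight for strategy (pa, na)
            return na * pa(x) / (n1 * p1(x) + n2 * p2(x))
        est1 = np.sum(w(x1, p1, n1) * f(x1) / p1(x1)) / n1
        est2 = np.sum(w(x2, p2, n2) * f(x2) / p2(x2)) / n2
        return est1 + est2

    # Integral of x^2 over [0, 1] is 1/3; p2 is proportional to the integrand,
    # so the combined estimator has low variance.
    est = balance_heuristic_mis(lambda x: x**2, 4000, 4000)
    print(round(est, 3))  # ≈ 1/3
    ```

    The balance heuristic down-weights samples where the other strategy's density is high, which is what lets the combination use fewer samples than either strategy alone.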

  15. Method for Hot Real-Time Sampling of Pyrolysis Vapors

    Energy Technology Data Exchange (ETDEWEB)

    Pomeroy, Marc D [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-29

    Biomass pyrolysis has been an increasing topic of research, in particular as a replacement for crude oil. The process uses moderate temperatures to thermally deconstruct the biomass; the resulting vapors are condensed into a mixture of liquid oxygenates to be used as fuel precursors. Pyrolysis oils contain more than 400 compounds, up to 60 percent of which do not re-volatilize for subsequent chemical analysis. Vapor chemical composition is further complicated by additional condensation reactions that occur during the condensation and collection of the product. Due to the complexity of the pyrolysis oil, and a desire to catalytically upgrade the vapor composition before condensation, online real-time analytical techniques such as Molecular Beam Mass Spectrometry (MBMS) are of great use. However, in order to properly sample hot pyrolysis vapors, many challenges must be overcome. Sampling must occur within a narrow range of temperatures to avoid changes in product composition from overheating, and to avoid partial condensation or plugging of lines by condensed products. Residence times must be kept to a minimum to limit further reaction chemistry. Pyrolysis vapors also form aerosols that are carried far downstream and can pass through filters, resulting in build-up in downstream locations. The co-produced bio-char and ash from the pyrolysis process can lead to plugging of the sample lines and must be filtered out at temperature, even with the use of cyclonic separators. A practical approach to sampling system design, together with lessons learned, is integrated into the hot analytical sampling system of the National Renewable Energy Laboratory's (NREL) Thermochemical Process Development Unit (TCPDU) to provide industrially relevant demonstrations of thermochemical transformations of biomass feedstocks at the pilot scale.

  16. Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models

    Science.gov (United States)

    Price, Larry R.

    2012-01-01

    The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…

  17. The real-time fitting of radioactive decay curves. Pt. 3. Counting during sampling

    International Nuclear Information System (INIS)

    Hartley, B.M.

    1994-01-01

    An analysis of a least-squares method for the real-time fitting of the theoretical total count function to the actual total count from radioactive decays has been given previously for the case where counting takes place after a sample is taken. The counting may be done in a number of different counting systems which distinguish between different types or energies of radiation emitted from the sample. The method would allow real-time determination of the numbers of atoms and hence activities of the individual isotopes present and has been designated the Time Evolved Least-Squares method (TELS). If the radioactivity which is to be measured exists as an aerosol or in a form where a sample is taken at a constant rate it may be possible to count during sampling and by so doing reduce the total time required to determine the activity of the individual isotopes present. The TELS method is extended here to the case where counting and the evaluation of the activity takes place concurrently with the sampling. The functions which need to be evaluated are derived and the calculations required to implement the method are discussed. As with the TELS method of counting after sampling the technique of counting during sampling and the simultaneous evaluation of activity could be achieved in real-time. Results of testing the method by computer simulation for two counting schemes for the descendants of radon are presented. ((orig.))
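
    The TELS idea, fitting a theoretical total count function that is linear in the unknown initial atom numbers, can be sketched for the simpler counting-after-sampling case (hypothetical isotopes, efficiencies, and counting times; not the paper's implementation):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Two hypothetical isotopes: expected cumulative counts are linear in the
    # initial atom numbers N0, so a least-squares solve recovers them as
    # counting proceeds.
    lam = np.array([np.log(2) / 30.0, np.log(2) / 300.0])  # decay constants (1/s)
    eff = np.array([0.3, 0.25])                            # counting efficiencies
    N0_true = np.array([5.0e4, 2.0e5])

    t = np.linspace(1.0, 600.0, 60)                        # measurement times (s)
    # Design matrix: expected cumulative counts per initial atom of each isotope
    A = eff * (1.0 - np.exp(-np.outer(t, lam)))
    counts = rng.poisson(A @ N0_true).astype(float)        # counting noise

    N0_fit, *_ = np.linalg.lstsq(A, counts, rcond=None)
    print(np.round(N0_fit / N0_true, 2))  # ratios near 1
    ```

    Because the solve is a small linear least-squares problem, it can be repeated after every new count interval, which is what makes the real-time evaluation feasible.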

  18. Discrete-Time Mixing Receiver Architecture for RF-Sampling Software-Defined Radio

    NARCIS (Netherlands)

    Ru, Z.; Klumperink, Eric A.M.; Nauta, Bram

    2010-01-01

    Abstract—A discrete-time (DT) mixing architecture for RF-sampling receivers is presented. This architecture makes RF sampling more suitable for software-defined radio (SDR) as it achieves wideband quadrature demodulation and wideband harmonic rejection. The paper consists of two parts. In the first

  19. Two General Extension Algorithms of Latin Hypercube Sampling

    Directory of Open Access Journals (Sweden)

    Zhi-zhao Liu

    2015-01-01

    Full Text Available To reuse original sampling points and thereby reduce the number of simulation runs, two general extension algorithms for Latin Hypercube Sampling (LHS) are proposed. The extension algorithms start with an original LHS of size m and construct a new LHS of size m+n that retains as many of the original points as possible. In order to obtain a strict LHS of larger size, some original points might have to be deleted. The relationship of the original sampling points in the new LHS structure is shown by a simple undirected acyclic graph. The basic general extension algorithm is proposed to retain the most original points, but it is time-consuming. Therefore, a general extension algorithm based on a greedy algorithm is proposed to reduce the extension time, at the cost of no longer guaranteeing that the largest possible number of original points is retained. These algorithms are illustrated by an example and applied to evaluating sample means to demonstrate their effectiveness.
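
    For reference, a basic LHS of size m (the object both extension algorithms start from) can be generated by pairing an independent random permutation of the m strata per dimension with a random offset inside each stratum; this is the standard construction, not the paper's extension algorithm:

    ```python
    import numpy as np

    def latin_hypercube(m, d, rng):
        """Basic Latin Hypercube Sample: exactly one point in each of the m
        equal-width strata of [0, 1] along every dimension."""
        u = rng.uniform(size=(m, d))                           # position within each stratum
        perms = np.column_stack([rng.permutation(m) for _ in range(d)])
        return (perms + u) / m

    rng = np.random.default_rng(3)
    pts = latin_hypercube(10, 2, rng)
    # Stratification check: per axis, each of the 10 strata holds one point
    print([sorted(np.floor(pts[:, j] * 10).astype(int).tolist()) for j in range(2)])
    ```

    Extending such a design to size m+n is exactly the hard part the paper addresses: the new, finer strata must each end up with one point, which is why some original points may have to be dropped.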

  20. Why have microsaccades become larger?

    DEFF Research Database (Denmark)

    Hansen, Dan Witzner; Nyström, Marcus; Andersson, Richard

    2014-01-01

    … experts. The main reason was that the overshoots were not systematically detected by the algorithm and therefore not accurately accounted for. We conclude that one reason why the reported size of microsaccades has increased is the larger overshoots produced by modern pupil-based eye-trackers compared to the systems used in the classical studies, in combination with the lack of a systematic algorithmic treatment of the overshoot. We hope that awareness of these discrepancies in microsaccade dynamics across eye structures will lead to more generally accepted definitions of microsaccades.

  1. Electro-optic sampling for time resolving relativistic ultrafast electron diffraction

    International Nuclear Information System (INIS)

    Scoby, C. M.; Musumeci, P.; Moody, J.; Gutierrez, M.; Tran, T.

    2009-01-01

    The Pegasus laboratory at UCLA features a state-of-the-art electron photoinjector capable of producing ultrashort (<100 fs) high-brightness electron bunches at energies of 3.75 MeV. These beams have recently been used to produce static diffraction patterns from scattering off thin metal foils, and it is foreseen to take advantage of the ultrashort nature of these bunches in future pump-probe time-resolved diffraction studies. In this paper, single-shot 2D electro-optic sampling is presented as a potential technique for time-of-arrival stamping of electron bunches used for diffraction. Effects of relatively low bunch charge (a few tens of pC) and modestly relativistic beams are discussed, and background compensation techniques to obtain a high signal-to-noise ratio are explored. From these preliminary tests, electro-optic sampling is suitable as a reliable nondestructive time-stamping method for relativistic ultrafast electron diffraction at the Pegasus lab.

  2. Effect of Sample Storage Temperature and Time Delay on Blood Gases, Bicarbonate and pH in Human Arterial Blood Samples.

    Science.gov (United States)

    Mohammadhoseini, Elham; Safavi, Enayat; Seifi, Sepideh; Seifirad, Soroush; Firoozbakhsh, Shahram; Peiman, Soheil

    2015-03-01

    Results of arterial blood gas analysis can be biased by pre-analytical factors, such as the time interval before analysis, the temperature during storage, and the syringe type. We investigated the effects of sample storage temperature and time delay on blood gas, bicarbonate, and pH results in human arterial blood samples. 2.5 mL arterial blood samples were drawn from 45 patients via an indwelling intra-arterial catheter. Each sample was divided into five equal samples and stored in multipurpose tuberculin plastic syringes. Blood gas analysis was performed on one of the five samples as soon as possible. The four other samples were divided into two groups stored at 22°C and 0°C, and blood gas analyses were repeated 30 and 60 minutes after sampling. The PaO2 of the samples stored at 0°C increased significantly after 60 minutes (P = 0.007). The PaCO2 of the samples kept for 30 and 60 minutes at 22°C was significantly higher than the primary result (P = 0.04, P …). In samples stored at 22°C, pH decreased significantly after 30 and 60 minutes (P = 0.017, P = 0.001). There were no significant differences in the other results of samples stored at 0°C or 22°C after 30 or 60 minutes. In samples stored in plastic syringes, overestimation of PaO2 levels should be expected if samples are cooled before analysis, and it is not necessary to store samples in iced water when analysis is delayed up to one hour.

  3. Estimation of time-delayed mutual information and bias for irregularly and sparsely sampled time-series

    International Nuclear Information System (INIS)

    Albers, D.J.; Hripcsak, George

    2012-01-01

    Highlights: ► Time-delayed mutual information for irregularly sampled time-series. ► Estimation bias for the time-delayed mutual information calculation. ► Fast, simple, PDF estimator independent, time-delayed mutual information bias estimate. ► Quantification of data-set-size limits of the time-delayed mutual information calculation. - Abstract: A method to estimate the time-dependent correlation via an empirical bias estimate of the time-delayed mutual information for a time-series is proposed. In particular, the bias of the time-delayed mutual information is shown to often be equivalent to the mutual information between two distributions of points from the same system separated by infinite time. Thus intuitively, estimation of the bias is reduced to estimation of the mutual information between distributions of data points separated by large time intervals. The proposed bias estimation techniques are shown to work for Lorenz equations data and glucose time series data of three patients from the Columbia University Medical Center database.
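
    A rough sketch of the idea as the abstract describes it: estimate the time-delayed mutual information with a plug-in histogram estimator, and approximate its bias by the mutual information at a lag far longer than the correlation time (the estimator choice, AR(1) test signal, and parameters are assumptions, not the authors' implementation):

    ```python
    import numpy as np

    def mutual_information(x, y, bins=16):
        """Plug-in (histogram) estimate of the mutual information in nats."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = pxy / pxy.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    rng = np.random.default_rng(0)
    # AR(1) series: correlations decay over roughly -1/ln(0.9) ≈ 10 steps
    x = np.empty(20000)
    x[0] = 0.0
    for i in range(1, x.size):
        x[i] = 0.9 * x[i - 1] + rng.normal()

    mi_short = mutual_information(x[:-1], x[1:])      # lag 1: real dependence
    mi_far = mutual_information(x[:-1000], x[1000:])  # lag >> correlation time
    # mi_far approximates the estimator bias; subtracting it gives a
    # bias-corrected lag-1 time-delayed mutual information.
    print(mi_short - mi_far > 0)
    ```

    The appeal of this correction is that it needs no analytic bias formula and works with whatever density estimator is already in use.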

  4. Investigation of Bicycle Travel Time Estimation Using Bluetooth Sensors for Low Sampling Rates

    Directory of Open Access Journals (Sweden)

    Zhenyu Mei

    2014-10-01

    Full Text Available Filtering the data for bicycle travel time using Bluetooth sensors is crucial to the estimation of link travel times on a corridor. The current paper describes an adaptive filtering algorithm for estimating bicycle travel times using Bluetooth data, with consideration of low sampling rates. The data for bicycle travel time using Bluetooth sensors has two characteristics. First, the bicycle flow contains stable and unstable conditions. Second, the collected data have low sampling rates (less than 1%. To avoid erroneous inference, filters are introduced to “purify” multiple time series. The valid data are identified within a dynamically varying validity window with the use of a robust data-filtering procedure. The size of the validity window varies based on the number of preceding sampling intervals without a Bluetooth record. Applications of the proposed algorithm to the dataset from Genshan East Road and Moganshan Road in Hangzhou demonstrate its ability to track typical variations in bicycle travel time efficiently, while suppressing high frequency noise signals.
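
    The filter described above can be caricatured in a few lines: a dynamically widening validity window around the median of recently accepted travel times, where the tolerance grows with the number of preceding intervals without a valid record (all names and thresholds below are illustrative, not the paper's):

    ```python
    from collections import deque

    def filter_travel_times(obs, base_tol=0.3, widen=0.1, window=5):
        """Accept an observation if it lies within a tolerance band around the
        median of recent valid records; the band widens for every preceding
        interval with no accepted record, mimicking a growing validity window."""
        valid = deque(maxlen=window)
        gaps = 0
        accepted = []
        for t in obs:
            if not valid:
                valid.append(t)
                accepted.append(t)
                continue
            med = sorted(valid)[len(valid) // 2]
            tol = base_tol * med * (1.0 + widen * gaps)
            if abs(t - med) <= tol:
                valid.append(t)
                accepted.append(t)
                gaps = 0
            else:
                gaps += 1  # treat as a missing interval: widen the window
        return accepted

    obs = [10.0, 10.5, 9.8, 30.0, 10.2, 10.4]  # 30.0 is a spurious match
    print(filter_travel_times(obs))            # → [10.0, 10.5, 9.8, 10.2, 10.4]
    ```

    The widening step matters at low sampling rates: after long gaps the filter must tolerate genuinely changed travel times rather than rejecting everything.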

  5. Learning Bounds of ERM Principle for Sequences of Time-Dependent Samples

    Directory of Open Access Journals (Sweden)

    Mingchen Yao

    2015-01-01

    Full Text Available Many generalization results in learning theory are established under the assumption that samples are independent and identically distributed (i.i.d.). However, numerous learning tasks in practical applications involve time-dependent data. In this paper, we propose a theoretical framework to analyze the generalization performance of the empirical risk minimization (ERM) principle for sequences of time-dependent samples (TDS). In particular, we first present the generalization bound of the ERM principle for TDS. By introducing some auxiliary quantities, we also give a further analysis of the generalization properties and the asymptotic behaviors of the ERM principle for TDS.

  6. Does developmental timing of exposure to child maltreatment predict memory performance in adulthood? Results from a large, population-based sample.

    Science.gov (United States)

    Dunn, Erin C; Busso, Daniel S; Raffeld, Miriam R; Smoller, Jordan W; Nelson, Charles A; Doyle, Alysa E; Luk, Gigi

    2016-01-01

    Although maltreatment is a known risk factor for multiple adverse outcomes across the lifespan, its effects on cognitive development, especially memory, are poorly understood. Using data from a large, nationally representative sample of young adults (Add Health), we examined the effects of physical and sexual abuse on working and short-term memory in adulthood. We examined the association between exposure to maltreatment as well as its timing of first onset after adjusting for covariates. Of our sample, 16.50% of respondents were exposed to physical abuse and 4.36% to sexual abuse by age 17. An analysis comparing unexposed respondents to those exposed to physical or sexual abuse did not yield any significant differences in adult memory performance. However, two developmental time periods emerged as important for shaping memory following exposure to sexual abuse, but in opposite ways. Relative to non-exposed respondents, those exposed to sexual abuse during early childhood (ages 3-5) had better number recall, and those first exposed during adolescence (ages 14-17) had worse number recall. However, other variables, including socioeconomic status, played a larger role than maltreatment in working and short-term memory. We conclude that a simple examination of "exposed" versus "unexposed" respondents may obscure potentially important within-group differences that are revealed by examining the effects of age at onset of maltreatment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. REAL-TIME PCR DETECTION OF LISTERIA MONOCYTOGENES IN FOOD SAMPLES OF ANIMAL ORIGIN

    Directory of Open Access Journals (Sweden)

    Jaroslav Pochop

    2013-02-01

    Full Text Available The aim of this study was to follow the contamination of food with Listeria monocytogenes by using Step One real-time polymerase chain reaction (PCR). We used the PrepSEQ Rapid Spin Sample Preparation Kit for isolation of DNA and the SensiFAST SYBR Hi-ROX Kit for the real-time PCR performance. Strains of Listeria monocytogenes were detected in 15 of 24 samples (swabs) of food of animal origin analysed without incubation; nine samples were negative. Our results indicate that the real-time PCR assay developed in this study can sensitively detect Listeria monocytogenes in food of animal origin without incubation. This could prevent infection caused by Listeria monocytogenes, and could also benefit food manufacturing companies by extending their products' shelf-life as well as saving the cost of warehousing their food products while awaiting pathogen testing results. The rapid real-time PCR-based method performed very well compared to the conventional method. It is a fast, simple, specific and sensitive way to detect nucleic acids, which could be used in clinical diagnostic tests in the future.

  8. A test of alternative estimators for volume at time 1 from remeasured point samples

    Science.gov (United States)

    Francis A. Roesch; Edwin J. Green; Charles T. Scott

    1993-01-01

    Two estimators for volume at time 1 for use with permanent horizontal point samples are evaluated. One estimator, used traditionally, uses only the trees sampled at time 1, while the second estimator, originally presented by Roesch and coauthors (F.A. Roesch, Jr., E.J. Green, and C.T. Scott. 1989. For. Sci. 35(2):281-293). takes advantage of additional sample...

  9. Estimating time to pregnancy from current durations in a cross-sectional sample

    DEFF Research Database (Denmark)

    Keiding, Niels; Kvist, Kajsa; Hartvig, Helle

    2002-01-01

    A new design for estimating the distribution of time to pregnancy is proposed and investigated. The design is based on recording current durations in a cross-sectional sample of women, leading to statistical problems similar to estimating renewal time distributions from backward recurrence times....

  10. Effects of holding time and measurement error on culturing Legionella in environmental water samples.

    Science.gov (United States)

    Flanders, W Dana; Kirkland, Kimberly H; Shelton, Brian G

    2014-10-01

    Outbreaks of Legionnaires' disease require environmental testing of water samples from potentially implicated building water systems to identify the source of exposure. A previous study reported a large impact on Legionella sample results due to shipping and delays in sample processing. Specifically, that study, without accounting for measurement error, reported that more than half of the shipped samples tested had Legionella levels that arbitrarily changed up or down by one or more logs, and the authors attributed this result to shipping time. Accordingly, we conducted a study to determine the effects of sample holding/shipping time on Legionella sample results while taking into account measurement error, which has previously not been addressed. We analyzed 159 samples, each split into 16 aliquots, of which one half (8) were processed promptly after collection. The remaining half (8) were processed the following day to assess the impact of holding/shipping time. A total of 2544 samples were analyzed, including replicates. After accounting for inherent measurement error, we found that the effect of holding time on observed Legionella counts was small and should have no practical impact on the interpretation of results. Holding samples increased the root mean squared error by only about 3-8%. Notably, for only one of the 159 samples did the average of the 8 replicate counts change by 1 log. Thus, our findings do not support the hypothesis of frequent, significant (≥ 1 log10 unit) Legionella colony count changes due to holding. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  11. Study of the Effect of Temporal Sampling Frequency on DSCOVR Observations Using the GEOS-5 Nature Run Results. Part II: Cloud Coverage

    Science.gov (United States)

    Holdaway, Daniel; Yang, Yuekui

    2016-01-01

    This is the second part of a study on how temporal sampling frequency affects satellite retrievals in support of the Deep Space Climate Observatory (DSCOVR) mission. Continuing from Part 1, which looked at Earth's radiation budget, this paper presents the effect of sampling frequency on DSCOVR-derived cloud fraction. The output from NASA's Goddard Earth Observing System version 5 (GEOS-5) Nature Run is used as the "truth". The effect of temporal resolution on potential DSCOVR observations is assessed by subsampling the full Nature Run data. A set of metrics, including uncertainty and absolute error in the subsampled time series, correlation between the original and the subsamples, and Fourier analysis have been used for this study. Results show that, for a given sampling frequency, the uncertainties in the annual mean cloud fraction of the sunlit half of the Earth are larger over land than over ocean. Analysis of correlation coefficients between the subsamples and the original time series demonstrates that even though sampling at certain longer time intervals may not increase the uncertainty in the mean, the subsampled time series is further and further away from the "truth" as the sampling interval becomes larger and larger. Fourier analysis shows that the simulated DSCOVR cloud fraction has underlying periodical features at certain time intervals, such as 8, 12, and 24 h. If the data is subsampled at these frequencies, the uncertainties in the mean cloud fraction are higher. These results provide helpful insights for the DSCOVR temporal sampling strategy.
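
    The subsampling effect described above is easy to reproduce on synthetic data: when the sampling interval is a multiple of an underlying period (here a 24 h diurnal cycle), the subsampled mean can be biased by the aliased signal, whereas an off-cycle interval averages the cycle out (the series below is a synthetic stand-in, not GEOS-5 output):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    # One year of hourly "cloud fraction" with a 24 h cycle plus noise
    t = np.arange(0, 24 * 365)
    series = 0.5 + 0.1 * np.sin(2 * np.pi * t / 24) + 0.02 * rng.normal(size=t.size)

    def subsample_error(series, step):
        """Worst-case absolute error of the mean over every possible
        subsample phase at the given sampling interval (in hours)."""
        return max(abs(series[k::step].mean() - series.mean()) for k in range(step))

    # Sampling every 24 h always hits the same phase of the diurnal cycle,
    # so the worst-case error is larger than at an off-cycle interval:
    print(subsample_error(series, 24) > subsample_error(series, 7))
    ```

    This is the same mechanism behind the higher uncertainties the study reports at the 8, 12, and 24 h intervals picked out by the Fourier analysis.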

  12. Synchronization of Hierarchical Time-Varying Neural Networks Based on Asynchronous and Intermittent Sampled-Data Control.

    Science.gov (United States)

    Xiong, Wenjun; Patel, Ragini; Cao, Jinde; Zheng, Wei Xing

    In this brief, our purpose is to apply asynchronous and intermittent sampled-data control methods to achieve the synchronization of hierarchical time-varying neural networks. The asynchronous and intermittent sampled-data controllers are proposed for two reasons: 1) the controllers may not transmit the control information simultaneously and 2) the controllers cannot always exist at any time. The synchronization is then discussed for a kind of hierarchical time-varying neural networks based on the asynchronous and intermittent sampled-data controllers. Finally, the simulation results are given to illustrate the usefulness of the developed criteria.

  13. An empirical comparison of respondent-driven sampling, time location sampling, and snowball sampling for behavioral surveillance in men who have sex with men, Fortaleza, Brazil.

    Science.gov (United States)

    Kendall, Carl; Kerr, Ligia R F S; Gondim, Rogerio C; Werneck, Guilherme L; Macena, Raimunda Hermelinda Maia; Pontes, Marta Kerr; Johnston, Lisa G; Sabin, Keith; McFarland, Willi

    2008-07-01

    Obtaining samples of populations at risk for HIV challenges surveillance, prevention planning, and evaluation. Methods used include snowball sampling, time location sampling (TLS), and respondent-driven sampling (RDS). Few studies have made side-by-side comparisons to assess their relative advantages. We compared snowball, TLS, and RDS surveys of men who have sex with men (MSM) in Fortaleza, Brazil, with a focus on comparing the socio-economic status (SES) and risk behaviors of the samples to each other, to known AIDS cases, and to the general population. RDS produced a sample with wider inclusion of lower SES than snowball sampling or TLS, a finding of health significance given that the majority of AIDS cases reported among MSM in the state were of low SES. RDS also achieved the sample size faster and at lower cost. For reasons of inclusion and cost-efficiency, RDS is the sampling methodology of choice for HIV surveillance of MSM in Fortaleza.

  14. Laboratory sample turnaround times: do they cause delays in the ED?

    Science.gov (United States)

    Gill, Dipender; Galvin, Sean; Ponsford, Mark; Bruce, David; Reicher, John; Preston, Laura; Bernard, Stephani; Lafferty, Jessica; Robertson, Andrew; Rose-Morris, Anna; Stoneham, Simon; Rieu, Romelie; Pooley, Sophie; Weetch, Alison; McCann, Lloyd

    2012-02-01

    Blood tests are requested for approximately 50% of patients attending the emergency department (ED). The time taken to obtain the results is perceived as a common reason for delay. The objective of this study was therefore to investigate the turnaround time (TAT) for blood results and whether this affects patient length of stay (LOS) and to identify potential areas for improvement. A time-in-motion study was performed at the ED of the John Radcliffe Hospital (JRH), Oxford, UK. The duration of each of the stages leading up to receipt of 101 biochemistry and haematology results was recorded, along with the corresponding patient's LOS. The findings reveal that the mean time for haematology results to become available was 1 hour 6 minutes (95% CI: 29 minutes to 2 hours 13 minutes), while biochemistry samples took 1 hour 42 minutes (95% CI: 1 hour 1 minute to 4 hours 21 minutes), with some positive correlation noted with the patient LOS, but no significant variation between different days or shifts. With the fastest 10% of samples being reported within 35 minutes (haematology) and 1 hour 5 minutes (biochemistry) of request, our study showed that delays can be attributable to laboratory TAT. Given the limited ability to further improve laboratory processes, the solutions to improving TAT need to come from a collaborative and integrated approach that includes strategies before samples reach the laboratory and downstream review of results. © 2010 Blackwell Publishing Ltd.

  15. Influence of sampling depth and post-sampling analysis time on the ...

    African Journals Online (AJOL)

    Bacteriological analysis was carried out for samples taken at water depth and at 1, 6, 12 and 24 hours post-sampling. It was observed that the total and faecal coliform bacteria were significantly higher in the 3 m water depth samples than in the surface water samples (ANOVA, F = 59.41, 26.751, 9.82 (T.C); 46.41, 26.81, ...

  16. Simple DNA extraction of urine samples: Effects of storage temperature and storage time.

    Science.gov (United States)

    Ng, Huey Hian; Ang, Hwee Chen; Hoe, See Ying; Lim, Mae-Lynn; Tai, Hua Eng; Soh, Richard Choon Hock; Syn, Christopher Kiu-Choong

    2018-06-01

    Urine samples are commonly analysed in cases with suspected illicit drug consumption. In events of alleged sample mishandling, urine sample source identification may be necessary. A simple DNA extraction procedure suitable for STR typing of urine samples was established on the Promega Maxwell® 16 paramagnetic silica bead platform. A small sample volume of 1.7 mL was used. Samples were stored at room temperature, 4°C and -20°C for 100 days to investigate the influence of storage temperature and time on extracted DNA quantity and success rate of STR typing. Samples stored at room temperature exhibited a faster decline in DNA yield with time and lower typing success rates as compared to those at 4°C and -20°C. This trend can likely be attributed to DNA degradation. In conclusion, this study presents a quick and effective DNA extraction protocol from a small urine volume stored for up to 100 days at 4°C and -20°C. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Study on auto-plating process time versus recovery for polonium, Po-210 in environmental sample

    International Nuclear Information System (INIS)

    Jalal Sharib; Zaharudin Ahmad; Abdul Kadir Ishak; Norfaizal Mohamed; Ahmad Sanadi Abu Bakar; Yii Mei Wo; Kamarozaman Ishak; Siti Aminah Yusoff

    2008-08-01

    This study was carried out to evaluate the time effectiveness and recovery of the auto-plating procedure for determination of polonium (Po-210) activity concentration in environmental samples, using 16 samples from 4 Kuala Muda stations. The study was performed using Kuala Muda sediment as the sample matrix, with the same methodology throughout. For each station, the auto-plating process was run for 4, 12, 24 and 30 hours on a silver disc for 4 samples, which were then counted for one (1) day using an alpha spectrometry counting system. The objective of this study was to establish the auto-plating duration that yields an adequate chemical yield of the Po-209 tracer. The results showed that recovery increases with time and levels off at 24 hours of auto-plating. This means that 24 hours is the optimum auto-plating time for determination of Po-210 activity concentration in environmental samples. (Author)

  18. Effects of Long-Term Storage Time and Original Sampling Month on Biobank Plasma Protein Concentrations

    Directory of Open Access Journals (Sweden)

    Stefan Enroth

    2016-10-01

    The quality of clinical biobank samples is crucial to their value for life sciences research. A number of factors related to the collection and storage of samples may affect the biomolecular composition. We have studied the effects of long-term freezer storage, chronological age at sampling, and season and month of the year on the abundance levels of 108 proteins in 380 plasma samples collected from 106 Swedish women. Storage time affected 18 proteins and explained 4.8–34.9% of the observed variance. Chronological age at sample collection, after adjustment for storage time, affected 70 proteins and explained 1.1–33.5% of the variance. Seasonal variation had an effect on 15 proteins, and month (number of sun hours) affected 36 proteins and explained up to 4.5% of the variance after adjustment for storage time and age. The results show that freezer storage time and collection date (month and season) exerted effect sizes similar to those of age on the protein abundance levels. This implies that information on the sample handling history, in particular storage time, should be regarded as an equally prominent covariate as age or gender and needs to be included in epidemiological studies involving protein levels.

  19. Powered bone marrow biopsy procedures produce larger core specimens, with less pain, in less time than with standard manual devices

    Directory of Open Access Journals (Sweden)

    Larry J. Miller

    2011-07-01

    Bone marrow sampling remains essential in the evaluation of hematopoietic and many non-hematopoietic disorders. One common limitation to these procedures is the discomfort experienced by patients. To address whether a Powered biopsy system could reduce discomfort while providing equivalent or better results, we performed a randomized trial in adult volunteers. Twenty-six subjects underwent bilateral biopsies with each device. Core samples were obtained in 66.7% of Manual insertions and 100% of Powered insertions (P=0.002). Initial mean biopsy core lengths were 11.1±4.5 mm for the Manual device and 17.0±6.8 mm for the Powered device (P<0.005). Pathology assessment for the Manual device showed a mean length of 6.1±5.6 mm, width of 1.0±0.7 mm, and volume of 11.0±10.8 mm3. Powered device measurements showed a mean length of 15.3±6.1 mm, width of 2.0±0.3 mm, and volume of 49.1±21.5 mm3 (P<0.001). The mean time to core ejection was 86 seconds for the Manual device and 47 seconds for the Powered device (P<0.001). The mean second-look overall pain score was 33.3 for the Manual device and 20.9 for the Powered device (P=0.039). We conclude that the Powered biopsy device produces larger specimens, with less overall pain, in less time.

  20. Scheduling sampling to maximize information about time dependence in experiments with limited resources

    DEFF Research Database (Denmark)

    Græsbøll, Kaare; Christiansen, Lasse Engbo

    2013-01-01

    Looking for periodicity in sampled data requires that periods (lags) of different length are represented in the sampling plan. We here present a method to assist in planning of temporal studies with sparse resources, which optimizes the number of observed time lags for a fixed amount of samples w...

  1. A sampling approach to constructing Lyapunov functions for nonlinear continuous–time systems

    NARCIS (Netherlands)

    Bobiti, R.V.; Lazar, M.

    2016-01-01

    The problem of constructing a Lyapunov function for continuous-time nonlinear dynamical systems is tackled in this paper via a sampling-based approach. The main idea of the sampling-based method is to verify a Lyapunov-type inequality for a finite number of points (known state vectors) in the

  2. Crystallite size variation of TiO_2 samples depending on heat treatment time

    International Nuclear Information System (INIS)

    Galante, A.G.M.; Paula, F.R. de; Montanhera, M.A.; Pereira, E.A.; Spada, E.R.

    2016-01-01

    Titanium dioxide (TiO_2) is an oxide semiconductor that may be found in mixed phase or in distinct phases: brookite, anatase and rutile. In this work, the influence of residence time at a given heat-treatment temperature on the physical properties of TiO_2 powder was studied. After the powder synthesis, the samples were divided and heat treated at 650 °C, with a ramp of up to 3 °C/min and a residence time ranging from 0 to 20 hours, and subsequently characterized by x-ray diffraction. Analysis of the obtained diffraction patterns showed that, from a 5-hour residence time onwards, two distinct phases coexist: anatase and rutile. The average crystallite size of each sample was also calculated. The results showed an increase in average crystallite size with increasing residence time of the heat treatment. (author)
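    The abstract does not state which formula was used, but average crystallite size from X-ray diffraction line broadening is conventionally estimated with the Scherrer equation, D = Kλ/(β cos θ). A minimal sketch under that assumption; the peak values below are illustrative, not taken from the paper:

```python
import math

def scherrer_crystallite_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
    """Estimate crystallite size (nm) from an XRD peak via the Scherrer equation.

    fwhm_deg      : peak full width at half maximum, in degrees 2-theta
    two_theta_deg : peak position, in degrees 2-theta
    wavelength_nm : X-ray wavelength (default: Cu K-alpha)
    k             : shape factor (~0.9 for roughly spherical crystallites)
    """
    beta = math.radians(fwhm_deg)            # FWHM converted to radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative numbers (assumed): an anatase (101) peak near 25.3 degrees 2-theta
size = scherrer_crystallite_size(fwhm_deg=0.35, two_theta_deg=25.3)
```

    A broader peak (larger FWHM) yields a smaller estimated crystallite size, which is why peak sharpening with longer residence time indicates crystallite growth.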

  3. Larger foraminifera of the Devil's Den and Blue Hole sinkholes, Florida

    Science.gov (United States)

    Cotton, Laura J.; Eder, Wolfgang; Floyd, James

    2018-03-01

    Shallow-water carbonate deposits are well-known from the Eocene of the US Gulf Coast and Caribbean. These deposits frequently contain abundant larger benthic foraminifera (LBF). However, whilst integrated stratigraphic studies have helped to refine the timing of LBF overturning events within the Tethys and Indo-Pacific regions with respect to global bio- and chemo-stratigraphic records, little recent work has been carried out in the Americas. The American LBF assemblages are distinctly different from those of Europe and the Indo-Pacific. It is therefore essential that the American bio-province is included in studies of LBF evolution, biodiversity and climate events to understand these processes on a global scale. Here we present the LBF ranges from two previously unpublished sections spanning 35 and 29 m of the upper Eocene Ocala limestone, as the early stages of a larger project addressing the taxonomy and biostratigraphy of the LBF of Florida. The study indicates that the lower member of the Ocala limestone may be Bartonian rather than Priabonian in age, with implications for the biostratigraphy of the region. In addition, the study highlights the need for multiple sites to assess the LBF assemblages and fully constrain ranges across Florida and the US Gulf and suggests potential LBF events for future integrated stratigraphic study.

  4. Sampling rare fluctuations of discrete-time Markov chains

    Science.gov (United States)

    Whitelam, Stephen

    2018-03-01

    We describe a simple method that can be used to sample the rare fluctuations of discrete-time Markov chains. We focus on the case of Markov chains with well-defined steady-state measures, and derive expressions for the large-deviation rate functions (and upper bounds on such functions) for dynamical quantities extensive in the length of the Markov chain. We illustrate the method using a series of simple examples, and use it to study the fluctuations of a lattice-based model of active matter that can undergo motility-induced phase separation.
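    As a concrete illustration of the large-deviation quantities involved, the scaled cumulant generating function (SCGF) of an additive observable of a discrete-time Markov chain can be computed as the logarithm of the dominant eigenvalue of a tilted transition matrix. This is a standard textbook construction, not necessarily the paper's specific sampling method:

```python
import numpy as np

def scgf(P, g, s):
    """Scaled cumulant generating function theta(s) for the additive
    observable A_T = sum_t g(x_t) of a discrete-time Markov chain with
    transition matrix P (rows sum to 1):
    theta(s) = ln of the largest eigenvalue of the tilted matrix P_ij * exp(s*g(j))."""
    tilted = P * np.exp(s * np.asarray(g))[None, :]  # tilt column j by e^{s g(j)}
    return np.log(np.max(np.real(np.linalg.eigvals(tilted))))

# Two-state chain; the observable counts time spent in state 1
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
g = [0.0, 1.0]
theta0 = scgf(P, g, 0.0)  # at s=0 the tilted matrix is P itself, so theta(0)=0
```

    The derivative of theta at s = 0 recovers the stationary mean of the observable, and the Legendre transform of theta gives the rate function governing rare fluctuations.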

  5. Studies of $b$-tagging performance and jet substructure in a high $p_\\rm{T}$ $g\\rightarrow b\\bar{b}$ rich sample of large-$R$ jets from $pp$ collisions at $\\sqrt{s}=8$ TeV with the ATLAS detector

    CERN Document Server

    The ATLAS collaboration

    2016-01-01

    This note summarizes studies of $b$-tagging performance and the modelling of jet properties in high $p_{\rm T}$, double $b$-tagged, large-$R$ jets from $\sqrt{s} = 8$ TeV $pp$ collisions collected by the ATLAS detector at the Large Hadron Collider. The double $b$-tag requirement yields a sample rich in jets originating from the $g\rightarrow b\bar{b}$ process. Using this sample, the performance of $b$-tagging at small $b$-quark angular separations is probed, and the modelling of jet properties, including substructure variables, is examined. Good agreement between data and Monte Carlo simulation is found within the experimental uncertainties.

  6. Time optimization of 90Sr measurements: Sequential measurement of multiple samples during ingrowth of 90Y

    International Nuclear Information System (INIS)

    Holmgren, Stina; Tovedal, Annika; Björnham, Oscar; Ramebäck, Henrik

    2016-01-01

    The aim of this paper is to contribute to a more rapid determination of a series of samples containing 90Sr by making the Cherenkov measurement of the daughter nuclide 90Y more time efficient. There are many instances when an optimization of the measurement method might be favorable, such as situations requiring rapid results in order to make urgent decisions or, on the other hand, maximizing the throughput of samples in a limited available time span. In order to minimize the total analysis time, a mathematical model was developed which calculates the time of ingrowth as well as individual measurement times for n samples in a series. This work is focused on the measurement of 90Y during ingrowth, after an initial chemical separation of strontium, in which it is assumed that no other radioactive strontium isotopes are present. By using a fixed minimum detectable activity (MDA) and iterating the measurement time for each consecutive sample, the total analysis time will be less compared to using the same measurement time for all samples. It was found that by optimization, the total analysis time for 10 samples can be decreased greatly, from 21 h to 6.5 h, when assuming an MDA of 1 Bq/L and a background count rate of approximately 0.8 cpm. - Highlights: • An approach roughly a factor of three more efficient than an un-optimized method. • The optimization gives a more efficient use of instrument time. • The efficiency increase ranges from a factor of three to 10, for 10 to 40 samples.
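    The paper's exact model is not reproduced in the abstract. A hedged sketch of the underlying idea, assuming a generic Currie-style MDA expression and the 90Y ingrowth factor 1 - exp(-λt); the detector parameters below are illustrative assumptions, not the authors' values:

```python
import math

LAMBDA_Y90 = math.log(2) / (64.0 * 3600)  # 90Y decay constant, s^-1 (T1/2 ~ 64 h)

def mda_bq_per_l(t_count, t_ingrowth, bkg_cps, eff, volume_l):
    """Currie-style minimum detectable activity (Bq/L) for a Cherenkov
    measurement of 90Y that has grown in for t_ingrowth seconds.
    This is a generic textbook expression, not the paper's exact model."""
    ingrowth = 1.0 - math.exp(-LAMBDA_Y90 * t_ingrowth)   # fraction of 90Y grown in
    counts_term = 2.71 + 4.65 * math.sqrt(bkg_cps * t_count)
    return counts_term / (eff * t_count * volume_l * ingrowth)

def counting_time_for_mda(target, t_ingrowth, bkg_cps, eff, volume_l,
                          lo=1.0, hi=1e6):
    """Smallest counting time (s) whose MDA is at or below `target`, found by
    bisection (the MDA above decreases monotonically with counting time)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mda_bq_per_l(mid, t_ingrowth, bkg_cps, eff, volume_l) > target:
            lo = mid
        else:
            hi = mid
    return hi

# Illustrative (assumed) parameters: background 0.8 cpm, 60% efficiency, 0.5 L
t_req = counting_time_for_mda(target=1.0, t_ingrowth=12 * 3600,
                              bkg_cps=0.8 / 60, eff=0.6, volume_l=0.5)
```

    Because later samples in a series have had more ingrowth time, their required counting times are shorter, which is the lever the paper's optimization exploits to reduce the total analysis time.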

  7. SME routes for innovation collaboration with larger enterprises

    DEFF Research Database (Denmark)

    Brink, Tove

    2017-01-01

    The research in this paper reveals how Small and Medium-sized Enterprises (SMEs) can contribute to industry competitiveness through collaboration with larger enterprises. The research is based on a longitudinal qualitative case study starting in 2011 with 10 SME offshore wind farm suppliers and follow-up interviews in 2013. The research continued with a second approach in 2014 within operation and maintenance (O&M), through focus group interviews and subsequent individual interviews with 20 enterprises and a seminar in May 2015. The findings reveal opportunities and challenges for SMEs according to three different routes for cooperation and collaboration with larger enterprises: demand-driven cooperation, supplier-driven cooperation and partner-driven collaboration. The SME contribution to innovation and competitiveness differs across the three routes and ranges from providing specific knowledge

  8. A Real-Time PCR Detection of Genus Salmonella in Meat and Milk Samples

    Directory of Open Access Journals (Sweden)

    Jaroslav Pochop

    2013-05-01

    The aim of this study was to follow the contamination of ready-to-eat milk and meat products with Salmonella spp. by using the StepOne real-time PCR. Classical microbiological methods for detection of food-borne bacteria involve the use of pre-enrichment and/or specific enrichment, followed by the isolation of the bacteria on solid media and a final confirmation by biochemical and/or serological tests. We used the PrepSEQ Rapid Spin Sample Preparation Kit for isolation of DNA and the SensiFAST SYBR Hi-ROX Kit for the real-time PCR. In the investigated samples without incubation we could detect a strain of Salmonella sp. in five out of twenty-three samples (swabs). This real-time PCR assay is extremely useful for any laboratory in possession of a real-time PCR instrument. It is a fast, reproducible, simple, specific and sensitive way to detect nucleic acids, which could be used in clinical diagnostic tests in the future. Our results indicate that the real-time PCR assay developed in this study can sensitively detect Salmonella spp. in ready-to-eat food.

  9. Sampling Methodologies for Epidemiologic Surveillance of Men Who Have Sex with Men and Transgender Women in Latin America: An Empiric Comparison of Convenience Sampling, Time Space Sampling, and Respondent Driven Sampling

    OpenAIRE

    Clark, J. L.; Konda, K. A.; Silva-Santisteban, A.; Peinado, J.; Lama, J. R.; Kusunoki, L.; Perez-Brumer, A.; Pun, M.; Cabello, R.; Sebastian, J. L.; Suarez-Ognio, L.; Sanchez, J.

    2014-01-01

    Alternatives to convenience sampling (CS) are needed for HIV/STI surveillance of most-at-risk populations in Latin America. We compared CS, time space sampling (TSS), and respondent driven sampling (RDS) for recruitment of men who have sex with men (MSM) and transgender women (TW) in Lima, Peru. During concurrent 60-day periods from June-August, 2011, we recruited MSM/TW for epidemiologic surveillance using CS, TSS, and RDS. A total of 748 participants were recruited through CS, 233 through T...

  10. Sequencing Larger Intact Proteins (30-70 kDa) with Activated Ion Electron Transfer Dissociation

    Science.gov (United States)

    Riley, Nicholas M.; Westphall, Michael S.; Coon, Joshua J.

    2018-01-01

    The analysis of intact proteins via mass spectrometry can offer several benefits to proteome characterization, although the majority of top-down experiments focus on proteoforms in a relatively low mass range. Here we describe the application of activated ion electron transfer dissociation (AI-ETD) to proteins in the 30-70 kDa range. AI-ETD leverages infrared photo-activation concurrent to ETD reactions to improve sequence-informative product ion generation. This method generates more product ions and greater sequence coverage than conventional ETD, higher-energy collisional dissociation (HCD), and ETD combined with supplemental HCD activation (EThcD). Importantly, AI-ETD provides the most thorough protein characterization for every precursor ion charge state investigated in this study, making it suitable as a universal fragmentation method in top-down experiments. Additionally, we highlight several acquisition strategies that can benefit characterization of larger proteins with AI-ETD, including combination of spectra from multiple ETD reaction times for a given precursor ion, multiple spectral acquisitions of the same precursor ion, and combination of spectra from two different dissociation methods (e.g., AI-ETD and HCD). In all, AI-ETD shows great promise as a method for dissociating larger intact protein ions as top-down proteomics continues to advance into larger mass ranges.

  11. Cross-sample entropy of foreign exchange time series

    Science.gov (United States)

    Liu, Li-Zhi; Qian, Xi-Yuan; Lu, Heng-Yao

    2010-11-01

    The correlation of foreign exchange rates in currency markets is investigated based on the empirical data of DKK/USD, NOK/USD, CAD/USD, JPY/USD, KRW/USD, SGD/USD, THB/USD and TWD/USD for the period from 1995 to 2002. The cross-SampEn (cross-sample entropy) method is used to compare the returns of every two exchange rate time series to assess their degree of asynchrony. The calculation method for the confidence interval of SampEn is extended and applied to cross-SampEn. The cross-SampEn and its confidence interval for every pair of the exchange rate time series in the periods 1995-1998 (before the Asian currency crisis) and 1999-2002 (after the Asian currency crisis) are calculated. The results show that the cross-SampEn of every pair of these exchange rates becomes higher after the Asian currency crisis, indicating a higher asynchrony between the exchange rates. Especially for Singapore, Thailand and Taiwan, the cross-SampEn values after the Asian currency crisis are significantly higher than those before the Asian currency crisis. Comparison with the correlation coefficient shows that cross-SampEn is superior in describing the correlation between time series.
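    A minimal sketch of cross-sample entropy as commonly defined (template matching under a Chebyshev distance); the exact variant and parameters used by the authors are not given in the abstract:

```python
import numpy as np

def cross_sampen(u, v, m=2, r=0.2):
    """Cross-sample entropy of two time series (each z-scored first), with
    template length m and tolerance r in units of standard deviation.
    Higher values indicate greater asynchrony between the series."""
    u = (np.asarray(u, float) - np.mean(u)) / np.std(u)
    v = (np.asarray(v, float) - np.mean(v)) / np.std(v)
    n_templates = min(len(u), len(v)) - m  # same template count for m and m+1

    def matches(mm):
        # Chebyshev distance between every length-mm template of u and of v
        ut = np.lib.stride_tricks.sliding_window_view(u, mm)[:n_templates]
        vt = np.lib.stride_tricks.sliding_window_view(v, mm)[:n_templates]
        d = np.abs(ut[:, None, :] - vt[None, :, :]).max(axis=2)
        return np.count_nonzero(d <= r)

    b, a = matches(m), matches(m + 1)
    return np.inf if a == 0 or b == 0 else -np.log(a / b)

rng = np.random.default_rng(0)
x = rng.standard_normal(300)
y = x + 0.05 * rng.standard_normal(300)  # a nearly synchronous copy of x
cs = cross_sampen(x, y)                  # finite and non-negative
```

    Since every length-(m+1) match implies a length-m match over the same index range, the ratio a/b is at most 1 and cross-SampEn is non-negative; it is also symmetric in the two series.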

  12. Wuchereria bancrofti in Tanzania: microfilarial periodicity and effect of blood sampling time on microfilarial intensities

    DEFF Research Database (Denmark)

    Simonsen, Poul Erik; Niemann, L.; Meyrowitsch, Dan Wolf

    1997-01-01

    The circadian periodicity of Wuchereria bancrofti microfilarial (mf) intensities in peripheral blood was analysed in a group of infected individuals from an endemic community in north-eastern Tanzania. The mf density was quantified at two-hourly intervals for 24 hours. A clear nocturnal periodic...... of blood sampling before peak time is discussed, and the importance of taking sampling time into consideration when analysing data from epidemiological studies is emphasized. A simple method is devised which can be used to adjust for the influence of time on mf intensities, in studies where accurate...... information on mf intensities is necessary, and where it is impossible to obtain all samples at peak time....

  13. A novel heterogeneous training sample selection method on space-time adaptive processing

    Science.gov (United States)

    Wang, Qiang; Zhang, Yongshun; Guo, Yiduo

    2018-04-01

    The performance of ground target detection with space-time adaptive processing (STAP) degrades when the clutter power becomes non-homogeneous because training samples are contaminated by target-like signals. In order to solve this problem, a novel non-homogeneous training sample selection method based on sample similarity is proposed, which converts training sample selection into a convex optimization problem. Firstly, the existing deficiencies of sample selection using the generalized inner product (GIP) are analyzed. Secondly, the similarities of different training samples are obtained by calculating the mean-Hausdorff distance, so as to reject contaminated training samples. Thirdly, the cell under test (CUT) and the remaining training samples are projected into the orthogonal subspace of the target in the CUT, and mean-Hausdorff distances between the projected CUT and training samples are calculated. Fourthly, the distances are sorted by value, and the training samples with the larger values are preferentially selected to realize the dimension reduction. Finally, simulation results with Mountain-Top data verify the effectiveness of the proposed method.
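    The full selection procedure (GIP analysis, subspace projection, sorting) is not reproduced here, but the mean-Hausdorff distance at its core can be sketched as the larger of the two directed average nearest-neighbour distances between point sets. A minimal version, with the point-set encoding of the samples left as an assumption:

```python
import numpy as np

def mean_hausdorff(a, b):
    """Modified (mean) Hausdorff distance between two point sets a and b
    (rows are points): the larger of the two directed average
    nearest-neighbour distances."""
    a, b = np.atleast_2d(a), np.atleast_2d(b)
    # pairwise Euclidean distances, shape (len(a), len(b))
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())
```

    Unlike the classical Hausdorff distance, the mean variant averages nearest-neighbour distances instead of taking their maximum, which makes it less sensitive to single outlying points, a useful property when samples may carry isolated target-like contamination.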

  14. Defining an optimum pumping-time requirement for sampling ground-water wells on the Hanford site

    International Nuclear Information System (INIS)

    Scharnhorst, N.L.

    1982-04-01

    The objective was to determine the optimum time period necessary to pump water from a well before a representative sample of the ground water can be obtained. It was assumed that a representative sample has been collected if the concentration of chemical parameters is the same in a number of samples taken consecutively, so that the concentration of parameters does not vary with time of collection. Ground-water samples used in this project were obtained by pumping selected wells on the Hanford Site. At each well, samples were taken at two minute intervals, and on each sample various chemical analyses were performed. Samples were checked for pH, sulfate, iron, specific conductivity, chloride, nitrate and alkalinity. The data showed that pH, alkalinity, sulfate and specific conductivity levels stabilized almost immediately after pumping of the well began. In many wells, the chloride and nitrate levels were unstable throughout the 38-minute sampling period. Iron levels, however, did not behave in either fashion. The concentration of iron in the samples was high when pumping began but dropped rapidly as pumping continued. The best explanation for this is that iron is flushed from the sides of the casing into the well when pumping begins. After several minutes of pumping, most of the dissolved iron is washed from the well casing and the iron concentration reaches a stable plateau representative of the iron concentration in the ground water. Since iron concentration takes longest to stabilize, the optimum pumping time for a well is based on the iron stabilization time for that well.

  15. Asymptotic theory for the sample covariance matrix of a heavy-tailed multivariate time series

    DEFF Research Database (Denmark)

    Davis, Richard A.; Mikosch, Thomas Valentin; Pfaffel, Olivier

    2016-01-01

    In this paper we give an asymptotic theory for the eigenvalues of the sample covariance matrix of a multivariate time series. The time series constitutes a linear process across time and between components. The input noise of the linear process has regularly varying tails with index α∈(0,4); in particular, the time series has infinite fourth moment. We derive the limiting behavior for the largest eigenvalues of the sample covariance matrix and show point process convergence of the normalized eigenvalues. The limiting process has an explicit form involving points of a Poisson process and eigenvalues of a non-negative definite matrix. Based on this convergence we derive limit theory for a host of other continuous functionals of the eigenvalues, including the joint convergence of the largest eigenvalues, the joint convergence of the largest eigenvalue and the trace of the sample covariance matrix...

  16. Unsupervised Ensemble Anomaly Detection Using Time-Periodic Packet Sampling

    Science.gov (United States)

    Uchida, Masato; Nawata, Shuichi; Gu, Yu; Tsuru, Masato; Oie, Yuji

    We propose an anomaly detection method for finding patterns in network traffic that do not conform to legitimate (i.e., normal) behavior. The proposed method trains a baseline model describing the normal behavior of network traffic without using manually labeled traffic data. The trained baseline model is used as the basis for comparison with the audit network traffic. This anomaly detection works in an unsupervised manner through the use of time-periodic packet sampling, which is used in a manner that differs from its intended purpose — the lossy nature of packet sampling is used to extract normal packets from the unlabeled original traffic data. Evaluation using actual traffic traces showed that the proposed method has false positive and false negative rates in the detection of anomalies regarding TCP SYN packets comparable to those of a conventional method that uses manually labeled traffic data to train the baseline model. Performance variation due to the probabilistic nature of sampled traffic data is mitigated by using ensemble anomaly detection that collectively exploits multiple baseline models in parallel. Alarm sensitivity is adjusted for the intended use by using maximum- and minimum-based anomaly detection that effectively take advantage of the performance variations among the multiple baseline models. Testing using actual traffic traces showed that the proposed anomaly detection method performs as well as one using manually labeled traffic data and better than one using randomly sampled (unlabeled) traffic data.

  17. Real-time recursive hyperspectral sample and band processing algorithm architecture and implementation

    CERN Document Server

    Chang, Chein-I

    2017-01-01

    This book explores recursive architectures in designing progressive hyperspectral imaging algorithms. In particular, it makes progressive imaging algorithms recursive by introducing the concept of Kalman filtering into algorithm design, so that hyperspectral imagery can be processed not only progressively, sample by sample or band by band, but also recursively via recursive equations. This book can be considered a companion to the author's book Real-Time Progressive Hyperspectral Image Processing, published by Springer in 2016. It explores recursive structures in algorithm architecture; implements algorithmic recursive architecture in conjunction with progressive sample and band processing; derives Recursive Hyperspectral Sample Processing (RHSP) techniques according to the Band-Interleaved Sample/Pixel (BIS/BIP) acquisition format; and develops Recursive Hyperspectral Band Processing (RHBP) techniques according to the Band SeQuential (BSQ) acquisition format for hyperspectral data.

  18. Sample size determination for mediation analysis of longitudinal data.

    Science.gov (United States)

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies, because sufficient statistical power is not only required in grant applications and peer-reviewed publications but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal designs. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power, by simulation, under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., within-subject correlation). A larger value of ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the most commonly encountered scenarios in practice have also been published for convenient use. The extensive simulation study showed that the distribution of the product method and the bootstrapping method have superior performance to Sobel's method; the product method is recommended in practice because of its smaller computation time compared to the bootstrapping method. An R package has been developed for the product method of sample size determination in longitudinal mediation study design.
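    As an illustration of the bootstrap test compared in the study, a percentile-bootstrap confidence interval for the indirect effect a*b can be sketched for a simple single-level mediation model; the paper's multilevel model is more involved, and all names and parameters here are illustrative:

```python
import numpy as np

def bootstrap_indirect_ci(x, med, y, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for the indirect effect a*b in the simple
    mediation model med = a*x + e1, y = b*med + c'*x + e2 (single level;
    a multilevel model, as in the paper, would require clustered resampling)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    est = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)            # resample cases with replacement
        xs, ms, ys = x[idx], med[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]           # a-path: slope of med on x
        X = np.column_stack([np.ones(n), ms, xs])
        b = np.linalg.lstsq(X, ys, rcond=None)[0][1]  # b-path: med coefficient
        est.append(a * b)
    lo, hi = np.quantile(est, [alpha / 2, 1 - alpha / 2])
    return lo, hi  # mediation is declared significant if the CI excludes 0

# Simulated data with a true indirect effect of 0.5 * 0.5 = 0.25
rng = np.random.default_rng(0)
x = rng.standard_normal(200)
med = 0.5 * x + rng.standard_normal(200)
y = 0.5 * med + 0.2 * x + rng.standard_normal(200)
lo, hi = bootstrap_indirect_ci(x, med, y)
```

    Power analysis by simulation then amounts to repeating this test over many simulated datasets at a candidate sample size and recording the fraction of runs in which the interval excludes zero.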

  19. The U-tube sampling methodology and real-time analysis of geofluids

    International Nuclear Information System (INIS)

    Freifeld, Barry; Perkins, Ernie; Underschultz, James; Boreham, Chris

    2009-01-01

    The U-tube geochemical sampling methodology, an extension of the porous cup technique proposed by Wood (1973), provides minimally contaminated aliquots of multiphase fluids from deep reservoirs and allows for accurate determination of dissolved gas composition. The initial deployment of the U-tube during the Frio Brine Pilot CO2 storage experiment, Liberty County, Texas, obtained representative samples of brine and supercritical CO2 from a depth of 1.5 km. A quadrupole mass spectrometer provided real-time analysis of dissolved gas composition. Since the initial demonstration, the U-tube has been deployed for (1) sampling of fluids down gradient of the proposed Yucca Mountain High-Level Waste Repository, Amargosa Valley, Nevada, (2) acquiring fluid samples beneath permafrost in Nunavut Territory, Canada, and (3) at a CO2 storage demonstration project within a depleted gas reservoir, Otway Basin, Victoria, Australia. The addition of in-line high-pressure pH and EC sensors allows for continuous monitoring of fluid during sample collection. Difficulties have arisen during U-tube sampling, such as blockage of sample lines from naturally occurring waxes or from freezing conditions; however, workarounds such as solvent flushing or heating have been used to address these problems. The U-tube methodology has proven to be robust, and with careful consideration of the constraints and limitations, can provide high quality geochemical samples.

  20. Stable isotope stratigraphy and larger benthic foraminiferal extinctions in the Melinau Limestone, Sarawak

    Science.gov (United States)

    Cotton, Laura J.; Pearson, Paul N.; Renema, Willem

    2014-01-01

    Important long-ranging groups of larger benthic foraminifera (LBF) are known to have become extinct during a period of global cooling and climate disruption at the Eocene-Oligocene transition (EOT), but the precise timing and mechanisms are uncertain. A recent study showed unexpectedly that the LBF extinction in Tanzania occurs very close to the Eocene/Oligocene boundary, as recognised by the extinction of the planktonic foraminiferal Family Hantkeninidae, rather than at the later period of maximum global ice growth and sea-level fall, as previously thought. Here we investigate the same phase of extinction in the Melinau Limestone of Sarawak, on the island of Borneo, Malaysia, one of the most complete carbonate successions spanning the Eocene to Lower Miocene. Assemblages of LBF from the Melinau Limestone were studied extensively by Geoffrey Adams during the 1960s-80s, confirming a major extinction during the EOT, but the section lacked independent means of correlation. By analysing rock samples originally studied by Adams and now in the Natural History Museum, London, we provide new bulk stable isotope (δ13C and δ18O) records. This enables us to identify, albeit tentatively, the level of maximum stable isotope excursion and show that the LBF extinction event in the Melinau Limestone occurs below this isotope excursion, supporting the results from Tanzania and indicating that the extinction of LBF close to the Eocene/Oligocene boundary may be a global phenomenon.

  1. Impact of Alternative Inputs and Grooming Methods on Large-R Jet Reconstruction in ATLAS

    CERN Document Server

    The ATLAS collaboration

    2017-01-01

    During Run 1 of the LHC, the optimal reconstruction algorithm for large-$R$ jets in ATLAS, characterized in terms of the ability to discriminate signal from background and robust reconstruction in the presence of pileup, was found to be anti-$k_{t}$ jets with a radius parameter of 1.0, formed from locally calibrated topological calorimeter cell clusters and groomed with the trimming algorithm to remove contributions from pileup and underlying event. Since that time, much theoretical, phenomenological, and experimental work has been performed to improve both the reconstruction of the jet inputs as well as the grooming techniques applied to reconstructed jets. In this work, an inclusive survey of both pileup mitigation algorithms applied to calorimeter cell clusters and grooming algorithms is done to study their pileup stability and ability to identify hadronically decaying W bosons within the ATLAS experiment. It is found that compared to the conventional reconstruction algorithm of large-$R$ trimmed jets form...

  2. Beyond Panglossian Optimism: Larger N2 Amplitudes Probably Signal a Bilingual Disadvantage in Conflict Monitoring

    Directory of Open Access Journals (Sweden)

    Kenneth R. Paap

    2015-01-01

    In this special issue on the brain mechanisms that lead to cognitive benefits of bilingualism, we discussed six reasons why it will be very difficult to discover those mechanisms. Many of these problems apply to the article by Fernandez, Acosta, Douglass, Doshi, and Tartar that also appears in the special issue. These concerns include the following: (1) an overly optimistic assessment of the replicability of bilingual advantages in behavioral studies, (2) reliance on risky small sample sizes, (3) failures to match the samples on demographic characteristics such as immigrant status, and (4) language group differences that occur in neural measures (i.e., N2 amplitude) but not in the behavioral data. Furthermore, the N2 amplitude measure in general suffers from valence ambiguity: larger N2 amplitudes reported for bilinguals are more likely to reflect poorer conflict resolution rather than enhanced inhibitory control.

  3. Gas-driven pump for ground-water samples

    Science.gov (United States)

    Signor, Donald C.

    1978-01-01

    Observation wells installed for artificial-recharge research and other wells used in different ground-water programs are frequently cased with small-diameter steel pipe. To obtain samples from these small-diameter wells in order to monitor water quality, and to calibrate solute-transport models, a small-diameter pump with unique operating characteristics is required that causes minimal alteration of samples during field sampling. A small-diameter gas-driven pump was designed and built to obtain water samples from wells of two-inch diameter or larger. The pump is a double-piston type with the following characteristics: (1) the water sample is isolated from the operating gas, (2) no source of electricity is necessary, (3) operation is continuous, (4) use of compressed gas is efficient, and (5) operation is reliable over extended periods of time. Principles of operation, actual operation techniques, gas-use analyses and operating experience are described. Complete working drawings and a component list are included. Recent modifications and pump construction for high-pressure applications also are described. (Woodard-USGS)

  4. Correction to the count-rate detection limit and sample/blank time-allocation methods

    International Nuclear Information System (INIS)

    Alvarez, Joseph L.

    2013-01-01

    A common form of count-rate detection limits contains a propagation-of-uncertainty error. This error originated in methods that minimize the uncertainty in the subtraction of the blank counts from the gross sample counts by allocating blank and sample counting times. Correct uncertainty propagation shows that the time-allocation equations have no solution. This publication presents the correct form of count-rate detection limits. -- Highlights: •The paper demonstrates a proper method of propagating the uncertainty of count-rate differences. •The standard count-rate detection limits were in error. •Count-time allocation methods for minimum uncertainty were in error. •The paper presents the correct form of the count-rate detection limit. •The paper discusses the confusion between count-rate uncertainty and count uncertainty.
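
    The propagation the record highlights is short enough to sketch: because the gross and blank counts (not the rates) are Poisson-distributed, the variance of the net rate is the sum of the counts divided by the squared counting times. A minimal illustration (variable names are mine, not the paper's):

```python
import math

def net_rate_uncertainty(gross_counts, t_sample, blank_counts, t_blank):
    """Net count rate and its standard uncertainty for Poisson counting.

    The net rate is r = Ng/ts - Nb/tb; since the counts are Poisson with
    variance equal to their mean, the variance of the rate difference is
    var(r) = Ng/ts**2 + Nb/tb**2.
    """
    rate = gross_counts / t_sample - blank_counts / t_blank
    sigma = math.sqrt(gross_counts / t_sample**2 + blank_counts / t_blank**2)
    return rate, sigma

# 400 gross counts in 100 s against 100 blank counts in 100 s:
# net rate 3.0 counts/s with sigma = sqrt(0.05) ≈ 0.22 counts/s.
r, s = net_rate_uncertainty(400, 100.0, 100, 100.0)
```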

  5. Air exposure and sample storage time influence on hydrogen release from tungsten

    Energy Technology Data Exchange (ETDEWEB)

    Moshkunov, K.A., E-mail: moshkunov@gmail.co [National Research Nuclear University 'MEPhI', Kashirskoe sh. 31, 115409 Moscow (Russian Federation); Schmid, K.; Mayer, M. [Max-Planck-Institut fuer Plasmaphysik, EURATOM Association, Boltzmannstrasse 2, D-85748 Garching (Germany); Kurnaev, V.A.; Gasparyan, Yu.M. [National Research Nuclear University 'MEPhI', Kashirskoe sh. 31, 115409 Moscow (Russian Federation)

    2010-09-30

    In investigations of hydrogen retention in first wall components the influence of the conditions of the implanted target storage prior to analysis and the storage time is often neglected. Therefore we have performed a dedicated set of experiments. The release of hydrogen from samples exposed to ambient air after irradiation was compared to samples kept in vacuum. For air exposed samples significant amounts of HDO and D2O are detected during TDS. Additional experiments have shown that heavy water is formed by recombination of releasing D and H atoms with O on the W surface. This water formation can alter hydrogen retention results significantly, in particular - for low retention cases. In addition to the influence of ambient air exposure also the influence of storage time in vacuum was investigated. After implantation at 300 K the samples were stored in vacuum for up to 1 week during which the retained amount decreased significantly. The subsequently measured TDS spectra showed that D was lost from both the high and low energy peaks during storage at ambient temperature of ~300 K. An attempt to simulate this release from both peaks during room temperature storage by TMAP 7 calculations showed that this effect cannot be explained by conventional diffusion/trapping models.

  6. Air exposure and sample storage time influence on hydrogen release from tungsten

    International Nuclear Information System (INIS)

    Moshkunov, K.A.; Schmid, K.; Mayer, M.; Kurnaev, V.A.; Gasparyan, Yu.M.

    2010-01-01

    In investigations of hydrogen retention in first wall components the influence of the conditions of the implanted target storage prior to analysis and the storage time is often neglected. Therefore we have performed a dedicated set of experiments. The release of hydrogen from samples exposed to ambient air after irradiation was compared to samples kept in vacuum. For air exposed samples significant amounts of HDO and D2O are detected during TDS. Additional experiments have shown that heavy water is formed by recombination of releasing D and H atoms with O on the W surface. This water formation can alter hydrogen retention results significantly, in particular - for low retention cases. In addition to the influence of ambient air exposure also the influence of storage time in vacuum was investigated. After implantation at 300 K the samples were stored in vacuum for up to 1 week during which the retained amount decreased significantly. The subsequently measured TDS spectra showed that D was lost from both the high and low energy peaks during storage at ambient temperature of ∼300 K. An attempt to simulate this release from both peaks during room temperature storage by TMAP 7 calculations showed that this effect cannot be explained by conventional diffusion/trapping models.

  7. Air exposure and sample storage time influence on hydrogen release from tungsten

    Science.gov (United States)

    Moshkunov, K. A.; Schmid, K.; Mayer, M.; Kurnaev, V. A.; Gasparyan, Yu. M.

    2010-09-01

    In investigations of hydrogen retention in first wall components the influence of the conditions of the implanted target storage prior to analysis and the storage time is often neglected. Therefore we have performed a dedicated set of experiments. The release of hydrogen from samples exposed to ambient air after irradiation was compared to samples kept in vacuum. For air exposed samples significant amounts of HDO and D2O are detected during TDS. Additional experiments have shown that heavy water is formed by recombination of releasing D and H atoms with O on the W surface. This water formation can alter hydrogen retention results significantly, in particular - for low retention cases. In addition to the influence of ambient air exposure also the influence of storage time in vacuum was investigated. After implantation at 300 K the samples were stored in vacuum for up to 1 week during which the retained amount decreased significantly. The subsequently measured TDS spectra showed that D was lost from both the high and low energy peaks during storage at ambient temperature of ∼300 K. An attempt to simulate this release from both peaks during room temperature storage by TMAP 7 calculations showed that this effect cannot be explained by conventional diffusion/trapping models.

  8. Chromosomal radiosensitivity of human leucocytes in relation to sampling time

    International Nuclear Information System (INIS)

    Buul, P.P.W. van; Natarajan, A.T.

    1980-01-01

    Frequencies of chromosomal aberrations after in vitro X-irradiation of peripheral blood lymphocytes were determined at different times after initiation of cultures. In each culture, the kinetics of cell multiplication was followed using BrdU labelling and differential staining of chromosomes. The results indicate that the mixing of first- and second-cell-cycle cells at later sampling times cannot explain the observed variation in the frequencies of chromosomal aberrations; rather, donor-to-donor variation is the predominant factor influencing aberration yields. The condition of a donor seems to be most important, because repeat experiments on the same donor also showed marked variability. (orig.)

  9. Measuring Sulfur Isotope Ratios from Solid Samples with the Sample Analysis at Mars Instrument and the Effects of Dead Time Corrections

    Science.gov (United States)

    Franz, H. B.; Mahaffy, P. R.; Kasprzak, W.; Lyness, E.; Raaen, E.

    2011-01-01

    The Sample Analysis at Mars (SAM) instrument suite comprises the largest science payload on the Mars Science Laboratory (MSL) "Curiosity" rover. SAM will perform chemical and isotopic analysis of volatile compounds from atmospheric and solid samples to address questions pertaining to habitability and geochemical processes on Mars. Sulfur is a key element of interest in this regard, as sulfur compounds have been detected on the Martian surface by both in situ and remote sensing techniques. Their chemical and isotopic composition can help constrain environmental conditions and mechanisms at the time of formation. A previous study examined the capability of the SAM quadrupole mass spectrometer (QMS) to determine sulfur isotope ratios of SO2 gas from a statistical perspective. Here we discuss the development of a method for determining sulfur isotope ratios with the QMS by sampling SO2 generated from heating of solid sulfate samples in SAM's pyrolysis oven. This analysis, which was performed with the SAM breadboard system, also required development of a novel treatment of the QMS dead time to accommodate the characteristics of an aging detector.

  10. Trophic interactions between larger crocodylians and giant tortoises on Aldabra Atoll, Western Indian Ocean, during the Late Pleistocene.

    Science.gov (United States)

    Scheyer, Torsten M; Delfino, Massimo; Klein, Nicole; Bunbury, Nancy; Fleischer-Dogley, Frauke; Hansen, Dennis M

    2018-01-01

    Today, the UNESCO World Heritage Site of Aldabra Atoll is home to about 100 000 giant tortoises, Aldabrachelys gigantea, whose fossil record goes back to the Late Pleistocene. New Late Pleistocene fossils (age ca. 90-125 000 years) from the atoll revealed some appendicular bones and numerous shell fragments of giant tortoises and cranial and postcranial elements of crocodylians. Several tortoise bones show circular holes, pits and scratch marks that are interpreted as bite marks of crocodylians. The presence of a Late Pleistocene crocodylian species, Aldabrachampsus dilophus, has been known for some time, but the recently found crocodylian remains presented herein are distinctly larger than those previously described. This indicates the presence of at least some larger crocodylians, either of the same or of a different species, on the atoll. These larger crocodylians, likely the apex predators in the Aldabra ecosystem at the time, were well capable of inflicting damage on even very large giant tortoises. We thus propose an extinct predator-prey interaction between crocodylians and giant tortoises during the Late Pleistocene, when both groups were living sympatrically on Aldabra, and we discuss scenarios for the crocodylians directly attacking the tortoises or scavenging on recently deceased animals.

  11. Low-sensitivity H ∞ filter design for linear delta operator systems with sampling time jitter

    Science.gov (United States)

    Guo, Xiang-Gui; Yang, Guang-Hong

    2012-04-01

    This article is concerned with the problem of designing H∞ filters with low sensitivity to sampling time jitter for a class of linear discrete-time systems via a delta operator approach. The delta-domain model is used to avoid the inherent numerical ill-conditioning resulting from the use of the standard shift-domain model at high sampling rates. Based on the projection lemma, in combination with the descriptor system approach often used to solve delay-related problems, a novel bounded real lemma with three slack variables for delta operator systems is presented. A sensitivity approach based on this novel lemma is proposed to mitigate the effects of sampling time jitter on system performance. The problem of designing a low-sensitivity filter can then be reduced to a convex optimisation problem. An important consideration in the design is the optimal trade-off between the standard H∞ criterion and the sensitivity of the transfer function with respect to sampling time jitter. Finally, a numerical example demonstrating the validity of the proposed design method is given.
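
    The numerical ill-conditioning mentioned above is easy to demonstrate: at fast sampling, the poles of a shift-domain (q) model cluster near 1, while the same poles expressed through the delta operator, δ = (q − 1)/T, stay near the underlying continuous-time pole. A small numerical sketch of that effect (illustrative only, not the filter design itself):

```python
import math

def shift_pole(a, T):
    """Pole of the shift-domain (q) model of dx/dt = a*x sampled at period T."""
    return math.exp(a * T)

def delta_pole(a, T):
    """The same pole expressed in the delta domain, delta = (q - 1)/T."""
    return (math.exp(a * T) - 1.0) / T

# Continuous pole at a = -2: at T = 1e-4 s the shift pole is ~0.9998
# (crowded against 1), while the delta pole stays near -2.
q_pole = shift_pole(-2.0, 1e-4)
d_pole = delta_pole(-2.0, 1e-4)
```

As T shrinks, distinguishing stable from unstable dynamics in the shift domain requires ever more numerical precision, which is exactly what the delta-domain formulation avoids.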

  12. Analysis of time series and size of equivalent sample

    International Nuclear Information System (INIS)

    Bernal, Nestor; Molina, Alicia; Pabon, Daniel; Martinez, Jorge

    2004-01-01

    In a meteorological context, a first approach to the modeling of time series is to use models of autoregressive type. This allows one to take into account the meteorological persistence, or temporal behavior, thereby identifying the memory of the analyzed process. This article seeks to present the concept of the size of an equivalent sample, which helps to identify subperiods with a similar structure within the data series. Moreover, in this article we examine the alternative of adjusting the variance of the series, keeping in mind its temporal structure, as well as an adjustment to the covariance of two time series. This article presents two examples, the first corresponding to seven simulated series with an autoregressive structure of first order, and the second corresponding to seven meteorological series of anomalies of the air temperature at the surface in two Colombian regions.
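
    For a first-order autoregressive process the idea reduces to a classic approximation: with lag-1 autocorrelation ρ, a series of n observations carries roughly the information of n(1 − ρ)/(1 + ρ) independent ones. A minimal sketch (function names and the estimator are generic, not taken from the article):

```python
def lag1_autocorr(x):
    """Sample lag-1 autocorrelation of a series."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

def equivalent_sample_size(n, rho):
    """Effective number of independent observations in an AR(1) series
    with lag-1 autocorrelation rho (large-n approximation)."""
    return n * (1 - rho) / (1 + rho)

# 100 persistent observations with rho = 0.5 behave like ~33 independent ones,
# so variance estimates should be widened accordingly.
n_eff = equivalent_sample_size(100, 0.5)
```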

  13. Improved sample size determination for attributes and variables sampling

    International Nuclear Information System (INIS)

    Stirpe, D.; Picard, R.R.

    1985-01-01

    Earlier INMM papers have addressed the attributes/variables problem and, under conservative/limiting approximations, have reported analytical solutions for the attributes and variables sample sizes. Through computer simulation of this problem, we have calculated attributes and variables sample sizes as a function of falsification, measurement uncertainties, and required detection probability without using approximations. Using realistic assumptions for uncertainty parameters of measurement, the simulation results support the conclusions: (1) previously used conservative approximations can be expensive because they lead to larger sample sizes than needed; and (2) the optimal verification strategy, as well as the falsification strategy, are highly dependent on the underlying uncertainty parameters of the measurement instruments. 1 ref., 3 figs

  14. Standardised Resting Time Prior to Blood Sampling and Diurnal Variation Associated with Risk of Patient Misclassification

    DEFF Research Database (Denmark)

    Bøgh Andersen, Ida; Brasen, Claus L.; Christensen, Henry

    2015-01-01

    BACKGROUND: According to current recommendations, blood samples should be taken in the morning after 15 minutes' resting time. Some components exhibit diurnal variation and, in response to pressures to expand opening hours and reduce waiting time, the aims of this study were to investigate the impact of resting time prior to blood sampling and diurnal variation on biochemical components, including albumin, thyrotropin (TSH), total calcium and sodium in plasma. METHODS: All patients referred to an outpatient clinic for blood sampling were included in the period Nov 2011 until June 2014 (opening...). RESULTS: ...(p = ...9×10-7) and sodium (p = 8.7×10-16). Only TSH and albumin were clinically significantly influenced by diurnal variation. Resting time had no clinically significant effect. CONCLUSIONS: We found no need for resting 15 minutes prior to blood sampling. However, diurnal variation was found to have a significant...

  15. Impact of collection container material and holding times on sample integrity for mercury and methylmercury in water

    Energy Technology Data Exchange (ETDEWEB)

    Riscassi, Ami L [ORNL; Miller, Carrie L [ORNL; Brooks, Scott C [ORNL

    2014-01-01

    Mercury (Hg) and methylmercury (MeHg) concentrations in streamwater can vary on short timescales (hourly or less) during storm flow and on a diel cycle; the frequency and timing of sampling required to accurately characterize these dynamics may be difficult to accomplish manually. Automated sampling can assist in sample collection; however, its use has been limited for Hg and MeHg analysis due to stability concerns of trace concentrations during extended storage times. We examined the viability of using automated samplers with disposable low-density polyethylene (LDPE) sample bags to collect industrially contaminated streamwater for unfiltered and filtered Hg and MeHg analysis. Specifically we investigated the effect of holding times ranging from hours to days on streamwater collected during baseflow and storm flow. Unfiltered and filtered Hg and MeHg concentrations decreased with increases in time prior to sample processing; holding times of 24 hours or less resulted in concentration changes (mean 11 ± 7% different) similar to variability in duplicates collected manually during analogous field conditions (mean 7 ± 10% different). Comparisons of samples collected with manual and automated techniques throughout a year for a wide range of stream conditions were also found to be similar to differences observed between duplicate grab samples. These results demonstrate that automated sampling into LDPE bags with holding times of 24 hours or less can be effectively used to collect streamwater for Hg and MeHg analysis, and encourage the testing of these materials and methods for implementation in other aqueous systems where high-frequency sampling is warranted.

  16. Time delay estimation in a reverberant environment by low rate sampling of impulsive acoustic sources

    KAUST Repository

    Omer, Muhammad

    2012-07-01

    This paper presents a new method of time delay estimation (TDE) using low sample rates of an impulsive acoustic source in a room environment. The proposed method finds the time delay from the room impulse response (RIR) which makes it robust against room reverberations. The RIR is considered a sparse phenomenon and a recently proposed sparse signal reconstruction technique called orthogonal clustering (OC) is utilized for its estimation from the low rate sampled received signal. The arrival time of the direct path signal at a pair of microphones is identified from the estimated RIR and their difference yields the desired time delay. Low sampling rates reduce the hardware and computational complexity and decrease the communication between the microphones and the centralized location. The performance of the proposed technique is demonstrated by numerical simulations and experimental results. © 2012 IEEE.
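
    As a point of reference for the method above, the conventional baseline estimates the delay directly from the peak of the cross-correlation between the two microphone signals at the full sampling rate; the paper's contribution is to recover the same arrival times from a sparse RIR estimated at much lower rates. A baseline sketch (plain cross-correlation, not the proposed orthogonal-clustering method):

```python
import numpy as np

def tde_crosscorr(x, y, fs):
    """Estimate the arrival-time difference of y relative to x (in seconds)
    from the peak of their full cross-correlation."""
    corr = np.correlate(y, x, mode="full")
    lag = int(np.argmax(corr)) - (len(x) - 1)  # shift of the peak from zero lag
    return lag / fs

# Two idealized microphone signals: the same impulse, offset by 12 samples.
fs = 8000
x = np.zeros(256); x[40] = 1.0
y = np.zeros(256); y[52] = 1.0
delay = tde_crosscorr(x, y, fs)  # 12 samples at 8 kHz -> 1.5 ms
```

In a reverberant room the raw cross-correlation peak can be pulled toward reflections, which is the failure mode the RIR-based approach is designed to avoid.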

  17. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    Science.gov (United States)

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields such as perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not obtain large enough effect sizes would use larger samples to obtain significant results.
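
    The trade-off the survey describes is easy to quantify with a normal-approximation power calculation for a two-sample comparison (a generic sketch of the standard calculation; the survey itself worked from the published tests' observed effect sizes):

```python
import math

def power_two_sample(d, n_per_group):
    """Approximate power of a two-sided two-sample test at alpha = 0.05
    for standardized effect size d (Cohen's d) with n subjects per group,
    using the normal approximation."""
    z_crit = 1.959964  # two-sided 5% normal critical value
    ncp = d * math.sqrt(n_per_group / 2)  # noncentrality of the test statistic
    phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return (1 - phi(z_crit - ncp)) + phi(-z_crit - ncp)

# A "medium" effect (d = 0.5) needs ~64 subjects per group to reach ~80% power;
# with small samples such effects routinely go undetected, while very large
# samples make even trivially small effects statistically significant.
```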

  18. Air Monitoring: New Advances in Sampling and Detection

    Directory of Open Access Journals (Sweden)

    Nicola Watson

    2011-01-01

    As the harmful effects of low-level exposure to hazardous organic air pollutants become more evident, there is constant pressure to improve the detection limits of indoor and ambient air monitoring methods, for example by collecting larger air volumes and by optimising the sensitivity of the analytical detector. At the other end of the scale, rapid industrialisation in the developing world and growing pressure to reclaim derelict industrial land for house building are driving the need for air monitoring methods that can reliably accommodate very-high-concentration samples in potentially aggressive matrices. This paper investigates the potential of a combination of two powerful gas chromatography-based analytical enhancements (sample preconcentration/thermal desorption and time-of-flight mass spectrometry) to improve quantitative and qualitative measurement of very low (ppt-level) concentrations of organic chemicals, even in the most complex air samples. It also describes new, practical monitoring options for addressing equally challenging high-concentration industrial samples.

  19. Origami-inspired metamaterial absorbers for improving the larger-incident angle absorption

    International Nuclear Information System (INIS)

    Shen, Yang; Pang, Yongqiang; Wang, Jiafu; Ma, Hua; Pei, Zhibin; Qu, Shaobo

    2015-01-01

    When a folded resistive patch array stands up on a metallic plane, it can exhibit more outstanding absorption performance. Our theoretical investigations and simulations demonstrated that folded resistive patch arrays can enhance the absorption bandwidth progressively with the increase of the incident angle for oblique transverse magnetic incidence, which is contrary to the behaviour of the conventional resistive frequency selective surface absorber. Based on this, we realized a 3D-structured metamaterial absorber with the folded resistive patches. The proposed absorber takes its inspiration from origami, and it has broadband and larger-incident-angle absorption. Both the simulations and the measurements indicate that the proposed absorber maintains absorption at incident angles up to 75° in the frequency band of 3.6-11.4 GHz. In addition, the absorber is extremely lightweight: the areal density of the fabricated sample is about 0.023 g cm⁻². Due to the broadband and larger-incident-angle absorption, it is expected that the absorbers may find potential applications such as stealth technologies and electromagnetic interference. (paper)

  20. STAR FORMATION LAWS: THE EFFECTS OF GAS CLOUD SAMPLING

    International Nuclear Information System (INIS)

    Calzetti, D.; Liu, G.; Koda, J.

    2012-01-01

    Recent observational results indicate that the functional shape of the spatially resolved star formation-molecular gas density relation depends on the spatial scale considered. These results may indicate a fundamental role of sampling effects on scales that are typically only a few times larger than those of the largest molecular clouds. To investigate the impact of this effect, we construct simple models for the distribution of molecular clouds in a typical star-forming spiral galaxy and, assuming a power-law relation between star formation rate (SFR) and cloud mass, explore a range of input parameters. We confirm that the slope and the scatter of the simulated SFR-molecular gas surface density relation depend on the size of the sub-galactic region considered, due to stochastic sampling of the molecular cloud mass function, and the effect is larger for steeper relations between SFR and molecular gas. There is a general trend for all slope values to tend to ∼unity for region sizes larger than 1-2 kpc, irrespective of the input SFR-cloud relation. The region size of 1-2 kpc corresponds to the area where the cloud mass function becomes fully sampled. We quantify the effects of selection biases in data tracing the SFR, either as thresholds (i.e., clouds smaller than a given mass value do not form stars) or as backgrounds (e.g., diffuse emission unrelated to current star formation is counted toward the SFR). Apparently discordant observational results are brought into agreement via this simple model, and the comparison of our simulations with data for a few galaxies supports a steep (>1) power-law index between SFR and molecular gas.

  1. Sampling times influence the estimate of parameters in the Weibull dissolution model

    Czech Academy of Sciences Publication Activity Database

    Čupera, J.; Lánský, Petr; Šklubalová, Z.

    2015-01-01

    Roč. 78, Oct 12 (2015), s. 171-176 ISSN 0928-0987 Institutional support: RVO:67985823 Keywords : dissolution * Fisher information * rate constant * optimal sampling times Subject RIV: BA - General Mathematics Impact factor: 3.773, year: 2015

  2. Listing of nuclear power plant larger than 100 MWe

    International Nuclear Information System (INIS)

    McHugh, B.

    1976-03-01

    This report contains a list of all nuclear power plants larger than 100 MWe, printed out from the Argus Data Bank at Chalmers University of Technology in Sweden. The plants are listed by NSSS supplier. (M.S.)

  3. Sample pooling for real-time PCR detection and virulence determination of the footrot pathogen Dichelobacter nodosus.

    Science.gov (United States)

    Frosth, Sara; König, Ulrika; Nyman, Ann-Kristin; Aspán, Anna

    2017-09-01

    Dichelobacter nodosus is the principal cause of ovine footrot, and strain virulence is an important factor in disease severity. Therefore, detection and virulence determination of D. nodosus is important for proper diagnosis of the disease. Today this is possible by real-time PCR analysis. Analysis of large numbers of samples is costly and laborious; therefore, pooling of individual samples is common in surveillance programs. However, pooling can reduce the sensitivity of the method. The aim of this study was to develop a pooling method for real-time PCR analysis that would allow sensitive detection and simultaneous virulence determination of D. nodosus. A total of 225 sheep from 17 flocks were sampled using ESwabs within the Swedish Footrot Control Program in 2014. Samples were first analysed individually and then in pools of five by real-time PCR assays targeting the 16S rRNA and aprV2/B2 genes of D. nodosus. Each pool consisted of four negative samples and one positive D. nodosus sample with varying amounts of the bacterium. In the individual analysis, 61 (27.1%) samples were positive in the 16S rRNA and aprV2/B2 PCR assays and 164 (72.9%) samples were negative. All samples positive in the aprV2/B2 PCR assay were of the aprB2 variant. The pooled analysis showed that all 41 pools were also positive for D. nodosus 16S rRNA and the aprB2 variant. The diagnostic sensitivity for pooled and individual samples was therefore similar. Our method includes concentration of the bacteria before DNA extraction, which may account for the maintenance of diagnostic sensitivity. Diagnostic sensitivity in the real-time PCR assays of the pooled samples was comparable to the sensitivity obtained for individually analysed samples. Even sub-clinical infections could be detected in the pooled PCR samples, which is important for control of the disease. This method may therefore be implemented in footrot control programs, where it can replace analysis of individual samples.
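
    The sensitivity cost of pooling has a simple back-of-the-envelope form: diluting one positive swab into a pool of five reduces the template five-fold, which with ideal amplification delays detection by log2(5) ≈ 2.3 cycles. A hedged sketch (generic qPCR arithmetic, not this study's protocol; the study offsets the dilution by concentrating the bacteria before DNA extraction):

```python
import math

def ct_shift(pool_size, efficiency=1.0):
    """Expected increase in qPCR threshold cycle (Ct) when one positive
    sample is diluted into `pool_size` samples, for a per-cycle
    amplification factor of (1 + efficiency); efficiency = 1.0 means
    perfect doubling each cycle."""
    return math.log(pool_size) / math.log(1.0 + efficiency)

# A 1-in-5 pool delays the threshold cycle by ~2.3 cycles at 100% efficiency
# and by ~2.5 cycles at a more realistic 90% efficiency.
```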

  4. Analysis of submicrogram samples by INAA

    Energy Technology Data Exchange (ETDEWEB)

    Lindstrom, D J [National Aeronautics and Space Administration, Houston, TX (USA). Lyndon B. Johnson Space Center

    1990-12-20

    Procedures have been developed to increase the sensitivity of instrumental neutron activation analysis (INAA) so that cosmic-dust samples weighing only 10⁻⁹-10⁻⁷ g are routinely analyzed for a sizable number of elements. The primary differences from standard techniques are: (1) irradiation of the samples is much more intense, (2) gamma ray assay of the samples is done using long counting times and large Ge detectors that are operated in an excellent low-background facility, (3) specially prepared glass standards are used, (4) samples are too small to be weighed routinely and concentrations must be obtained indirectly, (5) sample handling is much more difficult, and contamination of small samples with normally insignificant amounts of contaminants is difficult to prevent. In spite of the difficulties, INAA analyses have been done on 15 cosmic-dust particles and a large number of other stratospheric particles. Two-sigma detection limits for some elements are in the range of femtograms (10⁻¹⁵ g), e.g. Co=11, Sc=0.9, Sm=0.2. A particle weighing just 0.2 ng was analyzed, obtaining abundances with relative analytical uncertainties of less than 10% for four elements (Fe, Co, Ni and Sc), which were sufficient to allow identification of the particle as chondritic interplanetary dust. Larger samples allow abundances of twenty or more elements to be obtained. (orig.)
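
    Counting detection limits like the two-sigma figures quoted above are conventionally derived from the blank (background) counting statistics; one widely used formulation is Currie's critical level and detection limit (an assumption on my part; the paper does not state which formulation it used):

```python
import math

def currie_limits(blank_counts):
    """Currie-style critical level L_C and detection limit L_D, in counts,
    for a paired blank with well-known background (alpha = beta = 0.05).

    L_C is the decision threshold for claiming detection; L_D is the true
    signal that would be detected with 95% probability.
    """
    lc = 2.33 * math.sqrt(blank_counts)
    ld = 2.71 + 4.65 * math.sqrt(blank_counts)
    return lc, ld

# With a 100-count background, a net signal must exceed ~23 counts to be
# claimed, and a true signal of ~49 counts is reliably detectable.
lc, ld = currie_limits(100.0)
```

The low-background counting facility mentioned in the abstract attacks exactly the sqrt(B) term here: halving the background lowers both limits.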

  5. Annual spatiotemporal migration schedules in three larger insectivorous birds

    DEFF Research Database (Denmark)

    Jacobsen, Lars Bo; Jensen, Niels Odder; Willemoes, Mikkel

    2017-01-01

    Background: Knowledge of spatiotemporal migration patterns is important for our understanding of migration ecology and, ultimately, the conservation of migratory species. We studied the annual migration schedule of the European nightjar, a large nocturnal insectivore, and compared it with two other larger ...

  6. The use of mini-samples in palaeomagnetism

    Science.gov (United States)

    Böhnel, Harald; Michalk, Daniel; Nowaczyk, Norbert; Naranjo, Gildardo Gonzalez

    2009-10-01

    Rock cores of ~25 mm diameter are widely used in palaeomagnetism. Occasionally smaller diameters have been used as well, which offers distinct advantages in terms of throughput, weight of equipment and core collections. How their orientation precision compares to 25 mm cores, however, has not been evaluated in detail before. Here we compare the site mean directions and their statistical parameters for 12 lava flows sampled with 25 mm cores (standard samples, typically 8 cores per site) and with 12 mm drill cores (mini-samples, typically 14 cores per site). The site-mean directions for both sample sizes appear to be indistinguishable in most cases. For the mini-samples, site dispersion parameters k are on average slightly lower than for the standard samples, reflecting their larger orienting and measurement errors. Applying the Wilcoxon signed-rank test, the probability that k or α95 have the same distribution for both sizes is acceptable only at the 17.4 or 66.3 per cent level, respectively. The larger number of mini-cores per site appears to outweigh the lower k values, yielding slightly smaller confidence limits α95. Further, both k and α95 are less variable for mini-samples than for standard-size samples. This is also interpreted to result from the larger number of mini-samples per site, which better averages out the detrimental effect of undetected abnormal remanence directions. Sampling of volcanic rocks with mini-samples therefore does not present a disadvantage in terms of the overall obtainable uncertainty of site mean directions. Apart from this, mini-samples do present clear advantages during field work, as about twice the number of drill cores can be recovered compared to 25 mm cores, and the sampled rock unit is then more widely covered, which reduces the contribution of natural random errors produced, for example, by fractures, cooling joints, and palaeofield inhomogeneities. Mini-samples may be processed faster in the laboratory, which is of
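
    The site statistics compared above (k and α95) come from Fisher statistics on the unit vectors of the sample directions; a compact sketch of the standard formulas (the example site directions are invented for illustration):

```python
import math

def fisher_stats(directions):
    """Fisher precision parameter k and 95% confidence angle a95 (degrees)
    for a set of (declination, inclination) pairs given in degrees."""
    n = len(directions)
    sx = sy = sz = 0.0
    for dec, inc in directions:
        d, i = math.radians(dec), math.radians(inc)
        sx += math.cos(i) * math.cos(d)
        sy += math.cos(i) * math.sin(d)
        sz += math.sin(i)
    r = math.sqrt(sx * sx + sy * sy + sz * sz)  # resultant vector length
    k = (n - 1) / (n - r)                       # precision parameter
    cos_a95 = 1 - (n - r) / r * ((1 / 0.05) ** (1 / (n - 1)) - 1)
    return k, math.degrees(math.acos(cos_a95))

# A tightly clustered hypothetical site: 5 cores within ~2 degrees of
# (dec = 0, inc = 45).
site = [(0, 45), (2, 44), (358, 46), (1, 45), (359, 45)]
k, a95 = fisher_stats(site)
```

Since α95 shrinks roughly as 1/sqrt(kN), collecting 14 mini-cores instead of 8 standard cores can yield a tighter confidence cone even when k is slightly lower, which is the pattern the study reports.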

  7. Efficient Round-Trip Time Optimization for Replica-Exchange Enveloping Distribution Sampling (RE-EDS).

    Science.gov (United States)

    Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina

    2017-06-13

    Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, i.e., smoothness parameters (s) and energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE) or parallel tempering is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter choice problem could be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a certain replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate for the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single RE-EDS simulation 10 ns in length. Thus, RE-EDS presents an efficient method for the estimation of relative binding free energies.
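
Round-trip time optimization presupposes a way of measuring round trips, i.e., how long a replica takes to diffuse from one end of the smoothness-parameter ladder to the other and back. The sketch below shows only that bookkeeping, applied to a synthetic replica trajectory; it is not the GRTO/LRTO algorithms of the paper, and all numbers are hypothetical.

```python
import random

def round_trip_times(path, low, high):
    """Count completed round trips (low -> high -> low) in a replica's
    trajectory through the parameter ladder; each round trip is timed
    from the replica's last visit to `low` before ascending."""
    times, start, phase = [], None, None
    for t, s in enumerate(path):
        if s == low:
            if phase == "down":            # completed low -> high -> low
                times.append(t - start)
            start, phase = t, "up"
        elif s == high and phase == "up":
            phase = "down"
    return times

# Hypothetical trajectory of one replica over a ladder of 5 s-values (0..4),
# modeled as a lazy random walk over exchange attempts.
random.seed(1)
path = [0]
for _ in range(2000):
    step = random.choice([-1, 0, 1])
    path.append(min(4, max(0, path[-1] + step)))

rts = round_trip_times(path, 0, 4)
print(len(rts), sum(rts) / len(rts))
```

Averaging such per-replica round-trip times over all replicas gives the quantity that a replica-distribution optimizer would seek to minimize.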

  8. Multidimensional scaling analysis of financial time series based on modified cross-sample entropy methods

    Science.gov (United States)

    He, Jiayi; Shang, Pengjian; Xiong, Hui

    2018-06-01

    Stocks, as the concrete manifestation of financial time series with plenty of potential information, are often used in the study of financial time series. In this paper, we utilize stock data to recognize their patterns through the dissimilarity matrix based on modified cross-sample entropy; three-dimensional perceptual maps of the results are then provided through the multidimensional scaling method. Two modified multidimensional scaling methods are proposed in this paper: multidimensional scaling based on Kronecker-delta cross-sample entropy (MDS-KCSE) and multidimensional scaling based on permutation cross-sample entropy (MDS-PCSE). These two methods use Kronecker-delta-based cross-sample entropy and permutation-based cross-sample entropy to replace the distance or dissimilarity measurement in classical multidimensional scaling (MDS). Multidimensional scaling based on Chebyshev distance (MDSC) is employed to provide a reference for comparisons. Our analysis reveals a clear clustering both in synthetic data and in 18 indices from diverse stock markets. It implies that time series generated by the same model are more likely to share similar irregularity than others, and that differences between stock indices, caused by country or region and by different financial policies, are reflected in the irregularity of the data. In the synthetic data experiments, not only can time series generated by different models be distinguished, but series generated under different parameters of the same model can also be detected. In the financial data experiment, the stock indices are clearly divided into five groups. Through analysis, we find that they correspond to five regions: Europe, North America, South America, Asia-Pacific (with the exception of mainland China), and mainland China together with Russia. The results also demonstrate that MDS-KCSE and MDS-PCSE provide more effective divisions in experiments than MDSC.
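
The core mechanic of MDS-KCSE/MDS-PCSE, replacing the distance measure of classical MDS with a precomputed pairwise dissimilarity matrix, can be sketched as follows. The placeholder dissimilarity function below stands in for the modified cross-sample entropies, which are not reproduced here, and the six synthetic series are hypothetical.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
# Six hypothetical "index" series: three random walks and three white-noise
# series, standing in for indices generated by two different models.
series = [np.cumsum(rng.normal(size=500)) for _ in range(3)] + \
         [rng.normal(size=500) for _ in range(3)]

def dissimilarity(x, y):
    # Placeholder pairwise measure standing in for the modified
    # cross-sample entropies of the paper (not reproduced here).
    dx, dy = np.diff(x), np.diff(y)
    return abs(np.std(dx) - np.std(dy)) + \
           abs(np.mean(np.abs(dx)) - np.mean(np.abs(dy)))

n = len(series)
D = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        D[i, j] = dissimilarity(series[i], series[j])

# 3-D perceptual map: MDS embedding of the precomputed dissimilarities.
coords = MDS(n_components=3, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
print(coords.shape)
```

Any symmetric, zero-diagonal dissimilarity can be dropped into `dissimilarity="precomputed"` in this way, which is exactly the substitution the two proposed methods perform.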

  9. Modular time division multiplexer: Efficient simultaneous characterization of fast and slow transients in multiple samples

    Science.gov (United States)

    Kim, Stephan D.; Luo, Jiajun; Buchholz, D. Bruce; Chang, R. P. H.; Grayson, M.

    2016-09-01

    A modular time division multiplexer (MTDM) device is introduced to enable parallel measurement of multiple samples with both fast and slow decay transients spanning from millisecond to month-long time scales. This is achieved by dedicating a single high-speed measurement instrument for rapid data collection at the start of a transient, and by multiplexing a second low-speed measurement instrument for slow data collection of several samples in parallel for the later transients. The MTDM is a high-level design concept that can in principle measure an arbitrary number of samples, and the low cost implementation here allows up to 16 samples to be measured in parallel over several months, reducing the total ensemble measurement duration and equipment usage by as much as an order of magnitude without sacrificing fidelity. The MTDM was successfully demonstrated by simultaneously measuring the photoconductivity of three amorphous indium-gallium-zinc-oxide thin films with 20 ms data resolution for fast transients and an uninterrupted parallel run time of over 20 days. The MTDM has potential applications in many areas of research that manifest response times spanning many orders of magnitude, such as photovoltaics, rechargeable batteries, amorphous semiconductors such as silicon and amorphous indium-gallium-zinc-oxide.
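
A minimal sketch of the MTDM scheduling idea: one fast instrument is dedicated to the start of each transient, and one slow instrument is round-robin multiplexed over all samples afterwards. All timing constants below are hypothetical, not the paper's.

```python
import itertools

def mtdm_schedule(n_samples, fast_window=1.0, fast_dt=0.02,
                  slow_dt=60.0, total=3600.0):
    """Sketch of MTDM-style scheduling (all constants hypothetical):
    the fast instrument covers the first `fast_window` seconds of each
    sample's transient at `fast_dt` resolution; afterwards one shared
    slow instrument rotates over all samples, taking one reading every
    `slow_dt` / n_samples seconds."""
    schedule = {}
    for s in range(n_samples):
        start = s * fast_window            # transients triggered in sequence
        n_fast = int(round(fast_window / fast_dt))
        schedule[s] = [start + i * fast_dt for i in range(n_fast)]
    t = n_samples * fast_window            # slow, multiplexed phase begins
    rr = itertools.cycle(range(n_samples))
    while t < total:
        schedule[next(rr)].append(t)       # one shared slow instrument
        t += slow_dt / n_samples
    return schedule

sched = mtdm_schedule(3)
print([len(v) for v in sched.values()])
```

Each sample thus gets dense coverage of its fast decay and sparse, uninterrupted coverage of its slow tail, while both instruments stay fully utilized.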

  10. The Larger Linear N-Heteroacenes

    KAUST Repository

    Bunz, Uwe H. F.

    2015-01-01

    © 2015 American Chemical Society. Conspectus: The close structural and chemical relationship of N-heteroacenes to pentacene suggests their broad applicability in organic electronic devices, such as thin-film transistors. The superb materials science properties of azaacenes result from their improved resistance toward oxidation and their potential for electron transport, both of which have been demonstrated recently. The introduction of nitrogen atoms into the aromatic perimeter of acenes stabilizes their frontier molecular orbitals and increases their electron affinity. The HOMO-LUMO gaps in azaacenes in which the nitrogen atoms are symmetrically placed are similar to those of the acenes. The judiciously placed nitrogen atoms induce an "umpolung" of the electronic behavior of these pentacene-like molecules, i.e., instead of hole mobility in thin-film transistors, azaacenes are electron-transporting materials. The fundamental synthetic approaches toward larger azaacenes are described and discussed. Several synthetic methodologies have been exploited, and some have been newly developed to assemble substituted azaacenes. The oldest methods are condensation-based. Aromatic o-diamines are coupled with o-dihydroxyarenes in the melt without solvent. This method works well for unsubstituted azaacenes only. The attachment of substituents to the starting materials renders these "fire and sword" methods less useful. The starting materials decompose under these conditions. The direct condensation of substituted o-diamines with o-quinones proceeds well in some cases. Fluorinated benzene rings next to a pyrazine unit are introduced by nucleophilic aromatic substitution employing hexafluorobenzene. However, with these well-established synthetic methodologies, a number of azaacene topologies cannot be synthesized. The Pd-catalyzed coupling of aromatic halides and aromatic diamines has therefore emerged as a versatile tool for azaacene synthesis. Now substituted diaza- and

  12. A time-sorting pitfall trap and temperature datalogger for the sampling of surface-active arthropods

    Directory of Open Access Journals (Sweden)

    Marshall S. McMunn

    2017-04-01

    Full Text Available Nearly all arthropods display consistent patterns of activity according to time of day. These patterns of activity often limit the extent of animal co-occurrence in space and time. Quantifying when particular species are active and how activity varies with environmental conditions is difficult without the use of automated devices due to the need for continuous monitoring. Time-sorting pitfall traps passively collect active arthropods into containers with known beginning and end sample times. The trap described here, similar to previous designs, sorts arthropods by the time they fall into the trap using a rotating circular rack of vials. This trap represents a reduction in size, cost, and time of construction, while increasing the number of time windows sampled. The addition of temperature data collection extends functionality, while the use of store-bought components and inclusion of customizable software make the trap easy to reproduce and use.

  13. Sampling Methodologies for Epidemiologic Surveillance of Men Who Have Sex with Men and Transgender Women in Latin America: An Empiric Comparison of Convenience Sampling, Time Space Sampling, and Respondent Driven Sampling

    Science.gov (United States)

    Clark, J. L.; Konda, K. A.; Silva-Santisteban, A.; Peinado, J.; Lama, J. R.; Kusunoki, L.; Perez-Brumer, A.; Pun, M.; Cabello, R.; Sebastian, J. L.; Suarez-Ognio, L.; Sanchez, J.

    2014-01-01

    Alternatives to convenience sampling (CS) are needed for HIV/STI surveillance of most-at-risk populations in Latin America. We compared CS, time space sampling (TSS), and respondent driven sampling (RDS) for recruitment of men who have sex with men (MSM) and transgender women (TW) in Lima, Peru. During concurrent 60-day periods from June–August, 2011, we recruited MSM/TW for epidemiologic surveillance using CS, TSS, and RDS. A total of 748 participants were recruited through CS, 233 through TSS, and 127 through RDS. The TSS sample included the largest proportion of TW (30.7 %) and the lowest percentage of subjects who had previously participated in HIV/STI research (14.9 %). The prevalence of newly diagnosed HIV infection, according to participants’ self-reported previous HIV diagnosis, was highest among TSS recruits (17.9 %) compared with RDS (12.6 %) and CS (10.2 %). TSS identified diverse populations of MSM/TW with higher prevalences of HIV/STIs not accessed by other methods. PMID:24362754

  14. Sampling methodologies for epidemiologic surveillance of men who have sex with men and transgender women in Latin America: an empiric comparison of convenience sampling, time space sampling, and respondent driven sampling.

    Science.gov (United States)

    Clark, J L; Konda, K A; Silva-Santisteban, A; Peinado, J; Lama, J R; Kusunoki, L; Perez-Brumer, A; Pun, M; Cabello, R; Sebastian, J L; Suarez-Ognio, L; Sanchez, J

    2014-12-01

    Alternatives to convenience sampling (CS) are needed for HIV/STI surveillance of most-at-risk populations in Latin America. We compared CS, time space sampling (TSS), and respondent driven sampling (RDS) for recruitment of men who have sex with men (MSM) and transgender women (TW) in Lima, Peru. During concurrent 60-day periods from June-August, 2011, we recruited MSM/TW for epidemiologic surveillance using CS, TSS, and RDS. A total of 748 participants were recruited through CS, 233 through TSS, and 127 through RDS. The TSS sample included the largest proportion of TW (30.7 %) and the lowest percentage of subjects who had previously participated in HIV/STI research (14.9 %). The prevalence of newly diagnosed HIV infection, according to participants' self-reported previous HIV diagnosis, was highest among TSS recruits (17.9 %) compared with RDS (12.6 %) and CS (10.2 %). TSS identified diverse populations of MSM/TW with higher prevalences of HIV/STIs not accessed by other methods.

  15. Real-time PCR to supplement gold-standard culture-based detection of Legionella in environmental samples.

    Science.gov (United States)

    Collins, S; Jorgensen, F; Willis, C; Walker, J

    2015-10-01

    Culture remains the gold-standard for the enumeration of environmental Legionella. However, it has several drawbacks including long incubation and poor sensitivity, causing delays in response times to outbreaks of Legionnaires' disease. This study aimed to validate real-time PCR assays to quantify Legionella species (ssrA gene), Legionella pneumophila (mip gene) and Leg. pneumophila serogroup-1 (wzm gene) to support culture-based detection in a frontline public health laboratory. Each qPCR assay had 100% specificity, excellent sensitivity (5 GU/reaction) and reproducibility. Comparison of the assays to culture-based enumeration of Legionella from 200 environmental samples showed that they had a negative predictive value of 100%. Thirty-eight samples were positive for Legionella species by culture and qPCR. One hundred samples were negative by both methods, whereas 62 samples were negative by culture but positive by qPCR. The average log10 increase between culture and qPCR for Legionella spp. and Leg. pneumophila was 0·72 (P = 0·0002) and 0·51 (P = 0·006), respectively. The qPCR assays can be conducted on the same 1 l water sample as culture and can thus be used as a supplementary technique to screen out negative samples and allow more rapid indication of positive samples. The assay could prove informative in public health investigations to identify or rule out sources of Legionella as well as to specifically identify Leg. pneumophila serogroup 1 in a timely manner not possible with culture. © 2015 The Society for Applied Microbiology.

  16. Analysis of volatile organic compounds in compost samples: A potential tool to determine appropriate composting time.

    Science.gov (United States)

    Zhu, Fengxiang; Pan, Zaifa; Hong, Chunlai; Wang, Weiping; Chen, Xiaoyang; Xue, Zhiyong; Yao, Yanlai

    2016-12-01

    Changes in volatile organic compound contents in compost samples during pig manure composting were studied using a headspace, solid-phase micro-extraction method (HS-SPME) followed by gas chromatography with mass spectrometric detection (GC/MS). Parameters affecting the SPME procedure were optimized as follows: the coating was carbon molecular sieve/polydimethylsiloxane (CAR/PDMS) fiber, the temperature was 60°C and the time was 30 min. Under these conditions, 87 compounds were identified from 17 composting samples. Most of the volatile components could only be detected before day 22. However, benzenes, alkanes and alkenes increased and eventually stabilized after day 22. Phenol and acid substances, which are important factors for compost quality, were almost undetectable on day 39 in natural compost (NC) samples and on day 13 in maggot-treated compost (MC) samples. Our results indicate that the approach can be effectively used to determine the composting times by analysis of volatile substances in compost samples. An appropriate composting time not only ensures the quality of compost and reduces the loss of composting material but also reduces the generation of hazardous substances. The appropriate composting times for MC and NC were approximately 22 days and 40 days, respectively, during the summer in Zhejiang. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Freeze core sampling to validate time-lapse resistivity monitoring of the hyporheic zone.

    Science.gov (United States)

    Toran, Laura; Hughes, Brian; Nyquist, Jonathan; Ryan, Robert

    2013-01-01

    A freeze core sampler was used to characterize hyporheic zone storage during a stream tracer test. The pore water from the frozen core showed tracer lingered in the hyporheic zone after the tracer had returned to background concentration in collocated well samples. These results confirmed evidence of lingering subsurface tracer seen in time-lapse electrical resistivity tomographs. The pore water exhibited brine exclusion (ion concentrations in ice lower than source water) in a sediment matrix, despite the fast freezing time. Although freeze core sampling provided qualitative evidence of lingering tracer, it proved difficult to quantify tracer concentration because the amount of brine exclusion during freezing could not be accurately determined. Nonetheless, the additional evidence for lingering tracer supports using time-lapse resistivity to detect regions of low fluid mobility within the hyporheic zone that can act as chemically reactive zones of importance in stream health. © 2012, The Author(s). GroundWater © 2012, National Ground Water Association.

  18. Collision cascades and sputtering induced by larger cluster ions

    International Nuclear Information System (INIS)

    Sigmund, P.

    1988-01-01

    Recent experimental work on larger cluster impact on solid surfaces suggests large deviations from the standard case of additive sputter yields both in the nuclear and electronic stopping regime. The paper concentrates on elastic collision cascades. In addition to very pronounced spike effects, two phenomena are pointed out that are specific to cluster bombardment. Multiple hits of cluster atoms on one and the same target atom may result in recoil atoms that move faster than the maximum recoil speed for monomer bombardment at the same projectile speed. This effect is important when the atomic mass of a beam atom is less than that of a target atom, M1 < M2. In the opposite case, M1 >> M2, collisions between beam particles may accelerate some beam particles and slow down others. Some consequences are mentioned. Remarks on the nuclear stopping power of larger clusters and on electronic sputtering by cluster bombardment conclude the paper. 38 refs., 2 figs
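
The monomer bound referred to above follows from textbook binary-collision kinematics (standard relations, not taken from the paper): for an elastic collision between a projectile of mass M1, speed v1, energy E and a target atom of mass M2 at rest,

```latex
% Maximum energy and speed transferred to the target atom in a single
% head-on elastic collision:
T_{\max} = \frac{4 M_1 M_2}{(M_1 + M_2)^2}\, E,
\qquad
v_{2,\max} = \frac{2 M_1}{M_1 + M_2}\, v_1 .
```

For M1 < M2 the factor 2M1/(M1 + M2) is smaller than 1, so a single monomer impact cannot produce recoils faster than the projectile; sequential hits by several cluster atoms on the same target atom can exceed this bound, which is the first cluster-specific effect described above.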

  19. Dispersal, phenology and predicted abundance of the larger grain ...

    African Journals Online (AJOL)

    The phenology and dispersal of the larger grain borer (LGB) in Africa is described, and comparisons are made between prediction of LGB numbers from laboratory studies and predictions from multiple linear models derived from trapping data in the field. The models were developed in Mexico and Kenya, using ...

  20. Framing Health Messages for Adolescents: Should We Use Objective Time Periods, Temporal Benchmarks, or Both?

    Science.gov (United States)

    McKay, Michael T.; Cole, Jon C.; Sumnall, Harry R.; Goudie, Andrew J.

    2012-01-01

    Time perspective is a cognitive-motivational construct, which has been shown to be related to decision-making, motivation and adjustment. The majority of research into time perspective has been conducted in college students and/or general population samples. Focus groups were held as part of a larger investigation into the relationship between…

  1. 29 CFR 779.232 - Franchise or other arrangements which create a larger enterprise.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Franchise or other arrangements which create a larger... Apply; Enterprise Coverage Leased Departments, Franchise and Other Business Arrangements § 779.232 Franchise or other arrangements which create a larger enterprise. (a) In other instances, franchise...

  2. Burdigalian turbid water patch reef environment revealed by larger benthic foraminifera

    Science.gov (United States)

    Novak, V.; Renema, W.; Throughflow-project

    2012-04-01

    Ancient isolated patch reefs outcropping from siliciclastic sediments are a trademark for the Miocene carbonate deposits occurring in East Kalimantan, Indonesia. They developed in transitional shelf sediments deposited between deltaic and deep marine deposits (Allen and Chambers, 1998). The Batu Putih Limestone (Wilson, 2005) and similar outcrops in adjacent areas have been characterized as shallow-water carbonates influenced by high siliciclastic input, showing low-relief patch reefs in turbid waters. Larger benthic foraminifera (LBF) are excellent markers for biochronology and paleoenvironmental reconstruction. This study aims to reveal the age and paleoenvironment of a shallow-water carbonate patch reef developed in a mixed depositional system by using LBF and microfacies analysis. The studied section is located near Bontang, East Kalimantan, and is approximately 80 m long and 12 m high. It lies within Miocene sediments in the central part of the Kutai Basin. The patch reef and capping sediments were logged through eight transects along the section and divided into nine different lithological units from which samples were collected. Thin sections and isolated specimens of larger benthic foraminifera were analyzed and identified to species level (where possible), providing age and environmental information. Microfacies analysis of thin sections included carbonate classification (textural scheme of Dunham, 1962) and assemblage composition, i.e. the relative abundance of LBF, algae and corals. Three environmentally indicative groups of LBF were separated based on test morphology, habitat or living relatives (Hallock and Glenn, 1986). The analysed foraminifera assemblage suggests a Burdigalian age (Tf1). Using microfacies analysis, the nine successive lithological units were grouped into five facies types. Paleoenvironmental reconstruction from the LBF fossil assemblage indicates two cycles of possible deepening recorded in the section. Based on high muddy matrix ratio in analyzed thin-sections we

  3. SU-E-T-21: A Novel Sampling Algorithm to Reduce Intensity-Modulated Radiation Therapy (IMRT) Optimization Time

    International Nuclear Information System (INIS)

    Tiwari, P; Xie, Y; Chen, Y; Deasy, J

    2014-01-01

    Purpose: The IMRT optimization problem requires substantial computer time to find optimal dose distributions because of the large number of variables and constraints. Voxel sampling reduces the number of constraints and accelerates the optimization process, but usually deteriorates the quality of the dose distributions to the organs. We propose a novel sampling algorithm that accelerates the IMRT optimization process without significantly deteriorating the quality of the dose distribution. Methods: We included all boundary voxels, as well as a sampled fraction of interior voxels of organs, in the optimization. We selected the fraction of interior voxels using a clustering algorithm that creates clusters of voxels with similar influence-matrix signatures. A few voxels are selected from each cluster based on the pre-set sampling rate. Results: We ran sampling and no-sampling IMRT plans for de-identified head and neck treatment plans. Testing with different sampling rates, we found that including 10% of inner voxels produced good dose distributions. For this optimal sampling rate, the algorithm accelerated IMRT optimization by a factor of 2–3 with a negligible loss of accuracy that was, on average, 0.3% for common dosimetric planning criteria. Conclusion: We demonstrated that a sampling scheme can be developed that reduces optimization time by more than a factor of 2 without significantly degrading the dose quality.
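
The sampling step described above, clustering interior voxels by their influence-matrix signatures and drawing a few voxels per cluster while retaining all boundary voxels, can be sketched as follows. The matrix sizes, cluster count, and the use of k-means are illustrative assumptions, not details from the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Hypothetical influence matrix: dose contribution of 40 beamlets to each
# of 2000 organ voxels (random values standing in for a real plan).
influence = rng.random((2000, 40))
boundary = np.arange(200)            # boundary voxels are always kept
interior = np.arange(200, 2000)

# Cluster interior voxels by their influence-matrix signatures and draw a
# few representatives per cluster (~10% sampling of interior voxels).
n_clusters, per_cluster = 45, 4
labels = KMeans(n_clusters=n_clusters, n_init=10,
                random_state=0).fit_predict(influence[interior])

sampled = []
for c in range(n_clusters):
    members = interior[labels == c]
    take = min(per_cluster, members.size)
    sampled.extend(rng.choice(members, size=take, replace=False))

keep = np.union1d(boundary, sampled)
print(f"{keep.size} of {influence.shape[0]} voxels enter the optimization")
```

Only the `keep` voxels would then contribute dose constraints, shrinking the optimization problem while each cluster of similar voxels stays represented.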

  4. Sparse-sampling with time-encoded (TICO) stimulated Raman scattering for fast image acquisition

    Science.gov (United States)

    Hakert, Hubertus; Eibl, Matthias; Karpf, Sebastian; Huber, Robert

    2017-07-01

    Modern biomedical imaging modalities aim to provide researchers a multimodal contrast for a deeper insight into a specimen under investigation. A very promising technique is stimulated Raman scattering (SRS) microscopy, which can unveil the chemical composition of a sample with a very high specificity. Although the signal intensities are enhanced manifold to achieve a faster acquisition of images compared to standard Raman microscopy, there is a trade-off between specificity and acquisition speed. Commonly used SRS concepts either probe only very few Raman transitions, as the tuning of the applied laser sources is complicated, or record whole spectra with a spectrometer-based setup. While the first approach is fast, it reduces the specificity; the spectrometer approach records whole spectra, including energy differences where no Raman information is present, which limits the acquisition speed. Therefore, we present a new approach based on the TICO-Raman concept, which we call sparse-sampling. The TICO-sparse-sampling setup is fully electronically controllable and allows probing of only the characteristic peaks of a Raman spectrum instead of always acquiring a whole spectrum. By reducing the spectral points to the relevant peaks, the acquisition time can be greatly reduced compared to a uniformly, equidistantly sampled Raman spectrum while the specificity and the signal to noise ratio (SNR) are maintained. Furthermore, all laser sources are completely fiber based. The synchronized detection enables a full resolution of the Raman signal, whereas the analogue and digital balancing allows shot-noise-limited detection. First imaging results with polystyrene (PS) and polymethylmethacrylate (PMMA) beads confirm the advantages of TICO sparse-sampling. We achieved a pixel dwell time as low as 35 μs for an image differentiating both species. The mechanical properties of the applied voice coil stage for scanning the sample currently limit even faster acquisition.
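
The sparse-sampling idea, probing only characteristic peak positions instead of a uniformly sampled spectrum, can be illustrated on synthetic spectra. The peak positions and widths below are hypothetical stand-ins for PS and PMMA bands, not measured values.

```python
import numpy as np

# Hypothetical Raman spectra (arbitrary units) of two polymer species on a
# 3000-point wavenumber grid.
wn = np.linspace(500, 3500, 3000)

def spectrum(peaks):
    # Sum of Gaussian bands of fixed width at the given wavenumbers.
    return sum(np.exp(-0.5 * ((wn - p) / 8.0) ** 2) for p in peaks)

ps   = spectrum([1001, 1602, 3054])   # assumed "PS-like" bands
pmma = spectrum([812, 1450, 2953])    # assumed "PMMA-like" bands

# Sparse sampling: probe only the six characteristic peak positions instead
# of the full 3000-point grid -- a 500x reduction in spectral points.
targets = (1001, 1602, 3054, 812, 1450, 2953)
probe = [int(np.argmin(np.abs(wn - p))) for p in targets]
sig_ps, sig_pmma = ps[probe], pmma[probe]
print(np.round(sig_ps, 3), np.round(sig_pmma, 3))
```

The six-point fingerprints still separate the two species cleanly, which is the essence of maintaining specificity while cutting acquisition time.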

  5. [Research progress of larger flexion gap than extension gap in total knee arthroplasty].

    Science.gov (United States)

    Zhang, Weisong; Hao, Dingjun

    2017-05-01

    To summarize the progress of a larger flexion gap than extension gap in total knee arthroplasty (TKA). The domestic and foreign literature on a larger flexion gap than extension gap in TKA was reviewed, and its impact factors, biomechanical and kinematic features, and clinical results were summarized. During TKA, adjusting the relation of the flexion gap to the extension gap is one of the key factors of a successful operation. The biomechanical, kinematic, and clinical researches show that a properly larger flexion gap than extension gap can improve both the postoperative knee range of motion and the satisfaction of patients, but does not affect the stability of the knee joint. However, there are also contrary findings. So adjustment of the flexion gap and extension gap during TKA is still in dispute. A larger flexion gap than extension gap in TKA is a new joint space theory, and long-term clinical efficacy, operation skills, and related complications still need further study.

  6. Dynamics of acoustically levitated disk samples.

    Science.gov (United States)

    Xie, W J; Wei, B

    2004-10-01

    The acoustic levitation force on disk samples and the dynamics of large water drops in a planar standing wave are studied by solving the acoustic scattering problem through incorporating the boundary element method. The dependence of levitation force amplitude on the equivalent radius R of disks deviates seriously from the R^3 law predicted by King's theory, and a larger force can be obtained for thin disks. When the disk aspect ratio gamma is larger than a critical value gamma* (approximately 1.9) and the disk radius a is smaller than the critical value a*(gamma), the levitation force per unit volume of the sample will increase with the enlargement of the disk. The acoustic levitation force on thin-disk samples ( gammaacoustic field for stable levitation of a large water drop is to adjust the reflector-emitter interval H slightly above the resonant interval H_n. The simulation shows that the drop is flattened and the central parts of its top and bottom surface become concave with the increase of sound pressure level, which agrees with the experimental observation. The main frequencies of the shape oscillation under different sound pressures are slightly larger than the Rayleigh frequency because of the large shape deformation. The simulated translational frequencies of the vertical vibration under normal gravity condition agree with the theoretical analysis.

  7. Listing of nuclear power plant larger than 100 MWe

    International Nuclear Information System (INIS)

    McHugh, B.

    1975-06-01

    This report contains a list of all nuclear power plants larger than 100 MWe, printed out from the Argus Data Bank at Chalmers University of Technology in Sweden. The plants are listed alphabetically. The report contains also a plant ranking list, where the plants are listed by the load factor (12 months) (M.S.)

  8. Listing of nuclear power plant larger than 100 MWe

    International Nuclear Information System (INIS)

    McHugh, B.

    1975-12-01

    This report contains a list of all nuclear power plants larger than 100 MWe, printed out from the Argus Data Bank at Chalmers University of Technology in Sweden. The plants are listed by country. The report contains also a plant ranking list, where the plants are listed by the load factor (12 months). (M.S.)

  9. Dual Source Time-of-flight Mass Spectrometer and Sample Handling System

    Science.gov (United States)

    Brinckerhoff, W.; Mahaffy, P.; Cornish, T.; Cheng, A.; Gorevan, S.; Niemann, H.; Harpold, D.; Rafeek, S.; Yucht, D.

    We present details of an instrument under development for potential NASA missions to planets and small bodies. The instrument comprises a dual ionization source (laser and electron impact) time-of-flight mass spectrometer (TOF-MS) and a carousel sample handling system for in situ analysis of solid materials acquired by, e.g., a coring drill. This DSTOF instrument could be deployed on a fixed lander or a rover, and has an open design that would accommodate measurements by additional instruments. The sample handling system (SHS) is based on a multi-well carousel, originally designed for Champollion/DS4. Solid samples, in the form of drill cores or as loose chips or fines, are inserted through an access port, sealed in vacuum, and transported around the carousel to a pyrolysis cell and/or directly to the TOF-MS inlet. Samples at the TOF-MS inlet are xy-addressable for laser or optical microprobe. Cups may be ejected from their holders for analyzing multiple samples or caching them for return. Samples are analyzed with laser desorption and evolved-gas/electron-impact sources. The dual ion source permits studies of elemental, isotopic, and molecular composition of unprepared samples with a single mass spectrometer. Pulsed laser desorption permits the measurement of abundance and isotope ratios of refractory elements, as well as the detection of high-mass organic molecules in solid samples. Evolved gas analysis permits similar measurements of the more volatile species in solids and aerosols. The TOF-MS is based on previous miniature prototypes at JHU/APL that feature high sensitivity and a wide mass range. The laser mode, in which the sample cup is directly below the TOF-MS inlet, permits both ablation and desorption measurements, to cover elemental and molecular species, respectively. In the evolved gas mode, sample cups are raised into a small pyrolysis cell and heated, producing a neutral gas that is electron ionized and pulsed into the TOF-MS. (Any imaging

  10. Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size

    Directory of Open Access Journals (Sweden)

    Zhihua Wang

    2014-01-01

    Full Text Available Reasonable prediction is of significant practical value for the analysis of stochastic and unstable time series with small or limited sample sizes. Motivated by the rolling idea in grey theory and the practical relevance of very short-term forecasting or 1-step-ahead prediction, a novel autoregressive (AR) prediction approach with a rolling mechanism is proposed. In the modeling procedure, a newly developed AR equation, which can be used to model nonstationary time series, is constructed in each prediction step. Meanwhile, the data window for the next step-ahead forecast rolls forward by appending the most recent prediction result while deleting the first value of the previously used sample data set. This rolling mechanism is an efficient technique, offering improved forecasting accuracy, applicability to limited and unstable data situations, and low computational cost. The general performance, influence of sample size, nonlinear dynamic mechanism, and significance of the observed trends, as well as innovation variance, are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
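
    The rolling mechanism can be sketched in a few lines (a minimal illustration, not the authors' formulation: the AR coefficients here are fit by ordinary least squares, and the model order `p` and the per-step re-fitting are assumptions):

```python
import numpy as np

def fit_ar(window, p):
    """Least-squares fit of an AR(p) model with intercept on a 1-D window."""
    X = np.column_stack([window[i:len(window) - p + i] for i in range(p)])
    y = window[p:]
    coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(y)), X]), y,
                               rcond=None)
    return coef  # intercept followed by lag coefficients (oldest first)

def rolling_ar_forecast(series, p=2, horizon=3):
    """One-step-ahead AR(p) forecasts with a rolling mechanism: each new
    prediction is appended to the data window and the oldest value is
    deleted before the model is re-fit for the next step."""
    window = [float(x) for x in series]
    preds = []
    for _ in range(horizon):
        coef = fit_ar(np.asarray(window), p)
        x_next = float(coef[0] + coef[1:] @ np.asarray(window[-p:]))
        preds.append(x_next)
        window = window[1:] + [x_next]  # roll the window forward
    return preds
```

    On a noiseless linear trend this reproduces the trend exactly, e.g. `rolling_ar_forecast(range(1, 11))` yields forecasts close to 11, 12, 13.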

  11. Sampling inspection for the evaluation of time-dependent reliability of deteriorating systems under imperfect defect detection

    International Nuclear Information System (INIS)

    Kuniewski, Sebastian P.; Weide, Johannes A.M. van der; Noortwijk, Jan M. van

    2009-01-01

    The paper presents a sampling-inspection strategy for the evaluation of time-dependent reliability of deteriorating systems, where the deterioration is assumed to initiate at random times and at random locations. After initiation, defects weaken the system's resistance. The system becomes unacceptable when at least one defect reaches a critical depth. The defects are assumed to initiate at random times modeled as event times of a non-homogeneous Poisson process (NHPP) and to develop according to a non-decreasing time-dependent gamma process. The intensity rate of the NHPP is assumed to be a combination of a known time-dependent shape function and an unknown proportionality constant. When sampling inspection (i.e. inspection of a selected subregion of the system) results in a number of defect initiations, Bayes' theorem can be used to update prior beliefs about the proportionality constant of the NHPP intensity rate to the posterior distribution. On the basis of a time- and space-dependent Poisson process for the defect initiation, an adaptive Bayesian model for sampling inspection is developed to determine the predictive probability distribution of the time to failure. A potential application is, for instance, the inspection of a large vessel or pipeline suffering pitting/localized corrosion in the oil industry. The possibility of imperfect defect detection is also incorporated in the model.
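
    The Bayesian update of the unknown proportionality constant can be sketched with a conjugate gamma prior (a standard choice for a Poisson count likelihood; the interface below is illustrative and not taken from the paper):

```python
def posterior_intensity_constant(a_prior, b_prior, n_defects, exposure):
    """Conjugate gamma update for the proportionality constant c of an
    NHPP intensity rate lambda(t) = c * g(t), with g a known shape
    function. `exposure` is the integral of g over the inspected time
    window, scaled by the inspected fraction of the system. Observing
    `n_defects` initiations turns a Gamma(a, b) prior (shape/rate) into
    a Gamma(a + n_defects, b + exposure) posterior."""
    a_post = a_prior + n_defects
    b_post = b_prior + exposure
    return a_post, b_post, a_post / b_post  # posterior and its mean
```

    Each new inspection simply feeds the previous posterior back in as the next prior, which is what makes the scheme adaptive.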

  12. Quantum supremacy in constant-time measurement-based computation: A unified architecture for sampling and verification

    Science.gov (United States)

    Miller, Jacob; Sanders, Stephen; Miyake, Akimasa

    2017-12-01

    While quantum speed-up in solving certain decision problems by a fault-tolerant universal quantum computer has been promised, a timely research interest includes how far one can reduce the resource requirement to demonstrate a provable advantage in quantum devices without demanding quantum error correction, which is crucial for prolonging the coherence time of qubits. We propose a model device made of locally interacting multiple qubits, designed such that simultaneous single-qubit measurements on it can output probability distributions whose average-case sampling is classically intractable, under similar assumptions as the sampling of noninteracting bosons and instantaneous quantum circuits. Notably, in contrast to these previous unitary-based realizations, our measurement-based implementation has two distinctive features. (i) Our implementation involves no adaptation of measurement bases, leading output probability distributions to be generated in constant time, independent of the system size. Thus, it could be implemented in principle without quantum error correction. (ii) Verifying the classical intractability of our sampling is done by changing the Pauli measurement bases only at certain output qubits. Our usage of random commuting quantum circuits in place of computationally universal circuits allows a unique unification of sampling and verification, so they require the same physical resource requirements in contrast to the more demanding verification protocols seen elsewhere in the literature.

  13. Designing an enhanced groundwater sample collection system

    International Nuclear Information System (INIS)

    Schalla, R.

    1994-10-01

    As part of an ongoing technical support mission to achieve excellence and efficiency in environmental restoration activities at the Laboratory for Energy and Health-Related Research (LEHR), Pacific Northwest Laboratory (PNL) provided guidance on the design and construction of monitoring wells and identified the most suitable type of groundwater sampling pump and accessories for monitoring wells. The goal was to utilize a monitoring well design that would allow for hydrologic testing and reduce turbidity to minimize the impact of sampling. The sampling results of the newly designed monitoring wells were clearly superior to those of the previously installed monitoring wells. The new wells exhibited reduced turbidity, in addition to improved access for instrumentation and hydrologic testing. The variable-frequency submersible pump was selected as the best choice for obtaining groundwater samples. The literature references are listed at the end of this report. Despite some initial difficulties, the variable-frequency submersible pump and its accessories performed effectively, reducing sampling time and labor costs, and their ease of use was preferred over the previously used bladder pumps. The surface seal system, called the Dedicator, proved to be a useful accessory, preventing surface contamination while providing easy access for water-level measurements and for connecting the pump. Cost savings resulted from the use of the pre-production pumps (beta units) donated by the manufacturer for the demonstration. However, larger savings resulted from the shortened field time due to the ease of using the submersible pumps and the surface seal access system. Proper deployment of the monitoring wells also resulted in cost savings and ensured representative samples.

  14. Dependability of Data Derived from Time Sampling Methods with Multiple Observation Targets

    Science.gov (United States)

    Johnson, Austin H.; Chafouleas, Sandra M.; Briesch, Amy M.

    2017-01-01

    In this study, generalizability theory was used to examine the extent to which (a) time-sampling methodology, (b) number of simultaneous behavior targets, and (c) individual raters influenced variance in ratings of academic engagement for an elementary-aged student. Ten graduate-student raters, with an average of 7.20 hr of previous training in…

  15. A simple method to adapt time sampling of the analog signal

    International Nuclear Information System (INIS)

    Kalinin, Yu.G.; Martyanov, I.S.; Sadykov, Kh.; Zastrozhnova, N.N.

    2004-01-01

    In this paper we briefly describe a time sampling method that adapts to the speed of the signal's change. In principle, the method is based on a simple idea: the combination of discrete integration with differentiation of the analog signal. It can be used in nuclear electronics research into detector characteristics and pulse-signal shape, as well as the pulse and transient characteristics of inertial signal-processing systems, etc.
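
    As a rough illustration of derivative-adapted sampling (a hedged sketch of the general idea, not the circuit described in the paper): the next sampling step is chosen inversely proportional to the finite-difference derivative of the signal, so fast-changing segments are sampled densely and slow segments sparsely.

```python
def adaptive_sample(signal, t0, t1, dt_min, dt_max, k=1.0):
    """Sample signal(t) on [t0, t1] with a step adapted to its rate of
    change: a large finite-difference derivative shortens the next step
    (down to dt_min), a small one lengthens it (up to dt_max)."""
    times, values = [t0], [signal(t0)]
    t, dt = t0, dt_min
    while t < t1:
        t_next = min(t + dt, t1)
        v = signal(t_next)
        deriv = abs(v - values[-1]) / (t_next - t)  # discrete differentiation
        times.append(t_next)
        values.append(v)
        dt = max(dt_min, min(dt_max, k / (deriv + 1e-12)))
        t = t_next
    return times, values
```

    A rapidly oscillating signal then collects several times more samples than a slowly varying one over the same interval.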

  16. Larger fig wasps are more careful about which figs to enter--with good reason.

    Science.gov (United States)

    Liu, Cong; Yang, Da-Rong; Compton, Stephen G; Peng, Yan-Qiong

    2013-01-01

    Floral longevity reflects a balance between gains in pollinator visitation and the costs of flower maintenance. Because rewards to pollinators change over time, older flowers may be less attractive, reducing the value of extended longevity. Un-pollinated figs, the inflorescences of Ficus species, can remain receptive for long periods, but figs that are older when entered by their host-specific fig wasp pollinators produce fewer seeds and fig wasp offspring. Our field experiments with Ficus hispida, a dioecious fig tree, examined how the length of time that receptive figs have remained un-pollinated influences the behaviour and reproductive success of its short-lived fig wasp pollinator, Ceratosolen solmsi marchali. The results were consistent in three different seasons, and on male and female trees, although receptivity was greatly extended during colder months. Pollinators took longer to find the ostioles of older figs, and longer to penetrate them. They also became increasingly unwilling to enter figs as they aged, and increasing numbers of the wasps became trapped in the ostiolar bracts. Larger individuals were particularly unwilling to enter older figs, resulting in older figs being pollinated by smaller wasps. On female trees, where figs produce only seeds, seed production declined rapidly with fig age. On male trees, the numbers and size of fig wasp offspring declined, and a higher proportion were male. Older male figs are harder to enter, especially for larger individuals, and offer poorer quality oviposition opportunities. This study opens an interesting new perspective on the coevolution of figs and their pollinators, especially factors influencing pollinator body size, and emphasises the subtleties of interactions between mutualists.

  17. More 'altruistic' punishment in larger societies.

    Science.gov (United States)

    Marlowe, Frank W; Berbesque, J Colette

    2008-03-07

    If individuals will cooperate with cooperators, and punish non-cooperators even at a cost to themselves, then this strong reciprocity could minimize the cheating that undermines cooperation. Based upon numerous economic experiments, some have proposed that human cooperation is explained by strong reciprocity and norm enforcement. Second-party punishment is when you punish someone who defected on you; third-party punishment is when you punish someone who defected on someone else. Third-party punishment is an effective way to enforce the norms of strong reciprocity and promote cooperation. Here we present new results that expand on a previous report from a large cross-cultural project. This project has already shown that there is considerable cross-cultural variation in punishment and cooperation. Here we test the hypothesis that population size (and complexity) predicts the level of third-party punishment. Our results show that people in larger, more complex societies engage in significantly more third-party punishment than people in small-scale societies.

  18. Rigid Body Sampling and Individual Time Stepping for Rigid-Fluid Coupling of Fluid Simulation

    Directory of Open Access Journals (Sweden)

    Xiaokun Wang

    2017-01-01

    Full Text Available In this paper, we propose an efficient and simple rigid-fluid coupling scheme with scientific programming algorithms for particle-based fluid simulation and three-dimensional visualization. Our approach samples the surface of rigid bodies with boundary particles that interact with fluids. It contains two procedures, that is, surface sampling and sampling relaxation, which ensures a uniform distribution of particles with fewer iterations. Furthermore, we present a rigid-fluid coupling scheme integrating individual time stepping into rigid-fluid coupling, which gains an obvious speedup compared to previous methods. The experimental results demonstrate the effectiveness of our approach.
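
    The two-procedure idea (surface seeding, then relaxation) can be caricatured on a unit sphere; the repulsion scheme, step schedule, and parameters below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def sample_sphere_surface(n, iters=50, rng=None):
    """Boundary-particle sampling of a unit sphere: random surface
    seeding followed by a repulsion-based relaxation pass that pushes
    neighbouring particles apart and re-projects them onto the
    surface, giving a more uniform distribution in few iterations."""
    gen = np.random.default_rng(rng)
    pts = gen.normal(size=(n, 3))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)
    for it in range(iters):
        diff = pts[:, None, :] - pts[None, :, :]          # pairwise offsets
        dist = np.linalg.norm(diff, axis=-1) + np.eye(n)  # avoid self-division
        force = (diff / dist[..., None] ** 3).sum(axis=1) # ~1/r^2 repulsion
        step = 0.1 / (1 + it)                             # decaying step size
        norm = np.linalg.norm(force, axis=1, keepdims=True) + 1e-12
        pts = pts + step * force / norm
        pts /= np.linalg.norm(pts, axis=1, keepdims=True) # re-project to surface
    return pts
```

    For an arbitrary rigid body the re-projection step would use the body's surface (e.g. a signed distance function) instead of sphere normalization.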

  19. Effects of brief time delays on matching-to-sample abilities in capuchin monkeys (Sapajus spp.).

    Science.gov (United States)

    Truppa, Valentina; De Simone, Diego Antonio; Piano Mortari, Eva; De Lillo, Carlo

    2014-09-01

    Traditionally, studies of delayed matching-to-sample (DMTS) tasks in nonhuman species have focused on the assessment of the limits of the retrieval of information stored in short- and long-term memory systems. However, it is still unclear if visual recognition in these tasks is affected by very brief delay intervals, which are typically used to study rapidly decaying types of visual memory. This study aimed at evaluating if tufted capuchin monkeys' ability to recognise visual stimuli in a DMTS task is affected by (i) the disappearance of the sample stimulus and (ii) the introduction of delay intervals (0.5, 1.0, 2.0 and 3.0s) between the disappearance of the sample and the presentation of the comparison stimuli. The results demonstrated that the simple disappearance of the sample and the introduction of a delay of 0.5s did not affect capuchins' performance either in terms of accuracy or response time. A delay interval of 1.0s produced a significant increase in response time but still did not affect recognition accuracy. By contrast, delays of 2.0 and 3.0s caused a significant increase in response time and a reduction in recognition accuracy. These findings indicate the existence in capuchin monkeys of processes enabling a very accurate retention of stimulus features within time frames comparable to those reported for humans' sensory memory (0.5-1.0s). The extent to which such processes can be considered analogous to the sensory memory processes observed in human visual cognition is discussed. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. One-sample determination of glomerular filtration rate (GFR) in children. An evaluation based on 75 consecutive patients

    DEFF Research Database (Denmark)

    Henriksen, Ulrik Lütken; Kanstrup, Inge-Lis; Henriksen, Jens Henrik Sahl

    2013-01-01

    the plasma radioactivity curve. The one-sample clearance was determined from a single plasma sample collected at 60, 90 or 120 min after injection according to the one-pool method. Results. The overall accuracy of one-sample clearance was excellent, with a mean numeric difference to the reference value of 0.7-1.7 mL/min. In 64 children, the one-sample clearance was within ± 4 mL/min of the multiple-sample value. However, in 11 children the numeric difference exceeded 4 mL/min (4.4-19.5). Analysis of age, body size, distribution volume, indicator retention time, clearance level, curve fitting, and sampling… fraction (15%) larger discrepancies are found. If an accurate clearance value is essential, a multiple-sample determination should be performed…
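
    The one-pool method referenced above can be illustrated by its mono-exponential model (a simplified sketch; clinical one-sample formulas additionally estimate the distribution volume from body size and apply empirical corrections not shown here):

```python
import math

def one_pool_clearance(dose, conc, t_min, volume):
    """One-pool clearance from a single plasma sample: the model
    C(t) = (D/V) * exp(-(Cl/V) * t) is solved for Cl, given the
    injected dose D, one concentration measurement C at time t, and
    an assumed distribution volume V (the main error source in
    one-sample methods)."""
    return (volume / t_min) * math.log(dose / (volume * conc))
```

    A misjudged distribution volume propagates directly into the clearance estimate, which is consistent with the larger discrepancies reported above in a minority of children.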

  1. Experimental performance evaluation of two stack sampling systems in a plutonium facility

    International Nuclear Information System (INIS)

    Glissmeyer, J.A.

    1992-04-01

    The evaluation of two routine stack sampling systems at the Z-Plant plutonium facility operated by Rockwell International for USERDA is part of a larger study, sponsored by Rockwell and conducted by Battelle, Pacific Northwest Laboratories, of gaseous effluent sampling systems. The gaseous effluent sampling systems evaluated are located at the main plant ventilation stack (291-Z-1) and at a vessel vent stack (296-Z-3). A preliminary report, a paper study issued in April 1976, identified many deficiencies in the existing sampling systems and made recommendations for corrective action. The objectives of this experimental evaluation of those sampling systems were as follows: characterize the radioactive aerosols in the stack effluents; develop a tracer aerosol technique for validating particulate effluent sampling system performance; evaluate the performance of the existing routine sampling systems and their compliance with the sponsor's criteria; and recommend corrective action where required. The tracer aerosol approach to sampler evaluation was chosen because the low concentrations of radioactive particulates in the effluents would otherwise require much longer sampling times and thus more time to complete this evaluation. The following report describes the sampling systems that are the subject of this study and then details the experiments performed. The results are then presented and discussed. Much of the raw and finished data are included in the appendices.

  2. Development of a real-time multiplex PCR assay for the detection of multiple Salmonella serotypes in chicken samples

    Directory of Open Access Journals (Sweden)

    Whyte Paul

    2008-09-01

    Full Text Available Abstract Background A real-time multiplex PCR assay was developed for the detection of multiple Salmonella serotypes in chicken samples. Poultry-associated serotypes detected in the assay include Enteritidis, Gallinarum, Typhimurium, Kentucky and Dublin. The traditional cultural method according to EN ISO 6579:2002 for the detection of Salmonella in food was performed in parallel. The real-time PCR based method comprised a pre-enrichment step in Buffered Peptone Water (BPW overnight, followed by a shortened selective enrichment in Rappaport-Vassiliadis Soya Broth (RVS for 6 hours and subsequent DNA extraction. Results The real-time multiplex PCR assay and traditional cultural method showed 100% inclusivity and 100% exclusivity on all strains tested. The real-time multiplex PCR assay was as sensitive as the traditional cultural method in detecting Salmonella in artificially contaminated chicken samples and correctly identified the serotype. Artificially contaminated chicken samples resulted in a detection limit of between 1 and 10 CFU per 25 g sample for both methods. A total of sixty-three naturally contaminated chicken samples were investigated by both methods and relative accuracy, relative sensitivity and relative specificity of the real-time PCR method were determined to be 89, 94 and 87%, respectively. Thirty blind-tested cultures were correctly identified by the real-time multiplex PCR method. Conclusion Real-time PCR methodology can contribute to meet the need for rapid identification and detection methods in food testing laboratories.
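
    The relative accuracy, sensitivity, and specificity quoted above are computed from a two-by-two agreement table between the alternative assay and the reference culture method (ISO 16140-style definitions); the counts below are hypothetical, chosen only to illustrate the arithmetic:

```python
def relative_performance(pa, na, fp, fn):
    """Relative accuracy, sensitivity and specificity of an alternative
    method against a reference method, from agreement counts:
      pa = positive agreement (both methods positive)
      na = negative agreement (both methods negative)
      fp = alternative positive / reference negative
      fn = alternative negative / reference positive
    """
    n = pa + na + fp + fn
    accuracy = 100.0 * (pa + na) / n
    sensitivity = 100.0 * pa / (pa + fn)
    specificity = 100.0 * na / (na + fp)
    return accuracy, sensitivity, specificity
```

    For example, the made-up counts `relative_performance(30, 26, 5, 2)` give about 88.9%, 93.8% and 83.9%.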

  3. Evaluation of statistical methods for quantifying fractal scaling in water-quality time series with irregular sampling

    Science.gov (United States)

    Zhang, Qian; Harman, Ciaran J.; Kirchner, James W.

    2018-02-01

    River water-quality time series often exhibit fractal scaling, which here refers to autocorrelation that decays as a power law over some range of scales. Fractal scaling presents challenges to the identification of deterministic trends because (1) fractal scaling has the potential to lead to false inference about the statistical significance of trends and (2) the abundance of irregularly spaced data in water-quality monitoring networks complicates efforts to quantify fractal scaling. Traditional methods for estimating fractal scaling - in the form of spectral slope (β) or other equivalent scaling parameters (e.g., Hurst exponent) - are generally inapplicable to irregularly sampled data. Here we consider two types of estimation approaches for irregularly sampled data and evaluate their performance using synthetic time series. These time series were generated such that (1) they exhibit a wide range of prescribed fractal scaling behaviors, ranging from white noise (β = 0) to Brown noise (β = 2) and (2) their sampling gap intervals mimic the sampling irregularity (as quantified by both the skewness and mean of gap-interval lengths) in real water-quality data. The results suggest that none of the existing methods fully account for the effects of sampling irregularity on β estimation. First, the results illustrate the danger of using interpolation for gap filling when examining autocorrelation, as the interpolation methods consistently underestimate or overestimate β under a wide range of prescribed β values and gap distributions. Second, the widely used Lomb-Scargle spectral method also consistently underestimates β. A previously published modified form, using only the lowest 5 % of the frequencies for spectral slope estimation, has very poor precision, although the overall bias is small. Third, a recent wavelet-based method, coupled with an aliasing filter, generally has the smallest bias and root-mean-squared error among all methods for a wide range of
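
    A minimal numpy sketch of Lomb-Scargle-based slope estimation for irregularly sampled series (the frequency grid and log-log fitting range are illustrative choices; as the study itself reports, this estimator tends to underestimate β for strongly autocorrelated series):

```python
import numpy as np

def lomb_scargle(t, y, omegas):
    """Classical Lomb-Scargle periodogram for irregularly sampled data."""
    y = y - y.mean()
    power = np.empty(len(omegas))
    for i, w in enumerate(omegas):
        tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                         np.sum(np.cos(2 * w * t))) / (2 * w)
        c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
        power[i] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
    return power

def spectral_slope(t, y, n_freq=200):
    """Estimate beta in P(f) ~ f**(-beta) by a straight-line fit to
    log10(power) versus log10(frequency)."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    span = t.max() - t.min()
    freqs = np.logspace(np.log10(1.0 / span),
                        np.log10(0.25 * len(t) / span), n_freq)
    power = lomb_scargle(t, y, 2 * np.pi * freqs)
    slope, _ = np.polyfit(np.log10(freqs), np.log10(power), 1)
    return -slope
```

    White noise (β = 0) should give an estimate near zero; for colored noise the wavelet-based approach mentioned above is reported to be less biased.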

  4. Uranium-233 analysis of biological samples

    International Nuclear Information System (INIS)

    Gies, R.A.; Ballou, J.E.; Case, A.C.

    1979-01-01

    Two liquid scintillation techniques were compared for 233 U analysis: a two-phase extraction system (D2EHPA), developed by Keough and Powers (1970) for Pu analysis, and a single-phase emulsion system (TT21) that holds the total sample in suspension with the scintillator. The first system (D2EHPA) was superior in reducing background (two- to threefold) and in accommodating a larger sample volume (fivefold). Samples containing > 50 mg/ml of salts were not extracted quantitatively by D2EHPA.

  5. Laser-induced breakdown spectroscopy for the real-time analysis of mixed waste samples containing Sr

    International Nuclear Information System (INIS)

    Barefield, J.E. II; Koskelo, A.C.; Multari, R.A.; Cremers, D.A.; Gamble, T.K.; Han, C.Y.

    1995-01-01

    In this report, the use of laser-induced breakdown spectroscopy to analyze mixed waste samples containing Sr is discussed. The mixed waste samples investigated include vitrified waste glass and contaminated soil. Compared to traditional analysis techniques, the laser-based method is fast (i.e., analysis times on the order of minutes) and essentially waste free, since little or no sample preparation is required. Detection limits on the order of ppm Sr were determined. Detection limits obtained using a fiber optic cable to deliver laser pulses to soil samples containing Cr, Zr, Pb, Be, Cu, and Ni will also be discussed.

  6. A contribution to radiotherapy of the larger-celled bronchial carcinoma

    International Nuclear Information System (INIS)

    Zoubie, I.

    1982-01-01

    This work is a retrospective analysis of the disease courses of 859 patients with lung tumors, defining survival curves as functions of histology, radiation dose, and sex. Among the 721 larger-celled bronchial carcinomas, the ratio of men to women was 12:1. The age peak lay between 60 and 70 years. The one- and five-year survival rates of all included larger-celled bronchial carcinomas (n=701) were, independent of the form of therapy, 35.7% and 4.78%, respectively. The one-/five-year survival rates were 31.08%/0.58% for squamous epithelial carcinomas, 25.34%/3.41% for undifferentiated carcinomas, and 35.4%/5.14% for lung tumors without histology. Lobectomized patients with squamous epithelial carcinoma had a clearly higher chance of survival than pneumonectomized patients. No clearly sex-dependent predisposition for a particular type of carcinoma was present. (TRV)

  7. Development of a real-time PCR to detect Demodex canis DNA in different tissue samples.

    Science.gov (United States)

    Ravera, Ivan; Altet, Laura; Francino, Olga; Bardagí, Mar; Sánchez, Armand; Ferrer, Lluís

    2011-02-01

    The present study reports the development of a real-time polymerase chain reaction (PCR) to detect Demodex canis DNA on different tissue samples. The technique amplifies a 166 bp of D. canis chitin synthase gene (AB 080667) and it has been successfully tested on hairs extracted with their roots and on formalin-fixed paraffin embedded skin biopsies. The real-time PCR amplified on the hairs of all 14 dogs with a firm diagnosis of demodicosis and consistently failed to amplify on negative controls. Eleven of 12 skin biopsies with a morphologic diagnosis of canine demodicosis were also positive. Sampling hairs on two skin points (lateral face and interdigital skin), D. canis DNA was detected on nine of 51 healthy dogs (17.6%) a much higher percentage than previously reported with microscopic studies. Furthermore, it is foreseen that if the number of samples were increased, the percentage of positive dogs would probably also grow. Moreover, in four of the six dogs with demodicosis, the samples taken from non-lesioned skin were positive. This finding, if confirmed in further studies, suggests that demodicosis is a generalized phenomenon in canine skin, due to proliferation of local mite populations, even though macroscopic lesions only appear in certain areas. The real-time PCR technique to detect D. canis DNA described in this work is a useful tool to advance our understanding of canine demodicosis.

  8. Magnetic nanoparticles formed in glasses co-doped with iron and larger radius elements

    Energy Technology Data Exchange (ETDEWEB)

    Edelman, I.; Ivanova, O.; Ivantsov, R.; Velikanov, D.; Zabluda, V. [L.V. Kirensky Institute of Physics SB RAS, 660036 Krasnoyarsk (Russian Federation); Zubavichus, Y.; Veligzhanin, A. [NRC 'Kurchatov Institute,' 123182 Moscow (Russian Federation); Zaikovskiy, V. [Boreskov Institute of Catalysis, Siberian Branch of RAS, 630090 Novosibirsk (Russian Federation); Stepanov, S. [S.I. Vavilov State Optical Institute, St. Petersburg (Russian Federation); Artemenko, A. [ICMCB, UPR CNRS 9048, 33608 Pessac cedex (France); Curely, J.; Kliava, J. [LOMA, UMR 5798 Universite Bordeaux 1-CNRS, 33405 Talence cedex (France)

    2012-10-15

    A new type of nanoparticle-containing glasses based on borate glasses co-doped with low contents of iron and larger radius elements, Dy, Tb, Gd, Ho, Er, Y, and Bi, is studied. Heat treatment of these glasses results in formation of magnetic nanoparticles, radically changing their physical properties. Transmission electron microscopy and synchrotron radiation-based techniques: x-ray diffraction, extended x-ray absorption fine structure, x-ray absorption near-edge structure, and small-angle x-ray scattering, show a broad distribution of nanoparticle sizes with characteristics depending on the treatment regime; a crystalline structure of these nanoparticles is detected in heat treated samples. Magnetic circular dichroism (MCD) studies of samples subjected to heat treatment as well as of maghemite, magnetite, and iron garnet allow to unambiguously assign the nanoparticle structure to maghemite, independently of co-dopant nature and of heat treatment regime used. Different features observed in the MCD spectra are related to different electron transitions in Fe{sup 3+} ions gathered in the nanoparticles. The static magnetization in heat treated samples has non-linear dependence on the magnetizing field with hysteresis. Zero-field cooled magnetization curves show that at higher temperatures the nanoparticles occur in superparamagnetic state with blocking temperatures above 100 K. Below ca. 20 K, a considerable contribution to both zero field-cooled and field-cooled magnetizations occurs from diluted paramagnetic ions. Variable-temperature electron magnetic resonance (EMR) studies unambiguously show that in as-prepared glasses paramagnetic ions are in diluted state and confirm the formation of magnetic nanoparticles already at earlier stages of heat treatment. Computer simulations of the EMR spectra corroborate the broad distribution of nanoparticle sizes found by 'direct' techniques as well as superparamagnetic nanoparticle behaviour demonstrated in the

  9. Magnetic nanoparticles formed in glasses co-doped with iron and larger radius elements

    International Nuclear Information System (INIS)

    Edelman, I.; Ivanova, O.; Ivantsov, R.; Velikanov, D.; Zabluda, V.; Zubavichus, Y.; Veligzhanin, A.; Zaikovskiy, V.; Stepanov, S.; Artemenko, A.; Curély, J.; Kliava, J.

    2012-01-01

    A new type of nanoparticle-containing glasses based on borate glasses co-doped with low contents of iron and larger radius elements, Dy, Tb, Gd, Ho, Er, Y, and Bi, is studied. Heat treatment of these glasses results in formation of magnetic nanoparticles, radically changing their physical properties. Transmission electron microscopy and synchrotron radiation-based techniques: x-ray diffraction, extended x-ray absorption fine structure, x-ray absorption near-edge structure, and small-angle x-ray scattering, show a broad distribution of nanoparticle sizes with characteristics depending on the treatment regime; a crystalline structure of these nanoparticles is detected in heat treated samples. Magnetic circular dichroism (MCD) studies of samples subjected to heat treatment as well as of maghemite, magnetite, and iron garnet allow to unambiguously assign the nanoparticle structure to maghemite, independently of co-dopant nature and of heat treatment regime used. Different features observed in the MCD spectra are related to different electron transitions in Fe 3+ ions gathered in the nanoparticles. The static magnetization in heat treated samples has non-linear dependence on the magnetizing field with hysteresis. Zero-field cooled magnetization curves show that at higher temperatures the nanoparticles occur in superparamagnetic state with blocking temperatures above 100 K. Below ca. 20 K, a considerable contribution to both zero field-cooled and field-cooled magnetizations occurs from diluted paramagnetic ions. Variable-temperature electron magnetic resonance (EMR) studies unambiguously show that in as-prepared glasses paramagnetic ions are in diluted state and confirm the formation of magnetic nanoparticles already at earlier stages of heat treatment. Computer simulations of the EMR spectra corroborate the broad distribution of nanoparticle sizes found by “direct” techniques as well as superparamagnetic nanoparticle behaviour demonstrated in the magnetization

  10. Magnetic nanoparticles formed in glasses co-doped with iron and larger radius elements

    Science.gov (United States)

    Edelman, I.; Ivanova, O.; Ivantsov, R.; Velikanov, D.; Zabluda, V.; Zubavichus, Y.; Veligzhanin, A.; Zaikovskiy, V.; Stepanov, S.; Artemenko, A.; Curély, J.; Kliava, J.

    2012-10-01

    A new type of nanoparticle-containing glasses based on borate glasses co-doped with low contents of iron and larger radius elements, Dy, Tb, Gd, Ho, Er, Y, and Bi, is studied. Heat treatment of these glasses results in formation of magnetic nanoparticles, radically changing their physical properties. Transmission electron microscopy and synchrotron radiation-based techniques: x-ray diffraction, extended x-ray absorption fine structure, x-ray absorption near-edge structure, and small-angle x-ray scattering, show a broad distribution of nanoparticle sizes with characteristics depending on the treatment regime; a crystalline structure of these nanoparticles is detected in heat treated samples. Magnetic circular dichroism (MCD) studies of samples subjected to heat treatment as well as of maghemite, magnetite, and iron garnet allow to unambiguously assign the nanoparticle structure to maghemite, independently of co-dopant nature and of heat treatment regime used. Different features observed in the MCD spectra are related to different electron transitions in Fe3+ ions gathered in the nanoparticles. The static magnetization in heat treated samples has non-linear dependence on the magnetizing field with hysteresis. Zero-field cooled magnetization curves show that at higher temperatures the nanoparticles occur in superparamagnetic state with blocking temperatures above 100 K. Below ca. 20 K, a considerable contribution to both zero field-cooled and field-cooled magnetizations occurs from diluted paramagnetic ions. Variable-temperature electron magnetic resonance (EMR) studies unambiguously show that in as-prepared glasses paramagnetic ions are in diluted state and confirm the formation of magnetic nanoparticles already at earlier stages of heat treatment. Computer simulations of the EMR spectra corroborate the broad distribution of nanoparticle sizes found by "direct" techniques as well as superparamagnetic nanoparticle behaviour demonstrated in the magnetization studies.

  11. Proteomic Biomarker Discovery in 1000 Human Plasma Samples with Mass Spectrometry.

    Science.gov (United States)

    Cominetti, Ornella; Núñez Galindo, Antonio; Corthésy, John; Oller Moreno, Sergio; Irincheeva, Irina; Valsesia, Armand; Astrup, Arne; Saris, Wim H M; Hager, Jörg; Kussmann, Martin; Dayon, Loïc

    2016-02-05

    The overall impact of proteomics on clinical research and its translation has lagged behind expectations. One recognized caveat is the limited size (subject numbers) of (pre)clinical studies performed at the discovery stage, the findings of which fail to be replicated in larger verification/validation trials. Compromised study designs and insufficient statistical power are consequences of the still-limited capacity of mass spectrometry (MS)-based workflows to handle large numbers of samples in a realistic time frame while delivering comprehensive proteome coverage. We developed a highly automated proteomic biomarker discovery workflow. Herein, we have applied this approach to analyze 1000 plasma samples from the multicentered human dietary intervention study "DiOGenes". Study design, sample randomization, tracking, and logistics were the foundations of our large-scale study. We checked the quality of the MS data and provided descriptive statistics. The data set was interrogated for the proteins with the most stable expression levels in that set of plasma samples. We evaluated standard clinical variables that typically impact forthcoming results and assessed body mass index-associated and gender-specific proteins at two time points. We demonstrate that analyzing a large number of human plasma samples for biomarker discovery with MS using isobaric tagging is feasible, providing robust and consistent biological results.

  12. In-well time-of-travel approach to evaluate optimal purge duration during low-flow sampling of monitoring wells

    Science.gov (United States)

    Harte, Philip T.

    2017-01-01

    A common assumption with groundwater sampling is that low pumping rates draw water primarily from the high hydraulic conductivity part of the screened formation, once inflow from that part of the formation has had time to travel vertically in the well to the pump intake. Therefore, the length of time needed for adequate purging prior to sample collection (called optimal purge duration) is controlled by the in-well, vertical travel times. A preliminary, simple analytical model was used to provide information on the relation between purge duration and capture of formation water for different gross levels of heterogeneity (contrast between low and high hydraulic conductivity layers). The model was then used to compare these time–volume relations to purge data (pumping rates and drawdown) collected at several representative monitoring wells from multiple sites. Results showed that the computation of time-dependent capture of formation water (as opposed to capture of preexisting screen water), which was based on vertical travel times in the well, compares favorably with the time required to achieve field parameter stabilization. If field parameter stabilization is an indicator of the arrival time of formation water, as has been postulated, then in-well, vertical flow may be an important factor at wells where low-flow sampling is the sample method of choice.
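The controlling quantity here, the in-well vertical travel time, can be sketched with a simple plug-flow calculation. The well geometry, pumping rate, and single-inflow-zone simplification below are illustrative assumptions, not values or the model from the study:

```python
import math

def purge_time_minutes(screen_length_m, well_diameter_m, pump_rate_l_min,
                       inflow_depth_fraction=1.0):
    # Plug-flow sketch: formation water entering at the high-conductivity zone
    # must displace the standing water between that zone and the pump intake.
    # inflow_depth_fraction = fraction of the screen between inflow and intake.
    radius_m = well_diameter_m / 2.0
    standing_volume_l = (math.pi * radius_m ** 2 * screen_length_m
                         * inflow_depth_fraction * 1000.0)  # m^3 -> litres
    return standing_volume_l / pump_rate_l_min

# hypothetical example: 3 m screen, 5 cm diameter well, 0.5 L/min low-flow rate
t_purge = purge_time_minutes(3.0, 0.05, 0.5)  # roughly 12 minutes
```

Under this sketch, halving the pumping rate doubles the optimal purge duration, which is one reason field-parameter stabilization times vary between wells and pumping setups.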

  13. Pumping time required to obtain tube well water samples with aquifer characteristic radon concentrations

    International Nuclear Information System (INIS)

    Ricardo, Carla Pereira; Oliveira, Arno Heeren de

    2011-01-01

    Radon is an inert noble gas that comes from the natural radioactive decay of uranium and thorium in soil, rock, and water. Radon isotopes emanated from radium-bearing grains of a rock or soil are released into the pore space, where radon is partitioned between the gaseous and aqueous phases. Thus, groundwater carries a radon signature from the rock that is characteristic of the aquifer. The characteristic radon concentration of an aquifer, which is mainly related to emanation, is also influenced by the degree of subsurface degassing, especially in the vicinity of a tube well, where the radon concentration is strongly reduced. To determine the pumping time required to obtain a tube well water sample with the characteristic radon concentration of the aquifer, an experiment was conducted in an 80 m deep tube well. In this experiment, after twenty-four hours without extraction, water samples were collected periodically, at about ten-minute intervals, during two hours of pumping. The radon concentrations of the samples were determined using the RAD7 Electronic Radon Detector from Durridge Company, a solid-state alpha spectrometric detector. The time necessary to reach the maximum radon concentration, i.e., the characteristic radon concentration of the aquifer, was found to be about sixty minutes. (author)
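For intuition about the sixty-minute result, a first-order flushing model (an illustrative assumption, not the analysis used in the study, and with a hypothetical time constant) relates pumping time to the fraction of the characteristic concentration reached:

```python
import math

def time_to_fraction(tau_min, fraction=0.95):
    # First-order recovery toward the characteristic aquifer concentration:
    # C(t) = C_char * (1 - exp(-t / tau)); solve C(t)/C_char = fraction for t.
    return -tau_min * math.log(1.0 - fraction)

# with a hypothetical flushing time constant of 20 min, 95% of the
# characteristic concentration is reached after roughly an hour
t95 = time_to_fraction(20.0, 0.95)
```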

  14. A time-sorting pitfall trap and temperature datalogger for the sampling of surface-active arthropods

    OpenAIRE

    McMunn, Marshall S.

    2017-01-01

    Nearly all arthropods display consistent patterns of activity according to time of day. These patterns of activity often limit the extent of animal co-occurrence in space and time. Quantifying when particular species are active and how activity varies with environmental conditions is difficult without the use of automated devices due to the need for continuous monitoring. Time-sorting pitfall traps passively collect active arthropods into containers with known beginning and end sample times. ...

  15. Investigation of Legionella Contamination in Bath Water Samples by Culture, Amoebic Co-Culture, and Real-Time Quantitative PCR Methods.

    Science.gov (United States)

    Edagawa, Akiko; Kimura, Akio; Kawabuchi-Kurata, Takako; Adachi, Shinichi; Furuhata, Katsunori; Miyamoto, Hiroshi

    2015-10-19

    We investigated Legionella contamination in bath water samples, collected from 68 bathing facilities in Japan, by culture, culture with amoebic co-culture, real-time quantitative PCR (qPCR), and real-time qPCR with amoebic co-culture. Using the conventional culture method, Legionella pneumophila was detected in 11 samples (11/68, 16.2%). Contrary to our expectation, the culture method with the amoebic co-culture technique did not increase the detection rate of Legionella (4/68, 5.9%). In contrast, a combination of the amoebic co-culture technique followed by qPCR successfully increased the detection rate (57/68, 83.8%) compared with real-time qPCR alone (46/68, 67.6%). Using real-time qPCR after culture with amoebic co-culture, more than 10-fold higher bacterial numbers were observed in 30 samples (30/68, 44.1%) compared with the same samples without co-culture. On the other hand, higher bacterial numbers were not observed after propagation by amoebae in 32 samples (32/68, 47.1%). Legionella was not detected in the remaining six samples (6/68, 8.8%), irrespective of the method. These results suggest that application of the amoebic co-culture technique prior to real-time qPCR may be useful for the sensitive detection of Legionella from bath water samples. Furthermore, a combination of amoebic co-culture and real-time qPCR might be useful to detect viable and virulent Legionella because their ability to invade and multiply within free-living amoebae is considered to correlate with their pathogenicity for humans. This is the first report evaluating the efficacy of the amoebic co-culture technique for detecting Legionella in bath water samples.

  16. Identification of driving network of cellular differentiation from single sample time course gene expression data

    Science.gov (United States)

    Chen, Ye; Wolanyk, Nathaniel; Ilker, Tunc; Gao, Shouguo; Wang, Xujing

    Methods developed based on bifurcation theory have demonstrated their potential for driving network identification in complex human diseases, including the work of Chen et al. Recently, bifurcation theory has been successfully applied to model cellular differentiation. However, one often faces a technical challenge in driving network prediction: a time course cellular differentiation study often contains only one sample at each time point, while driving network prediction typically requires multiple samples at each time point to infer the variation and interaction structures of candidate genes for the driving network. In this study, we investigate several methods to identify both the critical time point and the driving network through examination of how each time point affects the autocorrelation and phase locking. We apply these methods to a high-throughput sequencing (RNA-Seq) dataset of 42 subsets of thymocytes and mature peripheral T cells at multiple time points during their differentiation (GSE48138 from GEO). We compare the predicted driving genes with known transcription regulators of cellular differentiation. We discuss the advantages and limitations of our proposed methods, as well as potential further improvements.
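One way to see why a critical time point can be detected from autocorrelation: near a bifurcation, dynamics slow down and successive values become strongly correlated. A minimal lag-1 autocorrelation sketch on synthetic signals (illustrative only, not the authors' pipeline):

```python
import numpy as np

def lag1_autocorr(x):
    # sample lag-1 autocorrelation, a standard early-warning indicator
    x = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

rng = np.random.default_rng(0)
white = rng.normal(size=2000)            # memoryless: autocorrelation near 0
slow = np.cumsum(rng.normal(size=2000))  # strong memory: autocorrelation near 1
```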

  17. High-pressure oxygenation of thin-wall YBCO single-domain samples

    International Nuclear Information System (INIS)

    Chaud, X; Savchuk, Y; Sergienko, N; Prikhna, T; Diko, P

    2008-01-01

    The oxygen annealing of ReBCO bulk material, necessary to achieve superconducting properties, usually induces micro- and macro-cracks. This leads to a crack-assisted oxygenation process that allows large bulk samples to be oxygenated faster than single crystals, but the excellent superconducting properties are negated by poor mechanical ones. A more progressive oxygenation strategy has been shown to reduce the oxygenation cracks drastically; the problem is then to keep a reasonable annealing time. The concept of bulk Y123 single-domain samples with thin-wall geometry has been introduced to bypass the inherent limitation due to the slow oxygen diffusion rate, but this alone is not enough. The use of a high oxygen pressure (16 MPa) speeds up the process further: it displaces the equilibrium phase diagram towards higher temperatures, i.e., higher diffusion rates, for a given oxygen content in the material. Remarkable results were obtained by applying such a high-pressure oxygen annealing process to thin-wall single-domain samples. The trapped field of 16 mm diameter Y123 thin-wall single-domain samples was doubled (0.6 T vs 0.3 T at 77 K) using an annealing time half as long (about 3 days). The initial development was made on thin bars. The advantage of the thin-wall geometry is that such an annealing can be applied directly to a much larger sample.

  18. Influenza virus drug resistance: a time-sampled population genetics perspective.

    Directory of Open Access Journals (Sweden)

    Matthieu Foll

    2014-02-01

    Full Text Available The challenge of distinguishing genetic drift from selection remains a central focus of population genetics. Time-sampled data may provide a powerful tool for distinguishing these processes, and we here propose approximate Bayesian, maximum likelihood, and analytical methods for the inference of demography and selection from time course data. Utilizing these novel statistical and computational tools, we evaluate whole-genome datasets of an influenza A H1N1 strain in the presence and absence of oseltamivir (an inhibitor of neuraminidase) collected at thirteen time points. Results reveal a striking consistency amongst the three estimation procedures developed, showing strongly increased selection pressure in the presence of drug treatment. Importantly, these approaches re-identify the known oseltamivir resistance site, successfully validating the approaches used. Enticingly, a number of previously unknown variants have also been identified as being positively selected. Results are interpreted in the light of Fisher's Geometric Model, allowing for a quantification of the increased distance to optimum exerted by the presence of drug, and theoretical predictions regarding the distribution of beneficial fitness effects of contending mutations are empirically tested. Further, given the fit to expectations of the Geometric Model, results suggest the ability to predict certain aspects of viral evolution in response to changing host environments and novel selective pressures.
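The drift-versus-selection contrast being inferred can be sketched with a Wright-Fisher simulation; the population size, starting frequency, and selection coefficient below are hypothetical, not estimates from the paper:

```python
import numpy as np

def wright_fisher(n_gen, pop_size, p0, s, rng):
    # allele-frequency trajectory: selection shifts the expected frequency,
    # then binomial resampling adds genetic drift (s = 0 gives pure drift)
    p, traj = p0, [p0]
    for _ in range(n_gen):
        p_sel = p * (1.0 + s) / (p * (1.0 + s) + (1.0 - p))
        p = rng.binomial(pop_size, p_sel) / pop_size
        traj.append(p)
    return np.array(traj)

rng = np.random.default_rng(42)
drift = wright_fisher(13, 10_000, 0.05, 0.0, rng)     # no drug: drift only
selected = wright_fisher(13, 10_000, 0.05, 0.5, rng)  # drug: strong selection
```

Time-sampled inference methods like those above exploit exactly this difference: under drift alone the frequency wanders slightly, while under strong selection it climbs systematically across the sampled time points.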

  19. Neutron moderation in a bulk sample and its effects on PGNAA setup geometry

    International Nuclear Information System (INIS)

    Al-Jarallah, M.I.; Naqvi, A.A.; Fazal-ur-Rehman,; Maselehuddin, M.; Abu-Jarad, F.; Raashid, M.

    2003-01-01

    In a prompt gamma ray neutron activation analysis (PGNAA) setup, neutron moderation in the bulk sample also plays a key role and can even dominate the thermalization effects of the external moderator in some cases. In order to study the neutron moderation effect in the bulk sample, moderators with two different sizes of the sample were tested at the King Fahd University of Petroleum and Minerals (KFUPM) PGNAA facility. In these tests, the thermal neutron relative intensity and the prompt gamma ray yield from the two moderators were measured using nuclear track detectors (NTDs) and a NaI detector, respectively. As predicted by Monte Carlo simulations, the measured intensity of thermal neutrons inside the large sample cavity due to the external moderator was smaller than that from the smaller sample cavity: owing to its larger size, additional thermalization of neutrons takes place in the larger sample. In spite of the smaller thermal neutron yield from the external moderator at the large sample location, a higher prompt gamma ray yield was observed compared with that from the smaller sample. This confirms the significance of neutron moderation effects in the bulk sample, which can thereby affect the PGNAA geometry size, allowing larger samples in conjunction with smaller moderators in the PGNAA setup.

  20. Neutron moderation in a bulk sample and its effects on PGNAA setup geometry

    Energy Technology Data Exchange (ETDEWEB)

    Al-Jarallah, M.I. E-mail: mibrahim@kfupm.edu.sa; Naqvi, A.A.; Fazal-ur-Rehman,; Maselehuddin, M.; Abu-Jarad, F.; Raashid, M

    2003-06-01

    In a prompt gamma ray neutron activation analysis (PGNAA) setup, neutron moderation in the bulk sample also plays a key role and can even dominate the thermalization effects of the external moderator in some cases. In order to study the neutron moderation effect in the bulk sample, moderators with two different sizes of the sample were tested at the King Fahd University of Petroleum and Minerals (KFUPM) PGNAA facility. In these tests, the thermal neutron relative intensity and the prompt gamma ray yield from the two moderators were measured using nuclear track detectors (NTDs) and a NaI detector, respectively. As predicted by Monte Carlo simulations, the measured intensity of thermal neutrons inside the large sample cavity due to the external moderator was smaller than that from the smaller sample cavity: owing to its larger size, additional thermalization of neutrons takes place in the larger sample. In spite of the smaller thermal neutron yield from the external moderator at the large sample location, a higher prompt gamma ray yield was observed compared with that from the smaller sample. This confirms the significance of neutron moderation effects in the bulk sample, which can thereby affect the PGNAA geometry size, allowing larger samples in conjunction with smaller moderators in the PGNAA setup.

  1. Imaging systems and algorithms to analyze biological samples in real-time using mobile phone microscopy.

    Science.gov (United States)

    Shanmugam, Akshaya; Usmani, Mohammad; Mayberry, Addison; Perkins, David L; Holcomb, Daniel E

    2018-01-01

    Miniaturized imaging devices have pushed the boundaries of point-of-care imaging, but existing mobile-phone-based imaging systems do not exploit the full potential of smart phones. This work demonstrates the use of simple imaging configurations to deliver superior image quality and the ability to handle a wide range of biological samples. Results presented in this work are from analysis of fluorescent beads under fluorescence imaging, as well as helminth eggs and freshwater mussel larvae under white light imaging. To demonstrate the versatility of the systems, real-time analysis and post-processing results of the sample count and sample size are presented for both still images and videos of flowing samples.
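The sample count/size step can be sketched with simple thresholding and connected-component labelling; the threshold value and the synthetic frame below are assumptions for illustration, not the system's actual algorithm or data:

```python
import numpy as np
from scipy import ndimage

def count_and_size(image, threshold):
    # threshold, then count connected bright regions and measure their areas
    mask = image > threshold
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return n, sizes

# synthetic frame standing in for a microscope image: two "beads"
frame = np.zeros((64, 64))
frame[10:14, 10:14] = 1.0   # 4 x 4 = 16-pixel object
frame[40:45, 40:47] = 1.0   # 5 x 7 = 35-pixel object
n_objects, areas = count_and_size(frame, 0.5)
```

Pixel areas convert to physical sizes once the optical magnification (microns per pixel) of the phone attachment is calibrated.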

  2. Observer-based output feedback control of networked control systems with non-uniform sampling and time-varying delay

    Science.gov (United States)

    Meng, Su; Chen, Jie; Sun, Jian

    2017-10-01

    This paper investigates the problem of observer-based output feedback control for networked control systems with non-uniform sampling and time-varying transmission delay. The sampling intervals are assumed to vary within a given interval. The transmission delay belongs to a known interval. A discrete-time model is first established, which contains time-varying delay and norm-bounded uncertainties coming from non-uniform sampling intervals. It is then converted to an interconnection of two subsystems in which the forward channel is delay-free. The scaled small gain theorem is used to derive the stability condition for the closed-loop system. Moreover, the observer-based output feedback controller design method is proposed by utilising a modified cone complementary linearisation algorithm. Finally, numerical examples illustrate the validity and superiority of the proposed method.

  3. Sensitive time-resolved fluoroimmunoassay for quantitative determination of clothianidin in agricultural samples.

    Science.gov (United States)

    Li, Ming; Sheng, Enze; Yuan, Yulong; Liu, Xiaofeng; Hua, Xiude; Wang, Minghua

    2014-05-01

    Europium (Eu(3+))-labeled antibody was used as a fluorescent label to develop a highly sensitive time-resolved fluoroimmunoassay (TRFIA) for the determination of clothianidin residues in agricultural samples. Toward this goal, the Eu(3+)-labeled polyclonal antibody and goat anti-rabbit antibody were prepared for developing and evaluating a direct competitive TRFIA (dc-TRFIA) and an indirect competitive TRFIA (ic-TRFIA). Under optimal conditions, the half-maximal inhibition concentration (IC50) and the limit of detection (LOD, IC10) of clothianidin were 9.20 and 0.0909 μg/L for the dc-TRFIA and 2.07 and 0.0220 μg/L for the ic-TRFIA, respectively. The ic-TRFIA has no obvious cross-reactivity with the analogues of clothianidin except for dinotefuran. The average recoveries of clothianidin from spiked water, soil, cabbage, and rice samples ranged from 74.1 to 115.9%, with relative standard deviations of 3.3 to 11.7%. The results of the TRFIA for blind samples were largely consistent with gas chromatography (R² = 0.9902). The optimized ic-TRFIA might become a sensitive and satisfactory analytical method for the quantitative monitoring of clothianidin residues in agricultural samples.
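The IC50 and IC10 values quoted above come from a competitive standard curve. A four-parameter logistic sketch is below; the unit slope and zero baseline are illustrative assumptions, not the fitted assay parameters:

```python
def four_pl(x, a, b, c, d):
    # 4-parameter logistic: a = maximal signal, d = background,
    # c = IC50, b = slope of the competitive curve
    return d + (a - d) / (1.0 + (x / c) ** b)

def inhibition_conc(frac_signal, a, b, c, d):
    # concentration at which the signal falls to d + frac_signal * (a - d);
    # frac_signal = 0.5 gives the IC50, 0.9 gives the IC10 (10% inhibition)
    y = d + frac_signal * (a - d)
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

# illustrative curve anchored at the reported ic-TRFIA IC50 of 2.07 ug/L
ic50 = inhibition_conc(0.5, 1.0, 1.0, 2.07, 0.0)
ic10 = inhibition_conc(0.9, 1.0, 1.0, 2.07, 0.0)
```

With the assumed unit slope, the IC10 lands near 0.23 μg/L; the reported LOD of 0.0220 μg/L implies a shallower fitted slope, underscoring that the parameters here are placeholders.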

  4. Evaluation of statistical methods for quantifying fractal scaling in water-quality time series with irregular sampling

    Directory of Open Access Journals (Sweden)

    Q. Zhang

    2018-02-01

    Full Text Available River water-quality time series often exhibit fractal scaling, which here refers to autocorrelation that decays as a power law over some range of scales. Fractal scaling presents challenges to the identification of deterministic trends because (1) fractal scaling has the potential to lead to false inference about the statistical significance of trends and (2) the abundance of irregularly spaced data in water-quality monitoring networks complicates efforts to quantify fractal scaling. Traditional methods for estimating fractal scaling – in the form of spectral slope (β) or other equivalent scaling parameters (e.g., Hurst exponent) – are generally inapplicable to irregularly sampled data. Here we consider two types of estimation approaches for irregularly sampled data and evaluate their performance using synthetic time series. These time series were generated such that (1) they exhibit a wide range of prescribed fractal scaling behaviors, ranging from white noise (β = 0) to Brown noise (β = 2), and (2) their sampling gap intervals mimic the sampling irregularity (as quantified by both the skewness and mean of gap-interval lengths) in real water-quality data. The results suggest that none of the existing methods fully account for the effects of sampling irregularity on β estimation. First, the results illustrate the danger of using interpolation for gap filling when examining autocorrelation, as the interpolation methods consistently underestimate or overestimate β under a wide range of prescribed β values and gap distributions. Second, the widely used Lomb–Scargle spectral method also consistently underestimates β. A previously published modified form, using only the lowest 5 % of the frequencies for spectral slope estimation, has very poor precision, although the overall bias is small. Third, a recent wavelet-based method, coupled with an aliasing filter, generally has the smallest bias and root-mean-squared error among
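A minimal sketch of Lomb–Scargle spectral-slope (β) estimation on irregularly sampled synthetic series, in the spirit of the evaluation above; the frequency grid, record length, and uniform-random sampling are arbitrary choices, not the paper's protocol:

```python
import numpy as np
from scipy.signal import lombscargle

def spectral_slope(t, y, n_freq=60):
    # fit a power law P(f) ~ f^(-beta) to the Lomb-Scargle periodogram
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float) - np.mean(y)
    span = t.max() - t.min()
    # log-spaced angular frequencies from ~1/record length up to a quarter of
    # the mean sampling rate
    freqs = 2 * np.pi * np.logspace(np.log10(1.0 / span),
                                    np.log10(0.25 * len(t) / span), n_freq)
    power = lombscargle(t, y, freqs)
    slope, _ = np.polyfit(np.log10(freqs), np.log10(power), 1)
    return -slope  # beta estimate

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 1000.0, 500))          # irregular sampling times
dt = np.diff(t, prepend=t[0] - 1.0)
white = rng.normal(size=500)                         # beta ~ 0
brown = np.cumsum(rng.normal(size=500) * np.sqrt(dt))  # Brownian path, beta ~ 2
beta_white = spectral_slope(t, white)
beta_brown = spectral_slope(t, brown)
```

Consistent with the abstract, the raw Lomb–Scargle slope tends to be biased for strongly red series under irregular sampling, but it still separates white from Brown noise cleanly.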

  5. Sample processing, protocol, and statistical analysis of the time-of-flight secondary ion mass spectrometry (ToF-SIMS) of protein, cell, and tissue samples.

    Science.gov (United States)

    Barreto, Goncalo; Soininen, Antti; Sillat, Tarvo; Konttinen, Yrjö T; Kaivosoja, Emilia

    2014-01-01

    Time-of-flight secondary ion mass spectrometry (ToF-SIMS) is increasingly being used in the analysis of biological samples. For example, it has been applied to distinguish healthy and osteoarthritic human cartilage. This chapter discusses the ToF-SIMS principle and instrumentation, including the three modes of analysis in ToF-SIMS. ToF-SIMS sets certain requirements for the samples to be analyzed; for example, the samples have to be vacuum compatible. Accordingly, sample processing steps for ToF-SIMS are presented for different biological samples, i.e., proteins, cells, frozen and paraffin-embedded tissues, and extracellular matrix. Multivariate analysis of the ToF-SIMS data and the necessary data preprocessing steps (peak selection, data normalization, mean-centering, and scaling and transformation) are discussed in this chapter.

  6. Investigation of Legionella Contamination in Bath Water Samples by Culture, Amoebic Co-Culture, and Real-Time Quantitative PCR Methods

    Directory of Open Access Journals (Sweden)

    Akiko Edagawa

    2015-10-01

    Full Text Available We investigated Legionella contamination in bath water samples, collected from 68 bathing facilities in Japan, by culture, culture with amoebic co-culture, real-time quantitative PCR (qPCR), and real-time qPCR with amoebic co-culture. Using the conventional culture method, Legionella pneumophila was detected in 11 samples (11/68, 16.2%). Contrary to our expectation, the culture method with the amoebic co-culture technique did not increase the detection rate of Legionella (4/68, 5.9%). In contrast, a combination of the amoebic co-culture technique followed by qPCR successfully increased the detection rate (57/68, 83.8%) compared with real-time qPCR alone (46/68, 67.6%). Using real-time qPCR after culture with amoebic co-culture, more than 10-fold higher bacterial numbers were observed in 30 samples (30/68, 44.1%) compared with the same samples without co-culture. On the other hand, higher bacterial numbers were not observed after propagation by amoebae in 32 samples (32/68, 47.1%). Legionella was not detected in the remaining six samples (6/68, 8.8%), irrespective of the method. These results suggest that application of the amoebic co-culture technique prior to real-time qPCR may be useful for the sensitive detection of Legionella from bath water samples. Furthermore, a combination of amoebic co-culture and real-time qPCR might be useful to detect viable and virulent Legionella because their ability to invade and multiply within free-living amoebae is considered to correlate with their pathogenicity for humans. This is the first report evaluating the efficacy of the amoebic co-culture technique for detecting Legionella in bath water samples.

  7. Limited sampling strategy for determining metformin area under the plasma concentration-time curve

    DEFF Research Database (Denmark)

    Santoro, Ana Beatriz; Stage, Tore Bjerregaard; Struchiner, Claudio José

    2016-01-01

    AIM: The aim was to develop and validate limited sampling strategy (LSS) models to predict the area under the plasma concentration-time curve (AUC) for metformin. METHODS: Metformin plasma concentrations (n = 627) at 0-24 h after a single 500 mg dose were used for LSS development, based on all su...
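The LSS idea, predicting the full AUC from a few timed concentrations via regression, can be sketched under an assumed one-compartment oral model. The dose is the study's 500 mg, but the PK parameter distributions and the 2 h/8 h sampling choice below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)
times = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 12.0, 24.0])  # hours

def one_compartment(dose, ka, ke, v, t):
    # oral one-compartment concentration-time profile
    return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

# simulate a cohort with between-subject variability (hypothetical parameters)
n = 200
ka = rng.lognormal(np.log(1.0), 0.3, n)    # absorption rate, 1/h
ke = rng.lognormal(np.log(0.2), 0.3, n)    # elimination rate, 1/h
v = rng.lognormal(np.log(100.0), 0.2, n)   # volume of distribution, L
profiles = np.array([one_compartment(500.0, a, e, vv, times)
                     for a, e, vv in zip(ka, ke, v)])

# reference AUC(0-24 h) by the trapezoidal rule
auc_full = np.sum((profiles[:, 1:] + profiles[:, :-1]) / 2.0
                  * np.diff(times), axis=1)

# limited sampling strategy: regress AUC on just the 2 h and 8 h concentrations
X = np.column_stack([np.ones(n), profiles[:, 2], profiles[:, 6]])
coef, *_ = np.linalg.lstsq(X, auc_full, rcond=None)
r2 = 1.0 - (np.sum((auc_full - X @ coef) ** 2)
            / np.sum((auc_full - auc_full.mean()) ** 2))
```

In real LSS development, candidate time-point subsets are compared on prediction error in an independent validation set rather than on in-sample R² alone.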

  8. The effects of disjunct sampling and averaging time on maximum mean wind speeds

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Mann, J.

    2006-01-01

    Conventionally, the 50-year wind is calculated on basis of the annual maxima of consecutive 10-min averages. Very often, however, the averages are saved with a temporal spacing of several hours. We call it disjunct sampling. It may also happen that the wind speeds are averaged over a longer time...
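The conventional 50-year wind calculation from annual maxima can be sketched with a method-of-moments Gumbel fit, a common textbook choice rather than the authors' procedure; the data below are synthetic:

```python
import numpy as np

def gumbel_return_value(annual_maxima, return_period_years=50):
    # method-of-moments Gumbel fit to annual maxima, then the T-year quantile
    x = np.asarray(annual_maxima, dtype=float)
    beta = np.sqrt(6.0) * x.std(ddof=1) / np.pi   # scale parameter
    mu = x.mean() - 0.5772 * beta                  # location (Euler-Mascheroni)
    p = 1.0 - 1.0 / return_period_years            # annual non-exceedance prob.
    return mu - beta * np.log(-np.log(p))

rng = np.random.default_rng(3)
# synthetic 20-year record of annual maximum 10-min mean wind speeds (m/s)
maxima = 25.0 + 3.0 * rng.gumbel(size=20)
u50 = gumbel_return_value(maxima)
```

Disjunct sampling and longer averaging times both tend to lower the recorded annual maxima, which is why they bias the fitted 50-year value; the sketch makes it easy to test such effects by degrading the synthetic record.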

  9. Larger groups of passerines are more efficient problem solvers in the wild

    Science.gov (United States)

    Morand-Ferron, Julie; Quinn, John L.

    2011-01-01

    Group living commonly helps organisms face challenging environmental conditions. Although a known phenomenon in humans, recent findings suggest that a benefit of group living in animals generally might be increased innovative problem-solving efficiency. This benefit has never been demonstrated in a natural context, however, and the mechanisms underlying improved efficiency are largely unknown. We examined the problem-solving performance of great and blue tits at automated devices and found that efficiency increased with flock size. This relationship held when restricting the analysis to naive individuals, demonstrating that larger groups increased innovation efficiency. In addition to this effect of naive flock size, the presence of at least one experienced bird increased the frequency of solving, and larger flocks were more likely to contain experienced birds. These findings provide empirical evidence for the “pool of competence” hypothesis in nonhuman animals. The probability of success also differed consistently between individuals, a necessary condition for the pool of competence hypothesis. Solvers had a higher probability of success when foraging with a larger number of companions and when using devices located near rather than further from protective tree cover, suggesting a role for reduced predation risk on problem-solving efficiency. In contrast to traditional group living theory, individuals joining larger flocks benefited from a higher seed intake, suggesting that group living facilitated exploitation of a novel food source through improved problem-solving efficiency. Together our results suggest that both ecological and social factors, through reduced predation risk and increased pool of competence, mediate innovation in natural populations. PMID:21930936

  10. Time perspective in hereditary cancer: psychometric properties of a short form of the Zimbardo Time Perspective Inventory in a community and clinical sample.

    Science.gov (United States)

    Wakefield, Claire E; Homewood, Judi; Taylor, Alan; Mahmut, Mehmet; Meiser, Bettina

    2010-10-01

    We aimed to assess the psychometric properties of a 25-item short form of the Zimbardo Time Perspective Inventory in a community sample (N = 276) and in individuals with a strong family history of cancer, considering genetic testing for cancer risk (N = 338). In the community sample, individuals with high past-negative or present-fatalistic scores had higher levels of distress, as measured by depression, anxiety, and aggression. Similarly, in the patient sample, past-negative time perspective was positively correlated with distress, uncertainty, and postdecision regret when making a decision about genetic testing. Past-negative-oriented individuals were also more likely to be undecided about, or against, genetic testing. Hedonism was associated with being less likely to read the educational materials they received at their clinic, and fatalism was associated with having lower knowledge levels about genetic testing. The assessment of time perspective in individuals at increased risk of cancer can provide valuable clinical insights. However, further investigation of the psychometric properties of the short form of this scale is warranted, as it did not meet the currently accepted criteria for psychometric validation studies.

  11. Hierarchical Bayesian modelling of gene expression time series across irregularly sampled replicates and clusters.

    Science.gov (United States)

    Hensman, James; Lawrence, Neil D; Rattray, Magnus

    2013-08-20

    Time course data from microarrays and high-throughput sequencing experiments require simple, computationally efficient and powerful statistical models to extract meaningful biological signal, and for tasks such as data fusion and clustering. Existing methodologies fail to capture either the temporal or replicated nature of the experiments, and often impose constraints on the data collection process, such as regularly spaced samples, or similar sampling schema across replications. We propose hierarchical Gaussian processes as a general model of gene expression time-series, with application to a variety of problems. In particular, we illustrate the method's capacity for missing data imputation, data fusion and clustering. The method can impute data which is missing both systematically and at random: in a hold-out test on real data, performance is significantly better than commonly used imputation methods. The method's ability to model inter- and intra-cluster variance leads to more biologically meaningful clusters. The approach removes the necessity for evenly spaced samples, an advantage illustrated on a developmental Drosophila dataset with irregular replications. The hierarchical Gaussian process model provides an excellent statistical basis for several gene-expression time-series tasks. It has only a few additional parameters over a regular GP, has negligible additional complexity, is easily implemented and can be integrated into several existing algorithms. Our experiments were implemented in python, and are available from the authors' website: http://staffwww.dcs.shef.ac.uk/people/J.Hensman/.
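The shared-plus-replicate covariance structure at the heart of a hierarchical GP can be sketched as follows; the RBF kernels and hyperparameter values are hypothetical choices for illustration, not the authors' exact implementation:

```python
import numpy as np

def rbf(t1, t2, variance, lengthscale):
    # squared-exponential kernel between two sets of time points
    d = t1[:, None] - t2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def hierarchical_cov(t, replicate_ids, var_g=1.0, len_g=2.0,
                     var_r=0.3, len_r=1.0, noise=0.05):
    # two-layer hierarchy: one smooth gene-level function shared by all
    # replicates, plus an independent smooth deviation per replicate
    K = rbf(t, t, var_g, len_g)                        # shared component
    same = replicate_ids[:, None] == replicate_ids[None, :]
    K = K + np.where(same, rbf(t, t, var_r, len_r), 0.0)  # replicate deviations
    return K + noise * np.eye(len(t))                  # observation noise

# irregular, replicate-specific sampling times: no shared schedule required
t = np.array([0.0, 1.0, 2.5, 0.5, 1.7, 3.0])
rep = np.array([0, 0, 0, 1, 1, 1])
K = hierarchical_cov(t, rep)
```

Because covariance between observations depends only on their times and replicate membership, nothing forces the replicates onto a common or evenly spaced grid, which is the property exploited for the irregularly replicated Drosophila data.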

  12. Salmonella detection in poultry samples. Comparison of two commercial real-time PCR systems with culture methods for the detection of Salmonella spp. in environmental and fecal samples of poultry.

    Science.gov (United States)

    Sommer, D; Enderlein, D; Antakli, A; Schönenbrücher, H; Slaghuis, J; Redmann, T; Lierz, M

    2012-01-01

    The efficiency of two commercial PCR methods based on real-time technology, the foodproof® Salmonella detection system and the BAX® PCR Assay Salmonella system, was compared to standardized culture methods (EN ISO 6579:2002 - Annex D) for the detection of Salmonella spp. in poultry samples. Four sample matrices (feed, dust, boot swabs, feces) obtained directly from poultry flocks, as well as artificially spiked samples of the same matrices, were used. All samples were first tested for Salmonella spp. using culture methods as the gold standard. In addition, samples spiked with Salmonella Enteritidis were tested to evaluate the sensitivity of both PCR methods. Furthermore, all methods were evaluated in an annual ring-trial of the National Salmonella Reference Laboratory of Germany. Salmonella detection in the matrices feed, dust, and boot swabs was comparable for both PCR systems, whereas the results for feces differed markedly. The quality, especially the freshness, of the fecal samples had an influence on the sensitivity of the real-time PCR and on the results of the culture methods. In fresh fecal samples an initial spiking level of 100 cfu/25 g Salmonella Enteritidis was detected. Fecal samples dried for two days allowed the detection of 14 cfu/25 g. Both real-time PCR protocols appear to be suitable for the detection of Salmonella spp. in all four matrices. The foodproof® system detected eight more samples as positive compared with the BAX® system, but had a potential false-positive result in one case. In samples dried for seven days, none of the methods was able to detect Salmonella, likely because of lethal cell damage. In general, the advantage of PCR analysis over the culture method is the reduction of working time from 4-5 days to only 2 days. However, especially for the analysis of fecal samples, official validation should be conducted according to the requirements of EN ISO 6579:2002 - Annex D.

  13. Extracting Hydrologic Understanding from the Unique Space-time Sampling of the Surface Water and Ocean Topography (SWOT) Mission

    Science.gov (United States)

    Nickles, C.; Zhao, Y.; Beighley, E.; Durand, M. T.; David, C. H.; Lee, H.

    2017-12-01

    The Surface Water and Ocean Topography (SWOT) satellite mission is jointly developed by NASA and the French space agency (CNES), with participation from the Canadian and UK space agencies, to serve both the hydrology and oceanography communities. The SWOT mission will sample global surface water extents and elevations (lakes/reservoirs, rivers, estuaries, oceans, sea and land ice) at a finer spatial resolution than is currently possible, enabling hydrologic discovery, model advancements and new applications that are not currently possible or even conceivable. Although the mission will provide global coverage, analysis and interpolation of the data generated from the irregular space/time sampling represent a significant challenge. In this study, we explore the applicability of the unique space/time sampling for understanding river discharge dynamics throughout the Ohio River Basin. River network topology, SWOT sampling (i.e., orbit and identified SWOT river reaches) and spatial interpolation concepts are used to quantify the fraction of river reaches effectively sampled on each day of the three-year mission. Streamflow statistics for SWOT-generated river discharge time series are compared to continuous daily river discharge series. Relationships are presented to transform SWOT-generated streamflow statistics to equivalent continuous daily discharge time series statistics, intended to support hydrologic applications using low-flow and annual flow duration statistics.
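
    The effect of irregular temporal sampling on streamflow statistics can be sketched numerically. The following is a minimal illustration, not the study's method: the repeat cycle, pass days, and the discharge series are all invented stand-ins for the actual SWOT orbit/reach geometry.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily discharge series for one river reach (3 years):
# a seasonal cycle plus skewed runoff noise.
days = np.arange(3 * 365)
discharge = 100 + 50 * np.sin(2 * np.pi * days / 365) + rng.gamma(2.0, 5.0, days.size)

# Assume the reach is overflown on fixed days of a 21-day repeat cycle
# (e.g. days 3 and 14 of each cycle) -- an idealized stand-in for the
# actual orbit/reach intersection.
pass_days = days[np.isin(days % 21, [3, 14])]
swot_series = discharge[pass_days]

# Fraction of days with an observation, and a simple statistic comparison.
sampling_fraction = pass_days.size / days.size
mean_ratio = np.mean(swot_series) / np.mean(discharge)
print(f"sampling fraction: {sampling_fraction:.3f}, mean ratio: {mean_ratio:.3f}")
```

With two passes per 21-day cycle, roughly 10% of days are observed; relationships like those in the abstract aim to correct statistics estimated from such sparse series back toward their daily-series equivalents.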

  14. Research on test of product based on spatial sampling criteria and variable step sampling mechanism

    Science.gov (United States)

    Li, Ruihong; Han, Yueping

    2014-09-01

    This paper presents an effective approach for online testing of the assembly structures inside products using a multiple-views technique and an X-ray digital radiography system, based on spatial sampling criteria and a variable step sampling mechanism. For each object inside a product, there is a maximal rotary step within which the least structural size to be tested remains resolvable; this step is restricted by the least structural size and the inherent resolution of the imaging system. In the offline learning process, the object is rotated by this step and imaged repeatedly until a complete cycle is finished, yielding an image sequence that includes the full structural information for recognition. During the online inspection process, the program first finds the optimum solutions for all the different target parts in the standard sequence, i.e., finds their exact angles within one cycle. Since most other targets in the product are larger than the least structure, a variable step-size sampling mechanism is adopted: the product is rotated through specific angles with different steps according to the different objects inside it, and the views are matched against the standard sequence. Experimental results show that the variable step-size method saves substantial time compared with the traditional fixed-step inspection method while recognition accuracy is maintained.

  15. A general theory on frequency and time-frequency analysis of irregularly sampled time series based on projection methods - Part 2: Extension to time-frequency analysis

    Science.gov (United States)

    Lenoir, Guillaume; Crucifix, Michel

    2018-03-01

    Geophysical time series are sometimes sampled irregularly along the time axis. The situation is particularly frequent in palaeoclimatology. Yet, there is so far no general framework for handling the continuous wavelet transform when the time sampling is irregular. Here we provide such a framework. To this end, we define the scalogram as the continuous-wavelet-transform equivalent of the extended Lomb-Scargle periodogram defined in Part 1 of this study (Lenoir and Crucifix, 2018). The signal being analysed is modelled as the sum of a locally periodic component in the time-frequency plane, a polynomial trend, and a background noise. The mother wavelet adopted here is the Morlet wavelet classically used in geophysical applications. The background noise model is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, which is more general than the traditional Gaussian white and red noise processes. The scalogram is smoothed by averaging over neighbouring times in order to reduce its variance. The Shannon-Nyquist exclusion zone is however defined as the area corrupted by local aliasing issues. The local amplitude in the time-frequency plane is then estimated with least-squares methods. We also derive an approximate formula linking the squared amplitude and the scalogram. Based on this property, we define a new analysis tool: the weighted smoothed scalogram, which we recommend for most analyses. The estimated signal amplitude also gives access to band and ridge filtering. Finally, we design a test of significance for the weighted smoothed scalogram against the stationary Gaussian CARMA background noise, and provide algorithms for computing confidence levels, either analytically or with Monte Carlo Markov chain methods. All the analysis tools presented in this article are available to the reader in the Python package WAVEPAL.
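
    The irregular-sampling problem that Part 1 addresses with an extended Lomb-Scargle periodogram can be illustrated with the classical Lomb-Scargle estimator (a textbook version, not the authors' extension or the WAVEPAL package):

```python
import numpy as np

def lomb_scargle(t, y, freqs):
    """Classical Lomb-Scargle periodogram for irregularly sampled data.
    t, y: sample times and values; freqs: angular frequencies to evaluate."""
    y = y - y.mean()
    power = np.empty(freqs.size)
    for i, w in enumerate(freqs):
        # Time offset tau that makes the sine/cosine bases orthogonal.
        tau = np.arctan2(np.sum(np.sin(2 * w * t)), np.sum(np.cos(2 * w * t))) / (2 * w)
        c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
        power[i] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
    return power

# Irregularly sampled 0.1 Hz sinusoid: 200 random times over 100 s.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 100, 200))
y = np.sin(2 * np.pi * 0.1 * t)
freqs = 2 * np.pi * np.linspace(0.01, 0.5, 500)
p = lomb_scargle(t, y, freqs)
f_peak = freqs[np.argmax(p)] / (2 * np.pi)
print(f"recovered frequency: {f_peak:.3f} Hz")
```

The periodogram recovers the underlying frequency without any interpolation onto a regular grid, which is the property the scalogram defined above generalizes to the time-frequency plane.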

  16. Symbol synchronization and sampling frequency synchronization techniques in real-time DDO-OFDM systems

    Science.gov (United States)

    Chen, Ming; He, Jing; Cao, Zizheng; Tang, Jin; Chen, Lin; Wu, Xian

    2014-09-01

    In this paper, we propose and experimentally demonstrate symbol synchronization and sampling frequency synchronization techniques in a real-time direct-detection optical orthogonal frequency division multiplexing (DDO-OFDM) system, over 100-km standard single mode fiber (SSMF), using a cost-effective directly modulated distributed feedback (DFB) laser. The experimental results show that the proposed symbol synchronization based on a training sequence (TS) has low complexity and high accuracy, even at a sampling frequency offset (SFO) of 5000 ppm. Meanwhile, the proposed pilot-assisted sampling frequency synchronization between the digital-to-analog converter (DAC) and the analog-to-digital converter (ADC) is capable of accurately estimating SFOs; the technique can also compensate SFO effects down to a small residual SFO caused by deviations of the SFO estimation and a low-precision or unstable clock source. The two synchronization techniques are suitable for high-speed DDO-OFDM transmission systems.
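
    The training-sequence-based symbol synchronization can be sketched generically: locate the known TS in the received stream by cross-correlation and take the correlation peak as the frame start. All parameters below (TS length, offset, noise level) are invented for illustration; this is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Known training sequence (TS) prepended to each frame (hypothetical values).
ts = rng.choice([-1.0, 1.0], 64)

# Received stream: channel noise, then TS + payload at an unknown offset.
offset = 137
payload = rng.choice([-1.0, 1.0], 256)
rx = np.concatenate([rng.normal(0, 1, offset), ts, payload])
rx += rng.normal(0, 0.3, rx.size)  # additive noise over the whole stream

# Symbol synchronization: slide the TS over the stream and pick the
# correlation peak as the estimated frame start.
corr = np.correlate(rx, ts, mode="valid")
est_offset = int(np.argmax(np.abs(corr)))
print("estimated frame start:", est_offset)
```

In a hardware receiver the same operation runs on buffered ADC samples; the sharpness of the correlation peak is what gives the method its robustness to moderate SFO.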

  17. Development and Validation of a Real-Time PCR Assay for Rapid Detection of Candida auris from Surveillance Samples.

    Science.gov (United States)

    Leach, L; Zhu, Y; Chaturvedi, S

    2018-02-01

    Candida auris is an emerging multidrug-resistant yeast causing invasive health care-associated infections with high mortality worldwide. Rapid identification of C. auris is of primary importance for the implementation of public health measures to control the spread of infection. To achieve these goals, we developed and validated a TaqMan-based real-time PCR assay targeting the internal transcribed spacer 2 (ITS2) region of the ribosomal gene. The assay was highly specific, reproducible, and sensitive, with a detection limit of 1 C. auris CFU/PCR. The performance of the C. auris real-time PCR assay was evaluated by using 623 surveillance samples, including 365 patient swabs and 258 environmental sponges. Real-time PCR yielded positive results from 49 swab and 58 sponge samples, with 89% and 100% clinical sensitivity with regard to their respective culture-positive results. The real-time PCR also detected C. auris DNA in 1% and 12% of the swab and sponge samples with culture-negative results, indicating the presence of dead or culture-impaired C. auris. The real-time PCR yielded results within 4 h of sample processing, compared to 4 to 14 days for culture, reducing turnaround time significantly. The new real-time PCR assay allows for accurate and rapid screening of C. auris and can increase effective control and prevention of this emerging multidrug-resistant fungal pathogen in health care facilities. Copyright © 2018 Leach et al.

  18. In-situ high resolution particle sampling by large time sequence inertial spectrometry

    International Nuclear Information System (INIS)

    Prodi, V.; Belosi, F.

    1990-09-01

    In situ sampling is always preferred, when possible, because of the artifacts that can arise when the aerosol has to flow through long sampling lines. On the other hand, the amount of possible losses can be calculated with some confidence only when the size distribution can be measured with sufficient precision and the losses are not too large. This makes it desirable to sample directly in the vicinity of the aerosol source or containment. High-temperature sampling devices with detailed aerodynamic separation are extremely useful for this purpose. Several measurements are possible with the inertial spectrometer (INSPEC), but not with cascade impactors or cyclones. INSPEC - INertial SPECtrometer - has been conceived to measure the size distribution of aerosols by separating the particles while airborne according to their size and collecting them on a filter. It consists of a channel of rectangular cross-section with a 90 degree bend. Clean air is drawn through the channel, with a thin aerosol sheath injected close to the inner wall. Due to the bend, the particles are separated according to their size, leaving the original streamline by a distance which is a function of particle inertia and resistance, i.e. of aerodynamic diameter. The filter collects all the particles of the same aerodynamic size at the same distance from the inlet, in a continuous distribution. INSPEC particle separation at high temperature (up to 800 C) has been tested with zirconia particles as calibration aerosols. The feasibility study has been concerned with resolution and time-sequence sampling capabilities at high temperature (700 C).

  19. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions

    Science.gov (United States)

    Cendagorta, Joseph R.; Bačić, Zlatko; Tuckerman, Mark E.

    2018-03-01

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.
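
    The symmetrized correlation function referred to here has the standard textbook form (written for one operator pair; the complex time variable corresponds to the forward/backward propagation paths mentioned above):

```latex
G_{AB}(t)
  = \frac{1}{Z}\,\mathrm{Tr}\!\left[e^{-\beta \hat H/2}\,\hat A\,e^{-\beta \hat H/2}\,\hat B(t)\right]
  = \frac{1}{Z}\,\mathrm{Tr}\!\left[\hat A\,e^{\,i\hat H t_c^{*}/\hbar}\,\hat B\,e^{-i\hat H t_c/\hbar}\right],
\qquad t_c = t - \frac{i\beta\hbar}{2}.
```

With a suitable Fourier convention, its transform is related to that of the standard correlation function $C_{AB}(t)$ by $\tilde G_{AB}(\omega) = e^{-\beta\hbar\omega/2}\,\tilde C_{AB}(\omega)$, which is why approximating $G_{AB}$ suffices to recover $C_{AB}$.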

  20. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions.

    Science.gov (United States)

    Cendagorta, Joseph R; Bačić, Zlatko; Tuckerman, Mark E

    2018-03-14

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  1. Implementing BosonSampling with time-bin encoding: Analysis of loss, mode mismatch, and time jitter

    Science.gov (United States)

    Motes, Keith R.; Dowling, Jonathan P.; Gilchrist, Alexei; Rohde, Peter P.

    2015-11-01

    It was recently shown by Motes, Gilchrist, Dowling, and Rohde [Phys. Rev. Lett. 113, 120501 (2014), 10.1103/PhysRevLett.113.120501] that a time-bin encoded fiber-loop architecture can implement an arbitrary passive linear optics transformation. This was shown for an ideal scheme in which the architecture has no sources of error. In any realistic implementation, however, physical errors are present, which corrupt the output of the transformation. We investigate the dominant sources of error in this architecture, loss and mode mismatch, and consider how they affect the BosonSampling protocol, a key application for passive linear optics. For our loss analysis we consider two major components that contribute to loss, fiber and switches, and calculate how they affect the success probability and fidelity of the device. Interestingly, we find that errors due to loss are not uniform (unique to time-bin encoding), which asymmetrically biases the implemented unitary. Thus loss necessarily limits the class of unitaries that may be implemented, and therefore future implementations must prioritize minimizing loss rates if arbitrary unitaries are to be implemented. Our formalism for mode mismatch is generalized to account for various phenomena that may cause mode mismatch, but we focus on two: errors in fiber-loop lengths and time jitter of the photon source. These results provide a guideline for how well future experimental implementations might perform in light of these error mechanisms.
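
    The non-uniformity of loss across time bins can be seen with a toy model: if bin k makes roughly k round trips through the loop, its transmission decays geometrically in k. The per-component transmissions below are invented for illustration and are not the paper's figures or its full error analysis.

```python
import numpy as np

# Hypothetical per-roundtrip amplitude transmissions for the loop components.
eta_fiber, eta_switch = 0.98, 0.95
eta_loop = eta_fiber * eta_switch

n_bins = 8
# In a time-bin fiber-loop architecture, bin k makes ~k round trips,
# so its transmission scales like eta_loop**k: loss is non-uniform
# across modes, which asymmetrically biases the implemented unitary.
transmission = eta_loop ** np.arange(n_bins)
print("per-bin transmission:", np.round(transmission, 3))
print("worst/best ratio:", round(transmission[-1] / transmission[0], 3))
```

A spatial-mode interferometer, by contrast, tends to impose (approximately) the same loss on every mode, which only rescales the output distribution rather than biasing it.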

  2. Advancement of Solidification Processing Technology Through Real Time X-Ray Transmission Microscopy: Sample Preparation

    Science.gov (United States)

    Stefanescu, D. M.; Curreri, P. A.

    1996-01-01

    Two types of samples were prepared for real-time X-ray transmission microscopy (XTM) characterization. In the first series, directional solidification experiments were carried out to evaluate the critical velocity of engulfment of zirconia particles in Al and Al-Ni eutectic matrices under ground (1-g) conditions. The particle distribution in the samples was recorded on video before and after the samples were directionally solidified. In the second series, samples of the above two types of composites were prepared for directional solidification runs to be carried out on the Advanced Gradient Heating Facility (AGHF) aboard the space shuttle during the LMS mission in June 1996. X-ray microscopy proved to be an invaluable tool for characterizing the particle distribution in the metal matrix samples. This kind of analysis helped to accurately determine the critical velocity of engulfment of ceramic particles by the melt interface in opaque metal matrix composites. The quality of the cast samples with respect to porosity and instrumented thermocouple sheath breakage or shift could be easily viewed, which helped in selecting samples for the space shuttle experiments. In summary, this technique enabled the use of cast metal matrix composite samples, since the particle locations were known prior to the experiment.

  3. Larger error signals in major depression are associated with better avoidance learning

    Directory of Open Access Journals (Sweden)

    James F. Cavanagh

    2011-11-01

    The medial prefrontal cortex (mPFC) is particularly reactive to signals of error, punishment, and conflict in the service of behavioral adaptation, and it is consistently implicated in the etiology of Major Depressive Disorder (MDD). This association makes conceptual sense, given that MDD has been associated with hyper-reactivity in neural systems associated with punishment processing. Yet in practice, depression-related variance in measures of mPFC functioning often fails to relate to performance. For example, neuroelectric reflections of mediofrontal error signals are often found to be larger in MDD, but a deficit in post-error performance suggests that these error signals are not being used to rapidly adapt behavior. Thus, it remains unknown whether depression-related variance in error signals reflects a meaningful alteration in the use of error or punishment information. However, larger mediofrontal error signals have also been related to another behavioral tendency: increased accuracy in avoidance learning. The integrity of this error-avoidance system remains untested in MDD. In this study, EEG was recorded as 21 symptomatic, drug-free participants with current or past MDD and 24 control participants performed a probabilistic reinforcement learning task. Depressed participants had larger mPFC EEG responses to error feedback than controls. The direct relationship between error signal amplitudes and avoidance learning accuracy was replicated. Crucially, this relationship was stronger in depressed participants for high-conflict lose-lose situations, demonstrating a selective alteration of avoidance learning. This investigation provided evidence that larger error signal amplitudes in depression are associated with increased avoidance learning, identifying a candidate mechanistic model for hypersensitivity to negative outcomes in depression.

  4. Exponential synchronization of chaotic Lur'e systems with time-varying delay via sampled-data control

    International Nuclear Information System (INIS)

    Rakkiyappan, R.; Sivasamy, R.; Lakshmanan, S.

    2014-01-01

    In this paper, we study the exponential synchronization of chaotic Lur'e systems with time-varying delays via sampled-data control using sector nonlinearities. In order to make full use of information about sampling intervals and interval time-varying delays, new Lyapunov-Krasovskii functionals with triple integral terms are introduced. Based on the convex combination technique, two kinds of synchronization criteria are derived in terms of linear matrix inequalities, which can be efficiently solved via standard numerical software. Finally, three numerical examples are provided to demonstrate the reduced conservatism and effectiveness of the proposed results.

  5. Smoking Topography among Korean Smokers: Intensive Smoking Behavior with Larger Puff Volume and Shorter Interpuff Interval.

    Science.gov (United States)

    Kim, Sungroul; Yu, Sol

    2018-05-18

    Smokers' topography has been found to be a function of many factors, including sex, personality, nicotine yield, cigarette type (i.e., flavored versus non-flavored) and ethnicity. We evaluated the puffing behaviors of Korean smokers and their association with smoking-related biomarker levels. A sample of 300 participants was randomly recruited from metropolitan areas in South Korea. Topography measures during a 24-hour period were obtained using a CReSS pocket device. Korean male smokers took two puffs fewer per cigarette than female smokers (median (interquartile range): 15.0 (13.0-19.0) vs. 17.5 (15.0-21.0)), but had a significantly larger puff volume (62.7 (52.7-75.5) mL vs. 53.5 (42.0-64.2) mL; p = 0.012). The interpuff interval was similar between men and women (8.9 (6.5-11.2) s vs. 8.3 (6.2-11.0) s; p = 0.122) but much shorter than in other study results. A dose-response association (p = 0.0011) was observed between daily total puff volumes and urinary cotinine concentrations, after controlling for sex, age, household income level and nicotine addiction level. An understanding of the differences in topography measures, particularly the larger puff volume and shorter interpuff interval of Korean smokers, may help to overcome a potential underestimation of internal doses of hazardous byproducts of smoking.

  6. Application of chemical mutagens and radiation in breeding buckwheat for larger seeds

    International Nuclear Information System (INIS)

    Alekseeva, E.S.

    1988-01-01

    In 1974, seeds of the Viktoriya variety of buckwheat were treated with 20-30 krad gamma radiation and chemical mutagens in the Biophysics Department of the Kishinev Agricultural Institute. For the chemical mutagen treatment, we used N-ethylnitroso-urea NEH (0.025 and 0.012%), N-methylnitroso-urea NMH (0.01 and 0.005%), ethylenimine EI (0.01 and 0.005%), dimethyl sulphate DMS (0.01 and 0.005%) and 1,4-bis-diazoacetyl butane DAB (0.01 and 0.05%). Since some investigators think that different results are produced by changing the order of the treatment, we treated seeds with chemical mutagens both before and after irradiation, followed by drying. A total of 2400 seeds were treated. Selection started with M2 seeds produced by M1 plants. The thousand-seed weight of the best ones ranged from 40.7 to 47.8 g, which was 11.9-18.7 g heavier than the control. The large seed size thus selected was heritable. Since larger seeds are very important for the creation of high-yielding varieties of buckwheat, only families with these characteristics were selected for further work. We observed some further increase in seed weight in the next generation. When large seeds were planted, the cotyledons after six days of growth were significantly larger than in the control plants. This characteristic was used in selecting for a high-yielding, large-seeded variety of buckwheat. The plants were selected twice: once for the development of large cotyledon leaves and a second time for plant yield. In the fourth generation, the families thus obtained continued to be studied in greenhouse experiments and at the same time were propagated under field conditions. The seeds of these families were then combined and, under the name Podolyanka, subjected to competitive variety testing in 1976. Following the competitive variety testing, the mutant variety Podolyanka was released in 1984. It is high yielding (2950 kg/ha) and has a short vegetation period (matures 17-18 days

  7. A 'smart' tube holder enables real-time sample monitoring in a standard lab centrifuge.

    Science.gov (United States)

    Hoang, Tony; Moskwa, Nicholas; Halvorsen, Ken

    2018-01-01

    The centrifuge is among the oldest and most widely used pieces of laboratory equipment, with significant applications that include clinical diagnostics and biomedical research. A major limitation of laboratory centrifuges is their "black box" nature, limiting sample observation to before and after centrifugation. Thus, optimized protocols require significant trial and error, while unoptimized protocols waste time by centrifuging longer than necessary, or waste material due to incomplete sedimentation. Here, we developed an instrumented centrifuge tube receptacle, compatible with several commercial benchtop centrifuges, that can provide real-time sample analysis during centrifugation. We demonstrated the system by monitoring cell separations during centrifugation for different spin speeds, concentrations, buffers, cell types, and temperatures. We show that the collected data are valuable for analytical purposes (e.g. quality control), or as feedback to the user or the instrument. For the latter, we verified an adaptation in which complete sedimentation turned off the centrifuge and notified the user by a text message. Our system adds new functionality to existing laboratory centrifuges, saving users time and providing useful feedback. This add-on potentially enables new analytical applications for an instrument that has remained largely unchanged for decades.
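
    The sedimentation-triggered shutoff can be sketched as a plateau detector on a monitored signal. Everything below (the optical-density trace, threshold, function names) is illustrative; the abstract does not describe the device's firmware at this level of detail.

```python
import math

def sedimentation_complete(readings, window=5, tol=0.01):
    """Return True once the last `window` readings vary by less than `tol`."""
    if len(readings) < window:
        return False
    recent = readings[-window:]
    return max(recent) - min(recent) < tol

# Simulated optical-density trace of the supernatant: exponential clearing
# as cells sediment out of the light path.
trace = [math.exp(-0.2 * t) for t in range(60)]

stop_at = None
readings = []
for t, od in enumerate(trace):
    readings.append(od)
    if sedimentation_complete(readings):
        stop_at = t  # a real system would stop the spin and text the user here
        break

print("spin stopped at sample:", stop_at)
```

The same plateau criterion gives the "centrifuge no longer than necessary" behavior described above: the run ends as soon as further spinning stops changing the signal.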

  8. Real-time photonic sampling with improved signal-to-noise and distortion ratio using polarization-dependent modulators

    Science.gov (United States)

    Liang, Dong; Zhang, Zhiyao; Liu, Yong; Li, Xiaojun; Jiang, Wei; Tan, Qinggui

    2018-04-01

    A real-time photonic sampling structure with effective nonlinearity suppression and excellent signal-to-noise ratio (SNR) performance is proposed. The key points of this scheme are the polarization-dependent modulators (P-DMZMs) and the Sagnac loop structure. Thanks to the polarization-sensitive characteristic of P-DMZMs, the differences between the transfer functions of the fundamental signal and the distortion become visible. Meanwhile, the selection of specific biases in P-DMZMs helps to achieve preferable linearized performance with a low noise level for real-time photonic sampling. Compared with the quadrature-biased scheme, the proposed scheme is capable of valid nonlinearity suppression and is able to provide better SNR performance, even over a large frequency range. The proposed scheme is proved to be effective and easily implemented for real-time photonic applications.

  9. Usefulness of in-house real time PCR for HBV DNA quantification in serum and oral fluid samples.

    Science.gov (United States)

    Portilho, Moyra Machado; Mendonça, Ana Carolina da Fonseca; Bezerra, Cristianne Sousa; do Espirito-Santo, Márcia Paschoal; de Paula, Vanessa Salete; Nabuco, Leticia Cancella; Villela-Nogueira, Cristiane Alves; Lewis-Ximenez, Lia Laura; Lampe, Elisabeth; Villar, Livia Melo

    2018-06-01

    For quantification of hepatitis B virus DNA (HBV DNA), commercial assays are used with serum or plasma samples, but oral fluid samples could be an alternative for HBV diagnosis due to their ease of collection. This study aims to develop an in-house real-time PCR assay using a synthetic standard curve for HBV DNA quantification in serum and oral fluid samples. Samples were collected from 103 individuals (55 HBsAg reactive and HBV DNA reactive by a commercial assay, and 48 without HBV markers) and submitted to two in-house real-time PCR assays for the HBV pre-S/S region with different standard curves: plasmidial qPCR and synthetic qPCR. A total of 27 serum samples were HBV DNA positive by plasmidial qPCR and 40 by synthetic qPCR (72% and 85% concordance, respectively). The synthetic qPCR presented an efficiency of 99% and a sensitivity of 2 log10 copies/mL. Among the oral fluid samples, five and ten were detected using plasmidial and synthetic qPCR, respectively. This study demonstrated that the synthetic qPCR using serum samples could be used as an alternative for HBV DNA quantification due to its sensitivity. In addition, it was possible to quantify HBV DNA in oral fluid samples, suggesting the potential of this specimen for molecular diagnosis of HBV. Copyright © 2018 Elsevier B.V. All rights reserved.
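
    Quantification against a standard curve, as in the synthetic-curve assay described, typically works by regressing Cq on log10 copy number. The sketch below uses invented Cq values, not the study's data; the efficiency relation 10^(-1/slope) - 1 is the standard qPCR formula.

```python
import numpy as np

# Hypothetical standard curve: Cq measured for known copy numbers of a
# synthetic DNA standard (values are illustrative, not from the study).
log10_copies = np.array([2, 3, 4, 5, 6, 7], dtype=float)
cq = np.array([33.1, 29.8, 26.4, 23.1, 19.7, 16.4])

slope, intercept = np.polyfit(log10_copies, cq, 1)
efficiency = 10 ** (-1 / slope) - 1  # standard qPCR amplification efficiency
print(f"slope {slope:.2f}, efficiency {efficiency:.1%}")

def quantify(sample_cq):
    """Copies per reaction for a sample Cq, interpolated from the curve."""
    return 10 ** ((sample_cq - intercept) / slope)

print(f"sample at Cq 25 ~ {quantify(25.0):.0f} copies")
```

A slope near -3.3 corresponds to ~100% efficiency (perfect doubling each cycle); deviations flag pipetting or inhibition problems before any sample is quantified.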

  10. Effect of sample storage time on detection of hybridization signals in Checkerboard DNA-DNA hybridization.

    Science.gov (United States)

    do Nascimento, Cássio; Muller, Katia; Sato, Sandra; Albuquerque Junior, Rubens Ferreira

    2012-04-01

    Long-term sample storage can affect the intensity of the hybridization signals provided by molecular diagnostic methods that use chemiluminescent detection. The aim of this study was to evaluate the effect of different storage times on the hybridization signals of 13 bacterial species detected by the Checkerboard DNA-DNA hybridization method using whole-genomic DNA probes. Ninety-six subgingival biofilm samples were collected from 36 healthy subjects, and the intensity of hybridization signals was evaluated at 4 different time periods: (1) immediately after collection (n = 24) and after storage at -20 °C for (2) 6 months (n = 24), (3) 12 months (n = 24), and (4) 24 months (n = 24). The intensity of hybridization signals obtained from groups 1 and 2 was significantly higher than in the other groups (p < 0.05). The Checkerboard DNA-DNA hybridization method was suitable for detecting hybridization signals in all groups evaluated, but the intensity of signals decreased significantly after long periods of sample storage.

  11. Failure-censored accelerated life test sampling plans for Weibull distribution under expected test time constraint

    International Nuclear Information System (INIS)

    Bai, D.S.; Chun, Y.R.; Kim, J.G.

    1995-01-01

    This paper considers the design of life-test sampling plans based on failure-censored accelerated life tests. The lifetime distribution of products is assumed to be Weibull with a scale parameter that is a log-linear function of a (possibly transformed) stress. Two levels of stress higher than the use-condition stress, high and low, are used. Sampling plans with equal expected test times at the high and low test stresses, which satisfy the producer's and consumer's risk requirements and minimize the asymptotic variance of the test statistic used to decide lot acceptability, are obtained. The properties of the proposed life-test sampling plans are investigated.

  12. Sample size for comparing negative binomial rates in noninferiority and equivalence trials with unequal follow-up times.

    Science.gov (United States)

    Tang, Yongqiang

    2017-05-25

    We derive the sample size formulae for comparing two negative binomial rates based on both the relative and absolute rate difference metrics in noninferiority and equivalence trials with unequal follow-up times, and establish an approximate relationship between the sample sizes required for the treatment comparison based on the two treatment effect metrics. The proposed method allows the dispersion parameter to vary by treatment groups. The accuracy of these methods is assessed by simulations. It is demonstrated that ignoring the between-subject variation in the follow-up time by setting the follow-up time for all individuals to be the mean follow-up time may greatly underestimate the required size, resulting in underpowered studies. Methods are provided for back-calculating the dispersion parameter based on the published summary results.
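
    A simplified version of this kind of calculation, on the log rate-ratio scale with a per-arm variance term 1/(t·rate) + dispersion, can be sketched as follows. This is a generic textbook-style approximation assuming a common mean follow-up time and equal allocation, not the paper's formulae, which additionally handle unequal and between-subject-varying follow-up.

```python
from math import log, ceil
from statistics import NormalDist

def nb_noninferiority_n(rate0, rate1, t, disp0, disp1, margin,
                        alpha=0.025, power=0.9):
    """Approximate per-arm sample size for a noninferiority comparison of
    two negative binomial event rates on the log rate-ratio scale.
    rate0, rate1: event rates per unit time; t: mean follow-up time;
    disp0, disp1: NB dispersion parameters (may differ by arm);
    margin: noninferiority margin for the rate ratio rate1/rate0."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha), z(power)
    # Approximate variance of the log rate-ratio estimate (n = 1 per arm).
    var = (1 / (t * rate0) + disp0) + (1 / (t * rate1) + disp1)
    effect = log(rate1 / rate0) - log(margin)
    return ceil((za + zb) ** 2 * var / effect ** 2)

n = nb_noninferiority_n(rate0=1.0, rate1=1.0, t=1.0, disp0=0.8, disp1=0.8,
                        margin=1.25, alpha=0.025, power=0.9)
print("per-arm n:", n)
```

Replacing t with the mean follow-up time is exactly the shortcut the abstract warns about: when follow-up varies between subjects, this substitution underestimates the variance and hence the required sample size.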

  13. Bacterial communities of disease vectors sampled across time, space, and species.

    Science.gov (United States)

    Jones, Ryan T; Knight, Rob; Martin, Andrew P

    2010-02-01

    A common strategy of pathogenic bacteria is to form close associations with parasitic insects that feed on animals and to use these insects as vectors for their own transmission. Pathogens interact closely with other coexisting bacteria within the insect, and interactions between co-occurring bacteria may influence the vector competency of the parasite. Interactions between particular lineages can be explored through measures of alpha-diversity. Furthermore, general patterns of bacterial community assembly can be explored through measures of beta-diversity. Here, we use pyrosequencing (n=115,924 16S rRNA gene sequences) to describe the bacterial communities of 230 prairie dog fleas sampled across space and time. We use these community characterizations to assess interactions between dominant community members and to explore general patterns of bacterial community assembly in fleas. An analysis of co-occurrence patterns suggests non-neutral negative interactions between dominant community members, and beta-diversity analyses show that bacterial community composition varies significantly across space (phylotype-based: R=0.418) and time.

  14. Larger ATV engine size correlates with an increased rate of traumatic brain injury.

    Science.gov (United States)

    Butts, C Caleb; Rostas, Jack W; Lee, Y L; Gonzalez, Richard P; Brevard, Sidney B; Frotan, M Amin; Ahmed, Naveed; Simmons, Jon D

    2015-04-01

    Since the introduction of all-terrain vehicles (ATVs) to the United States in 1971, injuries and mortalities related to their use have increased significantly. Furthermore, these vehicles have become larger and more powerful. As there are no helmet requirements or limitations on engine size in the State of Alabama, we hypothesised that larger engine size would correlate with an increased incidence of traumatic brain injury (TBI) in patients following an ATV crash. Patient and ATV data were prospectively collected on all ATV crashes presenting to a level one trauma centre from September 2010 to May 2013. Collected data included: demographics, age of driver, ATV engine size, presence of helmet, injuries, and outcomes. The data were grouped according to the ATV engine size in cubic centimetres (cc). For the purposes of this study, TBI was defined as any type of intracranial haemorrhage on the initial computed tomography scan. There were 61 patients identified during the study period. Two patients (3%) were wearing a helmet at the time of injury. Patients on an ATV with an engine size of 350 cc or greater had higher Injury Severity Scores (13.9 vs. 7.5, p ≤ 0.05) and an increased incidence of TBI (26% vs. 0%, p ≤ 0.05) when compared to patients on ATVs with an engine size less than 350 cc. Patients on an ATV with an engine size of 350 cc or greater were more likely to have a TBI. The use of a helmet was rarely present in this cohort. Legislative efforts to implement rider protection laws for ATVs are warranted. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Centralized and decentralized global outer-synchronization of asymmetric recurrent time-varying neural network by data-sampling.

    Science.gov (United States)

    Lu, Wenlian; Zheng, Ren; Chen, Tianping

    2016-03-01

    In this paper, we discuss outer-synchronization of the asymmetrically connected recurrent time-varying neural networks. By using both centralized and decentralized discretization data sampling principles, we derive several sufficient conditions based on three vector norms to guarantee that the difference of any two trajectories starting from different initial values of the neural network converges to zero. The lower bounds of the common time intervals between data samples in centralized and decentralized principles are proved to be positive, which guarantees exclusion of Zeno behavior. A numerical example is provided to illustrate the efficiency of the theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Accuracy and Effort of Interpolation and Sampling: Can GIS Help Lower Field Costs?

    Directory of Open Access Journals (Sweden)

    Greg Simpson

    2014-12-01

    Full Text Available Sedimentation is a problem for all reservoirs in the Black Hills of South Dakota. Before working on sediment removal, a survey on the extent and distribution of the sediment is needed. Two sample lakes were used to determine which of three interpolation methods gave the most accurate volume results. A secondary goal was to see if fewer samples could be taken while still providing similar results. The smaller samples would mean less field time and thus lower costs. Subsamples of 50%, 33% and 25% were taken from the total samples and evaluated for the lowest Root Mean Squared Error values. Throughout the trials, the larger sample sizes generally showed better accuracy than smaller samples. Graphing the sediment volume estimates of the full sample, 50%, 33% and 25% showed little improvement after a sample of approximately 40%–50% when comparing the asymptote of the separate samples. When we used smaller subsamples, the predicted sediment volumes were normally greater than the full sample volumes. It is suggested that when planning future sediment surveys, workers plan on gathering data at approximately every 5.21 meters. These sample sizes can be cut in half and still retain relative accuracy if time savings are needed. Volume estimates may slightly suffer with these reduced sample sizes, but the field work savings can be of benefit. Results from these surveys are used in prioritization of available funds for reclamation efforts.
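The subsample-and-RMSE procedure the abstract describes can be sketched as below. Inverse-distance weighting stands in for the study's three (unnamed here) interpolation methods, and the synthetic "sediment depth" surface is purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse-distance-weighted interpolation; an illustrative
    stand-in for the interpolation methods compared in the study."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)                  # avoid division by zero
    w = 1.0 / d ** power
    return (w * z_known).sum(axis=1) / w.sum(axis=1)

# synthetic "sediment depth" surveyed at 200 random points on a lake
pts = rng.uniform(0, 100, size=(200, 2))
depth = 2.0 + 0.03 * pts[:, 0] + 0.02 * pts[:, 1]

def rmse_for_fraction(frac):
    """Interpolate from a random subsample of the survey points and
    report RMSE against the held-out points."""
    n = int(len(pts) * frac)
    idx = rng.permutation(len(pts))
    train, test = idx[:n], idx[n:]
    pred = idw(pts[train], depth[train], pts[test])
    return float(np.sqrt(np.mean((pred - depth[test]) ** 2)))
```

Evaluating `rmse_for_fraction` at 0.5, 0.33 and 0.25 mirrors the study's subsampling trials: smaller training fractions generally give larger held-out RMSE.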

  17. Stereotactic Radiosurgery with Neoadjuvant Embolization of Larger Arteriovenous Malformations: An Institutional Experience

    Directory of Open Access Journals (Sweden)

    Richard Dalyai

    2014-01-01

    Full Text Available Objective. This study investigates the safety and efficacy of a multimodality approach combining staged endovascular embolizations with subsequent SRS for the management of larger AVMs. Methods. Ninety-five patients with larger AVMs were treated with staged endovascular embolization followed by SRS between 1996 and 2011. Results. The median volume of AVM in this series was 28 cm3 and 47 patients (48% were Spetzler-Martin grade IV or V. Twenty-seven patients initially presented with hemorrhage. Sixty-one patients underwent multiple embolizations while a single SRS session was performed in 64 patients. The median follow-up after SRS session was 32 months (range 9–136 months. Overall procedural complications occurred in 14 patients. There were 13 minor neurologic complications and 1 major complication (due to embolization while four patients had posttreatment hemorrhage. Thirty-eight patients (40% were cured radiographically. The postradiosurgery actuarial rate of obliteration was 45% at 5 years, 56% at 7 years, and 63% at 10 years. In multivariate analysis, larger AVM size, deep venous drainage, and the increasing number of embolization/SRS sessions were negative predictors of obliteration. The number of embolizations correlated positively with the number of stereotactic radiosurgeries (P<0.005. Conclusions. Multimodality endovascular and radiosurgical approach is an efficacious treatment strategy for large AVM.

  18. The arrival time distribution of muons in extensive air showers

    International Nuclear Information System (INIS)

    Van der Walt, D.J.

    1984-01-01

    An experiment was done to investigate the lateral dependence of the muon arrival time distribution in extensive air showers at small core distances. In the present experiment the muon arrival time distribution was investigated by measuring the relative arrival times between single muons in five fast Cerenkov detectors beneath 500 g/cm² of concrete and at an atmospheric depth of 880 g/cm². It is shown that, although it is not possible to determine the arrival time distribution as such, it is possible to interpret the relative arrival times between muons in terms of the differences between the order statistics of a sample drawn from the arrival time distribution. The relationship between the arrival time distribution of muons relative to the first detected muon and the muon arrival time distribution is also derived. It was found that the dispersion of the muon arrival time distribution does not increase significantly with increasing core distance between 10 m and 60 m from the core. A comparison with theoretical distributions obtained from model calculations for proton-initiated showers indicates that (1) the mean delay of muons with respect to the first detected muon is significantly larger than that expected from the model and (2) the observed dispersion is also significantly larger than the predicted dispersion for core distances between 10 m and 60 m.

  19. More ‘altruistic’ punishment in larger societies

    Science.gov (United States)

    Marlowe, Frank W; Berbesque, J. Colette; Barr, Abigail; Barrett, Clark; Bolyanatz, Alexander; Cardenas, Juan Camilo; Ensminger, Jean; Gurven, Michael; Gwako, Edwins; Henrich, Joseph; Henrich, Natalie; Lesorogol, Carolyn; McElreath, Richard; Tracer, David

    2007-01-01

    If individuals will cooperate with cooperators, and punish non-cooperators even at a cost to themselves, then this strong reciprocity could minimize the cheating that undermines cooperation. Based upon numerous economic experiments, some have proposed that human cooperation is explained by strong reciprocity and norm enforcement. Second-party punishment is when you punish someone who defected on you; third-party punishment is when you punish someone who defected on someone else. Third-party punishment is an effective way to enforce the norms of strong reciprocity and promote cooperation. Here we present new results that expand on a previous report from a large cross-cultural project. This project has already shown that there is considerable cross-cultural variation in punishment and cooperation. Here we test the hypothesis that population size (and complexity) predicts the level of third-party punishment. Our results show that people in larger, more complex societies engage in significantly more third-party punishment than people in small-scale societies. PMID:18089534

  20. Status of timing with plastic scintillation detectors

    International Nuclear Information System (INIS)

    Moszynski, M.; Bengtson, B.

    1979-01-01

    Timing properties of scintillators and photomultipliers, as well as theoretical and experimental studies of the time resolution of scintillation counters, are reviewed. Predictions of the theory of the scintillation pulse generation processes are compared with the data on the light pulse shape from small samples, in which the light pulse shape depends only on the composition of the scintillator. For larger samples the influence of the light collection process and the self-absorption process on the light pulse shape is discussed. The data on rise times, FWHMs, decay times and light yield of several commercial scintillators used in timing are collected. The next part of the paper deals with the properties of photomultipliers. The sources of time uncertainty in photomultipliers, such as the spread of the initial velocities of photoelectrons, the emission of photoelectrons at different angles and from different points on the photocathode, and the time spread and gain dispersion introduced by the electron multiplier, are reviewed. The experimental data on the time jitter, single electron response and photoelectron yield of some fast photomultipliers are collected. As the time resolution of timing systems with scintillation counters also depends on time pick-off units, a short presentation of the timing methods is given. The discussion of timing theories is followed by a review of experimental studies of the time resolution of scintillation counters. The paper ends with an analysis of the prospects for further progress in subnanosecond timing with scintillation counters. (Auth.)

  1. How do environmental policies fit within larger strategic planning processes

    OpenAIRE

    Crowe, Lynn

    2015-01-01

    This chapter explores how environmental policies fit within larger strategic processes relevant to sport management and development. It identifies key policy areas such as environmental impact assessment, sustainable land use planning, environmental protection and visitor impact management. Good practice and guidelines which will enable sport managers to integrate their work with these environmental policies are explored. Detailed guidance on design and longer term management and maintenance ...

  2. HCV-RNA quantification in liver bioptic samples and extrahepatic compartments, using the abbott RealTime HCV assay.

    Science.gov (United States)

    Antonucci, FrancescoPaolo; Cento, Valeria; Sorbo, Maria Chiara; Manuelli, Matteo Ciancio; Lenci, Ilaria; Sforza, Daniele; Di Carlo, Domenico; Milana, Martina; Manzia, Tommaso Maria; Angelico, Mario; Tisone, Giuseppe; Perno, Carlo Federico; Ceccherini-Silberstein, Francesca

    2017-08-01

    We evaluated the performance of a rapid method to quantify HCV-RNA in the hepatic and extrahepatic compartments, using for the first time the Abbott RealTime HCV assay. Non-tumoral (NT), tumoral (TT) liver samples, lymph nodes and ascitic fluid from patients undergoing orthotopic liver transplantation (N=18) or liver resection (N=4) were used for the HCV-RNA quantification; 5/22 patients were tested after or during direct-acting antiviral (DAA) treatment. Total RNA and DNA quantification from tissue biopsies allowed normalization of HCV-RNA concentrations in IU/μg of total RNA and IU/10⁶ liver cells, respectively. HCV-RNA was successfully quantified with high reliability in liver biopsies, lymph nodes and ascitic fluid samples. Among the 17 untreated patients, a positive and significant HCV-RNA correlation between serum and NT liver samples was observed (Pearson: rho=0.544, p=0.024). Three DAA-treated patients were HCV-RNA "undetectable" in serum, but still "detectable" in all tested liver tissues. In contrast, only one DAA-treated patient, tested after sustained virological response, showed HCV-RNA "undetectability" in liver tissue. HCV-RNA was successfully quantified with high reliability in liver bioptic samples and extrahepatic compartments, even when HCV-RNA was "undetectable" in serum. The Abbott RealTime HCV assay is a good diagnostic tool for HCV quantification in intra- and extra-hepatic compartments, whenever a bioptic sample is available. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Great tits provided with ad libitum food lay larger eggs when exposed to colder temperatures

    NARCIS (Netherlands)

    Schaper, S.V.; Visser, M.E.

    2013-01-01

    The amount of nutrients deposited into a bird egg varies both between and within clutches of the same female. Larger eggs enhance offspring traits, but as a tradeoff, laying large eggs also imposes energetic costs on the female. Income breeders usually lay larger eggs later in the season, when

  4. Stability of purgeable VOCs in water samples during pre-analytical holding: Part 1, Analysis by a commercial laboratory

    Energy Technology Data Exchange (ETDEWEB)

    West, O.R.; Bayne, C.K.; Siegrist, R.L.; Holden, W.L.; Scarborough, S.S. [Oak Ridge National Lab., TN (United States); Bottrell, D.W. [USDOE, Washington, DC (United States)

    1996-10-01

    This study was undertaken to examine the hypothesis that prevalent and priority purgeable VOCs in properly preserved water samples are stable for at least 28 days. (VOCs are considered stable if concentrations do not change by more than 10%.) Surface water was spiked with 44 purgeable VOCs. Results showed that the measurement of 35 out of 44 purgeable VOCs in properly preserved water samples (4 °C, 250 mg NaHSO₄, no headspace in 40 mL VOC vials with 0.010-in. Teflon-lined silicone septum caps) will not be affected by sample storage for 28 days. Larger changes (>10%) and low practical reporting times were observed for a few analytes, e.g. acrolein, CS₂, vinyl acetate, etc.; these also involve other analytical problems. Advantages of a 28-day (compared to 14-day) holding time are pointed out.

  5. APTIMA assay on SurePath liquid-based cervical samples compared to endocervical swab samples facilitated by a real time database

    Directory of Open Access Journals (Sweden)

    Khader Samer

    2010-01-01

    samples transferred to APTIMA specimen transfer medium within seven days is sufficiently sensitive and specific to be used to screen for CT and GC. CT sensitivity may be somewhat reduced in samples from patients over 25 years. SP specimens retained in the original SP fixative for longer time intervals also may have decreased sensitivity, due to deterioration of RNA, but this was not assessed in this study. The ability to tap the live pathology database is a valuable tool that can useful to conduct clinical studies without a costly prospective clinical trial.

  6. Comparison of Spot and Time Weighted Averaging (TWA) Sampling with SPME-GC/MS Methods for Trihalomethane (THM) Analysis

    Directory of Open Access Journals (Sweden)

    Don-Roger Parkinson

    2016-02-01

    Full Text Available Water samples were collected and analyzed for conductivity, pH, temperature and trihalomethanes (THMs) during the fall of 2014 at two monitored municipal drinking water source ponds. Both spot (or grab) and time weighted average (TWA) sampling methods were assessed over the same two day sampling time period. For spot sampling, replicate samples were taken at each site and analyzed within 12 h of sampling by both headspace (HS-) and direct (DI-) solid phase microextraction (SPME) sampling/extraction methods followed by Gas Chromatography/Mass Spectrometry (GC/MS). For TWA, a two day passive on-site TWA sampling was carried out at the same sampling points in the ponds. All SPME sampling methods undertaken used a 65-µm PDMS/DVB SPME fiber, which was found optimal for THM sampling. Sampling conditions were optimized in the laboratory using calibration standards of chloroform, bromoform, bromodichloromethane, dibromochloromethane, 1,2-dibromoethane and 1,2-dichloroethane, prepared in aqueous solutions from analytical grade samples. Calibration curves for all methods with R² values ranging from 0.985–0.998 (N = 5) over the quantitation linear range of 3–800 ppb were achieved. The different sampling methods were compared for quantification of the water samples, and results showed that DI- and TWA-sampling methods gave better data and analytical metrics. Addition of 10% wt./vol. of (NH₄)₂SO₄ salt to the sampling vial was found to aid extraction of THMs by increasing GC peak areas by about 10%, which resulted in lower detection limits for all techniques studied. However, for on-site TWA analysis of THMs in natural waters, the calibration standard(s) ionic strength conditions must be carefully matched to natural water conditions to properly quantitate THM concentrations. The data obtained from the TWA method may better reflect actual natural water conditions.
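A calibration curve like those reported (R² of 0.985–0.998 over 3–800 ppb) is an ordinary least-squares line of detector response against concentration. A minimal sketch, with made-up peak-area data:

```python
def linreg_r2(x, y):
    """Least-squares slope/intercept and R^2 for a calibration curve
    (e.g. GC/MS peak area vs. THM concentration in ppb)."""
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    sxy = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
    sxx = sum((xi - xb) ** 2 for xi in x)
    slope = sxy / sxx
    intercept = yb - slope * xb
    ss_res = sum((yi - slope * xi - intercept) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - yb) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot

# hypothetical standards over the 3-800 ppb linear range
conc = [3, 50, 200, 500, 800]
area = [55, 1010, 4020, 9990, 16010]      # made-up peak areas
slope, intercept, r2 = linreg_r2(conc, area)
```

An unknown sample's concentration is then recovered as `(area - intercept) / slope`, which is why the standards' ionic strength must match the sample matrix, as the abstract notes.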

  7. Sampling soils for ¹³⁷Cs using various field-sampling volumes

    International Nuclear Information System (INIS)

    Nyhan, J.W.; Schofield, T.G.; White, G.C.; Trujillo, G.

    1981-10-01

    The sediments from a liquid effluent receiving area at the Los Alamos National Laboratory and soils from an intensive study area in the fallout pathway of Trinity were sampled for ¹³⁷Cs using 25-, 500-, 2500-, and 12 500-cm³ field sampling volumes. A highly replicated sampling program was used to determine mean concentrations and inventories of ¹³⁷Cs at each site, as well as estimates of spatial, aliquoting, and counting variance components of the radionuclide data. The sampling methods were also analyzed as a function of soil size fractions collected in each field sampling volume and of the total cost of the program for a given variation in the radionuclide survey results. Coefficients of variation (CV) of ¹³⁷Cs inventory estimates ranged from 0.063 to 0.14 for Mortandad Canyon sediments, whereas CV values for Trinity soils ranged from 0.38 to 0.57. Spatial variance components of ¹³⁷Cs concentration data were usually found to be larger than either the aliquoting or counting variance estimates and were inversely related to field sampling volume at the Trinity intensive site. Subsequent optimization studies of the sampling schemes demonstrated that each aliquot should be counted once, and that only 2 to 4 aliquots of as many as 30 collected need be assayed for ¹³⁷Cs. The optimization studies showed that as sample costs increased to 45 man-hours of labor per sample, the variance of the mean ¹³⁷Cs concentration decreased dramatically, but decreased very little with additional labor.
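The partitioning of total variation into spatial and within-sample components can be sketched with a balanced one-way random-effects layout. Here the "groups" are hypothetical field locations, and the within-group scatter stands in for the combined aliquoting/counting variance:

```python
import statistics as st

def variance_components(groups):
    """Balanced one-way random-effects ANOVA: split total variation
    into a between-group ("spatial") and a within-group
    ("aliquoting/counting") component.  Illustrative sketch of the
    kind of partitioning described in the abstract."""
    k = len(groups)
    n = len(groups[0])                        # balanced design assumed
    grand = st.mean(x for g in groups for x in g)
    ms_within = st.mean(st.variance(g) for g in groups)
    ms_between = n * sum((st.mean(g) - grand) ** 2 for g in groups) / (k - 1)
    # method-of-moments estimate of the spatial variance component
    return {"within": ms_within,
            "between": max((ms_between - ms_within) / n, 0.0)}
```

With replicate aliquots nested in locations, a dominant `"between"` component is the situation the study reports: spatial variance exceeding aliquoting and counting variance.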

  8. Microdochium nivale and Microdochium majus in seed samples of Danish small grain cereals

    DEFF Research Database (Denmark)

    Nielsen, L. K.; Justesen, A. F.; Jensen, J. D.

    2013-01-01

    Microdochium nivale and Microdochium majus are two of the fungal species found in the Fusarium Head Blight (FHB) complex infecting small grain cereals. Quantitative real-time PCR assays were designed to separate the two Microdochium species based on the translation elongation factor 1α gene (TEF-1α) and used to analyse a total of 374 seed samples of wheat, barley, triticale, rye and oat sampled from farmers' fields across Denmark from 2003 to 2007. Both fungal species were detected in the five cereal species, but M. majus showed a higher prevalence compared to M. nivale in most years in all cereal species except rye, in which M. nivale represented a larger proportion of the biomass and was more prevalent than M. majus in some samples. Historical samples of wheat and barley from 1957 to 2000 similarly showed a strong prevalence of M. majus over M. nivale, indicating that M. majus has been the main...

  9. Speaker Input Variability Does Not Explain Why Larger Populations Have Simpler Languages.

    Science.gov (United States)

    Atkinson, Mark; Kirby, Simon; Smith, Kenny

    2015-01-01

    A learner's linguistic input is more variable if it comes from a greater number of speakers. Higher speaker input variability has been shown to facilitate the acquisition of phonemic boundaries, since data drawn from multiple speakers provides more information about the distribution of phonemes in a speech community. It has also been proposed that speaker input variability may have a systematic influence on individual-level learning of morphology, which can in turn influence the group-level characteristics of a language. Languages spoken by larger groups of people have less complex morphology than those spoken in smaller communities. While a mechanism by which the number of speakers could have such an effect is yet to be convincingly identified, differences in speaker input variability, which is thought to be larger in larger groups, may provide an explanation. By hindering the acquisition, and hence faithful cross-generational transfer, of complex morphology, higher speaker input variability may result in structural simplification. We assess this claim in two experiments which investigate the effect of such variability on language learning, considering its influence on a learner's ability to segment a continuous speech stream and acquire a morphologically complex miniature language. We ultimately find no evidence to support the proposal that speaker input variability influences language learning and so cannot support the hypothesis that it explains how population size determines the structural properties of language.

  10. Larger amygdala volume in first-degree relatives of patients with major depression

    Directory of Open Access Journals (Sweden)

    Nina Romanczuk-Seiferth

    2014-01-01

    Conclusions: Larger gray matter volume in healthy relatives of MDD patients points to a possible vulnerability mechanism in MDD etiology and therefore extends knowledge in the field of high-risk approaches in MDD.

  11. Detection of Strongylus vulgaris in equine faecal samples by real-time PCR and larval culture - method comparison and occurrence assessment.

    Science.gov (United States)

    Kaspar, A; Pfister, K; Nielsen, M K; Silaghi, C; Fink, H; Scheuerle, M C

    2017-01-11

    Strongylus vulgaris has become a rare parasite in Germany during the past 50 years due to the practice of frequent prophylactic anthelmintic therapy. To date, the emerging development of resistance in Cyathostominae and Parascaris spp. to numerous equine anthelmintics has changed deworming management and the frequency of anthelmintic usage. In this regard, reliable detection of parasitic infections, especially of the highly pathogenic S. vulgaris, is essential. In the current study, two diagnostic methods for the detection of infections with S. vulgaris were compared and information on the occurrence of this parasite in German horses was gained. For this purpose, faecal samples of 501 horses were screened for S. vulgaris with real-time PCR and an additional larval culture was performed in samples of 278 horses. A subset of 26 horses underwent multiple follow-up examinations with both methods in order to evaluate both the persistence of S. vulgaris infections and the reproducibility of each diagnostic method. The real-time PCR revealed S. vulgaris-DNA in ten of 501 investigated equine samples (1.9%). The larval culture demonstrated larvae of S. vulgaris in three of the 278 samples (1.1%). A direct comparison of the two methods was possible in 321 samples including 43 follow-up examinations, with the result of 11 S. vulgaris-positive samples by real-time PCR and 4 S. vulgaris-positive samples by larval culture. McNemar's test (p = 0.016) revealed a significant difference, and the kappa value (0.525) showed moderate agreement between real-time PCR and larval culture. The real-time PCR detected a significantly higher proportion of S. vulgaris-positive samples than larval culture and should thus be considered as a routine diagnostic method for the detection of S. vulgaris in equine samples.
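The two headline statistics can be reproduced from a paired 2×2 table. The split of the 11 PCR-positive samples into 4 concordant and 7 discordant positives is an inference (the abstract reports only the totals), but it is the split consistent with both the reported p-value and kappa:

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact two-sided McNemar p-value from the discordant cells
    (b = positive by method A only, c = positive by method B only)."""
    n, k = b + c, min(b, c)
    p_tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * p_tail)

def cohen_kappa(a, b, c, d):
    """Cohen's kappa for a paired 2x2 table
    (a = both positive, d = both negative)."""
    n = a + b + c + d
    p_obs = (a + d) / n
    p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

# 321 paired samples; inferred split: 4 both-positive, 7 PCR-only,
# 0 culture-only, 310 both-negative
p = mcnemar_exact(7, 0)              # ~0.016, as reported
kappa = cohen_kappa(4, 7, 0, 310)    # ~0.525, "moderate agreement"
```

That all 7 discordant pairs fall on the PCR side is exactly why McNemar's test is significant while kappa still indicates only moderate agreement.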

  12. Decomposition of silicate sample by fusion with potassium hydroxide and potassium nitrate

    International Nuclear Information System (INIS)

    Yang Tongzai; Wang Xiaolin; Liu Yinong; Chen Yinliang; Sun Ying; Li Yuqian

    1995-01-01

    A method for decomposing silicate samples by fusion with KOH and KNO₃ is recounted. The decomposed sample can be used to separate and purify the rare earth nuclides. The advantage of this method is that it can decompose larger amounts of sample at a lower decomposition temperature.

  13. Size selectivity of commercial (300 MC) and larger square mesh top ...

    African Journals Online (AJOL)

    In the present study, size selectivity of a commercial (300 MC) and a larger square mesh top panel (LSMTPC) codend for blue whiting (Micromesistius poutassou) were tested on a commercial trawl net in the international waters between Turkey and Greece. Trawling, performed during daylight was carried out at depths ...

  14. Extending laboratory automation to the wards: effect of an innovative pneumatic tube system on diagnostic samples and transport time.

    Science.gov (United States)

    Suchsland, Juliane; Winter, Theresa; Greiser, Anne; Streichert, Thomas; Otto, Benjamin; Mayerle, Julia; Runge, Sören; Kallner, Anders; Nauck, Matthias; Petersmann, Astrid

    2017-02-01

    The innovative pneumatic tube system (iPTS) transports one sample at a time without the use of cartridges and allows rapid sending of samples directly into the bulk loader of a laboratory automation system (LAS). We investigated effects of the iPTS on samples and turn-around time (TAT). During transport, a mini data logger recorded the accelerations in three dimensions and reported them in arbitrary area under the curve (AUC) units. In addition, representative quantities of clinical chemistry, hematology and coagulation were measured and compared in 20 blood sample pairs transported by iPTS and courier. Samples transported by iPTS were brought to the laboratory (300 m) within 30 s without adverse effects on the samples. The information retrieved from the data logger showed a median AUC of 7 and 310 arbitrary units for courier and iPTS transport, respectively. This is considerably below the reported limit for noticeable hemolysis of 500 arbitrary units. The iPTS reduces TAT through fast transport and reduced hands-on time. No differences in the measurement results were found for any of the 36 investigated analytes between courier and iPTS transport. Based on these findings the iPTS was cleared for clinical use in our hospital.
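The logger's arbitrary AUC units can be read as the integral of the acceleration magnitude over the transport. A minimal sketch (trapezoidal rule; the three-axis samples, the sampling interval and the units are hypothetical, not the logger's actual algorithm):

```python
def accel_auc(samples, dt):
    """Area under the acceleration-magnitude curve for a transport:
    Euclidean norm of the three axes, integrated by the trapezoidal
    rule.  samples = [(ax, ay, az), ...], dt = sampling interval."""
    mags = [(x * x + y * y + z * z) ** 0.5 for x, y, z in samples]
    return sum((mags[i] + mags[i + 1]) * dt / 2
               for i in range(len(mags) - 1))
```

A gentler, shorter courier trip accumulates a much smaller area than a fast pneumatic shot, which is the courier-vs-iPTS contrast (7 vs. 310 units) the abstract reports.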

  15. Assignment methodology for larger RNA oligonucleotides: Application to an ATP-binding RNA aptamer

    International Nuclear Information System (INIS)

    Dieckmann, Thorsten; Feigon, Juli

    1997-01-01

    The use of uniform ¹³C,¹⁵N labeling in the NMR spectroscopic study of RNA structures has greatly facilitated the assignment process in small RNA oligonucleotides. For ribose spin-system assignments, exploitation of these labels has followed previously developed methods for the study of proteins. However, for sequential assignment of the exchangeable and nonexchangeable protons of the nucleotides, it has been necessary to develop a variety of new NMR experiments. Even these are of limited utility in the unambiguous assignment of larger RNAs due to the short carbon relaxation times and extensive spectral overlap for all nuclei. These problems can largely be overcome by the additional use of base-type selectively ¹³C,¹⁵N-labeled RNA in combination with a judicious use of related RNAs with base substitutions. We report the application of this approach to a 36-nucleotide ATP-binding RNA aptamer in complex with AMP. Complete sequential ¹H assignments, as well as the majority of ¹³C and ¹⁵N assignments, were obtained

  16. Dual-labeled time-resolved fluoroimmunoassay for simultaneous detection of clothianidin and diniconazole in agricultural samples.

    Science.gov (United States)

    Sheng, Enze; Shi, Haiyan; Zhou, Liangliang; Hua, Xiude; Feng, Lu; Yu, Tong; Wang, Minghua

    2016-02-01

    Europium (Eu³⁺) and samarium (Sm³⁺) were used as fluorescent labels to develop a highly sensitive dual-labeled time-resolved fluoroimmunoassay (TRFIA) to detect clothianidin and diniconazole in food samples. Under the optimized assay conditions, the 50% inhibition concentration (IC50) and the limit of detection (LOD, IC10) were 5.08 and 0.021 μg/L for clothianidin, and 13.14 and 0.029 μg/L for diniconazole. The cross-reactivities (CRs) were negligible except for dinotefuran (9.4%) and uniconazole (4.28%). The recoveries of clothianidin and diniconazole ranged from 79.3% to 108.7% in food samples. The results of TRFIA for the authentic samples were validated by gas chromatography (GC) analyses, and satisfactory correlations were obtained. These results indicated that the method is an alternative tool for simultaneous detection of clothianidin and diniconazole in food samples. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Three dimensional morphological studies of Larger Benthic Foraminifera at the population level using micro computed tomography

    Science.gov (United States)

    Kinoshita, Shunichi; Eder, Wolfgang; Woeger, Julia; Hohenegger, Johann; Briguglio, Antonino; Ferrandez-Canadell, Carles

    2015-04-01

    Symbiont-bearing larger benthic Foraminifera (LBF) are long-living (at least 1 year), single-celled marine organisms with complex calcium carbonate shells. Their morphology has been intensively studied since the middle of the nineteenth century. This led to a broad spectrum of taxonomic results, important from biostratigraphy to ecology in shallow-water tropical to warm temperate marine palaeo-environments. However, traditional investigation methods required cutting or destroying specimens to analyse the taxonomically important inner structures. X-ray micro-computed tomography (microCT) is one of the newest techniques used in morphological studies. Its greatest advantage is the non-destructive acquisition of inner structures. Furthermore, the ongoing improvement of microCT scanners' hardware and software provides high-resolution, short-duration scans well suited for LBF. Three-dimensional imaging techniques allow each chamber to be selected and extracted, and its volume, surface and several form parameters used for morphometric analyses to be measured easily. Thus, 3-dimensional visualisation of LBF tests is a very big step forward from traditional morphology based on 2-dimensional data. The quantification of chamber form is a great opportunity to tackle LBF structures, architectures and bauplan geometry. The micrometric digital resolution is the only way to solve many controversies in the phylogeny and evolutionary trends of LBF. For the present study we used micro-computed tomography to investigate the chamber number of every specimen from a statistically representative part of each population in order to estimate population dynamics. Samples of living individuals are collected at monthly intervals from fixed locations. Specific preparation allows scanning of up to 35 specimens per scan within 2 hours and obtaining the complete digital dataset for each specimen of the population. MicroCT thus enables a fast and precise count of all chambers built by the foraminifer from its

  18. Optimum time of blood sampling for determination of glomerular filtration rate by single-injection [51Cr]EDTA plasma clearance

    International Nuclear Information System (INIS)

    Broechner-Mortensen, J.; Roedbro, P.

    1976-01-01

    We have investigated the influence of various times and numbers of blood samples on the reproducibility of total [51Cr]EDTA plasma clearance (E) in patients with normal (13 patients) and low (14 patients) renal function. The study aims at fixing a clinically useful procedure suitable for all levels of renal function. Six different types of E were evaluated, with time periods for blood sampling between 3 and 5 h after tracer injection, and the variation from counting radioactivity, s1, was determined as part of the total variation, s2. The optimum mean time for blood sampling, t(E), was calculated as a function of E, as the mean time giving the least change in E for a given change in the 'final slope' of the plasma curve. For patients with normal E, s1 did not contribute significantly to s2, and t(E) was about 2 h. For patients with low renal function, s1 contributed significantly to s2, and t(E) increased steeply with decreasing E. The relative error in s1 from fixed E types was calculated for all levels of renal function. The results indicate that blood sampling individualized according to predicted E values is not necessary: sufficient precision of E can be achieved at all function levels from three blood samples drawn at 180, 240, and 300 min after injection. (Auth.)
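    The three-sample procedure recommended above can be illustrated with the standard slope-intercept calculation of clearance from a mono-exponential 'final slope'. All numbers below are invented for illustration, and the sketch omits the corrections (e.g. for the fast distribution phase) that a clinical determination of E would require.

```python
import numpy as np

# Hypothetical plasma samples drawn at 180, 240 and 300 min after injection.
t = np.array([180.0, 240.0, 300.0])      # min
c = np.array([1200.0, 900.0, 675.0])     # counts/min per mL (assumed data)
dose = 6.0e7                             # injected activity, counts/min (assumed)

# Fit the 'final slope': log-linear regression of concentration on time.
slope, intercept = np.polyfit(t, np.log(c), 1)
k = -slope                               # elimination rate constant, 1/min
c0 = np.exp(intercept)                   # back-extrapolated concentration at t = 0

# One-compartment slope-intercept clearance: Cl = dose * k / c0.
clearance = dose * k / c0                # mL/min
```

    With three samples an outlying measurement is still detectable from the residuals of the log-linear fit, which is one practical argument for drawing more than two.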

  19. Base stock policies with degraded service to larger orders

    DEFF Research Database (Denmark)

    Du, Bisheng; Larsen, Christian

    We study an inventory system controlled by a base stock policy, assuming a compound renewal demand process. We extend the base stock policy by incorporating rules for degrading the service of larger orders. Two specific rules are considered, denoted Postpone(q,t) and Split(q), respectively. The aim of using these rules is to achieve a given order fill rate for the regular orders (those of size less than or equal to the parameter q) while holding less inventory. We develop mathematical expressions for the performance measures order fill rate (of the regular orders) and average on-hand inventory level. Based...

  20. Development of electric discharge equipment for small specimen sampling

    International Nuclear Information System (INIS)

    Okamoto, Koji; Kitagawa, Hideaki; Kusumoto, Junichi; Kanaya, Akihiro; Kobayashi, Toshimi

    2009-01-01

    We have developed on-site electric discharge sampling equipment that can effectively take samples, such as small specimens, from the surface of plant components. Compared with conventional sampling equipment, ours can take samples that are shallower in depth and larger in area. In addition, the effect on the component can be kept to a minimum, and the thermally-affected zone produced in the material by the electric discharge is small enough to be ignored. Our equipment is therefore well suited to taking samples for various tests, such as residual life evaluation.

  1. Soft ionization of saturated hydrocarbons, alcohols and nonpolar compounds by negative-ion direct analysis in real-time mass spectrometry.

    Science.gov (United States)

    Cody, Robert B; Dane, A John

    2013-03-01

    Large polarizable n-alkanes (approximately C18 and larger), alcohols, and other nonpolar compounds can be detected as negative ions when sample solutions are injected directly into the sampling orifice of the atmospheric pressure interface of a time-of-flight mass spectrometer with the direct analysis in real time (DART) ion source operating in negative-ion mode. The mass spectra are dominated by peaks corresponding to the [M + O2]•− adduct. No fragmentation is observed, making this a very soft ionization technique for samples that are otherwise difficult to analyze by DART. Detection limits for cholesterol were determined to be in the low nanogram range.

  2. Quality characteristics of Moroccan sweet paprika (Capsicum annuum L.) at different sampling times

    Directory of Open Access Journals (Sweden)

    Naima Zaki

    2013-09-01

    Full Text Available "La Niora" is a red pepper variety cultivated in the Tadla region (Morocco) that is used for manufacturing paprika after sun drying. The paprika quality (nutritional, chemical and microbiological) was evaluated immediately after milling, from September to December. Sampling time mainly affected paprika color and the total capsaicinoid and vitamin C contents. The commercial quality was acceptable and no aflatoxins were found, but the microbial load sometimes exceeded permitted levels.

  3. Larger foraminifera from a relict structure off Karwar western Indian continental margin

    Digital Repository Service at National Institute of Oceanography (India)

    Setty, M.G.A.P.

    of such water masses having been present in the region. Among the larger forms, Amphistegina bicirculata, A. radiata var. papillosa and Operculina ammonoides indicate mixing, while Nummulites cumingii and Borelis schlumbergeri were relict...

  4. Mid-Eocene (Bartonian) larger benthic foraminifera from southeastern Turkey and northeastern Egypt: New evidence for the palaeobiogeography of the Tethyan carbonate platforms

    Science.gov (United States)

    Sallam, Emad S.; Erdem, Nazire Özgen; Sinanoğlu, Derya; Ruban, Dmitry A.

    2018-05-01

    Larger benthic foraminiferal assemblages from the mid-Eocene (Bartonian) sedimentary successions of the Tethyan carbonate platforms have been studied in southeastern Turkey and northeastern Egypt. In the Hazro-Diyarbakir section (SE Turkey), small to medium miliolids and textularinids are identified in the lower intervals of the Hoya Formation, whereas alveolinids and soritids (porcellaneous) and orbitolinids (agglutinated) increase in diversity and abundance in the upper intervals. Dictyoconus aegyptiensis (Chapman) and Somalina stefaninii Silvestri are recorded for the first time from the Hoya Formation. The larger benthic foraminiferal assemblage from the Hoya Formation shows a significant similarity to those reported from the Observatory Formation (coeval with the Sannor Formation) in the Cairo-Suez district (NE Egypt). The studied foraminiferal assemblages imply restricted lagoonal to tidal-flat palaeoenvironments. Palaeobiogeographically, the larger benthic foraminiferal assemblages recorded in the southeastern Turkey and northeastern Egypt carbonate platforms display a strong affinity to the Arabian, Middle East and African platforms. The position of the global sea level and the plate tectonic organization of the studied region during the Bartonian were the main factors that facilitated faunal exchange among the carbonate platforms.

  5. A Non-Uniformly Under-Sampled Blade Tip-Timing Signal Reconstruction Method for Blade Vibration Monitoring

    Directory of Open Access Journals (Sweden)

    Zheng Hu

    2015-01-01

    Full Text Available High-speed blades are often prone to fatigue due to severe blade vibrations. In particular, synchronous vibrations can cause irreversible damage to the blade. Blade tip-timing (BTT) methods have become a promising way to monitor blade vibrations. However, synchronous vibrations cannot be monitored adequately by uniform BTT sampling, so probes are mounted at unequal angles, which makes the sampled signal non-uniform. Since under-sampling is an intrinsic drawback of BTT methods, analysing non-uniformly under-sampled BTT signals is a major challenge. In this paper, a novel reconstruction method for non-uniformly under-sampled BTT data is presented. The method is based on the periodically non-uniform sampling theorem. Firstly, a mathematical model of the non-uniform BTT sampling process is built; it can be treated as the sum of several uniform sample streams. For each stream, an interpolating function is required to prevent aliasing in the reconstructed signal. Secondly, simultaneous equations for all interpolating functions in each sub-band are built, and the corresponding solutions are derived to remove unwanted replicas of the original signal caused by the sampling, which may overlay the original signal. Finally, numerical simulations and experiments are carried out to validate the feasibility of the proposed method. The results demonstrate that the accuracy of the reconstructed signal depends on the sampling frequency, the blade vibration frequency, the blade vibration bandwidth, the probe static offset and the number of samples. In practice, both types of blade vibration signal can be reconstructed from non-uniform BTT data acquired from only two probes.
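    The paper's full sub-band reconstruction is involved, but the core idea of recovering a synchronous vibration from a few non-equally mounted probes can be illustrated with a simpler, standard least-squares sine fit. This is not the authors' method: the engine order EO is assumed known, and the data are synthetic and noiseless.

```python
import numpy as np

# Synthetic synchronous blade vibration x(theta) = A*sin(EO*theta + phi) + c,
# observed only when the tip passes each (non-equally mounted) probe.
EO = 4                                     # engine order (assumed known)
A_true, phi_true, c_true = 0.5, 0.7, 0.1   # amplitude, phase, static offset
probe_angles = np.deg2rad([0.0, 23.0, 71.0, 160.0])

n_revs = 20
theta = (np.arange(n_revs)[:, None] * 2 * np.pi + probe_angles).ravel()
x = A_true * np.sin(EO * theta + phi_true) + c_true

# Least-squares fit of x = a*sin(EO*theta) + b*cos(EO*theta) + c.
M = np.column_stack([np.sin(EO * theta), np.cos(EO * theta), np.ones_like(theta)])
a, b, c = np.linalg.lstsq(M, x, rcond=None)[0]
A_est = np.hypot(a, b)        # recovered amplitude
phi_est = np.arctan2(b, a)    # recovered phase
```

    Because the data are noiseless and the probe angles make the design matrix full rank, the fit recovers amplitude, phase and offset essentially exactly; with fewer probes or an unknown engine order the problem becomes ill-posed, which is what motivates the reconstruction method of the paper.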

  6. Dynamic failure of dry and fully saturated limestone samples based on incubation time concept

    Directory of Open Access Journals (Sweden)

    Yuri V. Petrov

    2017-02-01

    Full Text Available This paper outlines the results of an experimental study of dynamic rock failure based on a comparison of dry and saturated limestone samples obtained during dynamic compression and split tests. The tests were performed using the Kolsky method and its modifications for dynamic splitting. The mechanical data (e.g. strength, time and energy characteristics) of this material at high strain rates are obtained. It is shown that these characteristics are sensitive to the strain rate. A unified interpretation of these rate effects, based on the structural–temporal approach, is presented. It is demonstrated that the temporal dependence of the dynamic compressive and split tensile strengths of dry and saturated limestone samples can be predicted by the incubation time criterion. Previously discovered possibilities to optimize (minimize) the energy input for the failure process are discussed in connection with industrial rock failure processes. It is shown that the optimal energy input value associated with the critical load required to initialize failure in the rock medium strongly depends on the incubation time and the impact duration. The optimal load shapes, which minimize the momentum for a single failure impact, are demonstrated. Through this investigation, a possible approach to reducing the specific energy required for rock cutting by means of high-frequency vibrations is also discussed.
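    For reference, the incubation time criterion invoked above is, in Petrov's structural–temporal approach, usually written as a condition on the stress averaged over the incubation time:

```latex
\frac{1}{\tau}\int_{t-\tau}^{t}\sigma(t')\,\mathrm{d}t' \;\geq\; \sigma_{c}
```

    where \(\sigma(t)\) is the local stress history, \(\tau\) the incubation time of the material, and \(\sigma_{c}\) its static strength; failure is predicted at the first instant \(t\) at which the inequality holds, and for slow loading the condition reduces to the static criterion \(\sigma \geq \sigma_{c}\).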

  7. Correlated Amino Acid and Mineralogical Analyses of Milligram and Submilligram Samples of Carbonaceous Chondrite Lonewolf Nunataks 94101

    Science.gov (United States)

    Burton, S.; Berger, E. L.; Locke, D. R.; Lewis, E. K.

    2018-01-01

    Amino acids, the building blocks of proteins, have been found to be indigenous in the eight carbonaceous chondrite groups. The abundances and the structural, enantiomeric and isotopic compositions of amino acids differ significantly among meteorites of different groups and petrologic types. These results suggest that parent-body conditions (thermal or aqueous alteration), mineralogy, and the preservation of amino acids are linked. Previously, elucidating specific relationships between amino acids and mineralogy was not possible because the samples analyzed for amino acids were much larger than the scale at which petrologic heterogeneity is observed (sub-mm-scale differences corresponding to sub-mg samples); for example, Pizzarello and coworkers measured amino acid abundances and performed X-ray diffraction (XRD) on several samples of the Murchison meteorite, but these analyses were performed on bulk samples of 500 mg or larger. Advances in the sensitivity of amino acid measurements by liquid chromatography with fluorescence detection/time-of-flight mass spectrometry (LC-FD/TOF-MS), and the application of techniques such as high-resolution X-ray diffraction (HR-XRD) and scanning electron microscopy (SEM) with energy-dispersive spectroscopy (EDS) for mineralogical characterization, have now enabled coordinated analyses on the scale at which mineral heterogeneity is observed. In this work, we have analyzed samples of the Lonewolf Nunataks (LON) 94101 CM2 carbonaceous chondrite. We are investigating the link(s) between parent-body processes, mineralogical context, and amino acid composition in bulk samples (approx. 20 mg) and mineral separates (3 mg or less) from several spatial locations within our allocated samples. Preliminary results of these analyses are presented here.

  8. Statistical analysis of hydrological response in urbanising catchments based on adaptive sampling using inter-amount times

    Science.gov (United States)

    ten Veldhuis, Marie-Claire; Schleiss, Marc

    2017-04-01

    Urban catchments are typically characterised by a more flashy nature of the hydrological response compared to natural catchments. Predicting flow changes associated with urbanisation is not straightforward, as they are influenced by interactions between impervious cover, basin size, drainage connectivity and stormwater management infrastructure. In this study, we present an alternative approach to statistical analysis of hydrological response variability and basin flashiness, based on the distribution of inter-amount times. We analyse inter-amount time distributions of high-resolution streamflow time series for 17 (semi-)urbanised basins in North Carolina, USA, ranging from 13 to 238 km2 in size. We show that in the inter-amount-time framework, sampling frequency is tuned to the local variability of the flow pattern, resulting in a different representation and weighting of high and low flow periods in the statistical distribution. This leads to important differences in the way the distribution quantiles, mean, coefficient of variation and skewness vary across scales and results in lower mean intermittency and improved scaling. Moreover, we show that inter-amount-time distributions can be used to detect regulation effects on flow patterns, identify critical sampling scales and characterise flashiness of hydrological response. The possibility to use both the classical approach and the inter-amount-time framework to identify minimum observable scales and analyse flow data opens up interesting areas for future research.
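    The inter-amount-time framework can be sketched in a few lines: accumulate discharge over time, then invert the cumulative curve at fixed amount increments, so that sampling is dense during high flow and sparse during low flow. The flow series and increment below are invented for illustration.

```python
import numpy as np

# Streamflow series sampled at fixed time steps (hypothetical data).
dt_min = 5.0                                  # sampling interval, minutes
flow = np.array([0.2, 0.1, 0.5, 2.0, 4.0, 1.0, 0.4, 0.2, 0.1, 0.1])  # m^3/s

# Cumulative discharged amount at the end of each time step.
t = dt_min * np.arange(1, flow.size + 1)      # minutes
amount = np.cumsum(flow * dt_min * 60)        # m^3

# Inter-amount times: time needed to accumulate each fixed increment dA,
# found by inverse interpolation of the (monotone) cumulative curve.
dA = 200.0                                    # m^3 per increment (assumed)
targets = np.arange(dA, amount[-1], dA)
crossing_times = np.interp(targets, amount, t)
iat = np.diff(np.concatenate([[0.0], crossing_times]))
```

    Short inter-amount times correspond to the flood peak and long ones to the recession, which is exactly the adaptive weighting of high- and low-flow periods described above.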

  9. A ‘smart’ tube holder enables real-time sample monitoring in a standard lab centrifuge

    Science.gov (United States)

    Hoang, Tony; Moskwa, Nicholas

    2018-01-01

    The centrifuge is among the oldest and most widely used pieces of laboratory equipment, with significant applications that include clinical diagnostics and biomedical research. A major limitation of laboratory centrifuges is their “black box” nature, limiting sample observation to before and after centrifugation. Thus, optimized protocols require significant trial and error, while unoptimized protocols waste time by centrifuging longer than necessary or material due to incomplete sedimentation. Here, we developed an instrumented centrifuge tube receptacle compatible with several commercial benchtop centrifuges that can provide real-time sample analysis during centrifugation. We demonstrated the system by monitoring cell separations during centrifugation for different spin speeds, concentrations, buffers, cell types, and temperatures. We show that the collected data are valuable for analytical purposes (e.g. quality control), or as feedback to the user or the instrument. For the latter, we verified an adaptation where complete sedimentation turned off the centrifuge and notified the user by a text message. Our system adds new functionality to existing laboratory centrifuges, saving users time and providing useful feedback. This add-on potentially enables new analytical applications for an instrument that has remained largely unchanged for decades. PMID:29659624

  10. A note on exponential dispersion models which are invariant under length-biased sampling

    NARCIS (Netherlands)

    Bar-Lev, S.K.; van der Duyn Schouten, F.A.

    2003-01-01

    Length-biased sampling situations may occur in clinical trials, reliability, queueing models, survival analysis and population studies where a proper sampling frame is absent. In such situations items are sampled at a rate proportional to their length, so that larger values of the quantity being
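    Length-biased sampling is easy to demonstrate numerically: drawing items with probability proportional to their value inflates the sampled mean. For an exponential population the size-biased distribution is Gamma-shaped with twice the mean, which the sketch below (illustrative, not from the paper) reproduces.

```python
import numpy as np

# Population of "lengths": exponential with mean 2.
rng = np.random.default_rng(1)
population = rng.exponential(scale=2.0, size=100_000)

# Length-biased draw: selection probability proportional to the value itself.
p = population / population.sum()
biased = rng.choice(population, size=50_000, p=p)

mean_pop = population.mean()       # close to 2.0
mean_biased = biased.mean()        # close to 4.0 = E[X^2]/E[X] for this population
```

    The factor-of-two inflation follows from E[X^2]/E[X] = 2*scale for an exponential; estimators that ignore the biased frame would overstate the population mean accordingly.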

  11. Improvement on Timing Accuracy of LIDAR for Remote Sensing

    Science.gov (United States)

    Zhou, G.; Huang, W.; Zhou, X.; Huang, Y.; He, C.; Li, X.; Zhang, L.

    2018-05-01

    The traditional timing discrimination technique for laser rangefinding in remote sensing offers limited measurement performance and a large timing error, and cannot meet the demands of high-precision measurement and high-definition lidar imaging. To solve this problem, an improvement in timing accuracy based on improved leading-edge timing discrimination (LED) is proposed. Firstly, the method moves the timing point for a given threshold earlier by amplifying the received signal in multiple stages. Then, the timing information is sampled and the timing points are fitted with algorithms in MATLAB. Finally, the minimum timing error is calculated from the fitting function. Thereby, the timing error of the received lidar signal is compressed and lidar data quality is improved. Experiments show that the timing error can be significantly reduced by multi-stage amplification of the received signal and parameter fitting, and a timing accuracy of 4.63 ps is achieved.
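    The amplitude-dependent "time walk" of leading-edge discrimination, and the advance of the timing point gained by amplifying the signal, can be illustrated with a hypothetical Gaussian pulse and an arbitrary fixed threshold (all numbers below are assumptions, not values from the paper).

```python
import numpy as np

# Hypothetical received pulse and fixed discrimination threshold.
t = np.linspace(0.0, 20.0, 20001)                 # ns, 1 ps grid
pulse = np.exp(-0.5 * ((t - 10.0) / 2.0) ** 2)    # unit-amplitude Gaussian
threshold = 0.2                                   # fixed threshold (assumed)

def crossing_time(signal):
    """Time at which the rising edge first exceeds the threshold."""
    i = np.argmax(signal >= threshold)
    return t[i]

t_raw = crossing_time(1.0 * pulse)    # original amplitude
t_amp = crossing_time(4.0 * pulse)    # after 4x amplification

# Amplification moves the fixed-threshold timing point earlier.
advance = t_raw - t_amp               # ns
```

    The steeper the edge at the crossing point, the less the timing jitters with amplitude noise, which is why moving the timing point forward on the rising edge improves accuracy.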

  12. Sample preparation for phosphoproteomic analysis of circadian time series in Arabidopsis thaliana.

    Science.gov (United States)

    Krahmer, Johanna; Hindle, Matthew M; Martin, Sarah F; Le Bihan, Thierry; Millar, Andrew J

    2015-01-01

    Systems biological approaches to study the Arabidopsis thaliana circadian clock have mainly focused on transcriptomics while little is known about the proteome, and even less about posttranslational modifications. Evidence has emerged that posttranslational protein modifications, in particular phosphorylation, play an important role for the clock and its output. Phosphoproteomics is the method of choice for a large-scale approach to gain more knowledge about rhythmic protein phosphorylation. Recent plant phosphoproteomics publications have identified several thousand phosphopeptides. However, the methods used in these studies are very labor-intensive and therefore not suitable to apply to a well-replicated circadian time series. To address this issue, we present and compare different strategies for sample preparation for phosphoproteomics that are compatible with large numbers of samples. Methods are compared regarding number of identifications, variability of quantitation, and functional categorization. We focus on the type of detergent used for protein extraction as well as methods for its removal. We also test a simple two-fraction separation of the protein extract. © 2015 Elsevier Inc. All rights reserved.

  13. RNA-seq: technical variability and sampling

    Science.gov (United States)

    2011-01-01

    Background RNA-seq is revolutionizing the way we study transcriptomes. mRNA can be surveyed without prior knowledge of gene transcripts. Alternative splicing of transcript isoforms and the identification of previously unknown exons are being reported. Initial reports of differences in exon usage and splicing between samples, as well as quantitative differences among samples, are beginning to surface. Biological variation has been reported to be larger than technical variation. In addition, technical variation has been reported to be in line with expectations due to random sampling. However, strategies for dealing with technical variation will differ depending on its magnitude. The size of technical variance, and the role of sampling, are examined in this manuscript. Results In this study three independent Solexa/Illumina experiments containing technical replicates are analyzed. When coverage is low, large disagreements between technical replicates are apparent. Exon detection between technical replicates is highly variable when coverage is less than 5 reads per nucleotide, and estimates of gene expression are more likely to disagree when coverage is low, although large disagreements in the estimates of expression are observed at all levels of coverage. Conclusions Technical variability is too high to ignore. Technical variability results in inconsistent detection of exons at low levels of coverage. Further, the estimate of the relative abundance of a transcript can substantially disagree, even when coverage levels are high. This may be due to the low sampling fraction and, if so, it will persist as an issue needing to be addressed in experimental design even as the next wave of technology produces larger numbers of reads. We provide practical recommendations for dealing with the technical variability, without dramatic cost increases. PMID:21645359
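    The role of random sampling in technical variability can be illustrated by simulating replicate read counts as binomial draws: at an expected count of 1 read the coefficient of variation is near 100%, while at an expected count of 100 it falls to about 10%. The library sizes and transcript fraction below are invented for illustration.

```python
import numpy as np

# Read counts for one transcript as binomial sampling from the library.
rng = np.random.default_rng(2)
true_fraction = 1e-5                 # transcript's share of all reads (assumed)

cvs = {}
for n_reads in (100_000, 10_000_000):            # shallow vs deep sequencing run
    counts = rng.binomial(n_reads, true_fraction, size=20_000)
    cvs[n_reads] = counts.std() / counts.mean()  # coefficient of variation

cv_shallow = cvs[100_000]       # expected count 1   -> CV near 1.0
cv_deep = cvs[10_000_000]       # expected count 100 -> CV near 0.1
```

    Since the sampling CV scales roughly as 1/sqrt(expected count), low-abundance transcripts remain noisy between technical replicates even in otherwise deep libraries, consistent with the disagreements reported above.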

  14. Precision, time, and cost: a comparison of three sampling designs in an emergency setting

    Science.gov (United States)

    Deitchler, Megan; Deconinck, Hedwig; Bergeron, Gilles

    2008-01-01

    The conventional method to collect data on the health, nutrition, and food security status of a population affected by an emergency is a 30 × 30 cluster survey. This sampling method can be time and resource intensive and, accordingly, may not be the most appropriate one when data are needed rapidly for decision making. In this study, we compare the precision, time and cost of the 30 × 30 cluster survey with two alternative sampling designs: a 33 × 6 cluster design (33 clusters, 6 observations per cluster) and a 67 × 3 cluster design (67 clusters, 3 observations per cluster). Data for each sampling design were collected concurrently in West Darfur, Sudan in September-October 2005 in an emergency setting. Results of the study show the 30 × 30 design to provide more precise results (i.e. narrower 95% confidence intervals) than the 33 × 6 and 67 × 3 design for most child-level indicators. Exceptions are indicators of immunization and vitamin A capsule supplementation coverage which show a high intra-cluster correlation. Although the 33 × 6 and 67 × 3 designs provide wider confidence intervals than the 30 × 30 design for child anthropometric indicators, the 33 × 6 and 67 × 3 designs provide the opportunity to conduct a LQAS hypothesis test to detect whether or not a critical threshold of global acute malnutrition prevalence has been exceeded, whereas the 30 × 30 design does not. For the household-level indicators tested in this study, the 67 × 3 design provides the most precise results. However, our results show that neither the 33 × 6 nor the 67 × 3 design are appropriate for assessing indicators of mortality. In this field application, data collection for the 33 × 6 and 67 × 3 designs required substantially less time and cost than that required for the 30 × 30 design. The findings of this study suggest the 33 × 6 and 67 × 3 designs can provide useful time- and resource-saving alternatives to the 30 × 30 method of data collection in emergency
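    A back-of-the-envelope comparison of the three designs' precision can be made from the normal-approximation confidence interval for a prevalence, inflated by a design effect. The prevalence and design-effect values below are assumed purely for illustration; the study's empirical finding that the 30 × 30 design is narrowest for most child-level indicators is consistent with its much larger sample size.

```python
import math

# Rough 95% CI half-width for a prevalence estimate under each design.
p = 0.15            # assumed prevalence (e.g. global acute malnutrition)
designs = {
    "30x30": (30 * 30, 2.0),   # (sample size, assumed design effect)
    "33x6":  (33 * 6,  1.5),
    "67x3":  (67 * 3,  1.2),
}

half_widths = {}
for name, (n, deff) in designs.items():
    se = math.sqrt(deff * p * (1 - p) / n)   # design-effect-inflated SE
    half_widths[name] = 1.96 * se            # 95% CI half-width
```

    Fewer observations per cluster tends to lower the design effect (each cluster contributes less redundant information), which is why the 67 × 3 design can beat the 33 × 6 design at nearly the same total sample size.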

  15. Precision, time, and cost: a comparison of three sampling designs in an emergency setting

    Directory of Open Access Journals (Sweden)

    Deconinck Hedwig

    2008-05-01

    Full Text Available Abstract The conventional method to collect data on the health, nutrition, and food security status of a population affected by an emergency is a 30 × 30 cluster survey. This sampling method can be time and resource intensive and, accordingly, may not be the most appropriate one when data are needed rapidly for decision making. In this study, we compare the precision, time and cost of the 30 × 30 cluster survey with two alternative sampling designs: a 33 × 6 cluster design (33 clusters, 6 observations per cluster) and a 67 × 3 cluster design (67 clusters, 3 observations per cluster). Data for each sampling design were collected concurrently in West Darfur, Sudan in September-October 2005 in an emergency setting. Results of the study show the 30 × 30 design to provide more precise results (i.e. narrower 95% confidence intervals) than the 33 × 6 and 67 × 3 design for most child-level indicators. Exceptions are indicators of immunization and vitamin A capsule supplementation coverage which show a high intra-cluster correlation. Although the 33 × 6 and 67 × 3 designs provide wider confidence intervals than the 30 × 30 design for child anthropometric indicators, the 33 × 6 and 67 × 3 designs provide the opportunity to conduct a LQAS hypothesis test to detect whether or not a critical threshold of global acute malnutrition prevalence has been exceeded, whereas the 30 × 30 design does not. For the household-level indicators tested in this study, the 67 × 3 design provides the most precise results. However, our results show that neither the 33 × 6 nor the 67 × 3 design are appropriate for assessing indicators of mortality. In this field application, data collection for the 33 × 6 and 67 × 3 designs required substantially less time and cost than that required for the 30 × 30 design. The findings of this study suggest the 33 × 6 and 67 × 3 designs can provide useful time- and resource-saving alternatives to the 30 × 30 method of data

  16. Effects of storage time and temperature on pH, specific gravity, and crystal formation in urine samples from dogs and cats.

    Science.gov (United States)

    Albasan, Hasan; Lulich, Jody P; Osborne, Carl A; Lekcharoensuk, Chalermpol; Ulrich, Lisa K; Carpenter, Kathleen A

    2003-01-15

    To determine effects of storage temperature and time on pH and specific gravity of and number and size of crystals in urine samples from dogs and cats. Randomized complete block design. 31 dogs and 8 cats. Aliquots of each urine sample were analyzed within 60 minutes of collection or after storage at room or refrigeration temperatures (20 vs 6 degrees C [68 vs 43 degrees F]) for 6 or 24 hours. Crystals formed in samples from 11 of 39 (28%) animals. Calcium oxalate (CaOx) crystals formed in vitro in samples from 1 cat and 8 dogs. Magnesium ammonium phosphate (MAP) crystals formed in vitro in samples from 2 dogs. Compared with aliquots stored at room temperature, refrigeration increased the number and size of crystals that formed in vitro; however, the increase in number and size of MAP crystals in stored urine samples was not significant. Increased storage time and decreased storage temperature were associated with a significant increase in number of CaOx crystals formed. Greater numbers of crystals formed in urine aliquots stored for 24 hours than in aliquots stored for 6 hours. Storage time and temperature did not have a significant effect on pH or specific gravity. Urine samples should be analyzed within 60 minutes of collection to minimize temperature- and time-dependent effects on in vitro crystal formation. Presence of crystals observed in stored samples should be validated by reevaluation of fresh urine.

  17. Sampling optimization for high-speed weigh-in-motion measurements using in-pavement strain-based sensors

    International Nuclear Information System (INIS)

    Zhang, Zhiming; Huang, Ying; Bridgelall, Raj; Palek, Leonard; Strommen, Robert

    2015-01-01

    Weigh-in-motion (WIM) measurement has been widely used for weight enforcement, pavement design, freight management, and intelligent transportation systems to monitor traffic in real-time. However, to use such sensors effectively, vehicles must exit the traffic stream and slow down to match their current capabilities. Hence, agencies need devices with higher vehicle passing speed capabilities to enable continuous weight measurements at mainline speeds. The current practices for data acquisition at such high speeds are fragmented. Deployment configurations and settings depend mainly on the experiences of operation engineers. To assure adequate data, most practitioners use very high frequency measurements that result in redundant samples, thereby diminishing the potential for real-time processing. The larger data memory requirements from higher sample rates also increase storage and processing costs. The field lacks a sampling design or standard to guide appropriate data acquisition of high-speed WIM measurements. This study develops the appropriate sample rate requirements as a function of the vehicle speed. Simulations and field experiments validate the methods developed. The results will serve as guidelines for future high-speed WIM measurements using in-pavement strain-based sensors. (paper)
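    One simple way to express the conclusion that the required sample rate scales with vehicle speed is to demand a fixed number of samples while the tire footprint loads the strip sensor. The footprint length and minimum sample count below are illustrative assumptions, not values from the study.

```python
# Minimum sample rate so that one tire crossing of the strip sensor is
# captured with at least n_min samples (illustrative model, not the
# paper's derived requirement).
def min_sample_rate(speed_kmh, footprint_m=0.25, n_min=30):
    """Samples per second needed for n_min samples during one tire crossing."""
    v = speed_kmh / 3.6          # vehicle speed, m/s
    dwell = footprint_m / v      # time the tire loads the sensor, s
    return n_min / dwell         # required rate, Hz

rate_city = min_sample_rate(50.0)       # low-speed station
rate_highway = min_sample_rate(110.0)   # mainline speed
```

    Because the required rate grows linearly with speed, a rate chosen for low-speed stations undersamples mainline traffic, while a single very high fixed rate wastes storage at low speeds, which is the tradeoff the speed-dependent guideline resolves.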

  18. Sampling optimization for high-speed weigh-in-motion measurements using in-pavement strain-based sensors

    Science.gov (United States)

    Zhang, Zhiming; Huang, Ying; Bridgelall, Raj; Palek, Leonard; Strommen, Robert

    2015-06-01

    Weigh-in-motion (WIM) measurement has been widely used for weight enforcement, pavement design, freight management, and intelligent transportation systems to monitor traffic in real-time. However, to use such sensors effectively, vehicles must exit the traffic stream and slow down to match their current capabilities. Hence, agencies need devices with higher vehicle passing speed capabilities to enable continuous weight measurements at mainline speeds. The current practices for data acquisition at such high speeds are fragmented. Deployment configurations and settings depend mainly on the experiences of operation engineers. To assure adequate data, most practitioners use very high frequency measurements that result in redundant samples, thereby diminishing the potential for real-time processing. The larger data memory requirements from higher sample rates also increase storage and processing costs. The field lacks a sampling design or standard to guide appropriate data acquisition of high-speed WIM measurements. This study develops the appropriate sample rate requirements as a function of the vehicle speed. Simulations and field experiments validate the methods developed. The results will serve as guidelines for future high-speed WIM measurements using in-pavement strain-based sensors.

  19. Investigating the tradeoffs between spatial resolution and diffusion sampling for brain mapping with diffusion tractography: time well spent?

    Science.gov (United States)

    Calabrese, Evan; Badea, Alexandra; Coe, Christopher L; Lubach, Gabriele R; Styner, Martin A; Johnson, G Allan

    2014-11-01

    Interest in mapping white matter pathways in the brain has peaked with the recognition that altered brain connectivity may contribute to a variety of neurologic and psychiatric diseases. Diffusion tractography has emerged as a popular method for postmortem brain mapping initiatives, including the ex-vivo component of the human connectome project, yet it remains unclear to what extent computer-generated tracks fully reflect the actual underlying anatomy. Of particular concern is the fact that diffusion tractography results vary widely depending on the choice of acquisition protocol. The two major acquisition variables that consume scan time, spatial resolution, and diffusion sampling, can each have profound effects on the resulting tractography. In this analysis, we determined the effects of the temporal tradeoff between spatial resolution and diffusion sampling on tractography in the ex-vivo rhesus macaque brain, a close primate model for the human brain. We used the wealth of autoradiography-based connectivity data available for the rhesus macaque brain to assess the anatomic accuracy of six time-matched diffusion acquisition protocols with varying balance between spatial and diffusion sampling. We show that tractography results vary greatly, even when the subject and the total acquisition time are held constant. Further, we found that focusing on either spatial resolution or diffusion sampling at the expense of the other is counterproductive. A balanced consideration of both sampling domains produces the most anatomically accurate and consistent results. Copyright © 2014 Wiley Periodicals, Inc.

  20. Sample acceptance time criteria, electronic issue and alloimmunisation in thalassaemia.

    Science.gov (United States)

    Trompeter, S; Baxter, L; McBrearty, M; Zatkya, E; Porter, J

    2015-12-01

    To determine the safety, with respect to alloimmunisation, of a 1-week acceptance interval from sample receipt in the laboratory to transfusion commencement in transfusion-dependent thalassaemia, and to determine the safety of electronic issue of blood components in such a setting. Retrospective audit of alloimmunisation (1999-2012) and blood exposure in registered thalassaemia patients at a central London thalassaemia centre, where the acceptance interval for the group-and-save sample, from arrival in the laboratory to issue of blood for a patient transfused within the last 28 days, was 1 week, and where an electronic issue protocol was used for patients who had always had a negative antibody screen (other than temporary positivity in pregnant women receiving prophylactic anti-D, or anti-Lea, anti-Leb and anti-P1 that are no longer detectable). There were 133 patients with thalassaemia variants regularly attending UCLH for review; 105 had transfusion-dependent thalassaemia (TDT) (7 E-beta thalassaemia, 98 beta thalassaemia major). Ten of the 84 patients who received their transfusions at UCLH were alloimmunised, seven of them before arrival at UCLH; only two patients developed antibodies at UCLH during this period. The prevalence of alloantibody formation of 2% in UCLH-transfused patients, with a presumptive incidence of 0.01 alloantibodies per 100 units (0.001 immunisations per person per year), compares favourably with other reported series and suggests that a 1-week interval with appropriate electronic issue is acceptable practice. © 2015 British Blood Transfusion Society.

  1. Larger foraminifera distribution on a mesotrophic carbonate shelf in SW Sulawesi (Indonesia)

    NARCIS (Netherlands)

    Renema, W.; Troelstra, S.R.

    2001-01-01

    Larger symbiont bearing foraminifera typically live in shallow tropical seas. In this study the fauna composition of patch reefs scattered over the Spermonde Shelf (SW Sulawesi, Indonesia), a mesotrophic carbonate shelf, is examined. The foraminiferal fauna of the Spermonde Shelf is characterised by

  2. Optimizing headspace sampling temperature and time for analysis of volatile oxidation products in fish oil

    DEFF Research Database (Denmark)

    Rørbæk, Karen; Jensen, Benny

    1997-01-01

    Headspace gas chromatography (HS-GC), based on adsorption to Tenax GR(R), thermal desorption and GC, has been used for analysis of volatiles in fish oil. To optimize sampling conditions, the effect of heating the fish oil at various temperatures and times was evaluated from anisidine values (AV...

  3. Sample Adaptive Offset Optimization in HEVC

    Directory of Open Access Journals (Sweden)

    Yang Zhang

    2014-11-01

    As the next-generation video coding standard, High Efficiency Video Coding (HEVC) adopted many useful tools to improve coding efficiency. Sample Adaptive Offset (SAO) is a technique that reduces sample distortion by applying offsets to pixels in the in-loop filter. In SAO, pixels in a Largest Coding Unit (LCU) are classified into several categories, and categories and offsets are assigned based on Rate-Distortion Optimization (RDO) of the reconstructed pixels in the LCU. All pixels in an LCU undergo the same SAO process; however, transform and inverse transform make the distortion of pixels at Transform Unit (TU) edges larger than the distortion inside the TU, even after deblocking filtering (DF) and SAO. The SAO categories can also be refined, since they are not well suited to many cases. This paper proposes a TU edge offset mode and a category refinement for SAO in HEVC. Experimental results show that the two optimizations achieve -0.13 and -0.2 BD-rate gains respectively, compared with the SAO in HEVC. The proposed algorithm using both optimizations achieves a -0.23 BD-rate gain compared with the SAO in HEVC, which is a 47% increase, with nearly no increase in coding time.
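    The baseline SAO edge-offset classification that the paper refines follows fixed neighbour-comparison rules from the HEVC standard. A minimal sketch of those baseline categories (not the proposed TU-edge mode or category refinement):

```python
def eo_category(a, c, b):
    """HEVC SAO edge-offset category for a pixel c compared with its two
    neighbours a and b along the chosen EO direction.
    1: local minimum, 2/3: edge samples, 4: local maximum, 0: none."""
    if c < a and c < b:
        return 1            # valley
    if (c < a and c == b) or (c == a and c < b):
        return 2            # concave corner
    if (c > a and c == b) or (c == a and c > b):
        return 3            # convex corner
    if c > a and c > b:
        return 4            # peak
    return 0                # monotonic run: no offset applied

# Example: in a falling edge ... 5, 4, 4 ... the middle sample lands in
# category 2, so a decoded offset for category 2 would be added to it.
```

    The encoder signals one offset per category per LCU; the paper's observation is that this per-LCU uniformity ignores the systematically larger distortion at TU boundaries.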

  4. Measurement of radon-222 concentration in environment sampled within short time using charcoal detector

    International Nuclear Information System (INIS)

    Yamasaki, Tadashi; Sekiyama, Shigenobu; Tokin, Mina; Nakayasu, Yumiko; Watanabe, Tamaki.

    1994-01-01

    The concentration of ²²²Rn in air sampled within a very short period of time was measured using activated charcoal as the adsorber. The detector is a plastic canister containing a mixture of activated charcoal and silica gel. The radon gas was adsorbed onto the charcoal in a radon chamber at a temperature of 25 °C. A small amount of liquid scintillation cocktail was added to a liquid scintillation counting vial together with the canister, and the radon in the charcoal was extracted into the cocktail. Alpha particles emitted by radon and its daughter nuclei in the cocktail were detected using the liquid scintillation counter. The present method has the advantages of a short air-sampling time and of radon adsorption in the charcoal at a constant temperature. Radon concentrations in air down to 2 Bq/m³ could be detected. A kinetic model for the adsorption of radon in the charcoal is also presented. The ratio of the radon concentration in the charcoal to that in air under the equilibrium state of adsorption was estimated to be 6.1 to 6.8 m³/kg at a temperature of 25 °C. (author)
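    As a rough illustration of the quantities involved, the reported equilibrium partition ratio and the known Rn-222 half-life can be combined to back-calculate an air concentration from a charcoal measurement. This is a sketch under stated assumptions, not the authors' kinetic model:

```python
import math

RN222_HALF_LIFE_H = 3.8235 * 24.0   # Rn-222 half-life, hours

def air_concentration(charcoal_bq_per_kg, delay_h, k_m3_per_kg=6.5):
    """Back-calculate the radon concentration in air (Bq/m^3) from the
    activity measured in the charcoal, correcting for radioactive decay
    during the delay between sampling and counting.

    k_m3_per_kg is the equilibrium partition ratio; the 6.5 m^3/kg
    default sits inside the 6.1-6.8 range reported in the abstract.
    The function itself is an illustrative sketch."""
    lam = math.log(2) / RN222_HALF_LIFE_H          # decay constant, 1/h
    decay_corrected = charcoal_bq_per_kg * math.exp(lam * delay_h)
    return decay_corrected / k_m3_per_kg
```

    With zero counting delay, 13 Bq/kg in the charcoal corresponds to 2 Bq/m³ in air, the detection limit quoted in the abstract.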

  5. Detection of 12 respiratory viruses by duplex real time PCR assays in respiratory samples.

    Science.gov (United States)

    Arvia, Rosaria; Corcioli, Fabiana; Ciccone, Nunziata; Della Malva, Nunzia; Azzi, Alberta

    2015-12-01

    Different viruses can be responsible for similar clinical manifestations of respiratory infections, so the etiological diagnosis of respiratory viral diseases requires the detection of a large number of viruses. In this study, 6 duplex real-time PCR assays using EvaGreen intercalating dye were developed to detect 12 major viruses responsible for respiratory diseases: influenza A and B viruses, enteroviruses (including enterovirus spp. and rhinovirus spp.), respiratory syncytial virus, human metapneumovirus, coronavirus groups I (of which CoV 229E and CoV NL63 are part) and II (including CoV OC43 and CoV HKU1), parainfluenza virus types 1, 2, 3 and 4, human adenoviruses and human bocaviruses. The 2 target viruses of each duplex reaction were distinguishable by the melting temperatures of their amplicons. The 6 duplex real-time PCR assays were applied for diagnostic purposes on 202 respiratory samples from 157 patients; 157 samples were throat swabs and 45 were bronchoalveolar lavages. The results of the duplex PCR assays were confirmed by comparison with a commercial, validated assay; in addition, the positive results were confirmed by sequencing. The analytical sensitivity of the duplex PCR assays varied from 10³ copies/ml to 10⁴ copies/ml; for parainfluenza virus 2 only, it was 10⁵ copies/ml. Seventy clinical samples (35%) from 55 patients (30 children and 25 adults) were positive for 1 or more viruses. In adult patients, influenza A virus was the most frequently detected respiratory virus, followed by rhinoviruses. In contrast, respiratory syncytial virus was the most common virus in children, followed by enteroviruses, influenza A virus and coronavirus NL63. The small number of samples/patients does not allow us to draw any epidemiological conclusion. Altogether, the results of this study indicate that the 6 duplex PCR assays described here are sensitive, specific and cost-effective. Thus, this assay could be

  6. Sample-size dependence of diversity indices and the determination of sufficient sample size in a high-diversity deep-sea environment

    OpenAIRE

    Soetaert, K.; Heip, C.H.R.

    1990-01-01

    Diversity indices, although designed for comparative purposes, often cannot be used as such due to their sample-size dependence. It is argued here that this dependence is more pronounced in high-diversity than in low-diversity assemblages, and that indices more sensitive to rarer species require larger sample sizes to estimate diversity with reasonable precision than indices which put more weight on commoner species. This was tested for Hill's diversity numbers N₀ to N∞ ...
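    Hill's diversity numbers have standard closed forms, so the family the authors tested can be sketched directly. The implementation below assumes the usual definitions (N₀ richness, N₁ exponential Shannon, N₂ inverse Simpson, N∞ reciprocal of the largest proportion):

```python
import math

def hill_numbers(counts):
    """Hill's diversity numbers from a list of species counts:
    N0 = species richness, N1 = exp(Shannon entropy),
    N2 = inverse Simpson concentration, Ninf = 1 / max proportion.
    Lower orders weight rare species more heavily, which is why they
    need larger samples to stabilise."""
    total = sum(counts)
    p = [c / total for c in counts if c > 0]
    n0 = len(p)
    n1 = math.exp(-sum(pi * math.log(pi) for pi in p))
    n2 = 1.0 / sum(pi * pi for pi in p)
    ninf = 1.0 / max(p)
    return n0, n1, n2, ninf

# A perfectly even community gives (approximately) the same value at
# every order, e.g. hill_numbers([10, 10, 10, 10]) is close to
# (4, 4.0, 4.0, 4.0); skewed communities drop off with increasing order.
```

    For a skewed assemblage the sequence N₀ ≥ N₁ ≥ N₂ ≥ N∞ spreads out, and the low-order end is exactly where undersampling bites hardest.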

  7. Improvement of sampling plans for Salmonella detection in pooled table eggs by use of real-time PCR.

    Science.gov (United States)

    Pasquali, Frédérique; De Cesare, Alessandra; Valero, Antonio; Olsen, John Emerdhal; Manfreda, Gerardo

    2014-08-01

    Eggs and egg products have been described as the most critical food vehicles of salmonellosis. The prevalence and level of contamination of Salmonella on table eggs are low, which severely affects the sensitivity of the sampling plans applied voluntarily in some European countries, where one to five pools of 10 eggs are tested by the culture-based reference method ISO 6579:2004. In the current study we compared the testing sensitivity of the reference culture method ISO 6579:2004 and an alternative real-time PCR method on Salmonella-contaminated egg pools of different sizes (4-9 uninfected eggs mixed with one contaminated egg) and contamination levels (10⁰-10¹, 10¹-10², 10²-10³ CFU/eggshell). Two hundred and seventy samples, corresponding to 15 replicates per pool size and inoculum level, were tested. At the lowest contamination level, real-time PCR detected Salmonella in 40% of contaminated pools vs. 12% using ISO 6579. The results were used in a Monte Carlo simulation to estimate the lowest number of sample units that must be tested for 95% certainty of not falsely accepting a contaminated lot. According to this simulation, at least 16 pools of 10 eggs each need to be tested by ISO 6579 to obtain this confidence level, while the minimum number was reduced to 8 pools of 9 eggs each when real-time PCR was applied as the analytical method. This result underlines the importance of including analytical methods with higher sensitivity in order to improve the efficiency of sampling and reduce the number of samples to be tested. Copyright © 2013 Elsevier B.V. All rights reserved.
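    The sampling-plan logic can be approximated with a simple binomial calculation. This is a simplified stand-in for the study's Monte Carlo simulation (which averaged over pool sizes and contamination levels), so it will not reproduce the paper's exact pool counts:

```python
import math

def pools_needed(per_pool_sensitivity, confidence=0.95):
    """Smallest number of pools from a contaminated lot that must be
    tested so that the probability of at least one positive result
    reaches `confidence`, assuming each pool is detected independently
    with probability per_pool_sensitivity.

    Simplified binomial sketch of the acceptance-sampling logic; the
    study's Monte Carlo simulation over contamination levels gives
    different (and more realistic) pool counts."""
    return math.ceil(math.log(1.0 - confidence)
                     / math.log(1.0 - per_pool_sensitivity))

# A method that detects a contaminated pool only 12% of the time needs
# far more pools than one with 40% sensitivity:
n_culture, n_pcr = pools_needed(0.12), pools_needed(0.40)
```

    The driving effect is the same as in the paper: raising per-pool sensitivity shrinks the number of pools needed for the same lot-level confidence.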

  8. Open Probe fast GC-MS - combining ambient sampling ultra-fast separation and in-vacuum ionization for real-time analysis.

    Science.gov (United States)

    Keshet, U; Alon, T; Fialkov, A B; Amirav, A

    2017-07-01

    An Open Probe inlet was combined with a low-thermal-mass ultra-fast gas chromatograph (GC), an in-vacuum electron ionization ion source and the mass spectrometer (MS) of a GC-MS to obtain real-time analysis with separation. The Open Probe enables ambient sampling via sample vaporization in an oven that is open to room air, and the ultra-fast GC provides ~30-s separations; if no separation is required, it can act as a transfer line with a 2-3-s sample transfer time. Sample analysis is as simple as touching the sample, pushing the sample holder into the Open Probe oven and obtaining the results in 30 s. The Open Probe fast GC was mounted on a standard Agilent 7890 GC coupled with an Agilent 5977A MS. Open Probe fast GC-MS provides real-time analysis combined with GC separation and library identification, and it uses the low-cost MS of a GC-MS. The operation of Open Probe fast GC-MS is demonstrated in the 30-s separation and 50-s full analysis cycle time of tetrahydrocannabinol and cannabinol in Cannabis flower, sub-1-min analysis of trace trinitrotoluene transferred from a finger onto a glass surface, vitamin E in canola oil, sterols in olive oil, polybrominated flame retardants in plastics, alprazolam in a Xanax drug pill, and free fatty acids and cholesterol in human blood. The extrapolated limit of detection for pyrene is … Open Probe fast GC-MS is further demonstrated in the analysis of heroin in its street drug powder. The use of the Open Probe with the fast GC acting as a transfer line is demonstrated in <10-s analyses, without separation, of ibuprofen and estradiol. Copyright © 2017 John Wiley & Sons, Ltd.

  9. Quantification of Campylobacter spp. in chicken rinse samples by using flotation prior to real-time PCR

    DEFF Research Database (Denmark)

    Wolffs, Petra; Norling, Börje; Hoorfar, Jeffrey

    2005-01-01

    Real-time PCR is fast, sensitive and specific, and can deliver quantitative data; two disadvantages, however, are that this technology is sensitive to inhibition by food and that it does not distinguish between DNA originating from viable, viable nonculturable (VNC) and dead cells. For this reason, real-time PCR was combined with a novel discontinuous buoyant density gradient method, called flotation, to allow detection of only viable and VNC cells of thermotolerant campylobacters in chicken rinse samples. Studying the buoyant densities of different Campylobacter spp. showed … enrichment, and amounts as low as 2.6 × 10³ CFU/ml could be quantified. Furthermore, subjecting viable cells and dead cells to flotation showed that viable cells were recovered after flotation treatment but that dead cells and/or their DNA were not detected. Also, when samples containing VNC cells mixed

  10. MDMA-assisted psychotherapy using low doses in a small sample of women with chronic posttraumatic stress disorder.

    Science.gov (United States)

    Bouso, José Carlos; Doblin, Rick; Farré, Magí; Alcázar, Miguel Angel; Gómez-Jarabo, Gregorio

    2008-09-01

    The purpose of this study was to investigate the safety of different doses of MDMA-assisted psychotherapy administered in a psychotherapeutic setting to women with chronic PTSD secondary to a sexual assault, and also to obtain preliminary data regarding efficacy. Although this study was originally planned to include 29 subjects, political pressures led to the closing of the study before it could be finished, at which time only six subjects had been treated. Preliminary results from those six subjects are presented here. We found that low doses of MDMA (between 50 and 75 mg) were both psychologically and physiologically safe for all the subjects. Future studies in larger samples and using larger doses are needed in order to further clarify the safety and efficacy of MDMA in the clinical setting in subjects with PTSD.

  11. Time and temperature affect glycolysis in blood samples regardless of fluoride-based preservatives: a potential underestimation of diabetes.

    Science.gov (United States)

    Stapleton, Mary; Daly, Niamh; O'Kelly, Ruth; Turner, Michael J

    2017-11-01

    Background: The inhibition of glycolysis prior to glucose measurement is an important consideration when interpreting glucose tolerance tests. This is particularly important in gestational diabetes mellitus, where prompt diagnosis and treatment are essential. A study was planned to investigate the effect of preservatives and temperature on glycolysis. Methods: Blood samples for glucose were obtained from consented females. Lithium-heparin and fluoride-EDTA samples transported rapidly in ice slurry to the laboratory were analysed for glucose concentration and then held either in ice slurry or at room temperature for varying time intervals. Paired fluoride-citrate samples were received at room temperature and held at room temperature, with analysis at similar time intervals. Results: No significant difference was noted between mean glucose concentrations when comparing the different sample types received in ice slurry. The mean glucose concentrations decreased significantly for both sets of samples when held at room temperature (0.4 mmol/L) and in ice slurry (0.2 mmol/L). A review of patient glucose tolerance tests reported in our hospital indicated that 17.8% exceeded the recommended diagnostic criteria for gestational diabetes mellitus. It was predicted that if the results of fasting samples were revised to reflect the effect of glycolysis at room temperature, the adjusted diagnostic rate could increase to 35.3%. Conclusion: Preanalytical handling of blood samples for glucose analysis is vital. Fluoride-EDTA is an imperfect antiglycolytic, even when samples are transported and analysed rapidly under such optimal conditions. The use of fluoride-citrate tubes may offer a viable alternative in the diagnosis of diabetes mellitus.

  12. Detection of Mycoplasma genitalium in female cervical samples by Multitarget Real-Time PCR

    Directory of Open Access Journals (Sweden)

    Sabina Mahmutović-Vranić

    2007-05-01

    Mycoplasma genitalium (MG) is associated with a variety of urogenital infections such as non-gonococcal urethritis (NGU), endometritis and cervicitis. The objective of this study was to demonstrate and evaluate a research polymerase chain reaction (PCR) assay for the detection of MG in cervical samples of a tested population of women attending gynecology clinics in Bosnia and Herzegovina. Multitarget Real-Time (MTRT) PCR, utilizing the ABI 7900HT sequence detection system, was performed for the detection of MG. Cervical samples (N=97) were divided into three patient groups: Group 1, patients with known abnormal clinical cytology reports (N=34); Group 2, patients who reported a history of genitourinary infections (N=22); and Group 3, patients in neither group 1 nor 2 (N=41). Overall, 14.43% (14/97) of those tested were positive for MG. A positive sample was defined as having a cycle threshold (Ct) cross point < 40.0 with fluorescent detection comparable to the low positive control utilized during the run. This study validated the use of MTRT PCR as a reliable method for the detection of MG in clinical specimens and should facilitate large-scale screening for this organism.

  13. Robust Adaptive Stabilization of Linear Time-Invariant Dynamic Systems by Using Fractional-Order Holds and Multirate Sampling Controls

    Directory of Open Access Journals (Sweden)

    S. Alonso-Quesada

    2010-01-01

    This paper presents a strategy for designing a robust discrete-time adaptive controller for stabilizing linear time-invariant (LTI) continuous-time dynamic systems. Such systems may be unstable and non-inversely stable in the worst case. A reduced-order model is considered for designing the adaptive controller. The control design is based on discretization of the system using a multirate sampling device with a fast-sampled control signal. A suitable on-line adaptation of the multirate gains guarantees the stability of the inverse of the discretized estimated model, which is used to parameterize the adaptive controller. A dead zone is included in the parameter-estimation algorithm for robustness under the presence of unmodeled dynamics in the controlled system. The adaptive controller guarantees the boundedness of the measured system signal for all time. Some examples illustrate the efficacy of this control strategy.
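    A fractional-order hold, one common form of the hold devices used in such discretization schemes, can be sketched in a few lines. This models only the signal-reconstruction step between samples, under the usual textbook definition, and none of the paper's multirate or adaptive machinery:

```python
def froh(u_samples, beta, period, t):
    """Fractional-order hold (FROH) reconstruction of a sampled control
    signal: on each interval [kT, (k+1)T) the output is
    u_k + beta * (u_k - u_{k-1}) * (t - kT) / T.
    beta = 0 reduces to the familiar zero-order hold; beta = 1 gives a
    predictive first-order hold. Illustrative sketch only."""
    k = min(int(t // period), len(u_samples) - 1)
    u_k = u_samples[k]
    u_prev = u_samples[k - 1] if k > 0 else u_k
    frac = (t - k * period) / period
    return u_k + beta * (u_k - u_prev) * frac
```

    The gain beta is a design degree of freedom: choosing it appropriately can improve the stability properties of the discretized plant's zeros, which is the kind of freedom the paper exploits with its adaptive multirate gains.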

  14. 78 FR 18902 - Defining Larger Participants of the Student Loan Servicing Market

    Science.gov (United States)

    2013-03-28

    ... BUREAU OF CONSUMER FINANCIAL PROTECTION 12 CFR Part 1090 [Docket No. CFPB-2013-0005] RIN 3170-AA35... Protection. ACTION: Proposed rule; request for public comment. SUMMARY: The Bureau of Consumer Financial Protection (Bureau or CFPB) proposes to amend the regulation defining larger participants of certain consumer...

  15. Larger Gray Matter Volume in the Basal Ganglia of Heavy Cannabis Users Detected by Voxel-Based Morphometry and Subcortical Volumetric Analysis

    Directory of Open Access Journals (Sweden)

    Ana Moreno-Alcázar

    2018-05-01

    Background: Structural imaging studies of cannabis users have found evidence of both cortical and subcortical volume reductions, especially in cannabinoid-receptor-rich regions such as the hippocampus and amygdala. However, the findings have not been consistent. In the present study, we examined a sample of adult heavy cannabis users without other substance abuse to determine whether long-term use is associated with brain structural changes, especially in the subcortical regions. Method: We compared the gray matter volume of 14 long-term, heavy cannabis users with non-using controls. To provide robust findings, we conducted two separate studies using two different MRI techniques, each using the same sample of cannabis users and a different control group; the control groups were independent of each other. First, whole-brain voxel-based morphometry (VBM) was used to compare the cannabis users against 28 matched controls (HC1 group). Second, a volumetric analysis of subcortical regions was performed to assess differences between the cannabis users and a sample of 100 matched controls (HC2 group) obtained from a local database of healthy volunteers. Results: The VBM study revealed that, compared to the control group HC1, the cannabis users showed no cortical differences and no smaller volume in any subcortical structure, but showed a cluster (p < 0.001) of larger GM volume in the basal ganglia, involving the caudate, putamen, pallidum, and nucleus accumbens, bilaterally. The subcortical volumetric analysis revealed that, compared to the control group HC2, the cannabis users showed significantly larger volumes in the putamen (p = 0.001) and pallidum (p = 0.0015). Subtle trends, significant only at the uncorrected level, were also found in the caudate (p = 0.05) and nucleus accumbens (p = 0.047). Conclusions: This study does not support previous findings of hippocampal and/or amygdala structural changes in long-term, heavy cannabis users.
It

  16. Action video game players and deaf observers have larger Goldmann visual fields.

    Science.gov (United States)

    Buckley, David; Codina, Charlotte; Bhardwaj, Palvi; Pascalis, Olivier

    2010-03-05

    We used Goldmann kinetic perimetry to compare how training and congenital auditory deprivation may affect the size of the visual field. We measured the ability of action video game players and deaf observers to detect small moving lights at various locations in the central (around 30 degrees from fixation) and peripheral (around 60 degrees) visual fields. Experiment 1 found that 10 habitual video game players showed significantly larger central and peripheral field areas than 10 controls. In Experiment 2 we found that 13 congenitally deaf observers had significantly larger visual fields than 13 hearing controls for both the peripheral and central fields; here the greatest differences were found in the lower parts of the fields. Comparison of the two groups showed that whereas video game players have a fairly uniform increase in field size across both central and peripheral fields, deaf observers show non-uniform increases, greatest in the lower parts of the visual field.

  17. Predicting Time Spent in Treatment in a Sample of Danish Survivors of Child Sexual Abuse.

    Science.gov (United States)

    Fletcher, Shelley; Elklit, Ask; Shevlin, Mark; Armour, Cherie

    2017-07-01

    The aim of this study was to identify significant predictors of length of time spent in treatment. In a convenience sample of 439 Danish survivors of child sexual abuse, predictors of time spent in treatment were examined. Assessments were conducted on a 6-month basis over a period of 18 months. A multinomial logistic regression analysis revealed that the experience of neglect in childhood and having experienced rape at any life stage were associated with less time in treatment. Higher educational attainment and being male were associated with staying in treatment for longer periods of time. These factors may be important for identifying those at risk of terminating treatment prematurely. It is hoped that a better understanding of the factors that predict time spent in treatment will help to improve treatment outcomes for individuals who are at risk of dropping out of treatment at an early stage.

  18. Scheduling whole-air samples above the Trade Wind Inversion from SUAS using real-time sensors

    Science.gov (United States)

    Freer, J. E.; Greatwood, C.; Thomas, R.; Richardson, T.; Brownlow, R.; Lowry, D.; MacKenzie, A. R.; Nisbet, E. G.

    2015-12-01

    Small Unmanned Air Systems (SUAS) are increasingly being used for a range of science applications. Here we explore their use to schedule the sampling of air masses up to 2.5 km above ground using computer-controlled, bespoke octocopter platforms. Whole-air sampling is targeted above, within and below the Trade Wind Inversion (TWI). On-board sensors profiled the TWI characteristics in real time on ascent and hence guided the altitudes at which samples were taken on descent. The science driver for this research is investigation of the Southern Methane Anomaly and, more broadly, the hemispheric-scale transport of long-lived atmospheric tracers in the remote troposphere. Here we focus on the practical application of SUAS for this purpose, highlighting the need for mission planning, computer control, on-board sensors and logistics in deploying such technologies for beyond-line-of-sight applications. We show how such a platform can be deployed successfully, resulting in some 60 sampling flights within a 10-day period. Challenges remain in deploying such platforms routinely and cost-effectively, particularly regarding training and support. We present some initial results from the methane sampling and their implications for exploring and understanding the Southern Methane Anomaly.

  19. Recipe Book for Larger Benthic Foraminifera X-ray Investigation: a Process Approach

    Science.gov (United States)

    Wolfgring, E.; Briguglio, A.; Hohenegger, J.

    2012-04-01

    In recent years X-ray microtomography (microCT) has become an essential imaging tool in micropaleontology. Beyond the highest standards of accuracy, a well-conducted microCT scan aims to resolve the whole specimen in constant quality, free from artifacts and visual interference. Normally, getting used to X-ray techniques and obtaining usable results takes countless attempts and an enormous amount of wasted time. This work provides insight into how the best exploitable results can be obtained from the scanning process for Larger Benthic Foraminifera (LBF). As each specimen features different characteristics regarding composition, density and state of preservation, it is impossible, and probably erroneous, to give standardized guidelines even within this systematic group; depending on the attributes of the specimen and on the desired visualization, several details have to be taken into account. Sample preparation: to get sharp images, the X-ray has to cross the specimen along its shortest diameter; for LBF the equatorial view is almost always the best positioning (not for alveolinids!). The container itself has to be chosen wisely as well: it must not hinder penetration of the specimen by the X-ray and has to provide a high degree of stability. Small plastic pipettes are perfect for holding the specimen (or specimens), and some cardboard may help in keeping the position. The nature and quality of the paste used to fix the object and its container are essential to ensure smooth rotation of the specimen, which is indispensable for consistent image quality and for avoiding vibrations. Scan parameters: besides the correct choice of dedicated filters (which always differ between workstations), the settings for kV, µA and resolution may have to be revised for each new object to deliver optimal results. Standard values for hyaline forms with empty chambers are normally around 80 kV and 100 µA

  20. Meeting Air Transportation Demand in 2025 by Using Larger Aircraft and Alternative Routing to Complement NextGen Operational Improvements

    Science.gov (United States)

    Smith, Jeremy C.; Guerreiro, Nelson M.; Viken, Jeffrey K.; Dollyhigh, Samuel M.; Fenbert, James W.

    2010-01-01

    A study was performed investigating the use of larger aircraft and alternative routing to complement the capacity benefits expected from the Next Generation Air Transportation System (NextGen) in 2025. National Airspace System (NAS) delays for the 2025 demand projected by the Transportation Systems Analysis Model (TSAM) were assessed using NASA's Airspace Concept Evaluation System (ACES). The shift in demand from commercial airline to automobile, and from one airline route to another, was investigated by adding the route delays determined from the ACES simulation to the travel times used in the TSAM and re-generating new flight scenarios. The ACES simulation results determined that NextGen operational improvements alone do not provide sufficient airport capacity to meet the projected demand for passenger air travel in 2025 without significant system delays. Using larger aircraft with more seats on high-demand routes and introducing new direct routes where demand warrants significantly reduces delays, complementing NextGen improvements. Another significant finding of this study is that the adaptive behavior of passengers in avoiding congested airline routes is an important factor when projecting demand for transportation systems. Passengers will choose an alternative mode of transportation or alternative airline routes to avoid congested routes, thereby reducing delays to acceptable levels for the 2025 scenario; the penalty is that alternative routes and the option to drive increase overall trip time by 0.4% and may be less convenient than the first-choice route.

  1. Local activation time sampling density for atrial tachycardia contact mapping: how much is enough?

    Science.gov (United States)

    Williams, Steven E; Harrison, James L; Chubb, Henry; Whitaker, John; Kiedrowicz, Radek; Rinaldi, Christopher A; Cooklin, Michael; Wright, Matthew; Niederer, Steven; O'Neill, Mark D

    2018-02-01

    Local activation time (LAT) mapping forms the cornerstone of atrial tachycardia diagnosis. Although the anatomic and positional accuracy of electroanatomic mapping (EAM) systems has been validated, the effect of electrode sampling density on LAT map reconstruction is not known. Here, we study the effect of chamber geometry and activation complexity on optimal LAT sampling density using a combined in silico and in vivo approach. In vivo, 21 atrial tachycardia maps were studied in three groups: (1) focal activation, (2) macro-re-entry, and (3) localized re-entry. In silico, activation was simulated on a 4 × 4 cm atrial monolayer, sampled randomly at 0.25-10 points/cm², and used to re-interpolate LAT maps. Activation patterns were studied in the geometrically simple porcine right atrium (RA) and the complex human left atrium (LA). Activation complexity was introduced into the porcine RA by incomplete inter-caval linear ablation. In all cases, optimal sampling density was defined as the highest density resulting in minimal further error reduction in the re-interpolated maps. Optimal sampling densities for LA tachycardias were 0.67 ± 0.17 points/cm² (focal activation), 1.05 ± 0.32 points/cm² (macro-re-entry) and 1.23 ± 0.26 points/cm² (localized re-entry), P = 0.0031. Increasing activation complexity was associated with increased optimal sampling density both in silico (focal activation 1.09 ± 0.14 points/cm²; re-entry 1.44 ± 0.49 points/cm²; spiral-wave 1.50 ± 0.34 points/cm²) and in vivo, where linear ablation increased the optimal sampling density (0.61 ± 0.22 points/cm² vs. 1.0 ± 0.34 points/cm², P = 0.0015). Optimal sampling densities can be identified to maximize the diagnostic yield of LAT maps. Greater sampling density is required to correctly reveal complex activation and to represent activation across complex geometries. Overall, the optimal sampling density for LAT map interpolation defined in this study was ∼1.0-1.5 points/cm². Published on behalf of the European Society of
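    The in-silico part of the protocol, random sampling of a simulated monolayer followed by re-interpolation, can be mimicked with a toy nearest-neighbour reconstruction. The planar wavefront, grid size and interpolation scheme below are illustrative assumptions, not the authors' methods:

```python
import numpy as np

def lat_interp_error(density_pts_per_cm2, size_cm=4.0, res=40, seed=0):
    """RMS error (ms) of a nearest-neighbour LAT reconstruction on a
    size_cm x size_cm monolayer sampled at the given density
    (points/cm^2). A planar wavefront (LAT proportional to x + y)
    stands in for the simulated activation; this is an illustrative
    re-creation of the in-silico experiment, not the authors' code."""
    rng = np.random.default_rng(seed)
    xs = np.linspace(0.0, size_cm, res)
    gx, gy = np.meshgrid(xs, xs)
    true_lat = 10.0 * (gx + gy)                         # planar activation, ms
    n = max(1, int(density_pts_per_cm2 * size_cm ** 2))
    px = rng.uniform(0.0, size_cm, n)
    py = rng.uniform(0.0, size_cm, n)
    samples = 10.0 * (px + py)                          # exact LAT at electrodes
    # assign every grid node the LAT of its nearest sampled point
    d2 = (gx[..., None] - px) ** 2 + (gy[..., None] - py) ** 2
    recon = samples[np.argmin(d2, axis=-1)]
    return float(np.sqrt(np.mean((recon - true_lat) ** 2)))

# Error falls steeply at low densities and shows diminishing returns
# thereafter, which is the intuition behind an "optimal" density.
```

    Repeating this with more complex simulated wavefronts (re-entry, spiral waves) would push the knee of the error curve to higher densities, as the study reports.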

  2. Investigation of Larger Poly(α-Methylstyrene) Mandrels for High Gain Designs Using Microencapsulation

    International Nuclear Information System (INIS)

    Takagi, Masaru; Cook, Robert; McQuillan, Barry; Gibson, Jane; Paguio, Sally

    2004-01-01

    In recent years we have demonstrated that 2-mm-diameter poly(α-methylstyrene) mandrels meeting indirect drive NIF surface symmetry specifications can be produced using microencapsulation methods. Recently, higher gain target designs have been introduced that rely on frequency-doubled (green) laser energy and require capsules up to 4 mm in diameter, nominally meeting the same surface finish and symmetry requirements as the existing 2-mm-diameter capsule designs. Direct drive on the NIF also requires larger capsules. In order to evaluate whether the current microencapsulation-based mandrel fabrication techniques will adequately scale to these larger capsules, we have explored extending the techniques to 4-mm-diameter capsules. We find that microencapsulated shells meeting NIF symmetry specifications can be produced; the processing changes necessary to accomplish this are presented here.

  3. Reproducibility of preclinical animal research improves with heterogeneity of study samples

    Science.gov (United States)

    Vogt, Lucile; Sena, Emily S.; Würbel, Hanno

    2018-01-01

    Single-laboratory studies conducted under highly standardized conditions are the gold standard in preclinical animal research. Using simulations based on 440 preclinical studies across 13 different interventions in animal models of stroke, myocardial infarction, and breast cancer, we compared the accuracy of effect size estimates between single-laboratory and multi-laboratory study designs. Single-laboratory studies generally failed to predict effect size accurately, and larger sample sizes rendered effect size estimates even less accurate. By contrast, multi-laboratory designs including as few as 2 to 4 laboratories increased coverage probability by up to 42 percentage points without a need for larger sample sizes. These findings demonstrate that within-study standardization is a major cause of poor reproducibility. More representative study samples are required to improve the external validity and reproducibility of preclinical animal research and to prevent wasting animals and resources for inconclusive research. PMID:29470495
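
    The single- versus multi-laboratory comparison above can be illustrated with a small Monte Carlo sketch. All parameters (between-lab and within-lab standard deviations, sample sizes, number of simulations) are made up for illustration and are not taken from the 440-study dataset.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def coverage(n_labs, mice_per_lab, n_sim=2000,
                 between_lab_sd=0.5, within_sd=1.0, true_effect=1.0):
        """Fraction of simulated studies whose nominal 95% CI for the
        population effect covers the true effect, when each lab has its
        own random offset (the within-study standardization problem)."""
        hits = 0
        for _ in range(n_sim):
            lab_offsets = rng.normal(0, between_lab_sd, n_labs)
            data = rng.normal(true_effect + lab_offsets[:, None],
                              within_sd, (n_labs, mice_per_lab)).ravel()
            m, se = data.mean(), data.std(ddof=1) / np.sqrt(data.size)
            hits += abs(m - true_effect) < 1.96 * se
        return hits / n_sim

    single = coverage(n_labs=1, mice_per_lab=24)   # one standardized lab
    multi = coverage(n_labs=4, mice_per_lab=6)     # same total sample size
    # the multi-lab design's coverage is far closer to the nominal 95%,
    # with no increase in the total number of animals
    ```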

  4. Role of time series sediment traps in understanding the Indian summer monsoon

    Digital Repository Service at National Institute of Oceanography (India)

    Guptha, M.V.S.

    particles or sediment or larger accumulations called marine snow – which are composed of organic matter, dead organisms, tiny shells, dust particles, minerals etc. These sediment traps are capable of collecting settling particles hourly to yearly time... were demonstrated by Honjo (1982) and Deuser et al. (1983). Analysis of the samples helps in understanding how fast various particulate matter, nutrients etc move from the ocean surface to the deep ocean. Because these materials are largely used...

  5. Information sampling behavior with explicit sampling costs

    Science.gov (United States)

    Juni, Mordechai Z.; Gureckis, Todd M.; Maloney, Laurence T.

    2015-01-01

    The decision to gather information should take into account both the value of information and its accrual costs in time, energy and money. Here we explore how people balance the monetary costs and benefits of gathering additional information in a perceptual-motor estimation task. Participants were rewarded for touching a hidden circular target on a touch-screen display. The target’s center coincided with the mean of a circular Gaussian distribution from which participants could sample repeatedly. Each “cue” — sampled one at a time — was plotted as a dot on the display. Participants had to repeatedly decide, after sampling each cue, whether to stop sampling and attempt to touch the hidden target or continue sampling. Each additional cue increased the participants’ probability of successfully touching the hidden target but reduced their potential reward. Two experimental conditions differed in the initial reward associated with touching the hidden target and the fixed cost per cue. For each condition we computed the optimal number of cues that participants should sample, before taking action, to maximize expected gain. Contrary to recent claims that people gather less information than they objectively should before taking action, we found that participants over-sampled in one experimental condition, and did not significantly under- or over-sample in the other. Additionally, while the ideal observer model ignores the current sample dispersion, we found that participants used it to decide whether to stop sampling and take action or continue sampling, a possible consequence of imperfect learning of the underlying population dispersion across trials. PMID:27429991
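
    The ideal-observer computation described above (expected gain as a function of the number of cues) can be sketched in closed form. For an isotropic 2-D Gaussian, the mean of n cues has a Rayleigh-distributed radial error, which gives an explicit hit probability. The reward, cost per cue, target radius, and cue dispersion below are illustrative assumptions, not the experiment's actual payoffs.

    ```python
    import math

    def p_hit(n, sigma=1.0, target_r=0.6):
        """P(mean of n cues lands within target_r of the true centre) for an
        isotropic 2-D Gaussian: Rayleigh CDF with scale sigma/sqrt(n)."""
        return 1.0 - math.exp(-target_r**2 * n / (2.0 * sigma**2))

    def expected_gain(n, reward=100.0, cost_per_cue=3.0):
        """Hit probability times the reward remaining after n cue costs."""
        return p_hit(n) * (reward - cost_per_cue * n)

    gains = {n: expected_gain(n) for n in range(1, 34)}
    n_opt = max(gains, key=gains.get)
    # the optimum balances a rising hit probability against a shrinking reward
    ```

    Comparing a participant's actual stopping point against `n_opt` is the over-/under-sampling comparison the study reports.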

  6. Analysis of sample composition using resonant ionization and time-of-flight techniques

    International Nuclear Information System (INIS)

    Cruz, A. de la; Ortiz, M.; Campos, J.

    1995-01-01

    This paper describes the setting up of a linear time-of-flight mass spectrometer that uses a tunable laser to produce resonant ionization of atoms and molecules in a pulsed supersonic beam. The ability of this kind of system to produce time-resolved signals for each species present in the sample allows quantitative analysis of its composition. By using a tunable laser beam of high spectral resolution to produce ionization, studies based on the structure of the photoionization spectra obtained are possible. In the present work several isotopic species of ordinary and deuterated benzene have been studied. Special care has been dedicated to the influence of the presence of a 13C in the ring. In this way values for spectroscopic constants and isotopic shifts have been obtained. Another system based on a homemade proportional counter has been designed and used as an auxiliary system. The results obtained with it are independent of those mentioned above and compatible with them. This system is of great utility for laser wavelength tuning to produce ionization in the mass spectrometer. (Author) 98 refs

  7. Analysis of sample composition using resonant ionization and time-of-flight techniques

    International Nuclear Information System (INIS)

    Luz, A. de la; Ortiz, M.; Campos, J.

    1995-01-01

    This paper describes the setting up of a linear time-of-flight mass spectrometer that uses a tunable laser to produce resonant ionization of atoms and molecules in a pulsed supersonic beam. The ability of this kind of system to produce time-resolved signals for each species present in the sample allows quantitative analysis of its composition. By using a tunable laser beam of high spectral resolution to produce ionization, studies based on the structure of the photoionization spectra obtained are possible. In the present work several isotopic species of ordinary and deuterated benzene have been studied. Special care has been dedicated to the influence of the presence of a 13C in the ring. In this way values for spectroscopic constants and isotopic shifts have been obtained. Another system based on a homemade proportional counter has been designed and used as an auxiliary system. The results obtained with it are independent of those mentioned above and compatible with them. This system is of great utility for laser wavelength tuning to produce ionization in the mass spectrometer.

  8. Time-of-flight mass spectrometry of laser exploding foil initiated PETN samples

    Science.gov (United States)

    Fajardo, Mario E.; Molek, Christopher D.; Fossum, Emily C.

    2017-01-01

    We report the results of time-of-flight mass spectrometry (TOFMS) measurements of the gaseous products of thin-film pentaerythritol tetranitrate [PETN, C(CH2NO3)4] samples reacting in vacuo. The PETN sample spots are produced by masked physical vapor deposition [A.S. Tappan, et al., AIP Conf. Proc. 1426, 677 (2012)] onto a first-surface aluminum mirror. A pulsed laser beam imaged through the soda lime glass mirror substrate converts the aluminum layer into a high-temperature high-pressure plasma which initiates chemical reactions in the overlying PETN sample. We had previously proposed [E.C. Fossum, et al., AIP Conf. Proc. 1426, 235 (2012)] to exploit differences in gaseous product chemical identities and molecular velocities to provide a chemically-based diagnostic for distinguishing between "detonation-like" and deflagration responses. Briefly: we expect in-vacuum detonations to produce hyperthermal (v ~ 10 km/s) thermodynamically-stable products such as N2, CO2, and H2O, and for deflagrations to produce mostly reaction intermediates, such as NO and NO2, with much slower molecular velocities, consistent with the expansion-quenched thermal decomposition of PETN. We observe primarily slow reaction intermediates (NO2, CH2NO3) at low laser pulse energies, the appearance of NO at intermediate laser pulse energies, and the appearance of hyperthermal CO/N2 at mass 28 amu at the highest laser pulse energies. However, these results are somewhat ambiguous, as the NO, NO2, and CH2NO3 intermediates persist and all species become hyperthermal at the higher laser pulse energies. Also, the purported CO/N2 signal at 28 amu may be contaminated by silicon ablated from the glass mirror substrate. We plan to mitigate these problems in future experiments by adopting the "Buelow" sample configuration, which employs an intermediate foil barrier to shield the energetic material from the laser and the laser-driven plasma [S.J. Buelow, et al., AIP Conf. Proc. 706, 1377 (2003)].

  9. Documentation of particle-size analyzer time series, and discrete suspended-sediment and bed-sediment sample data collection, Niobrara River near Spencer, Nebraska, October 2014

    Science.gov (United States)

    Schaepe, Nathaniel J.; Coleman, Anthony M.; Zelt, Ronald B.

    2018-04-06

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Army Corps of Engineers, monitored a sediment release by Nebraska Public Power District from Spencer Dam located on the Niobrara River near Spencer, Nebraska, during the fall of 2014. The accumulated sediment behind Spencer Dam ordinarily is released semiannually; however, the spring 2014 release was postponed until the fall. Because of the postponement, the scheduled fall sediment release would consist of a larger volume of sediment. The larger than normal sediment release expected in fall 2014 provided an opportunity for the USGS and U.S. Army Corps of Engineers to improve the understanding of sediment transport during reservoir sediment releases. A primary objective was to collect continuous suspended-sediment data during the first days of the sediment release to document rapid changes in sediment concentrations. For this purpose, the USGS installed a laser-diffraction particle-size analyzer at a site near the outflow of the dam to collect continuous suspended-sediment data. The laser-diffraction particle-size analyzer measured volumetric particle concentration and particle-size distribution from October 1 to 2 (pre-sediment release) and October 5 to 9 (during sediment release). Additionally, the USGS manually collected discrete suspended-sediment and bed-sediment samples before, during, and after the sediment release. Samples were collected at two sites upstream from Spencer Dam and at three bridges downstream from Spencer Dam. The resulting datasets and basic metadata associated with the datasets were published as a data release; this report provides additional documentation about the data collection methods and the quality of the data.

  10. Signal sampling circuit

    NARCIS (Netherlands)

    Louwsma, S.M.; Vertregt, Maarten

    2011-01-01

    A sampling circuit for sampling a signal is disclosed. The sampling circuit comprises a plurality of sampling channels adapted to sample the signal in time-multiplexed fashion, each sampling channel comprising a respective track-and-hold circuit connected to a respective analogue-to-digital converter.

  11. Signal sampling circuit

    NARCIS (Netherlands)

    Louwsma, S.M.; Vertregt, Maarten

    2010-01-01

    A sampling circuit for sampling a signal is disclosed. The sampling circuit comprises a plurality of sampling channels adapted to sample the signal in time-multiplexed fashion, each sampling channel comprising a respective track-and-hold circuit connected to a respective analogue-to-digital converter.

  12. Behavior and Body Patterns of the Larger Pacific Striped Octopus.

    Science.gov (United States)

    Caldwell, Roy L; Ross, Richard; Rodaniche, Arcadio; Huffard, Christine L

    2015-01-01

    Over thirty years ago anecdotal accounts of the undescribed Larger Pacific Striped Octopus suggested behaviors previously unknown for octopuses. Beak-to-beak mating, dens shared by mating pairs, inking during mating and extended spawning were mentioned in publications, and enticed generations of cephalopod biologists. In 2012-2014 we were able to obtain several live specimens of this species, which remains without a formal description. All of the unique behaviors listed above were observed for animals in aquaria and are discussed here. We describe the behavior, body color patterns, and postures of 24 adults maintained in captivity. Chromatophore patterns of hatchlings are also shown.

  13. Framing the Discussion: Elections as Components of Larger Political and Cultural Geographies

    Science.gov (United States)

    Knopp, Larry

    2016-01-01

    It is important to remember that elections are but one piece--albeit an important one--of much larger processes of politics and governance. Moreover, in the United States they are increasingly implicated in the construction of identities and places. What goes on in the course of electoral politics (creating electoral systems and voting districts,…

  14. Future time perspective and positive health practices in young adults: an extension.

    Science.gov (United States)

    Mahon, N E; Yarcheski, T J; Yarcheski, A

    1997-06-01

    A sample of 69 young adults attending a public university responded to the Future Time Perspective Inventory, two subscales of the Time Experience Scales (Fast and Slow Tempo), and the Personal Lifestyle Questionnaire in classroom settings. A statistically significant correlation (.52) was found between scores for future time perspective and the ratings for the practice of positive health behaviors in young adults. This correlation was larger than those previously found for middle and late adolescents. Scores on subscales of individual health practices and future time perspective indicated statistically significant correlations for five (.25 to .56) of the six subscales. Scores on neither Fast nor Slow Tempo were related to ratings of positive health practices or ratings on subscales measuring positive health practices.
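
    The significance of the reported correlation can be checked with the standard t test for a correlation coefficient, using the reported r = .52 and n = 69; a minimal sketch:

    ```python
    import math

    def r_to_t(r, n):
        """t statistic for testing H0: rho = 0 with a sample correlation r
        on n - 2 degrees of freedom."""
        return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

    t = r_to_t(0.52, 69)   # reported correlation and sample size
    # t is about 5 on 67 degrees of freedom, far beyond the ~2.0
    # critical value at alpha = .05 (two-tailed)
    ```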

  15. Preanalytic Factors Associated With Hemolysis in Emergency Department Blood Samples.

    Science.gov (United States)

    Phelan, Michael P; Reineks, Edmunds Z; Schold, Jesse D; Hustey, Frederic M; Chamberlin, Janelle; Procop, Gary W

    2018-02-01

    - Hemolysis of emergency department blood samples is a common occurrence and has a negative impact on health care delivery. - To determine the effect of preanalytic factors (straight stick, intravenous [IV] line, needle gauge, location of blood draw, syringe versus vacuum tube use, tourniquet time) on hemolysis in emergency department blood samples. - A single 65 000-visit emergency department's electronic health record was queried for emergency department potassium results and blood draw technique for all samples obtained in calendar year 2014, resulting in 54 531 potassium results. Hemolyzed potassium was measured by hemolysis index. Comparisons of hemolysis by sampling technique were conducted by χ2 tests. - Overall hemolysis was 10.0% (5439 of 54 531). Hemolysis among samples obtained from straight stick was significantly less than among those obtained with an IV line (5.4% [33 of 615] versus 10.2% [4821 of 47 266], P < .001). For IV-placed blood draws, the antecubital location had statistically significantly lower overall hemolysis compared with other locations: 7.4% (2117 of 28 786) versus 14.6% (2622 of 17 960) (P < .001). For blood drawn with a syringe compared with vacuum, hemolysis was 13.0% (92 of 705) and 11.0% (1820 of 16 590), respectively (P = .09, not significant). For large-gauge IV blood draws versus smaller-gauge IV lines, lower hemolysis was also observed (9.3% [3882 of 41 571] versus 16.7% [939 of 5633]) (P < .001). For IV-drawn blood with tourniquet time less than 60 seconds, hemolysis was 10.3% (1362 of 13 162) versus 13.9% for more than 60 seconds (532 of 3832), P < .001. - This study confirmed previous findings that straight stick and antecubital location are significantly associated with reduced hemolysis and indicated that shorter tourniquet time and larger gauge for IV draws were significantly associated with lower hemolysis.
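
    The χ2 comparison of hemolysis proportions can be reproduced from the reported counts (straight stick 33 of 615 versus IV line 4821 of 47 266). This sketch uses the standard Pearson statistic for a 2 × 2 table without continuity correction, which may differ slightly from the study's exact software output.

    ```python
    import math

    def chi2_2x2(a, b, c, d):
        """Pearson chi-square (1 df, no continuity correction) for the
        2x2 table [[a, b], [c, d]]; returns (statistic, two-sided p)."""
        n = a + b + c + d
        stat = n * (a * d - b * c) ** 2 / (
            (a + b) * (c + d) * (a + c) * (b + d))
        # chi-square(1 df) is the square of a standard normal,
        # so the survival function is erfc(sqrt(stat / 2))
        p = math.erfc(math.sqrt(stat / 2.0))
        return stat, p

    # straight stick: 33 hemolyzed of 615; IV line: 4821 of 47 266
    stat, p = chi2_2x2(33, 615 - 33, 4821, 47266 - 4821)
    # p falls far below .001, matching the reported P < .001
    ```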

  16. Larger eggs in resident brown trout living in sympatry with anadromous brown trout

    DEFF Research Database (Denmark)

    Olofsson, H.; Mosegaard, Henrik

    1999-01-01

    Freshwater resident brown trout (Salmo trutta L.) in the stream Jorlandaan (southwestern Sweden) had larger eggs (range of actual mean egg wet weights, 65.9-108.5 mg) than both sympatric migratory trout (76.8-84.2 mg) and trout from five other Swedish streams with allopatric resident (23.7-80.1 mg) or migratory populations (44.5-121.9 mg), after accounting for differences in body size. In Jorlandaan, some resident females even had a larger absolute mean egg weight than any of the migratory females found in the stream. Resident trout had low absolute fecundity, and our data suggest that resident females in Jorlandaan produce large eggs at the expense of their fecundity. The extremely large relative egg size in resident Jorlandaan females suggests that the production of large offspring enhances fitness, possibly through increased fry survival.

  17. Clinical evaluation of a Mucorales-specific real-time PCR assay in tissue and serum samples.

    Science.gov (United States)

    Springer, Jan; Lackner, Michaela; Ensinger, Christian; Risslegger, Brigitte; Morton, Charles Oliver; Nachbaur, David; Lass-Flörl, Cornelia; Einsele, Hermann; Heinz, Werner J; Loeffler, Juergen

    2016-12-01

    Molecular diagnostic assays can accelerate the diagnosis of fungal infections and subsequently improve patient outcomes. In particular, the detection of infections due to Mucorales is still challenging for laboratories and physicians. The aim of this study was to evaluate a probe-based Mucorales-specific real-time PCR assay (Muc18S) using tissue and serum samples from patients suffering from invasive mucormycosis (IMM). This assay can detect a broad range of clinically relevant Mucorales species and can be used to complement existing diagnostic tests or to screen high-risk patients. An advantage of the Muc18S assay is that it exclusively detects Mucorales species, allowing detection of Mucorales DNA without sequencing within a few hours. In paraffin-embedded tissue samples this PCR-based method allowed rapid identification of Mucorales in comparison with standard methods and showed 91% sensitivity in the IMM tissue samples. We also evaluated serum samples, an easily accessible material, from patients at risk of IMM. Mucorales DNA was detected in all patients with probable/proven IMM (100%) and in 29% of the possible cases. Detection of IMM in serum could enable an earlier diagnosis (up to 21 days) than current methods, including tissue samples, which were mainly obtained post-mortem. A screening strategy for high-risk patients, which would enable targeted treatment to improve patient outcomes, is therefore possible.

  18. An audit strategy for time-to-event outcomes measured with error: application to five randomized controlled trials in oncology.

    Science.gov (United States)

    Dodd, Lori E; Korn, Edward L; Freidlin, Boris; Gu, Wenjuan; Abrams, Jeffrey S; Bushnell, William D; Canetta, Renzo; Doroshow, James H; Gray, Robert J; Sridhara, Rajeshwari

    2013-10-01

    Measurement error in time-to-event end points complicates interpretation of treatment effects in clinical trials. Non-differential measurement error is unlikely to produce large bias [1]. When error depends on treatment arm, bias is of greater concern. Blinded-independent central review (BICR) of all images from a trial is commonly undertaken to mitigate differential measurement-error bias that may be present in hazard ratios (HRs) based on local evaluations. Similar BICR and local evaluation HRs may provide reassurance about the treatment effect, but BICR adds considerable time and expense to trials. We describe a BICR audit strategy [2] and apply it to five randomized controlled trials to evaluate its use and to provide practical guidelines. The strategy requires BICR on a subset of study subjects, rather than a complete-case BICR, and makes use of an auxiliary-variable estimator. When the effect size is relatively large, the method provides a substantial reduction in the size of the BICRs. In a trial with 722 participants and a HR of 0.48, an average audit of 28% of the data was needed and always confirmed the treatment effect as assessed by local evaluations. More moderate effect sizes and/or smaller trial sizes required larger proportions of audited images, ranging from 57% to 100% for HRs ranging from 0.55 to 0.77 and sample sizes between 209 and 737. The method is developed for a simple random sample of study subjects. In studies with low event rates, more efficient estimation may result from sampling individuals with events at a higher rate. The proposed strategy can greatly decrease the costs and time associated with BICR, by reducing the number of images undergoing review. The savings will depend on the underlying treatment effect and trial size, with larger treatment effects and larger trials requiring smaller proportions of audited data.

  19. Laser desorption/ionization time-of-flight mass spectrometry of triacylglycerols and other components in fingermark samples.

    Science.gov (United States)

    Emerson, Beth; Gidden, Jennifer; Lay, Jackson O; Durham, Bill

    2011-03-01

    The chemical composition of fingermarks could potentially be important for determining investigative leads, placing individuals at the time of a crime, and has applications as biomarkers of disease. Fingermark samples containing triacylglycerols (TAGs) and other components were analyzed using laser desorption/ionization (LDI) time-of-flight mass spectrometry (TOF MS). Only LDI appeared to be useful for this application while conventional matrix-assisted LDI-TOF MS was not. Tandem MS was used to identify/confirm selected TAGs. A limited gender comparison, based on a simple t-distribution and peaks intensities, indicated that two TAGs showed gender specificity at the 95% confidence level and two others at 97.5% confidence. Because gender-related TAGs differences were most often close to the standard deviation of the measurements, the majority of the TAGs showed no gender specificity. Thus, LDI-TOF MS is not a reliable indicator of gender based on fingermark analysis. Cosmetic ingredients present in some samples were identified. © 2011 American Academy of Forensic Sciences.

  20. Variability and reliability of POP concentrations in multiple breast milk samples collected from the same mothers.

    Science.gov (United States)

    Kakimoto, Risa; Ichiba, Masayoshi; Matsumoto, Akiko; Nakai, Kunihiko; Tatsuta, Nozomi; Iwai-Shimada, Miyuki; Ishiyama, Momoko; Ryuda, Noriko; Someya, Takashi; Tokumoto, Ieyasu; Ueno, Daisuke

    2018-01-13

    Risk assessment of infant exposure to persistent organic pollutants (POPs) through breast milk, using realistic exposure estimates, is essential to devise future regulation of POPs. However, recent investigations have demonstrated that POP levels in breast milk collected from the same mother show wide daily and monthly variation. To estimate the sample size of breast milk from the same mother needed to obtain reliable POP concentrations, breast milk samples were collected from five mothers living in Japan from 2006 to 2012. Milk samples from each mother were collected 3 to 6 times a day over 3 to 7 consecutive days. Food samples, collected by the duplicate method, were obtained from two mothers during the period of breast milk sample collection. These were employed for POP (PCBs, DDTs, chlordanes, and HCB) analysis. PCB concentrations detected in breast milk samples showed a wide range of variation, with relative standard deviations (RSDs) of up to 63% and 60% on a lipid and wet weight basis, respectively. The time-course trends of those variations did not show any typical pattern among the mothers. A larger PCB intake through food appeared to affect the concentrations in breast milk (lipid weight basis) about 10 h later. Intraclass correlation coefficient (ICC) analyses indicated that good reproducibility of POP concentrations in breast milk requires at least two samples per mother, on both a lipid and a wet weight basis.
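
    The ICC-based reasoning about the number of milk samples per mother can be sketched as follows, on simulated (not actual) POP concentrations: compute a one-way random-effects ICC for single samples, then apply the Spearman-Brown formula for the reliability of a mean of m samples. All variance parameters are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # hypothetical POP levels: 5 mothers x 6 repeat milk samples;
    # between-mother sd 30, within-mother (sample-to-sample) sd 15
    mother_means = rng.normal(100, 30, size=5)
    obs = mother_means[:, None] + rng.normal(0, 15, size=(5, 6))

    def icc_oneway(x):
        """One-way random-effects ICC(1,1) from a mothers-by-samples array."""
        n, k = x.shape
        grand = x.mean()
        msb = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
        msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
        return (msb - msw) / (msb + (k - 1) * msw)

    def icc_mean_of(m, icc_single):
        """Spearman-Brown: reliability of the mean of m samples per mother."""
        return m * icc_single / (1 + (m - 1) * icc_single)

    icc1 = icc_oneway(obs)
    # smallest number of samples per mother whose mean reaches reliability 0.9
    n_needed = next((m for m in range(1, 100)
                     if icc_mean_of(m, icc1) >= 0.9), 99)
    ```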

  1. Response of soil aggregate stability to storage time of soil samples

    International Nuclear Information System (INIS)

    Gerzabek, M.H.; Roessner, H.

    1993-04-01

    The aim of the present study was to investigate the well-known phenomenon of changing aggregate stability values as a result of soil sample storage. In order to evaluate the impact of soil microbial activity, the soil sample was split into three subsamples. Two samples were sterilized by means of chloroform fumigation and gamma irradiation, respectively. However, the aggregate stability measurements at three different dates were not correlated with microbial activity (dehydrogenase activity). The moisture content of the aggregate samples seems to be of higher significance: samples with lower moisture content (range: 0.4 to 1.9%) exhibited higher aggregate stabilities. Thus, air-dried aggregate samples without further treatment do not seem to be suitable for standardized stability measurements. (authors)

  2. Time Clustered Sampling Can Inflate the Inferred Substitution Rate in Foot-And-Mouth Disease Virus Analyses.

    Science.gov (United States)

    Pedersen, Casper-Emil T; Frandsen, Peter; Wekesa, Sabenzia N; Heller, Rasmus; Sangula, Abraham K; Wadsworth, Jemma; Knowles, Nick J; Muwanika, Vincent B; Siegismund, Hans R

    2015-01-01

    With the emergence of analytical software for the inference of viral evolution, a number of studies have focused on estimating important parameters such as the substitution rate and the time to the most recent common ancestor (tMRCA) for rapidly evolving viruses. Coupled with an increasing abundance of sequence data sampled under widely different schemes, an effort to keep results consistent and comparable is needed. This study emphasizes commonly disregarded problems in the inference of evolutionary rates in viral sequence data when sampling is unevenly distributed on a temporal scale through a study of the foot-and-mouth (FMD) disease virus serotypes SAT 1 and SAT 2. Our study shows that clustered temporal sampling in phylogenetic analyses of FMD viruses will strongly bias the inferences of substitution rates and tMRCA because the inferred rates in such data sets reflect a rate closer to the mutation rate rather than the substitution rate. Estimating evolutionary parameters from viral sequences should be performed with due consideration of the differences in short-term and longer-term evolutionary processes occurring within sets of temporally sampled viruses, and studies should carefully consider how samples are combined.

  3. Stability and reliability of glycated haemoglobin measurements in blood samples stored at -20°C.

    Science.gov (United States)

    Venkataraman, Vijayachandrika; Anjana, Ranjit Mohan; Pradeepa, Rajendra; Deepa, Mohan; Jayashri, Ramamoorthy; Anbalagan, Viknesh Prabu; Akila, Bridgitte; Madhu, Sri Venkata; Lakshmy, Ramakrishnan; Mohan, Viswanathan

    2016-01-01

    To validate the stability of glycated haemoglobin (HbA1c) measurements in blood samples stored at -20°C for up to one month. The study group comprised 142 type 2 diabetic subjects visiting a tertiary centre for diabetes in Chennai city in south India. The HbA1c assay was done on a fasting blood sample using the Bio-Rad Variant machine on Day 0 (day of blood sample collection). Several aliquots were stored at -20°C and the assay was repeated on the 3rd, 7th, 15th, and 30th day after sample collection. Bland-Altman plots were constructed and variation in the HbA1c levels on the different days was compared with the Day 0 level. The median differences between HbA1c levels measured on Day 0 and on the 3rd, 7th, 15th, and 30th day after blood collection were 0.0%, 0.2%, 0.3% and 0.5%, respectively. Bland-Altman plot analysis showed that the differences from Day 0 tended to get larger with time, but these were not clinically significant. HbA1c levels are relatively stable for up to 2 weeks if blood samples are stored at -20°C. Copyright © 2016 Elsevier Inc. All rights reserved.
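
    The Bland-Altman construction used above can be sketched on simulated paired HbA1c values; the storage drift and scatter below are assumptions for illustration, not the study's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # hypothetical paired HbA1c values (%): Day 0 vs Day 30 at -20 C,
    # with a small storage drift of +0.05% built in
    day0 = rng.normal(8.0, 1.5, 142)
    day30 = day0 + 0.05 + rng.normal(0, 0.15, 142)

    diff = day30 - day0
    bias = diff.mean()                     # mean difference (the "bias")
    sd = diff.std(ddof=1)
    loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd
    # Bland-Altman: ~95% of the paired differences are expected to fall
    # between the limits of agreement; clinical significance is judged
    # against the width of those limits, not against p-values
    ```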

  4. An evaluation of soil sampling for 137Cs using various field-sampling volumes.

    Science.gov (United States)

    Nyhan, J W; White, G C; Schofield, T G; Trujillo, G

    1983-05-01

    The sediments from a liquid effluent receiving area at the Los Alamos National Laboratory and soils from an intensive study area in the fallout pathway of Trinity were sampled for 137Cs using 25-, 500-, 2500- and 12,500-cm3 field sampling volumes. A highly replicated sampling program was used to determine mean concentrations and inventories of 137Cs at each site, as well as estimates of spatial, aliquoting, and counting variance components of the radionuclide data. The sampling methods were also analyzed as a function of soil size fractions collected in each field sampling volume and of the total cost of the program for a given variation in the radionuclide survey results. Coefficients of variation (CV) of 137Cs inventory estimates ranged from 0.063 to 0.14 for Mortandad Canyon sediments, whereas CV values for Trinity soils were observed from 0.38 to 0.57. Spatial variance components of 137Cs concentration data were usually found to be larger than either the aliquoting or counting variance estimates and were inversely related to field sampling volume at the Trinity intensive site. Subsequent optimization studies of the sampling schemes demonstrated that each aliquot should be counted once, and that only 2-4 aliquots out of as many as 30 collected need be assayed for 137Cs. The optimization studies showed that as sample costs increased to 45 man-hours of labor per sample, the variance of the mean 137Cs concentration decreased dramatically, but decreased very little with additional labor.
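
    The optimization described above, trading labour cost against the variance of the mean concentration, can be sketched for a nested field-sample/aliquot/count design. The variance components and per-stage labour costs below are invented for illustration; only the structure of the calculation follows the study's approach.

    ```python
    def var_of_mean(n_field, n_aliquot, n_count,
                    s2_spatial=1.0, s2_aliquot=0.1, s2_count=0.05):
        """Variance of the mean 137Cs concentration for a nested design:
        n_field samples, n_aliquot aliquots each, n_count counts per aliquot."""
        return (s2_spatial / n_field
                + s2_aliquot / (n_field * n_aliquot)
                + s2_count / (n_field * n_aliquot * n_count))

    def cost(n_field, n_aliquot, n_count,
             c_field=3.0, c_aliquot=0.5, c_count=0.25):
        """Labour (man-hours) for the same design."""
        return n_field * (c_field + n_aliquot * (c_aliquot + n_count * c_count))

    # brute-force search: cheapest design with variance below a target
    best = min(((n_f, n_a, n_c)
                for n_f in range(2, 31)
                for n_a in range(1, 6)
                for n_c in range(1, 4)
                if var_of_mean(n_f, n_a, n_c) <= 0.06),
               key=lambda d: cost(*d))
    # with spatial variance dominating, the optimum spends labour on extra
    # field samples rather than on extra aliquots or repeat counts,
    # mirroring the study's conclusion
    ```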

  5. Historical Carbon Dioxide Emissions Caused by Land-Use Changes are Possibly Larger than Assumed

    Science.gov (United States)

    Arneth, A.; Sitch, S.; Pongratz, J.; Stocker, B. D.; Ciais, P.; Poulter, B.; Bayer, A. D.; Bondeau, A.; Calle, L.; Chini, L. P.; hide

    2017-01-01

    The terrestrial biosphere absorbs about 20% of fossil-fuel CO2 emissions. The overall magnitude of this sink is constrained by the difference between emissions, the rate of increase in atmospheric CO2 concentrations, and the ocean sink. However, the land sink is actually composed of two largely counteracting fluxes that are poorly quantified: fluxes from land-use change and CO2 uptake by terrestrial ecosystems. Dynamic global vegetation model simulations suggest that CO2 emissions from land-use change have been substantially underestimated because processes such as tree harvesting and land clearing from shifting cultivation have not been considered. As the overall terrestrial sink is constrained, a larger net flux as a result of land-use change implies that terrestrial uptake of CO2 is also larger, and that terrestrial ecosystems might have greater potential to sequester carbon in the future. Consequently, reforestation projects and efforts to avoid further deforestation could represent important mitigation pathways, with co-benefits for biodiversity. It is unclear whether a larger land carbon sink can be reconciled with our current understanding of terrestrial carbon cycling. Our possible underestimation of the historical residual terrestrial carbon sink adds further uncertainty to our capacity to predict the future of terrestrial carbon uptake and losses.

  6. Sampling strategies to measure the prevalence of common recurrent infections in longitudinal studies

    Directory of Open Access Journals (Sweden)

    Luby Stephen P

    2010-08-01

    Full Text Available. Background: Measuring recurrent infections such as diarrhoea or respiratory infections in epidemiological studies is a methodological challenge. Problems in measuring the incidence of recurrent infections include the episode definition, recall error, and the logistics of close follow-up. Longitudinal prevalence (LP), the proportion of time ill estimated by repeated prevalence measurements, is an alternative measure to incidence of recurrent infections. In contrast to incidence, which usually requires continuous sampling, LP can be measured at intervals. This study explored how many more participants are needed for infrequent sampling to achieve the same study power as frequent sampling. Methods: We developed a set of four empirical simulation models representing low- and high-risk settings with short or long episode durations. The model was used to evaluate different sampling strategies under different assumptions on recall period and recall error. Results: The model identified three major factors that influence sampling strategies: (1) the clustering of episodes in individuals; (2) the duration of episodes; (3) the positive correlation between an individual's disease incidence and episode duration. Intermittent sampling (e.g. 12 times per year) often requires only a slightly larger sample size compared to continuous sampling, especially in cluster-randomized trials. The collection of period prevalence data can lead to highly biased effect estimates if the exposure variable is associated with episode duration. To maximize study power, recall periods of 3 to 7 days may be preferable over shorter periods, even if this leads to inaccuracy in the prevalence estimates. Conclusion: Choosing the optimal approach to measure recurrent infections in epidemiological studies depends on the setting, the study objectives, study design and budget constraints. Sampling at intervals can contribute to making epidemiological studies and trials more efficient, valid
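
    The comparison of continuous and intermittent prevalence sampling can be illustrated with a toy simulation. This is our own sketch, not the authors' model: each hypothetical child gets an individual episode rate and duration, and longitudinal prevalence is estimated from daily follow-up versus one visit per month:

```python
import random

# Illustrative sketch (not the paper's simulation model): children have
# heterogeneous episode rates and durations, so episodes cluster in
# individuals. We compare LP from daily observation vs. monthly visits.
random.seed(1)

def simulate_child(days=360):
    """Return a 0/1 illness indicator for each day of follow-up."""
    rate = random.gammavariate(2.0, 0.01)       # episode starts per day
    dur = 1 + int(random.expovariate(1 / 3.0))  # this child's episode length
    ill = [0] * days
    d = 0
    while d < days:
        if random.random() < rate:              # an episode begins
            for k in range(d, min(d + dur, days)):
                ill[k] = 1
            d += dur
        else:
            d += 1
    return ill

children = [simulate_child() for _ in range(500)]
# continuous (daily) longitudinal prevalence:
lp_daily = sum(sum(c) for c in children) / (500 * 360)
# intermittent: observe each child on days 0, 30, 60, ... (12 visits/year)
lp_monthly = sum(c[d] for c in children for d in range(0, 360, 30)) / (500 * 12)
print(round(lp_daily, 3), round(lp_monthly, 3))
```

    Both schemes estimate the same quantity; intermittent sampling only adds sampling noise, which is the reason a modestly larger sample size can recover the power of continuous follow-up.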

  7. Groups have a larger cognitive capacity than individuals.

    Science.gov (United States)

    Sasaki, Takao; Pratt, Stephen C

    2012-10-09

    Increasing the number of options can paradoxically lead to worse decisions, a phenomenon known as cognitive overload [1]. This happens when an individual decision-maker attempts to digest information exceeding its processing capacity. Highly integrated groups, such as social insect colonies, make consensus decisions that combine the efforts of many members, suggesting that these groups can overcome individual limitations [2-4]. Here we report that an ant colony choosing a new nest site is less vulnerable to cognitive overload than an isolated ant making this decision on her own. We traced this improvement to differences in individual behavior. In whole colonies, each ant assesses only a small subset of available sites, and the colony combines their efforts to thoroughly explore all options. An isolated ant, on the other hand, must personally assess a larger number of sites to approach the same level of option coverage. By sharing the burden of assessment, the colony avoids overtaxing the abilities of its members. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. Dust captures effectiveness of scrubber systems on mechanical miners operating in larger roadways.

    CSIR Research Space (South Africa)

    Hole, BJ

    1998-03-01

    Full Text Available The project was directed towards bord and pillar working by mechanised miners operating in larger section roadways, where the problem of scrubber capture tends to be greatest owing to the limited size of the zone of influence around exhaust...

  9. Impact of temperature and time storage on the microbial detection of oral samples by Checkerboard DNA-DNA hybridization method.

    Science.gov (United States)

    do Nascimento, Cássio; dos Santos, Janine Navarro; Pedrazzi, Vinícius; Pita, Murillo Sucena; Monesi, Nadia; Ribeiro, Ricardo Faria; de Albuquerque, Rubens Ferreira

    2014-01-01

    Molecular diagnostic methods have been widely used in epidemiological and clinical studies to detect and quantify microbial species that may colonize the oral cavity in health or disease. The preservation of genetic material from samples remains the major challenge to ensure the feasibility of these methodologies. Long-term storage may compromise the final result. The aim of this study was to evaluate the effect of storage temperature and time on the microbial detection of oral samples by Checkerboard DNA-DNA hybridization. Saliva and supragingival biofilm were taken from 10 healthy subjects, aliquoted (n=364) and processed according to the proposed protocols: immediate processing, or processing after 2 or 4 weeks and 6 or 12 months of storage at 4°C, -20°C and -80°C. Both total and individual microbial counts were lower for samples processed after 12 months of storage, irrespective of the temperatures tested. Samples stored up to 6 months at cold temperatures showed counts similar to those of samples processed immediately. Microbial incidence was also significantly reduced in samples stored for 12 months at all temperatures. Storage temperature and time have a relevant impact on the detection and quantification of bacterial and fungal species in oral samples by the Checkerboard DNA-DNA hybridization method. Samples should be processed immediately after collection, or within 6 months if conserved at cold temperatures, to avoid false-negative results. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Do bigger bats need more time to forage?

    Directory of Open Access Journals (Sweden)

    CEL. Esbérard

    Full Text Available We test the hypothesis that bats using the same area at the same time take similar prey but have different foraging times due to species differences in biomass. A total of 730 captures of 13 species of Vespertilionidae and Molossidae bats netted over a small dam in southeastern Brazil between 1993 and 1999 was analyzed. The relationship between the average time of capture and the biomass of the most frequent Vespertilionidae and Molossidae species (captures > 4) was positive and significant (r = 0.83, p = 0.022, N = 7). Two lines of reasoning are discussed to explain the longer foraging time of bigger bats: (1) larger insectivorous bats do not consume proportionally larger prey, and (2) larger insects are less available.

  11. Determination of the optimal time for taking blood samples after a single intravenous injection of 3H-leucine

    International Nuclear Information System (INIS)

    Meng Delian; Yao Junhu; Lu Jinyin; Wu Xiaobin; Liu Jun

    2003-01-01

    Twenty-four young hens (1.5 kg of body weight, BW) were randomly divided into 4 groups. Each group was either fasted (FAS) or force-fed a nitrogen-free diet (NFD) or a diet with 20% crude protein in which soybean meal or cottonseed meal was the sole nitrogen source (30 g DM/kg BW). 30 μCi of 3H-Leu/kg BW was injected intravenously into all birds just after force-feeding or at the start of fasting. Venous blood samples were taken at 5 and 30 min and 4, 24, 36 and 48 h after injection. The excreta over the whole 48-h period after injection were collected. Specific radioactivities of non-protein plasma at each time point and of the excreta were measured. The optimal time for taking blood samples was 20-24 hours after injection of 3H-Leu

  12. Convergence and Efficiency of Adaptive Importance Sampling Techniques with Partial Biasing

    Science.gov (United States)

    Fort, G.; Jourdain, B.; Lelièvre, T.; Stoltz, G.

    2018-04-01

    We propose a new Monte Carlo method to efficiently sample a multimodal distribution (known up to a normalization constant). We consider a generalization of the discrete-time Self Healing Umbrella Sampling method, which can also be seen as a generalization of well-tempered metadynamics. The dynamics is based on an adaptive importance technique. The importance function relies on the weights (namely the relative probabilities) of disjoint sets which form a partition of the space. These weights are unknown but are learnt on the fly yielding an adaptive algorithm. In the context of computational statistical physics, the logarithm of these weights is, up to an additive constant, the free-energy, and the discrete valued function defining the partition is called the collective variable. The algorithm falls into the general class of Wang-Landau type methods, and is a generalization of the original Self Healing Umbrella Sampling method in two ways: (i) the updating strategy leads to a larger penalization strength of already visited sets in order to escape more quickly from metastable states, and (ii) the target distribution is biased using only a fraction of the free-energy, in order to increase the effective sample size and reduce the variance of importance sampling estimators. We prove the convergence of the algorithm and analyze numerically its efficiency on a toy example.
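
    The partial-biasing idea can be illustrated on a toy problem. The sketch below is our own illustration under simplifying assumptions, not the authors' algorithm: the target is a bimodal 1D density known up to a constant, the "collective variable" is sign(x) (a two-set partition), the set weights theta are learnt on the fly by stochastic approximation, and the sampler is biased by a fraction `a` of the log-weights (the free energy):

```python
import math
import random

# Toy partial-biasing sampler (illustrative sketch, not the paper's method).
# Biased density: pi(x) / theta[set(x)]**a, with theta learnt on the fly.
random.seed(0)

def log_target(x):
    # unnormalized bimodal target: two equal modes at +/-2
    return math.log(math.exp(-(x - 2) ** 2) + math.exp(-(x + 2) ** 2))

def run(a=0.5, steps=30000):
    x, theta, visits = 2.0, [0.5, 0.5], [0, 0]
    for t in range(1, steps + 1):
        i = 0 if x < 0 else 1
        y = x + random.gauss(0.0, 1.0)          # random-walk proposal
        j = 0 if y < 0 else 1
        # Metropolis log-ratio for the biased density; already-visited sets
        # (large theta) are penalized, pushing the chain over the barrier
        logr = (log_target(y) - log_target(x)
                - a * (math.log(theta[j]) - math.log(theta[i])))
        if math.log(random.random()) < logr:
            x, i = y, j
        visits[i] += 1
        g = 1.0 / (t + 10)                      # decreasing step size
        for k in (0, 1):                        # weight update, keeps sum = 1
            theta[k] += g * ((1.0 if k == i else 0.0) - theta[k])
    return theta, visits

theta, visits = run()
print(theta, visits)  # both wells visited; theta estimates the set weights
```

    With `a=0` this reduces to plain Metropolis on the target; with `a=1` the full free energy is used as the bias, which is the regime the paper argues inflates the variance of the importance-sampling estimators.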

  13. Plasma phenylalanine and tyrosine responses to different nutritional conditions (fasting/postprandial) in patients with phenylketonuria: effect of sample timing.

    Science.gov (United States)

    van Spronsen, F J; van Rijn, M; van Dijk, T; Smit, G P; Reijngoud, D J; Berger, R; Heymans, H S

    1993-10-01

    To evaluate the adequacy of dietary treatment in patients with phenylketonuria, the monitoring of plasma phenylalanine and tyrosine concentrations is of great importance. The preferable time of blood sampling in relation to the nutritional condition during the day, however, is not known. It was the aim of this study to define guidelines for the timing of blood sampling with a minimal burden for the patient. Plasma concentrations of phenylalanine and tyrosine were measured in nine patients with phenylketonuria who had no clinical evidence of tyrosine deficiency. These values were measured during the day both after a prolonged overnight fast, and before and after breakfast. Phenylalanine showed a small rise during prolonged fasting, while tyrosine decreased slightly. After an individually tailored breakfast, phenylalanine remained stable, while tyrosine showed large fluctuations. It is concluded that the patient's nutritional condition (fasting/postprandial) is not important in the evaluation of the phenylalanine intake. To detect a possible tyrosine deficiency, however, a single blood sample is not sufficient and a combination of a preprandial and postprandial blood sample on the same day is advocated.

  14. The net effect of alternative allocation ratios on recruitment time and trial cost.

    Science.gov (United States)

    Vozdolska, Ralitza; Sano, Mary; Aisen, Paul; Edland, Steven D

    2009-04-01

    Increasing the proportion of subjects allocated to the experimental treatment in controlled clinical trials is often advocated as a method of increasing recruitment rates and improving the performance of trials. The presumption is that the higher likelihood of randomization to the experimental treatment will be perceived by potential study enrollees as an added benefit of participation and will increase recruitment rates and speed the completion of trials. However, studies with alternative allocation ratios require a larger sample size to maintain statistical power, which may result in a net increase in time required to complete recruitment and a net increase in total trial cost. To describe the potential net effect of alternative allocation ratios on recruitment time and trial cost. Models of recruitment time and trial cost were developed and used to compare trials with 1:1 allocation to trials with alternative allocation ratios under a range of per subject costs, per day costs, and enrollment rates. In regard to time required to complete recruitment, alternative allocation ratios are net beneficial if the recruitment rate improves by more than about 4% for trials with a 1.5:1 allocation ratio and 12% for trials with a 2:1 allocation ratio. More substantial improvements in recruitment rate, 13 and 47% respectively for scenarios we considered, are required for alternative allocation to be net beneficial in terms of tangible monetary cost. The cost models were developed expressly for trials comparing proportions or means across treatment groups. Using alternative allocation ratio designs to improve recruitment may or may not be time and cost-effective. Using alternative allocation for this purpose should only be considered for trial contexts where there is both clear evidence that the alternative design does improve recruitment rates and the attained time or cost efficiency justifies the added study subject burden implied by a larger sample size.
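
    The roughly 4% and 12% break-even figures quoted above follow from the standard variance argument for a two-arm comparison, which can be checked in a few lines (our sketch of the textbook formula, not code from the paper):

```python
# For an r:1 allocation with equal outcome variances, the variance of the
# treatment-control contrast is proportional to 1/n_t + 1/n_c, so the
# total N needed for fixed power scales as (1 + r)**2 / (4 * r) vs. 1:1.
def relative_total_n(r):
    """Total sample size under r:1 allocation relative to 1:1 allocation."""
    return (1 + r) ** 2 / (4 * r)

print(relative_total_n(1.5))  # ~1.042: about 4% more subjects
print(relative_total_n(2.0))  # 1.125: about 12.5% more subjects
```

    Recruitment must therefore speed up by at least these percentages just to offset the larger sample size, before any net time saving appears.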

  15. Behavior and Body Patterns of the Larger Pacific Striped Octopus.

    Directory of Open Access Journals (Sweden)

    Roy L Caldwell

    Full Text Available Over thirty years ago anecdotal accounts of the undescribed Larger Pacific Striped Octopus suggested behaviors previously unknown for octopuses. Beak-to-beak mating, dens shared by mating pairs, inking during mating and extended spawning were mentioned in publications, and enticed generations of cephalopod biologists. In 2012-2014 we were able to obtain several live specimens of this species, which remains without a formal description. All of the unique behaviors listed above were observed for animals in aquaria and are discussed here. We describe the behavior, body color patterns, and postures of 24 adults maintained in captivity. Chromatophore patterns of hatchlings are also shown.

  16. Determination of the Isotope Ratio for Metal Samples Using a Laser Ablation/Ionization Time-of-flight Mass Spectrometry

    International Nuclear Information System (INIS)

    Song, Kyu Seok; Cha, Hyung Ki; Kim, Duk Hyeon; Min, Ki Hyun

    2004-01-01

    The laser ablation/ionization time-of-flight mass spectrometry is applied to the isotopic analysis of solid samples using a home-made instrument. The technique is convenient for solid sample analysis due to the one-step vaporization and ionization of the samples. The analyzed samples were lead, cadmium, molybdenum, and ytterbium. To optimize the analytical conditions of the technique, several parameters were varied, such as laser energy, laser wavelength, the size of the laser beam on the sample surface, and the high voltages applied to the ion source electrodes. Low laser energy was necessary to obtain the optimal mass resolution of the spectra. The 532 nm light generated mass spectra with a higher signal-to-noise ratio than the 355 nm light. The best mass resolution obtained in the present study is ∼1,500 for ytterbium

  17. Sampling and sample processing in pesticide residue analysis.

    Science.gov (United States)

    Lehotay, Steven J; Cook, Jo Marie

    2015-05-13

    Proper sampling and sample processing in pesticide residue analysis of food and soil have always been essential to obtain accurate results, but the subject is becoming a greater concern as approximately 100 mg test portions are being analyzed with automated high-throughput analytical methods by agrochemical industry and contract laboratories. As global food trade and the importance of monitoring increase, the food industry and regulatory laboratories are also considering miniaturized high-throughput methods. In conjunction with a summary of the symposium "Residues in Food and Feed - Going from Macro to Micro: The Future of Sample Processing in Residue Analytical Methods" held at the 13th IUPAC International Congress of Pesticide Chemistry, this is an opportune time to review sampling theory and sample processing for pesticide residue analysis. If collected samples and test portions do not adequately represent the actual lot from which they came and provide meaningful results, then all costs, time, and efforts involved in implementing programs using sophisticated analytical instruments and techniques are wasted and can actually yield misleading results. This paper is designed to briefly review the often-neglected but crucial topic of sample collection and processing and put the issue into perspective for the future of pesticide residue analysis. It also emphasizes that analysts should demonstrate the validity of their sample processing approaches for the analytes/matrices of interest and encourages further studies on sampling and sample mass reduction to produce a test portion.

  18. Selandian-Thanetian larger foraminifera from the lower Jafnayn Formation in the Sayq area (eastern Oman Mountains)

    Energy Technology Data Exchange (ETDEWEB)

    Serra-Kiel, J.; Vicedo, V.; Razin, P.; Grelaud, C.

    2016-07-01

    The larger foraminifera of the lower part of the Jafnayn Formation outcropping in the Wadi Sayq, in the Paleocene series of the eastern Oman Mountains, have been studied and described in detail. The analyses have allowed us to develop a detailed systematic description of each taxon, constraining their biostratigraphic distribution and defining the associated foraminifera assemblages. The taxonomic study has permitted us to identify each morphotype precisely and to describe three new taxa, namely Ercumentina sayqensis n. gen. n. sp., Lacazinella rogeri n. sp. and the new family Globoreticulinidae. The first assemblage is characterized by the presence of Coskinon sp., Dictyoconus cf. turriculus Hottinger and Drobne, Anatoliella ozalpiensis Sirel, Ercumentina sayqensis n. gen. n. sp. Serra-Kiel and Vicedo, Lacazinella rogeri n. sp. Serra-Kiel and Vicedo, Mandanella cf. flabelliformis Rahaghi, Azzarolina daviesi (Henson), Lockhartia retiata Sander, Dictyokathina simplex Smout and Miscellanites globularis (Rahaghi). The second assemblage is constituted by the forms Pseudofallotella persica (Hottinger and Drobne), Dictyoconus cf. turriculus Hottinger and Drobne, Lacazinella rogeri n. sp. Serra-Kiel and Vicedo, Azzarolina daviesi (Henson), Keramosphera? cf. iranica Rahaghi, Lockhartia haimei (Davies), Lockhartia retiata Sander, Sakesaria trichilata Sander, Kathina delseota Smout, Elazigina harabekayisensis Sirel, Daviesina khatiyahi Smout, and Miscellanea juliettae Leppig. The first assemblage can be considered to belong to the Shallow Benthic Zone SBZ2 (early Selandian age), and the second assemblage to the SBZ3 (late Selandian-early Thanetian age). This paper shows, for the first time in the Middle East area, a correlation between the Selandian larger foraminifera and planktonic foraminifera biozones. (Author)

  19. Selandian-Thanetian larger foraminifera from the lower Jafnayn Formation in the Sayq area (eastern Oman Mountains)

    International Nuclear Information System (INIS)

    Serra-Kiel, J.; Vicedo, V.; Razin, P.; Grelaud, C.

    2016-01-01

    The larger foraminifera of the lower part of the Jafnayn Formation outcropping in the Wadi Sayq, in the Paleocene series of the eastern Oman Mountains, have been studied and described in detail. The analyses have allowed us to develop a detailed systematic description of each taxon, constraining their biostratigraphic distribution and defining the associated foraminifera assemblages. The taxonomic study has permitted us to identify each morphotype precisely and to describe three new taxa, namely Ercumentina sayqensis n. gen. n. sp., Lacazinella rogeri n. sp. and the new family Globoreticulinidae. The first assemblage is characterized by the presence of Coskinon sp., Dictyoconus cf. turriculus Hottinger and Drobne, Anatoliella ozalpiensis Sirel, Ercumentina sayqensis n. gen. n. sp. Serra-Kiel and Vicedo, Lacazinella rogeri n. sp. Serra-Kiel and Vicedo, Mandanella cf. flabelliformis Rahaghi, Azzarolina daviesi (Henson), Lockhartia retiata Sander, Dictyokathina simplex Smout and Miscellanites globularis (Rahaghi). The second assemblage is constituted by the forms Pseudofallotella persica (Hottinger and Drobne), Dictyoconus cf. turriculus Hottinger and Drobne, Lacazinella rogeri n. sp. Serra-Kiel and Vicedo, Azzarolina daviesi (Henson), Keramosphera? cf. iranica Rahaghi, Lockhartia haimei (Davies), Lockhartia retiata Sander, Sakesaria trichilata Sander, Kathina delseota Smout, Elazigina harabekayisensis Sirel, Daviesina khatiyahi Smout, and Miscellanea juliettae Leppig. The first assemblage can be considered to belong to the Shallow Benthic Zone SBZ2 (early Selandian age), and the second assemblage to the SBZ3 (late Selandian-early Thanetian age). This paper shows, for the first time in the Middle East area, a correlation between the Selandian larger foraminifera and planktonic foraminifera biozones. (Author)

  20. Highly efficient detection of paclobutrazol in environmental water and soil samples by time-resolved fluoroimmunoassay

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Zhenjiang, E-mail: lzj1984@ujs.edu.cn [School of the Environment and Safety Engineering, Jiangsu University, Zhenjiang 212013 (China); Wei, Xi [School of the Environment and Safety Engineering, Jiangsu University, Zhenjiang 212013 (China); The Affiliated First People's Hospital of Jiangsu University, Zhenjiang 212002 (China); Ren, Kewei; Zhu, Gangbing; Zhang, Zhen; Wang, Jiagao; Du, Daolin [School of the Environment and Safety Engineering, Jiangsu University, Zhenjiang 212013 (China)

    2016-11-01

    A fast and ultrasensitive indirect competitive time-resolved fluoroimmunoassay (TRFIA) was developed for the analysis of paclobutrazol in environmental water and soil samples. Paclobutrazol hapten was synthesized and conjugated to bovine serum albumin (BSA) for producing polyclonal antibodies. Under optimal conditions, the 50% inhibitory concentration (IC50 value) and limit of detection (LOD, IC20 value) were 1.09 μg L−1 and 0.067 μg L−1, respectively. The LOD of the TRFIA was improved 30-fold compared to the previously reported ELISA. There was almost no cross-reactivity of the antibody with the other structural analogues of triazole compounds, indicating that the antibody had high specificity. The average recoveries from spiked samples were in the range from 80.2% to 104.7% with a relative standard deviation of 1.0-9.5%. The TRFIA results for the real samples were in good agreement with those obtained by high-performance liquid chromatography analyses. The results indicate that the established TRFIA has potential application for screening paclobutrazol in environmental samples. - Highlights: • The approach to design and synthesize the PBZ hapten was more straightforward. • A rapid and ultrasensitive TRFIA was developed and applied to the screening of PBZ. • The TRFIA for real soil samples showed reliability and high correlation with HPLC. • The PBZ TRFIA showed high sensitivity, simple operation, a wide range of quantitative analyses and no radioactive hazards.
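
    How an IC50 and an IC20-based LOD are read off a competitive-immunoassay calibration curve can be sketched with a four-parameter logistic (4PL) model. The parameters below are hypothetical choices (not fitted to the paper's data) that happen to reproduce values close to the reported IC50 and LOD:

```python
# Hypothetical 4PL calibration for a competitive immunoassay such as the
# TRFIA above. A and D are the signal asymptotes, B the slope, and C the
# inflection point (= IC50). All values are illustrative assumptions;
# C is set to the reported IC50 in ug/L.
A, B, C, D = 1.0, 0.5, 1.09, 0.0

def signal(x):
    """Normalized label signal at analyte concentration x (4PL curve)."""
    return D + (A - D) / (1.0 + (x / C) ** B)

def ic(p):
    """Concentration giving fractional inhibition p (p = 0.5 -> IC50).
    Analytic inverse of the 4PL: x = C * (p / (1 - p)) ** (1 / B)."""
    return C * (p / (1.0 - p)) ** (1.0 / B)

print(ic(0.5))  # IC50 = C = 1.09 ug/L
print(ic(0.2))  # IC20 ~ 0.068 ug/L, near the reported LOD of 0.067 ug/L
```

    In practice the four parameters would be fitted to the measured standard curve by nonlinear least squares; the inversion step for IC50/IC20 is the same.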

  1. Real-time colour hologram generation based on ray-sampling plane with multi-GPU acceleration.

    Science.gov (United States)

    Sato, Hirochika; Kakue, Takashi; Ichihashi, Yasuyuki; Endo, Yutaka; Wakunami, Koki; Oi, Ryutaro; Yamamoto, Kenji; Nakayama, Hirotaka; Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2018-01-24

    Although electro-holography can reconstruct three-dimensional (3D) motion pictures, its computational cost is too heavy to allow for real-time reconstruction of 3D motion pictures. This study explores accelerating colour hologram generation using light-ray information on a ray-sampling (RS) plane with a graphics processing unit (GPU) to realise a real-time holographic display system. We refer to an image corresponding to light-ray information as an RS image. Colour holograms were generated from three RS images with resolutions of 2,048 × 2,048; 3,072 × 3,072 and 4,096 × 4,096 pixels. The computational results indicate that the generation of the colour holograms using multiple GPUs (NVIDIA Geforce GTX 1080) was approximately 300-500 times faster than those generated using a central processing unit. In addition, the results demonstrate that 3D motion pictures were successfully reconstructed from RS images of 3,072 × 3,072 pixels at approximately 15 frames per second using an electro-holographic reconstruction system in which colour holograms were generated from RS images in real time.

  2. An Improved Seabed Surface Sand Sampling Device

    Science.gov (United States)

    Luo, X.

    2017-12-01

    In marine geology research it is necessary to obtain a sufficient quantity of seabed surface samples, while also ensuring that the samples are in their original state. Currently, there are a number of seabed surface sampling devices available, but we find it very difficult to obtain sand samples using these devices, particularly when dealing with fine sand. Machine-controlled seabed surface sampling devices are also available, but are generally unable to dive into deeper regions of water. To obtain larger quantities of seabed surface sand samples in their original states, many researchers have tried to improve upon sampling devices, but these efforts have generally produced ambiguous results, in our opinion. To resolve this issue, we have designed an improved and highly effective seabed surface sand sampling device that incorporates the strengths of a variety of sampling devices. It is capable of diving into deep water to obtain fine sand samples and is also suited for use in streams, rivers, lakes and seas with varying levels of depth (up to 100 m). This device can be used for geological mapping, underwater prospecting, geological engineering and ecological and environmental studies in both marine and terrestrial waters.

  3. Amplatzer angled duct occluder for closure of patent ductus arteriosus larger than the aorta in an infant.

    Science.gov (United States)

    Vijayalakshmi, I B; Chitra, N; Rajasri, R; Prabhudeva, A N

    2005-01-01

    Transcatheter closure of patent ductus arteriosus (PDA) by Amplatzer duct occluder is the treatment of choice. However, closure of very large ducts in infants with low weight is a challenge for the interventionalist because a large device may obstruct the aorta or left pulmonary artery. Difficulty is also encountered in advancing the device around the curve of the right ventricular outflow tract toward the pulmonary artery; this curve is tight, more or less at a right angle in infants, leading to kinking of the sheath, which increases fluoroscopic time. This is the first reported case of a very large PDA (8.7 mm), larger than the aorta (8.2 mm), successfully closed by an Amplatzer angled duct occluder in an infant weighing 5 kg.

  4. Detection of Salmonella spp. in veterinary samples by combining selective enrichment and real-time PCR.

    Science.gov (United States)

    Goodman, Laura B; McDonough, Patrick L; Anderson, Renee R; Franklin-Guild, Rebecca J; Ryan, James R; Perkins, Gillian A; Thachil, Anil J; Glaser, Amy L; Thompson, Belinda S

    2017-11-01

    Rapid screening for enteric bacterial pathogens in clinical environments is essential for biosecurity. Salmonella found in veterinary hospitals, particularly Salmonella enterica serovar Dublin, can pose unique challenges for culture and testing because of its poor growth. Multiple Salmonella serovars including Dublin are emerging threats to public health given increasing prevalence and antimicrobial resistance. We adapted an automated food testing method to veterinary samples and evaluated the performance of the method in a variety of matrices including environmental samples (n = 81), tissues (n = 52), feces (n = 148), and feed (n = 29). A commercial kit was chosen as the basis for this approach in view of extensive performance characterizations published by multiple independent organizations. A workflow was established for efficiently and accurately testing veterinary matrices and environmental samples by use of real-time PCR after selective enrichment in Rappaport-Vassiliadis soya (RVS) medium. Using this method, the detection limit for S. Dublin improved by 100-fold over subculture on selective agars (eosin-methylene blue, brilliant green, and xylose-lysine-deoxycholate). Overall, the procedure was effective in detecting Salmonella spp. and provided next-day results.

  5. Use of a holder-vacuum tube device to save on-site hands in preparing urine samples for head-space gas-chromatography, and its application to determine the time allowance for sample sealing.

    Science.gov (United States)

    Kawai, Toshio; Sumino, Kimiaki; Ohashi, Fumiko; Ikeda, Masayuki

    2011-01-01

    The aim was to facilitate urine sample preparation prior to head-space gas-chromatographic (HS-GC) analysis. Urine samples containing one of five solvents (acetone, methanol, methyl ethyl ketone, methyl isobutyl ketone and toluene) at the levels of biological exposure limits were aspirated into a vacuum tube via a holder, a device commercially available for venous blood collection (the vacuum tube method). The urine sample, 5 ml, was quantitatively transferred to a 20-ml head-space vial prior to HS-GC analysis. The loaded tubes were stored at +4 ℃ in the dark for up to 3 d. The vacuum tube method facilitated on-site preparation of urine samples for HS-GC with no significant loss of solvents from the sample and no need for skilled hands, while on-site sample preparation time was significantly reduced. Furthermore, no loss of solvents was detected during the 3-d storage, irrespective of whether the solvent was hydrophilic (acetone) or lipophilic (toluene). In a pilot application, the air-tight sealing performance of the vacuum tube method confirmed that no solvent is lost when sealing is completed within 5 min after urine voiding, and that the allowance time is as long as 30 min in the case of toluene in urine. The use of the holder-vacuum tube device not only saves hands for transferring the sample to an air-tight space, but also facilitates sample storage prior to HS-GC analysis.

  6. Juvenile exposure to predator cues induces a larger egg size in fish

    Science.gov (United States)

    Segers, Francisca H. I. D.; Taborsky, Barbara

    2012-01-01

    When females anticipate a hazardous environment for their offspring, they can increase offspring survival by producing larger young. Early environmental experience determines egg size in different animal taxa. We predicted that a higher perceived predation risk by juveniles would cause an increase in the sizes of eggs that they produce as adults. To test this, we exposed juveniles of the mouthbrooding cichlid Eretmodus cyanostictus in a split-brood experiment either to cues of a natural predator or to a control situation. After maturation, females that had been confronted with predators produced heavier eggs, whereas clutch size itself was not affected by the treatment. This effect cannot be explained by differences in female body size, because the predator treatment did not influence growth trajectories. The observed increase of egg mass is likely to be adaptive, as heavier eggs gave rise to larger young and, in fish, juvenile predation risk drops sharply with increasing body size. This study provides the first evidence that predator cues perceived by females early in life positively affect egg mass, suggesting that these cues allow her to predict the predation risk for her offspring. PMID:21976689

  7. The LASS [Larger Aperture Superconducting Solenoid] spectrometer

    International Nuclear Information System (INIS)

    Aston, D.; Awaji, N.; Barnett, B.

    1986-04-01

    LASS is the acronym for the Large Aperture Superconducting Solenoid spectrometer which is located in an rf-separated hadron beam at the Stanford Linear Accelerator Center. This spectrometer was constructed in order to perform high statistics studies of multiparticle final states produced in hadron reactions. Such reactions are frequently characterized by events having complicated topologies and/or relatively high particle multiplicity. Their detailed study requires a spectrometer which can provide good resolution in momentum and position over almost the entire solid angle subtended by the production point. In addition, good final state particle identification must be available so that separation of the many kinematically-overlapping final states can be achieved. Precise analyses of the individual reaction channels require high statistics, so that the spectrometer must be capable of high data-taking rates in order that such samples can be acquired in a reasonable running time. Finally, the spectrometer must be complemented by a sophisticated off-line analysis package which efficiently finds tracks, recognizes and fits event topologies and correctly associates the available particle identification information. This, together with complicated programs which perform specific analysis tasks such as partial wave analysis, requires a great deal of software effort allied to a very large computing capacity. This paper describes the construction and performance of the LASS spectrometer, which is an attempt to realize the features just discussed. The configuration of the spectrometer corresponds to the data-taking on K+ and K− interactions in hydrogen at 11 GeV/c which took place in 1981 and 1982. This constitutes a major upgrade of the configuration used to acquire lower statistics data on 11 GeV/c K−p interactions during 1977 and 1978, which is also described briefly

  8. The LASS (Large Aperture Superconducting Solenoid) spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Aston, D.; Awaji, N.; Barnett, B.; Bienz, T.; Bierce, R.; Bird, F.; Bird, L.; Blockus, D.; Carnegie, R.K.; Chien, C.Y.

    1986-04-01

LASS is the acronym for the Large Aperture Superconducting Solenoid spectrometer which is located in an rf-separated hadron beam at the Stanford Linear Accelerator Center. This spectrometer was constructed in order to perform high statistics studies of multiparticle final states produced in hadron reactions. Such reactions are frequently characterized by events having complicated topologies and/or relatively high particle multiplicity. Their detailed study requires a spectrometer which can provide good resolution in momentum and position over almost the entire solid angle subtended by the production point. In addition, good final state particle identification must be available so that separation of the many kinematically-overlapping final states can be achieved. Precise analyses of the individual reaction channels require high statistics, so that the spectrometer must be capable of high data-taking rates in order that such samples can be acquired in a reasonable running time. Finally, the spectrometer must be complemented by a sophisticated off-line analysis package which efficiently finds tracks, recognizes and fits event topologies and correctly associates the available particle identification information. This, together with complicated programs which perform specific analysis tasks such as partial wave analysis, requires a great deal of software effort allied to a very large computing capacity. This paper describes the construction and performance of the LASS spectrometer, which is an attempt to realize the features just discussed. The configuration of the spectrometer corresponds to the data-taking on K+ and K- interactions in hydrogen at 11 GeV/c which took place in 1981 and 1982. This constitutes a major upgrade of the configuration used to acquire lower statistics data on 11 GeV/c K-p interactions during 1977 and 1978, which is also described briefly.

  9. Measuring larval nematode contamination on cattle pastures: Comparing two herbage sampling methods.

    Science.gov (United States)

    Verschave, S H; Levecke, B; Duchateau, L; Vercruysse, J; Charlier, J

    2015-06-15

Assessing levels of pasture larval contamination is frequently used to study the population dynamics of the free-living stages of parasitic nematodes of livestock. Direct quantification of infective larvae (L3) on herbage is the most applied method to measure pasture larval contamination. However, herbage collection remains labour intensive and there is a lack of studies addressing the variation induced by the sampling method and the required sample size. The aim of this study was (1) to compare two different sampling methods in terms of pasture larval count results and time required to sample, (2) to assess the amount of variation in larval counts at the level of sample plot, pasture and season, respectively and (3) to calculate the required sample size to assess pasture larval contamination with a predefined precision using random plots across pasture. Eight young stock pastures of different commercial dairy herds were sampled in three consecutive seasons during the grazing season (spring, summer and autumn). On each pasture, herbage samples were collected both along a double-crossed W-transect with samples taken every 10 steps (method 1) and from four randomly located plots of 0.16 m², with collection of all herbage within each plot (method 2). The average (± standard deviation (SD)) pasture larval contamination using sampling methods 1 and 2 was 325 (± 479) and 305 (± 444) L3/kg dry herbage (DH), respectively. Large discrepancies in pasture larval counts of the same pasture and season were often seen between methods, but no significant difference (P = 0.38) in larval counts between methods was found. Less time was required to collect samples with method 2. This difference in collection time between methods was most pronounced for pastures with a surface area larger than 1 ha. The variation in pasture larval counts from samples generated by random plot sampling was mainly due to the repeated measurements on the same pasture in the same season (residual variance
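The required-sample-size question in aim (3) can be illustrated with the standard precision-based formula n = (z·CV/E)². This is a generic textbook sketch, not the paper's actual calculation; the coefficient of variation below is derived from the reported method-2 figures (mean 305, SD 444 L3/kg DH).

```python
import math

def required_sample_size(cv, rel_error, z=1.96):
    """Number of random plots needed for the mean larval count to reach a
    target relative error E at ~95% confidence: n = (z * CV / E)**2.
    A standard precision formula, not the paper's exact method."""
    return math.ceil((z * cv / rel_error) ** 2)

# coefficient of variation implied by the reported method-2 counts
cv = 444 / 305                                # SD / mean, L3 per kg DH
n = required_sample_size(cv, rel_error=0.5)   # target: 50% relative precision
```

With counts this overdispersed (CV > 1), even a modest 50% relative precision already requires a few dozen plots, which is why the required sample size matters in practice.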

  10. Optimal methods for fitting probability distributions to propagule retention time in studies of zoochorous dispersal.

    Science.gov (United States)

    Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi

    2016-02-01

    recommend the use of cumulative probability to fit parametric probability distributions to propagule retention time, specifically using maximum likelihood for parameter estimation. Furthermore, the experimental design for an optimal characterization of unimodal propagule retention time should contemplate at least 500 recovered propagules and sampling time-intervals not larger than the time peak of propagule retrieval, except in the tail of the distribution where broader sampling time-intervals may also produce accurate fits.
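The recommendation above — fit parametric distributions to retention times using maximum likelihood — can be sketched with the simplest case, an exponential model, whose MLE for the rate is the reciprocal of the mean retention time. The rate value and the data below are simulated placeholders, not measurements from the study.

```python
import math
import random

def exponential_mle_rate(times):
    """Maximum-likelihood rate for an exponential retention-time model.
    The log-likelihood n*log(r) - r*sum(t) is maximized at r = n / sum(t)."""
    return len(times) / sum(times)

random.seed(42)
true_rate = 0.25                                                  # hypothetical, per hour
retention = [random.expovariate(true_rate) for _ in range(500)]   # 500 recovered propagules
rate_hat = exponential_mle_rate(retention)
```

With 500 recovered propagules, as recommended above, the MLE is typically within a few percent of the true rate; unimodal models (e.g. lognormal or gamma) fit the same way, just with a numerically maximized likelihood.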

  11. Ant mosaics in Bornean primary rain forest high canopy depend on spatial scale, time of day, and sampling method

    Directory of Open Access Journals (Sweden)

    Kalsum M. Yusah

    2018-01-01

    Full Text Available Background Competitive interactions in biological communities can be thought of as giving rise to “assembly rules” that dictate the species that are able to co-exist. Ant communities in tropical canopies often display a particular pattern, an “ant mosaic”, in which competition between dominant ant species results in a patchwork of mutually exclusive territories. Although ant mosaics have been well-documented in plantation landscapes, their presence in pristine tropical forests remained contentious until recently. Here we assess presence of ant mosaics in a hitherto under-investigated forest stratum, the emergent trees of the high canopy in primary tropical rain forest, and explore how the strength of any ant mosaics is affected by spatial scale, time of day, and sampling method. Methods To test whether these factors might impact the detection of ant mosaics in pristine habitats, we sampled ant communities from emergent trees, which rise above the highest canopy layers in lowland dipterocarp rain forests in North Borneo (38.8–60.2 m, using both baiting and insecticide fogging. Critically, we restricted sampling to only the canopy of each focal tree. For baiting, we carried out sampling during both the day and the night. We used null models of species co-occurrence to assess patterns of segregation at within-tree and between-tree scales. Results The numerically dominant ant species on the emergent trees sampled formed a diverse community, with differences in the identity of dominant species between times of day and sampling methods. Between trees, we found patterns of ant species segregation consistent with the existence of ant mosaics using both methods. Within trees, fogged ants were segregated, while baited ants were segregated only at night. Discussion We conclude that ant mosaics are present within the emergent trees of the high canopy of tropical rain forest in Malaysian Borneo, and that sampling technique, spatial scale, and time

  12. Larger aftershocks happen farther away: nonseparability of magnitude and spatial distributions of aftershocks

    Science.gov (United States)

    Van Der Elst, Nicholas; Shaw, Bruce E.

    2015-01-01

    Aftershocks may be driven by stress concentrations left by the main shock rupture or by elastic stress transfer to adjacent fault sections or strands. Aftershocks that occur within the initial rupture may be limited in size, because the scale of the stress concentrations should be smaller than the primary rupture itself. On the other hand, aftershocks that occur on adjacent fault segments outside the primary rupture may have no such size limitation. Here we use high-precision double-difference relocated earthquake catalogs to demonstrate that larger aftershocks occur farther away than smaller aftershocks, when measured from the centroid of early aftershock activity—a proxy for the initial rupture. Aftershocks as large as or larger than the initiating event nucleate almost exclusively in the outer regions of the aftershock zone. This observation is interpreted as a signature of elastic rebound in the earthquake catalog and can be used to improve forecasting of large aftershocks.
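The central measurement in the abstract — the distance of each aftershock from the centroid of early aftershock activity — reduces to simple geometry. The toy catalog below is invented for illustration and merely mimics the reported pattern; it is not the relocated data used in the study.

```python
import math

# hypothetical relocated aftershocks: (east_km, north_km, magnitude)
catalog = [(0.2, -0.1, 2.1), (0.5, 0.3, 2.4), (-0.3, 0.1, 2.0),
           (0.1, 0.4, 2.2), (4.8, 1.2, 4.6), (-5.2, -0.9, 4.9)]

# centroid of the earliest events as a proxy for the initial rupture
early = catalog[:4]
cx = sum(e[0] for e in early) / len(early)
cy = sum(e[1] for e in early) / len(early)

def centroid_distance(event):
    """Horizontal distance from the early-activity centroid, km."""
    return math.hypot(event[0] - cx, event[1] - cy)

large = [centroid_distance(e) for e in catalog if e[2] >= 4.0]
small = [centroid_distance(e) for e in catalog if e[2] < 4.0]
mean_large = sum(large) / len(large)
mean_small = sum(small) / len(small)
```

In this toy catalog, as in the study, the larger aftershocks sit farther from the early-activity centroid than the smaller ones.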

  13. Corganiser: a web-based software tool for planning time-sensitive sampling of whole rounds during scientific drilling

    DEFF Research Database (Denmark)

    Marshall, Ian

    2014-01-01

Corganiser is a software tool developed to simplify the process of preparing whole-round sampling plans for time-sensitive microbiology and geochemistry sampling during scientific drilling. It was developed during the Integrated Ocean Drilling Program (IODP) Expedition 347, but is designed to work with a wide range of core and section configurations and can thus be used in future drilling projects. Corganiser is written in the Python programming language and is implemented both as a graphical web interface and a command-line interface. It can be accessed online at http://130.226.247.137/.

  14. Trace metals analysis of hair samples from students in metropolitan area high school

    International Nuclear Information System (INIS)

    Chiu, S.F.; Wang, P.C.; Kao, P.F.; Chung Shan Medical University, Taichung, Taiwan; Lin, J.B.; Chung Shan Medical University, Taichung, Taiwan; Lin, D.B.; Chen, C.Y.

    2011-01-01

Hair samples from junior high school students in metropolitan areas of Taichung, Taiwan were tested for a total of 13 elements, Al, Ag, Br, Cl, Cr, Fe, K, La, Mn, Na, Sc, Se, and Zn by instrumental neutron activation analysis (INAA) to establish seasonal variations, gender differences and environmental exposures. To assess seasonal variations, hair samples from 39 healthy students (18 males and 21 females; age 13.3 ± 0.4 years; height, 158.0 ± 4.1 cm; weight, 53.4 ± 5.7 kg) were collected at 1.5-month intervals for 1 year starting from late August, 2008. The concentrations of the above elements varied from 10³ to 10⁻² μg g⁻¹ at different sampling times. A quantified index of agreement (AT) was introduced to help classify the elements. A smaller AT indicated highly consistent quantities of a specific metal in the hair, while a larger AT indicated increased fluctuation, i.e., less agreement. The different ATs in the various hair samples are discussed. The concentrations of these elements are compared with the data in the literature. (author)

  15. Improvement of sampling plans for Salmonella detection in pooled table eggs by use of real-time PCR

    DEFF Research Database (Denmark)

    Pasquali, Frédérique; De Cesare, Alessandra; Valero, Antonio

    2014-01-01

Eggs and egg products have been described as the most critical food vehicles of salmonellosis. The prevalence and level of contamination of Salmonella on table eggs are low, which severely affects the sensitivity of sampling plans applied voluntarily in some European countries, where one to five...... pools of 10 eggs are tested by the culture-based reference method ISO 6579:2004. In the current study we have compared the testing sensitivity of the reference culture method ISO 6579:2004 and an alternative real-time PCR method on Salmonella-contaminated egg pools of different sizes (4-9 uninfected eggs...... mixed with one contaminated egg) and contamination levels (10⁰-10¹, 10¹-10², 10²-10³ CFU/eggshell). Two hundred and seventy samples, corresponding to 15 replicates per pool size and inoculum level, were tested. At the lowest contamination level real-time PCR detected Salmonella in 40......

  16. A Discrete-Time Chattering Free Sliding Mode Control with Multirate Sampling Method for Flight Simulator

    Directory of Open Access Journals (Sweden)

    Yunjie Wu

    2013-01-01

In order to improve the tracking accuracy of a flight simulator and extend its frequency response, a discrete-time chattering-free sliding mode control based on a multirate sampling method is developed and applied to the system. By constructing the multirate sampling sliding mode controller, the flight simulator can track a given reference signal with an arbitrarily small dynamic tracking error, and the problems caused by the mismatch between the reference signal period and the control period in the traditional design method are eliminated. Theoretical analysis proves that extremely high dynamic tracking precision can be obtained. Meanwhile, robustness is guaranteed by the sliding mode control even in the presence of model mismatch, external disturbances and measurement noise. The validity of the proposed method is confirmed by experiments on a flight simulator.
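A minimal discrete-time sliding mode loop illustrates the chattering-free idea in the title: the discontinuous sign() switching term is replaced by a saturation (boundary-layer) term. The toy first-order plant, gains, and disturbance below are invented for the sketch; the paper's multirate controller and flight-simulator model are considerably more elaborate.

```python
def sat(x, limit=1.0):
    """Saturation used in place of sign() to suppress chattering."""
    return max(-limit, min(limit, x))

T = 0.001                 # control period in seconds (hypothetical)
x, r = 0.0, 1.0           # plant state and constant reference
for _ in range(2000):
    d = 0.2                                   # bounded matched disturbance
    s = x - r                                 # sliding variable
    u = -0.5 * s / T - 0.3 * sat(s / 1e-3)    # reaching term + smoothed switching term
    x += T * (u + d)                          # first-order plant: x' = u + d
tracking_error = abs(x - r)
```

Because the switching term is saturated rather than discontinuous, the state settles into a small boundary layer around the sliding surface instead of chattering across it, leaving a sub-milliradian-scale steady error in this toy setup.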

  17. Bacterial diversity and community structure in lettuce soil are shifted by cultivation time

    Science.gov (United States)

    Liu, Yiqian; Chang, Qing; Guo, Xu; Yi, Xinxin

    2017-08-01

Compared with cereal production, vegetable production usually requires a greater degree of management and larger inputs of nutrients and irrigation, but these systems are not sustainable in the long term. This study aimed to determine to what extent lettuce cultivation shifts the bacterial community composition in the soil; no pesticides or fertilizers were applied to the soil during cultivation. Soil samples were collected from depths of 0-20 cm and 20-40 cm. A high-throughput sequencing approach was employed to investigate bacterial communities in lettuce-cultivated soil samples in a time-dependent manner. The dominant bacteria in the lettuce soil samples were mainly Proteobacteria, Actinobacteria, Chloroflexi, Nitrospirae, Firmicutes, Acidobacteria, Bacteroidetes, Verrucomicrobia, Planctomycetes, Gemmatimonadetes and Cyanobacteria, with Proteobacteria the most abundant phylum in all 6 soil samples. The relative abundances of Acidobacteria, Firmicutes, Bacteroidetes, Verrucomicrobia and Cyanobacteria decreased over the time of lettuce cultivation, while those of Proteobacteria, Actinobacteria, Gemmatimonadetes, Chloroflexi, Planctomycetes and Nitrospirae increased. In both the 0-20 cm and the 20-40 cm depth groups a similar pattern was observed: the percentage of OTUs shared only between the early and late stages was lower than that shared between the early and middle stages. These results show that lettuce growth can affect the structure of soil bacterial communities.
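The shared-OTU comparison in the last sentences is plain set arithmetic. The OTU identifiers below are placeholders, not the study's sequences; the toy sets are chosen only to mimic the reported pattern (fewer OTUs shared between early and late stages than between early and middle stages).

```python
def shared_otu_percent(a, b):
    """Percentage of the combined OTU pool present in both samples."""
    return 100.0 * len(a & b) / len(a | b)

# hypothetical OTU sets for one depth group at three cultivation stages
early  = {"otu1", "otu2", "otu3", "otu4"}
middle = {"otu2", "otu3", "otu4", "otu5"}
late   = {"otu4", "otu5", "otu6", "otu7"}

early_middle = shared_otu_percent(early, middle)
early_late   = shared_otu_percent(early, late)
```

A declining early-vs-late overlap relative to the early-vs-middle overlap, as in this toy example, is the signature of a community that keeps drifting with cultivation time.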

  18. Use of spatially distributed time-integrated sediment sampling networks and distributed fine sediment modelling to inform catchment management.

    Science.gov (United States)

    Perks, M T; Warburton, J; Bracken, L J; Reaney, S M; Emery, S B; Hirst, S

    2017-11-01

Under the EU Water Framework Directive, suspended sediment is omitted from environmental quality standards and compliance targets. This omission is partly explained by difficulties in assessing the complex dose-response of ecological communities. But equally, it is hindered by a lack of spatially distributed estimates of suspended sediment variability across catchments. In this paper, we demonstrate the inability of traditional, discrete sampling campaigns for assessing exposure to fine sediment. Sampling frequencies based on Environmental Quality Standard protocols, whilst reflecting typical manual sampling constraints, are unable to determine the magnitude of sediment exposure with an acceptable level of precision. Deviations from actual concentrations range between -35 and +20% based on the interquartile range of simulations. As an alternative, we assess the value of low-cost, suspended sediment sampling networks for quantifying suspended sediment transfer (SST). In this study of the 362 km² upland Esk catchment we observe that spatial patterns of sediment flux are consistent over the two year monitoring period across a network of 17 monitoring sites. This enables the key contributing sub-catchments of Butter Beck (SST: 1141 t km⁻² yr⁻¹) and Glaisdale Beck (SST: 841 t km⁻² yr⁻¹) to be identified. The time-integrated samplers offer a feasible alternative to traditional infrequent and discrete sampling approaches for assessing spatio-temporal changes in contamination. In conjunction with a spatially distributed diffuse pollution model (SCIMAP), time-integrated sediment sampling is an effective means of identifying critical sediment source areas in the catchment, which can better inform sediment management strategies for pollution prevention and control. Copyright © 2017 Elsevier Ltd. All rights reserved.
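The sub-catchment figures quoted above are specific sediment yields: annual load divided by contributing area. The loads and areas below are hypothetical numbers chosen only to reproduce the two reported yields; they are not the Esk monitoring data.

```python
def specific_sediment_yield(load_t_per_yr, area_km2):
    """Specific sediment yield in t per km^2 per year."""
    return load_t_per_yr / area_km2

# hypothetical sub-catchment loads (t/yr) and areas (km^2)
subcatchments = {
    "Butter Beck":    (114.1, 0.10),
    "Glaisdale Beck": (168.2, 0.20),
}
yields = {name: specific_sediment_yield(*v) for name, v in subcatchments.items()}
critical = max(yields, key=yields.get)   # highest-yield sub-catchment
```

Ranking sub-catchments by specific yield rather than raw load is what lets a monitoring network flag small but disproportionately erosive source areas.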

  19. Analytical model for real time, noninvasive estimation of blood glucose level.

    Science.gov (United States)

    Adhyapak, Anoop; Sidley, Matthew; Venkataraman, Jayanti

    2014-01-01

The paper presents an analytical model to estimate blood glucose level from measurements made non-invasively and in real time by an antenna strapped to a patient's wrist. The RIT ETA Lab research group has shown promising evidence that an antenna's resonant frequency can track, in real time, changes in glucose concentration. Based on an in-vitro study of blood samples from diabetic patients, the paper presents a modified Cole-Cole model that incorporates a factor representing the change in glucose level. A calibration technique using the input impedance is discussed, and the results show good agreement with glucose meter readings. An alternate calibration methodology has been developed that is based on the shift in the antenna's resonant frequency, using an equivalent circuit model containing a shunt capacitor to represent the shift in resonant frequency with changing glucose levels. Work in progress includes optimizing the technique with a larger sample of patients.
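The modified Cole-Cole model can be sketched as a standard single-pole Cole-Cole permittivity whose dispersion amplitude is scaled by a glucose-dependent factor. All parameter values and the linear sensitivity k below are assumptions for illustration, not the paper's fitted model.

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m

def cole_cole(omega, eps_inf, d_eps, tau, alpha, sigma):
    """Single-pole Cole-Cole complex permittivity with a conductivity term:
    eps(w) = eps_inf + d_eps / (1 + (j*w*tau)**(1-alpha)) + sigma / (j*w*EPS0)."""
    jw = 1j * omega
    return eps_inf + d_eps / (1 + (jw * tau) ** (1 - alpha)) + sigma / (jw * EPS0)

def cole_cole_glucose(omega, glucose_mg_dl, k=-2e-4, **params):
    """Hypothetical modification: the dispersion amplitude shrinks linearly
    with glucose concentration (k is an assumed sensitivity, not a fit)."""
    p = dict(params)
    p["d_eps"] *= 1 + k * glucose_mg_dl
    return cole_cole(omega, **p)

# blood-like placeholder parameters at 1 GHz
blood = dict(eps_inf=4.0, d_eps=56.0, tau=8.4e-12, alpha=0.1, sigma=0.7)
w = 2 * math.pi * 1e9
eps_low  = cole_cole_glucose(w, 80.0,  **blood)   # normal glucose
eps_high = cole_cole_glucose(w, 300.0, **blood)   # hyperglycemic
```

A permittivity change of this kind shifts the capacitive loading seen by the wrist antenna, which is what moves its resonant frequency in the calibration described above.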

  20. Appropriate xenon-inhalation time in xenon-enhanced CT using the end-tidal gas-sampling method

    Energy Technology Data Exchange (ETDEWEB)

    Asada, Hideo; Furuhata, Shigeru; Onozuka, Satoshi; Uchida, Koichi; Fujii, Koji; Suga, Sadao; Kawase, Takeshi; Toya, Shigeo; Shiga, Hayao

    1988-12-01

    For the end-tidal gas-sampling method of xenon-enhanced CT (Xe-CT), the respective functional images of K, lambda, and the regional cerebral blood flow (rCBF) were studied and compared using the data at 7-, 10-, 15- and 25-minute inhalations. The most appropriate inhalation time of xenon gas was evaluated in 14 clinical cases. An end-tidal xenon curve which represents the arterial xenon concentration was monitored with a xenon analyzer; the xenon concentration was gradually increased to a level of 50% by using a xenon inhalator with a closed circuit to prevent the overestimation of the xenon concentration sampled from the mask. Serial CT scans were taken over a period of 25 minutes of inhalation. The functional images of K, lambda, and rCBF were calculated for serial CT scans for 7, 10, 15 and 25 minutes using Fick's equation. Those various images and absolute values were then compared. The rCBF value of a 15-minute inhalation was approximately 15% greater than that of 25 minutes, while the values of K, lambda, rCBF from a 15-minute inhalation were significantly correlated to those from 25 minutes. The regression line made it possible to estimate 25-minute inhalation values from those of 15 minutes. In imaging, the rCBF mapping of the 15-minute inhalation was found to be more reliable than that of 25 minutes. This study suggests that the minimal time of xenon inhalation is 15 minutes for the end-tidal gas-sampling method. A longer inhalation may be necessary for the estimation of rCBF in the low-flow area, such as the white matter or the pathological region.
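The K, lambda, and rCBF maps above come from a one-compartment Fick/Kety model. For the idealized case of a constant arterial xenon level, tissue enhancement builds up as Ct(t) = λ·Ca·(1 − e^(−Kt)), with flow proportional to λ·K. The values below are illustrative placeholders, not data from the study.

```python
import math

def xe_tissue_curve(t_min, ca, K, lam):
    """Idealized Xe-CT build-up for a constant arterial level ca:
    Ct(t) = lam * ca * (1 - exp(-K*t)); rCBF is proportional to lam * K."""
    return lam * ca * (1 - math.exp(-K * t_min))

lam, K, ca = 1.0, 0.8, 50.0          # hypothetical partition coeff., rate const., input
c15 = xe_tissue_curve(15.0, ca, K, lam)

# on a noiseless curve, K can be recovered from a single time point
K_hat = -math.log(1 - c15 / (lam * ca)) / 15.0
```

Low-flow tissue (small K, e.g. white matter) is still far from its asymptote λ·Ca at 15 minutes, which is why the abstract finds that short inhalations need the regression against 25-minute values.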

  1. Diurnal Differences in OLR Climatologies and Anomaly Time Series

    Science.gov (United States)

    Susskind, Joel; Lee, Jae N.; Iredell, Lena; Loeb, Norm

    2015-01-01

AIRS (Atmospheric Infrared Sounder) Version-6 OLR (Outgoing Long-Wave Radiation) matches CERES (Clouds and the Earth's Radiant Energy System) Edition-2.8 OLR very closely on a 1x1 degree latitude-longitude scale, both with regard to absolute values and with regard to anomalies of OLR. There is a bias of 3.5 watts per meter squared, which is nearly constant in both time and space. Contiguous areas containing large positive or negative OLR differences between AIRS and CERES are those where the day-night difference of OLR is large. For AIRS, the larger the diurnal cycle, the more likely it is that sampling twice a day is inadequate. The lower values of OLRclr (Clear Sky OLR) and LWCRF (Longwave Cloud Radiative Forcing) in AIRS compared to CERES are at least in part a result of AIRS sampling over cold and cloudy cases.

  2. Peer Effects on Obesity in a Sample of European Children

    DEFF Research Database (Denmark)

    Gwozdz, Wencke; Sousa-Poza, Alfonso; Reisch, Lucia A.

    2015-01-01

This study analyzes peer effects on childhood obesity using data from the first two waves of the IDEFICS study, which applies several anthropometric and other measures of fatness to approximately 14,000 children aged two to nine participating in both waves in 16 regions of eight European countries....... Peers are defined as same-sex children in the same school and age group. The results show that peer effects do exist in this European sample but that they differ among both regions and different fatness measures. Peer effects are larger in Spain, Italy, and Cyprus – the more collectivist regions in our...... sample – while waist circumference generally gives rise to larger peer effects than BMI. We also provide evidence that parental misperceptions of their own children's weight go hand in hand with fatter peer groups, supporting the notion that in making such assessments, parents compare their children...

  3. Elemental Abundances in the Broad Emission Line Region of Quasars at Redshifts larger than 4

    DEFF Research Database (Denmark)

    Dietrich, M.; Appenzeller, I.; Hamann, F.

    2003-01-01

    the chemical composition of the line emitting gas. Comparisons to photoionization calculations indicate gas metallicities in the broad emission line region in the range of solar to several times solar. The average of the mean metallicity of each high-z quasar in this sample is $Z/Z_\\odot = 4.3 \\pm 0...

  4. Quantitative extraction of nucleotides from frozen muscle samples of Atlantic salmon ( Salmo salar ) and rainbow trout ( Oncorhynchus mykiss ) : Effects of time taken to sample and extraction method

    DEFF Research Database (Denmark)

    Thomas, P.M.; Bremner, Allan; Pankhurst, N.W.

    2000-01-01

time taken to sample, method 2 resulted in higher adenylate and lower IMP concentrations than method 1. These results indicate that method 2 is most effective in obtaining realistic nucleotide concentrations from fish muscle because it maintains the tissue temperature below the critical freeze zone, (-0

  5. Apollo Lunar Sample Integration into Google Moon: A New Approach to Digitization

    Science.gov (United States)

Dawson, Melissa D.; Todd, Nancy S.; Lofgren, Gary E.

    2011-01-01

The Google Moon Apollo Lunar Sample Data Integration project is part of a larger, LASER-funded 4-year lunar rock photo restoration project by NASA's Acquisition and Curation Office [1]. The objective of this project is to enhance the Apollo mission data already available on Google Moon with information about the lunar samples collected during the Apollo missions. To this end, we have combined rock sample data from various sources, including Curation databases, mission documentation and lunar sample catalogs, with newly available digital photography of rock samples to create a user-friendly, interactive tool for learning about the Apollo Moon samples.

  6. Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use

    Science.gov (United States)

    Arthur, Steve M.; Schwartz, Charles C.

    1999-01-01

We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly-selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km² (x̄ = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km² (x̄ = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km² (x̄ = 224) for radiotracking data and 16-130 km² (x̄ = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate (change in area bears. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. Investigators that use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve accuracy and precision of home range estimates.
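The MCP home range used above is simply the area of the convex hull of the location fixes. The sketch below computes it from scratch (Andrew's monotone chain plus the shoelace formula); the GPS fixes are invented kilometre coordinates whose hull is a 10 × 10 km square, not bear data.

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_area(verts):
    """Shoelace formula for a simple polygon."""
    s = 0.0
    for i in range(len(verts)):
        x1, y1 = verts[i]
        x2, y2 = verts[(i + 1) % len(verts)]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2

# hypothetical GPS fixes (km east, km north); interior points do not enlarge the MCP
fixes = [(0, 0), (10, 0), (10, 10), (0, 10), (5, 5), (2, 7)]
mcp_km2 = polygon_area(convex_hull(fixes))
```

Because only the outermost fixes define the hull, MCP area can only grow as locations accumulate, which matches the asymptotic increase with sample size reported above.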

  7. Can groundwater sampling techniques used in monitoring wells influence methane concentrations and isotopes?

    Science.gov (United States)

    Rivard, Christine; Bordeleau, Geneviève; Lavoie, Denis; Lefebvre, René; Malet, Xavier

    2018-03-06

    Methane concentrations and isotopic composition in groundwater are the focus of a growing number of studies. However, concerns are often expressed regarding the integrity of samples, as methane is very volatile and may partially exsolve during sample lifting in the well and transfer to sampling containers. While issues concerning bottle-filling techniques have already been documented, this paper documents a comparison of methane concentration and isotopic composition obtained with three devices commonly used to retrieve water samples from dedicated observation wells. This work lies within the framework of a larger project carried out in the Saint-Édouard area (southern Québec, Canada), whose objective was to assess the risk to shallow groundwater quality related to potential shale gas exploitation. The selected sampling devices, which were tested on ten wells during three sampling campaigns, consist of an impeller pump, a bladder pump, and disposable sampling bags (HydraSleeve). The sampling bags were used both before and after pumping, to verify the appropriateness of a no-purge approach, compared to the low-flow approach involving pumping until stabilization of field physicochemical parameters. Results show that methane concentrations obtained with the selected sampling techniques are usually similar and that there is no systematic bias related to a specific technique. Nonetheless, concentrations can sometimes vary quite significantly (up to 3.5 times) for a given well and sampling event. Methane isotopic composition obtained with all sampling techniques is very similar, except in some cases where sampling bags were used before pumping (no-purge approach), in wells where multiple groundwater sources enter the borehole.

  8. Determination of drugs and drug-like compounds in different samples with direct analysis in real time mass spectrometry.

    Science.gov (United States)

    Chernetsova, Elena S; Morlock, Gertrud E

    2011-01-01

    Direct analysis in real time (DART), a relatively new ionization source for mass spectrometry, ionizes small-molecule components from different kinds of samples without any sample preparation and chromatographic separation. The current paper reviews the published data available on the determination of drugs and drug-like compounds in different matrices with DART-MS, including identification and quantitation issues. Parameters that affect ionization efficiency and mass spectra composition are also discussed. Copyright © 2011 Wiley Periodicals, Inc.

  9. Larger Bowl Size Increases the Amount of Cereal Children Request, Consume, and Waste

    NARCIS (Netherlands)

    Wansink, Brian; van Ittersum, Koert; Payne, Collin R.

    Objective To examine whether larger bowls bias children toward requesting more food from the adults who serve them. Study design Study 1 was a between-subject design involving 69 preschool-age children who were randomized to receive either a small (8 oz) or large (16 oz) cereal bowl and were asked

  10. Communication: importance sampling including path correlation in semiclassical initial value representation calculations for time correlation functions.

    Science.gov (United States)

    Pan, Feng; Tao, Guohua

    2013-03-07

    Full semiclassical (SC) initial value representation (IVR) for time correlation functions involves a double phase space average over a set of two phase points, each of which evolves along a classical path. Conventionally, the two initial phase points are sampled independently for all degrees of freedom (DOF) in the Monte Carlo procedure. Here, we present an efficient importance sampling scheme by including the path correlation between the two initial phase points for the bath DOF, which greatly improves the performance of the SC-IVR calculations for large molecular systems. Satisfactory convergence in the study of quantum coherence in vibrational relaxation has been achieved for a benchmark system-bath model with up to 21 DOF.
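The core sampling idea — draw the second initial phase point correlated with the first for the bath DOF, rather than independently — can be sketched with Gaussian sampling: z2 = ρ·z1 + sqrt(1 − ρ²)·ξ. This shows only the correlated-draw device; the actual SC-IVR weights, phases, and classical trajectories are omitted, and the parameters are placeholders.

```python
import math
import random

def correlated_phase_points(rho, n, sigma=1.0, seed=7):
    """Pairs of Gaussian 'initial conditions' with correlation rho,
    mimicking path-correlated importance sampling of a bath DOF."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, sigma)
        z2 = rho * z1 + math.sqrt(1 - rho * rho) * rng.gauss(0.0, sigma)
        pairs.append((z1, z2))
    return pairs

rho = 0.9
pairs = correlated_phase_points(rho, 20000)
emp_corr = sum(a * b for a, b in pairs) / len(pairs)   # ≈ rho for unit variance
```

When the integrand is dominated by nearly coincident phase-point pairs, sampling them with strong correlation concentrates the Monte Carlo draws where the phase cancellation is mild, which is the intuition behind the reported convergence improvement.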

  11. Quantification of Campylobacter spp. in chicken rinse samples by using flotation prior to real-time PCR.

    Science.gov (United States)

    Wolffs, Petra; Norling, Börje; Hoorfar, Jeffrey; Griffiths, Mansel; Rådström, Peter

    2005-10-01

Real-time PCR is fast, sensitive, specific, and can deliver quantitative data; however, two disadvantages are that this technology is sensitive to inhibition by food and that it does not distinguish between DNA originating from viable, viable nonculturable (VNC), and dead cells. For this reason, real-time PCR has been combined with a novel discontinuous buoyant density gradient method, called flotation, in order to allow detection of only viable and VNC cells of thermotolerant campylobacters in chicken rinse samples. Studying the buoyant densities of different Campylobacter spp. showed that densities changed at different time points during growth; however, all varied between 1.065 and 1.109 g/ml. These data were then used to develop a flotation assay. Results showed that after flotation and real-time PCR, cell concentrations as low as 8.6 × 10² CFU/ml could be detected without culture enrichment and amounts as low as 2.6 × 10³ CFU/ml could be quantified. Furthermore, subjecting viable cells and dead cells to flotation showed that viable cells were recovered after flotation treatment but that dead cells and/or their DNA was not detected. Also, when samples containing VNC cells mixed with dead cells were treated with flotation after storage at 4 or 20 degrees C for 21 days, a similar percentage resembling the VNC cell fraction was detected using real-time PCR and 5-cyano-2,3-ditolyl tetrazolium chloride-4',6'-diamidino-2-phenylindole staining (20% ± 9% and 23% ± 4%, respectively, at 4 degrees C; 11% ± 4% and 10% ± 2%, respectively, at 20 degrees C). This indicated that viable and VNC Campylobacter cells could be positively selected and quantified using the flotation method.
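Quantification by real-time PCR rests on a log-linear standard curve relating the quantification cycle (Cq) to the starting concentration. The slope and intercept below are typical placeholder values, not calibration data from this study.

```python
def cfu_per_ml_from_cq(cq, slope=-3.32, intercept=38.0):
    """Invert a hypothetical standard curve Cq = slope*log10(CFU/ml) + intercept.
    A slope of -3.32 cycles/decade corresponds to ~100% amplification efficiency."""
    return 10 ** ((cq - intercept) / slope)

c = cfu_per_ml_from_cq(26.5)          # example Cq from an amplification run
quantifiable = c >= 2.6e3             # the study's reported quantification limit
```

Each ~3.32-cycle decrease in Cq corresponds to a tenfold higher starting concentration, which is why detection and quantification limits are quoted as powers of ten.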

  12. The influence of adhesive joint characteristics of the bonded samples of PUR-foam

    Directory of Open Access Journals (Sweden)

    Josef Pacovský

    2011-01-01

Full Text Available Upholstered chairs, and upholstered furniture in general, are largely produced using PUR foams, typically by gluing several types of foam to each other and onto a rigid base (usually plywood or an agglomerated board) to improve the quality of upholstered furniture, including seating. This work examines the influence of the adhesive used on the properties of bonded PUR-foam samples, and how the bonded joint affects the final properties of the finished product, including changes in functional properties over time in use. The test methodology was based on the standard EN 1957, further modified as necessary. From the test results it can be concluded that the bonded joints ultimately have a negligible effect on the observed characteristics, and therefore smaller foam pieces can be cut and glued into larger blocks without a fundamental change in the original properties.

  13. PAVENET OS: A Compact Hard Real-Time Operating System for Precise Sampling in Wireless Sensor Networks

    Science.gov (United States)

    Saruwatari, Shunsuke; Suzuki, Makoto; Morikawa, Hiroyuki

The paper presents a compact hard real-time operating system for wireless sensor nodes called PAVENET OS. PAVENET OS provides hybrid multithreading: preemptive multithreading and cooperative multithreading. Both forms of multithreading are optimized for the two kinds of tasks found in wireless sensor networks: real-time tasks and best-effort tasks. PAVENET OS can efficiently perform hard real-time tasks that cannot be performed by TinyOS. The paper demonstrates through quantitative evaluation that this hybrid multithreading achieves compactness and low overheads comparable to those of TinyOS. The evaluation results show that PAVENET OS performs 100 Hz sensor sampling with 0.01% jitter while performing wireless communication tasks, whereas optimized TinyOS has 0.62% jitter. In addition, PAVENET OS has a small footprint and low overheads (minimum RAM size: 29 bytes, minimum ROM size: 490 bytes, minimum task switch time: 23 cycles).
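PAVENET OS itself targets sensor-node microcontrollers, so the following is only a simulated-time sketch of the scheduling idea (the task shapes, costs, and the zero-cost sampling assumption are mine, not the paper's): a periodic sampling task gets strict priority at each release, cooperative best-effort chunks run in the gaps, and jitter arises only when a non-preemptible chunk straddles a release.

```python
def best_effort(chunks, chunk_cost):
    """A cooperative best-effort task: yields after each chunk of work."""
    for _ in range(chunks):
        yield chunk_cost

def run(period, tasks, horizon):
    """Simulated-time scheduler: the sampling task always runs first at each
    release; best-effort chunks run to completion (no preemption) in between."""
    t, next_release, samples = 0.0, 0.0, []
    queue = list(tasks)
    while t < horizon:
        if t >= next_release:        # hard real-time sampling task has priority
            samples.append(t)        # sampling cost assumed negligible
            next_release += period
        elif queue:                  # run one cooperative chunk to completion
            task = queue.pop(0)
            try:
                t += next(task)
                queue.append(task)
            except StopIteration:
                pass
        else:                        # idle until the next sampling instant
            t = next_release
    return samples

# Short chunks keep samples close to the 10 ms grid; long chunks can
# straddle a release and delay the sample, which shows up as jitter.
fine = run(0.01, [best_effort(5000, 0.0001)], 1.0)
coarse = run(0.01, [best_effort(200, 0.004)], 1.0)
jitter = lambda s, T: max(abs(v - i * T) for i, v in enumerate(s))
print(jitter(fine, 0.01), jitter(coarse, 0.01))
```

The design point mirrors the abstract: worst-case sampling delay is bounded by the longest non-preemptible best-effort chunk, which is why PAVENET OS preempts for real-time work instead of relying on cooperation alone.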

  14. Optimal Subset Selection of Time-Series MODIS Images and Sample Data Transfer with Random Forests for Supervised Classification Modelling.

    Science.gov (United States)

    Zhou, Fuqun; Zhang, Aining

    2016-10-25

Nowadays, various time-series Earth Observation datasets with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets, including 8-day composites from NASA and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to use these time-series MODIS datasets efficiently for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will grow when Sentinel 2-3 data become available. Another challenge researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle these two issues with a case study of land cover mapping using CCRS 10-day MODIS composites with the help of two Random Forests features: variable importance and outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized to transfer sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about half of the variables we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective sample-transfer solution could make supervised modelling possible for applications lacking sample data.
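As a hedged sketch of the variable-importance step (synthetic data standing in for a stack of MODIS composites; all names and parameters below are assumptions, not the CCRS pipeline), one can rank inputs with a Random Forest and check that a top-ranked subset classifies nearly as well as the full variable set:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for 24 time-series "variables" (e.g. dekadal band
# composites), only some of which carry land-cover signal.
X, y = make_classification(n_samples=2000, n_features=24, n_informative=6,
                           n_redundant=6, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

full = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
acc_full = accuracy_score(yte, full.predict(Xte))

# Rank variables by RF importance and keep only the top half.
top = np.argsort(full.feature_importances_)[::-1][:12]
sub = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr[:, top], ytr)
acc_sub = accuracy_score(yte, sub.predict(Xte[:, top]))
print(acc_full, acc_sub)
```

On this toy data the half-size subset loses little accuracy, which is the effect the abstract reports for the MODIS case study.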

  15. Protecting the larger fish: an ecological, economical and evolutionary analysis using a demographic model

    DEFF Research Database (Denmark)

    Verdiell, Nuria Calduch

Many marine fish stocks are reported as overfished on a global scale. This overfishing not only removes fish biomass, but also causes dramatic changes in the age and size structure of fish stocks; in particular, targeting of the larger individuals truncates the age and size structure of stocks... Recently, there is increasing evidence that this size-selective fishing reduces the chances of maintaining populations at levels sufficient to produce maximum sustainable yields and of recovering/rebuilding populations that have been depleted or collapsed, and that it may cause rapid evolutionary changes... and the consequent changes in yield. We attempt to evaluate the capability of protecting the larger fish to mitigate the evolutionary change in life-history traits caused by fishing, while also maintaining a sustainable annual yield. This is achieved by calculating the expected selection response on three life-history traits...

  16. Inorganic elements in sugar samples consumed in several countries

    International Nuclear Information System (INIS)

    Salles, P.M.B.; Campos, T.P.R.; Menezes, M.A. de B.C.; Jacimovic, Radojko

    2016-01-01

Sugar is considered a safe food ingredient; however, it can contain inorganic elements taken up as impurities during cultivation and production. Therefore, this study aimed to identify the presence of these elements in granulated and brown sugar samples available for consumption in public places in several countries. The neutron activation technique, applying the methodology established at CDTN/CNEN for analysing larger samples (5 g per sample), based on the k₀-method, was used to determine the elemental concentrations. Several essential and nonessential elements were determined over a large range of concentrations. The results are discussed in comparison with the maximum values allowed by international and Brazilian legislation. (author)

  17. Automating data analysis for two-dimensional gas chromatography/time-of-flight mass spectrometry non-targeted analysis of comparative samples.

    Science.gov (United States)

    Titaley, Ivan A; Ogba, O Maduka; Chibwe, Leah; Hoh, Eunha; Cheong, Paul H-Y; Simonich, Staci L Massey

    2018-03-16

Non-targeted analysis of environmental samples, using comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC × GC/ToF-MS), poses significant data analysis challenges due to the large number of possible analytes. Non-targeted data analysis of complex mixtures is prone to human bias and is laborious, particularly for comparative environmental samples such as contaminated soil pre- and post-bioremediation. To address this research bottleneck, we developed OCTpy, a Python™ script that acts as a data reduction filter to automate GC × GC/ToF-MS data analysis from LECO® ChromaTOF® software and facilitates selection of analytes of interest based on peak area comparison between comparative samples. We used data from polycyclic aromatic hydrocarbon (PAH) contaminated soil, pre- and post-bioremediation, to assess the effectiveness of OCTpy in facilitating the selection of analytes that have formed or degraded following treatment. Using datasets from the soil extracts pre- and post-bioremediation, OCTpy selected, on average, 18% of the initial suggested analytes generated by the LECO® ChromaTOF® software Statistical Compare feature. Based on this list, 63-100% of the candidate analytes identified by a highly trained individual were also selected by OCTpy. This process was accomplished in several minutes per sample, whereas manual data analysis took several hours per sample. OCTpy automates the analysis of complex mixtures of comparative samples, reduces the potential for human error during heavy data handling and decreases data analysis time by at least tenfold. Copyright © 2018 Elsevier B.V. All rights reserved.
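A hypothetical mini-filter in the spirit of OCTpy (the function name, fold-change threshold, and detection floor are assumptions for illustration, not the published tool): keep only analytes whose integrated peak area changed by at least a given fold between the comparative samples.

```python
def select_analytes(pre, post, fold=5.0, floor=1e3):
    """pre/post map analyte name -> integrated peak area; `floor` stands in
    for a detection limit so absent peaks do not divide by zero."""
    selected = {}
    for name in set(pre) | set(post):
        a = max(pre.get(name, 0.0), floor)
        b = max(post.get(name, 0.0), floor)
        ratio = b / a
        if ratio >= fold or ratio <= 1.0 / fold:
            selected[name] = ratio   # >1: formed after treatment; <1: degraded
    return selected

# Example: one PAH degrades during bioremediation, one product forms,
# and one analyte is essentially unchanged (so it is filtered out).
pre = {"pyrene": 2.0e6, "phenanthrene": 5.0e5}
post = {"pyrene": 1.8e6, "quinone_X": 8.0e4}
print(select_analytes(pre, post))
```

Such a fold-change filter is the simplest possible data reduction step; the real tool additionally leans on ChromaTOF's Statistical Compare output rather than raw area dictionaries.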

  18. Sampling genetic diversity in the sympatrically and allopatrically speciating Midas cichlid species complex over a 16 year time series

    Directory of Open Access Journals (Sweden)

    Bunje Paul ME

    2007-02-01

    Full Text Available Abstract Background Speciation often occurs in complex or uncertain temporal and spatial contexts. Processes such as reinforcement, allopatric divergence, and assortative mating can proceed at different rates and with different strengths as populations diverge. The Central American Midas cichlid fish species complex is an important case study for understanding the processes of speciation. Previous analyses have demonstrated that allopatric processes led to species formation among the lakes of Nicaragua as well as sympatric speciation that is occurring within at least one crater lake. However, since speciation is an ongoing process and sampling genetic diversity of such lineages can be biased by collection scheme or random factors, it is important to evaluate the robustness of conclusions drawn on individual time samples. Results In order to assess the validity and reliability of inferences based on different genetic samples, we have analyzed fish from several lakes in Nicaragua sampled at three different times over 16 years. In addition, this time series allows us to analyze the population genetic changes that have occurred between lakes, where allopatric speciation has operated, as well as between different species within lakes, some of which have originated by sympatric speciation. Focusing on commonly used genetic markers, we have analyzed both DNA sequences from the complete mitochondrial control region as well as nuclear DNA variation at ten microsatellite loci from these populations, sampled thrice in a 16 year time period, to develop a robust estimate of the population genetic history of these diversifying lineages. Conclusion The conclusions from previous work are well supported by our comprehensive analysis. In particular, we find that the genetic diversity of derived crater lake populations is lower than that of the source population regardless of when and how each population was sampled. Furthermore, changes in various estimates of

  19. A simple method for regional cerebral blood flow measurement by one-point arterial blood sampling and 123I-IMP microsphere model (part 2). A study of time correction of one-point blood sample count

    International Nuclear Information System (INIS)

    Masuda, Yasuhiko; Makino, Kenichi; Gotoh, Satoshi

    1999-01-01

In our previous paper regarding determination of the regional cerebral blood flow (rCBF) using the 123I-IMP microsphere model, we reported that the accuracy of determination of the integrated value of the input function from one-point arterial blood sampling can be increased by performing correction using the 5 min:29 min ratio for the whole-brain count. However, failure to carry out the arterial blood collection at exactly 5 minutes after 123I-IMP injection causes errors with this method, and there is thus a time limitation. We have now revised our method so that the one-point arterial blood sampling can be performed at any time between 5 and 20 minutes after 123I-IMP injection, with the addition of a correction step for the sampling time. This revised method permits more accurate estimation of the integral of the input function. The method was then applied to 174 experimental subjects: one-point blood samples were collected at random times between 5 and 20 minutes, and the estimated values for the continuous arterial octanol extraction count (COC) were determined. The mean error rate between the COC and the actually measured continuous arterial octanol extraction count (OC) was 3.6%, and the standard deviation was 12.7%. Accordingly, in 70% of the cases the rCBF could be estimated within an error rate of 13%, while in 95% of the cases estimation was possible within an error rate of 25%. This improved method is a simple technique for determination of the rCBF by the 123I-IMP microsphere model and one-point arterial blood sampling which is no longer subject to a time limitation and does not require an octanol extraction step. (author)

  20. Evaluation of Equations for Predicting 24-Hour Urinary Sodium Excretion from Casual Urine Samples in Asian Adults.

    Science.gov (United States)

    Whitton, Clare; Gay, Gibson Ming Wei; Lim, Raymond Boon Tar; Tan, Linda Wei Lin; Lim, Wei-Yen; van Dam, Rob M

    2016-08-01

    The collection of 24-h urine samples for the estimation of sodium intake is burdensome, and the utility of spot urine samples in Southeast Asian populations is unclear. We aimed to assess the validity of prediction equations with the use of spot urine concentrations. A sample of 144 Singapore residents of Chinese, Malay, and Indian ethnicity aged 18-79 y were recruited from the Singapore Health 2 Study conducted in 2014. Participants collected urine for 24 h in multiple small bottles on a single day. To determine the optimal collection time for a spot urine sample, a 1-mL sample was taken from a random bottle collected in the morning, afternoon, and evening. Published equations and a newly derived equation were used to predict 24-h sodium excretion from spot urine samples. The mean ± SD concentration of sodium from the 24-h urine sample was 125 ± 53.4 mmol/d, which is equivalent to 7.2 ± 3.1 g salt. Bland-Altman plots showed good agreement at the group level between estimated and actual 24-h sodium excretion, with biases for the morning period of -3.5 mmol (95% CI: -14.8, 7.8 mmol; new equation) and 1.46 mmol (95% CI: -10.0, 13.0 mmol; Intersalt equation). A larger bias of 25.7 mmol (95% CI: 12.2, 39.3 mmol) was observed for the Tanaka equation in the morning period. The prediction accuracy did not differ significantly for spot urine samples collected at different times of the day or at a random time of day (P = 0.11-0.76). This study suggests that the application of both our own newly derived equation and the Intersalt equation to spot urine concentrations may be useful in predicting group means for 24-h sodium excretion in urban Asian populations. © 2016 American Society for Nutrition.

  1. Parameterizing Spatial Models of Infectious Disease Transmission that Incorporate Infection Time Uncertainty Using Sampling-Based Likelihood Approximations.

    Directory of Open Access Journals (Sweden)

    Rajat Malik

Full Text Available A class of discrete-time models of infectious disease spread, referred to as individual-level models (ILMs), are typically fitted in a Bayesian Markov chain Monte Carlo (MCMC) framework. These models quantify probabilistic outcomes regarding the risk of infection of susceptible individuals due to various susceptibility and transmissibility factors, including their spatial distance from infectious individuals. The infectious pressure from infected individuals exerted on susceptible individuals is intrinsic to these ILMs. Unfortunately, quantifying this infectious pressure for data sets containing many individuals can be computationally burdensome, leading to a time-consuming likelihood calculation and, thus, computationally prohibitive MCMC-based analysis. This problem worsens when using data augmentation to allow for uncertainty in infection times. In this paper, we develop sampling methods that can be used to calculate a fast, approximate likelihood when fitting such disease models. A simple random sampling approach is initially considered followed by various spatially-stratified schemes. We test and compare the performance of our methods with both simulated data and data from the 2001 foot-and-mouth disease (FMD) epidemic in the U.K. Our results indicate that substantial computation savings can be obtained--albeit, of course, with some information loss--suggesting that such techniques may be of use in the analysis of very large epidemic data sets.
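A toy version of the simple-random-sampling idea (the kernel form, locations, and sizes below are assumptions, not the paper's FMD analysis): approximate the sum over all infectious individuals that defines each susceptible's infectious pressure by a scaled random subsample, trading a little accuracy for a much cheaper likelihood term.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for an ILM's infectious-pressure term (kernel form assumed):
# pressure on susceptible s = sum over infectious j of exp(-beta * d_sj).
infectious = rng.uniform(0, 10, size=(2000, 2))   # infectious locations
susceptible = rng.uniform(0, 10, size=(50, 2))    # susceptible locations
beta = 2.0

def pressure_full(s):
    d = np.linalg.norm(infectious - s, axis=1)
    return np.sum(np.exp(-beta * d))

def pressure_sampled(s, m):
    """Unbiased approximation from m of the 2000 infectious individuals."""
    idx = rng.choice(len(infectious), size=m, replace=False)
    d = np.linalg.norm(infectious[idx] - s, axis=1)
    return (len(infectious) / m) * np.sum(np.exp(-beta * d))

full = np.array([pressure_full(s) for s in susceptible])
approx = np.array([pressure_sampled(s, m=500) for s in susceptible])
rel_err = np.abs(approx - full) / full
print(rel_err.mean())
```

Sampling a quarter of the infectious individuals cuts the per-susceptible cost fourfold while keeping the estimator unbiased; the paper's spatially-stratified schemes refine exactly this trade-off.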

  2. The generalization ability of online SVM classification based on Markov sampling.

    Science.gov (United States)

    Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang

    2015-03-01

In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish the bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present numerical studies of the learning ability of online SVM classification based on Markov sampling on benchmark repository datasets. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling when the training sample size is large.

  3. Sampling Development

    Science.gov (United States)

    Adolph, Karen E.; Robinson, Scott R.

    2011-01-01

    Research in developmental psychology requires sampling at different time points. Accurate depictions of developmental change provide a foundation for further empirical studies and theories about developmental mechanisms. However, overreliance on widely spaced sampling intervals in cross-sectional and longitudinal designs threatens the validity of…

  4. Short time-scale optical variability properties of the largest AGN sample observed with Kepler/K2

    Science.gov (United States)

    Aranzana, E.; Körding, E.; Uttley, P.; Scaringi, S.; Bloemen, S.

    2018-05-01

We present the first short time-scale (˜hours to days) optical variability study of a large sample of active galactic nuclei (AGNs) observed with the Kepler/K2 mission. The sample contains 252 AGN observed over four campaigns with ˜30 min cadence selected from the Million Quasar Catalogue with R magnitude <19. We performed time series analysis to determine their variability properties by means of the power spectral densities (PSDs) and applied Monte Carlo techniques to find the best model parameters that fit the observed power spectra. A power-law model is sufficient to describe all the PSDs of our sample. A variety of power-law slopes were found, indicating that there is not a universal slope for all AGNs. We find that the rest-frame amplitude variability in the frequency range of 6 × 10⁻⁶ to 10⁻⁴ Hz varies from 1 to 10 per cent with an average of 1.7 per cent. We explore correlations between the variability amplitude and key parameters of the AGN, finding a significant correlation of rest-frame short-term variability amplitude with redshift. We attribute this effect to the known `bluer when brighter' variability of quasars combined with the fixed bandpass of Kepler data. This study also enables us to distinguish between Seyferts and blazars and confirm AGN candidates. For our study, we have compared results obtained from light curves extracted using different aperture sizes and with and without detrending. We find that limited detrending of the optimal photometric precision light curve is the best approach, although some systematic effects still remain present.

  5. Soil sample moisture content as a function of time during oven drying for gamma-ray spectroscopic measurements

    International Nuclear Information System (INIS)

    Benke, R.R.; Kearfott, K.J.

    1999-01-01

In routine gamma-ray spectroscopic analysis of collected soil samples, procedures often call for removing soil moisture by oven drying overnight at a temperature of 100 °C. Oven drying not only minimizes the gamma-ray self-attenuation of soil samples due to the absence of water during the gamma-ray spectroscopic analysis, but also allows for a straightforward calculation of the specific activity of radionuclides in soil, historically based on the sample dry weight. Because radon exhalation is strongly dependent on moisture, knowledge of the oven-drying time dependence of the soil moisture content, combined with radon exhalation measurements during oven drying and at room temperature for varying soil moisture contents, would allow conclusions to be made on how the oven-drying radon exhalation rate depends on soil moisture content. Determinations of the oven-drying radon exhalation from soil samples allow corrections to be made for the immediate laboratory gamma-ray spectroscopy of radionuclides in the natural uranium decay chain. This paper presents the results of soil moisture content measurements during oven drying and suggests useful empirical fits to the moisture data.

  6. Discrete-Time Systems

    Indian Academy of Sciences (India)

We also describe discrete-time systems in terms of difference ... A more modern alternative, especially for larger systems, is to convert ... In other words, ... State-variable equations are also called state-space equations because the ...

  7. Methods of biological fluids sample preparation - biogenic amines, methylxanthines, water-soluble vitamins.

    Science.gov (United States)

    Płonka, Joanna

    2015-01-01

In recent years, demands on the amount of information that can be obtained from the analysis of a single sample have increased. For time and economic reasons it is necessary to examine a larger number of compounds, and compounds from different groups, at the same time. This can best be seen in areas such as clinical analysis. In many diseases, the best results for patients are obtained when treatment fits the individual characteristics of the patient. Dosage monitoring is important at the beginning of therapy and throughout the course of treatment. In the treatment of many diseases, biogenic amines (dopamine, serotonin) and methylxanthines (theophylline, theobromine, caffeine) play an important role. They are used as drugs separately or in combination to support and strengthen the action of other drugs - for example, the combination of caffeine and paracetamol. Vitamin supplementation may also be an integral part of the treatment process. Complete sample preparation parameters for extraction of the above compounds from biological matrices are reviewed, with particular attention given to the preparation stage and extraction methods. This review provides universal guidance on establishing common procedures across laboratories to facilitate the preparation and analysis of all discussed compounds. Copyright © 2014 John Wiley & Sons, Ltd.

  8. A specialist toxicity database (TRACE) is more effective than its larger, commercially available counterparts

    NARCIS (Netherlands)

    Anderson, C.A.; Copestake, P.T.; Robinson, L.

    2000-01-01

    The retrieval precision and recall of a specialist bibliographic toxicity database (TRACE) and a range of widely available bibliographic databases used to identify toxicity papers were compared. The analysis indicated that the larger size and resources of the major bibliographic databases did not,

  9. Very extensive nonmaternal care predicts mother-infant attachment disorganization: Convergent evidence from two samples.

    Science.gov (United States)

    Hazen, Nancy L; Allen, Sydnye D; Christopher, Caroline Heaton; Umemura, Tomotaka; Jacobvitz, Deborah B

    2015-08-01

    We examined whether a maximum threshold of time spent in nonmaternal care exists, beyond which infants have an increased risk of forming a disorganized infant-mother attachment. The hours per week infants spent in nonmaternal care at 7-8 months were examined as a continuous measure and as a dichotomous threshold (over 40, 50 and 60 hr/week) to predict infant disorganization at 12-15 months. Two different samples (Austin and NICHD) were used to replicate findings and control for critical covariates: mothers' unresolved status and frightening behavior (assessed in the Austin sample, N = 125), quality of nonmaternal caregiving (assessed in the NICHD sample, N = 1,135), and family income and infant temperament (assessed in both samples). Only very extensive hours of nonmaternal care (over 60 hr/week) and mothers' frightening behavior independently predicted attachment disorganization. A polynomial logistic regression performed on the larger NICHD sample indicated that the risk of disorganized attachment exponentially increased after exceeding 60 hr/week. In addition, very extensive hours of nonmaternal care only predicted attachment disorganization after age 6 months (not prior). Findings suggest that during a sensitive period of attachment formation, infants who spend more than 60 hr/week in nonmaternal care may be at an increased risk of forming a disorganized attachment.

  10. Advanced Ignition System for Hybrid Rockets for Sample Return Missions, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — To return a sample from the surface of Mars or any of the larger moons in the solar system will require a propulsion system with a comparatively large delta-V...

  11. Efficiency and Reliability of Laparoscopic Partial Nephrectomy for Renal Tumors Larger than 4 cm

    Directory of Open Access Journals (Sweden)

    Faruk Özgör

    2015-03-01

Full Text Available Aim: To evaluate the safety and efficiency of laparoscopic partial nephrectomy for renal tumors larger than 4 cm. Methods: We retrospectively evaluated the medical records of 65 patients who underwent laparoscopic partial nephrectomy between May 2009 and June 2013 in our clinic. The patients were divided into two groups according to tumor size: patients with a tumor ≤4 cm were included in group 1 (n=45) and those with a tumor >4 cm in group 2 (n=20). Demographic, perioperative and postoperative parameters were compared between the groups. Histopathological examination and surgical margin status were also evaluated. Results: The mean age of the patients was 59.2±10.9 (range: 26-81) years. The mean tumor size and the mean RENAL nephrometry score were significantly higher in group 2 than in group 1. The mean operation time and warm ischemia time were similar between the groups, but estimated blood loss and transfusion requirement were significantly higher in group 2. Conversion to open surgery was seen in two patients in group 2 and one patient in group 1. Only one patient, in group 2, underwent radical nephrectomy for uncontrolled bleeding. There was no difference in preoperative and 3-month postoperative serum creatinine levels between the groups. The incidence of a positive surgical margin was 0% and 5% in group 1 and group 2, respectively. Conclusion: Laparoscopic partial nephrectomy for renal tumors is an effective and feasible procedure with acceptable oncologic results. However, transfusion and the requirement for pelvicaliceal system repair were more common in patients with tumors >4 cm. (The Medical Bulletin of Haseki 2015; 53:30-5)

  12. Soil Gas Sample Handling: Evaluation of Water Removal and Sample Ganging

    Energy Technology Data Exchange (ETDEWEB)

    Fritz, Brad G. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Abrecht, David G. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hayes, James C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Mendoza, Donaldo P. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-10-31

Soil gas sampling is currently conducted in support of Nuclear Test Ban Treaty verification. Soil gas samples are collected and analyzed for isotopes of interest. Some issues that can impact sampling and analysis of these samples are excess moisture and sample processing time. Here we discuss three potential improvements to the current sampling protocol: a desiccant for water removal, use of a molecular sieve to remove CO2 from the sample during collection, and a ganging manifold to allow composite analysis of multiple samples.

  13. Diagnostic performance of Schistosoma real-time PCR in urine samples from Kenyan children infected with Schistosoma haematobium

    DEFF Research Database (Denmark)

    Vinkeles Melchers, Natalie V. S.; van Dam, Govert J.; Shaproski, David

    2014-01-01

    treatment. METHODOLOGY: Previously collected urine samples (N = 390) from 114 preselected proven parasitological and/or clinical S. haematobium positive Kenyan schoolchildren were analyzed by a Schistosoma internal transcribed spacer-based real-time PCR after 14 years of storage. Pre-treatment day......, respectively. Based on the 'gold standard', PCR showed high sensitivity (>92%) as compared to >31% sensitivity for microscopy, both pre- and post-treatment. CONCLUSIONS/SIGNIFICANCE: Detection and quantification of Schistosoma DNA in urine by real-time PCR was shown to be a powerful and specific diagnostic...

  14. A Schistosoma haematobium-specific real-time PCR for diagnosis of urogenital schistosomiasis in serum samples of international travelers and migrants.

    Science.gov (United States)

    Cnops, Lieselotte; Soentjens, Patrick; Clerinx, Jan; Van Esbroeck, Marjan

    2013-01-01

Diagnosis of urogenital schistosomiasis by microscopy and serological tests may be elusive in travelers due to low egg load and the absence of seroconversion upon arrival. There is a need for a more sensitive diagnostic test. Therefore, we developed a real-time PCR targeting the Schistosoma haematobium-specific Dra1 sequence. The PCR was evaluated on urine (n = 111), stool (n = 84) and serum samples (n = 135), and one biopsy from travelers and migrants with confirmed or suspected schistosomiasis. PCR revealed a positive result in 7/7 urine samples, 11/11 stool samples and 1/1 biopsy containing S. haematobium eggs as demonstrated by microscopy and in 22/23 serum samples from patients with a parasitological confirmed S. haematobium infection. S. haematobium DNA was additionally detected by PCR in 7 urine, 3 stool and 5 serum samples of patients suspected of having schistosomiasis without egg excretion in urine and feces. None of these suspected patients demonstrated other parasitic infections except one with Blastocystis hominis and Entamoeba cysts in a fecal sample. The PCR was negative in all stool samples containing S. mansoni eggs (n = 21) and in all serum samples of patients with a microscopically confirmed S. mansoni (n = 22), Ascaris lumbricoides (n = 1), Ancylostomidae (n = 1), Strongyloides stercoralis (n = 1) or Trichuris trichiura infection (n = 1). The PCR demonstrated a high specificity, reproducibility and analytical sensitivity (0.5 eggs per gram of feces). The real-time PCR targeting the Dra1 sequence for S. haematobium-specific detection in urine, feces, and particularly serum, is a promising tool to confirm the diagnosis, also during the acute phase of urogenital schistosomiasis.

15. Water-borne pollutant sampling using porous suction samplers

    International Nuclear Information System (INIS)

    Baig, M.A.

    1997-01-01

The common standard method of sampling water-borne pollutants in the vadose zone is core sampling followed by extraction of the pore fluid. This method does not allow repeated sampling at the same location at later times. An alternative approach for sampling fluids (water-borne pollutants) from both saturated and unsaturated regions of the vadose zone is to use porous suction samplers. There are three types of porous suction samplers: vacuum-operated samplers, pressure-vacuum lysimeters, and high-pressure-vacuum samplers. The suction samplers are operated in the range of 0-70 centibars and usually consist of ceramic or polytetrafluoroethylene (PTFE) cups; the operating range of PTFE is higher than that of ceramic cups. These samplers are well suited for in situ and repeated sampling from the same location. This paper discusses the physical properties and operating conditions of such samplers to be utilized in our environmental sampling. (author)

  16. Discrete- vs. Continuous-Time Modeling of Unequally Spaced Experience Sampling Method Data

    Directory of Open Access Journals (Sweden)

    Silvia de Haan-Rietdijk

    2017-10-01

    Full Text Available The Experience Sampling Method (ESM) is a common approach in psychological research for collecting intensive longitudinal data with high ecological validity. One characteristic of ESM data is that it is often unequally spaced, because the measurement intervals within a day are deliberately varied, and measurement continues over several days. This poses a problem for discrete-time (DT) modeling approaches, which are based on the assumption that all measurements are equally spaced. Nevertheless, DT approaches such as (vector) autoregressive modeling are often used to analyze ESM data, for instance in the context of affective dynamics research. There are equivalent continuous-time (CT) models, but they are more difficult to implement. In this paper we take a pragmatic approach and evaluate the practical relevance of the violated model assumption in DT AR(1) and VAR(1) models, for the N = 1 case. We use simulated data under an ESM measurement design to investigate the bias in the parameters of interest under four different model implementations, ranging from the true CT model that accounts for all the exact measurement times, to the crudest possible DT model implementation, where even the nighttime is treated as a regular interval. An analysis of empirical affect data illustrates how the differences between DT and CT modeling can play out in practice. We find that the size and the direction of the bias in DT (V)AR(1) models for unequally spaced ESM data depend quite strongly on the true parameter in addition to data characteristics. Our recommendation is to use CT modeling whenever possible, especially now that new software implementations have become available.
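    The spacing problem is easy to see in a toy continuous-time AR(1), i.e. an Ornstein-Uhlenbeck process: each gap Δt implies its own autoregressive coefficient exp(-θΔt), so a single pooled DT coefficient mixes coefficients from all the different gaps. A minimal sketch (not the paper's models; θ, σ and the uniform gap distribution are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)

theta, sigma = 0.5, 1.0          # CT model: dx = -theta*x dt + sigma dW
n = 5000
dts = rng.uniform(0.2, 3.0, n)   # unequal, ESM-like measurement gaps

# Exact simulation of the OU (continuous-time AR(1)) process at irregular times
stat_var = sigma**2 / (2 * theta)          # stationary variance
x = np.empty(n + 1)
x[0] = 0.0
for i, dt in enumerate(dts):
    phi = np.exp(-theta * dt)              # gap-specific AR coefficient
    x[i + 1] = phi * x[i] + rng.normal(0.0, np.sqrt(stat_var * (1 - phi**2)))

# Naive DT AR(1) fit that ignores the unequal spacing (one pooled coefficient)
phi_dt = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)
print(f"pooled DT phi: {phi_dt:.3f}")
print(f"implied CT phi range: {np.exp(-theta * dts.max()):.3f} .. {np.exp(-theta * dts.min()):.3f}")
```

    The pooled estimate lands somewhere inside the range of gap-specific coefficients, i.e. it corresponds to no single true interval, which is one face of the bias studied in the paper.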

  17. Efficient Sampling of the Structure of Crypto Generators' State Transition Graphs

    Science.gov (United States)

    Keller, Jörg

    Cryptographic generators, e.g. stream cipher generators like the A5/1 used in GSM networks or pseudo-random number generators, are widely used in cryptographic network protocols. Basically, they are finite state machines with deterministic transition functions. Their state transition graphs typically cannot be analyzed analytically, nor can they be explored completely because of their size, which typically is at least n = 2^64. Yet, their structure, i.e. number and sizes of weakly connected components, is of interest because a structure deviating significantly from expected values for random graphs may form the basis of a distinguishing attack that indicates a weakness or backdoor. By sampling, one randomly chooses k nodes, derives their distribution onto connected components by graph exploration, and extrapolates these results to the complete graph. In known algorithms, the computational cost to determine the component for one randomly chosen node is up to O(√n), which severely restricts the sample size k. We present an algorithm where the computational cost to find the connected component for one randomly chosen node is O(1), so that a much larger sample size k can be analyzed in a given time. We report on the performance of a prototype implementation, and on preliminary analyses for several generators.
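    The baseline O(√n)-per-node approach (not the paper's O(1) algorithm) can be sketched on a toy random mapping: since the transition function is deterministic, every node eventually reaches a unique cycle, and the minimum node on that cycle labels the component. The graph size, sample size, and "generator" below are arbitrary stand-ins:

```python
import random
from collections import Counter

def brent_cycle_min(f, x0):
    """Return the minimum node on the cycle reached from x0 under iteration of f.
    That minimum uniquely labels the weakly connected component, because each
    component of a functional graph contains exactly one cycle."""
    # Brent's cycle detection: find the cycle length lam and a node on the cycle
    power = lam = 1
    tortoise, hare = x0, f(x0)
    while tortoise != hare:
        if power == lam:
            tortoise, power, lam = hare, power * 2, 0
        hare = f(hare)
        lam += 1
    # The meeting node is periodic, hence on the cycle; take the minimum label
    node, best = hare, hare
    for _ in range(lam):
        node = f(node)
        best = min(best, node)
    return best

n = 1 << 16
rng = random.Random(1)
table = [rng.randrange(n) for _ in range(n)]   # random mapping as toy "generator"
f = lambda x: table[x]

k = 200                                        # sample size
labels = [brent_cycle_min(f, rng.randrange(n)) for _ in range(k)]
# Extrapolate: the fraction of samples per label estimates that component's share of n
shares = {lab: c / k for lab, c in Counter(labels).items()}
print(f"{len(shares)} components hit; largest holds ~{max(shares.values()):.0%} of sampled nodes")
```

    The expected rho-length of a random mapping is Θ(√n), which is exactly the per-node cost the paper's algorithm removes.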

  18. New nonbinary quantum codes with larger distance constructed from BCH codes over 𝔽q²

    Science.gov (United States)

    Xu, Gen; Li, Ruihu; Fu, Qiang; Ma, Yuena; Guo, Luobin

    2017-03-01

    This paper concentrates on the construction of new nonbinary quantum error-correcting codes (QECCs) from three classes of narrow-sense imprimitive BCH codes over the finite field 𝔽q² (q ≥ 3 an odd prime power). By a careful analysis of the properties of cyclotomic cosets in the defining set T of these BCH codes, the improved maximal designed distance of these narrow-sense imprimitive Hermitian dual-containing BCH codes is determined to be much larger than the result given by Aly et al. [S. A. Aly, A. Klappenecker and P. K. Sarvepalli, IEEE Trans. Inf. Theory 53, 1183 (2007)] for each different code length. Thus, families of new nonbinary QECCs are constructed, and the newly obtained QECCs have larger distance than those in the previous literature.

  19. Revisiting random walk based sampling in networks: evasion of burn-in period and frequent regenerations.

    Science.gov (United States)

    Avrachenkov, Konstantin; Borkar, Vivek S; Kadavankandy, Arun; Sreedharan, Jithin K

    2018-01-01

    In the framework of network sampling, random walk (RW) based estimation techniques provide many pragmatic solutions while uncovering the unknown network as little as possible. Despite several theoretical advances in this area, RW based sampling techniques usually make a strong assumption that the samples are in stationary regime, and hence are impelled to leave out the samples collected during the burn-in period. This work proposes two sampling schemes without burn-in time constraint to estimate the average of an arbitrary function defined on the network nodes, for example, the average age of users in a social network. The central idea of the algorithms lies in exploiting regeneration of RWs at revisits to an aggregated super-node or to a set of nodes, and in strategies to enhance the frequency of such regenerations either by contracting the graph or by making the hitting set larger. Our first algorithm, which is based on reinforcement learning (RL), uses stochastic approximation to derive an estimator. This method can be seen as intermediate between purely stochastic Markov chain Monte Carlo iterations and deterministic relative value iterations. The second algorithm, which we call the Ratio with Tours (RT)-estimator, is a modified form of respondent-driven sampling (RDS) that accommodates the idea of regeneration. We study the methods via simulations on real networks. We observe that the trajectories of the RL-estimator are much more stable than those of standard random walk based estimation procedures, and its error performance is comparable to that of respondent-driven sampling (RDS), which has a smaller asymptotic variance than many other estimators. Simulation studies also show that the mean squared error of the RT-estimator decays much faster than that of RDS with time. The newly developed RW based estimators (RL- and RT-estimators) avoid the burn-in period, provide better control of stability along the sample path, and overall reduce the estimation time.
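    The regeneration idea can be sketched with a toy tour-based estimator in the spirit of the RT-estimator: tours between successive visits to an anchor node are i.i.d., so by the renewal-reward theorem a ratio of tour sums consistently estimates the node average of f with no burn-in. The graph, the attribute f, and all sizes below are invented, not the paper's:

```python
import random
from collections import defaultdict

def ratio_with_tours(adj, anchor, num_tours, rng):
    """Toy regenerative RW estimator of the average of f over the nodes of an
    undirected graph. The RW's stationary weight of node v is proportional to
    its degree d(v), so weighting each visit by 1/d(v) recovers the uniform
    node average as a ratio of tour sums."""
    f = lambda v: v % 10          # hypothetical node attribute, e.g. "age"
    num = den = 0.0
    v, tours = anchor, 0
    while tours < num_tours:
        num += f(v) / len(adj[v])
        den += 1.0 / len(adj[v])
        v = rng.choice(adj[v])
        if v == anchor:           # regeneration: one complete tour finished
            tours += 1
    return num / den

# Small connected toy graph: a cycle plus random chords
rng = random.Random(7)
n = 400
adj = defaultdict(list)
def add_edge(a, b):
    adj[a].append(b); adj[b].append(a)
for i in range(n):
    add_edge(i, (i + 1) % n)
for _ in range(300):
    a, b = rng.randrange(n), rng.randrange(n)
    if a != b and b not in adj[a]:
        add_edge(a, b)

est = ratio_with_tours(adj, anchor=0, num_tours=1000, rng=rng)
true = sum(v % 10 for v in range(n)) / n
print(f"estimate {est:.2f} vs true {true:.2f}")
```

    No samples are discarded: every step of every tour contributes, which is the practical payoff of avoiding the burn-in period.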

  20. A neural algorithm for the non-uniform and adaptive sampling of biomedical data.

    Science.gov (United States)

    Mesin, Luca

    2016-04-01

    Body sensors are finding increasing applications in self-monitoring for health-care and in the remote surveillance of sensitive people. The physiological data to be sampled can be non-stationary, with bursts of high amplitude and frequency content providing most information. Such data could be sampled efficiently with a non-uniform schedule that increases the sampling rate only during activity bursts. A real-time, adaptive algorithm is proposed to select the sampling rate, in order to reduce the number of measured samples while still recording the main information. The algorithm is based on a neural network which predicts the subsequent samples and their uncertainties, requiring a measurement only when the risk of the prediction is larger than a selectable threshold. Four examples of application to biomedical data are discussed: electromyogram, electrocardiogram, electroencephalogram, and body acceleration. Sampling rates are reduced below the Nyquist limit while still preserving an accurate representation of the data and of their power spectral densities (PSD). For example, sampling at 60% of the Nyquist frequency, the percentage average rectified errors in estimating the signals are on the order of 10% and the PSD is fairly represented, up to the highest frequencies. The method outperforms both uniform sampling and compressive sensing applied to the same data. The discussed method makes it possible to go below the Nyquist limit while still preserving the information content of non-stationary biomedical signals. It could find applications in body sensor networks to lower the number of wireless communications (saving sensor power) and to reduce memory usage. Copyright © 2016 Elsevier Ltd. All rights reserved.
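    The measure-only-when-the-prediction-is-risky idea can be sketched on a synthetic bursty signal. The paper's predictor is a neural network; here a simple linear extrapolator stands in (an assumption made purely for brevity), and the threshold, signal, and rates are all invented:

```python
import numpy as np

# Bursty test signal: slow baseline with a high-frequency burst in the middle
fs = 1000                           # "Nyquist-rate" uniform sampling, Hz
t = np.arange(0, 2.0, 1 / fs)
x = 0.2 * np.sin(2 * np.pi * 1 * t)
burst = (t > 0.8) & (t < 1.2)
x[burst] += np.sin(2 * np.pi * 80 * t[burst])

# Adaptive scheme (stand-in for the neural predictor): keep a sample only when
# linear extrapolation from the last two kept samples misses it by more than a
# threshold, so the effective rate rises only inside the burst.
thr = 0.05
kept_t, kept_x = [t[0], t[1]], [x[0], x[1]]
for ti, xi in zip(t[2:], x[2:]):
    (t1, t2), (x1, x2) = kept_t[-2:], kept_x[-2:]
    pred = x2 + (x2 - x1) / (t2 - t1) * (ti - t2)
    if abs(pred - xi) > thr:
        kept_t.append(ti)
        kept_x.append(xi)

recon = np.interp(t, kept_t, kept_x)          # reconstruct from kept samples
rate = len(kept_t) / len(t)
err = np.mean(np.abs(recon - x)) / np.mean(np.abs(x))
print(f"kept {rate:.0%} of samples, average rectified error {err:.1%}")
```

    Outside the burst the extrapolation stays accurate for long stretches, so few samples are kept; inside the burst nearly every sample is kept, mimicking the activity-driven rate adaptation described above.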

  1. HIPAA is larger and more complex than Y2K.

    Science.gov (United States)

    Tempesco, J W

    2000-07-01

    The Health Insurance Portability and Accountability Act of 1996 (HIPAA) is a larger and more complex problem than Y2K ever was. According to the author, the costs associated with a project of such unending scope and in support of intrusion into both information and operational systems of every health care transaction will be incalculable. Some estimate that the administrative simplification policies implemented through HIPAA will save billions of dollars annually, but it remains to be seen whether the savings will outweigh implementation and ongoing expenses associated with systemwide application of the regulations. This article addresses the rules established for electronic data interchange, data set standards for diagnostic and procedure codes, unique identifiers, coordination of benefits, privacy of individual health care information, electronic signatures, and security requirements.

  2. Designing key-dependent chaotic S-box with larger key space

    International Nuclear Information System (INIS)

    Yin Ruming; Yuan Jian; Wang Jian; Shan Xiuming; Wang Xiqin

    2009-01-01

    The construction of cryptographically strong substitution boxes (S-boxes) is an important concern in designing secure cryptosystems. Key-dependent S-boxes designed using chaotic maps have received increasing attention in recent years. However, the key space of such S-boxes does not seem to be sufficiently large due to the limited parameter range of discretized chaotic maps. In this paper, we propose a new key-dependent S-box based on the iteration of continuous chaotic maps. We explore the continuous-valued state space of chaotic systems, and devise the discrete mapping between the input and the output of the S-box. A key-dependent S-box is constructed with the logistic map in this paper. We show that its key space could be much larger than that of current key-dependent chaotic S-boxes.
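    A textbook-style illustration of the general idea (not the paper's scheme, and not cryptographically vetted): iterate the logistic map from a key-derived seed, then rank-order a block of trajectory values into a permutation of 0..255, which yields a bijective 8-bit S-box for every admissible key value:

```python
import numpy as np

def logistic_sbox(key_x0, r=3.99, skip=1000):
    """Toy key-dependent 8-bit S-box: iterate the logistic map x -> r*x*(1-x)
    from a key-derived seed in (0, 1), then rank-order 256 trajectory values
    into a permutation of 0..255. Illustration only."""
    x = key_x0
    for _ in range(skip):              # discard the transient
        x = r * x * (1 - x)
    traj = np.empty(256)
    for i in range(256):
        x = r * x * (1 - x)
        traj[i] = x
    return np.argsort(traj)            # rank ordering -> bijection on 0..255

sbox = logistic_sbox(0.123456789)      # hypothetical key material
print(sbox[:8])
```

    Because the seed is continuous-valued, the usable key space is limited only by floating-point resolution and the map's sensitivity, which is the intuition behind the larger key space claimed above.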

  3. Time and Money - Are they Substitutes?

    DEFF Research Database (Denmark)

    Bonke, Jens; Deding, Mette; Lausten, Mette

    In this paper, we analyse the distribution of time and money for Danish wage earner couples, where time is defined as leisure time and money as extended income, i.e. the sum of disposable income and the value of housework. The hypothesis is that individuals being rich in one dimension are more likely to be poor in the other dimension, such that individuals can be classified as either money-poor/time-rich or money-rich/time-poor. We analyse two different distributions of income, where the first assumes no sharing and the second complete sharing of income between spouses. The data are from the Danish Time-Use Survey 2001, merged with register data. Results show that the substitution of money for time is more prominent for women than for men, because they have a larger income share of the time-intensive value of housework, while men have the larger share of disposable income. Furthermore, when...

  4. Comparison of the Abbott RealTime High Risk HPV test and the Roche cobas 4800 HPV test using urine samples.

    Science.gov (United States)

    Lim, Myong Cheol; Lee, Do-Hoon; Hwang, Sang-Hyun; Hwang, Na Rae; Lee, Bomyee; Shin, Hye Young; Jun, Jae Kwan; Yoo, Chong Woo; Lee, Dong Ock; Seo, Sang-Soo; Park, Sang-Yoon; Joo, Jungnam

    2017-05-01

    Human papillomavirus (HPV) testing based on cervical samples is important for use in cervical cancer screening. However, cervical sampling is invasive. Therefore, non-invasive methods for detecting HPV, such as urine samples, are needed. For HPV detection in urine samples, two real-time PCR (RQ-PCR) tests, the Roche cobas 4800 test (Roche_HPV; Roche Molecular Diagnostics) and the Abbott RealTime High Risk HPV test (Abbott_HPV; Abbott Laboratories), were compared to standard cervical samples. The performance of Roche_HPV and Abbott_HPV for HPV detection was evaluated at the National Cancer Center using 100 paired cervical and urine samples. The tests were also compared using urine samples stored at various temperatures and for a range of durations. The overall agreement between the Roche_HPV and Abbott_HPV tests using urine samples for any hrHPV type was substantial (86.0% with a kappa value of 0.7173), and that for HPV 16/18 was nearly perfect (99.0% with a kappa value of 0.9668). The relative sensitivities (based on cervical samples) for HPV 16/18 detection using Roche_HPV and Abbott_HPV with urine samples were 79.2% (95% CI; 57.9-92.9%) and 81.8% (95% CI; 59.7-94.8%), respectively. When the cut-off Ct value for Abbott_HPV was extended to 40 for urine samples, the relative sensitivity of Abbott_HPV increased from 81.8% to 91.7% for HPV 16/18 detection and from 68.5% to 87.0% for other hrHPV detection. The specificity was not affected by the change in the Ct threshold. Roche_HPV and Abbott_HPV showed high concordance. However, HPV DNA detection using urine samples was inferior to HPV DNA detection using cervical samples. Interestingly, when the cut-off Ct value was set to 40, Abbott_HPV using urine samples showed high sensitivity and specificity, comparable to those obtained using cervical samples.
Fully automated DNA extraction and detection systems, such as Roche_HPV and Abbott_HPV, could reduce the variability in HPV detection and accelerate the standardization of HPV
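    The agreement figures quoted above (e.g. 86.0% with kappa = 0.7173) are Cohen's kappa values, which correct observed agreement for the agreement expected by chance. A minimal sketch; the 2x2 table below is hypothetical, chosen only to give 86% observed agreement, and is not the study's data:

```python
def cohens_kappa(table):
    """Cohen's kappa for a 2x2 agreement table.
    table[i][j] = count where test A gave result i and test B gave j (0=neg, 1=pos)."""
    n = sum(sum(row) for row in table)
    po = (table[0][0] + table[1][1]) / n                      # observed agreement
    pa = [(table[i][0] + table[i][1]) / n for i in (0, 1)]    # test A marginals
    pb = [(table[0][j] + table[1][j]) / n for j in (0, 1)]    # test B marginals
    pe = pa[0] * pb[0] + pa[1] * pb[1]                        # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical 100-sample table with 86% observed agreement
table = [[52, 6], [8, 34]]
kappa = cohens_kappa(table)
print(f"observed agreement {(52 + 34) / 100:.0%}, kappa = {kappa:.3f}")
```

    The same observed agreement can yield quite different kappa values depending on the marginals, which is why the abstract reports both numbers.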

  5. Flux Dynamics and Time Effects in a Carved out Superconducting Polycrystalline Bi-Sr-Ca-Cu-O Sample

    Energy Technology Data Exchange (ETDEWEB)

    Olutas, M [Abant Izzet Baysal University, Department of Physics, Turgut Gulez Research Laboratory 14280 Bolu (Turkey); Yetis, H [Abant Izzet Baysal University, Department of Physics, Turgut Gulez Research Laboratory 14280 Bolu (Turkey); Altinkok, A [Abant Izzet Baysal University, Department of Physics, Turgut Gulez Research Laboratory 14280 Bolu (Turkey); Soezeri, H [National Metrology Institute TUBITAK PO Box 21, 41470, Gebze-Kocaeli (Turkey); Kilic, K [Abant Izzet Baysal University, Department of Physics, Turgut Gulez Research Laboratory 14280 Bolu (Turkey); Kilic, A [Abant Izzet Baysal University, Department of Physics, Turgut Gulez Research Laboratory 14280 Bolu (Turkey); Cetin, O [Abant Izzet Baysal University, Department of Physics, Turgut Gulez Research Laboratory 14280 Bolu (Turkey)

    2006-06-01

    Systematic slow transport relaxation (V-t curves) and magnetovoltage measurements (V-H curves) have been carried out in a carved out superconducting polycrystalline Bi-Sr-Ca-Cu-O sample as a function of current (I), temperature (T), and external field (H). The V-t curves reveal the details of the time evolution of the penetrated state within the granular structure of the sample and also give direct evidence of the relaxation of the flux trapped inside the drilled hole on the time scale of the experiment. On the other hand, V-H curves exhibit several unusual interesting properties upon cycling of the external magnetic field in forward and reverse directions, and, in addition to irreversibilities, strong reversible effects are observed, which are associated with the trapping of the macroscopic flux bundles in the drilled hole. It is also observed that the field sweep rate dramatically influences the reversible and irreversible behavior of the V-H curves. The experimental results were mainly interpreted in terms of current- and field-induced organization of the vortices.

  6. Flux Dynamics and Time Effects in a Carved out Superconducting Polycrystalline Bi-Sr-Ca-Cu-O Sample

    International Nuclear Information System (INIS)

    Olutas, M; Yetis, H; Altinkok, A; Soezeri, H; Kilic, K; Kilic, A; Cetin, O

    2006-01-01

    Systematic slow transport relaxation (V-t curves) and magnetovoltage measurements (V-H curves) have been carried out in a carved out superconducting polycrystalline Bi-Sr-Ca-Cu-O sample as a function of current (I), temperature (T), and external field (H). The V-t curves reveal the details of the time evolution of the penetrated state within the granular structure of the sample and also give direct evidence of the relaxation of the flux trapped inside the drilled hole on the time scale of the experiment. On the other hand, V-H curves exhibit several unusual interesting properties upon cycling of the external magnetic field in forward and reverse directions, and, in addition to irreversibilities, strong reversible effects are observed, which are associated with the trapping of the macroscopic flux bundles in the drilled hole. It is also observed that the field sweep rate dramatically influences the reversible and irreversible behavior of the V-H curves. The experimental results were mainly interpreted in terms of current- and field-induced organization of the vortices.

  7. Rapid detection of Opisthorchis viverrini and Strongyloides stercoralis in human fecal samples using a duplex real-time PCR and melting curve analysis.

    Science.gov (United States)

    Janwan, Penchom; Intapan, Pewpan M; Thanchomnang, Tongjit; Lulitanond, Viraphong; Anamnart, Witthaya; Maleewong, Wanchai

    2011-12-01

    Human opisthorchiasis caused by the liver fluke Opisthorchis viverrini is an endemic disease in Southeast Asian countries including the Lao People's Democratic Republic, Cambodia, Vietnam, and Thailand. Infection with the soil-transmitted roundworm Strongyloides stercoralis is an important problem worldwide. In some areas, both parasitic infections are reported as co-infections. A duplex real-time fluorescence resonance energy transfer (FRET) PCR combined with melting curve analysis was developed for the rapid detection of O. viverrini and S. stercoralis in human fecal samples. The duplex real-time FRET PCR is based on fluorescence melting curve analysis of a hybrid of amplicons generated from two genus-specific DNA targets: the 162 bp pOV-A6 DNA sequence specific to O. viverrini and the 244 bp 18S rRNA sequence specific to S. stercoralis, and two pairs of specific fluorophore-labeled probes. Both O. viverrini and S. stercoralis can be differentially detected in infected human fecal samples by this process through their different fluorescence channels and melting temperatures. The detection limit of the method was as little as two O. viverrini eggs and four S. stercoralis larvae in 100 mg of fecal sample. The assay could distinguish the DNA of both parasites from the DNA of negative fecal samples and fecal samples with other parasite materials, as well as from the DNA of human leukocytes and other control parasites. The technique showed 100% sensitivity and specificity. The introduced duplex real-time FRET PCR can reduce labor time and reagent costs and is not prone to carry-over contamination. The method is important for simultaneous detection, especially in areas where the distributions of both parasites overlap, and is useful as a screening tool for returning travelers and immigrants to industrialized countries, where the number of samples in diagnostic units is increasing.

  8. The Power of Low Back Pain Trials: A Systematic Review of Power, Sample Size, and Reporting of Sample Size Calculations Over Time, in Trials Published Between 1980 and 2012.

    Science.gov (United States)

    Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin

    2017-06-01

    A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one within which the probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore the ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or the reporting of sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P ...). The power of low back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of Evidence: 3.
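    The review's benchmark, being powered to detect a given standardized mean difference (SMD), follows from the standard two-arm formula n = 2(z_{1-α/2} + z_{1-β})² / d² per arm. A quick normal-approximation sketch (the exact t-test values are slightly larger, about 64 per arm for d = 0.5):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(smd, power=0.80, alpha=0.05):
    """Normal-approximation sample size per arm for a two-arm trial to detect
    a standardized mean difference `smd` with a two-sided test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return ceil(2 * (z_a + z_b) ** 2 / smd ** 2)

for smd in (0.3, 0.5, 0.8):
    print(f"SMD {smd}: {n_per_arm(smd)} per arm, {2 * n_per_arm(smd)} total")
```

    Against these numbers, the review's average trial size of 153 people (about 76 per arm) is consistent with detecting only moderate-to-large effects.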

  9. Robust nonhomogeneous training samples detection method for space-time adaptive processing radar using sparse-recovery with knowledge-aided

    Science.gov (United States)

    Li, Zhihui; Liu, Hanwei; Zhang, Yongshun; Guo, Yiduo

    2017-10-01

    The performance of space-time adaptive processing (STAP) may degrade significantly when some of the training samples are contaminated by signal-like components (outliers) in nonhomogeneous clutter environments. To remove the training samples contaminated by outliers in nonhomogeneous clutter environments, a robust nonhomogeneous training samples detection method using sparse recovery (SR) with knowledge aiding (KA) is proposed. First, the reduced-dimension (RD) overcomplete spatial-temporal steering dictionary is designed with prior knowledge of the system parameters and the possible target region. Second, the clutter covariance matrix (CCM) of the cell under test is efficiently estimated using a modified focal underdetermined system solver (FOCUSS) algorithm, where the RD overcomplete spatial-temporal steering dictionary is applied. Third, the proposed statistics are formed by combining the estimated CCM with the generalized inner product (GIP) method, and the contaminated training samples can be detected and removed. Finally, several simulation results validate the effectiveness of the proposed KA-SR-GIP method.
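    The GIP screening step can be sketched in NumPy. This shows only the generic GIP statistic x_k^H R^{-1} x_k with a plain sample-covariance estimate, not the paper's knowledge-aided sparse-recovery CCM; all dimensions, the steering vector, outlier positions, and the threshold are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 8, 64                      # DoF (space-time channels), training samples

# Homogeneous clutter: zero-mean complex Gaussian with covariance R
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
R = A @ A.conj().T / N + np.eye(N)
L = np.linalg.cholesky(R)
X = L @ (rng.normal(size=(N, K)) + 1j * rng.normal(size=(N, K))) / np.sqrt(2)

# Contaminate a few training samples with a strong signal-like component
steer = np.exp(1j * np.pi * np.arange(N) * 0.3)   # hypothetical steering vector
outliers = [5, 20, 40]
X[:, outliers] += 6.0 * steer[:, None]

# Generalized inner product: gip_k = x_k^H R_hat^{-1} x_k, with R_hat the
# sample covariance of all training data
R_hat = X @ X.conj().T / K
Rinv = np.linalg.inv(R_hat)
gip = np.real(np.einsum('ik,ij,jk->k', X.conj(), Rinv, X))

# Flag samples whose GIP is far above the typical (median) level
flagged = np.where(gip > 2.0 * np.median(gip))[0]
print("flagged training samples:", flagged)
```

    Samples carrying a signal-like component are mismatched to the clutter covariance, so their GIP values stand out above the homogeneous background; the flagged samples are then excluded from CCM estimation.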

  10. Effect of Different Sampling Schedules on Results of Bioavailability and Bioequivalence Studies: Evaluation by Means of Monte Carlo Simulations.

    Science.gov (United States)

    Kano, Eunice Kazue; Chiann, Chang; Fukuda, Kazuo; Porta, Valentina

    2017-08-01

    Bioavailability and bioequivalence studies are among the most frequently performed investigations in clinical trials. Bioequivalence testing is based on the assumption that 2 drug products will be therapeutically equivalent when they are equivalent in the rate and extent to which the active drug ingredient or therapeutic moiety is absorbed and becomes available at the site of drug action. In recent years there has been a significant growth in published papers that use in silico studies based on mathematical simulations to analyze pharmacokinetic and pharmacodynamic properties of drugs, including bioavailability and bioequivalence aspects. The goal of this study is to evaluate the usefulness of in silico studies as a tool in the planning of bioequivalence, bioavailability and other pharmacokinetic assays, e.g., to determine an appropriate sampling schedule. Monte Carlo simulations were used to define adequate blood sampling schedules for a bioequivalence assay comparing 2 different formulations of cefadroxil oral suspensions. In silico bioequivalence studies comparing different formulations of cefadroxil oral suspensions using various sampling schedules were performed using these models. An in vivo study was conducted to confirm the in silico results. The results of the in silico and in vivo bioequivalence studies demonstrated that schedules with fewer sampling times are as efficient as schedules with larger numbers of sampling times in the assessment of bioequivalence, but only if Tmax is included as a sampling time. It was also concluded that in silico studies are useful tools in the planning of bioequivalence, bioavailability and other pharmacokinetic in vivo assays. © Georg Thieme Verlag KG Stuttgart · New York.
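    The importance of sampling near Tmax can be illustrated with a one-compartment oral-absorption model. All parameters and schedules below are invented for illustration, not the study's cefadroxil values:

```python
import numpy as np

ka, ke, D_V = 1.5, 0.2, 10.0   # hypothetical absorption/elimination rates (1/h), dose/volume

def conc(t):
    """One-compartment oral-absorption model (illustrative parameters only)."""
    return D_V * ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))

def metrics(times):
    t = np.asarray(times, float)
    c = conc(t)
    auc = float(np.sum((c[1:] + c[:-1]) / 2.0 * np.diff(t)))   # trapezoidal AUC
    return auc, float(c.max())                                 # AUC, observed Cmax

dense = np.arange(0, 24.01, 0.05)                              # near-continuous reference
with_tmax = [0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 8.0, 12.0, 18.0, 24.0]
no_tmax = [0, 0.25, 4.0, 6.0, 8.0, 12.0, 18.0, 24.0]           # nothing near Tmax ~ 1.55 h

auc_ref, cmax_ref = metrics(dense)
for name, sched in [("with Tmax", with_tmax), ("no Tmax", no_tmax)]:
    auc, cmax = metrics(sched)
    print(f"{name:9s}: AUC {auc / auc_ref:.1%} of reference, Cmax {cmax / cmax_ref:.1%} of reference")
```

    The sparse schedule that brackets Tmax recovers the observed Cmax almost exactly, while the schedule with no sample near the peak misses a large fraction of it, mirroring the study's conclusion about the placement rather than the number of sampling times.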

  11. Sampling of ore

    International Nuclear Information System (INIS)

    Boehme, R.C.; Nicholas, B.L.

    1987-01-01

    This invention relates to a method of and an apparatus for ore sampling. The method includes the steps of periodically removing a sample of the output material of a sorting machine, weighing each sample so that each is of the same weight, measuring a characteristic such as the radioactivity, magnetivity or the like of each sample, subjecting at least an equal portion of each sample to chemical analysis to determine the mineral content of the sample, and comparing the characteristic measurement with the desired mineral content of the chemically analysed portion of the sample to determine the characteristic/mineral ratio of the sample. The apparatus includes an ore sample collector, a deflector for deflecting a sample of ore particles from the output of an ore sorter into the collector, and means for moving the deflector from a first position, in which it is clear of the particle path from the sorter, to a second position, in which it is in the particle path, at predetermined time intervals and for predetermined time periods to deflect the sample particles into the collector. The apparatus conveniently includes an ore crusher for comminuting the sample particles, a sample hopper, means for weighing the hopper, a detector in the hopper for measuring a characteristic such as radioactivity, magnetivity or the like of particles in the hopper, a discharge outlet from the hopper, and means for feeding the particles from the collector to the crusher and then to the hopper.

  12. Clinical evaluation of the Abbott RealTime MTB Assay for direct detection of Mycobacterium tuberculosis-complex from respiratory and non-respiratory samples.

    Science.gov (United States)

    Hinić, Vladimira; Feuz, Kinga; Turan, Selda; Berini, Andrea; Frei, Reno; Pfeifer, Karin; Goldenberger, Daniel

    2017-05-01

    Rapid and reliable diagnosis is crucial for correct management of tuberculosis. The Abbott RealTime MTB Assay represents a novel qualitative real-time PCR assay for direct detection of M. tuberculosis-complex (MTB) DNA from respiratory samples. The test targets two highly conserved sequences, the multi-copy insertion element IS6110 and the protein antigen B (PAB) gene of MTB, allowing even the detection of IS6110-deficient strains. We evaluated this commercial diagnostic test by analyzing 200 respiratory and, for the first time, 87 non-respiratory clinical specimens from our tertiary care institution and compared its results to our IS6110-based in-house real-time PCR for MTB as well as MTB culture. Overall sensitivity for Abbott RealTime MTB was 100% (19/19) in smear positive and 87.5% (7/8) in smear negative specimens, while the specificity of the assay was 100% (260/260). For both non-respiratory smear positive and smear negative specimens Abbott RealTime MTB tests showed 100% (8/8) sensitivity and 100% (8/8) specificity. Cycle threshold (Ct) value analysis of 16 MTB positive samples showed a slightly higher Ct value of the Abbott RealTime MTB test compared to our in-house MTB assay (mean delta Ct = 2.55). In conclusion, the performance of the new Abbott RealTime MTB Assay was highly similar to culture and in-house MTB PCR. We document successful analysis of 87 non-respiratory samples with the highly automated Abbott RealTime MTB test with no inhibition observed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. A method for the estimation of the significance of cross-correlations in unevenly sampled red-noise time series

    Science.gov (United States)

    Max-Moerbeck, W.; Richards, J. L.; Hovatta, T.; Pavlidou, V.; Pearson, T. J.; Readhead, A. C. S.

    2014-11-01

    We present a practical implementation of a Monte Carlo method to estimate the significance of cross-correlations in unevenly sampled time series of data, whose statistical properties are modelled with a simple power-law power spectral density. This implementation builds on published methods; we introduce a number of improvements in the normalization of the cross-correlation function estimate and a bootstrap method for estimating the significance of the cross-correlations. A closely related matter is the estimation of a model for the light curves, which is critical for the significance estimates. We present a graphical and quantitative demonstration that uses simulations to show how common it is to get high cross-correlations for unrelated light curves with steep power spectral densities. This demonstration highlights the dangers of interpreting them as signs of a physical connection. We show that by using interpolation and the Hanning sampling window function we are able to reduce the effects of red-noise leakage and to recover steep simple power-law power spectral densities. We also introduce the use of a Neyman construction for the estimation of the errors in the power-law index of the power spectral density. This method provides a consistent way to estimate the significance of cross-correlations in unevenly sampled time series of data.
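    The core danger the paper demonstrates, high spurious cross-correlations between unrelated red-noise light curves, is easy to reproduce. A simplified sketch (evenly sampled for brevity, whereas the paper handles uneven sampling; the power-law index and sizes are arbitrary) using Timmer & Koenig-style simulation of a power-law power spectral density:

```python
import numpy as np

rng = np.random.default_rng(3)

def powerlaw_lightcurve(n, beta, rng):
    """Simulate a red-noise series with PSD ~ f^(-beta): give each Fourier
    frequency a power-law amplitude and random Gaussian real/imaginary parts
    (the Timmer & Koenig 1995 recipe), then invert the FFT and normalize."""
    freqs = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2)           # zero DC component
    spec = rng.normal(size=freqs.size) * amp + 1j * rng.normal(size=freqs.size) * amp
    x = np.fft.irfft(spec, n)
    return (x - x.mean()) / x.std()

# Peak cross-correlation of many pairs of *independent* steep red-noise curves
peaks = []
for _ in range(200):
    a = powerlaw_lightcurve(256, beta=2.5, rng=rng)
    b = powerlaw_lightcurve(256, beta=2.5, rng=rng)
    cc = np.correlate(a, b, mode='full') / len(a)
    peaks.append(np.abs(cc).max())
print(f"median spurious peak correlation: {np.median(peaks):.2f}")
```

    Independent steep-spectrum series routinely produce large peak correlations, which is why a Monte Carlo null distribution of this kind, rather than a nominal correlation threshold, is needed to assess significance.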

  14. The effects of forest conversion to oil palm on ground-foraging ant communities depend on beta diversity and sampling grain.

    Science.gov (United States)

    Wang, Wendy Y; Foster, William A

    2015-08-01

    Beta diversity - the variation in species composition among spatially discrete communities - and sampling grain - the size of samples being compared - may alter our perspectives of diversity within and between landscapes before and after agricultural conversion. Such assumptions are usually based on point comparisons, which do not accurately capture actual differences in total diversity. Beta diversity is often not rigorously examined. We investigated the beta diversity of ground-foraging ant communities in fragmented oil palm and forest landscapes in Sabah, Malaysia, using diversity metrics transformed from Hill number equivalents to remove dependences on alpha diversity. We compared the beta diversities of oil palm and forest, across three hierarchically nested sampling grains. We found that oil palm and forest communities had a greater percentage of total shared species when larger samples were compared. Across all grains and disregarding relative abundances, there was higher beta diversity of all species among forest communities. However, there were higher beta diversities of common and very abundant (dominant) species in oil palm as compared to forests. Differences in beta diversities between oil palm and forest were greatest at the largest sampling grain. Larger sampling grains in oil palm may generate bigger species pools, increasing the probability of shared species with forest samples. Greater beta diversity of all species in forest may be attributed to rare species. Oil palm communities may be more heterogeneous in common and dominant species because of variable community assembly events. Rare and also common species are better captured at larger grains, boosting differences in beta diversity between larger samples of forest and oil palm communities. Although agricultural landscapes support a lower total diversity than natural forests, diversity especially of abundant species is still important for maintaining ecosystem stability.
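    Hill-number-based multiplicative partitioning (beta = gamma / alpha, in the style of Jost's equal-weight decomposition, which is one common choice among the transformations the study alludes to) can be sketched as follows; the two communities below are invented, sharing 3 of 7 species:

```python
import numpy as np

def hill(p, q):
    """Hill number (effective number of species) of order q for relative abundances."""
    p = np.asarray([x for x in p if x > 0], float)
    if q == 1:
        return float(np.exp(-np.sum(p * np.log(p))))   # limit case: exp(Shannon)
    return float(np.sum(p ** q) ** (1.0 / (1.0 - q)))

def beta_hill(counts, q):
    """Multiplicative partition for equally weighted communities: beta = gamma/alpha,
    ranging from 1 (identical composition) to N (no shared species)."""
    counts = np.asarray(counts, float)
    rel = counts / counts.sum(axis=1, keepdims=True)
    gamma = hill(counts.sum(axis=0) / counts.sum(), q)
    if q == 1:
        alpha = float(np.exp(np.mean([np.log(hill(r, 1)) for r in rel])))
    else:
        alpha = float(np.mean([np.sum(r[r > 0] ** q) for r in rel]) ** (1.0 / (1.0 - q)))
    return gamma / alpha

# Two hypothetical communities sharing 3 of 7 species
c1 = [30, 25, 20, 15, 10, 0, 0]
c2 = [0, 0, 20, 15, 10, 30, 25]
for q in (0, 1, 2):
    print(f"q={q}: beta = {beta_hill([c1, c2], q):.3f}")
```

    Varying the order q shifts the weight from rare species (q = 0, richness) toward dominant ones (q = 2), which is how statements like "higher beta diversity of all species in forest but of dominant species in oil palm" can both hold on the same data.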

  15. Emotional event-related potentials are larger to figures than scenes but are similarly reduced by inattention

    Directory of Open Access Journals (Sweden)

    Nordström Henrik

    2012-05-01

Full Text Available Abstract Background In research on event-related potentials (ERP) to emotional pictures, greater attention to emotional than neutral stimuli (i.e., motivated attention) is commonly indexed by two difference waves between emotional and neutral stimuli: the early posterior negativity (EPN) and the late positive potential (LPP). Evidence suggests that if attention is directed away from the pictures, then the emotional effects on EPN and LPP are eliminated. However, a few studies have found residual, emotional effects on EPN and LPP. In these studies, pictures were shown at fixation, and picture composition was that of simple figures rather than that of complex scenes. Because figures elicit larger LPP than do scenes, figures might capture and hold attention more strongly than do scenes. Here, we showed negative and neutral pictures of figures and scenes and tested first, whether emotional effects are larger to figures than scenes for both EPN and LPP, and second, whether emotional effects on EPN and LPP are reduced less for unattended figures than scenes. Results Emotional effects on EPN and LPP were larger for figures than scenes. When pictures were unattended, emotional effects on EPN increased for scenes but tended to decrease for figures, whereas emotional effects on LPP decreased similarly for figures and scenes. Conclusions Emotional effects on EPN and LPP were larger for figures than scenes, but these effects did not resist manipulations of attention more strongly for figures than scenes. These findings imply that the emotional content captures attention more strongly for figures than scenes, but that the emotional content does not hold attention more strongly for figures than scenes.

  16. Emotional event-related potentials are larger to figures than scenes but are similarly reduced by inattention

    Science.gov (United States)

    2012-01-01

    Background In research on event-related potentials (ERP) to emotional pictures, greater attention to emotional than neutral stimuli (i.e., motivated attention) is commonly indexed by two difference waves between emotional and neutral stimuli: the early posterior negativity (EPN) and the late positive potential (LPP). Evidence suggests that if attention is directed away from the pictures, then the emotional effects on EPN and LPP are eliminated. However, a few studies have found residual, emotional effects on EPN and LPP. In these studies, pictures were shown at fixation, and picture composition was that of simple figures rather than that of complex scenes. Because figures elicit larger LPP than do scenes, figures might capture and hold attention more strongly than do scenes. Here, we showed negative and neutral pictures of figures and scenes and tested first, whether emotional effects are larger to figures than scenes for both EPN and LPP, and second, whether emotional effects on EPN and LPP are reduced less for unattended figures than scenes. Results Emotional effects on EPN and LPP were larger for figures than scenes. When pictures were unattended, emotional effects on EPN increased for scenes but tended to decrease for figures, whereas emotional effects on LPP decreased similarly for figures and scenes. Conclusions Emotional effects on EPN and LPP were larger for figures than scenes, but these effects did not resist manipulations of attention more strongly for figures than scenes. These findings imply that the emotional content captures attention more strongly for figures than scenes, but that the emotional content does not hold attention more strongly for figures than scenes. PMID:22607397

  17. Relative sensitivity of conventional and real-time PCR assays for detection of SFG Rickettsia in blood and tissue samples from laboratory animals.

    Science.gov (United States)

    Zemtsova, Galina E; Montgomery, Merrill; Levin, Michael L

    2015-01-01

Studies on the natural transmission cycles of zoonotic pathogens and the reservoir competence of vertebrate hosts require methods for reliable diagnosis of infection in wild and laboratory animals. Several PCR-based applications have been developed for detection of infections caused by Spotted Fever group Rickettsia spp. in a variety of animal tissues. These assays are being widely used by researchers, but they differ in their sensitivity and reliability. We compared the sensitivity of five previously published conventional PCR assays and one SYBR green-based real-time PCR assay for the detection of rickettsial DNA in blood and tissue samples from Rickettsia-infected laboratory animals (n = 87). The real-time PCR, which detected rickettsial DNA in 37.9% of samples, was the most sensitive. The next best were the semi-nested ompA assay and the rpoB conventional PCR, which detected 18.4% and 14.9% of samples as positive, respectively. Conventional assays targeting the ompB, gltA and hrtA genes were the least sensitive. Therefore, we recommend the SYBR green-based real-time PCR as a tool for the detection of rickettsial DNA in animal samples due to its higher sensitivity compared with more traditional assays.

  18. The transabdominal chorionic villus sampling puncture guided by color Doppler ultrasound during early pregnancy

    International Nuclear Information System (INIS)

    Liang Weixiang; Chen Zhiyi; Yuan Wenlin; Cai Kuan; Zhu Junlin; Wang Weiqun; Chen Xia

    2008-01-01

Objective: To study the operation of chorionic villus sampling (CVS) guided by color Doppler ultrasound (CDU) via abdominal puncture during early pregnancy and investigate the points of attention during the operation. Methods: CVS guided by a CDU probe via abdominal puncture was performed on 28 pregnant women who had indications for antenatal diagnosis. CDU was used to observe the implantation position of the foliaceous villi and to help mark the puncture point and puncture range on the body surface before the operation. The needle was inserted under real-time ultrasound guidance and villi were aspirated during the operation. The choice of the right time of puncture and the operation skills were emphasized in the study. Results: The CVS puncture approach should be set by CDU observation, taking care to avoid the surrounding blood vessels, intestinal canal and other important organs. The puncture point should be chosen where the lobiform villi are distributed more widely and over a larger scope. The operations were performed from 10 to 13 weeks of pregnancy, with an average of 11 weeks. Among these 28 cases, 26 were successfully sampled in one attempt and 1 in two attempts, with 1 failure; the total success rate was 96.4%. For all the cases, fetal heart pulsation could be seen by real-time CDU observation right after the operation, and no large hematoma echo occurred in the placental site. On ultrasound reexamination one week after the operation, fetal heart pulsation could be found in all cases, and no abortion occurred after regular follow-up in the 25 patients with continued pregnancy. Conclusion: Abdominal CVS puncture guided by a CDU probe is convenient to operate, safe and clinically feasible. It is an important method for antenatal diagnosis during early pregnancy. Puncture localization, skills and timing are the key points for success in obtaining the samples. (authors)

  19. A multiple sampling time projection ionization chamber for nuclear fragment tracking and charge measurement

    International Nuclear Information System (INIS)

    Bauer, G.; Bieser, F.; Brady, F.P.; Chance, J.C.; Christie, W.F.; Gilkes, M.; Lindenstruth, V.; Lynen, U.; Mueller, W.F.J.; Romero, J.L.; Sann, H.; Tull, C.E.; Warren, P.

    1997-01-01

    A detector has been developed for the tracking and charge measurement of the projectile fragment nuclei produced in relativistic nuclear collisions. This device, MUSIC II, is a second generation Multiple Sampling Ionization Chamber (MUSIC), and employs the principles of ionization and time projection chambers. It provides unique charge determination for charges Z≥6, and excellent track position measurement. MUSIC II has been used most recently with the EOS (equation of state) TPC and other EOS collaboration detectors. Earlier it was used with other systems in experiments at the Heavy Ion Superconducting Spectrometer (HISS) facility at Lawrence Berkeley Laboratory and the ALADIN spectrometer at GSI. (orig.)

  20. Quantification of bitumen particles in aerosol and soil samples using HP-GPC

    DEFF Research Database (Denmark)

    Fauser, Patrik; Tjell, Jens Christian; Mosbæk, Hans

    2000-01-01

A method for identifying and quantifying bitumen particles, generated from the wear of roadway asphalts, in aerosol and soil samples has been developed. Bitumen is found to be the only contributor to airborne particles containing organic molecules with molecular weights larger than 2000 g per mol. These are separated and identified using High Performance Gel Permeation Chromatography (HP-GPC) with fluorescence detection. As an additional detection method, Infra Red spectrometry (IR) is employed for selected samples. The methods have been used on aerosol, soil and other samples.

  1. Examples of fatigue lifetime and reliability evaluation of larger wind turbine components

    DEFF Research Database (Denmark)

    Tarp-Johansen, N.J.

    2003-01-01

This report is one out of several that constitute the final report on the ELSAM funded PSO project “Vindmøllekomponenters udmattelsesstyrke og levetid” (project no. 2079), which regards the lifetime distribution of larger wind turbine components in a generic turbine that has real-life dimensions. Though it was the initial intention of the project to consider only the distribution of lifetimes, the work reported in this document also provides calculations of reliabilities and partial load safety factors under specific assumptions about uncertainty sources, as reliabilities are considered...

  2. Suppression dampens unpleasant emotion faster than reappraisal: Neural dynamics in a Chinese sample.

    Science.gov (United States)

    Yuan, JiaJin; Long, QuanShan; Ding, NanXiang; Lou, YiXue; Liu, YingYing; Yang, JieMin

    2015-05-01

The timing dynamics of regulating negative emotion with expressive suppression and cognitive reappraisal were investigated in a Chinese sample. Event-related potentials were recorded while subjects were required to view, suppress emotion expression to, or reappraise emotional pictures. The results showed a similar reduction in self-reported negative emotion during both strategies. Additionally, expressive suppression elicited larger amplitudes than reappraisal in the central-frontal P3 component (340-480 ms). More importantly, the Late Positive Potential (LPP) amplitudes were decreased in each 200 ms window of the 800-1600 ms interval during suppression vs. viewing conditions. In contrast, LPP amplitudes were similar for reappraisal and viewing conditions in all the time windows, except for decreased amplitudes during reappraisal in the 1400-1600 ms window. The LPP (but not P3) amplitudes were positively related to negative mood ratings, whereas the amplitudes of P3, rather than LPP, predicted self-reported expressive suppression. These results suggest that expressive suppression decreases emotional responding more rapidly than reappraisal, at the cost of greater cognitive resource involvement, in Chinese individuals.

  3. Detecting oscillatory patterns and time lags from proxy records with non-uniform sampling: Some pitfalls and possible solutions

    Science.gov (United States)

    Donner, Reik

    2013-04-01

Time series analysis offers a rich toolbox for deciphering information from high-resolution geological and geomorphological archives and linking the results thus obtained to distinct climate and environmental processes. Specifically, on various time-scales from inter-annual to multi-millennial, underlying driving forces exhibit more or less periodic oscillations, the detection of which in proxy records often allows linking them to specific mechanisms by which the corresponding drivers may have affected the archive under study. A persistent problem in geomorphology is that available records do not present a clear signal of the variability of environmental conditions, but exhibit considerable uncertainties of both the measured proxy variables and the associated age model. Particularly, time-scale uncertainty as well as the heterogeneity of sampling in the time domain are a source of severe conceptual problems that may lead to false conclusions about the presence or absence of oscillatory patterns and their mutual phasing in different archives. In my presentation, I will discuss how one can cope with non-uniformly sampled proxy records to detect and quantify oscillatory patterns in one or more data sets. For this purpose, correlation analysis is reformulated using kernel estimates, which are found superior to classical estimators based on interpolation or Fourier transform techniques. In order to characterize non-stationary or noisy periodicities and their relative phasing between different records, an extension of continuous wavelet transform is utilized. The performance of both methods is illustrated for different case studies. An extension to explicitly considering time-scale uncertainties by means of Bayesian techniques is briefly outlined.
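The kernel-based correlation estimator described in this record can be sketched as follows. This is an assumption-level illustration in the spirit of the approach, not the author's exact estimator: each observation pair contributes to the correlation at a given lag with a Gaussian weight on how far the pair's time difference is from that lag, so no interpolation onto a regular grid is needed.

```python
import math

def standardize(v):
    """Zero-mean, unit-variance rescaling of a sample."""
    m = sum(v) / len(v)
    s = math.sqrt(sum((vi - m) ** 2 for vi in v) / len(v))
    return [(vi - m) / s for vi in v]

def kernel_xcorr(tx, x, ty, y, lag, h):
    """Gaussian-kernel cross-correlation estimate at one lag for
    irregularly sampled series: every pair (i, j) is weighted by how
    close ty[j] - tx[i] is to the requested lag (bandwidth h)."""
    num = den = 0.0
    for ti, xi in zip(tx, x):
        for tj, yj in zip(ty, y):
            w = math.exp(-0.5 * ((tj - ti - lag) / h) ** 2)
            num += w * xi * yj
            den += w
    return num / den

# two irregularly (and differently) sampled copies of the same oscillation
tx = [0.33 * i + 0.11 * math.sin(3.7 * i) for i in range(60)]
ty = [0.31 * i + 0.13 * math.cos(2.9 * i) for i in range(62)]
x = standardize([math.sin(t) for t in tx])
y = standardize([math.sin(t) for t in ty])

r_zero = kernel_xcorr(tx, x, ty, y, lag=0.0, h=0.5)      # in phase: positive
r_half = kernel_xcorr(tx, x, ty, y, lag=math.pi, h=0.5)  # half period: negative
```

Scanning `lag` over a grid traces out the cross-correlation function; the location of its extremum estimates the relative phasing (time lag) between two records.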

  4. Synchronization of a Class of Memristive Stochastic Bidirectional Associative Memory Neural Networks with Mixed Time-Varying Delays via Sampled-Data Control

    Directory of Open Access Journals (Sweden)

    Manman Yuan

    2018-01-01

Full Text Available The paper addresses the issue of synchronization of memristive bidirectional associative memory neural networks (MBAMNNs) with mixed time-varying delays and stochastic perturbation via a sampled-data controller. First, we propose a new model of MBAMNNs with mixed time-varying delays. In the proposed approach, the mixed delays include time-varying distributed delays and discrete delays. Second, we design a new method of sampled-data control for the stochastic MBAMNNs. Traditional control methods lack the capability of reflecting variable synaptic weights. In this paper, the methods are carefully designed so that the synchronization processes suit the features of the memristor. Third, sufficient criteria guaranteeing the synchronization of the systems are derived based on the drive-response concept. Finally, the effectiveness of the proposed mechanism is validated with numerical experiments.
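As a toy illustration of the sampled-data idea (far simpler than the memristive BAM networks analysed in this record), the sketch below synchronizes a scalar drive-response pair with feedback that is recomputed only at sampling instants and held constant in between (zero-order hold). All dynamics and gains are hypothetical.

```python
import math

def simulate_sync(k_gain, sample_interval, t_end=20.0, dt=0.001):
    """Drive-response pair of bistable scalar systems x' = -x + 2*tanh(x).
    The response y receives feedback u that is recomputed only at the
    sampling instants and held constant in between (zero-order hold)."""
    x, y = 1.0, -1.0          # drive and response start in opposite basins
    u, next_sample, t = 0.0, 0.0, 0.0
    while t < t_end:
        if t >= next_sample:  # the controller only sees sampled errors
            u = -k_gain * (y - x)
            next_sample += sample_interval
        x += dt * (-x + 2.0 * math.tanh(x))
        y += dt * (-y + 2.0 * math.tanh(y) + u)
        t += dt
    return abs(y - x)

synced = simulate_sync(k_gain=5.0, sample_interval=0.1)        # error driven to ~0
uncontrolled = simulate_sync(k_gain=0.0, sample_interval=0.1)  # states stay apart
```

Enlarging `sample_interval` past some threshold destroys synchronization, which is why sufficient criteria in papers like this one bound the admissible sampling period.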

  5. Comparison of Nested Polymerase Chain Reaction and Real-Time Polymerase Chain Reaction with Parasitological Methods for Detection of Strongyloides stercoralis in Human Fecal Samples

    Science.gov (United States)

    Sharifdini, Meysam; Mirhendi, Hossein; Ashrafi, Keyhan; Hosseini, Mostafa; Mohebali, Mehdi; Khodadadi, Hossein; Kia, Eshrat Beigom

    2015-01-01

This study was performed to evaluate nested polymerase chain reaction (PCR) and real-time PCR methods for detection of Strongyloides stercoralis in fecal samples compared with parasitological methods. A total of 466 stool samples were examined by conventional parasitological methods (formalin ether concentration [FEC] and agar plate culture [APC]). DNA was extracted using an in-house method, and mitochondrial cytochrome c oxidase subunit 1 and 18S ribosomal genes were amplified by nested PCR and real-time PCR, respectively. Among 466 samples, 12.7% and 18.2% were found infected with S. stercoralis by FEC and APC, respectively. DNA of S. stercoralis was detected in 18.9% and 25.1% of samples by real-time PCR and nested PCR, respectively. Considering parasitological methods as the diagnostic gold standard, the sensitivity and specificity of nested PCR were 100% and 91.6%, respectively, and those of real-time PCR were 84.7% and 95.8%, respectively. However, considering sequence analyses of the selected nested PCR products, the specificity of nested PCR increases. In general, molecular methods were superior to parasitological methods. They were more sensitive and more reliable in detection of S. stercoralis in comparison with parasitological methods. Between the two molecular methods, the sensitivity of nested PCR was higher than real-time PCR. PMID:26350449
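Sensitivity and specificity against a parasitological gold standard reduce to simple arithmetic on true/false positives and negatives. The counts below are back-calculated for illustration from the percentages quoted in the record and are not the study's raw data.

```python
def diagnostic_performance(tp, fp, tn, fn):
    """Sensitivity and specificity of an index test vs a gold standard."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# illustrative counts roughly consistent with the record's percentages:
# 466 stools, ~85 positive by agar plate culture; real-time PCR misses 13
sens, spec = diagnostic_performance(tp=72, fp=16, tn=365, fn=13)
```

With these counts, sensitivity is 72/85 (about 84.7%) and specificity is 365/381 (about 95.8%), matching the figures reported for the real-time PCR.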

  6. Iohexol plasma clearance measurement in older adults with chronic kidney disease-sampling time matters.

    Science.gov (United States)

    Ebert, Natalie; Loesment, Amina; Martus, Peter; Jakob, Olga; Gaedeke, Jens; Kuhlmann, Martin; Bartel, Jan; Schuchardt, Mirjam; Tölle, Markus; Huang, Tao; van der Giet, Markus; Schaeffner, Elke

    2015-08-01

Accurate and precise measurement of GFR is important for patients with chronic kidney disease (CKD). Sampling time of exogenous filtration markers may have great impact on measured GFR (mGFR) results, but there is still uncertainty about the optimal timing of plasma clearance measurement in patients with advanced CKD, for whom 24-h measurement is recommended. This satellite project of the Berlin Initiative Study evaluates whether 24-h iohexol plasma clearance reveals a clinically relevant difference compared with 5-h measurement in older adults. In 104 participants with a mean age of 79 years and diagnosed CKD, we performed standard GFR measurement over 5 h (mGFR300) using iohexol plasma concentrations at 120, 180, 240 and 300 min after injection. With an additional sample at 1440 min, we assessed 24-h GFR measurement (mGFR1440). The study design was cross-sectional. Calculation of mGFR was conducted with a one-compartment model using the Brochner-Mortensen equation to calculate the fast component. mGFR values were compared with estimated GFR values (MDRD, CKD-EPI, BIS1, Revised Lund-Malmö and Cockcroft-Gault). In all 104 subjects, mGFR1440 was lower than mGFR300 (23 ± 8 versus 29 ± 9 mL/min/1.73 m², mean ± SD). Iohexol plasma clearance up to 5 h leads to a clinically relevant overestimation of GFR compared with 24-h measurement. In clinical care, this effect should be borne in mind especially for patients with considerably reduced GFR levels. A new correction formula has been developed to predict mGFR1440 from mGFR300. For accurate GFR estimates in elderly CKD patients, we recommend the Revised Lund-Malmö equation. © The Author 2015. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
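A sketch of the mGFR computation described in this record: a one-compartment (slope-intercept) clearance from a log-linear fit to the late samples, followed by the standard adult Brøchner-Mortensen correction for the missed fast component. The dose and concentrations below are synthetic, and the correction coefficients should be verified against the paper before reuse.

```python
import math

def slope_intercept_clearance(dose, times_min, concs):
    """One-compartment (slope-intercept) clearance in mL/min: fit
    ln C = ln C0 - k*t to the late plasma samples, then Cl = k*dose/C0."""
    n = len(times_min)
    logs = [math.log(c) for c in concs]
    tbar = sum(times_min) / n
    ybar = sum(logs) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times_min, logs))
             / sum((t - tbar) ** 2 for t in times_min))
    k = -slope                       # elimination rate constant, 1/min
    c0 = math.exp(ybar + k * tbar)   # back-extrapolated intercept
    return k * dose / c0

def brochner_mortensen(cl1):
    """Standard adult Brøchner-Mortensen correction for the fast
    component missed by the one-compartment model."""
    return 0.990778 * cl1 - 0.001218 * cl1 ** 2

# synthetic mono-exponential decay with a true clearance of 20 mL/min
dose = 100.0                                   # mg (hypothetical)
times = [120.0, 180.0, 240.0, 300.0]           # the study's 5-h schedule
concs = [0.05 * math.exp(-0.01 * t) for t in times]
cl1 = slope_intercept_clearance(dose, times, concs)
mgfr = brochner_mortensen(cl1)
```

On perfect mono-exponential data the fit recovers the true 20 mL/min, and the correction then deflates it slightly to account for the missing fast exponential.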

  7. Multilinear analysis of Time-Resolved Laser-Induced Fluorescence Spectra of U(VI containing natural water samples

    Directory of Open Access Journals (Sweden)

    Višňák Jakub

    2017-01-01

Full Text Available Natural waters’ uranium level monitoring is of great importance for health and environmental protection. One possible detection method is Time-Resolved Laser-Induced Fluorescence Spectroscopy (TRLFS), which offers the possibility to distinguish different uranium species. The analytical identification of aqueous uranium species in natural water samples is of distinct importance since individual species differ significantly in sorption properties and mobility in the environment. Samples originate from former uranium mine sites and have been provided by Wismut GmbH, Germany. They have been characterized by total elemental concentrations and TRLFS spectra. Uranium in the samples is supposed to be in the form of uranyl(VI) complexes, mostly with carbonate (CO32−) and bicarbonate (HCO3−) and to a lesser extent with sulphate (SO42−), arsenate (AsO43−), hydroxo (OH−), nitrate (NO3−) and other ligands. The presence of alkaline earth metal dications (M = Ca2+, Mg2+, Sr2+) will cause most of the uranyl to prefer ternary complex species, e.g. Mn(UO2)(CO3)3^(2n−4) (n ∈ {1; 2}). Of the species quenching the luminescence, Cl− and Fe2+ should be mentioned. Measurement has been done under cryogenic conditions to increase the luminescence signal. Data analysis has been based on Singular Value Decomposition and a monoexponential fit of the corresponding loadings (for separate TRLFS spectra, the “Factor analysis of Time Series” (FATS) method) and Parallel Factor Analysis (PARAFAC, all data analysed simultaneously). From individual component spectra, excitation energies T00, uranyl symmetric-mode vibrational frequencies ωgs and excitation-driven U-Oyl bond elongation ΔR have been determined and compared with quasirelativistic (TD)DFT/B3LYP theoretical predictions to cross-check the experimental data interpretation.

  8. Multilinear analysis of Time-Resolved Laser-Induced Fluorescence Spectra of U(VI) containing natural water samples

    Science.gov (United States)

    Višňák, Jakub; Steudtner, Robin; Kassahun, Andrea; Hoth, Nils

    2017-09-01

Natural waters' uranium level monitoring is of great importance for health and environmental protection. One possible detection method is Time-Resolved Laser-Induced Fluorescence Spectroscopy (TRLFS), which offers the possibility to distinguish different uranium species. The analytical identification of aqueous uranium species in natural water samples is of distinct importance since individual species differ significantly in sorption properties and mobility in the environment. Samples originate from former uranium mine sites and have been provided by Wismut GmbH, Germany. They have been characterized by total elemental concentrations and TRLFS spectra. Uranium in the samples is supposed to be in the form of uranyl(VI) complexes, mostly with carbonate (CO32-) and bicarbonate (HCO3-) and to a lesser extent with sulphate (SO42-), arsenate (AsO43-), hydroxo (OH-), nitrate (NO3-) and other ligands. The presence of alkaline earth metal dications (M = Ca2+, Mg2+, Sr2+) will cause most of the uranyl to prefer ternary complex species, e.g. Mn(UO2)(CO3)3^(2n-4) (n ∈ {1; 2}). Of the species quenching the luminescence, Cl- and Fe2+ should be mentioned. Measurement has been done under cryogenic conditions to increase the luminescence signal. Data analysis has been based on Singular Value Decomposition and a monoexponential fit of the corresponding loadings (for separate TRLFS spectra, the "Factor analysis of Time Series" (FATS) method) and Parallel Factor Analysis (PARAFAC, all data analysed simultaneously). From individual component spectra, excitation energies T00, uranyl symmetric-mode vibrational frequencies ωgs and excitation-driven U-Oyl bond elongation ΔR have been determined and compared with quasirelativistic (TD)DFT/B3LYP theoretical predictions to cross-check the experimental data interpretation. Note to the reader: Several errors have been produced in the initial version of this article. This new version published on 23 October 2017 contains all the corrections.
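The SVD-plus-monoexponential-fit step can be illustrated on a synthetic time-resolved spectrum with a single luminescent species (Gaussian emission band, lifetime tau); all values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic time-resolved spectra: rows = delay times, cols = wavelengths
tau_true = 5.0                            # luminescence lifetime (a.u.)
t = np.linspace(0.0, 20.0, 40)            # delay times
wl = np.linspace(450.0, 600.0, 80)        # emission wavelengths, nm
spectrum = np.exp(-0.5 * ((wl - 520.0) / 25.0) ** 2)   # Gaussian band
data = np.outer(np.exp(-t / tau_true), spectrum)
data += rng.normal(0.0, 1e-3, data.shape)              # detector noise

# SVD separates spectral shapes (rows of Vt) from time loadings (U * S)
U, S, Vt = np.linalg.svd(data, full_matrices=False)
loading = U[:, 0] * S[0]                  # time course of dominant component

# monoexponential fit of the dominant loading: ln|y| = ln A - t / tau
slope, _ = np.polyfit(t, np.log(np.abs(loading)), 1)
tau_fit = -1.0 / slope
```

With several emitting species, PARAFAC plays the analogous role of separating components, but across all samples at once instead of one spectrum at a time.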

  9. Impact of sampling interval in training data acquisition on intrafractional predictive accuracy of indirect dynamic tumor-tracking radiotherapy.

    Science.gov (United States)

    Mukumoto, Nobutaka; Nakamura, Mitsuhiro; Akimoto, Mami; Miyabe, Yuki; Yokota, Kenji; Matsuo, Yukinori; Mizowaki, Takashi; Hiraoka, Masahiro

    2017-08-01

To explore the effect of the sampling interval of training data acquisition on the intrafractional prediction error of surrogate signal-based dynamic tumor-tracking using a gimbal-mounted linac. Twenty pairs of respiratory motions were acquired from 20 patients (ten lung, five liver, and five pancreatic cancer patients) who underwent dynamic tumor-tracking with the Vero4DRT. First, respiratory motions were acquired as training data for an initial construction of the prediction model before the irradiation. Next, additional respiratory motions were acquired for an update of the prediction model due to the change of the respiratory pattern during the irradiation. The time elapsed prior to the second acquisition of the respiratory motion was 12.6 ± 3.1 min. A four-axis moving phantom reproduced patients' three-dimensional (3D) target motions and one-dimensional surrogate motions. To predict the future internal target motion from the external surrogate motion, prediction models were constructed by minimizing residual prediction errors for training data acquired at 80 and 320 ms sampling intervals for 20 s, and at 500, 1,000, and 2,000 ms sampling intervals for 60 s, using orthogonal kV x-ray imaging systems. The accuracies of prediction models trained with various sampling intervals were estimated based on training data with each sampling interval during the training process. The intrafractional prediction errors for various prediction models were then calculated on intrafractional monitoring images taken for 30 s at a constant sampling interval of 500 ms, to fairly evaluate the prediction accuracy for the same motion pattern. In addition, the first respiratory motion was used for the training and the second respiratory motion was used for the evaluation of the intrafractional prediction errors for the changed respiratory motion, to evaluate the robustness of the prediction models. The training error of the prediction model was 1.7 ± 0.7 mm in 3D for all sampling
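At its core, surrogate-based tracking fits a regression from the external surrogate signal to the internal target position on training data, then applies it to later monitoring samples. The sketch below is a deliberately simplified linear stand-in (the Vero4DRT's actual correlation model is more elaborate), with a synthetic breathing trace.

```python
import math
import random

random.seed(42)

def fit_linear_model(surrogate, target):
    """Least-squares fit target = a*surrogate + b: a simplified stand-in
    for the vendor's external-internal correlation model."""
    n = len(surrogate)
    sx, sy = sum(surrogate), sum(target)
    sxx = sum(v * v for v in surrogate)
    sxy = sum(u * v for u, v in zip(surrogate, target))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def surrogate_pos(t):     # external abdominal-wall signal, mm (synthetic)
    return 10.0 * math.sin(2.0 * math.pi * t / 4.0)

def target_pos(t):        # internal target, mm: scaled surrogate + noise
    return 0.6 * surrogate_pos(t) + 2.0 + random.gauss(0.0, 0.2)

# training data acquired at an 80 ms sampling interval for 20 s
interval = 0.08
train_t = [i * interval for i in range(int(20.0 / interval))]
a, b = fit_linear_model([surrogate_pos(t) for t in train_t],
                        [target_pos(t) for t in train_t])

# intrafractional error evaluated on later monitoring samples (500 ms)
test_t = [i * 0.5 for i in range(60)]
errs = [(a * surrogate_pos(t) + b) - target_pos(t) for t in test_t]
rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
```

Coarsening `interval` gives the regression fewer, sparser points per breathing cycle, which is the mechanism the study probes when it varies the training sampling interval.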

  10. Relative sensitivity of conventional and real-time PCR assays for detection of SFG Rickettsia in blood and tissue samples from laboratory animals.

    Directory of Open Access Journals (Sweden)

    Galina E Zemtsova

    Full Text Available Studies on the natural transmission cycles of zoonotic pathogens and the reservoir competence of vertebrate hosts require methods for reliable diagnosis of infection in wild and laboratory animals. Several PCR-based applications have been developed for detection of infections caused by Spotted Fever group Rickettsia spp. in a variety of animal tissues. These assays are being widely used by researchers, but they differ in their sensitivity and reliability. We compared the sensitivity of five previously published conventional PCR assays and one SYBR green-based real-time PCR assay for the detection of rickettsial DNA in blood and tissue samples from Rickettsia- infected laboratory animals (n = 87. The real-time PCR, which detected rickettsial DNA in 37.9% of samples, was the most sensitive. The next best were the semi-nested ompA assay and rpoB conventional PCR, which detected as positive 18.4% and 14.9% samples respectively. Conventional assays targeting ompB, gltA and hrtA genes have been the least sensitive. Therefore, we recommend the SYBR green-based real-time PCR as a tool for the detection of rickettsial DNA in animal samples due to its higher sensitivity when compared to more traditional assays.

  11. Breaking Free of Sample Size Dogma to Perform Innovative Translational Research

    Science.gov (United States)

    Bacchetti, Peter; Deeks, Steven G.; McCune, Joseph M.

    2011-01-01

    Innovative clinical and translational research is often delayed or prevented by reviewers’ expectations that any study performed in humans must be shown in advance to have high statistical power. This supposed requirement is not justifiable and is contradicted by the reality that increasing sample size produces diminishing marginal returns. Studies of new ideas often must start small (sometimes even with an N of 1) because of cost and feasibility concerns, and recent statistical work shows that small sample sizes for such research can produce more projected scientific value per dollar spent than larger sample sizes. Renouncing false dogma about sample size would remove a serious barrier to innovation and translation. PMID:21677197
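The diminishing-marginal-returns point in this record is visible directly in the power curve: each added subject buys less power than the previous one. A quick normal-approximation check for a two-sided two-sample z-test (effect size in SD units):

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def power_two_sample(n_per_group, effect_size):
    """Approximate power of a two-sided two-sample z-test at alpha = 0.05;
    the negligible lower-tail rejection probability is ignored."""
    z_crit = 1.959963984540054  # Phi^-1(1 - 0.05/2)
    return phi(effect_size * math.sqrt(n_per_group / 2.0) - z_crit)

# power gained by adding 10 subjects per group shrinks as n grows
gains = [power_two_sample(n + 10, 0.5) - power_two_sample(n, 0.5)
         for n in (20, 40, 80, 160)]
```

For a medium effect (0.5 SD), the classic 64-per-group design lands near 80% power, and the same 10 extra subjects that add over ten points of power at n = 20 add a fraction of a point at n = 160.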

  12. Comparisons of Sampling Procedures and Time of Sampling for the Detection of Salmonella in Danish Infected Chicken Flocks Raised in Floor Systems

    Directory of Open Access Journals (Sweden)

    Madsen M

    2002-03-01

Full Text Available Bacteriological follow-up samples were taken from 41 chicken (Gallus gallus) flocks in floor systems, where Salmonella enterica (Salmonella) had been detected either directly in bacteriological samples or indirectly by serological samples. Three types of follow-up samples were compared to each other within each flock: 1) 5 pairs of socks, analysed as 5 samples; 2) 2 pairs of socks, analysed as one sample; and 3) 60 faecal samples, analysed as one pooled sample. Agreement between sampling methods was evaluated by the following statistical tests: 'Kappa', 'The adjusted Rand', McNemar's test for marginal symmetry, proportion of agreement P0, P+, P-, and Odds Ratio. The highest agreement was found between the 2 types of sock sampling, while the lowest agreement was found by comparing 60 faecal samples with 5 pairs of socks. Two pairs of socks analysed as one pool appeared to be just as effective in detecting S. enterica as the 60 faecal samples. In broiler flocks, 5 pairs of socks were used both in the routine samples taken at about 3 weeks of age for the establishment of infection of the flock, and as one of the follow-up samples taken shortly before slaughter age, which means that the only notable differences between the 2 sampling rounds were the age of the broilers and of their litter. S. enterica was detected more frequently in samples from broilers about 3 weeks old than in similar samples taken from broilers a few days prior to slaughter at ca. 33–40 days of age.
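Two of the agreement statistics named in this record, the proportion of agreement P0 and Cohen's kappa, are simple arithmetic on a 2x2 table of flock-level outcomes. The counts below are hypothetical, not the study's.

```python
def agreement_2x2(both_pos, only_a, only_b, both_neg):
    """Overall proportion of agreement P0 and Cohen's kappa for two
    detection methods applied to the same flocks."""
    n = both_pos + only_a + only_b + both_neg
    p0 = (both_pos + both_neg) / n
    # chance agreement from the marginal positive/negative rates
    pe = (((both_pos + only_a) / n) * ((both_pos + only_b) / n)
          + ((only_b + both_neg) / n) * ((only_a + both_neg) / n))
    kappa = (p0 - pe) / (1.0 - pe)
    return p0, kappa

# hypothetical flock-level results: socks vs pooled faecal samples
p0, kappa = agreement_2x2(both_pos=25, only_a=4, only_b=3, both_neg=9)
```

Kappa discounts the agreement expected by chance from the marginal rates, which is why it is reported alongside the raw proportion P0.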

  13. Magnetic nanoparticles formed in glasses co-doped with iron and larger radius elements

    OpenAIRE

    Edelman , Irina; Ivanova , Oxana; Ivantsov , Ruslan; Velikanov , D.; Zabluda , V.; Zubavichus , Y.; Veligzhanin , A.; Zaikovskiy , V.; Stepanov , S.; Artemenko , Alla; Curély , Jacques; Kliava , Janis

    2012-01-01

A new type of nanoparticle-containing glasses based on borate glasses co-doped with low contents of iron and larger radius elements, Dy, Tb, Gd, Ho, Er, Y, and Bi, is studied. Heat treatment of these glasses results in formation of magnetic nanoparticles, radically changing their physical properties. Transmission electron microscopy and synchrotron radiation-based techniques: x-ray diffraction, extended x-ray absorption fine structure, x-ray absorption near-edge struct...

  14. Sampling effects on the identification of roadkill hotspots: Implications for survey design.

    Science.gov (United States)

    Santos, Sara M; Marques, J Tiago; Lourenço, André; Medinas, Denis; Barbosa, A Márcia; Beja, Pedro; Mira, António

    2015-10-01

Although locating wildlife roadkill hotspots is essential to mitigate road impacts, the influence of study design on hotspot identification remains uncertain. We evaluated how sampling frequency affects the accuracy of hotspot identification, using a dataset of vertebrate roadkills (n = 4427) recorded over a year of daily surveys along 37 km of roads. "True" hotspots were identified using this baseline dataset, as the 500-m segments where the number of road-killed vertebrates exceeded the upper 95% confidence limit of the mean, assuming a Poisson distribution of roadkills per segment. "Estimated" hotspots were identified likewise, using datasets representing progressively lower sampling frequencies, which were produced by extracting data from the baseline dataset at appropriate time intervals (1-30 days). Overall, 24.3% of segments were "true" hotspots, concentrating 40.4% of roadkills. For different groups, "true" hotspots accounted for from 6.8% (bats) to 29.7% (small birds) of road segments, concentrating up to 60% of roadkills (lizards, lagomorphs, carnivores). Spatial congruence between "true" and "estimated" hotspots declined rapidly with increasing time interval between surveys, due primarily to increasing false negatives (i.e., missing "true" hotspots). There were also false positives (i.e., wrong "estimated" hotspots), particularly at low sampling frequencies. The decay in spatial accuracy with increasing time interval between surveys was greater for smaller-bodied (amphibians, reptiles, small birds, small mammals) than for larger-bodied species (birds of prey, hedgehogs, lagomorphs, carnivores). Results suggest that widely used surveys at weekly or longer intervals may produce poor estimates of roadkill hotspots, particularly for small-bodied species. Surveying daily or at two-day intervals may be required to achieve high accuracy in hotspot identification for multiple species. Copyright © 2015 Elsevier Ltd. All rights reserved.
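The hotspot rule used in this record (segments whose count exceeds the upper 95% limit expected under a Poisson distribution with the observed mean) is straightforward to reproduce. The sketch below reads that limit as the Poisson quantile of the mean count, which is one common interpretation of the definition.

```python
import math

def poisson_quantile(mu, prob):
    """Smallest k whose Poisson(mu) CDF reaches prob."""
    k, pmf = 0, math.exp(-mu)
    cdf = pmf
    while cdf < prob:
        k += 1
        pmf *= mu / k
        cdf += pmf
    return k

def find_hotspots(counts, prob=0.95):
    """Indices of segments whose roadkill count exceeds the upper
    Poisson limit implied by the mean count per segment."""
    mu = sum(counts) / len(counts)
    threshold = poisson_quantile(mu, prob)
    return [i for i, c in enumerate(counts) if c > threshold]

# ten hypothetical 500-m segments with a year of roadkill counts
hotspots = find_hotspots([2, 1, 0, 3, 2, 14, 1, 2, 0, 9])
```

Subsampling the counts (keeping every k-th survey day) and re-running `find_hotspots` reproduces the study's false-negative effect: low-count hotspots drop below the threshold first.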

  15. Effects of statistical quality, sampling rate and temporal filtering techniques on the extraction of functional parameters from the left ventricular time-activity curves

    Energy Technology Data Exchange (ETDEWEB)

    Guignard, P.A.; Chan, W. (Royal Melbourne Hospital, Parkville (Australia). Dept. of Nuclear Medicine)

    1984-09-01

    Several techniques for the processing of a series of curves derived from two left ventricular time-activity curves acquired at rest and during exercise with a nuclear stethoscope were evaluated. They were: three- and five-point time smoothing; Fourier filtering preserving one to four harmonics (H); truncated curve Fourier filtering; and third-degree polynomial curve fitting. Each filter's ability to recover, with fidelity, systolic and diastolic function parameters was evaluated under increasingly 'noisy' conditions and at several sampling rates. Third-degree polynomial curve fitting and truncated Fourier filters exhibited very high sensitivity to noise. Three- and five-point time smoothing had moderate sensitivity to noise, but were highly affected by sampling rate. Fourier filtering preserving 2H or 3H produced the best compromise, with high resilience to noise and independence of sampling rate as far as the recovery of these functional parameters is concerned.
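    The harmonic-preserving Fourier filtering evaluated above can be sketched as follows; the curve length, harmonic count, and noise level are illustrative, not values from the study:

```python
import numpy as np

def fourier_filter(curve, n_harmonics=3):
    """Keep only the DC term and the first n_harmonics of a periodic
    time-activity curve; higher harmonics (mostly noise) are zeroed."""
    spec = np.fft.rfft(curve)
    spec[n_harmonics + 1:] = 0
    return np.fft.irfft(spec, n=len(curve))

# Illustrative LV volume-like curve: one cardiac cycle, 32 samples, plus noise
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 32, endpoint=False)
curve = 100 - 30 * np.cos(t) + rng.normal(0, 3, 32)
smooth = fourier_filter(curve, n_harmonics=3)
print(smooth.shape)  # (32,): same length, high-harmonic noise removed
```

    A noiseless single-harmonic curve passes through the filter unchanged, which is why 2H-3H filtering can preserve systolic/diastolic timing while suppressing noise.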

  16. Effects of statistical quality, sampling rate and temporal filtering techniques on the extraction of functional parameters from the left ventricular time-activity curves

    International Nuclear Information System (INIS)

    Guignard, P.A.; Chan, W.

    1984-01-01

    Several techniques for the processing of a series of curves derived from two left ventricular time-activity curves acquired at rest and during exercise with a nuclear stethoscope were evaluated. They were: three- and five-point time smoothing; Fourier filtering preserving one to four harmonics (H); truncated curve Fourier filtering; and third-degree polynomial curve fitting. Each filter's ability to recover, with fidelity, systolic and diastolic function parameters was evaluated under increasingly 'noisy' conditions and at several sampling rates. Third-degree polynomial curve fitting and truncated Fourier filters exhibited very high sensitivity to noise. Three- and five-point time smoothing had moderate sensitivity to noise, but were highly affected by sampling rate. Fourier filtering preserving 2H or 3H produced the best compromise, with high resilience to noise and independence of sampling rate as far as the recovery of these functional parameters is concerned. (author)

  17. Ventilation efficiency in a low-energy dwelling setting – a parameter study for larger rooms

    NARCIS (Netherlands)

    Dijkstra, D.; Loomans, M.G.L.C.; Hensen, J.L.M.; Cremers, B.E. (Bart)

    2016-01-01

    Mechanical balanced ventilation systems are typically applied in new and renovated dwellings in The Netherlands. The application assumes an adequate ventilation efficiency, but this has not been confirmed for larger rooms (e.g. living rooms with a kitchen attached). This study investigates ventilation

  18. When larger brains do not have more neurons: Increased numbers of cells are compensated by decreased average cell size across mouse individuals

    Directory of Open Access Journals (Sweden)

    Suzana eHerculano-Houzel

    2015-06-01

    There is a strong trend toward increased brain size in mammalian evolution, with larger brains composed of more and larger neurons than smaller brains across species within each mammalian order. Does the evolution of increased numbers of brain neurons, and thus larger brain size, occur simply through the selection of individuals with more and larger neurons, and thus larger brains, within a population? That is, do individuals with larger brains also have more, and larger, neurons than individuals with smaller brains, such that allometric relationships across species are simply an extension of intraspecific scaling? Here we show that this is not the case across adult male mice of a similar age. Rather, increased numbers of neurons across individuals are accompanied by increased numbers of other cells and smaller average cell size of both types, in a trade-off that explains how increased brain mass does not necessarily ensue. Fundamental regulatory mechanisms thus must exist that tie numbers of neurons to numbers of other cells and to average cell size within individual brains. Finally, our results indicate that changes in brain size in evolution are not an extension of individual variation in numbers of neurons, but rather occur through step changes that must simultaneously increase numbers of neurons and cause cell size to increase, rather than decrease.

  19. A within-sample investigation of test–retest reliability in choice experiment surveys with real economic incentives

    DEFF Research Database (Denmark)

    Mørkbak, Morten Raun; Olsen, Søren Bøye

    2015-01-01

    In this paper, we investigate the level of agreement between respondents' choices in identical choice sets in a test-retest choice experiment for a market good with real economic incentives, thus investigating whether the incentivised CE method can be reliable and stable over time. Besides comparing choices, we also test for differences in preferences and error variance when a sample of respondents is given the exact same questionnaire twice, with a time lag of 2 weeks in between. Finally, we examine potential reasons and covariates explaining the level of agreement in choices across the 2 weeks. Across four different tests, we find very good agreement between the two choice experiments - both with respect to overall choices and with respect to preferences. Furthermore, error variances do not differ significantly between the two surveys. The results also show that the larger the utility...

  20. Comparisons of sampling procedures and time of sampling for the detection of Salmonella in Danish infected chicken flocks raised in floor systems

    DEFF Research Database (Denmark)

    Gradel, K.O.; Andersen, J.; Madsen, M.

    2002-01-01

    other within each flock: 1) 5 pairs of socks, analysed as 5 samples, 2) 2 pairs of socks, analysed as one sample, and 3) 60 faecal samples, analysed as one pooled sample. Agreement between sampling methods was evaluated by the following statistical tests: 'Kappa', 'The adjusted rand', McNemar's test...... in detecting S. enterica as the 60 faecal samples. In broiler flocks, 5 pairs of socks were used both in the routine samples taken at about 3 weeks of age for the establishment of infection of the flock, and as one of the follow-up samples taken shortly before slaughter age, which means that the only notable...... for marginal symmetry, Proportion of agreement P-0, P-, P-, and Odds Ratio. The highest agreement was found between the 2 types of sock sampling, while the lowest agreement was found by comparing 60 faecal samples with 5 pairs of socks. Two pairs of socks analysed as one pool appeared to be just as effective
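    The record above compares sampling methods with agreement statistics such as Kappa and the proportion of agreement. A minimal stdlib sketch of both, on hypothetical paired detection results (1 = Salmonella detected), assuming Cohen's kappa is the variant intended:

```python
from collections import Counter

def proportion_agreement(a, b):
    """Fraction of flocks where the two sampling methods agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Agreement corrected for chance, from the marginal frequencies."""
    n = len(a)
    po = proportion_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] / n * cb[k] / n for k in set(a) | set(b))
    return (po - pe) / (1 - pe)

# Hypothetical per-flock results for two sock-sampling variants
socks_5pair = [1, 1, 1, 0, 1, 0, 1, 1]
socks_2pair = [1, 1, 0, 0, 1, 0, 1, 1]
print(round(cohens_kappa(socks_5pair, socks_2pair), 2))  # → 0.71
```

    A kappa near 1 corresponds to the "highest agreement" the record reports between the two sock-sampling schemes.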

  1. Using long ssDNA polynucleotides to amplify STRs loci in degraded DNA samples

    Science.gov (United States)

    Pérez Santángelo, Agustín; Corti Bielsa, Rodrigo M.; Sala, Andrea; Ginart, Santiago; Corach, Daniel

    2017-01-01

    Obtaining informative short tandem repeat (STR) profiles from degraded DNA samples is a challenging task usually undermined by locus or allele dropouts and peak-height imbalances observed in capillary electrophoresis (CE) electropherograms, especially for those markers with large amplicon sizes. We hereby show that the current STR assays may be greatly improved for the detection of genetic markers in degraded DNA samples by using long single-stranded DNA polynucleotides (ssDNA polynucleotides) as surrogates for PCR primers. These long primers allow a closer annealing to the repeat sequences, thereby reducing the length of the template required for the amplification in fragmented DNA samples, while at the same time rendering amplicons of larger sizes suitable for multiplex assays. We also demonstrate that the annealing of long ssDNA polynucleotides does not need to be fully complementary in the 5’ region of the primers, thus allowing for the design of practically any long primer sequence for developing new multiplex assays. Furthermore, genotyping of intact DNA samples could also benefit from utilizing long primers since their close annealing to the target STR sequences may overcome wrong profiling generated by insertions/deletions present between the STR region and the annealing site of the primers. Additionally, long ssDNA polynucleotides might be utilized in multiplex PCR assays for other types of degraded or fragmented DNA, e.g. circulating, cell-free DNA (ccfDNA). PMID:29099837

  2. The Occurrence of Apparent Bilateral Aldosterone Suppression in Adrenal Vein Sampling for Primary Aldosteronism

    Science.gov (United States)

    Shibayama, Yui; Wada, Norio; Naruse, Mitsuhide; Kurihara, Isao; Ito, Hiroshi; Yoneda, Takashi; Takeda, Yoshiyu; Umakoshi, Hironobu; Tsuiki, Mika; Ichijo, Takamasa; Fukuda, Hisashi; Katabami, Takuyuki; Yoshimoto, Takanobu; Ogawa, Yoshihiro; Kawashima, Junji; Ohno, Yuichi; Sone, Masakatsu; Fujita, Megumi; Takahashi, Katsutoshi; Shibata, Hirotaka; Kamemura, Kohei; Fujii, Yuichi; Yamamoto, Koichi; Suzuki, Tomoko

    2018-01-01

    Context In adrenal venous sampling (AVS) for patients with primary aldosteronism (PA), apparent bilateral aldosterone suppression (ABAS), defined as lower aldosterone/cortisol ratios in the bilateral adrenal veins than that in the inferior vena cava, is occasionally experienced. ABAS is uninterpretable with respect to lateralization of excess aldosterone production. We previously reported that ABAS was not a rare phenomenon and was significantly reduced after adrenocorticotropic hormone (ACTH) administration. Objective To validate the effects of ACTH administration and adding sampling positions in the left adrenal vein on the prevalence of ABAS in the larger Japan Primary Aldosteronism Study. Patients The data from 1689 patients with PA who underwent AVS between January 2006 and October 2016 were studied. All patients in the previous study, the West Japan Adrenal Vein Sampling study, were excluded. Outcome Measurements The prevalence of ABAS was investigated at two sampling positions in the left adrenal vein, the central vein and the common trunk, without and with ACTH administration. Results The prevalence of ABAS with ACTH administration was significantly lower than that without ACTH administration [without ACTH vs with ACTH: 79/440 (18.0%) vs 45/591 (7.6%); P < 0.001]. The effect of ACTH administration in reducing ABAS in AVS, regardless of the sampling position in the left adrenal vein, was confirmed in the larger cohort. PMID:29687091

  3. Analysis of techniques of sample attack for soil and mineral analysis

    International Nuclear Information System (INIS)

    Dean, J.R.; Chiu, N.W.

    1985-05-01

    Four methods of sample attack were evaluated in the laboratory for use in the determination of uranium, radium-226, thorium-232, thorium-230, thorium-228, and lead-210. The methods evaluated were (1) KF/pyrosulfate fusion; (2) Sodium carbonate fusion; (3) Nitric, perchloric, hydrofluoric acid digestion; and, (4) combination nitric, perchloric, hydrofluoric acid/pyrosulfate fusion. Five samples were chosen for evaluation; two were mine tailings from Bancroft, Ontario and Beaverlodge, Saskatchewan, one was a synthetic uranium ore-silica mixture and two were soil samples supplied by AECB. The KF/pyrosulfate dissolution procedure was found to be the fastest and, overall, most accurate dissolution method for the analysis of 1-20 samples. For larger numbers of samples the three acid/pyrosulfate fusion combination was shown to have some merit

  4. Larger red-shift in optical emissions obtained from the thin films of globular proteins (BSA, lysozyme) – polyelectrolyte (PAA) complexes

    Energy Technology Data Exchange (ETDEWEB)

    Talukdar, Hrishikesh [Physical Sciences Division, Institute of Advanced Study in Science and Technology, Vigyan Path, Paschim Boragaon, Garchuk, Guwahati 781035, Assam (India); Kundu, Sarathi, E-mail: sarathi.kundu@gmail.com [Physical Sciences Division, Institute of Advanced Study in Science and Technology, Vigyan Path, Paschim Boragaon, Garchuk, Guwahati 781035, Assam (India); Basu, Saibal [Solid State Physics Division, Bhabha Atomic Research Centre, Mumbai 400 085 (India)

    2016-09-30

    Graphical abstract: Thin films of protein-polyelectrolyte complexes show larger red-shift in optical emission. - Highlights: • Globular proteins (lysozyme and BSA) and polyelectrolyte (sodium polyacrylic acid) are used to form protein-polyelectrolyte complexes (PPC). • Larger red-shift in optical emission is obtained from the thin films of PPC. • Red-shift is not obtained from the solution of PPC and pure protein thin films. • Larger red-shift from PPC films is due to the energy dissipation as non-radiative form through interactions with nearby atoms. • Red-shift in optical emission is independent of the thickness of the PPC film. - Abstract: Globular proteins (lysozyme and BSA) and polyelectrolyte (sodium polyacrylic acid) are used to form protein-polyelectrolyte complexes (PPC). Out-of-plane structures of ≈30–60 nm thick PPC films and their surface morphologies have been studied by using X-ray reflectivity and atomic force microscopy, whereas optical behaviors of PPC and protein conformations have been studied by using UV–vis, photoluminescence and FTIR spectroscopy, respectively. Our study reveals that thin films of PPC show larger red-shifts of 23 and 16 nm in the optical emissions in comparison to that of pure protein, whereas bulk PPC shows a small blue-shift of ≈3 nm. A small amount of peak-shift is found to occur due to heat treatment or concentration variation of the polyelectrolyte/protein in bulk solution, but these cannot produce such a film-thickness-independent larger red-shift. The position of the emission peak remains nearly unchanged with film thickness. A mechanism for such a larger red-shift has been proposed.

  5. Coexistence of Epstein-Barr virus and Parvovirus B19 in tonsillar tissue samples: quantitative measurement by real-time PCR.

    Science.gov (United States)

    Sahiner, Fatih; Gümral, Ramazan; Yildizoğlu, Üzeyir; Babayiğit, Mustafa Alparslan; Durmaz, Abdullah; Yiğit, Nuri; Saraçli, Mehmet Ali; Kubar, Ayhan

    2014-08-01

    In this study, we aimed to investigate the presence and copy number of six different viruses in tonsillar tissue samples removed surgically because of chronic recurrent tonsillitis or chronic obstructive tonsillar hypertrophy. In total, 56 tissue samples (tonsillar core) collected from 44 children and 12 adults were included in this study. The presence of viruses was investigated using a new TaqMan-based quantitative real-time PCR assay. Of the 56 tissue samples, 67.9% (38/56) were positive for at least one of the six viruses. Epstein-Barr virus was the most frequently detected virus, being found in 53.6% (30/56), followed by human Parvovirus B19 21.4% (12/56), human adenovirus 12.5% (7/56), human Cytomegalovirus 5.4% (3/56), BK polyomavirus 1.8% (1/56), and Herpes simplex virus 1.8% (1/56). Precancerous or cancerous changes were not detected in the tonsillar tissue samples by pathologic examination, whereas lymphoid hyperplasia was observed in 24 patients. In contrast to other viruses, B19 virus was present in high copy number in tonsillar tissues. The rates of EBV and B19 virus with high copy number (>500,000 copies/ml) were higher in children than in adults, and a positive relationship was also found between the presence of EBV and the presence of B19 virus with high copy number (P=0.037). It has previously been reported that some viral agents are associated with different chronic tonsillar pathologies. In the present study, the presence of B19 virus in tonsillar core samples was investigated quantitatively for the first time, and our data suggest that EBV infections could be associated with B19 virus infections or could facilitate B19 virus replication. However, further detailed studies are needed to clarify this observation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  6. Rapid identification and quantification of Campylobacter coli and Campylobacter jejuni by real-time PCR in pure cultures and in complex samples

    Directory of Open Access Journals (Sweden)

    Denis Martine

    2011-05-01

    Background Campylobacter spp., especially Campylobacter jejuni (C. jejuni) and Campylobacter coli (C. coli), are recognized as the leading human foodborne pathogens in developed countries. Livestock animals carrying Campylobacter pose an important risk for human contamination. Pigs are known to be frequently colonized with Campylobacter, especially C. coli, and to excrete high numbers of this pathogen in their faeces. Molecular tools, notably real-time PCR, provide an effective, rapid, and sensitive alternative to culture-based methods for the detection of C. coli and C. jejuni in various substrates. In order to serve as a diagnostic tool supporting Campylobacter epidemiology, we developed a quantitative real-time PCR method for species-specific detection and quantification of C. coli and C. jejuni directly in faecal, feed, and environmental samples. Results With a sensitivity of 10 genome copies and a linear range of seven to eight orders of magnitude, the C. coli and C. jejuni real-time PCR assays allowed a precise quantification of purified DNA from C. coli and C. jejuni. The assays were highly specific and showed a 6-log-linear dynamic range of quantification with a quantitative detection limit of approximately 2.5 × 102 CFU/g of faeces, 1.3 × 102 CFU/g of feed, and 1.0 × 103 CFU/m2 for the environmental samples. Compared to the results obtained by culture, both C. coli and C. jejuni real-time PCR assays exhibited a specificity of 96.2% with a kappa of 0.94 and 0.89, respectively. For faecal samples of experimentally infected pigs, the coefficients of correlation between the C. coli or C. jejuni real-time PCR assay and culture enumeration were R2 = 0.90 and R2 = 0.93, respectively. Conclusion The C. coli and C. jejuni real-time quantitative PCR assays developed in this study provide a method capable of directly detecting and quantifying C. coli and C. jejuni in faeces, feed, and environmental samples. These assays represent a new
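    Assays like the one above report a log-linear dynamic range because quantification rests on a standard curve relating Ct to log10 copy number. The slope and intercept below are illustrative assumptions (a slope of -3.32 corresponds to 100% amplification efficiency), not values from this study:

```python
# Assumed standard-curve parameters (illustrative, not from the record):
SLOPE = -3.32      # Ct change per 10-fold dilution (100% efficiency)
INTERCEPT = 38.0   # Ct expected for a single genome copy

def copies_from_ct(ct):
    """Invert the standard curve Ct = INTERCEPT + SLOPE * log10(copies)."""
    return 10 ** ((ct - INTERCEPT) / SLOPE)

def efficiency(slope):
    """Amplification efficiency implied by a standard-curve slope."""
    return 10 ** (-1 / slope) - 1

print(round(copies_from_ct(28.04)))   # → 1000 (three decades below intercept)
print(round(efficiency(-3.32), 3))    # → 1.001, i.e. ≈100% efficiency
```

    Dividing the recovered copy number by the grams of faeces (or feed) extracted gives the CFU/g-style figures quoted in the record, assuming one genome copy per CFU.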

  7. New Approach Based on Compressive Sampling for Sample Rate Enhancement in DASs for Low-Cost Sensing Nodes

    Directory of Open Access Journals (Sweden)

    Francesco Bonavolontà

    2014-10-01

    The paper deals with the problem of improving the maximum sample rate of analog-to-digital converters (ADCs) included in low-cost wireless sensing nodes. To this aim, the authors propose an efficient acquisition strategy based on the combined use of a high-resolution time-basis and compressive sampling. In particular, the high-resolution time-basis is adopted to provide a proper sequence of random sampling instants, and a suitable software procedure, based on a compressive sampling approach, is exploited to reconstruct the signal of interest from the acquired samples. Thanks to the proposed strategy, the effective sample rate of the reconstructed signal can be as high as the frequency of the considered time-basis, thus significantly improving the inherent ADC sample rate. Several tests are carried out in simulated and real conditions to assess the performance of the proposed acquisition strategy in terms of reconstruction error. In particular, the results obtained in experimental tests with the ADCs included in actual 8- and 32-bit microcontrollers highlight the possibility of achieving an effective sample rate up to 50 times higher than the original ADC sample rate.
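    The acquisition strategy above, random sampling instants drawn from a high-resolution time base followed by sparse reconstruction, can be illustrated with a toy example: a signal that is sparse in a cosine dictionary is recovered from a few random samples by a single greedy matching-pursuit step. This is a generic compressive-sampling sketch, not the authors' software procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 256, 64                      # high-resolution grid, number of random samples
n = np.arange(N)
true_k, true_amp = 17, 3.0
signal = true_amp * np.cos(2 * np.pi * true_k * n / N)

# Random sampling instants drawn from the high-resolution time base
idx = rng.choice(N, size=M, replace=False)
y = signal[idx]

# Dictionary of cosine atoms restricted to the sampled instants
A = np.cos(2 * np.pi * np.outer(idx, np.arange(N // 2)) / N)

# Greedy step: pick the atom best correlated with the samples,
# then solve least squares on that atom alone
corr = np.abs(A.T @ y) / np.linalg.norm(A, axis=0)
k_hat = int(np.argmax(corr))
amp_hat, *_ = np.linalg.lstsq(A[:, [k_hat]], y, rcond=None)
print(k_hat, round(float(amp_hat[0]), 3))  # recovers frequency 17, amplitude 3.0
```

    Note the reconstructed signal lives on the full N-point grid even though only M samples were taken, which is the sense in which the effective sample rate exceeds the ADC rate.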

  8. Exponential Synchronization for Stochastic Neural Networks with Mixed Time Delays and Markovian Jump Parameters via Sampled Data

    Directory of Open Access Journals (Sweden)

    Yingwei Li

    2014-01-01

    The exponential synchronization issue for stochastic neural networks (SNNs) with mixed time delays and Markovian jump parameters using a sampled-data controller is investigated. Based on a novel Lyapunov-Krasovskii functional, stochastic analysis theory, and the linear matrix inequality (LMI) approach, we derive some novel sufficient conditions that guarantee that the master systems exponentially synchronize with the slave systems. The design method of the desired sampled-data controller is also proposed. To reflect the most dynamical behaviors of the system, both Markovian jump parameters and stochastic disturbances are considered, where the stochastic disturbances are given in the form of a Brownian motion. The results obtained in this paper are less conservative than previous results in the literature. Finally, two numerical examples are given to illustrate the effectiveness of the proposed methods.

  9. Reproducibility of Serum Potassium Values in Serum From Blood Samples Stored for Increasing Times Prior to Centrifugation and Analysis.

    Science.gov (United States)

    Harper, Aaron; Lu, Chuanyong; Sun, Yi; Garcia, Rafael; Rets, Anton; Alexis, Herol; Saad, Heba; Eid, Ikram; Harris, Loretta; Marshall, Barbara; Tafani, Edlira; Pincus, Matthew R

    2016-05-01

    The goal of this work was to determine if immediate versus postponed centrifugation of samples affects the levels of serum potassium. Twenty participants donated normal venous blood that was collected in four serum separator tubes per donor, each of which was analyzed at 0, 1, 2, or 4 hr on the Siemens Advia 1800 autoanalyzer. Coefficients of variation (CVs) for potassium levels ranged from 0% to 7.6% with a mean of 3 ± 2%. ANOVA testing of the means for all 20 samples showed a P-value of 0.72 (>0.05) indicating that there was no statistically significant difference between the means of the samples at the four time points. Sixteen samples were found to have CVs that were ≤5%. Two samples showed increases of potassium from the reference range to levels higher than the upper reference limit, one of which had a 4-hr value that was within the reference or normal range (3.5-5 mEq/l). Overall, most samples were found to have reproducible levels of serum potassium. Serum potassium levels from stored whole blood collected in serum separator tubes are, for the most part, stable at room temperature for at least 4 hr prior to analysis. However, some samples can exhibit significant fluctuations of values. © 2015 Wiley Periodicals, Inc.
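    The stability metric used in this record, the coefficient of variation of a donor's potassium values across the four time points, reduces to a one-liner; the values below are illustrative, not data from the study:

```python
from statistics import mean, stdev

def cv_percent(values):
    """Coefficient of variation: sample SD as a percentage of the mean."""
    return stdev(values) / mean(values) * 100

# One donor's serum potassium (mEq/l) at 0, 1, 2, and 4 h (hypothetical)
potassium = [4.0, 4.1, 4.0, 4.2]
print(round(cv_percent(potassium), 1))  # → 2.3, within the ≤5% most samples showed
```

    A CV above roughly 5% would flag the kind of fluctuating sample the record describes, where a value drifts out of the 3.5-5 mEq/l reference range.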

  10. Sample preparation

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    Sample preparation prior to HPLC analysis is certainly one of the most important steps to consider in trace or ultratrace analysis. For many years scientists have tried to simplify the sample preparation process. It is rarely possible to inject a neat liquid sample, or a sample whose preparation is no more complex than dissolution in a given solvent. The latter process alone can remove insoluble materials, which is especially helpful with samples in complex matrices if other interactions do not affect extraction. Here, it is very likely that a large number of components will not dissolve and are, therefore, eliminated by a simple filtration process. In most cases, the process of sample preparation is not as simple as dissolution of the component of interest. At times, enrichment is necessary; that is, the component of interest is present in a very large volume or mass of material. It needs to be concentrated in some manner so a small volume of the concentrated or enriched sample can be injected into HPLC. 88 refs

  11. Comparison of serum pools and oral fluid samples for detection of porcine circovirus type 2 by quantitative real-time PCR in finisher pigs

    DEFF Research Database (Denmark)

    Nielsen, Gitte Blach; Nielsen, Jens Peter; Haugegaard, John

    2018-01-01

    Porcine circovirus type 2 (PCV2) diagnostics in live pigs often involves pooled serum and/or oral fluid samples for group-level determination of viral load by quantitative real-time polymerase chain reaction (qPCR). The purpose of the study was to compare the PCV2 viral load determined by qPCR of paired samples at the pen level of pools of sera (SP) from 4 to 5 pigs and the collective oral fluid (OF) from around 30 pigs corresponding to one rope put in the same pen. Pigs in pens of 2 finishing herds were sampled by cross-sectional (Herd 1) and cross-sectional with follow-up (Herd 2) study designs. In Herd 1, 50 sample pairs consisting of SP from 4 to 5 pigs and OF from around 23 pigs were collected. In Herd 2, 65 sample pairs consisting of 4 (SP) and around 30 (OF) pigs were collected 4 times at 3-week intervals. A higher proportion of PCV2-positive pens (86% vs. 80% and 100% vs. 91%) and higher

  12. Adaptive sampling of AEM transients

    Science.gov (United States)

    Di Massa, Domenico; Florio, Giovanni; Viezzoli, Andrea

    2016-02-01

    This paper focuses on the sampling of the electromagnetic transient as acquired by airborne time-domain electromagnetic (TDEM) systems. Typically, the sampling of the electromagnetic transient is done using a fixed number of gates whose width grows logarithmically (log-gating). The log-gating has two main benefits: improving the signal to noise (S/N) ratio at late times, when the electromagnetic signal has amplitudes equal to or lower than the natural background noise, and ensuring a good resolution at the early times. However, as a result of fixed time gates, the conventional log-gating does not consider any geological variations in the surveyed area, nor the possibly varying characteristics of the measured signal. We show, using synthetic models, how a different, flexible sampling scheme can increase the resolution of resistivity models. We propose a new sampling method, which adapts the gating on the basis of the slope variations in the electromagnetic (EM) transient. The use of such an alternative sampling scheme aims to get more accurate inverse models by extracting the geoelectrical information from the measured data in an optimal way.
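    Conventional log-gating, the baseline the adaptive scheme is compared against, places gate edges at geometrically increasing times; a minimal sketch with an illustrative gate count and time window:

```python
def log_gates(t_start, t_end, n_gates):
    """Gate edges growing geometrically from t_start to t_end (log-gating)."""
    ratio = (t_end / t_start) ** (1 / n_gates)
    return [t_start * ratio ** i for i in range(n_gates + 1)]

# e.g. 30 gates spanning 10 us to 10 ms after transmitter turn-off (illustrative)
edges = log_gates(1e-5, 1e-2, 30)
widths = [b - a for a, b in zip(edges, edges[1:])]
print(len(widths), widths[-1] > widths[0])  # 30 True: later gates are wider
```

    Wider late gates average more samples, which is the late-time S/N benefit the record describes; the proposed adaptive method instead redistributes edges where the transient's slope changes.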

  13. Testing of candidate non-lethal sampling methods for detection of Renibacterium salmoninarum in juvenile Chinook salmon Oncorhynchus tshawytscha

    Science.gov (United States)

    Elliott, Diane G.; McKibben, Constance L.; Conway, Carla M.; Purcell, Maureen K.; Chase, Dorothy M.; Applegate, Lynn M.

    2015-01-01

    Non-lethal pathogen testing can be a useful tool for fish disease research and management. Our research objectives were to determine if (1) fin clips, gill snips, surface mucus scrapings, blood draws, or kidney biopsies could be obtained non-lethally from 3 to 15 g Chinook salmon Oncorhynchus tshawytscha, (2) non-lethal samples could accurately discriminate between fish exposed to the bacterial kidney disease agent Renibacterium salmoninarum and non-exposed fish, and (3) non-lethal samples could serve as proxies for lethal kidney samples to assess infection intensity. Blood draws and kidney biopsies caused ≥5% post-sampling mortality (Objective 1) and may be appropriate only for larger fish, but the other sample types were non-lethal. Sampling was performed over 21 wk following R. salmoninarum immersion challenge of fish from 2 stocks (Objectives 2 and 3), and nested PCR (nPCR) and real-time quantitative PCR (qPCR) results from candidate non-lethal samples were compared with kidney tissue analysis by nPCR, qPCR, bacteriological culture, enzyme-linked immunosorbent assay (ELISA), fluorescent antibody test (FAT) and histopathology/immunohistochemistry. R. salmoninarum was detected by PCR in >50% of fin, gill, and mucus samples from challenged fish. Mucus qPCR was the only non-lethal assay exhibiting both diagnostic sensitivity and specificity estimates >90% for distinguishing between R. salmoninarum-exposed and non-exposed fish and was the best candidate for use as an alternative to lethal kidney sample testing. Mucus qPCR R. salmoninarum quantity estimates reflected changes in kidney bacterial load estimates, as evidenced by significant positive correlations with kidney R. salmoninarum infection intensity scores at all sample times and in both fish stocks, and were not significantly impacted by environmental R. salmoninarum concentrations.
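    The diagnostic sensitivity and specificity used to rank the candidate samples above are simple ratios over the 2×2 outcome table; the counts below are hypothetical, not the study's data:

```python
def sensitivity(tp, fn):
    """Proportion of truly exposed fish that test positive."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of non-exposed fish that test negative."""
    return tn / (tn + fp)

# Hypothetical mucus-qPCR results: 55/60 exposed positive, 57/60 controls negative
print(round(sensitivity(55, 5), 3), round(specificity(57, 3), 3))  # → 0.917 0.95
```

    Both figures exceeding 0.90, as in this hypothetical table, is the ">90%" criterion that made mucus qPCR the preferred non-lethal proxy.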

  14. The relative importance of perceptual and memory sampling processes in determining the time course of absolute identification.

    Science.gov (United States)

    Guest, Duncan; Kent, Christopher; Adelman, James S

    2018-04-01

    In absolute identification, the extended generalized context model (EGCM; Kent & Lamberts, 2005, 2016) proposes that perceptual processing determines systematic response time (RT) variability; all other models of RT emphasize response selection processes. In the EGCM-RT the bow effect in RTs (longer responses for stimuli in the middle of the range) occurs because these middle stimuli are less isolated, and as perceptual information is accumulated, the evidence supporting a correct response grows more slowly than for stimuli at the ends of the range. More perceptual information is therefore accumulated in order to increase certainty in response for middle stimuli, lengthening RT. According to the model reducing perceptual sampling time should reduce the size of the bow effect in RT. We tested this hypothesis in 2 pitch identification experiments. Experiment 1 found no effect of stimulus duration on the size of the RT bow. Experiment 2 used multiple short stimulus durations as well as manipulating set size and stimulus spacing. Contrary to EGCM-RT predictions, the bow effect on RTs was large for even very short durations. A new version of the EGCM-RT could only capture this, alongside the effect of stimulus duration on accuracy, by including both a perceptual and a memory sampling process. A modified version of the selective attention, mapping, and ballistic accumulator model (Brown, Marley, Donkin, & Heathcote, 2008) could also capture the data, by assuming psychophysical noise diminishes with increased exposure duration. This modeling suggests systematic variability in RT in absolute identification is largely determined by memory sampling and response selection processes. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  15. Composite sampling of a Bacillus anthracis surrogate with cellulose sponge surface samplers from a nonporous surface.

    Directory of Open Access Journals (Sweden)

    Jenia A M Tufts

    A series of experiments was conducted to explore the utility of composite-based collection of surface samples for the detection of a Bacillus anthracis surrogate using cellulose sponge samplers on a nonporous stainless steel surface. Two composite-based collection approaches were evaluated over a surface area of 3716 cm2 (four separate 929 cm2 areas), larger than the 645 cm2 prescribed by the standard Centers for Disease Control and Prevention (CDC) cellulose sponge sampling protocol for use on nonporous surfaces. The CDC method was also compared to a modified protocol where only one surface of the sponge sampler was used for each of the four areas composited. Differences in collection efficiency compared to positive controls and the potential for contaminant transfer for each protocol were assessed. The impact of the loss of wetting buffer from the sponge sampler onto additional surface areas sampled was evaluated. Statistical tests of the results using ANOVA indicate that the collection of composite samples using the modified sampling protocol is comparable to the collection of composite samples using the standard CDC protocol (p = 0.261). Most of the surface-bound spores are collected on the first sampling pass, suggesting that multiple passes with the sponge sampler over the same surface may be unnecessary. The effect of moisture loss from the sponge sampler on collection efficiency was not significant (p = 0.720) for both methods. Contaminant transfer occurs with both sampling protocols, but the magnitude of transfer is significantly greater when using the standard protocol than when the modified protocol is used (p < 0.001). The results of this study suggest that composite surface sampling, by either method presented here, could successfully be used to increase the surface area sampled per sponge sampler, resulting in reduced sampling times in the field and decreased laboratory processing cost and turn-around times.

  16. An improved permanent magnet quadrupole design with larger good field region for high intensity proton linacs

    Energy Technology Data Exchange (ETDEWEB)

    Mathew, Jose V., E-mail: josev.mathew@gmail.com; Rao, S.V.L.S.; Krishnagopal, S.; Singh, P.

    2013-11-01

    The Low Energy High Intensity Proton Accelerator (LEHIPA), being developed at the Bhabha Atomic Research Centre (BARC) will produce a 20 MeV, 30 mA, continuous wave (CW) proton beam. At these low velocities, space-charge forces dominate, and could lead to larger beam sizes and beam halos. Hence in the design of the focusing lattice of the LEHIPA drift tube linac (DTL) using permanent magnet quadrupoles (PMQs), a larger good field region is preferred. Here we study, using the two dimensional (2D) and three dimensional (3D) simulation codes PANDIRA and RADIA, four different types of cylindrical PMQ designs: 16-segment trapezoidal Halbach configuration, bullet-nosed geometry and 8- and 16-segment rectangular geometries. The trapezoidal Halbach geometry is used in a variety of accelerators since it provides very high field gradients in small bores, while the bullet-nosed geometry, which is a combination of the trapezoidal and rectangular designs, is used in some DTLs. This study shows that a larger good field region is possible in the 16-segment rectangular design as compared to the Halbach and bullet-nosed designs, making it more attractive for high-intensity proton linacs. An improvement in good-field region by ∼16% over the Halbach design is obtained in the optimized 16-segment rectangular design, although the field gradient is lower by ∼20%. Tolerance studies show that the rectangular segment PMQ design is substantially less sensitive to the easy axis orientation errors and hence will be a better choice for DTLs. -- Highlights: • An improved permanent magnet quadrupole (PMQ) design with larger good field region is proposed. • We investigate four PMQ designs, including the widely used Halbach and bullet nosed designs. • Analytical calculations are backed by 2D as well as 3D numerical solvers, PANDIRA and RADIA. • The optimized 16 segment rectangular PMQ design is identified to exhibit the largest good field region. • The effect of easy axis orientation

  17. An improved permanent magnet quadrupole design with larger good field region for high intensity proton linacs

    International Nuclear Information System (INIS)

    Mathew, Jose V.; Rao, S.V.L.S.; Krishnagopal, S.; Singh, P.

    2013-01-01

    The Low Energy High Intensity Proton Accelerator (LEHIPA), being developed at the Bhabha Atomic Research Centre (BARC) will produce a 20 MeV, 30 mA, continuous wave (CW) proton beam. At these low velocities, space-charge forces dominate, and could lead to larger beam sizes and beam halos. Hence in the design of the focusing lattice of the LEHIPA drift tube linac (DTL) using permanent magnet quadrupoles (PMQs), a larger good field region is preferred. Here we study, using the two dimensional (2D) and three dimensional (3D) simulation codes PANDIRA and RADIA, four different types of cylindrical PMQ designs: 16-segment trapezoidal Halbach configuration, bullet-nosed geometry and 8- and 16-segment rectangular geometries. The trapezoidal Halbach geometry is used in a variety of accelerators since it provides very high field gradients in small bores, while the bullet-nosed geometry, which is a combination of the trapezoidal and rectangular designs, is used in some DTLs. This study shows that a larger good field region is possible in the 16-segment rectangular design as compared to the Halbach and bullet-nosed designs, making it more attractive for high-intensity proton linacs. An improvement in good-field region by ∼16% over the Halbach design is obtained in the optimized 16-segment rectangular design, although the field gradient is lower by ∼20%. Tolerance studies show that the rectangular segment PMQ design is substantially less sensitive to the easy axis orientation errors and hence will be a better choice for DTLs. -- Highlights: • An improved permanent magnet quadrupole (PMQ) design with larger good field region is proposed. • We investigate four PMQ designs, including the widely used Halbach and bullet nosed designs. • Analytical calculations are backed by 2D as well as 3D numerical solvers, PANDIRA and RADIA. • The optimized 16 segment rectangular PMQ design is identified to exhibit the largest good field region. • The effect of easy axis orientation

  18. Real-time PCR and enzyme-linked fluorescent assay methods for detecting Shiga-toxin-producing Escherichia coli in mincemeat samples.

    Science.gov (United States)

    Stefan, A; Scaramagli, S; Bergami, R; Mazzini, C; Barbanera, M; Perelle, S; Fach, P

    2007-03-01

    This work aimed to compare real-time polymerase chain reaction (PCR) with the commercially available enzyme-linked fluorescent assay (ELFA) VIDAS ECOLI O157 for detecting Escherichia coli O157 in mincemeat. In addition, a PCR-based survey on Shiga-toxin-producing E. coli (STEC) in mincemeat collected in Italy is presented. Real-time PCR assays targeting the stx genes and a specific STEC O157 sequence (SILO157, a small inserted locus of STEC O157) were tested for their sensitivity on spiked mincemeat samples. After overnight enrichment, the presence of STEC cells could be clearly determined in the 25 g samples containing 10 bacterial cells, while the addition of five bacteria provided equivocal PCR results with Ct values very close to or above the threshold of 40. The PCR tests proved to be more sensitive than the ELFA-VIDAS ECOLI O157, whose detection level started from 50 bacterial cells/25 g of mincemeat. The occurrence of STEC in 106 mincemeat (bovine, veal) samples collected from September to November 2004 at five different points of sale in Italy (one point of sale in Arezzo, Tuscany, central Italy, two in Mantova, Lombardy, Northern Italy, and two in Bologna, Emilia-Romagna, upper-central Italy) was less than 1%. Contamination by the main STEC O-serogroups representing a major public health concern, including O26, O91, O111, O145, and O157, was not detected. This survey indicates that STEC present in these samples are probably not associated with pathogenesis in humans.
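The decision logic described above hinges on the Ct-40 cycle threshold. A toy sketch of how such readings might be triaged (the two-cycle "equivocal" band is our assumption for illustration, not a rule from the paper):

```python
CT_THRESHOLD = 40.0     # cycle threshold used as the decision boundary
EQUIVOCAL_MARGIN = 2.0  # assumed width of the "too close to call" band

def classify_ct(ct):
    """Classify a qPCR reading; ct=None means no amplification observed."""
    if ct is None or ct > CT_THRESHOLD:
        return "negative"
    if ct > CT_THRESHOLD - EQUIVOCAL_MARGIN:
        return "equivocal"   # near threshold, like the 5-cell spiked samples
    return "positive"

results = [classify_ct(c) for c in (32.1, 39.5, None, 41.0)]
```

Here 32.1 reads as a clear positive, 39.5 falls in the equivocal band near Ct 40, and the last two count as negative.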

  19. Investigating the factorial structure and availability of work time control in a representative sample of the Swedish working population.

    Science.gov (United States)

    Albrecht, Sophie C; Kecklund, Göran; Tucker, Philip; Leineweber, Constanze

    2016-05-01

Past research has often neglected the sub-dimensions of work time control (WTC). Moreover, differences in levels of WTC with respect to work and demographic characteristics have not yet been examined in a representative sample. We investigated these matters in a recent sample of the Swedish working population. The study was based on the 2014 data collection of the Swedish Longitudinal Occupational Survey of Health. We assessed the structure of the WTC measure using exploratory and confirmatory factor analysis. Differences in WTC by work and demographic characteristics were examined with independent-sample t-tests, one-way ANOVAs and gender-stratified logistic regressions. Best model fit was found for a two-factor structure that distinguished between control over daily hours and control over time off (root mean square error of approximation = 0.06; 95% CI 0.04 to 0.09; Comparative Fit Index (CFI) = 0.99). Women, shift and public-sector workers reported lower control in relation to both factors. Age showed small associations with WTC, while a stronger link was suggested for civil status and family situation. Night, roster and rotating shift work seemed to be the most influential factors on reporting low control over daily hours and time off. Our data confirm the two-dimensional structure underlying WTC, namely the components 'control over daily hours' and 'control over time off'. Women, public-sector and shift workers reported lower levels of control. Future research should examine the public health implications of WTC, in particular whether increased control over daily hours and time off can reduce health problems associated with difficult working-time arrangements. © 2015 the Nordic Societies of Public Health.

  20. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    Science.gov (United States)

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

    A rough estimate indicated that use of samples of size not larger than ten is not uncommon in biomedical research and that many of such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown if mathematical requirements incorporated in the sample comparison methods are satisfied. Computer simulated experiments were used to examine performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. The sample size 9 and the t-test method with p = 5% ensured error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is granted by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
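The paper's core procedure, estimating Type I and Type II error rates of a t-test by simulated experiments, can be sketched with the standard library alone. Assumptions in this sketch: normally distributed populations, an effect size of 1.5 SD for the "true effect" case, and the tabulated two-sided 5% critical value t ≈ 2.120 for df = 16 (two groups of n = 9):

```python
import random
from statistics import mean, stdev

random.seed(1)
T_CRIT = 2.120  # two-sided 5% critical value for df = 16 (n1 = n2 = 9)

def pooled_t(a, b):
    """Pooled-variance two-sample t statistic."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

def error_rates(n, effect, trials=2000):
    type1 = type2 = 0
    for _ in range(trials):
        control = [random.gauss(0, 1) for _ in range(n)]
        null_grp = [random.gauss(0, 1) for _ in range(n)]      # no true effect
        alt_grp = [random.gauss(effect, 1) for _ in range(n)]  # true effect
        if abs(pooled_t(control, null_grp)) > T_CRIT:
            type1 += 1  # false positive under no true effect
        if abs(pooled_t(control, alt_grp)) <= T_CRIT:
            type2 += 1  # missed a true effect
    return type1 / trials, type2 / trials

t1, t2 = error_rates(n=9, effect=1.5)
```

With n = 9 per group, the Type I rate lands near the nominal 5% and strong effects are rarely missed, consistent with the paper's recommendation of a minimum sample size of 9.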

  1. Range-efficient consistent sampling and locality-sensitive hashing for polygons

    DEFF Research Database (Denmark)

    Gudmundsson, Joachim; Pagh, Rasmus

    2017-01-01

    Locality-sensitive hashing (LSH) is a fundamental technique for similarity search and similarity estimation in high-dimensional spaces. The basic idea is that similar objects should produce hash collisions with probability significantly larger than objects with low similarity. We consider LSH for...... or union of a set of preprocessed polygons. Curiously, our consistent sampling method uses transformation to a geometric problem....

  2. The cause of larger local magnitude (Mj) in western Japan

    Science.gov (United States)

    Kawamoto, H.; Furumura, T.

    2017-12-01

The local magnitude on the Japan Meteorological Agency (JMA) scale (Mj) sometimes shows a significant discrepancy with the moment magnitude (Mw). The Mj is calculated using the amplitude of the horizontal component of ground displacement recorded by seismometers with a natural period of T0=5 s, following Katsumata et al. (2004). A typical example of such a discrepancy in estimating Mj was the overestimation of the 2000 Western Tottori earthquake (Mj=7.3, Mw=6.7; hereafter referred to as event T). In this study, we examined the discrepancy between Mj and Mw for recent large earthquakes occurring in Japan. We found that most earthquakes with larger Mj (>Mw) occur in western Japan, while earthquakes in northern Japan show reasonable Mj (=Mw). To understand the cause of such larger Mj for western Japan earthquakes, we examined the strong motion records from the K-NET and KiK-net networks for event T and other earthquakes for reference. The observed ground displacement record from event T shows a distinctive Love wave packet in tangential motion with a dominant period of about T=5 s, which propagates long distances without showing strong dispersion. On the other hand, the ground motions from earthquakes in northeastern Japan do not have such a surface wave packet, and attenuation of ground motion is significant. Therefore, the overestimation of Mj for earthquakes in western Japan may be attributed to the efficient generation and propagation of Love waves, probably related to the crustal structure of western Japan. To explain this, we then conducted a numerical simulation of seismic wave propagation using a 3D sedimentary layer model (JIVSM; Koketsu et al., 2012) and the source model of event T. The result demonstrated the efficient generation of Love waves from the shallow strike-slip source, which propagate long distances in western Japan without significant dispersion. On the other hand, the generation of surface waves was not so efficient when using a

  3. Proteoglycan and proteome profiling of central human pulmonary fibrotic tissue utilizing miniaturized sample preparation

    DEFF Research Database (Denmark)

    Malmström, Johan; Larsen, Kristoffer; Hansson, Lennart

    2002-01-01

    -dimensional electrophoresis was interfaced to miniaturized sample preparation techniques using microcapillary extraction. Four protein groups were identified; cytoskeletal, adhesion, scavenger and metabolic proteins. These patient's proteomes showed a high degree of heterogeneity between patients but larger homogeneity...

  4. [Real-time quantification to analyze historical Colombian samples detecting a short fragment of hypervariable region II of mitochondrial DNA].

    Science.gov (United States)

    Pérez, Luz Adriana; Rodríguez, Freddy; Langebaek, Carl Henrik; Groot, Helena

    2016-09-01

    Unlike other molecular biology studies, the analysis of ancient DNA (aDNA) requires special infrastructure and methodological conditions to guarantee the quality of the results. One of the main authenticity criteria is DNA quantification, where quantitative real-time PCR is often used given its sensitivity and specificity. Nevertheless, the implementation of these conditions and methodologies to fulfill authenticity criteria imply higher costs. Objective: To develop a simple and less costly method for mitochondrial DNA quantification suitable for highly degraded samples. Materials and methods: The proposed method is based on the use of mini-primers for the specific amplification of short fragments of mitochondrial DNA. The subsequent purification of these amplified fragments allows a standard curve to be constructed with concentrations in accordance to the state of degradation of the samples. Results: The proposed method successfully detected DNA from ancient samples including bone remains and mummified tissue. DNA inhibitory substances were also detected. Conclusion: The proposed method represents a simpler and cost-effective way to detect low amounts of aDNA, and a tool to differentiate DNA-free samples from samples with inhibitory substances.
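The quantification step described above rests on the standard qPCR relationship: Ct falls linearly with log10 of template concentration, so a dilution-series standard curve can be inverted to quantify unknowns. A stdlib-only sketch with invented dilution-series numbers (the slope near -3.3 per decade is typical of efficient PCR, not data from this study):

```python
from statistics import mean

# Invented dilution series: Ct rises as log10(concentration) falls.
log10_conc = [5.0, 4.0, 3.0, 2.0, 1.0]     # log10 copies per reaction
ct_values  = [18.1, 21.4, 24.8, 28.2, 31.5]

# Ordinary least-squares fit of Ct against log10 concentration.
mx, my = mean(log10_conc), mean(ct_values)
slope = (sum((x - mx) * (y - my) for x, y in zip(log10_conc, ct_values))
         / sum((x - mx) ** 2 for x in log10_conc))
intercept = my - slope * mx

def quantify(ct):
    """Estimate copies/reaction for an unknown sample from its Ct."""
    return 10 ** ((ct - intercept) / slope)

copies = quantify(26.0)  # an unknown sample's Ct, mapped back to copies
```

Building the curve from purified amplified fragments, as the authors do, lets the standard concentrations be matched to the degradation state of the samples.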

  5. The erection of larger windmills in the open countryside - an investigation of the visual effects

    International Nuclear Information System (INIS)

    1996-12-01

    The future use of larger windmills will result in new visual effects. The investigation points out that these effects will be dependent on the main characteristics of the landscape. Windmills with a height of 90 m will be taller than any other element found in the landscape with the exception of some chimneys, masts, etc. It is shown that very tall windmills should not be set up in large dominating groups, that it is important that the towers are slender and that the blades rotate slowly (in order to give a more peaceful effect), if the landscape should not be spoiled. Large windmills dominate an area of 1 - 3 kilometers, but at a distance of 10 - 12 km they can appear to fade away between woods and large buildings etc. Naturally, large windmills will be prominent on heaths and moors, and would not be welcome where there are buildings of cultural interest or where the landscape is under conservation. They could, it is stated, be placed amongst a group of smaller windmills, as this would help to lessen their dominance, but should not be positioned where one type of landscape merges into another, as here they would show up more. Local boundaries should also be taken into consideration. When planning where to locate windmills the overall visual effect over larger areas should be contemplated in addition to the preservation of views of buildings etc. of historical interest. Photographs should be taken of proposed sites so that paper models can be placed so as to produce an idea of the visual effects of erecting larger windmills in various positions in specified areas

  6. Individual and pen-based oral fluid sampling: A welfare-friendly sampling method for group-housed gestating sows.

    Science.gov (United States)

    Pol, Françoise; Dorenlor, Virginie; Eono, Florent; Eudier, Solveig; Eveno, Eric; Liégard-Vanhecke, Dorine; Rose, Nicolas; Fablet, Christelle

    2017-11-01

    The aims of this study were to assess the feasibility of individual and pen-based oral fluid sampling (OFS) in 35 pig herds with group-housed sows, compare these methods to blood sampling, and assess the factors influencing the success of sampling. Individual samples were collected from at least 30 sows per herd. Pen-based OFS was performed using devices placed in at least three pens for 45min. Information related to the farm, the sows, and their living conditions were collected. Factors significantly associated with the duration of sampling and the chewing behaviour of sows were identified by logistic regression. Individual OFS took 2min 42s on average; the type of floor, swab size, and operator were associated with a sampling time >2min. Pen-based OFS was obtained from 112 devices (62.2%). The type of floor, parity, pen-level activity, and type of feeding were associated with chewing behaviour. Pen activity was associated with the latency to interact with the device. The type of floor, gestation stage, parity, group size, and latency to interact with the device were associated with a chewing time >10min. After 15, 30 and 45min of pen-based OFS, 48%, 60% and 65% of the sows were lying down, respectively. The time spent after the beginning of sampling, genetic type, and time elapsed since the last meal were associated with 50% of the sows lying down at one time point. The mean time to blood sample the sows was 1min 16s and 2min 52s if the number of operators required was considered in the sampling time estimation. The genetic type, parity, and type of floor were significantly associated with a sampling time higher than 1min 30s. This study shows that individual OFS is easy to perform in group-housed sows by a single operator, even though straw-bedded animals take longer to sample than animals housed on slatted floors, and suggests some guidelines to optimise pen-based OFS success. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Sampling strategies in antimicrobial resistance monitoring: evaluating how precision and sensitivity vary with the number of animals sampled per farm.

    Directory of Open Access Journals (Sweden)

    Takehisa Yamamoto

    Full Text Available Because antimicrobial resistance in food-producing animals is a major public health concern, many countries have implemented antimicrobial monitoring systems at a national level. When designing a sampling scheme for antimicrobial resistance monitoring, it is necessary to consider both cost effectiveness and statistical plausibility. In this study, we examined how sampling scheme precision and sensitivity can vary with the number of animals sampled from each farm, while keeping the overall sample size constant to avoid additional sampling costs. Five sampling strategies were investigated. These employed 1, 2, 3, 4 or 6 animal samples per farm, with a total of 12 animals sampled in each strategy. A total of 1,500 Escherichia coli isolates from 300 fattening pigs on 30 farms were tested for resistance against 12 antimicrobials. The performance of each sampling strategy was evaluated by bootstrap resampling from the observational data. In the bootstrapping procedure, farms, animals, and isolates were selected randomly with replacement, and a total of 10,000 replications were conducted. For each antimicrobial, we observed that the standard deviation and 2.5-97.5 percentile interval of resistance prevalence were smallest in the sampling strategy that employed 1 animal per farm. The proportion of bootstrap samples that included at least 1 isolate with resistance was also evaluated as an indicator of the sensitivity of the sampling strategy to previously unidentified antimicrobial resistance. The proportion was greatest with 1 sample per farm and decreased with larger samples per farm. We concluded that when the total number of samples is pre-specified, the most precise and sensitive sampling strategy involves collecting 1 sample per farm.
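The evaluation above is a hierarchical bootstrap: farms are resampled with replacement, then animals within each chosen farm. A simplified sketch with invented data (30 farms, a true resistance prevalence of about 20%; the study additionally resampled isolates and held the total sample at 12 per strategy):

```python
import random

random.seed(0)

# Invented data: 30 farms, each with 12 animal-level resistance indicators
# (True = resistant isolate found).
farms = {f: [random.random() < 0.2 for _ in range(12)] for f in range(30)}

def bootstrap_prevalence(farms, animals_per_farm, reps=1000):
    """Resample farms, then animals within farms, with replacement."""
    farm_ids = list(farms)
    estimates = []
    for _ in range(reps):
        picked = []
        for f in random.choices(farm_ids, k=len(farm_ids)):
            picked += random.choices(farms[f], k=animals_per_farm)
        estimates.append(sum(picked) / len(picked))
    return estimates

est = bootstrap_prevalence(farms, animals_per_farm=1)
```

Comparing the spread of `est` across different `animals_per_farm` settings (at fixed total sample size) is how the study ranks the precision of the candidate strategies.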

  8. Design and development of a highly sensitive, field portable plasma source instrument for on-line liquid stream monitoring and real-time sample analysis

    International Nuclear Information System (INIS)

    Duan, Yixiang; Su, Yongxuan; Jin, Zhe; Abeln, Stephen P.

    2000-01-01

The development of a highly sensitive, field portable, low-powered instrument for on-site, real-time liquid waste stream monitoring is described in this article. A series of factors such as system sensitivity and portability, plasma source, sample introduction, desolvation system, power supply, and the instrument configuration, were carefully considered in the design of the portable instrument. A newly designed, miniature, modified microwave plasma source was selected as the emission source for spectroscopy measurement, and an integrated small spectrometer with a charge-coupled device detector was installed for signal processing and detection. An innovative beam collection system with optical fibers was designed and used for emission signal collection. Microwave plasma can be sustained with various gases at relatively low power, and it possesses high detection capabilities for both metal and nonmetal pollutants, making it desirable to use for on-site, real-time, liquid waste stream monitoring. An effective in situ sampling system was coupled with a high efficiency desolvation device for direct-sampling liquid samples into the plasma. A portable computer control system is used for data processing. The new, integrated instrument can be easily used for on-site, real-time monitoring in the field. The system possesses a series of advantages, including high sensitivity for metal and nonmetal elements; in situ sampling; compact structure; low cost; and ease of operation and handling. These advantages will significantly overcome the limitations of previous monitoring techniques and make great contributions to environmental restoration and monitoring.

  9. Is it acceptable to use coagulation plasma samples stored at room temperature and 4°C for 24 hours for additional prothrombin time, activated partial thromboplastin time, fibrinogen, antithrombin, and D-dimer testing?

    Science.gov (United States)

    Rimac, V; Coen Herak, D

    2017-10-01

Coagulation laboratories are faced on a daily basis with requests for additional testing in already analyzed fresh plasma samples. This prompted us to examine whether plasma samples stored at room temperature (RT) and 4°C for 24 hours can be accepted for additional prothrombin time (PT), activated partial thromboplastin time (aPTT), fibrinogen (Fbg), antithrombin (AT), and D-dimer testing. We measured PT, aPTT, Fbg in 50 and AT in 30 plasma samples with normal and pathological values, within 4 hours of blood collection (baseline results) and after 24-hour storage at RT (primary tubes) and 4°C (aliquots). D-dimer stability was investigated in 20 samples stored in primary tubes at 4°C. No statistically significant difference between baseline results and results in samples stored at RT and 4°C was observed for PT (P=.938), aPTT (P=.186), Fbg (P=.962), AT (P=.713), and D-dimers (P=.169). The highest median percentage changes were found for aPTT, being more pronounced for samples stored at 4°C (13.0%) than at RT (8.7%). Plasma samples stored both at RT and 4°C for 24 hours are acceptable for additional PT, Fbg, and AT testing. Plasma samples stored 24 hours in primary tubes at 4°C are suitable for D-dimer testing. © 2017 John Wiley & Sons Ltd.
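The stability criterion behind figures like the 13.0% aPTT shift is the percentage change of each stored-sample result from its paired baseline. A small sketch of that computation (the paired aPTT values below are invented, not the study's data):

```python
from statistics import median

# Invented paired aPTT values (seconds): fresh vs. after 24 h at 4 °C.
baseline = [28.0, 31.5, 35.2, 29.8, 40.1]
stored   = [30.1, 34.6, 38.9, 32.2, 44.8]

# Per-sample percentage change from baseline, summarized by the median.
pct_change = [100.0 * (s - b) / b for b, s in zip(baseline, stored)]
median_change = median(pct_change)
```

Comparing this median change against an analyte-specific allowable-bias limit is the usual way such storage studies decide whether stored samples remain acceptable.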

  10. Testing of Small Graphite Samples for Nuclear Qualification

    Energy Technology Data Exchange (ETDEWEB)

    Julie Chapman

    2010-11-01

Accurately determining the mechanical properties of small irradiated samples is crucial to predicting the behavior of the overall irradiated graphite components within a Very High Temperature Reactor. The sample size allowed in a material test reactor, however, is limited, and this poses some difficulties with respect to mechanical testing. In the case of graphite with a larger grain size, a small sample may exhibit characteristics not representative of the bulk material, leading to inaccuracies in the data. A study to determine a potential size effect on the tensile strength was pursued under the Next Generation Nuclear Plant program. It focuses first on optimizing the tensile testing procedure identified in the American Society for Testing and Materials (ASTM) Standard C 781-08. Once the testing procedure was verified, a size effect was assessed by gradually reducing the diameter of the specimens. By monitoring the material response, a size effect was successfully identified.
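One common way to reason about such size effects in brittle materials, not necessarily the model used in this report, is Weibull weakest-link scaling, under which expected strength rises as the stressed volume shrinks. A sketch with assumed values:

```python
def weibull_scaled_strength(sigma_ref, v_ref, v_new, m):
    """Mean strength at volume v_new, given strength sigma_ref at volume
    v_ref and Weibull modulus m (all numbers here are assumed values)."""
    return sigma_ref * (v_ref / v_new) ** (1.0 / m)

# Halving the stressed volume raises expected strength by ~7% for m = 10:
s_small = weibull_scaled_strength(sigma_ref=20.0, v_ref=1.0, v_new=0.5, m=10)
```

Under this model, smaller specimens systematically overestimate bulk strength, which is exactly the kind of bias the diameter-reduction study is designed to detect.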

  11. BACTERIAL PROFILES FOR CHRONIC AND AGGRESSIVE PERIODONTITIS IN A SAMPLE POPULATION GROUP. A CROSS-SECTIONAL STUDY

    Directory of Open Access Journals (Sweden)

    Alexandra-Cornelia TEODORESCU

    2017-06-01

Full Text Available Aim. The study aims at determining some possible significant differences in the subgingival microbial profiles of patients with generalized chronic periodontitis (GCP) and generalized aggressive periodontitis (GAP), as a tool to help with differential diagnosis. Materials and methods. 20 subgingival fluid samples (10 from GAP and 10 from GCP patients) were subjected to a Real-Time Polymerase Chain Reaction technique in order to determine the prevalence and the counts of 9 periodontal pathogens (Aggregatibacter actinomycetemcomitans, Porphyromonas gingivalis, Treponema denticola, Tannerella forsythia, Prevotella intermedia, Peptostreptococcus micros, Fusobacterium nucleatum, Eubacterium nodatum and Capnocytophaga gingivalis). Results and discussion. Fusobacterium nucleatum was significantly correlated with the aggressive periodontitis group, but no significant differences were found for the other 8 periodontal bacteria. Conclusions. The prevalence or count of some periodontal pathogens could help clinicians make an easier differential diagnosis between GCP and GAP; however, further studies, conducted on larger population samples, are still needed.

  12. Gabor's expansion and the Zak transform for continuous-time and discrete-time signals : critical sampling and rational oversampling

    NARCIS (Netherlands)

    Bastiaans, M.J.

    1995-01-01

    Gabor's expansion of a signal into a discrete set of shifted and modulated versions of an elementary signal is introduced and its relation to sampling of the sliding-window spectrum is shown. It is shown how Gabor's expansion coefficients can be found as samples of the sliding-window spectrum, where

  13. Spike timing precision of neuronal circuits.

    Science.gov (United States)

    Kilinc, Deniz; Demir, Alper

    2018-04-17

    Spike timing is believed to be a key factor in sensory information encoding and computations performed by the neurons and neuronal circuits. However, the considerable noise and variability, arising from the inherently stochastic mechanisms that exist in the neurons and the synapses, degrade spike timing precision. Computational modeling can help decipher the mechanisms utilized by the neuronal circuits in order to regulate timing precision. In this paper, we utilize semi-analytical techniques, which were adapted from previously developed methods for electronic circuits, for the stochastic characterization of neuronal circuits. These techniques, which are orders of magnitude faster than traditional Monte Carlo type simulations, can be used to directly compute the spike timing jitter variance, power spectral densities, correlation functions, and other stochastic characterizations of neuronal circuit operation. We consider three distinct neuronal circuit motifs: Feedback inhibition, synaptic integration, and synaptic coupling. First, we show that both the spike timing precision and the energy efficiency of a spiking neuron are improved with feedback inhibition. We unveil the underlying mechanism through which this is achieved. Then, we demonstrate that a neuron can improve on the timing precision of its synaptic inputs, coming from multiple sources, via synaptic integration: The phase of the output spikes of the integrator neuron has the same variance as that of the sample average of the phases of its inputs. Finally, we reveal that weak synaptic coupling among neurons, in a fully connected network, enables them to behave like a single neuron with a larger membrane area, resulting in an improvement in the timing precision through cooperation.
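The synaptic-integration result above is, at its core, the statistics of averaging: the variance of the mean of n independent inputs is 1/n times the variance of a single input. A standalone numeric check of that claim (a statistics sketch, not a neuron model):

```python
import random
from statistics import variance

random.seed(2)
n_inputs, trials = 10, 20000

# Phase jitter of a single input vs. the average of n_inputs inputs.
single = [random.gauss(0.0, 1.0) for _ in range(trials)]
averaged = [sum(random.gauss(0.0, 1.0) for _ in range(n_inputs)) / n_inputs
            for _ in range(trials)]

ratio = variance(single) / variance(averaged)  # expected to be close to 10
```

The paper's contribution is showing that the integrator neuron actually achieves this averaging bound for the phase of its output spikes, via semi-analytical methods far faster than such Monte Carlo runs.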

  14. Sampled-data-based vibration control for structural systems with finite-time state constraint and sensor outage.

    Science.gov (United States)

    Weng, Falu; Liu, Mingxin; Mao, Weijie; Ding, Yuanchun; Liu, Feifei

    2018-05-10

The problem of sampled-data-based vibration control for structural systems with finite-time state constraint and sensor outage is investigated in this paper. The objective of designing controllers is to guarantee the stability and anti-disturbance performance of the closed-loop systems while some sensor outages happen. Firstly, based on matrix transformation, the state-space model of structural systems with sensor outages and uncertainties appearing in the mass, damping and stiffness matrices is established. Secondly, considering that most earthquakes and strong winds happen in a very short time, and that it is often the peak values that damage the structures, the finite-time stability analysis method is introduced to constrain the state responses in a given time interval, and the H-infinity stability is adopted in the controller design to make sure that the closed-loop system has a prescribed level of disturbance attenuation performance during the whole control process. Furthermore, all stabilization conditions are expressed in the forms of linear matrix inequalities (LMIs), whose feasibility can be easily checked by using the LMI Toolbox. Finally, numerical examples are given to demonstrate the effectiveness of the proposed theorems. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  15. Variations of the petrophysical properties of rocks with increasing hydrocarbons content and their implications at larger scale: insights from the Majella reservoir (Italy)

    Science.gov (United States)

    Trippetta, Fabio; Ruggieri, Roberta; Lipparini, Lorenzo

    2016-04-01

porosity. Preliminary data also suggest a different behaviour at increasing confining pressure for clean and oil-bearing samples: almost perfectly elastic behaviour for oil-bearing samples and more inelastic behaviour for cleaner samples. Thus HC presence appears to counteract the increase of confining pressure, acting as a semi-fluid, reducing the rock's inelastic compaction and enhancing its elastic behaviour. Trying to upscale our rock-physics results, we started from well and laboratory data on stratigraphy, porosity and Vp in order to simulate the effect of HC presence at larger scale, using Petrel® software. The developed synthetic model highlights that Vp, which is primarily controlled by porosity, changes significantly within oil-bearing portions, with a notable impact on the velocity model that should be adopted. Moreover, we are currently performing laboratory tests in order to evaluate the changes in the elastic parameters, with the aim of modelling the effects of the HC on the mechanical behaviour of the involved rocks at larger scale.

  16. Optical properties of tin oxide nanoparticles prepared by laser ablation in water: Influence of laser ablation time duration and laser fluence

    International Nuclear Information System (INIS)

    Desarkar, Himadri Sankar; Kumbhakar, P.; Mitra, A.K.

    2012-01-01

Colloidal tin oxide nanoparticles are prepared by laser ablation (wavelength 1064 nm) of a tin metallic target immersed in pure deionized water. The influences of laser ablation time and laser fluence on the size and optical properties of the synthesized nanoparticles are studied. The prepared tin oxide nanoparticles are characterized by transmission electron microscopy, selected area electron diffraction and UV–Visible absorption spectroscopy. The morphology of the prepared tin oxide nanoparticles is found to be mostly spherical, with sizes in the nanometric range (mean radius of 3.2 to 7.3 nm). The measured UV–Visible absorption spectra show absorption peaks in the ultraviolet region. The band gap energy of samples prepared with different laser ablation durations is found to increase with decreasing size (radius) of the prepared nanoparticles. Photoluminescence emission measurements at room temperature show that all the samples exhibit photoluminescence in the visible region. The peak photoluminescence emission intensity of the sample prepared with 50 min of laser ablation is 3.5 times larger than that of the sample prepared with 10 min of laser ablation. - Highlights: ► SnO2 nanoparticles (6.4–14.6 nm) are prepared by the laser ablation in liquid technique. ► The influences of laser ablation time and laser fluence are studied. ► Samples are characterized by TEM and UV–Visible absorption spectroscopy. ► UV–Visible absorption spectra exhibit the quantum confinement effect. ► Samples exhibit enhanced photoluminescence emissions in the visible region.
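The reported increase of band gap with decreasing radius is a quantum-confinement effect. A minimal effective-mass (Brus-type) sketch reproduces the trend; the bulk gap and effective masses below are illustrative textbook-style assumptions for SnO2, not values from the paper, and the exciton Coulomb term is neglected:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J s
M0 = 9.1093837015e-31    # free-electron mass, kg
EV = 1.602176634e-19     # J per eV

def confined_gap_ev(radius_nm, eg_bulk_ev=3.6, me=0.275, mh=1.0):
    # Particle-in-a-sphere confinement shift added to the bulk gap.
    # eg_bulk_ev, me and mh are assumed illustrative parameters.
    r = radius_nm * 1e-9
    shift_j = (HBAR ** 2 * math.pi ** 2) / (2.0 * r ** 2) \
        * (1.0 / (me * M0) + 1.0 / (mh * M0))
    return eg_bulk_ev + shift_j / EV

for r in (3.2, 5.0, 7.3):  # mean radii reported in the abstract
    print(r, round(confined_gap_ev(r), 3))
```

The smallest particles show the largest blue-shift of the gap, consistent with the size dependence described in the abstract.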

  17. Purchasing innovations in the construction sector in the Netherlands : a comparison between SMEs and larger companies

    NARCIS (Netherlands)

    de Rijk, Melissa

    2015-01-01

Poster presentation at the KCO Entrepreneurship Afternoon (Ondernemerschapsmiddag KCO), held on 16 November 2015. Main research question: To what extent does the purchasing activity for incremental and radical innovations by SMEs differ from that of larger companies in the construction sector in the Netherlands?

  18. Theory of sampling and its application in tissue based diagnosis

    Directory of Open Access Journals (Sweden)

    Kayser Gian

    2009-02-01

Full Text Available Abstract Background A general theory of sampling and its application in tissue based diagnosis is presented. Sampling is defined as the extraction of information from certain limited spaces and its transformation into a statement or measure that is valid for the entire (reference) space. The procedure should be reproducible in time and space, i.e. give the same results when applied under similar circumstances. Sampling includes two different aspects, the procedure of sample selection and the efficiency of its performance. The practical performance of sample selection focuses on the search for the localization of specific compartments within the basic space, and the search for the presence of specific compartments. Methods When a sampling procedure is applied in diagnostic processes two different procedures can be distinguished: (I) the evaluation of the diagnostic significance of a certain object, which is the probability that the object can be grouped into a certain diagnosis, and (II) the probability to detect these basic units. Sampling can be performed without or with external knowledge, such as the size of the searched objects, neighbourhood conditions, spatial distribution of objects, etc. If the sample size is much larger than the object size, the application of a translation invariant transformation results in Krige's formula, which is widely used in the search for ores. Usually, sampling is performed in a series of area (space) selections of identical size. The size can be defined in relation to the reference space or according to interspatial relationships. The first method is called random sampling, the second stratified sampling. Results Random sampling does not require knowledge about the reference space, and is used to estimate the number and size of objects. Estimated features include area (volume) fraction, numerical, boundary and surface densities. Stratified sampling requires the knowledge of objects (and their features) and evaluates spatial features in relation to
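The random-versus-stratified distinction described above can be made concrete with a small Monte Carlo sketch: both designs estimate the area fraction occupied by "objects" in a unit square, with stratification placing one jittered point per grid cell. The geometry is invented for illustration and is not from the article:

```python
import random

random.seed(1)

def in_object(x, y):
    # Synthetic "tissue": the objects occupy a horizontal band
    # covering exactly 30% of the unit square.
    return y < 0.3

def random_sampling(n):
    # Independent uniform points over the whole reference space.
    hits = sum(in_object(random.random(), random.random()) for _ in range(n))
    return hits / n

def stratified_sampling(n_per_side):
    # One jittered point per grid cell: sample sites are defined in
    # relation to the reference space, as in stratified designs.
    hits = 0
    for i in range(n_per_side):
        for j in range(n_per_side):
            x = (i + random.random()) / n_per_side
            y = (j + random.random()) / n_per_side
            hits += in_object(x, y)
    return hits / n_per_side ** 2

est_random = random_sampling(400)         # ~0.3, with binomial scatter
est_stratified = stratified_sampling(20)  # exactly 0.3 for this geometry
print(est_random, est_stratified)
```

Both estimators are unbiased, but the stratified design has far lower variance whenever the object layout varies smoothly relative to the grid, which is the practical argument for stratified sampling when prior knowledge of the objects is available.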

  19. Interlaboratory study of DNA extraction from multiple ground samples, multiplex real-time PCR, and multiplex qualitative PCR for individual kernel detection system of genetically modified maize.

    Science.gov (United States)

    Akiyama, Hiroshi; Sakata, Kozue; Makiyma, Daiki; Nakamura, Kosuke; Teshima, Reiko; Nakashima, Akie; Ogawa, Asako; Yamagishi, Toru; Futo, Satoshi; Oguchi, Taichi; Mano, Junichi; Kitta, Kazumi

    2011-01-01

In many countries, the labeling of grains, feed, and foodstuff is mandatory if the genetically modified (GM) organism content exceeds a certain level of approved GM varieties. We previously developed an individual kernel detection system consisting of grinding individual kernels, DNA extraction from the individually ground kernels, GM detection using multiplex real-time PCR, and GM event detection using multiplex qualitative PCR, to analyze the precise commingling level and varieties of GM maize in real grain samples. We performed an interlaboratory study of the DNA extraction with multiple ground samples, multiplex real-time PCR detection, and multiplex qualitative PCR detection to evaluate the applicability, practicality, and ruggedness of the individual kernel detection system for GM maize. DNA extraction with multiple ground samples, multiplex real-time PCR, and multiplex qualitative PCR were evaluated by five laboratories in Japan, and all results from these laboratories were consistent with the expected results in terms of the commingling level and event analysis. Thus, DNA extraction with multiple ground samples, multiplex real-time PCR, and multiplex qualitative PCR for the individual kernel detection system are applicable and practicable in a laboratory to regulate the commingling level of GM maize grain, including stacked GM maize.
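Kernel-level testing ultimately yields a count of GM-positive kernels, from which a commingling level and its sampling uncertainty follow from binomial statistics. This is a generic sketch with a Wald-type interval and invented counts, not the statistical method of the study:

```python
import math

def commingling_estimate(gm_kernels, total_kernels, z=1.96):
    # Point estimate and approximate (Wald) 95% CI for the
    # GM commingling level of a tested lot.
    p = gm_kernels / total_kernels
    half = z * math.sqrt(p * (1.0 - p) / total_kernels)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical lot: 3 GM-positive kernels out of 90 tested.
p, lo, hi = commingling_estimate(3, 90)
print(round(p, 4), round(lo, 4), round(hi, 4))
```

For small counts like these, an exact (Clopper-Pearson) interval would be preferable in a regulatory setting; the Wald form is shown only for brevity.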

  20. Human resource management and career planning in a larger library

    Directory of Open Access Journals (Sweden)

    Jelka Gazvoda

    1997-01-01

Full Text Available Human resource management is presented as a managerial function used to develop the potential abilities of employees in order to achieve organizational goals. A different perception of the employees is essential: people working in the organization are treated as capital, not as an expenditure. In human resource management the most important view of the employees concerns their potential growth and professional development, training for acquiring new responsibilities, and encouragement for innovation. Library management is becoming more and more complex as a result of the introduction of new technologies. For this reason libraries need well-trained people with the potential to modernize library performance and to overcome the conflict between the traditional organizational culture and the requirements of the modern, technologically developed environment. The author presents different techniques of active human resource management that can be used in larger libraries, where there are enough employees to realize different programmes. These are programmes for education, staffing, career planning, stimulation and reward systems, job redefinition and enrichment, and other forms of internal segmentation.

  1. Use of a large time-compensated scintillation detector in neutron time-of-flight measurements

    International Nuclear Information System (INIS)

    Goodman, C.D.

    1979-01-01

A scintillator for neutron time-of-flight measurements is positioned at a desired angle with respect to the neutron beam, and as a function of the energy thereof, such that the sum of the transit times of the neutrons and photons in the scintillator is substantially independent of the points of scintillation within the scintillator. Extrapolated-zero timing is employed rather than the usual constant-fraction timing. As a result, a substantially larger scintillator can be employed, which substantially increases the data rate and shortens the experiment time. 3 claims

  2. Critical points of DNA quantification by real-time PCR--effects of DNA extraction method and sample matrix on quantification of genetically modified organisms.

    Science.gov (United States)

    Cankar, Katarina; Stebih, Dejan; Dreo, Tanja; Zel, Jana; Gruden, Kristina

    2006-08-14

    Real-time PCR is the technique of choice for nucleic acid quantification. In the field of detection of genetically modified organisms (GMOs) quantification of biotech products may be required to fulfil legislative requirements. However, successful quantification depends crucially on the quality of the sample DNA analyzed. Methods for GMO detection are generally validated on certified reference materials that are in the form of powdered grain material, while detection in routine laboratories must be performed on a wide variety of sample matrixes. Due to food processing, the DNA in sample matrixes can be present in low amounts and also degraded. In addition, molecules of plant origin or from other sources that affect PCR amplification of samples will influence the reliability of the quantification. Further, the wide variety of sample matrixes presents a challenge for detection laboratories. The extraction method must ensure high yield and quality of the DNA obtained and must be carefully selected, since even components of DNA extraction solutions can influence PCR reactions. GMO quantification is based on a standard curve, therefore similarity of PCR efficiency for the sample and standard reference material is a prerequisite for exact quantification. Little information on the performance of real-time PCR on samples of different matrixes is available. Five commonly used DNA extraction techniques were compared and their suitability for quantitative analysis was assessed. The effect of sample matrix on nucleic acid quantification was assessed by comparing 4 maize and 4 soybean matrixes. In addition 205 maize and soybean samples from routine analysis were analyzed for PCR efficiency to assess variability of PCR performance within each sample matrix. Together with the amount of DNA needed for reliable quantification, PCR efficiency is the crucial parameter determining the reliability of quantitative results, therefore it was chosen as the primary criterion by which to
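Standard-curve quantification of the kind discussed above rests on two textbook relations: the amplification efficiency follows from the slope of the Cq-versus-log(copies) regression as E = 10^(-1/slope) - 1, and unknown samples are quantified by inverting that line. The dilution-series numbers below are invented for illustration:

```python
def fit_standard_curve(log10_copies, cq_values):
    # Ordinary least squares for Cq = slope * log10(copies) + intercept.
    n = len(log10_copies)
    mx = sum(log10_copies) / n
    my = sum(cq_values) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_copies, cq_values))
    sxx = sum((x - mx) ** 2 for x in log10_copies)
    slope = sxy / sxx
    return slope, my - slope * mx

def amplification_efficiency(slope):
    # E = 10**(-1/slope) - 1; an ideal assay (slope about -3.32) gives E near 1.
    return 10.0 ** (-1.0 / slope) - 1.0

def quantify(cq, slope, intercept):
    # Invert the standard curve to estimate copy number from a sample Cq.
    return 10.0 ** ((cq - intercept) / slope)

# Illustrative dilution series (not real assay data).
logs = [1.0, 2.0, 3.0, 4.0, 5.0]
cqs = [33.4, 30.1, 26.8, 23.5, 20.2]
slope, intercept = fit_standard_curve(logs, cqs)
print(round(slope, 2), round(amplification_efficiency(slope), 3),
      round(quantify(25.0, slope, intercept)))
```

The article's point is precisely that this inversion is only valid when the sample amplifies with the same efficiency as the reference material; a matrix-induced efficiency shift biases the estimated copy number.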

  3. Assessment of real-time PCR method for detection of EGFR mutation using both supernatant and cell pellet of malignant pleural effusion samples from non-small-cell lung cancer patients.

    Science.gov (United States)

    Shin, Saeam; Kim, Juwon; Kim, Yoonjung; Cho, Sun-Mi; Lee, Kyung-A

    2017-10-26

EGFR mutation is an emerging biomarker for treatment selection in non-small-cell lung cancer (NSCLC) patients. However, optimal mutation detection is hindered by complications associated with the biopsy procedure, tumor heterogeneity and the limited sensitivity of test methodology. In this study, we evaluated the diagnostic utility of real-time PCR using malignant pleural effusion samples. A total of 77 pleural fluid samples from 77 NSCLC patients were tested using the cobas EGFR mutation test (Roche Molecular Systems). Pleural fluid was centrifuged, and separated cell pellets and supernatants were tested in parallel. Results were compared with Sanger sequencing and/or peptide nucleic acid (PNA)-mediated PCR clamping of matched tumor tissue or pleural fluid samples. All samples showed valid real-time PCR results in one or more DNA samples extracted from cell pellets and supernatants. Compared with other molecular methods, the sensitivity of the real-time PCR method was 100%. The concordance rate of real-time PCR and Sanger sequencing plus PNA-mediated PCR clamping was 98.7%. We have confirmed that real-time PCR using pleural fluid had a high concordance rate compared to conventional methods, with no failed samples. Our data demonstrated that parallel real-time PCR testing using supernatant and cell pellet could offer a reliable and robust surrogate strategy when tissue is not available.

  4. Baltimore PM2.5 Supersite: highly time-resolved organic compounds--sampling duration and phase distribution--implications for health effects studies.

    Science.gov (United States)

    Rogge, Wolfgang F; Ondov, John M; Bernardo-Bricker, Anna; Sevimoglu, Orhan

    2011-12-01

As part of the Baltimore PM2.5 Supersite study, intensive three-hourly continuous PM2.5 sampling was conducted for nearly 4 weeks in the summer of 2002 and again in the winter of 2002/2003. Close to 120 individual organic compounds have been quantified separately in filter and polyurethane foam (PUF) plug pairs for 17 days in each sampling period. Here, the focus is on (1) describing briefly the new sampling system, (2) discussing filter/PUF plug breakthrough experiments for semi-volatile compounds, (3) providing insight into the phase distribution of semi-volatile organic species, and (4) discussing the impact of air pollution sampling time on human exposure, with information on maximum 3- and 24-h averaged ambient concentrations of organic pollutants with potentially adverse health effects. The newly developed sampling system consisted of five electronically controlled parallel sampling channels operated in a sequential mode. Semi-volatile breakthrough experiments were conducted in three separate experiments over 3, 4, and 5 h each, using one filter and three PUF plugs. Valuable insight was obtained into the transfer of semi-volatile organic compounds through the sequence of PUF plugs, and a cut-off could be defined for complete sampling of semi-volatile compounds on only one filter/PUF plug pair, i.e., the setup finally used during the seasonal PM2.5 sampling campaign. Accordingly, n-nonadecane (C19), with a vapor pressure (vp) of 3.25 × 10(-4) Torr, is collected to >95% on the filter/PUF pair. Phenanthrene (vp, 6.2 × 10(-5) Torr), the most abundant PAH sampled, was collected completely in wintertime and correlates very well with three-hourly PM2.5 ambient concentrations. Valuable data on the fractional partitioning of semi-volatile organics as a function of season are provided here and can be used to differentiate the human uptake of an organic pollutant of interest via gas- and particle-phase exposure. Health effects studies

  5. Carbon-14 dating of groundwater under Christchurch, 1976 samples

    International Nuclear Information System (INIS)

    Stewart, M.K.; Brenninkmeijer, C.A.M.; Brown, L.J.

    1986-06-01

Four samples of groundwater from deep aquifers under Christchurch have been analysed for carbon-14, tritium, oxygen-18 and chemical contents. Interpretation of the carbon-14 results requires two steps: (1) correction of the measured 14C values for the input of dead (14C-free) carbon underground (indicating that the measured values of 80 PMC should be increased to about 120 PMC), and (2) determination of water residence times for given flow models of the groundwater system. Interpretation of the tritium results involves step 2 only. Three models are considered, of which the third is considered most appropriate to Christchurch. In this model, the 14C and T results indicate that a small proportion of young water (post-1954) mixes with a larger proportion of older water (probably at least several hundred years old). The oxygen-18 content indicates that recharge is mainly from the Waimakariri River and possibly from rainfall and streams near the foothills of the Canterbury Plains. Other aspects of the groundwater flow under Christchurch are discussed
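The two-step interpretation above (dead-carbon correction, then residence time under a flow model) can be sketched numerically. This is a hedged illustration assuming a simple dilution factor and piston-flow ages; the 2/3 "live" fraction below is chosen only so that it reproduces the 80 to 120 PMC correction quoted in the abstract:

```python
import math

MEAN_LIFE_YR = 8267.0  # 14C mean life corresponding to a 5730 yr half-life

def corrected_pmc(measured_pmc, live_fraction):
    # Undo dilution by dead (14C-free) carbon dissolved underground.
    return measured_pmc / live_fraction

def apparent_age_yr(pmc):
    # Piston-flow age; pmc >= 100 implies post-bomb ("modern") water.
    return -MEAN_LIFE_YR * math.log(pmc / 100.0) if pmc < 100.0 else 0.0

# A live fraction of 2/3 reproduces the 80 -> 120 PMC correction
# described in the abstract; a corrected value above 100 PMC signals
# a post-bomb (young) component rather than a measurable age.
print(round(corrected_pmc(80.0, 2.0 / 3.0), 1), apparent_age_yr(120.0))
```

In the mixing model actually preferred for Christchurch, the corrected value above 100 PMC is interpreted not as a single piston-flow age but as a blend of post-1954 water with much older water.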

  6. Extreme Quantum Memory Advantage for Rare-Event Sampling

    Science.gov (United States)

    Aghamohammadi, Cina; Loomis, Samuel P.; Mahoney, John R.; Crutchfield, James P.

    2018-02-01

    We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r , for any large real number r . Then, for a sequence of processes each labeled by an integer size N , we compare how the classical and quantum required memories scale with N . In this setting, since both memories can diverge as N →∞ , the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N →∞ , but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit memory advantage for sampling of almost all of their rare-event classes.

  7. Extreme Quantum Memory Advantage for Rare-Event Sampling

    Directory of Open Access Journals (Sweden)

    Cina Aghamohammadi

    2018-02-01

    Full Text Available We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r, for any large real number r. Then, for a sequence of processes each labeled by an integer size N, we compare how the classical and quantum required memories scale with N. In this setting, since both memories can diverge as N→∞, the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N→∞, but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit memory advantage for sampling of almost all of their rare-event classes.

  8. Larger, Higher-level Academic Institutions in the US Do Not Necessarily Have Better-resourced Library Web Teams. A Review of: Connell, Ruth Sara.

    Directory of Open Access Journals (Sweden)

    Suzanne Lewis

    2008-12-01

Full Text Available Objective – To discover how library Web teams' staffing, backgrounds, tools, and professional development differ among various types of academic libraries. Design – Survey. Setting – Academic libraries in the United States. Subjects – Academic library Web team members. Methods – A systematic sample of every twelfth institution on The Carnegie Classification of Institutions of Higher Education list was used to establish a sample group. A Web search was carried out to identify each institution's library Web site and contact information for the Web site designer or most appropriate alternative person. Institutions were excluded from the sample if they had no Web site at all, had no library Web site, had a Web site that did not mention a library, or had a Spanish language Web site. In September 2006 an e-mail was sent to the contact for each institution in the sample group asking them to participate in an online survey. A follow-up e-mail was sent two weeks later and the survey closed after one month. The survey respondents were asked to identify their institutions so that analysis of the results in relation to the size and type of institution could be carried out. The researchers used a simplified version of the Carnegie classification to sort the responding institutions into five main groups. Main Results – The systematic sample consisted of 288 institutions (a 6.5% sample). The profile of the responding institutions was as follows: associate's colleges (35.5%), baccalaureate colleges (18.2%), master's colleges and universities (20.9%), doctorate-granting universities (9%) and special focus institutions (15.5%). A total of 110 institutions completed the survey, yielding a response rate of 38.19%, although not all respondents answered all the survey questions. The final sample of 110 was 2.5% of the total 4384 institutions on the Carnegie list. Seventy-one per cent of institutions with multiple libraries shared Web teams, with two

  9. Climatologies from satellite measurements: the impact of orbital sampling on the standard error of the mean

    Directory of Open Access Journals (Sweden)

    M. Toohey

    2013-04-01

    Full Text Available Climatologies of atmospheric observations are often produced by binning measurements according to latitude and calculating zonal means. The uncertainty in these climatological means is characterised by the standard error of the mean (SEM. However, the usual estimator of the SEM, i.e., the sample standard deviation divided by the square root of the sample size, holds only for uncorrelated randomly sampled measurements. Measurements of the atmospheric state along a satellite orbit cannot always be considered as independent because (a the time-space interval between two nearest observations is often smaller than the typical scale of variations in the atmospheric state, and (b the regular time-space sampling pattern of a satellite instrument strongly deviates from random sampling. We have developed a numerical experiment where global chemical fields from a chemistry climate model are sampled according to real sampling patterns of satellite-borne instruments. As case studies, the model fields are sampled using sampling patterns of the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS and Atmospheric Chemistry Experiment Fourier-Transform Spectrometer (ACE-FTS satellite instruments. Through an iterative subsampling technique, and by incorporating information on the random errors of the MIPAS and ACE-FTS measurements, we produce empirical estimates of the standard error of monthly mean zonal mean model O3 in 5° latitude bins. We find that generally the classic SEM estimator is a conservative estimate of the SEM, i.e., the empirical SEM is often less than or approximately equal to the classic estimate. 
Exceptions occur only when natural variability is larger than the random measurement error, and specifically in instances where the zonal sampling distribution shows non-uniformity with a similar zonal structure as variations in the sampled field, leading to maximum sensitivity to arbitrary phase shifts between the sample distribution and
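The central caveat above, that the s/sqrt(n) estimator holds only for uncorrelated samples, can be demonstrated with a minimal simulation: for a positively autocorrelated AR(1) process the classic SEM underestimates the true spread of the mean (the structured satellite sampling in the abstract can push the bias the other way). All parameters below are illustrative:

```python
import math
import random

random.seed(0)

def ar1_series(n, phi=0.9):
    # Positively autocorrelated measurements, mimicking samples taken
    # closer together than the scale of variations in the sampled field.
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + random.gauss(0.0, 1.0)
        out.append(x)
    return out

def classic_sem(xs):
    # Sample standard deviation over sqrt(n): valid only for
    # uncorrelated, randomly sampled measurements.
    n = len(xs)
    m = sum(xs) / n
    var = sum((v - m) ** 2 for v in xs) / (n - 1)
    return math.sqrt(var / n)

n = 200
# Empirical SEM: spread of the mean across many independent realizations.
means = [sum(ar1_series(n)) / n for _ in range(500)]
mu = sum(means) / len(means)
empirical_sem = math.sqrt(sum((m - mu) ** 2 for m in means) / (len(means) - 1))
classic = classic_sem(ar1_series(n))
print(classic < empirical_sem)  # classic underestimates under positive correlation
```

This is the mirror image of the paper's subsampling experiment: there, the empirical SEM is estimated from the real orbital sampling pattern and compared against the classic formula.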

  10. Procrastination, Flow, and Academic Performance in Real Time Using the Experience Sampling Method.

    Science.gov (United States)

    Sumaya, Isabel C; Darling, Emily

    2018-01-01

The authors' aim was first to provide an alternative methodology for the assessment of procrastination and flow that would not rely on retrospective or prospective self-reports. Using real-time assessment of both procrastination and flow, the authors investigated how these factors impact academic performance by using the Experience Sampling Method. They assessed flow by measuring student self-reported skill versus challenge, and procrastination by measuring the days to completion of an assignment. Procrastination and flow were measured for six days before a writing assignment due date while students (n = 14) were enrolled in a research methods course. Regardless of flow status, both the nonflow and flow groups showed high levels of procrastination. Students who experienced flow as they worked on their paper, in real time, earned significantly higher grades (M = 3.05 ± 0.30: an average grade of B) as compared with the nonflow group (M = 1.16 ± 0.33: an average grade of D; p = .007). Additionally, students experiencing flow were more accurate in predicting their grade (difference scores, flow M = 0.12 ± 0.33 vs. nonflow M = 1.39 ± 0.29; p = .015). Students in the nonflow group were nearly a grade and a half off in their prediction of their grade on the paper. To the authors' knowledge, the study is the first to provide experimental evidence showing differences in academic performance between students experiencing flow and nonflow students.

  11. A comparison of iron oxide-rich joint coatings and rock chips as geochemical sampling media in exploration for disseminated gold deposits

    Science.gov (United States)

    Crone, W.; Larson, L.T.; Carpenter, R.H.; Chao, T.T.; Sanzolone, R.F.

    1984-01-01

We evaluated the effectiveness of iron oxide-rich fracture coatings as a geochemical sampling medium for disseminated gold deposits, as compared with conventional lithogeochemical methods, for samples from the Pinson mine and Preble prospect in southeastern Humboldt County, Nevada. That disseminated gold mineralization is associated with Hg, As, and Sb is clearly demonstrated in these deposits for both fracture coatings and rock chip samples. However, the relationship is more pronounced for fracture coatings. Fracture coatings at Pinson contain an average of 3.61, 5.13, 14.37, and 3.42 times more Au, As, Sb and Hg, respectively, than adjacent rock samples. At Preble, fracture coatings contain 3.13, 9.72, 9.18, and 1.85 times more Au, As, Sb and Hg, respectively, than do adjacent rock samples. Geochemical anomalies determined from fracture coatings are thus typically more intense than those determined from rock samples for these elements. The sizes of anomalies indicated by fracture coatings are also somewhat larger, but this is less obvious. In both areas, Sb anomalies are more extensive in fracture coatings. At Preble, some Hg and Au anomalies are also more extensive in fracture coatings. In addition to halos formed by the Hg, As and Sb, high values for Au/Ag and Zn/(Fe + Mn) are closely associated with gold mineralization at the Pinson mine. The large enhancement in geochemical response afforded by fracture coatings indicates a definite potential in the search for buried disseminated gold deposits. © 1984.

  12. The Occurrence of Apparent Bilateral Aldosterone Suppression in Adrenal Vein Sampling for Primary Aldosteronism.

    Science.gov (United States)

    Shibayama, Yui; Wada, Norio; Naruse, Mitsuhide; Kurihara, Isao; Ito, Hiroshi; Yoneda, Takashi; Takeda, Yoshiyu; Umakoshi, Hironobu; Tsuiki, Mika; Ichijo, Takamasa; Fukuda, Hisashi; Katabami, Takuyuki; Yoshimoto, Takanobu; Ogawa, Yoshihiro; Kawashima, Junji; Ohno, Yuichi; Sone, Masakatsu; Fujita, Megumi; Takahashi, Katsutoshi; Shibata, Hirotaka; Kamemura, Kohei; Fujii, Yuichi; Yamamoto, Koichi; Suzuki, Tomoko

    2018-05-01

In adrenal venous sampling (AVS) for patients with primary aldosteronism (PA), apparent bilateral aldosterone suppression (ABAS), defined as lower aldosterone/cortisol ratios in the bilateral adrenal veins than in the inferior vena cava, is occasionally encountered. ABAS is uninterpretable with respect to lateralization of excess aldosterone production. We previously reported that ABAS was not a rare phenomenon and was significantly reduced after adrenocorticotropic hormone (ACTH) administration. The aim was to validate the effects of ACTH administration and of adding sampling positions in the left adrenal vein on the prevalence of ABAS in the larger Japan Primary Aldosteronism Study. The data from 1689 patients with PA who underwent AVS between January 2006 and October 2016 were studied. All patients in the previous study, the West Japan Adrenal Vein Sampling study, were excluded. The prevalence of ABAS was investigated at two sampling positions in the left adrenal vein, the central vein and the common trunk, without and with ACTH administration. The prevalence of ABAS with ACTH administration was significantly lower than that without ACTH administration [without ACTH vs with ACTH: 79/440 (18.0%) vs 45/591 (7.6%)]. With ACTH administration, the prevalence was similar regardless of the sampling position, at the central vein or at the common trunk [33/591 (5.6%) vs 32/591 (5.4%); P = 1.00]. The effectiveness of ACTH administration for the reduction of ABAS in AVS, regardless of the sampling position in the left adrenal vein, was confirmed in this larger cohort.

  13. EOCENE LARGER FORAMINIFERAL BIOSTRATIGRAPHY IN THE SOUTHERNMOST DAUPHINOIS DOMAIN (MARITIME ALPS, FRANCE-ITALY BORDER

    Directory of Open Access Journals (Sweden)

    DARIO VARRONE

    2007-07-01

Full Text Available The Trucco Formation and the Nummulitic Limestone (Dauphinois Domain, Maritime Alps) are characterized by abundant larger foraminifera, specifically nummulitids, orthophragminids and encrusting foraminifera. In the Maritime Alps, previous studies suggest a late Lutetian age for the Trucco Formation and a late Lutetian-Priabonian age for the Nummulitic Limestone. Biostratigraphic analysis of the nummulitids in 11 stratigraphic sections allowed us to distinguish 3 biozones. MALF1 Zone: defined by the presence of Nummulites brongniarti d'Archiac & Haime, N. puschi d'Archiac, N. perforatus de Montfort, N. striatus (Bruguière), N. cf. dufrenoyi d'Archiac & Haime, N. variolarius/incrassatus and Operculina schwageri Silvestri. MALF2 Zone: defined by the presence of Nummulites perforatus de Montfort, N. striatus (Bruguière), N. cf. dufrenoyi d'Archiac & Haime, N. variolarius/incrassatus and Operculina schwageri Silvestri. MALF3 Zone: defined by the presence of gr. Nummulites variolarius/incrassatus, N. striatus (Bruguière) and Operculina schwageri Silvestri. According to current larger foraminiferal biozonal schemes, the age of these local biozones corresponds to the Bartonian p.p. Moreover, comparison with biostratigraphic schemes established for the Dauphinois Domain and for the Tethyan area shows that several nummulitid species typical of the late Bartonian are lacking in the southern Dauphinois Domain, probably due to a paleogeographic control.

  14. Time-dependent rheoforging of A6061 aluminum alloy on a mechanical servo press and the effects of forming conditions on homogeneity of rheoforged samples

    Directory of Open Access Journals (Sweden)

    Meng Yi

    2015-01-01

Full Text Available The solid and liquid phases in a semisolid metal slurry exhibit different forming behaviours during deformation, resulting in products of inhomogeneous quality. A6061 aluminum alloy was forged in the semisolid state on a mechanical servo press with the capability of multistage compression. To improve the homogeneity of rheoforged samples, a time-dependent rheoforging strategy was designed. The distributions of the microstructure and mechanical properties of samples manufactured under various experimental conditions were investigated. The A6061 samples forged in the temperature range from 625 to 628 °C, with a short holding time of 4 s and the upper die preheated to 300 °C, exhibited a homogeneous microstructure and mechanical properties. The homogeneity of the rheoforged samples resulted from the controllable free-motion capability of the mechanical servo press and the adjustable fluidity and viscosity of the semisolid slurry.

  15. Spatial distribution of residence time, microbe and storage volume of groundwater in headwater catchments

    Science.gov (United States)

    Tsujimura, Maki; Ogawa, Mahiro; Yamamoto, Chisato; Sakakibara, Koichi; Sugiyama, Ayumi; Kato, Kenji; Nagaosa, Kazuyo; Yano, Shinjiro

    2017-04-01

Headwater catchments in mountainous regions are the most important recharge areas for surface and subsurface waters, and information on the age and storage of the water is essential for understanding hydrological processes in the catchments. A variety of microbes are also present in groundwater and spring water, varying in time and space, suggesting that microbial information could be used as a tracer of the groundwater flow system. However, there has been little research evaluating the relationship among residence time, microbes and storage volume of groundwater in headwater catchments. We performed an investigation of age dating using SF6 and CFCs, microbe counting in spring water, and evaluation of groundwater storage volume based on water budget analysis in 8 regions underlain by different lithologies, namely granite, dacite, sedimentary rocks, serpentinite, basalt and volcanic lava, all over Japan. We conducted hydrometric measurements and sampling of spring water under base flow conditions during the rainless periods of 2015 and 2016 in those regions, and SF6, CFCs, stable isotopic ratios of oxygen-18 and deuterium, inorganic solute concentrations and total numbers of prokaryotes were determined for all water samples. Residence times of spring water ranged from 0 to 16 years in all regions, and the storage volume of groundwater within the topographical watershed was estimated to be 0.1 m to 222 m in water height. Springs with longer residence times tend to have larger storage volumes in the watershed, and springs underlain by dacite tend to have larger storage volumes compared with those underlain by sandstone and chert. Also, the total number of prokaryotes in the spring water ranged from 10(3) to 10(5) cells/mL, and prokaryote counts tend to increase clearly with decreasing residence time. Thus, we observed a certain relationship among residence time, storage volume and total number of prokaryotes in the spring water, and

  16. Design and relevant sample calculations for a neutral particle energy diagnostic based on time of flight

    Energy Technology Data Exchange (ETDEWEB)

    Cecconello, M

    1999-05-01

    Extrap T2 will be equipped with a neutral particle energy diagnostic based on the time-of-flight technique. In this report, the expected neutral fluxes for Extrap T2 are estimated and discussed in order to determine the feasibility and the limits of such a diagnostic. The estimates are based on a 1D model of the plasma, whose input parameters are the radial density and temperature profiles of electrons and ions and the neutral density at the edge and in the centre of the plasma. The atomic processes included in the model are charge exchange and electron-impact ionization. The results indicate that the plasma attenuation length varies from a/5 to a, a being the minor radius. Differential neutral fluxes, as well as the estimated power losses due to CX processes (2% of the input power), are in agreement with experimental results obtained in similar devices. The expected impurity influxes vary from 10{sup 14} to 10{sup 11} cm{sup -2}s{sup -1}. The neutral particle detection and acquisition systems are discussed. The maximum detectable energy varies from 1 to 3 keV depending on the flight distance d. The time resolution is 0.5 ms. Output signals from the waveform recorder are foreseen in the range 0-200 mV. An 8-bit waveform recorder with a 2 MHz sampling frequency and 100K samples of memory is the minimum requirement for the acquisition system. 20 refs, 19 figs.
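
    The relation behind a time-of-flight energy estimate is E = m(d/t)²/2. A minimal sketch for hydrogen neutrals; the 1 m flight path and the flight time are illustrative assumptions, not values from the report:

```python
# Sketch: kinetic energy of a neutral hydrogen atom from its time of flight,
# E = 0.5 * m * (d/t)^2. The flight distance and time below are made up.
M_H = 1.6735e-27        # mass of a hydrogen atom, kg
KEV = 1.602176634e-16   # joules per keV

def tof_energy_kev(distance_m: float, flight_time_s: float) -> float:
    """Neutral particle energy (keV) inferred from flight distance and time."""
    v = distance_m / flight_time_s          # speed, m/s
    return 0.5 * M_H * v * v / KEV

# Example: a 1 m flight path and a 2.28 microsecond flight time -> about 1 keV,
# i.e. the low end of the 1-3 keV detectable range quoted above.
print(tof_energy_kev(1.0, 2.28e-6))
```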

  17. What Is the Best Blood Sampling Time for Metabolic Control of Phenylalanine and Tyrosine Concentrations in Tyrosinemia Type 1 Patients?

    NARCIS (Netherlands)

    van Dam, Esther; Daly, Anne; Venema-Liefaard, Gineke; van Rijn, Margreet; Derks, Terry G J; McKiernan, Patrick J; Heiner-Fokkema, Rebecca; MacDonald, Anita; van Spronsen, Francjan J

    2017-01-01

    BACKGROUND: Treatment of hereditary tyrosinemia type 1 with nitisinone and a phenylalanine- and tyrosine-restricted diet has greatly improved outcomes, but the best blood sampling time for assessment of metabolic control is not known. AIM: To study diurnal and day-to-day variation of phenylalanine and

  18. Are larger and/or more symmetrical Drosophila melanogaster (Diptera, Drosophilidae) males more successful in matings in nature?

    Directory of Open Access Journals (Sweden)

    Sofija Pavković-Lučić

    Full Text Available Are larger and/or more symmetrical Drosophila melanogaster (Diptera, Drosophilidae) males more successful in matings in nature? Sexual selection in Drosophila melanogaster, related to body size and fluctuating asymmetry in wing length and number of sex comb teeth in males, was tested under natural conditions. Males collected in copula were significantly larger than those collected singly, while no difference in the mean number of sex comb teeth between copulating and single males was observed. On the other hand, single males showed greater asymmetry in both wing length and number of sex comb teeth than their mating counterparts. It thus appears that the symmetry of these bilateral traits may also play a role in sexual selection in this dipteran species in nature.
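
    Fluctuating asymmetry of a bilateral trait is conventionally indexed as the mean absolute left-right difference (often called FA1). A minimal sketch of that index; the wing-length data below are made up, not from the study:

```python
# Sketch: the FA1 fluctuating-asymmetry index, mean |left - right| across
# individuals, for a bilateral trait such as wing length. Data are hypothetical.

def fa1(left: list[float], right: list[float]) -> float:
    """Mean absolute left-right difference over a sample of individuals."""
    assert len(left) == len(right), "need paired left/right measurements"
    return sum(abs(l - r) for l, r in zip(left, right)) / len(left)

wing_left  = [2.10, 2.05, 2.12, 2.08]   # mm, hypothetical
wing_right = [2.08, 2.07, 2.10, 2.05]
print(fa1(wing_left, wing_right))
```

A higher FA1 value indicates less symmetrical individuals, so under the abstract's result single males would score higher than mating males.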

  19. Influence of sexual stimulation on sperm parameters in semen samples collected via masturbation from normozoospermic men or cryptozoospermic men participating in an assisted reproduction programme.

    Science.gov (United States)

    Yamamoto, Y; Sofikitis, N; Mio, Y; Miyagawa, I

    2000-05-01

    To evaluate the influence of sexual stimulation via sexually stimulating videotaped visual images (VIM) on sperm function, two semen samples were collected from each of 19 normozoospermic men via masturbation with VIM. Two additional samples were collected from each man via masturbation without VIM. The volume of seminal plasma, total sperm count, sperm motility, percentage of morphologically normal spermatozoa, outcome of the hypo-osmotic swelling test and zona-free hamster oocyte sperm penetration assay, and markers of the secretory function of the prostate were significantly greater in semen samples collected via masturbation with VIM than in those collected without VIM. The improved sperm parameters in the samples collected via masturbation with VIM may reflect enhanced prostatic secretory function and increased loading of the vas deferens at that time. In a similar protocol, two semen samples were collected via masturbation with VIM from each of 22 non-obstructed azoospermic men whose semen samples had occasionally been positive in the past for a very small number of spermatozoa (cryptozoospermic men). Two additional samples were collected from each cryptozoospermic man via masturbation without VIM. The volume of seminal plasma, total sperm count, sperm motility, and a marker of the secretory function of the prostate were significantly greater in semen samples collected via masturbation with VIM. Fourteen of the 22 men were negative for spermatozoa in both samples collected via masturbation without VIM but demonstrated spermatozoa in both samples collected via masturbation with VIM. Six men with immotile spermatozoa in both samples collected without VIM exhibited motile spermatozoa in both samples collected with VIM. High sexual stimulation during masturbation with VIM results in recovery of spermatozoa of greater fertilizing potential in both normozoospermic and cryptozoospermic men. The appearance of spermatozoa after

  20. Tank 241-C-111 headspace gas and vapor sample results - August 1993 samples

    International Nuclear Information System (INIS)

    Huckaby, J.L.

    1994-01-01

    Tank 241-C-111 is on the ferrocyanide Watch List. Gas and vapor samples were collected to ensure safe conditions before planned intrusive work was performed. Sample analyses showed that the hydrogen concentration in the tank headspace is about ten times higher than in ambient air, and the nitrous oxide concentration about sixty times higher than ambient levels. The hydrogen cyanide concentration was below 0.04 ppbv, and the average NO{sub x} concentration was 8.6 ppmv