WorldWideScience

Sample records for sampled multiple times

  1. Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering

    Science.gov (United States)

    Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki

    2018-03-01

    We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
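
    The abstract does not spell out the estimator itself, so the following is only a minimal sketch of how multiple importance sampling combines a stratified strategy with an importance-sampling strategy via the standard balance heuristic, here on a toy 1-D integral standing in for the occlusion integrand; the integrand, the proposal density, and the sample counts are illustrative assumptions, not the paper's actual SSAO formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    lam = 8.0
    f = lambda x: np.exp(-lam * x)          # toy integrand standing in for the occlusion term

    # Strategy 1: stratified sampling, one uniform draw per stratum (overall pdf = 1 on [0, 1]).
    def stratified_samples(n):
        edges = np.linspace(0.0, 1.0, n + 1)
        x = rng.uniform(edges[:-1], edges[1:])
        return x, np.ones(n)

    # Strategy 2: importance sampling from q(x) ~ exp(-lam*x) on [0, 1], via inverse CDF.
    def q_pdf(x):
        return lam * np.exp(-lam * x) / (1.0 - np.exp(-lam))

    def importance_samples(n):
        u = rng.uniform(size=n)
        x = -np.log(1.0 - u * (1.0 - np.exp(-lam))) / lam
        return x, q_pdf(x)

    def mis_estimate(n1, n2):
        """Balance-heuristic MIS: weight each sample by n_k p_k(x) / sum_j n_j p_j(x)."""
        x1, p1 = stratified_samples(n1)
        x2, p2 = importance_samples(n2)
        w1 = n1 * p1 / (n1 * p1 + n2 * q_pdf(x1))
        w2 = n2 * p2 / (n1 * 1.0 + n2 * p2)
        return np.sum(w1 * f(x1) / p1) / n1 + np.sum(w2 * f(x2) / p2) / n2

    print(mis_estimate(8, 8))                 # combined estimate from few samples
    print((1.0 - np.exp(-lam)) / lam)         # exact value of the toy integral
    ```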

  2. Design and development of multiple sample counting setup

    International Nuclear Information System (INIS)

    Rath, D.P.; Murali, S.; Babu, D.A.R.

    2010-01-01

    Full text: The analysis of active samples on a regular basis for ambient air activity and floor contamination from radiochemical labs accounts for a major chunk of the operational activity under the Health Physicist's responsibility. The requirement for daily air sample analysis with immediate and delayed counting from various labs, in addition to smear swipe check samples from the labs, led to the development of a system that could carry out multiple sample analysis in a time-programmed manner from a single sample loading. A multiple alpha/beta counting system was designed and fabricated. It has arrangements for loading 10 samples in slots, which are counted in order in a time-programmed manner, with results displayed and records maintained on a PC. The paper describes the design and development of the multiple sample counting setup presently in use at the facility, which has resulted in a reduction of the man-hours consumed in counting and recording the results.

  3. Modular time division multiplexer: Efficient simultaneous characterization of fast and slow transients in multiple samples

    Science.gov (United States)

    Kim, Stephan D.; Luo, Jiajun; Buchholz, D. Bruce; Chang, R. P. H.; Grayson, M.

    2016-09-01

    A modular time division multiplexer (MTDM) device is introduced to enable parallel measurement of multiple samples with both fast and slow decay transients spanning from millisecond to month-long time scales. This is achieved by dedicating a single high-speed measurement instrument for rapid data collection at the start of a transient, and by multiplexing a second low-speed measurement instrument for slow data collection of several samples in parallel for the later transients. The MTDM is a high-level design concept that can in principle measure an arbitrary number of samples, and the low cost implementation here allows up to 16 samples to be measured in parallel over several months, reducing the total ensemble measurement duration and equipment usage by as much as an order of magnitude without sacrificing fidelity. The MTDM was successfully demonstrated by simultaneously measuring the photoconductivity of three amorphous indium-gallium-zinc-oxide thin films with 20 ms data resolution for fast transients and an uninterrupted parallel run time of over 20 days. The MTDM has potential applications in many areas of research that manifest response times spanning many orders of magnitude, such as photovoltaics, rechargeable batteries, amorphous semiconductors such as silicon and amorphous indium-gallium-zinc-oxide.

  4. Monte carlo sampling of fission multiplicity.

    Energy Technology Data Exchange (ETDEWEB)

    Hendricks, J. S. (John S.)

    2004-01-01

    Two new methods have been developed for fission multiplicity modeling in Monte Carlo calculations. The traditional method of sampling neutron multiplicity from fission is to sample the number of neutrons above or below the average. For example, if there are 2.7 neutrons per fission, three would be chosen 70% of the time and two would be chosen 30% of the time. For many applications, particularly 3He coincidence counting, a better estimate of the true number of neutrons per fission is required. Generally, this number is estimated by sampling a Gaussian distribution about the average. However, because the tail of the Gaussian distribution is negative and negative neutrons cannot be produced, a slight positive bias can be found in the average value. For criticality calculations, the result of rejecting the negative neutrons is an increase in keff of 0.1% in some cases. For spontaneous fission, where the average number of neutrons emitted from fission is low, the error also can be unacceptably large. If the Gaussian width approaches the average number of fissions, 10% too many fission neutrons are produced by not treating the negative Gaussian tail adequately. The first method to treat the Gaussian tail is to determine a correction offset, which then is subtracted from all sampled values of the number of neutrons produced. This offset depends on the average value for any given fission at any energy and must be computed efficiently at each fission from the non-integrable error function. The second method is to determine a corrected zero point so that all neutrons sampled between zero and the corrected zero point are killed to compensate for the negative Gaussian tail bias. Again, the zero point must be computed efficiently at each fission. Both methods give excellent results with a negligible computing time penalty. It is now possible to include the full effects of fission multiplicity without the negative Gaussian tail bias.
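
    A rough sketch of the two sampling approaches described above: the traditional above/below-average integer sampling, and Gaussian sampling with the negative tail clipped at zero, where the clipping bias is removed by a numerically determined offset (applied here as a shift of the Gaussian mean, a simplification of the offset subtraction described in the abstract). The Gaussian width is a made-up example value, and the offset is found by root-finding rather than from the error-function expression used in the paper.

    ```python
    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    nu_bar, sigma = 2.7, 1.08      # mean multiplicity; the Gaussian width is a made-up value

    # (a) Traditional method: choose one of the two integers bracketing the mean.
    def sample_traditional(n):
        frac = nu_bar - np.floor(nu_bar)               # 2.7 -> 3 with prob 0.7, 2 with prob 0.3
        return np.floor(nu_bar) + (rng.uniform(size=n) < frac)

    # (b) Gaussian sampling, rounded to integers and clipped at zero.  Clipping alone biases
    #     the mean upward; shifting the Gaussian by a precomputed offset restores it.
    def clipped_mean(offset):
        k = np.arange(0, 60)
        p = norm.cdf(k + 0.5, nu_bar - offset, sigma) - norm.cdf(k - 0.5, nu_bar - offset, sigma)
        p[0] += norm.cdf(-0.5, nu_bar - offset, sigma)  # all negative outcomes collapse to zero
        return np.sum(k * p)

    offset = brentq(lambda d: clipped_mean(d) - nu_bar, -1.0, 1.0)

    def sample_gaussian_corrected(n):
        nu = rng.normal(nu_bar - offset, sigma, size=n)
        return np.maximum(np.rint(nu), 0.0)

    print(sample_traditional(200_000).mean())           # ~2.70
    print(sample_gaussian_corrected(200_000).mean())    # ~2.70 once the offset is applied
    ```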

  5. Characterizing lentic freshwater fish assemblages using multiple sampling methods

    Science.gov (United States)

    Fischer, Jesse R.; Quist, Michael C.

    2014-01-01

    Characterizing fish assemblages in lentic ecosystems is difficult, and multiple sampling methods are almost always necessary to gain reliable estimates of indices such as species richness. However, most research focused on lentic fish sampling methodology has targeted recreationally important species, and little to no information is available regarding the influence of multiple methods and timing (i.e., temporal variation) on characterizing entire fish assemblages. Therefore, six lakes and impoundments (48–1,557 ha surface area) were sampled seasonally with seven gear types to evaluate the combined influence of sampling methods and timing on the number of species and individuals sampled. Probabilities of detection for species indicated strong selectivities and seasonal trends that provide guidance on optimal seasons to use gears when targeting multiple species. The evaluation of species richness and the number of individuals sampled using multiple gear combinations demonstrated no appreciable benefit over relatively few gears (e.g., up to four) used in optimal seasons. Specifically, over 90% of the species encountered with all gear-type and season combinations (N = 19) from six lakes and reservoirs were sampled with nighttime boat electrofishing in the fall and benthic trawling, modified-fyke, and mini-fyke netting during the summer. Our results indicated that the characterization of lentic fish assemblages was highly influenced by the selection of sampling gears and seasons, but did not appear to be influenced by waterbody type (i.e., natural lake, impoundment). The standardization of data collected with multiple methods and seasons to account for bias is imperative for the monitoring of lentic ecosystems and will provide researchers with increased reliability in their interpretations and decisions made using information on lentic fish assemblages.

  6. A multiple sampling time projection ionization chamber for nuclear fragment tracking and charge measurement

    International Nuclear Information System (INIS)

    Bauer, G.; Bieser, F.; Brady, F.P.; Chance, J.C.; Christie, W.F.; Gilkes, M.; Lindenstruth, V.; Lynen, U.; Mueller, W.F.J.; Romero, J.L.; Sann, H.; Tull, C.E.; Warren, P.

    1997-01-01

    A detector has been developed for the tracking and charge measurement of the projectile fragment nuclei produced in relativistic nuclear collisions. This device, MUSIC II, is a second generation Multiple Sampling Ionization Chamber (MUSIC), and employs the principles of ionization and time projection chambers. It provides unique charge determination for charges Z≥6, and excellent track position measurement. MUSIC II has been used most recently with the EOS (equation of state) TPC and other EOS collaboration detectors. Earlier it was used with other systems in experiments at the Heavy Ion Superconducting Spectrometer (HISS) facility at Lawrence Berkeley Laboratory and the ALADIN spectrometer at GSI. (orig.)

  7. Development of a real-time multiplex PCR assay for the detection of multiple Salmonella serotypes in chicken samples

    Directory of Open Access Journals (Sweden)

    Whyte Paul

    2008-09-01

    Full Text Available Abstract Background A real-time multiplex PCR assay was developed for the detection of multiple Salmonella serotypes in chicken samples. Poultry-associated serotypes detected in the assay include Enteritidis, Gallinarum, Typhimurium, Kentucky and Dublin. The traditional cultural method according to EN ISO 6579:2002 for the detection of Salmonella in food was performed in parallel. The real-time PCR based method comprised a pre-enrichment step in Buffered Peptone Water (BPW) overnight, followed by a shortened selective enrichment in Rappaport-Vassiliadis Soya Broth (RVS) for 6 hours and subsequent DNA extraction. Results The real-time multiplex PCR assay and traditional cultural method showed 100% inclusivity and 100% exclusivity on all strains tested. The real-time multiplex PCR assay was as sensitive as the traditional cultural method in detecting Salmonella in artificially contaminated chicken samples and correctly identified the serotype. Artificially contaminated chicken samples resulted in a detection limit of between 1 and 10 CFU per 25 g sample for both methods. A total of sixty-three naturally contaminated chicken samples were investigated by both methods, and the relative accuracy, relative sensitivity and relative specificity of the real-time PCR method were determined to be 89, 94 and 87%, respectively. Thirty blind-tested cultures were correctly identified by the real-time multiplex PCR method. Conclusion Real-time PCR methodology can contribute to meeting the need for rapid identification and detection methods in food testing laboratories.

  8. Accurate Sample Time Reconstruction of Inertial FIFO Data

    Directory of Open Access Journals (Sweden)

    Sebastian Stieber

    2017-12-01

    Full Text Available In the context of modern cyber-physical systems, the accuracy of underlying sensor data plays an increasingly important role in sensor data fusion and feature extraction. The raw events of multiple sensors have to be aligned in time to enable high-quality sensor fusion results. However, the growing number of simultaneously connected sensor devices makes energy-saving data acquisition and processing more and more difficult. Hence, most modern sensors offer a first-in-first-out (FIFO) interface to store multiple data samples and to relax timing constraints when handling multiple sensor devices. However, using the FIFO interface increases the negative influence of individual clock drifts—introduced by fabrication inaccuracies, temperature changes and wear-out effects—on the sampling data reconstruction. Furthermore, additional timing offset errors due to communication and software latencies increase with a growing number of sensor devices. In this article, we present an approach for accurate sample time reconstruction independent of the actual clock drift with the help of an internal sensor timer. Such timers are already available in modern sensors manufactured in micro-electromechanical systems (MEMS) technology. The presented approach focuses on calculating accurate time stamps using the sensor FIFO interface in a forward-only processing manner as a robust and energy-saving solution. The proposed algorithm is able to lower the overall standard deviation of reconstructed sampling periods below 40 μs, while run-time savings of up to 42% are achieved compared to single-sample acquisition.
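
    A minimal sketch of the forward-only idea described above, assuming the host reads the sensor's internal timer each time it drains the FIFO and spreads the elapsed sensor time evenly over the samples in that batch; the function name, the batch representation, and the 1%-fast example clock are hypothetical.

    ```python
    from typing import List, Tuple

    def reconstruct_timestamps(batches: List[Tuple[int, int]],
                               timer_hz: float) -> List[float]:
        """Forward-only timestamp reconstruction for FIFO reads.

        batches  -- list of (sensor_timer_ticks_at_read, n_samples_in_fifo) per read
        timer_hz -- frequency of the sensor's internal timer

        The elapsed sensor-timer interval between two consecutive reads is split
        evenly across the samples drained in the second read, so the reconstruction
        follows the sensor's own (possibly drifting) clock, not a nominal host period.
        """
        timestamps: List[float] = []
        prev_ticks = None
        for ticks, n in batches:
            if prev_ticks is None:
                prev_ticks = ticks                       # no interval before the first read
                continue
            dt = (ticks - prev_ticks) / timer_hz / n     # per-sample period in this batch
            t0 = prev_ticks / timer_hz
            timestamps.extend(t0 + dt * (i + 1) for i in range(n))
            prev_ticks = ticks
        return timestamps

    # Example: a nominal 100 Hz sensor whose internal clock runs 1% fast, read every ~50 ms.
    ts = reconstruct_timestamps([(0, 0), (5050, 5), (10100, 5), (15150, 5)], timer_hz=101_000.0)
    print(ts[:5])   # per-sample periods come out at ~10 ms of sensor time
    ```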

  9. Single- versus multiple-sample method to measure glomerular filtration rate.

    Science.gov (United States)

    Delanaye, Pierre; Flamant, Martin; Dubourg, Laurence; Vidal-Petiot, Emmanuelle; Lemoine, Sandrine; Cavalier, Etienne; Schaeffner, Elke; Ebert, Natalie; Pottel, Hans

    2018-01-08

    There are many different ways to measure glomerular filtration rate (GFR) using various exogenous filtration markers, each having its own strengths and limitations. However, not only the marker, but also the methodology may vary in many ways, including the use of urinary or plasma clearance, and, in the case of plasma clearance, the number of time points used to calculate the area under the concentration-time curve, ranging from only one (Jacobsson method) to eight (or more) blood samples. We collected the results obtained from 5106 plasma clearances (iohexol or 51Cr-ethylenediaminetetraacetic acid (EDTA)) using three to four time points, allowing GFR calculation using the slope-intercept method and the Bröchner-Mortensen correction. For each time point, the Jacobsson formula was applied to obtain the single-sample GFR. We used Bland-Altman plots to determine the accuracy of the Jacobsson method at each time point. The single-sample method showed concordance within 10% of the multiple-sample method in 66.4%, 83.6%, 91.4% and 96.0% of cases at the time points 120, 180, 240 and ≥300 min, respectively. Concordance was poorer at lower GFR levels, and this trend parallels increasing age. Results were similar in males and females. Some discordance was found in the obese subjects. Single-sample GFR is highly concordant with a multiple-sample strategy, except in the low GFR range (<30 mL/min). © The Author 2018. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
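
    For orientation, a sketch of the slope-intercept clearance and the Bröchner-Mortensen correction mentioned above: a mono-exponential is fitted to the late plasma samples, clearance is dose × slope / intercept, and the quadratic correction uses the commonly quoted adult coefficients (0.990778, 0.001218), which may differ from the exact variant used in the study. All numbers in the example call are illustrative, not study data.

    ```python
    import numpy as np

    def slope_intercept_gfr(times_min, conc, dose):
        """Slope-intercept plasma clearance from late samples (2-5 h), followed by the
        Broechner-Mortensen correction for the missed fast exponential component.

        times_min -- sampling times in minutes after injection
        conc      -- plasma tracer concentrations at those times (mg/L)
        dose      -- injected tracer amount (mg)
        Returns (uncorrected, corrected) clearance in mL/min.
        """
        t = np.asarray(times_min, float)
        c = np.asarray(conc, float)
        slope, intercept = np.polyfit(t, np.log(c), 1)   # ln C(t) = ln C0 - b*t
        b, c0 = -slope, np.exp(intercept)
        cl = dose * b / c0 * 1000.0                      # L/min -> mL/min
        cl_bm = 0.990778 * cl - 0.001218 * cl**2         # commonly quoted adult coefficients
        return cl, cl_bm

    # Illustrative numbers only (not from the study):
    print(slope_intercept_gfr([120, 180, 240], [90.0, 65.0, 48.0], dose=3235.0))
    ```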

  10. Interlaboratory study of DNA extraction from multiple ground samples, multiplex real-time PCR, and multiplex qualitative PCR for individual kernel detection system of genetically modified maize.

    Science.gov (United States)

    Akiyama, Hiroshi; Sakata, Kozue; Makiyma, Daiki; Nakamura, Kosuke; Teshima, Reiko; Nakashima, Akie; Ogawa, Asako; Yamagishi, Toru; Futo, Satoshi; Oguchi, Taichi; Mano, Junichi; Kitta, Kazumi

    2011-01-01

    In many countries, the labeling of grains, feed, and foodstuff is mandatory if the genetically modified (GM) organism content exceeds a certain level of approved GM varieties. We previously developed an individual kernel detection system consisting of grinding individual kernels, DNA extraction from the individually ground kernels, GM detection using multiplex real-time PCR, and GM event detection using multiplex qualitative PCR to analyze the precise commingling level and varieties of GM maize in real sample grains. We performed the interlaboratory study of the DNA extraction with multiple ground samples, multiplex real-time PCR detection, and multiplex qualitative PCR detection to evaluate its applicability, practicality, and ruggedness for the individual kernel detection system of GM maize. DNA extraction with multiple ground samples, multiplex real-time PCR, and multiplex qualitative PCR were evaluated by five laboratories in Japan, and all results from these laboratories were consistent with the expected results in terms of the commingling level and event analysis. Thus, the DNA extraction with multiple ground samples, multiplex real-time PCR, and multiplex qualitative PCR for the individual kernel detection system is applicable and practicable in a laboratory to regulate the commingling level of GM maize grain for GM samples, including stacked GM maize.

  11. Analysis of Product Sampling for New Product Diffusion Incorporating Multiple-Unit Ownership

    Directory of Open Access Journals (Sweden)

    Zhineng Hu

    2014-01-01

    Full Text Available Multiple-unit ownership of nondurable products is an important component of sales in many product categories. Based on the Bass model, this paper develops a new model that treats multiple-unit adoption as a diffusion process under the influence of product sampling. The analysis aims to determine the optimal dynamic sampling effort for a firm, and the results demonstrate that experience sampling can accelerate the diffusion process; the best time to send free samples is just before the product is launched. Multiple-unit purchasing behavior can increase sales and generate more profit for a firm, but it requires more samples to make the product better known. The local sensitivity analysis shows that an increase in either the external or the internal coefficient has a negative influence on the sampling level, while the internal influence on subsequent multiple-unit adoptions has little significant effect on the sampling. Using logistic regression along with linear regression, the global sensitivity analysis examines the interaction of all factors, showing that the external influence and the multiunit purchase rate are the two most important factors affecting the sampling level and the net present value of the new product, and presents a two-stage method to determine the sampling level.
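
    As background, a sketch of the classic Bass diffusion equation that the paper builds on, with a crude additional term converting a fraction of free-sample recipients into immediate adopters; this sampling term and every parameter value are assumptions for illustration and do not reproduce the paper's model or its optimization.

    ```python
    import numpy as np

    def bass_with_sampling(M=1_000_000, p=0.03, q=0.38, T=60,
                           samples_per_month=None, conversion=0.1):
        """Discrete-time Bass diffusion: new adopters = (p + q*N/M) * (M - N).

        samples_per_month -- optional list of free samples handed out each month
                             (hypothetical extension, not the paper's actual model)
        conversion        -- fraction of sampled non-adopters who adopt immediately
        """
        N = np.zeros(T + 1)
        for t in range(T):
            adopters = (p + q * N[t] / M) * (M - N[t])          # classic Bass term
            if samples_per_month is not None:
                adopters += conversion * min(samples_per_month[t], M - N[t])
            N[t + 1] = min(N[t] + adopters, M)
        return N

    no_sampling = bass_with_sampling()
    early_push = bass_with_sampling(samples_per_month=[50_000 if t < 3 else 0 for t in range(60)])
    print(no_sampling[12], early_push[12])   # free samples around launch accelerate early diffusion
    ```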

  12. Least-squares reverse time migration of multiples

    KAUST Repository

    Zhang, Dongliang

    2013-12-06

    The theory of least-squares reverse time migration of multiples (RTMM) is presented. In this method, least squares migration (LSM) is used to image free-surface multiples where the recorded traces are used as the time histories of the virtual sources at the hydrophones and the surface-related multiples are the observed data. For a single source, the entire free-surface becomes an extended virtual source where the downgoing free-surface multiples more fully illuminate the subsurface compared to the primaries. Since each recorded trace is treated as the time history of a virtual source, knowledge of the source wavelet is not required and the ringy time series for each source is automatically deconvolved. If the multiples can be perfectly separated from the primaries, numerical tests on synthetic data for the Sigsbee2B and Marmousi2 models show that least-squares reverse time migration of multiples (LSRTMM) can significantly improve the image quality compared to RTMM or standard reverse time migration (RTM) of primaries. However, if there is imperfect separation and the multiples are strongly interfering with the primaries then LSRTMM images show no significant advantage over the primary migration images. In some cases, they can be of worse quality. Applying LSRTMM to Gulf of Mexico data shows higher signal-to-noise imaging of the salt bottom and top compared to standard RTM images. This is likely attributed to the fact that the target body is just below the sea bed so that the deep water multiples do not have strong interference with the primaries. Migrating a sparsely sampled version of the Marmousi2 ocean bottom seismic data shows that LSM of primaries and LSRTMM provides significantly better imaging than standard RTM. A potential liability of LSRTMM is that multiples require several round trips between the reflector and the free surface, so that high frequencies in the multiples suffer greater attenuation compared to the primary reflections. This can lead to lower resolution in the migration image compared to that computed from primaries.

  13. Low-sampling-rate ultra-wideband channel estimation using equivalent-time sampling

    KAUST Repository

    Ballal, Tarig

    2014-09-01

    In this paper, a low-sampling-rate scheme for ultra-wideband channel estimation is proposed. The scheme exploits multiple observations generated by transmitting multiple pulses. In the proposed scheme, P pulses are transmitted to produce channel impulse response estimates at a desired sampling rate, while the ADC samples at a rate that is P times slower. To avoid loss of fidelity, the number of sampling periods (based on the desired rate) in the inter-pulse interval is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this case, and to achieve an overall good channel estimation performance, without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. It is shown that this estimator is related to the Bayesian linear minimum mean squared error (LMMSE) estimator. Channel estimation performance of the proposed sub-sampling scheme combined with the new estimator is assessed in simulation. The results show that high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in almost all cases, while in the high SNR regime it also outperforms the LMMSE estimator. In addition to channel estimation, a synchronization method is also proposed that utilizes the same pulse sequence used for channel estimation. © 2014 IEEE.
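
    A small sketch of the co-prime condition described above: if the ADC runs P times slower than the desired rate and the inter-pulse interval spans L desired-rate periods, the P pulses cover all P sampling phases exactly when gcd(L, P) = 1, so the slow-rate observations interleave into a full-rate estimate; P and L here are toy values.

    ```python
    from math import gcd

    def equivalent_time_phases(P: int, L: int):
        """Phases (modulo P, in desired-rate periods) hit by a P-times-slower ADC
        across P pulses, when the inter-pulse interval is L desired-rate periods."""
        phases = set()
        for pulse in range(P):
            start = pulse * L            # desired-rate index at which this pulse begins
            # the slow ADC then samples at indices start, start + P, start + 2P, ...
            phases.add(start % P)        # offset of this pulse's samples within a P-grid
        return sorted(phases)

    P = 5
    for L in (7, 10):                    # gcd(5, 7) = 1 versus gcd(5, 10) = 5
        print(f"L={L}, gcd={gcd(P, L)}, phases covered: {equivalent_time_phases(P, L)}")
    # With gcd(L, P) = 1 all P phases are covered, so the P slow-rate observation sets
    # interleave into one full-rate channel estimate; otherwise phases repeat and are lost.
    ```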

  14. Low-sampling-rate M-ary multiple access UWB communications in multipath channels

    KAUST Repository

    Alkhodary, Mohammad T.

    2015-08-31

    The desirable characteristics of ultra-wideband (UWB) technology are challenged by formidable sampling frequency, performance degradation in the presence of multi-user interference, and complexity of the receiver due to the channel estimation process. In this paper, a low-rate-sampling technique is used to implement M-ary multiple access UWB communications, in both the detection and channel estimation stages. A novel approach is used for multiple-access-interference (MAI) cancelation for the purpose of channel estimation. Results show reasonable performance of the proposed receiver for different numbers of users operating many times below the Nyquist rate.

  15. Low-sampling-rate M-ary multiple access UWB communications in multipath channels

    KAUST Repository

    Alkhodary, Mohammad T.; Ballal, Tarig; Al-Naffouri, Tareq Y.; Muqaibel, Ali H.

    2015-01-01

    The desirable characteristics of ultra-wideband (UWB) technology are challenged by formidable sampling frequency, performance degradation in the presence of multi-user interference, and complexity of the receiver due to the channel estimation process. In this paper, a low-rate-sampling technique is used to implement M-ary multiple access UWB communications, in both the detection and channel estimation stages. A novel approach is used for multiple-access-interference (MAI) cancelation for the purpose of channel estimation. Results show reasonable performance of the proposed receiver for different numbers of users operating many times below the Nyquist rate.

  16. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    Science.gov (United States)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
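
    For contrast with the resonance-free integrators discussed above, a sketch of a standard multiple time-step (r-RESPA-style) splitting on a toy 1-D particle: the cheap stiff force is integrated with an inner velocity-Verlet loop while the expensive soft force is applied only at the outer step. The force definitions and step sizes are illustrative; this is not the isokinetic Nosé-Hoover scheme of the paper.

    ```python
    import numpy as np

    # Toy split: a stiff harmonic "fast" force and a soft anharmonic "slow" force.
    def f_fast(x):  return -100.0 * x          # stiff spring (cheap, evaluated often)
    def f_slow(x):  return -0.5 * x**3         # soft force (expensive, evaluated rarely)

    def respa_step(x, v, dt_outer, n_inner, mass=1.0):
        """One r-RESPA step: half-kick with the slow force, n_inner velocity-Verlet
        sub-steps with the fast force, then the closing slow half-kick."""
        v += 0.5 * dt_outer * f_slow(x) / mass
        dt = dt_outer / n_inner
        for _ in range(n_inner):
            v += 0.5 * dt * f_fast(x) / mass
            x += dt * v
            v += 0.5 * dt * f_fast(x) / mass
        v += 0.5 * dt_outer * f_slow(x) / mass
        return x, v

    x, v = 1.0, 0.0
    for _ in range(1000):
        x, v = respa_step(x, v, dt_outer=0.05, n_inner=10)   # slow force evaluated 10x less often
    print(x, v)   # trajectory stays bounded for these (stable) step choices
    ```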

  17. Time optimization of 90Sr measurements: Sequential measurement of multiple samples during ingrowth of 90Y

    International Nuclear Information System (INIS)

    Holmgren, Stina; Tovedal, Annika; Björnham, Oscar; Ramebäck, Henrik

    2016-01-01

    The aim of this paper is to contribute to a more rapid determination of a series of samples containing 90Sr by making the Cherenkov measurement of the daughter nuclide 90Y more time efficient. There are many instances when an optimization of the measurement method might be favorable, such as situations requiring rapid results in order to make urgent decisions or, on the other hand, the need to maximize the throughput of samples in a limited available time span. In order to minimize the total analysis time, a mathematical model was developed which calculates the time of ingrowth as well as individual measurement times for n samples in a series. This work is focused on the measurement of 90Y during ingrowth, after an initial chemical separation of strontium, in which it is assumed that no other radioactive strontium isotopes are present. By using a fixed minimum detectable activity (MDA) and iterating the measurement time for each consecutive sample, the total analysis time will be less compared to using the same measurement time for all samples. It was found that by optimization, the total analysis time for 10 samples can be decreased greatly, from 21 h to 6.5 h, when assuming an MDA of 1 Bq/L and a background count rate of approximately 0.8 cpm. Highlights: • An approach roughly a factor of three more efficient than an un-optimized method. • The optimization gives a more efficient use of instrument time. • The efficiency increase ranges from a factor of three to 10, for 10 to 40 samples.
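
    A rough sketch of the optimization idea, under stated assumptions: 90Y ingrowth follows 1 − exp(−λt) with a ~64 h half-life, and the detection limit is written with a Currie-style expression MDA ≈ (2.71 + 4.65·√(background counts)) / (efficiency × ingrowth × count time). The efficiency, background rate, and MDA formula are illustrative stand-ins and may differ from the model in the paper; the point is that later samples in the series have had more ingrowth, so they need shorter counts.

    ```python
    import numpy as np

    T_HALF_Y90_H = 64.0                         # approximate 90Y half-life in hours
    LAMBDA_Y = np.log(2.0) / T_HALF_Y90_H

    def ingrowth_fraction(t_hours):
        """Fraction of the equilibrium 90Y activity grown in, t hours after Sr/Y separation."""
        return 1.0 - np.exp(-LAMBDA_Y * t_hours)

    def count_time_for_mda(mda_bq, eff, bkg_cps, ingrowth):
        """Smallest count time (s) whose Currie-style MDA meets the target, per unit sample."""
        t = np.arange(60.0, 86400.0, 60.0)
        mda = (2.71 + 4.65 * np.sqrt(bkg_cps * t)) / (eff * ingrowth * t)
        ok = np.nonzero(mda <= mda_bq)[0]
        return float(t[ok[0]]) if ok.size else None

    # Ten samples counted one after another: by the time a later sample is measured it has
    # accumulated more ingrowth, so it needs a shorter count to reach the same MDA.
    eff, bkg_cps, mda_target = 0.4, 0.8 / 60.0, 1.0      # illustrative values only
    t_elapsed_h, total_s = 2.0, 0.0
    for i in range(10):
        t_meas = count_time_for_mda(mda_target, eff, bkg_cps, ingrowth_fraction(t_elapsed_h))
        print(f"sample {i + 1}: after {t_elapsed_h:5.1f} h ingrowth, count {t_meas / 60:5.1f} min")
        t_elapsed_h += t_meas / 3600.0
        total_s += t_meas
    print(f"total counting time: {total_s / 3600:.2f} h")
    ```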

  18. Sample size determination in clinical trials with multiple endpoints

    CERN Document Server

    Sozu, Takashi; Hamasaki, Toshimitsu; Evans, Scott R

    2015-01-01

    This book integrates recent methodological developments for calculating the sample size and power in trials with more than one endpoint considered as multiple primary or co-primary, offering an important reference work for statisticians working in this area. The determination of sample size and the evaluation of power are fundamental and critical elements in the design of clinical trials. If the sample size is too small, important effects may go unnoticed; if the sample size is too large, it represents a waste of resources and unethically puts more participants at risk than necessary. Recently many clinical trials have been designed with more than one endpoint considered as multiple primary or co-primary, creating a need for new approaches to the design and analysis of these clinical trials. The book focuses on the evaluation of power and sample size determination when comparing the effects of two interventions in superiority clinical trials with multiple endpoints. Methods for sample size calculation in clin...

  19. Dependability of Data Derived from Time Sampling Methods with Multiple Observation Targets

    Science.gov (United States)

    Johnson, Austin H.; Chafouleas, Sandra M.; Briesch, Amy M.

    2017-01-01

    In this study, generalizability theory was used to examine the extent to which (a) time-sampling methodology, (b) number of simultaneous behavior targets, and (c) individual raters influenced variance in ratings of academic engagement for an elementary-aged student. Ten graduate-student raters, with an average of 7.20 hr of previous training in…

  20. A Fast Multiple Sampling Method for Low-Noise CMOS Image Sensors With Column-Parallel 12-bit SAR ADCs

    Directory of Open Access Journals (Sweden)

    Min-Kyu Kim

    2015-12-01

    Full Text Available This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4 bits after the first 12-bit A/D conversion, reducing the noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform the complex calculations for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB.
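
    The quoted 1/√N noise scaling can be sanity-checked against the reported figures, assuming the two random-noise values are directly comparable RMS figures:

    ```python
    noise_single = 848.3e-6     # V, single A/D conversion (from the abstract)
    noise_multi  = 270.4e-6     # V, with multiple sampling (from the abstract)

    ratio = noise_single / noise_multi
    implied_samplings = ratio ** 2          # 1/sqrt(N) scaling  =>  N = ratio^2
    print(f"noise reduction: {ratio:.2f}x, implied number of samplings: {implied_samplings:.1f}")
    # ~3.1x reduction, consistent with roughly 10 samplings under 1/sqrt(N) averaging.
    ```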

  1. Multiple Indicator Stationary Time Series Models.

    Science.gov (United States)

    Sivo, Stephen A.

    2001-01-01

    Discusses the propriety and practical advantages of specifying multivariate time series models in the context of structural equation modeling for time series and longitudinal panel data. For time series data, the multiple indicator model specification improves on classical time series analysis. For panel data, the multiple indicator model…

  2. Ruminal bacteria and protozoa composition, digestibility, and amino acid profile determined by multiple hydrolysis times.

    Science.gov (United States)

    Fessenden, S W; Hackmann, T J; Ross, D A; Foskolos, A; Van Amburgh, M E

    2017-09-01

    Microbial samples from 4 independent experiments in lactating dairy cattle were obtained and analyzed for nutrient composition, AA digestibility, and AA profile after multiple hydrolysis times ranging from 2 to 168 h. Similar bacterial and protozoal isolation techniques were used for all isolations. Omasal bacteria and protozoa samples were analyzed for AA digestibility using a new in vitro technique. Multiple time point hydrolysis and least squares nonlinear regression were used to determine the AA content of omasal bacteria and protozoa, and equivalency comparisons were made against single time point hydrolysis. Formalin was used in 1 experiment, which negatively affected AA digestibility and likely limited the complete release of AA during acid hydrolysis. The mean AA digestibility was 87.8 and 81.6% for non-formalin-treated bacteria and protozoa, respectively. Preservation of microbe samples in formalin likely decreased recovery of several individual AA. Results from the multiple time point hydrolysis indicated that Ile, Val, and Met hydrolyzed at a slower rate compared with other essential AA. Single time point hydrolysis was found to be nonequivalent to multiple time point hydrolysis when considering biologically important changes in estimated microbial AA profiles. Several AA, including Met, Ile, and Val, were underpredicted using AA determination after a single 24-h hydrolysis. Models for predicting postruminal supply of AA might need to consider potential bias present in the postruminal AA flow literature when AA determinations are performed after single time point hydrolysis and when using formalin as a preservative for microbial samples. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  3. Multiple biopsy probe sampling enabled minimally invasive electrical impedance tomography

    International Nuclear Information System (INIS)

    Shini, Mohanad; Rubinsky, Boris

    2008-01-01

    Biopsies are a reliable method for examining tissues and organs inside the body, in particular for detection of tumors. However, a single biopsy produces only limited information on the site from which it is taken. Therefore, tumor detection now employs multiple biopsy samplings to examine larger volumes of tissue. Nevertheless, even with multiple biopsies, the information remains discrete, while the costs of biopsy increase. Here we propose and evaluate the feasibility of using minimally invasive medical imaging as a means to overcome the limitations of discrete biopsy sampling. The minimally invasive medical imaging technique employs the biopsy probe as electrodes for measurements of electrical impedance tomography relevant data during each biopsy sampling. The data from multiple samplings are combined and used to produce an EIT image of the tissue. Two- and three-dimensional mathematical simulations confirm that the minimally invasive medical imaging technique can produce electrical impedance tomography images of the tissues between the biopsy probe insertion sites. We show that these images can detect tumors that would be missed with multiple biopsy samplings only, and that the technique may facilitate the detection of tumors with fewer biopsies, thereby reducing the cost of cancer detection

  4. Efficient computation of the joint sample frequency spectra for multiple populations.

    Science.gov (United States)

    Kamm, John A; Terhorst, Jonathan; Song, Yun S

    2017-01-01

    A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity.
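
    For the simplest special case, a single panmictic population of constant size under the infinite-sites model, the expected SFS has the classical closed form E[ξ_i] = θ/i; momi generalizes this to joint spectra over multiple populations and complex histories, which is not reproduced here. A sketch of the single-population case:

    ```python
    import numpy as np

    def expected_sfs_constant_size(n_samples: int, theta: float = 1.0) -> np.ndarray:
        """Expected unfolded site frequency spectrum for a constant-size population:
        E[xi_i] = theta / i for i = 1..n-1 (classical single-population result)."""
        i = np.arange(1, n_samples)
        return theta / i

    sfs = expected_sfs_constant_size(10, theta=2.0)
    print(sfs)                       # [2.  1.  0.667 ... 0.222]
    print(sfs / sfs.sum())           # normalized spectrum: the shape used for inference
    ```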

  5. SOME SYSTEMATIC SAMPLING STRATEGIES USING MULTIPLE RANDOM STARTS

    OpenAIRE

    Sampath Sundaram; Ammani Sivaraman

    2010-01-01

    In this paper an attempt is made to extend linear systematic sampling using multiple random starts due to Gautschi (1957) to various types of systematic sampling schemes available in the literature, namely (i) Balanced Systematic Sampling (BSS) of Sethi (1965) and (ii) Modified Systematic Sampling (MSS) of Singh, Jindal, and Garg (1968). Further, the proposed methods were compared with the Yates corrected estimator developed with reference to Gautschi's linear systematic samplin...
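
    A sketch of linear systematic sampling with multiple random starts in the spirit of Gautschi's scheme, assuming the population size is an exact multiple of the sample size and the number of starts divides the sample size; the function name and parameter choices are mine.

    ```python
    import random

    def multi_start_systematic_sample(N: int, n: int, m: int, seed: int = 0):
        """Linear systematic sampling with m random starts (Gautschi-style).

        Assumes N = n*k and that m divides n.  m starts are drawn without replacement
        from the first m*k units; every (m*k)-th unit after each start is taken,
        giving n/m units per start and n units in total.
        """
        k = N // n
        assert N == n * k and n % m == 0, "sketch assumes N = n*k and m | n"
        rng = random.Random(seed)
        starts = rng.sample(range(m * k), m)              # random starts in the first m*k units
        per_start = n // m
        return sorted(s + j * m * k for s in starts for j in range(per_start))

    print(multi_start_systematic_sample(N=100, n=10, m=2))   # two interleaved systematic samples
    ```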

  6. Manual for the Epithermal Neutron Multiplicity Detector (ENMC) for Measurement of Impure MOX and Plutonium Samples

    International Nuclear Information System (INIS)

    Menlove, H. O.; Rael, C. D.; Kroncke, K. E.; DeAguero, K. J.

    2004-01-01

    We have designed a high-efficiency neutron detector for passive neutron coincidence and multiplicity counting of dirty scrap and bulk samples of plutonium. The counter will be used for the measurement of impure plutonium samples at the JNC MOX fabrication facility in Japan. The counter can also be used to create working standards from bulk process MOX. The detector uses advanced-design 3He tubes to increase the efficiency and to shorten the neutron die-away time. The efficiency is 64% and the die-away time is 19.1 μs. The Epithermal Neutron Multiplicity Counter (ENMC) is designed for high-precision measurements of bulk plutonium samples with diameters of less than 200 mm. The average neutron energy from the sample can be measured using the ratio of the inner ring of He-3 tubes to the outer ring. This report describes the hardware, performance, and calibration for the ENMC.

  7. Extensive monitoring through multiple blood samples in professional soccer players

    DEFF Research Database (Denmark)

    Heisterberg, Mette F; Fahrenkrug, Jan; Krustrup, Peter

    2013-01-01

    ABSTRACT: The aim of this study was to make a comprehensive gathering of consecutive detailed blood samples from professional soccer players, and to analyze different blood parameters in relation to seasonal changes in training and match exposure. Blood samples were collected five times during a six months period and analyzed for 37 variables in 27 professional soccer players from the best Danish league. Additionally, players were tested for body composition, VO2max and physical performance by the Yo-Yo intermittent endurance sub-max test (IE2). Multiple variations in blood parameters occurred during the season. Leucocytes decreased with increased physical training. Lymphocytes decreased at the end of the season. VO2max decreased towards the end of the season whereas no significant changes were observed in the IE2 test. The regular blood samples from elite soccer players reveal significant changes ...

  8. Sensory and Instrumental Flavor Changes in Green Tea Brewed Multiple Times

    Science.gov (United States)

    Lee, Jeehyun; Chambers, Delores; Chambers, Edgar

    2013-01-01

    Green teas in leaf form are brewed multiple times, a common selling point. However, the flavor changes, both sensory and volatile compounds, of green teas that have been brewed multiple times are unknown. The objectives of this study were to determine how the aroma and flavor of green teas change as they are brewed multiple times, to determine if a relationship exists between green tea flavors and green tea volatile compounds, and to suggest the number of times that green tea leaves can be brewed. The first and second brews of the green tea samples provided similar flavor intensities. The third and fourth brews provided milder flavors and lower bitterness and astringency when measured using descriptive sensory analysis. In the brewed liquor of green tea mostly linalool, nonanal, geraniol, jasmone, and β-ionone volatile compounds were present at low levels (using gas chromatography-mass spectrometry). The geraniol, linalool, and linalool oxide compounds in green tea may contribute to the floral/perfumy flavor. Green teas in leaf form may be brewed up to four times: the first two brews providing stronger flavor, bitterness, and astringency whereas the third and fourth brews will provide milder flavor, bitterness, and astringency. PMID:28239138

  9. Honest Importance Sampling with Multiple Markov Chains.

    Science.gov (United States)

    Tan, Aixin; Doss, Hani; Hobert, James P

    2015-01-01

    Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general setup, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable selection.
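
    A minimal sketch of the basic multiple-density importance sampling estimator in the iid case only: draws from several fully normalized densities stand in for the Markov chains, each density gets the usual weights π(x)/π_j(x), and the per-density estimates are pooled by a simple average. The paper's regenerative variance estimation and the MCMC setting are not reproduced; the target, proposals, and test function are toy choices.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Target: E_pi[h(X)] with pi = N(0, 1) and h(x) = x^2  (true value 1.0).
    target = stats.norm(0.0, 1.0)
    h = lambda x: x**2

    # Two fully normalized densities stand in for the chains' stationary densities pi_1, pi_2;
    # iid draws replace the Markov chain samples in this simplified sketch.
    n_per = 5000
    draws = [
        (rng.normal(0.5, 1.2, n_per),           stats.norm(0.5, 1.2)),
        (1.5 * rng.standard_t(5, n_per),        stats.t(df=5, loc=0.0, scale=1.5)),
    ]

    estimates = []
    for x, density in draws:
        w = target.pdf(x) / density.pdf(x)            # importance weights pi(x) / pi_j(x)
        estimates.append(float(np.mean(w * h(x))))    # unbiased estimator for this density
    print(estimates, np.mean(estimates))              # each near 1.0, as is their average
    ```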

  10. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    Science.gov (United States)

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

    Our objective is to train support vector machines (SVM)-based localized multiple kernel learning (LMKL), using the alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization on both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either linear programming (for l1-norm) or with closed-form solutions (for lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality among the test part, we introduce the neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.

  11. Time evolution of multiple quantum coherences in NMR

    International Nuclear Information System (INIS)

    Sanchez, Claudia M.; Pastawski, Horacio M.; Levstein, Patricia R.

    2007-01-01

    In multiple quantum NMR, individual spins become correlated with one another over time through their dipolar couplings. In this way, the usual Zeeman selection rule can be overcome and forbidden transitions can be excited. Experimentally, these multiple quantum coherences (MQC) are formed by the application of appropriate sequences of radio frequency pulses that force the spins to act collectively. 1H spin coherences of even order up to 16 were excited in a polycrystalline sample of ferrocene (C5H5)2Fe and up to 32 in adamantane (C10H16), and their evolutions were studied in different conditions: (a) under the natural dipolar Hamiltonian, HZZ (free evolution), and with HZZ canceled out by (b) time reversion or (c) the MREV8 sequence. The results show that when canceling HZZ the coherences decay with characteristic times (τc ∼ 200 μs), which are more than one order of magnitude longer than those under free evolution (τc ∼ 10 μs). In addition, it is observed that with both MREV8 and time reversion sequences, the higher the order of the coherence (larger number of correlated spins) the faster the speed of degradation, as happens during the evolution with HZZ. In both systems, it is observed that the sequence of time reversion of the dipolar Hamiltonian preserves coherences for longer times than MREV8.

  12. Sample size determination for equivalence assessment with multiple endpoints.

    Science.gov (United States)

    Sun, Anna; Dong, Xiaoyu; Tsong, Yi

    2014-01-01

    Equivalence assessment between a reference and test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from a joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach for sample size determination in this case would select the largest sample size required for each endpoint. However, such a method ignores the correlation among endpoints. With the objective to reject all endpoints, and when the endpoints are uncorrelated, the power function is the product of all power functions for individual endpoints. With correlated endpoints, the sample size and power should be adjusted for such a correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation under both crossover and parallel designs. We further discuss the differences in sample size between the naive method without and with correlation-adjusted methods and illustrate with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.
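
    A Monte Carlo sketch of the correlation effect described above: paired log-scale differences for two endpoints (e.g., AUC and Cmax) are simulated as bivariate normal, TOST is run on each endpoint, and power is the fraction of simulations in which both pass. The margins, variances, and correlation values are illustrative, and the paper derives exact power rather than simulating it.

    ```python
    import numpy as np
    from scipy import stats

    def power_all_endpoints_pass(n, rho, delta=(0.0, 0.0), sd=(0.25, 0.25),
                                 margin=np.log(1.25), alpha=0.05, n_sim=20000, seed=7):
        """Monte Carlo power that TOST passes on BOTH endpoints (e.g. log AUC, log Cmax).

        n      -- subjects (paired test-reference differences on the log scale)
        rho    -- correlation between the two endpoints' within-subject differences
        delta  -- true mean log-differences;  sd -- SDs of those differences
        """
        rng = np.random.default_rng(seed)
        cov = np.array([[sd[0]**2, rho * sd[0] * sd[1]],
                        [rho * sd[0] * sd[1], sd[1]**2]])
        tcrit = stats.t.ppf(1 - alpha, df=n - 1)
        passes = 0
        for _ in range(n_sim):
            d = rng.multivariate_normal(delta, cov, size=n)        # per-subject differences
            m, se = d.mean(axis=0), d.std(axis=0, ddof=1) / np.sqrt(n)
            ok = ((m + margin) / se > tcrit) & ((margin - m) / se > tcrit)   # TOST per endpoint
            passes += bool(ok.all())
        return passes / n_sim

    for rho in (0.0, 0.8):
        print(rho, power_all_endpoints_pass(n=24, rho=rho))
    # Higher correlation between endpoints raises the joint power, so the naive rule of
    # taking the largest per-endpoint sample size is conservative.
    ```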

  13. Reverse time migration of multiples for OBS data

    KAUST Repository

    Zhang, Dongliang

    2014-01-01

    Reverse time migration of multiples (RTMM) is applied to OBS data with sparse receiver spacing. RTMM for OBS data, unlike a marine streamer acquisition, is implemented in the common receiver gathers (CRG) and provides a wider and denser illumination for each CRG than the conventional RTM of primaries. Hence, while the conventional RTM image contains strong aliasing artifacts due to the sparser receiver interval, the RTMM image suffers from these artifacts less. This benefit of RTMM is demonstrated with a numerical test on the Marmousi model for sparsely sampled OBS data.

  14. Reverse time migration of multiples for OBS data

    KAUST Repository

    Zhang, Dongliang

    2014-08-05

    Reverse time migration of multiples (RTMM) is applied to OBS data with sparse receiver spacing. RTMM for OBS data, unlike a marine streamer acquisition, is implemented in the common receiver gathers (CRG) and provides a wider and denser illumination for each CRG than the conventional RTM of primaries. Hence, while the conventional RTM image contains strong aliasing artifacts due to the sparser receiver interval, the RTMM image suffers from these artifacts less. This benefit of RTMM is demonstrated with a numerical test on the Marmousi model for sparsely sampled OBS data.

  15. Multiple time scale dynamics

    CERN Document Server

    Kuehn, Christian

    2015-01-01

    This book provides an introduction to dynamical systems with multiple time scales. The approach it takes is to provide an overview of key areas, particularly topics that are less available in introductory form. The broad range of topics included makes it accessible for students and researchers new to the field to gain a quick and thorough overview. The first of its kind, this book merges a wide variety of different mathematical techniques into a more unified framework. The book is highly illustrated with many examples and exercises and has an extensive bibliography. The target audience of this book is senior undergraduates, graduate students, and researchers interested in using multiple time scale dynamics theory in nonlinear science, either from a theoretical or a mathematical modeling perspective.

  16. Time functions function best as functions of multiple times

    NARCIS (Netherlands)

    Desain, P.; Honing, H.

    1992-01-01

    This article presents an elegant way of representing control functions at an abstract level. It introduces time functions that have multiple times as arguments. In this way the generalized concept of a time function can support absolute and relative kinds of time behavior. Furthermore the

  17. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Ling-Yu Duan

    2010-01-01

    Full Text Available Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL approach to take into account intraclass diversity for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmarking datasets of different characteristics including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, which has outperformed a canonical MKL.

  18. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Tian Yonghong

    2010-01-01

    Full Text Available Abstract Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL approach to take into account intraclass diversity for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmarking datasets of different characteristics including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, which has outperformed a canonical MKL.

  19. Simple and multiple linear regression: sample size considerations.

    Science.gov (United States)

    Hanley, James A

    2016-11-01

    The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Least-squares reverse time migration of multiples

    KAUST Repository

    Zhang, Dongliang; Schuster, Gerard T.

    2013-01-01

    The theory of least-squares reverse time migration of multiples (RTMM) is presented. In this method, least squares migration (LSM) is used to image free-surface multiples where the recorded traces are used as the time histories of the virtual sources at the hydrophones and the surface-related multiples are the observed data.

  1. A neutron multiplicity analysis method for uranium samples with liquid scintillators

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Hao, E-mail: zhouhao_ciae@126.com [China Institute of Atomic Energy, P.O.BOX 275-8, Beijing 102413 (China); Lin, Hongtao [Xi'an Research Institute of High-tech, Xi'an, Shaanxi 710025 (China); Liu, Guorong; Li, Jinghuai; Liang, Qinglei; Zhao, Yonggang [China Institute of Atomic Energy, P.O.BOX 275-8, Beijing 102413 (China)

    2015-10-11

    A new neutron multiplicity analysis method for uranium samples with liquid scintillators is introduced. An active well-type fast neutron multiplicity counter has been built, which consists of four BC501A liquid scintillators, an n/γ discrimination module MPD-4, a multi-stop time-to-digital converter MCS6A, and two Am–Li sources. A mathematical model is built to describe the detection processes of fission neutrons. Based on this model, equations of the form R=F*P*Q*T can be obtained, where F indicates the induced fission rate by the interrogation sources, P indicates the transfer matrix determined by the multiplication process, Q indicates the transfer matrix determined by detection efficiency, and T indicates the transfer matrix determined by the signal recording process and crosstalk in the counter. Unknown parameters of the item are determined by the solutions of the equations. A 252Cf source and some low enriched uranium items have been measured. The feasibility of the method is proven by its application to the data analysis of the experiments.

  2. Unsupervised Ensemble Anomaly Detection Using Time-Periodic Packet Sampling

    Science.gov (United States)

    Uchida, Masato; Nawata, Shuichi; Gu, Yu; Tsuru, Masato; Oie, Yuji

    We propose an anomaly detection method for finding patterns in network traffic that do not conform to legitimate (i.e., normal) behavior. The proposed method trains a baseline model describing the normal behavior of network traffic without using manually labeled traffic data. The trained baseline model is used as the basis for comparison with the audit network traffic. This anomaly detection works in an unsupervised manner through the use of time-periodic packet sampling, which is used in a manner that differs from its intended purpose — the lossy nature of packet sampling is used to extract normal packets from the unlabeled original traffic data. Evaluation using actual traffic traces showed that the proposed method has false positive and false negative rates in the detection of anomalies regarding TCP SYN packets comparable to those of a conventional method that uses manually labeled traffic data to train the baseline model. Performance variation due to the probabilistic nature of sampled traffic data is mitigated by using ensemble anomaly detection that collectively exploits multiple baseline models in parallel. Alarm sensitivity is adjusted for the intended use by using maximum- and minimum-based anomaly detection that effectively take advantage of the performance variations among the multiple baseline models. Testing using actual traffic traces showed that the proposed anomaly detection method performs as well as one using manually labeled traffic data and better than one using randomly sampled (unlabeled) traffic data.
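
    A toy sketch of the ensemble idea, assuming each baseline model is built from a different time-periodic sampling pass over normal traffic and scores audit intervals by a z-score of SYN-like counts: a maximum-based detector alarms when any model flags an interval (more sensitive), a minimum-based one only when all models flag it (more conservative). The scoring function, thresholds, and traffic model are placeholders, not the paper's baseline model.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def train_baseline(sampled_counts):
        """Baseline model = mean/std of per-interval SYN counts in sampled normal traffic."""
        return float(np.mean(sampled_counts)), float(np.std(sampled_counts) + 1e-9)

    def score(model, audit_counts):
        mu, sd = model
        return np.abs(audit_counts - mu) / sd            # z-score per audit interval

    # Several baselines, each built from an independent time-periodic sampling pass.
    normal_traffic = rng.poisson(100, size=5000)
    baselines = [train_baseline(normal_traffic[offset::10]) for offset in range(5)]

    audit = rng.poisson(100, size=50)
    audit[30:35] = 400                                    # injected SYN-flood-like burst

    scores = np.vstack([score(m, audit) for m in baselines])   # shape (n_models, n_intervals)
    threshold = 5.0
    max_alarm = scores.max(axis=0) > threshold            # sensitive: any model flags
    min_alarm = scores.min(axis=0) > threshold            # conservative: all models flag
    print(np.flatnonzero(max_alarm), np.flatnonzero(min_alarm))
    ```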

  3. Multiples least-squares reverse time migration

    KAUST Repository

    Zhang, Dongliang

    2013-01-01

    To enhance the image quality, we propose multiples least-squares reverse time migration (MLSRTM), which transforms each hydrophone into a virtual point source with a time history equal to that of the recorded data. Since each recorded trace is treated as a virtual source, knowledge of the source wavelet is not required. Numerical tests on synthetic data for the Sigsbee2B model and field data from the Gulf of Mexico show that MLSRTM can improve the image quality by removing artifacts, balancing amplitudes, and suppressing crosstalk compared to standard migration of the free-surface multiples. The potential liability of this method is that multiples require several roundtrips between the reflector and the free surface, so that high frequencies in the multiples are attenuated compared to the primary reflections. This can lead to lower resolution in the migration image compared to that computed from primaries.

  4. Single start multiple stop time digitizer

    International Nuclear Information System (INIS)

    Deshpande, P.A.; Mukhopadhyay, P.K.; Gopalakrishnan, K.R.

    1997-01-01

    A single start multiple stop time digitizer has been developed which can digitize the time between a start pulse and multiple stop pulses. The system has been designed as a PC add-on card. The resolution of the instrument is 10 ns and the maximum length of time that it can measure is 1.28 milliseconds. Apart from time digitization, it can also resolve the height of the incoming pulses into 64 levels. After each input pulse the system dead time is less than 300 ns. The driver software for this card has been developed on the DOS platform. It uses a graphical user interface to provide a user-friendly environment. The system is intended to be used in time-of-flight mass spectrometry experiments. It can also be used for time-of-flight experiments in nuclear physics. (author). 2 figs

  5. Field evaluation of personal sampling methods for multiple bioaerosols.

    Science.gov (United States)

    Wang, Chi-Hsun; Chen, Bean T; Han, Bor-Cheng; Liu, Andrew Chi-Yeu; Hung, Po-Chen; Chen, Chih-Yong; Chao, Hsing Jasmine

    2015-01-01

    Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols.

  6. Least squares reverse time migration of controlled order multiples

    Science.gov (United States)

    Liu, Y.

    2016-12-01

    Imaging using the reverse time migration of multiples generates inherent crosstalk artifacts due to the interference among different-order multiples. Traditionally, least-squares fitting has been used to address this issue by seeking the best objective function to measure the amplitude differences between the predicted and observed data. We have developed an alternative objective function by decomposing multiples into different orders, minimizing the difference between Born-modeling-predicted multiples and specific-order multiples from the observed data in order to attenuate the crosstalk. This method is denoted the least-squares reverse time migration of controlled-order multiples (LSRTM-CM). Our numerical examples demonstrate that LSRTM-CM can significantly improve image quality compared with reverse time migration of multiples and least-squares reverse time migration of multiples. Acknowledgments: This research was funded by the National Natural Science Foundation of China (Grant Nos. 41430321 and 41374138).

  7. Correction of Sample-Time Error for Time-Interleaved Sampling System Using Cubic Spline Interpolation

    Directory of Open Access Journals (Sweden)

    Qin Guo-jie

    2014-08-01

    Full Text Available Sample-time errors can greatly degrade the dynamic range of a time-interleaved sampling system. In this paper, a novel correction technique employing cubic spline interpolation is proposed for inter-channel sample-time error compensation. The cubic spline interpolation compensation filter is developed in the form of a finite-impulse-response (FIR) filter structure, and the method for deriving the coefficients of the compensation filter is presented. A 4 GS/s two-channel time-interleaved ADC prototype system has been implemented to evaluate the performance of the technique. The experimental results show that the correction technique is effective in attenuating spurs and improving the dynamic performance of the system.
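    A minimal Python sketch of the underlying idea, assuming a hypothetical 5 ps skew on one channel of a two-channel interleaved ADC: the skewed channel is re-evaluated on the ideal uniform grid with a cubic spline (the paper realizes the equivalent operation as an FIR compensation filter).

        import numpy as np
        from scipy.interpolate import CubicSpline

        fs = 4e9                                   # aggregate sample rate of the system
        n = 256
        t = np.arange(n) / fs
        x = np.sin(2 * np.pi * 150e6 * t)          # test tone on the ideal grid

        dt = 5e-12                                 # assumed sample-time error of channel 1
        ch1_times = t[1::2] + dt                   # channel 1 actually samples late
        ch1_samples = np.sin(2 * np.pi * 150e6 * ch1_times)

        # Correction: fit a cubic spline to the skewed samples and re-evaluate it
        # on the ideal uniform instants of channel 1.
        corrected = CubicSpline(ch1_times, ch1_samples)(t[1::2])
        print(np.max(np.abs(ch1_samples - x[1::2])),   # error before correction
              np.max(np.abs(corrected - x[1::2])))     # error after correction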

  8. Field evaluation of personal sampling methods for multiple bioaerosols.

    Directory of Open Access Journals (Sweden)

    Chi-Hsun Wang

    Full Text Available Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols.

  9. Investigation of Bicycle Travel Time Estimation Using Bluetooth Sensors for Low Sampling Rates

    Directory of Open Access Journals (Sweden)

    Zhenyu Mei

    2014-10-01

    Full Text Available Filtering the data for bicycle travel time using Bluetooth sensors is crucial to the estimation of link travel times on a corridor. The current paper describes an adaptive filtering algorithm for estimating bicycle travel times using Bluetooth data, with consideration of low sampling rates. The bicycle travel-time data collected with Bluetooth sensors have two characteristics. First, the bicycle flow contains stable and unstable conditions. Second, the collected data have low sampling rates (less than 1%). To avoid erroneous inference, filters are introduced to “purify” multiple time series. The valid data are identified within a dynamically varying validity window with the use of a robust data-filtering procedure. The size of the validity window varies based on the number of preceding sampling intervals without a Bluetooth record. Applications of the proposed algorithm to the dataset from Genshan East Road and Moganshan Road in Hangzhou demonstrate its ability to track typical variations in bicycle travel time efficiently, while suppressing high-frequency noise signals.
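    A toy sketch of the validity-window idea described above, with invented window parameters (they are not the values calibrated in the paper): a record is kept only if it falls inside a window around the running median, and the window widens with every sampling interval that produced no Bluetooth match.

        import statistics

        def filter_travel_times(records, base_window=0.3, growth=0.1):
            """Keep a travel-time record only if it lies within a validity window
            around the running median of recent valid records; the window widens
            with each preceding interval that had no Bluetooth record."""
            valid, recent, missed = [], [], 0
            for t in records:
                if t is None:                      # sampling interval with no match
                    missed += 1
                    continue
                window = base_window + growth * missed
                med = statistics.median(recent) if recent else t
                if abs(t - med) <= window * med:
                    valid.append(t)
                    recent = (recent + [t])[-10:]
                    missed = 0
                # else: discard as an outlier (e.g. a stop or a mis-matched device)
            return valid

        # Travel times in seconds; None marks intervals without a Bluetooth record.
        print(filter_travel_times([300, 310, None, None, 640, 330, 305]))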

  10. Time-domain multiple-quantum NMR

    International Nuclear Information System (INIS)

    Weitekamp, D.P.

    1982-11-01

    The development of time-domain multiple-quantum nuclear magnetic resonance is reviewed through mid 1982 and some prospects for future development are indicated. Particular attention is given to the problem of obtaining resolved, interpretable, many-quantum spectra for anisotropic magnetically isolated systems of coupled spins. New results are presented on a number of topics including the optimization of multiple-quantum-line intensities, analysis of noise in two-dimensional spectroscopy, and the use of order-selective excitation for cross polarization between nuclear-spin species

  11. Extensive monitoring through multiple blood samples in professional soccer players.

    Science.gov (United States)

    Heisterberg, Mette F; Fahrenkrug, Jan; Krustrup, Peter; Storskov, Anders; Kjær, Michael; Andersen, Jesper L

    2013-05-01

    The aim of this study was to make a comprehensive gathering of consecutive detailed blood samples from professional soccer players and to analyze different blood parameters in relation to seasonal changes in training and match exposure. Blood samples were collected 5 times during a 6-month period and analyzed for 37 variables in 27 professional soccer players from the best Danish league. Additionally, the players were tested for body composition, V̇O2max and physical performance by the Yo-Yo intermittent endurance submax test (IE2). Multiple variations in blood parameters occurred during the observation period, including a decrease in hemoglobin and an increase in hematocrit as the competitive season progressed. Iron and transferrin were stable, whereas ferritin showed a decrease at the end of the season. Immunoglobulin A (IgA) and IgM increased in the period with basal physical training and at the end of the season. Leucocytes decreased with increased physical training. Lymphocytes decreased at the end of the season. The V̇O2max decreased toward the end of the season, whereas no significant changes were observed in the IE2 test. The regular blood samples from elite soccer players reveal significant changes that may be related to changes in training pattern, match exposure, or length of the match season. The end of the preparation period and the end of the competitive season, in particular, seem to be time points where the blood-derived values indicate that the players are under excessive physical strain and might be more subject to possible overreaching or overtraining conditions. We suggest that regular analyses of blood samples could be an important initiative to optimize training adaptation, training load, and game participation, but sampling has to be regular, and a database has to be built for each individual player.

  12. Improving neutron multiplicity counting for the spatial dependence of multiplication: Results for spherical plutonium samples

    Energy Technology Data Exchange (ETDEWEB)

    Göttsche, Malte, E-mail: malte.goettsche@physik.uni-hamburg.de; Kirchner, Gerald

    2015-10-21

    The fissile mass deduced from a neutron multiplicity counting measurement of high mass dense items is underestimated if the spatial dependence of the multiplication is not taken into account. It is shown that an appropriate physics-based correction successfully removes the bias. It depends on four correction coefficients which can only be exactly determined if the sample geometry and composition are known. In some cases, for example in warhead authentication, available information on the sample will be very limited. MCNPX-PoliMi simulations have been performed to obtain the correction coefficients for a range of spherical plutonium metal geometries, with and without polyethylene reflection placed around the spheres. For hollow spheres, the analysis shows that the correction coefficients can be approximated with high accuracy as a function of the sphere's thickness depending only slightly on the radius. If the thickness remains unknown, less accurate estimates of the correction coefficients can be obtained from the neutron multiplication. The influence of isotopic composition is limited. The correction coefficients become somewhat smaller when reflection is present.

  13. Multiple Smaller Missions as a Direct Pathway to Mars Sample Return

    Science.gov (United States)

    Niles, P. B.; Draper, D. S.; Evans, C. A.; Gibson, E. K.; Graham, L. D.; Jones, J. H.; Lederer, S. M.; Ming, D.; Seaman, C. H.; Archer, P. D.

    2012-01-01

    Recent discoveries by the Mars Exploration Rovers, Mars Express, Mars Odyssey, and Mars Reconnaissance Orbiter spacecraft include multiple, tantalizing astrobiological targets representing both past and present environments on Mars. The most desirable path to Mars Sample Return (MSR) would be to collect and return samples from the site that provides the clearest examples of the variety of rock types considered a high priority for sample return (pristine igneous, sedimentary, and hydrothermal). Here we propose an MSR architecture in which the next steps (potentially launched in 2018) would entail a series of smaller missions, including caching, to multiple landing sites to verify the presence of high-priority sample return targets through in situ analyses. This alternative architecture to one flagship-class sample-caching mission to a single site would preserve a direct path to MSR as stipulated by the Planetary Decadal Survey, while permitting investigation of diverse deposit types and providing comparison of the site of returned samples to other aqueous environments on early Mars.

  14. Chromosomal radiosensitivity of human leucocytes in relation to sampling time

    International Nuclear Information System (INIS)

    Buul, P.P.W. van; Natarajan, A.T.

    1980-01-01

    Frequencies of chromosomal aberrations after in vitro X-irradiation of peripheral blood lymphocytes were determined at different times after initiation of the cultures. In each culture, the kinetics of cell multiplication was followed by using BrdU labelling and differential staining of chromosomes. The results indicate that the mixing of first- and second-cell-cycle cells at later sampling times cannot explain the observed variation in the frequencies of chromosomal aberrations, and that donor-to-donor variation is a predominant factor influencing aberration yields. The condition of a donor at the time of sampling seems to be most important, because repeat experiments on the same donor also showed marked variability. (orig.)

  15. Multiples least-squares reverse time migration

    KAUST Repository

    Zhang, Dongliang; Zhan, Ge; Dai, Wei; Schuster, Gerard T.

    2013-01-01

    To enhance the image quality, we propose multiples least-squares reverse time migration (MLSRTM) that transforms each hydrophone into a virtual point source with a time history equal to that of the recorded data. Since each recorded trace is treated as a virtual source, knowledge of the source wavelet is not required.

  16. A longitudinal field multiple sampling ionization chamber for RIBLL2

    International Nuclear Information System (INIS)

    Tang Shuwen; Ma Peng; Lu Chengui; Duan Limin; Sun Zhiyu; Yang Herun; Zhang Jinxia; Hu Zhengguo; Xu Shanhu

    2012-01-01

    A longitudinal-field MUltiple Sampling Ionization Chamber (MUSIC), which makes multiple measurements of energy loss for very high energy heavy ions at RIBLL2, has been constructed and tested with a three-component α source ( 239 Pu: 3.435 MeV, 241 Am: 3.913 MeV, 244 Cm: 4.356 MeV). The voltage plateau curve has been measured, and −500 V is determined to be a suitable working voltage. The energy resolution of a sampling unit is 271.4 keV FWHM for 3.435 MeV of deposited energy. A Geant4 Monte Carlo simulation indicates that the detector can provide unique particle identification for ions with Z ≥ 4. (authors)

  17. Practical reporting times for environmental samples

    International Nuclear Information System (INIS)

    Bayne, C.K.; Schmoyer, D.D.; Jenkins, R.A.

    1993-02-01

    Preanalytical holding times for environmental samples are specified because chemical and physical characteristics may change between sampling and chemical analysis. For example, the Federal Register prescribes a preanalytical holding time of 14 days for volatile organic compounds in soil stored at 4 degrees C. The American Society for Testing and Materials (ASTM) uses a more technical definition: the preanalytical holding time is the day when the analyte concentration for an environmental sample falls below the lower 99% confidence interval on the analyte concentration at day zero. This study reviews various holding time definitions and suggests a new preanalytical holding time approach using acceptable error rates for measuring an environmental analyte. This practical reporting time (PRT) approach has been applied to nineteen volatile organic compounds and four explosives in three environmental soil samples. A PRT nomograph of error rates has been developed to estimate the consequences of missing a preanalytical holding time. This nomograph can be applied to a large class of analytes with concentrations that decay linearly or exponentially with time, regardless of sample matrices and storage conditions.
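    The PRT idea lends itself to a one-line calculation; the sketch below, with invented decay and error numbers rather than the paper's nomograph, shows how long an exponentially decaying analyte can be held before a chosen acceptable error is exceeded.

        import math

        def practical_reporting_time(decay_rate_per_day, max_relative_error):
            """Days until an exponentially decaying analyte, C(t) = C0*exp(-k*t),
            has lost more than the acceptable fraction of its day-zero value
            (illustrative model only, not the published nomograph)."""
            return -math.log(1.0 - max_relative_error) / decay_rate_per_day

        # e.g. a volatile compound losing about 2% per day, 10% acceptable error
        print(practical_reporting_time(decay_rate_per_day=0.02, max_relative_error=0.10))
        # -> roughly 5.3 days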

  18. Practical reporting times for environmental samples

    Energy Technology Data Exchange (ETDEWEB)

    Bayne, C.K.; Schmoyer, D.D.; Jenkins, R.A.

    1993-02-01

    Preanalytical holding times for environmental samples are specified because chemical and physical characteristics may change between sampling and chemical analysis. For example, the Federal Register prescribes a preanalytical holding time of 14 days for volatile organic compounds in soil stored at 4{degrees}C. The American Society for Testing and Materials (ASTM) uses a more technical definition: the preanalytical holding time is the day when the analyte concentration for an environmental sample falls below the lower 99% confidence interval on the analyte concentration at day zero. This study reviews various holding time definitions and suggests a new preanalytical holding time approach using acceptable error rates for measuring an environmental analyte. This practical reporting time (PRT) approach has been applied to nineteen volatile organic compounds and four explosives in three environmental soil samples. A PRT nomograph of error rates has been developed to estimate the consequences of missing a preanalytical holding time. This nomograph can be applied to a large class of analytes with concentrations that decay linearly or exponentially with time, regardless of sample matrices and storage conditions.

  19. Effects of (α,n) contaminants and sample multiplication on statistical neutron correlation measurements

    International Nuclear Information System (INIS)

    Dowdy, E.J.; Hansen, G.E.; Robba, A.A.; Pratt, J.C.

    1980-01-01

    The complete formalism for the use of statistical neutron fluctuation measurements for the nondestructive assay of fissionable materials has been developed. This formalism includes the effect of detector deadtime, neutron multiplicity, random neutron pulse contributions from (α,n) contaminants in the sample, and the sample multiplication of both fission-related and background neutrons

  20. Multiple output timing and trigger generator

    Energy Technology Data Exchange (ETDEWEB)

    Wheat, Robert M. [Los Alamos National Laboratory; Dale, Gregory E [Los Alamos National Laboratory

    2009-01-01

    In support of the development of a multiple-stage pulse modulator at the Los Alamos National Laboratory, we have developed a first-generation, multiple output timing and trigger generator. Exploiting Commercial Off The Shelf (COTS) Micro Controller Units (MCUs), the timing and trigger generator provides 32 independent outputs with a timing resolution of about 500 ns. The timing and trigger generator system is comprised of two MCU boards and a single PC. One of the MCU boards performs the functions of timing and signal generation (the timing controller), while the second MCU board accepts commands from the PC and provides the timing instructions to the timing controller. The PC provides the user interface for adjusting the on and off timing for each of the output signals. The system provides 32 output or timing signals which can be pre-programmed to be in an on or off state for each of 64 time steps. The width or duration of each of the 64 time steps is programmable from 2 {micro}s to 2.5 ms with a minimum time resolution of 500 ns. The repetition rate of the programmed pulse train is limited only by the time duration of the programmed event. This paper describes the design and function of the timing and trigger generator system and software, including test results and measurements.
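    The programming model described above (32 outputs, 64 time steps, a programmable duration per step) can be mocked up in a few lines; the schedule below is a made-up example, not the LANL firmware or its interface.

        # Build a 32-output x 64-step on/off table and flatten it into a list of
        # (time_in_us, output_index, new_state) transition events.
        N_OUTPUTS, N_STEPS = 32, 64

        # Hypothetical programme: output 0 on for steps 0-9, output 5 on for steps 4-19.
        table = [[False] * N_OUTPUTS for _ in range(N_STEPS)]
        for step in range(0, 10):
            table[step][0] = True
        for step in range(4, 20):
            table[step][5] = True

        step_us = [10.0] * N_STEPS   # per-step duration (2 us .. 2.5 ms in the real unit)

        events, t, prev = [], 0.0, [False] * N_OUTPUTS
        for step in range(N_STEPS):
            for out in range(N_OUTPUTS):
                if table[step][out] != prev[out]:
                    events.append((t, out, table[step][out]))
            prev = list(table[step])
            t += step_us[step]

        print(events)  # [(0.0, 0, True), (40.0, 5, True), (100.0, 0, False), (200.0, 5, False)]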

  1. Identification of driving network of cellular differentiation from single sample time course gene expression data

    Science.gov (United States)

    Chen, Ye; Wolanyk, Nathaniel; Ilker, Tunc; Gao, Shouguo; Wang, Xujing

    Methods developed based on bifurcation theory have demonstrated their potential for driving-network identification in complex human diseases, including the work by Chen et al. Recently, bifurcation theory has also been successfully applied to model cellular differentiation. However, one often faces a technical challenge in driving-network prediction: a time-course cellular differentiation study often contains only one sample at each time point, while driving-network prediction typically requires multiple samples at each time point to infer the variation and interaction structures of candidate genes for the driving network. In this study, we investigate several methods to identify both the critical time point and the driving network through examination of how each time point affects the autocorrelation and phase locking. We apply these methods to a high-throughput sequencing (RNA-Seq) dataset of 42 subsets of thymocytes and mature peripheral T cells at multiple time points during their differentiation (GSE48138 from GEO). We compare the predicted driving genes with known transcription regulators of cellular differentiation, and we discuss the advantages and limitations of our proposed methods, as well as potential further improvements.

  2. Application of In-Segment Multiple Sampling in Object-Based Classification

    Directory of Open Access Journals (Sweden)

    Nataša Đurić

    2014-12-01

    Full Text Available When object-based analysis is applied to very high-resolution imagery, pixels within the segments reveal large spectral inhomogeneity; their distribution can be considered complex rather than normal. When normality is violated, the classification methods that rely on the assumption of normally distributed data are not as successful or accurate. It is hard to detect normality violations in small samples. The segmentation process produces segments that vary highly in size; samples can be very big or very small. This paper investigates whether the complexity within the segment can be addressed using multiple random sampling of segment pixels and multiple calculations of similarity measures. In order to analyze the effect sampling has on classification results, the statistics and probability-value equations of the non-parametric two-sample Kolmogorov-Smirnov test and the parametric Student’s t-test are selected as similarity measures in the classification process. The performance of both classifiers was assessed on a WorldView-2 image for four land cover classes (roads, buildings, grass and trees) and compared to two commonly used object-based classifiers, k-Nearest Neighbor (k-NN) and Support Vector Machine (SVM). Both proposed classifiers showed a slight improvement in the overall classification accuracies and produced more accurate classification maps when compared to the ground truth image.
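    A rough sketch of the in-segment multiple sampling idea with the KS similarity measure, using synthetic single-band pixel values and made-up class references (nothing here comes from the WorldView-2 experiment): several random pixel subsets are drawn from a segment, each is KS-tested against every class reference, and the class with the largest median p-value wins.

        import numpy as np
        from scipy.stats import ks_2samp

        rng = np.random.default_rng(2)

        # Synthetic single-band pixel values: reference samples for two classes and
        # one spectrally inhomogeneous segment (mostly grass-like, some mixed pixels).
        ref = {"grass": rng.normal(0.35, 0.03, 500), "road": rng.normal(0.60, 0.05, 500)}
        segment = np.concatenate([rng.normal(0.34, 0.03, 900), rng.normal(0.45, 0.02, 100)])

        def classify(segment_pixels, references, n_draws=20, sample_size=50):
            """Draw several random pixel subsets, KS-test each against every class
            reference, and assign the class with the highest median p-value."""
            p = {}
            for label, ref_pixels in references.items():
                pvals = [ks_2samp(rng.choice(segment_pixels, sample_size, replace=False),
                                  ref_pixels).pvalue
                         for _ in range(n_draws)]
                p[label] = float(np.median(pvals))
            return max(p, key=p.get), p

        print(classify(segment, ref))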

  3. Accelerated Enveloping Distribution Sampling: Enabling Sampling of Multiple End States while Preserving Local Energy Minima.

    Science.gov (United States)

    Perthold, Jan Walther; Oostenbrink, Chris

    2018-05-17

    Enveloping distribution sampling (EDS) is an efficient approach to calculate multiple free-energy differences from a single molecular dynamics (MD) simulation. However, the construction of an appropriate reference-state Hamiltonian that samples all states efficiently is not straightforward. We propose a novel approach for the construction of the EDS reference-state Hamiltonian, related to a previously described procedure to smoothen energy landscapes. In contrast to previously suggested EDS approaches, our reference-state Hamiltonian preserves local energy minima of the combined end-states. Moreover, we propose an intuitive, robust and efficient parameter optimization scheme to tune EDS Hamiltonian parameters. We demonstrate the proposed method with established and novel test systems and conclude that our approach allows for the automated calculation of multiple free-energy differences from a single simulation. Accelerated EDS promises to be a robust and user-friendly method to compute free-energy differences based on solid statistical mechanics.
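    For context, the conventional EDS reference state that the record alludes to combines the end-state Hamiltonians as H_R = -(1/(βs)) ln Σ_i exp(-βs(H_i - E_i^R)); the sketch below evaluates that textbook expression for one configuration with invented energies, and is not the accelerated-EDS functional proposed in the paper.

        import numpy as np

        def eds_reference_energy(energies, offsets, beta, s=1.0):
            """Conventional EDS reference-state energy for one configuration:
            H_R = -(1/(beta*s)) * ln sum_i exp(-beta*s*(H_i - E_i^R)).
            The accelerated-EDS paper replaces this with a form that preserves
            local minima of the end states; this is only the standard expression."""
            x = -beta * s * (np.asarray(energies) - np.asarray(offsets))
            xmax = x.max()                       # log-sum-exp for numerical stability
            return -(xmax + np.log(np.exp(x - xmax).sum())) / (beta * s)

        # Two hypothetical end-state energies (kJ/mol) at one configuration, T = 300 K
        beta = 1.0 / (0.008314 * 300.0)
        print(eds_reference_energy([-120.0, -95.0], offsets=[0.0, -20.0], beta=beta, s=0.05))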

  4. Variable dead time counters. 1 - theoretical responses and the effects of neutron multiplication

    International Nuclear Information System (INIS)

    Lees, E.W.; Hooton, B.W.

    1978-10-01

    A theoretical expression is derived for calculating the response of any variable dead time counter (VDC) used in the passive assay of plutonium by neutron counting of the natural spontaneous fission activity. The effects of neutron multiplication in the sample, arising from interactions of the original spontaneous fission neutrons, are shown to modify the linear relationship between the VDC signal and the Pu mass. Numerical examples are shown for the Euratom VDC, and a systematic investigation of the various factors affecting neutron multiplication is reported. Limited comparisons between the calculations and experimental data indicate provisional validity of the calculations. (author)

  5. Multiple sampling ionization chamber (MUSIC) for measuring the charge of relativistic heavy ions

    Energy Technology Data Exchange (ETDEWEB)

    Christie, W.B.; Romero, J.L.; Brady, F.P.; Tull, C.E.; Castaneda, C.M.; Barasch, E.F.; Webb, M.L.; Drummond, J.R.; Crawford, H.J.; Flores, I.

    1987-04-01

    A large area (1 m x 2 m) multiple sampling ionization chamber (MUSIC) has been constructed and tested. The MUSIC detector makes multiple measurements of energy 'loss', dE/dx, for a relativistic heavy ion. Given the velocity, the charge of the ion can be extracted from the energy loss distributions. The widths of the distributions we observe are much narrower than predicted by Vavilov's theory for energy loss while agreeing well with the theory of Badhwar which deals with the energy deposited. The versatile design of MUSIC allows a variety of anode configurations which results in a large dynamic range of charge. In our tests to date we have observed charge resolutions of 0.25e fwhm for 727 MeV/nucleon /sup 40/Ar and 0.30e fwhm for 1.08 GeV/nucleon /sup 139/La and /sup 139/La fragments. Vertical position and multiple track determination are obtained by using time projection chamber electronics. Preliminary tests indicate that the position resolution is also very good, with σ ≈ 100 μm.

  6. The gingival vein as a minimally traumatic site for multiple blood sampling in guinea pigs and hamsters.

    Science.gov (United States)

    Rodrigues, Mariana Valotta; de Castro, Simone Oliveira; de Albuquerque, Cynthia Zaccanini; Mattaraia, Vânia Gomes de Moura; Santoro, Marcelo Larami

    2017-01-01

    Laboratory animals are still necessary in scientific investigation and vaccine testing, but while novel methodological approaches are not available for their replacement, the search for new, humane, easy, and painless methods is necessary to diminish their stress and pain. When multiple blood samples are to be collected from hamsters and guinea pigs, the limited number of available venipuncture sites, which is greatly reduced in these species compared with other rodents because they lack a long tail, poses a challenge to animal caregivers and researchers. This study therefore aimed to evaluate whether gingival vein puncture could be used as an additional route for obtaining multiple blood samples from anesthetized hamsters and guinea pigs in such a way that animal behavior, well-being and hematological parameters would not be altered. Twelve anesthetized Syrian golden hamsters and English guinea pigs were randomly allocated into two groups: a control group, whose blood samples were not collected, and an experimental group in which blood samples (200 microliters) were collected by gingival vein puncture at weekly intervals over six weeks. Clinical assessment, body weight gain and complete blood cell counts were evaluated weekly, and control and experimental animals were euthanized at week seven, when the mentolabial region was processed for histological analyses. Multiple blood sampling from the gingival vein evoked no clinical manifestations or alterations in the behavioral repertoire, nor a statistically significant difference in weight gain, in either species. Guinea pigs showed no alteration in red blood cell, leukocyte or platelet parameters over time. Hamsters developed a characteristic pattern of age-related physiological changes, which were considered normal. Histological analyses showed no difference in morphological structures in the interdental gingiva, confirming that multiple blood sampling is only minimally traumatic. Thus, these results provide evidence that blood collection from multiple

  7. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

    Science.gov (United States)

    Algina, James; Olejnik, Stephen

    2000-01-01

    Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

  8. Multiple Time-Step Dual-Hamiltonian Hybrid Molecular Dynamics - Monte Carlo Canonical Propagation Algorithm.

    Science.gov (United States)

    Chen, Yunjie; Kale, Seyit; Weare, Jonathan; Dinner, Aaron R; Roux, Benoît

    2016-04-12

    A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method.

  9. Time evolution and use of multiple times in the N-body problem

    International Nuclear Information System (INIS)

    McGuire, J.H.; Godunov, A.L.

    2003-01-01

    Under certain conditions it is possible to describe time evolution using different times for different particles. Use of multiple times is optional in the independent particle approximation, where interparticle interactions are removed, and the N-particle evolution operator factors into N single-particle evolution operators. In this limit one may use either a single time, with a single energy-time Fourier transform, or N different times with a different energy-time transform for each particle. The use of different times for different particles is fully justified when coherence between single-particle amplitudes is lost, e.g., if relatively strong randomly fluctuating residual fields influence each particle independently. However, when spatial correlation is present the use of multiple times is not feasible, even when the evolution of the particles is uncorrelated in time. Some calculations in simple atomic systems with and without spatial and temporal correlation between different electrons are included

  10. Sampling plans in attribute mode with multiple levels of precision

    International Nuclear Information System (INIS)

    Franklin, M.

    1986-01-01

    This paper describes a method for deriving sampling plans for nuclear material inventory verification. The method presented is different from the classical approach which envisages two levels of measurement precision corresponding to NDA and DA. In the classical approach the precisions of the two measurement methods are taken as fixed parameters. The new approach is based on multiple levels of measurement precision. The design of the sampling plan consists of choosing the number of measurement levels, the measurement precision to be used at each level and the sample size to be used at each level

  11. One-dimensional multiple-well oscillators: A time-dependent

    Indian Academy of Sciences (India)

    ... quantum mechanical multiple-well oscillators. An imaginary-time evolution technique, coupled with the minimization of energy expectation value to reach a global minimum, subject to orthogonality constraint (for excited states) has been employed. Pseudodegeneracy in symmetric, deep multiple-well potentials, probability ...

  12. A New Time-Hopping Multiple Access Communication System Simulator: Application to Ultra-Wideband

    Directory of Open Access Journals (Sweden)

    José M. Páez-Borrallo

    2005-03-01

    Full Text Available Time-hopping ultra-wideband technology presents some very attractive features for future indoor wireless systems in terms of achievable transmission rate and multiple access capabilities. This paper develops an algorithm for designing time-hopping system simulators especially suitable for ultra-wideband, which takes advantage of some of the specific characteristics of this kind of system. The algorithm improves both the timing capabilities and the achievable sampling rate, and can be used to investigate the influence of different parameters on the performance of the system. An additional result is the validation of a new general performance formula for time-hopping ultra-wideband systems with multipath channels.

  13. A multiple sampling ionization chamber (MUSIC) for measuring the charge of relativistic heavy ions

    International Nuclear Information System (INIS)

    Christie, W.B.; Romero, J.L.; Brady, F.P.; Tull, C.E.; Castaneda, C.M.; Barasch, E.F.; Webb, M.L.; Drummond, J.R.; Sann, H.; Young, J.C.

    1987-01-01

    A large area (1 m x 2 m) multiple sampling ionization chamber (MUSIC) has been constructed and tested. The MUSIC detector makes multiple measurements of energy 'loss', dE/dx, for a relativistic heavy ion. Given the velocity, the charge of the ion can be extracted from the energy loss distributions. The widths of the distributions we observe are much narrower than predicted by Vavilov's theory for energy loss while agreeing well with the theory of Badhwar which deals with the energy deposited. The versatile design of MUSIC allows a variety of anode configurations which results in a large dynamic range of charge. In our tests to date we have observed charge resolutions of 0.25e fwhm for 727 MeV/nucleon 40 Ar and 0.30e fwhm for 1.08 GeV/nucleon 139 La and 139 La fragments. Vertical position and multiple track determination are obtained by using time projection chamber electronics. Preliminary tests indicate that the position resolution is also very good with σ≅100 μm. (orig.)

  14. Low-sampling-rate ultra-wideband digital receiver using equivalent-time sampling

    KAUST Repository

    Ballal, Tarig

    2014-09-01

    In this paper, we propose an all-digital scheme for ultra-wideband symbol detection. In the proposed scheme, the received symbols are sampled many times below the Nyquist rate. It is shown that when the number of symbol repetitions, P, is co-prime with the symbol duration given in Nyquist samples, the receiver can sample the received data P times below the Nyquist rate, without loss of fidelity. The proposed scheme is applied to perform channel estimation and binary pulse position modulation (BPPM) detection. Results are presented for two receivers operating at two different sampling rates that are 10 and 20 times below the Nyquist rate. The feasibility of the proposed scheme is demonstrated in different scenarios, with reasonable bit error rates obtained in most of the cases.
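    The co-prime condition in this record has a simple arithmetic consequence that the following numpy toy (with arbitrary N and P, not the receivers in the paper) demonstrates: sub-Nyquist samples taken once every P Nyquist periods over P repetitions land on every Nyquist-grid position exactly once, so the full-rate symbol can be reassembled by reordering.

        import numpy as np

        N = 25                                   # symbol duration in Nyquist samples
        P = 7                                    # number of symbol repetitions; gcd(P, N) = 1
        rng = np.random.default_rng(3)
        symbol = rng.standard_normal(N)          # one Nyquist-rate symbol waveform

        received = np.tile(symbol, P)            # symbol repeated P times
        slow = received[::P]                     # sampled P times below the Nyquist rate

        # Because gcd(P, N) = 1, the slow samples hit every Nyquist-grid position
        # exactly once; reordering them recovers the full-rate waveform.
        reconstructed = np.empty(N)
        reconstructed[(np.arange(N) * P) % N] = slow
        print(np.allclose(reconstructed, symbol))   # True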

  15. Low-sampling-rate ultra-wideband digital receiver using equivalent-time sampling

    KAUST Repository

    Ballal, Tarig; Al-Naffouri, Tareq Y.

    2014-01-01

    In this paper, we propose an all-digital scheme for ultra-wideband symbol detection. In the proposed scheme, the received symbols are sampled many times below the Nyquist rate. It is shown that when the number of symbol repetitions, P, is co-prime with the symbol duration given in Nyquist samples, the receiver can sample the received data P times below the Nyquist rate, without loss of fidelity. The proposed scheme is applied to perform channel estimation and binary pulse position modulation (BPPM) detection. Results are presented for two receivers operating at two different sampling rates that are 10 and 20 times below the Nyquist rate. The feasibility of the proposed scheme is demonstrated in different scenarios, with reasonable bit error rates obtained in most of the cases.

  16. Multiple Shooting and Time Domain Decomposition Methods

    CERN Document Server

    Geiger, Michael; Körkel, Stefan; Rannacher, Rolf

    2015-01-01

    This book offers a comprehensive collection of the most advanced numerical techniques for the efficient and effective solution of simulation and optimization problems governed by systems of time-dependent differential equations. The contributions present various approaches to time domain decomposition, focusing on multiple shooting and parareal algorithms.  The range of topics covers theoretical analysis of the methods, as well as their algorithmic formulation and guidelines for practical implementation. Selected examples show that the discussed approaches are mandatory for the solution of challenging practical problems. The practicability and efficiency of the presented methods is illustrated by several case studies from fluid dynamics, data compression, image processing and computational biology, giving rise to possible new research topics.  This volume, resulting from the workshop Multiple Shooting and Time Domain Decomposition Methods, held in Heidelberg in May 2013, will be of great interest to applied...

  17. Trace element analysis of environmental samples by multiple prompt gamma-ray analysis method

    International Nuclear Information System (INIS)

    Oshima, Masumi; Matsuo, Motoyuki; Shozugawa, Katsumi

    2011-01-01

    The multiple γ-ray detection method has proved to be a high-resolution, high-sensitivity method for nuclide quantification. The neutron prompt γ-ray analysis method has been successfully extended by combining it with multiple γ-ray detection, a technique called multiple prompt γ-ray analysis (MPGA). In this review we describe the principle of the method and its characteristics. Several examples of its application to environmental samples, especially river sediments from an urban area and sea sediment samples, are also described. (author)

  18. Compressed sensing of roller bearing fault based on multiple down-sampling strategy

    Science.gov (United States)

    Wang, Huaqing; Ke, Yanliang; Luo, Ganggang; Tang, Gang

    2016-02-01

    Roller bearings are essential components of rotating machinery and are often exposed to complex operating conditions, which can easily lead to their failure. Thus, to ensure normal production and the safety of machine operators, it is essential to detect failures as soon as possible. However, it is a major challenge to balance detection efficiency against big-data acquisition given the limitations of sampling theory. To overcome these limitations, we try to preserve the information pertaining to roller bearing failures using a sampling rate far below the Nyquist rate, which eases the pressure generated by large-scale data. The large volume of vibration signals from a faulty roller bearing is first reduced by a down-sampling strategy that preserves the fault features by selecting peaks to represent the data segments in the time domain. However, a problem arises in that the fault features may become weaker, since noise may be mistaken for the peaks when the noise is stronger than the vibration signals, which prevents the fault features from being extracted by commonly used envelope analysis. Here we employ compressive sensing theory to overcome this problem; it enhances the signal and further reduces the sample size. Moreover, it is capable of detecting fault features from a small number of samples using the orthogonal matching pursuit approach, which overcomes the shortcomings of the multiple down-sampling algorithm. Experimental results validate the effectiveness of the proposed technique in detecting roller bearing faults.
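    Orthogonal matching pursuit itself is compact enough to sketch; the toy below recovers a synthetic sparse vector from random compressed measurements and has nothing to do with the bearing data in the record, but it shows the greedy select-then-refit loop the authors rely on.

        import numpy as np

        def omp(Phi, y, sparsity):
            """Minimal orthogonal matching pursuit: greedily pick the dictionary
            column most correlated with the residual, then re-fit the selected
            columns to the measurements by least squares."""
            residual, support = y.copy(), []
            for _ in range(sparsity):
                support.append(int(np.argmax(np.abs(Phi.T @ residual))))
                coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
                residual = y - Phi[:, support] @ coef
            x = np.zeros(Phi.shape[1])
            x[support] = coef
            return x

        rng = np.random.default_rng(4)
        n, m, k = 256, 64, 3                       # signal length, measurements, sparsity
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
        y = Phi @ x_true                                  # compressed measurements

        x_hat = omp(Phi, y, k)
        print(np.max(np.abs(x_hat - x_true)))      # near zero when recovery succeeds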

  19. Compressed sensing of roller bearing fault based on multiple down-sampling strategy

    International Nuclear Information System (INIS)

    Wang, Huaqing; Ke, Yanliang; Luo, Ganggang; Tang, Gang

    2016-01-01

    Roller bearings are essential components of rotating machinery and are often exposed to complex operating conditions, which can easily lead to their failure. Thus, to ensure normal production and the safety of machine operators, it is essential to detect failures as soon as possible. However, it is a major challenge to balance detection efficiency against big-data acquisition given the limitations of sampling theory. To overcome these limitations, we try to preserve the information pertaining to roller bearing failures using a sampling rate far below the Nyquist rate, which eases the pressure generated by large-scale data. The large volume of vibration signals from a faulty roller bearing is first reduced by a down-sampling strategy that preserves the fault features by selecting peaks to represent the data segments in the time domain. However, a problem arises in that the fault features may become weaker, since noise may be mistaken for the peaks when the noise is stronger than the vibration signals, which prevents the fault features from being extracted by commonly used envelope analysis. Here we employ compressive sensing theory to overcome this problem; it enhances the signal and further reduces the sample size. Moreover, it is capable of detecting fault features from a small number of samples using the orthogonal matching pursuit approach, which overcomes the shortcomings of the multiple down-sampling algorithm. Experimental results validate the effectiveness of the proposed technique in detecting roller bearing faults. (paper)

  20. Synthetic Multiple-Imputation Procedure for Multistage Complex Samples

    Directory of Open Access Journals (Sweden)

    Zhou Hanzhi

    2016-03-01

    Full Text Available Multiple imputation (MI) is commonly used when item-level missing data are present. However, MI requires that survey design information be built into the imputation models. For multistage stratified clustered designs, this requires dummy variables to represent strata as well as primary sampling units (PSUs) nested within each stratum in the imputation model. Such a modeling strategy is not only operationally burdensome but also inferentially inefficient when there are many strata in the sample design. Complexity only increases when sampling weights need to be modeled. This article develops a general-purpose analytic strategy for population inference from complex sample designs with item-level missingness. In a simulation study, the proposed procedures demonstrate efficient estimation and good coverage properties. We also consider an application to accommodate missing body mass index (BMI) data in the analysis of BMI percentiles using National Health and Nutrition Examination Survey (NHANES III) data. We argue that the proposed methods offer an easy-to-implement solution to problems that are not well-handled by current MI techniques. Note that, while the proposed method borrows from the MI framework to develop its inferential methods, it is not designed as an alternative strategy to release multiply imputed datasets for complex sample design data, but rather as an analytic strategy in and of itself.

  1. Characterizing interdependencies of multiple time series theory and applications

    CERN Document Server

    Hosoya, Yuzo; Takimoto, Taro; Kinoshita, Ryo

    2017-01-01

    This book introduces academic researchers and professionals to the basic concepts and methods for characterizing interdependencies of multiple time series in the frequency domain. Detecting causal directions between a pair of time series and the extent of their effects, as well as testing the nonexistence of a feedback relation between them, have constituted major focal points in multiple time series analysis since Granger introduced the celebrated definition of causality in view of prediction improvement. Causality analysis has since been widely applied in many disciplines. Although most analyses are conducted from the perspective of the time domain, the frequency domain method introduced in this book sheds new light on another aspect, disentangling the interdependencies between multiple time series in terms of long-term or short-term effects and characterizing them quantitatively. The frequency domain method includes the Granger noncausality test as a special case. Chapters 2 and 3 of the book introduce an i...

  2. Detecting chaos in irregularly sampled time series.

    Science.gov (United States)

    Kulp, C W

    2013-09-01

    Recently, Wiebe and Virgin [Chaos 22, 013136 (2012)] developed an algorithm which detects chaos by analyzing a time series' power spectrum, computed using the Discrete Fourier Transform (DFT). Their algorithm, like other time series characterization algorithms, requires that the time series be regularly sampled. Real-world data, however, are often irregularly sampled, thus making the detection of chaotic behavior difficult or impossible with those methods. In this paper, a characterization algorithm is presented which effectively detects chaos in irregularly sampled time series. The work presented here is a modification of Wiebe and Virgin's algorithm and uses the Lomb-Scargle Periodogram (LSP) to compute a series' power spectrum instead of the DFT. The DFT is not appropriate for irregularly sampled time series; the LSP, however, is capable of computing the frequency content of irregularly sampled data. Furthermore, a new method of analyzing the power spectrum is developed, which can be useful for differentiating between chaotic and non-chaotic behavior. The new characterization algorithm is successfully applied to irregularly sampled data generated by a model as well as data consisting of observations of variable stars.
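    The LSP step is readily available in scipy; the toy below, with an invented irregularly sampled sine plus noise, shows the spectrum computation that such an algorithm starts from (the subsequent chaos test of the record is not reproduced here).

        import numpy as np
        from scipy.signal import lombscargle

        rng = np.random.default_rng(5)
        t = np.sort(rng.uniform(0.0, 100.0, 400))         # irregular sampling instants
        y = np.sin(2 * np.pi * 0.27 * t) + 0.3 * rng.standard_normal(t.size)

        freqs = np.linspace(0.01, 1.0, 2000) * 2 * np.pi  # angular frequencies to scan
        power = lombscargle(t, y - y.mean(), freqs)

        # The dominant peak recovers the 0.27 Hz component despite the irregular grid.
        print(freqs[np.argmax(power)] / (2 * np.pi))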

  3. Salmonella testing of pooled pre-enrichment broth cultures for screening multiple food samples.

    Science.gov (United States)

    Price, W R; Olsen, R A; Hunter, J E

    1972-04-01

    A method has been described for testing multiple food samples for Salmonella without loss in sensitivity. The method pools multiple pre-enrichment broth cultures into single enrichment broths. The subsequent stages of the Salmonella analysis are not altered. The method was found applicable to several dry food materials including nonfat dry milk, dried egg albumin, cocoa, cottonseed flour, wheat flour, and shredded coconut. As many as 25 pre-enrichment broth cultures were pooled without apparent loss in the sensitivity of Salmonella detection as compared to individual sample analysis. The procedure offers a simple, yet effective, way to increase sample capacity in the Salmonella testing of foods, particularly where a large proportion of samples ordinarily is negative. It also permits small portions of pre-enrichment broth cultures to be retained for subsequent individual analysis if positive tests are found. Salmonella testing of pooled pre-enrichment broths provides increased consumer protection for a given amount of analytical effort as compared to individual sample analysis.

  4. A Timing-Driven Partitioning System for Multiple FPGAs

    Directory of Open Access Journals (Sweden)

    Kalapi Roy

    1996-01-01

    Full Text Available Field-programmable systems with multiple FPGAs on a PCB or an MCM are being used by system designers when a single FPGA is not sufficient. We address the problem of partitioning a large technology-mapped FPGA circuit onto multiple FPGA devices of a specific target technology. The physical characteristics of the multiple FPGA system (MFS) pose additional constraints on the circuit partitioning algorithms: the capacity of each FPGA, the timing constraints, the number of I/Os per FPGA, and the pre-designed interconnection patterns of each FPGA and the package. Existing partitioning techniques which minimize just the cut sizes of partitions fail to satisfy the above constraints. We therefore present a timing-driven N-way partitioning algorithm based on simulated annealing for technology-mapped FPGA circuits. The signal path delays are estimated during partitioning using a timing model specific to a multiple FPGA architecture. The model combines all possible delay factors in a system with multiple FPGA chips of a target technology. Furthermore, we have incorporated a new dynamic net-weighting scheme to minimize the number of pin-outs for each chip. Finally, we have developed a graph-based global router for pin assignment which can handle the pre-routed connections of our MFS structure. In order to reduce the time spent in the simulated annealing phase of the partitioner, clusters of circuit components are identified by a new linear-time bottom-up clustering algorithm. The annealing-based N-way partitioner executes four times faster using the clusters than with a flat netlist, with improved partitioning results. For several industrial circuits, our approach outperforms the recursive min-cut bi-partitioning algorithm by 35% in terms of nets cut. Our approach also outperforms an industrial FPGA partitioner by 73% on average in terms of unroutable nets. Using the performance optimization capabilities in our approach we have successfully partitioned the

  5. Building a new predictor for multiple linear regression technique-based corrective maintenance turnaround time.

    Science.gov (United States)

    Cruz, Antonio M; Barr, Cameron; Puñales-Pozo, Elsa

    2008-01-01

    This research's main goals were to build a predictor for a turnaround time (TAT) indicator, to estimate its values, and to use a numerical clustering technique to find possible causes of undesirable TAT values. The following stages were used: domain understanding, data characterisation and sample reduction, and insight characterisation. A multiple linear regression predictor of the TAT indicator and clustering techniques were used to improve corrective maintenance task efficiency in a clinical engineering department (CED). The indicator being studied was turnaround time (TAT). Multiple linear regression was used for building a predictive TAT value model. The variables contributing to this model were clinical engineering department response time (CE(rt), 0.415 positive coefficient), stock service response time (Stock(rt), 0.734 positive coefficient), priority level (0.21 positive coefficient) and service time (0.06 positive coefficient). The regression process showed heavy reliance on Stock(rt), CE(rt) and priority, in that order. Clustering techniques revealed the main causes of high TAT values. This examination has provided a means for analysing current technical service quality and effectiveness. In doing so, it has demonstrated a process for identifying areas and methods of improvement and a model against which to analyse these methods' effectiveness.
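    A numpy sketch of the kind of linear TAT model the record describes; the work orders below are randomly generated around the reported coefficients (0.415 for CE_rt, 0.734 for Stock_rt, 0.21 for priority, 0.06 for service time) purely for illustration and are not the study's data.

        import numpy as np

        rng = np.random.default_rng(6)
        n = 200
        ce_rt = rng.exponential(4.0, n)        # clinical engineering response time (h)
        stock_rt = rng.exponential(6.0, n)     # stock service response time (h)
        priority = rng.integers(1, 4, n).astype(float)
        service = rng.exponential(2.0, n)      # hands-on service time (h)
        tat = (0.415 * ce_rt + 0.734 * stock_rt + 0.21 * priority
               + 0.06 * service + rng.normal(0.0, 0.5, n))

        # Ordinary least squares fit of TAT on the four predictors plus an intercept
        X = np.column_stack([np.ones(n), ce_rt, stock_rt, priority, service])
        beta, *_ = np.linalg.lstsq(X, tat, rcond=None)
        print(beta)   # intercept followed by the four fitted coefficients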

  6. Multiple-ion-beam time-of-flight mass spectrometer

    International Nuclear Information System (INIS)

    Rohrbacher, Andreas; Continetti, Robert E.

    2001-01-01

    An innovative approach to increase the throughput of mass spectrometric analyses using a multiple-ion-beam mass spectrometer is described. Two sample spots were applied onto a laser desorption/ionization target and each spot was simultaneously irradiated by a beam of quadrupled Nd:YLF laser radiation (261.75 nm) to produce ions by laser-desorption ionization. Acceleration of the ions in an electric field created parallel ion beams that were focused by two parallel einzel lens systems. After a flight path of 2.34 m, the ions were detected with a microchannel plate-phosphor screen assembly coupled with a charge-coupled device camera, which showed two resolved ion beams. Time-of-flight mass spectra were also obtained with this detector. Experiments were performed using both metal atom cations (Ti + and Cr + ) produced by laser desorption/ionization and the molecular ions of two different proteins (myoglobin and lysozyme), created by matrix-assisted laser desorption/ionization using an excess of nicotinic acid as the matrix

  7. Double sampling with multiple imputation to answer large sample meta-research questions: Introduction and illustration by evaluating adherence to two simple CONSORT guidelines

    Directory of Open Access Journals (Sweden)

    Patrice L. Capers

    2015-03-01

    Full Text Available BACKGROUND: Meta-research can involve manual retrieval and evaluation of research, which is resource intensive. Creation of high-throughput methods (e.g., search heuristics, crowdsourcing) has improved the feasibility of large meta-research questions, but possibly at the cost of accuracy. OBJECTIVE: To evaluate the use of double sampling combined with multiple imputation (DS+MI) to address meta-research questions, using as an example adherence of PubMed entries to two simple Consolidated Standards of Reporting Trials (CONSORT) guidelines for titles and abstracts. METHODS: For the DS large sample, we retrieved all PubMed entries satisfying the filters: RCT; human; abstract available; and English language (n=322,107). For the DS subsample, we randomly sampled 500 entries from the large sample. The large sample was evaluated with a lower-rigor, higher-throughput (RLOTHI) method using search heuristics, while the subsample was evaluated using a higher-rigor, lower-throughput (RHITLO) human rating method. Multiple imputation of the missing-completely-at-random RHITLO data for the large sample was informed by: RHITLO data from the subsample; RLOTHI data from the large sample; whether a study was an RCT; and country and year of publication. RESULTS: The RHITLO and RLOTHI methods in the subsample largely agreed (phi coefficients: title=1.00, abstract=0.92). Compliance with abstract and title criteria has increased over time, with non-US countries improving more rapidly. DS+MI logistic regression estimates were more precise than subsample estimates (e.g., 95% CI for change in title and abstract compliance by year: subsample RHITLO 1.050-1.174 vs. DS+MI 1.082-1.151). As evidence of improved accuracy, DS+MI coefficient estimates were closer to RHITLO than the large sample RLOTHI. CONCLUSIONS: Our results support our hypothesis that DS+MI would result in improved precision and accuracy. This method is flexible and may provide a practical way to examine large corpora of

  8. Relativity theory and time perception: single or multiple clocks?

    Science.gov (United States)

    Buhusi, Catalin V; Meck, Warren H

    2009-07-22

    Current theories of interval timing assume that humans and other animals time as if using a single, absolute stopwatch that can be stopped or reset on command. Here we evaluate the alternative view that psychological time is represented by multiple clocks, and that these clocks create separate temporal contexts by which duration is judged in a relative manner. Two predictions of the multiple-clock hypothesis were tested. First, that the multiple clocks can be manipulated (stopped and/or reset) independently. Second, that an event of a given physical duration would be perceived as having different durations in different temporal contexts, i.e., would be judged differently by each clock. Rats were trained to time three durations (e.g., 10, 30, and 90 s). When timing was interrupted by an unexpected gap in the signal, rats reset the clock used to time the "short" duration, stopped the "medium" duration clock, and continued to run the "long" duration clock. When the duration of the gap was manipulated, the rats reset these clocks in a hierarchical order, first the "short", then the "medium", and finally the "long" clock. Quantitative modeling assuming re-allocation of cognitive resources in proportion to the relative duration of the gap to the multiple, simultaneously timed event durations was used to account for the results. These results indicate that the three event durations were effectively timed by separate clocks operated independently, and that the same gap duration was judged relative to these three temporal contexts. Results suggest that the brain processes the duration of an event in a manner similar to Einstein's special relativity theory: A given time interval is registered differently by independent clocks dependent upon the context.

  9. Relativity theory and time perception: single or multiple clocks?

    Directory of Open Access Journals (Sweden)

    Catalin V Buhusi

    2009-07-01

    Current theories of interval timing assume that humans and other animals time as if using a single, absolute stopwatch that can be stopped or reset on command. Here we evaluate the alternative view that psychological time is represented by multiple clocks, and that these clocks create separate temporal contexts by which duration is judged in a relative manner. Two predictions of the multiple-clock hypothesis were tested. First, that the multiple clocks can be manipulated (stopped and/or reset) independently. Second, that an event of a given physical duration would be perceived as having different durations in different temporal contexts, i.e., would be judged differently by each clock. Rats were trained to time three durations (e.g., 10, 30, and 90 s). When timing was interrupted by an unexpected gap in the signal, rats reset the clock used to time the "short" duration, stopped the "medium" duration clock, and continued to run the "long" duration clock. When the duration of the gap was manipulated, the rats reset these clocks in a hierarchical order, first the "short", then the "medium", and finally the "long" clock. Quantitative modeling assuming re-allocation of cognitive resources in proportion to the relative duration of the gap to the multiple, simultaneously timed event durations was used to account for the results. These results indicate that the three event durations were effectively timed by separate clocks operated independently, and that the same gap duration was judged relative to these three temporal contexts. Results suggest that the brain processes the duration of an event in a manner similar to Einstein's special relativity theory: A given time interval is registered differently by independent clocks dependent upon the context.

  10. Multiple Time-Instances Features of Degraded Speech for Single Ended Quality Measurement

    Directory of Open Access Journals (Sweden)

    Rajesh Kumar Dubey

    2017-01-01

    The use of single time-instance features, where the entire speech utterance is used for feature computation, is neither accurate nor adequate for capturing the time-localized information of short-time transient distortions and distinguishing them from plosive sounds of speech, particularly for speech degraded by impulsive noise. Hence, features are estimated at multiple time-instances. In this approach, only active speech segments of the degraded speech are used for feature computation at multiple time-instances on a per-frame basis. Here, active speech means both voiced and unvoiced frames, excluding silence. The features of different combinations of multiple contiguous active speech segments are computed and called multiple time-instances features. Joint GMM training is performed using these features along with the subjective MOS of the corresponding speech utterance to obtain the parameters of the GMM. These GMM parameters and the multiple time-instances features of test speech are used to compute the objective MOS values of the different combinations of multiple contiguous active speech segments. The overall objective MOS of the test speech utterance is obtained by assigning equal weight to the objective MOS values of the different combinations of multiple contiguous active speech segments. This algorithm outperforms Recommendation ITU-T P.563 and recently published algorithms.

  11. A multiple sampling ionization chamber for the External Target Facility

    International Nuclear Information System (INIS)

    Zhang, X.H.; Tang, S.W.; Ma, P.; Lu, C.G.; Yang, H.R.; Wang, S.T.; Yu, Y.H.; Yue, K.; Fang, F.; Yan, D.; Zhou, Y.; Wang, Z.M.; Sun, Y.; Sun, Z.Y.; Duan, L.M.; Sun, B.H.

    2015-01-01

    A multiple sampling ionization chamber used as a particle identification device for high energy heavy ions has been developed for the External Target Facility. The performance of this detector was tested with a ²³⁹Pu α source and RI beams. A Z resolution (FWHM) of 0.4–0.6 was achieved for nuclear fragments of ¹⁸O at 400 AMeV.

  12. Detection of Salmonella spp. in veterinary samples by combining selective enrichment and real-time PCR.

    Science.gov (United States)

    Goodman, Laura B; McDonough, Patrick L; Anderson, Renee R; Franklin-Guild, Rebecca J; Ryan, James R; Perkins, Gillian A; Thachil, Anil J; Glaser, Amy L; Thompson, Belinda S

    2017-11-01

    Rapid screening for enteric bacterial pathogens in clinical environments is essential for biosecurity. Salmonella found in veterinary hospitals, particularly Salmonella enterica serovar Dublin, can pose unique challenges for culture and testing because of its poor growth. Multiple Salmonella serovars including Dublin are emerging threats to public health given increasing prevalence and antimicrobial resistance. We adapted an automated food testing method to veterinary samples and evaluated the performance of the method in a variety of matrices including environmental samples ( n = 81), tissues ( n = 52), feces ( n = 148), and feed ( n = 29). A commercial kit was chosen as the basis for this approach in view of extensive performance characterizations published by multiple independent organizations. A workflow was established for efficiently and accurately testing veterinary matrices and environmental samples by use of real-time PCR after selective enrichment in Rappaport-Vassiliadis soya (RVS) medium. Using this method, the detection limit for S. Dublin improved by 100-fold over subculture on selective agars (eosin-methylene blue, brilliant green, and xylose-lysine-deoxycholate). Overall, the procedure was effective in detecting Salmonella spp. and provided next-day results.

  13. Gait initiation time is associated with the risk of multiple falls-A population-based study.

    Science.gov (United States)

    Callisaya, Michele L; Blizzard, Leigh; Martin, Kara; Srikanth, Velandai K

    2016-09-01

    The aims of this population-based study of older people were to examine whether 1) overall gait initiation (GI) time or its components are associated with falls and 2) GI under dual-task is a stronger predictor of falls risk than under single-task. Participants aged 60-85 years were randomly selected from the electoral roll. GI was obtained with a force platform under both single- and dual-task conditions. Falls were ascertained prospectively over a 12-month period. Log multinomial regression was used to examine the association between GI time (total and its components) and risk of single and multiple falls. Age, sex and physiological and cognitive falls risk factors were considered as confounders. The mean age of the sample (n=124) was 71.0 (SD 6.8) years and 58.9% (n=73) were male. Over 12 months, 21.8% (n=27) of participants reported a single fall and 16.1% (n=20) reported multiple falls. Slower overall GI time under both single-task (RR per 100 ms 1.28, 95% CI 1.03, 1.58) and dual-task (RR 1.14, 95% CI 1.02, 1.27) conditions was associated with increased risk of multiple, but not single, falls. Multiple falls were also associated with slower time to first lateral movement under single-task (RR 1.90, 95% CI 0.59, 1.51) and swing time under the dual-task condition (RR 1.44, 95% CI 1.08, 1.94). Slower GI time is associated with the risk of multiple falls independent of other risk factors, suggesting it could be used as part of a comprehensive falls assessment. Time to the first lateral movement under single-task may be the best measure of this risk. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Real-time multiple networked viewer capability of the DIII-D EC data acquisition system

    International Nuclear Information System (INIS)

    Ponce, D.; Gorelov, I.A.; Chiu, H.K.; Baity, F.W.

    2005-01-01

    A data acquisition system (DAS) which permits real-time viewing by multiple locally networked operators is being implemented for the electron cyclotron (EC) heating and current drive system at DIII-D. The DAS is expected to demonstrate performance equivalent to standalone oscilloscopes. Participation by remote viewers, including throughout the greater DIII-D facility, can also be incorporated. The real-time system uses one computer-controlled DAS per gyrotron. The DAS computers send their data to a central data server using individual and dedicated 200 Mbps fully duplexed Ethernet connections. The server has a dedicated 10 krpm hard drive for each gyrotron DAS. Selected channels can then be reprocessed and distributed to viewers over a standard local area network (LAN). They can also be bridged from the LAN to the internet. Calculations indicate that the hardware will support real-time writing of each channel at full resolution to the server hard drives. The data will be re-sampled for distribution to multiple viewers over the LAN in real-time. The hardware for this system is in place. The software is under development. This paper will present the design details and up-to-date performance metrics of the system

  15. Imaging of first-order surface-related multiples by reverse-time migration

    Science.gov (United States)

    Liu, Xuejian; Liu, Yike; Hu, Hao; Li, Peng; Khan, Majid

    2017-02-01

    Surface-related multiples have been utilized in the reverse-time migration (RTM) procedures, and additional illumination for subsurface can be provided. Meanwhile, many cross-talks are generated from undesired interactions between forward- and backward-propagated seismic waves. In this paper, subsequent to analysing and categorizing these cross-talks, we propose RTM of first-order multiples to avoid most undesired interactions in RTM of all-order multiples, where only primaries are forward-propagated and crosscorrelated with the backward-propagated first-order multiples. With primaries and multiples separated during regular seismic data processing as the input data, first-order multiples can be obtained by a two-step scheme: (1) the dual-prediction of higher-order multiples; and (2) the adaptive subtraction of predicted higher-order multiples from all-order multiples within local offset-time windows. In numerical experiments, two synthetic and a marine field data sets are used, where different cross-talks generated by RTM of all-order multiples can be identified and the proposed RTM of first-order multiples can provide a very interpretable image with a few cross-talks.

  16. Analysis of stationary power/amplitude distributions for multiple channels of sampled FBGs.

    Science.gov (United States)

    Xing, Ya; Zou, Xihua; Pan, Wei; Yan, Lianshan; Luo, Bin; Shao, Liyang

    2015-08-10

    Stationary power/amplitude distributions for multiple channels of the sampled fiber Bragg grating (SFBG) along the grating length are analyzed. Unlike a uniform FBG, the SFBG has multiple channels in the reflection spectrum, not a single channel. Thus, the stationary power/amplitude distributions for these multiple channels are analyzed by using two different theoretical models. In the first model, the SFBG is regarded as a set of grating sections and non-grating sections, which are alternately stacked. A step-like distribution is obtained for the corresponding power/amplitude of each channel along the grating length. While, in the second model, the SFBG is decomposed into multiple uniform "ghost" gratings, and a continuous distribution is obtained for each ghost grating (i.e., each channel). After a comparison, the distributions obtained in the two models are identical, and the equivalence between the two models is demonstrated. In addition, the impacts of the duty cycle on the power/amplitude distributions of multiple channels of SFBG are presented.

  17. A Novel Multiple-Time Scale Integrator for the Hybrid Monte Carlo Algorithm

    International Nuclear Information System (INIS)

    Kamleh, Waseem

    2011-01-01

    Hybrid Monte Carlo simulations that implement the fermion action using multiple terms are commonly used. By the nature of their formulation they involve multiple integration time scales in the evolution of the system through simulation time. These different scales are usually dealt with by the Sexton-Weingarten nested leapfrog integrator. In this scheme the choice of time scales is somewhat restricted as each time step must be an exact multiple of the next smallest scale in the sequence. A novel generalisation of the nested leapfrog integrator is introduced which allows for far greater flexibility in the choice of time scales, as each scale now must only be an exact multiple of the smallest step size.
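
    For readers unfamiliar with the baseline scheme being generalised, the sketch below implements a Sexton-Weingarten-style nested leapfrog in which the slow force is applied on the outer time scale and the fast force on an inner scale that is an exact integer fraction of it. The toy force split, step sizes and step ratio are illustrative assumptions and do not reproduce the paper's generalised integrator.

```python
import numpy as np

# Illustrative split: a stiff harmonic "fast" force and a weak quartic "slow" force.
def f_fast(x): return -100.0 * x
def f_slow(x): return -0.1 * x**3

def nested_leapfrog(x, p, dt_outer, n_inner, n_outer):
    """Sexton-Weingarten-style nested leapfrog: the slow force is applied on the
    outer time scale, the fast force on an inner time scale dt_outer / n_inner."""
    dt_inner = dt_outer / n_inner
    for _ in range(n_outer):
        p += 0.5 * dt_outer * f_slow(x)          # half kick, slow force
        for _ in range(n_inner):                  # inner leapfrog, fast force
            p += 0.5 * dt_inner * f_fast(x)
            x += dt_inner * p
            p += 0.5 * dt_inner * f_fast(x)
        p += 0.5 * dt_outer * f_slow(x)          # half kick, slow force
    return x, p

x, p = nested_leapfrog(x=1.0, p=0.0, dt_outer=0.05, n_inner=10, n_outer=200)
print(x, p)
```

    In this classic scheme the inner step must be an exact integer fraction of the outer step (here a factor of 10); relaxing that constraint is precisely the flexibility that the generalisation described in the abstract provides.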

  18. Importance Sampling for Stochastic Timed Automata

    DEFF Research Database (Denmark)

    Jegourel, Cyrille; Larsen, Kim Guldstrand; Legay, Axel

    2016-01-01

    We present an importance sampling framework that combines symbolic analysis and simulation to estimate the probability of rare reachability properties in stochastic timed automata. By means of symbolic exploration, our framework first identifies states that cannot reach the goal. A state-wise cha…

  19. Efficient Processing of Multiple DTW Queries in Time Series Databases

    DEFF Research Database (Denmark)

    Kremer, Hardy; Günnemann, Stephan; Ivanescu, Anca-Maria

    2011-01-01

    In many of today's applications, however, large numbers of queries arise at any given time. Existing DTW techniques do not process multiple DTW queries simultaneously, a serious limitation which slows down overall processing. In this paper, we propose an efficient processing approach for multiple DTW queries…
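
    As background, a minimal dynamic-programming DTW distance and the naive one-query-at-a-time baseline are sketched below; the series are synthetic, and the sketch deliberately omits the simultaneous multi-query processing that is the paper's contribution.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a) * len(b)) dynamic-programming DTW distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Naive multi-query baseline: each query is matched against the database
# independently; sharing work between queries is what the paper improves on.
database = [np.sin(np.linspace(0, 6, 100) + s) for s in (0.0, 0.5, 1.0)]
queries = [np.sin(np.linspace(0, 6, 80)), np.cos(np.linspace(0, 6, 80))]
for qi, q in enumerate(queries):
    best = min(range(len(database)), key=lambda k: dtw_distance(q, database[k]))
    print(f"query {qi}: nearest series index {best}")
```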

  20. The Part-Time Student Role: Implications for the Emotional Experience of Managing Multiple Roles amongst Hong Kong Public Health Nurses.

    Science.gov (United States)

    Shiu, Ann Tak-Ying

    1999-01-01

    Nine public-health nurses studying part time and 11 other nurses sampled their mood states randomly over seven days. The part-time student role created additional strain for nurses with children. The stress of managing multiple roles was greatest when both work and nonwork role responsibilities were heavy. (SK)

  1. Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size

    Directory of Open Access Journals (Sweden)

    Zhihua Wang

    2014-01-01

    Reasonable prediction makes significant practical sense for stochastic and unstable time series analysis with small or limited sample size. Motivated by the rolling idea in grey theory and the practical relevance of very short-term forecasting or 1-step-ahead prediction, a novel autoregressive (AR) prediction approach with rolling mechanism is proposed. In the modeling procedure, a newly developed AR equation, which can be used to model nonstationary time series, is constructed in each prediction step. Meanwhile, the data window for the next step-ahead forecast rolls on by adding the most recent derived prediction result while deleting the first value of the previously used sample data set. This rolling mechanism is an efficient technique for its advantages of improved forecasting accuracy, applicability in the case of limited and unstable data situations, and requirement of little computational effort. The general performance, influence of sample size, nonlinearity dynamic mechanism, and significance of the observed trends, as well as innovation variance, are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
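
    A minimal version of the rolling mechanism can be sketched as follows: an AR(p) model is refit by least squares on the current data window, a one-step-ahead prediction is made, and the window then rolls forward by appending that prediction and dropping its oldest value. The order p, the window contents and the settlement-like series below are illustrative assumptions, not the paper's developed AR equation.

```python
import numpy as np

def fit_ar(window: np.ndarray, p: int) -> np.ndarray:
    """Least-squares AR(p) coefficients (with intercept) for a 1-D window."""
    rows = [window[i:i + p] for i in range(len(window) - p)]
    X = np.column_stack([np.ones(len(rows)), np.array(rows)])
    y = window[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def rolling_ar_forecast(series: np.ndarray, p: int, steps: int) -> list:
    """One-step-ahead forecasts with a rolling window: each prediction is
    appended to the window while the oldest observation is dropped."""
    window = list(series)
    preds = []
    for _ in range(steps):
        coef = fit_ar(np.array(window), p)
        next_val = coef[0] + coef[1:] @ np.array(window[-p:])
        preds.append(float(next_val))
        window = window[1:] + [next_val]   # roll the data window
    return preds

settlement = np.array([2.1, 2.9, 3.6, 4.1, 4.5, 4.8, 5.0, 5.15])  # made-up small sample
print(rolling_ar_forecast(settlement, p=2, steps=3))
```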

  2. Time-Scale and Time-Frequency Analyses of Irregularly Sampled Astronomical Time Series

    Directory of Open Access Journals (Sweden)

    S. Roques

    2005-09-01

    We evaluate the quality of spectral restoration in the case of irregularly sampled signals in astronomy. We study in detail a time-scale method leading to a global wavelet spectrum comparable to the Fourier period, and a time-frequency matching pursuit allowing us to identify the frequencies and to control the error propagation. In both cases, the signals are first resampled with a linear interpolation. Both results are compared with those obtained using Lomb's periodogram and using the weighted wavelet Z-transform developed in astronomy for unevenly sampled variable-star observations. These approaches are applied to simulations and to light variations of four variable stars. This leads to the conclusion that the matching pursuit is more efficient for recovering the spectral contents of a pulsating star, even with a preliminary resampling. In particular, the results are almost independent of the quality of the initial irregular sampling.
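
    As a point of reference for the comparison methods mentioned above, the sketch below computes Lomb's periodogram for an irregularly sampled synthetic light curve using scipy.signal.lombscargle, which works directly on uneven sampling without the preliminary resampling step. The observation times, noise level and trial frequency grid are assumptions for illustration.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)

# Irregularly sampled "light curve": a 0.5 cycles/day oscillation observed at random epochs.
t = np.sort(rng.uniform(0.0, 60.0, size=180))        # observation times [days]
y = np.sin(2 * np.pi * 0.5 * t) + 0.3 * rng.standard_normal(t.size)

# The Lomb periodogram handles uneven sampling directly, without interpolation.
freqs_cpd = np.linspace(0.01, 2.0, 2000)             # trial frequencies [cycles/day]
power = lombscargle(t, y - y.mean(), 2 * np.pi * freqs_cpd)
print("peak frequency [cycles/day]:", freqs_cpd[np.argmax(power)])
```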

  3. Quantification of multiple elements in dried blood spot samples

    DEFF Research Database (Denmark)

    Pedersen, Lise; Andersen-Ranberg, Karen; Hollergaard, Mads

    2017-01-01

    BACKGROUND: Dried blood spots (DBS) is a unique matrix that offers advantages compared to conventional blood collection making it increasingly popular in large population studies. We here describe development and validation of a method to determine multiple elements in DBS. METHODS: Elements were...... in venous blood. Samples with different hematocrit were spotted onto filter paper to assess hematocrit effect. RESULTS: The established method was precise and accurate for measurement of most elements in DBS. There was a significant but relatively weak correlation between measurement of the elements Mg, K...

  4. Tracking Multiple People Online and in Real Time

    Science.gov (United States)

    2015-12-21

    We cast the problem of tracking several people as a graph partitioning problem that takes the form of an NP-hard binary…

  5. The hybrid model for sampling multiple elastic scattering angular deflections based on Goudsmit-Saunderson theory

    Directory of Open Access Journals (Sweden)

    Wasaye Muhammad Abdul

    2017-01-01

    An algorithm for the Monte Carlo simulation of electron multiple elastic scattering based on the framework of SuperMC (Super Monte Carlo simulation program for nuclear and radiation process) is presented. This paper describes efficient and accurate methods by which the multiple scattering angular deflections are sampled. The Goudsmit-Saunderson theory of multiple scattering has been used for sampling angular deflections. Differential cross-sections of electrons and positrons by neutral atoms have been calculated by using the Dirac partial wave program ELSEPA. The Legendre coefficients are accurately computed by using the Gauss-Legendre integration method. Finally, a novel hybrid method for sampling the angular distribution has been developed. The model uses an efficient rejection sampling method for low energy electrons (less than 500 keV) and longer path lengths (>500 mean free paths). For small path lengths, a simple, efficient and accurate analytical distribution function has been proposed. The latter uses adjustable parameters determined from fitting the Goudsmit-Saunderson angular distribution. A discussion of the sampling efficiency and accuracy of this newly developed algorithm is given. The efficiency of the rejection sampling algorithm is at least 50% for electron kinetic energies less than 500 keV and longer path lengths (>500 mean free paths). Monte Carlo simulation results are then compared with measured angular distributions of Ross et al. The comparison shows that our results are in good agreement with experimental measurements.
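
    The rejection-sampling building block can be sketched as below. The Henyey-Greenstein-shaped density is a stand-in chosen only because it is forward peaked and easy to normalise; it is not the Goudsmit-Saunderson distribution computed in the paper, and the envelope constant is specific to this toy density.

```python
import numpy as np

rng = np.random.default_rng(2)

def rejection_sample(pdf, c, n):
    """Draw n samples from pdf on [-1, 1] by rejection against a uniform envelope."""
    out = []
    while len(out) < n:
        mu = rng.uniform(-1.0, 1.0)          # proposal: uniform on [-1, 1], density 0.5
        if rng.uniform() * c * 0.5 <= pdf(mu):
            out.append(mu)
    return np.array(out)

# Toy forward-peaked density in mu = cos(theta) (Henyey-Greenstein form); it only
# mimics the shape of a multiple-scattering angular distribution and is not the
# Goudsmit-Saunderson distribution used in the paper.
g = 0.5
def pdf(mu):
    return 0.5 * (1 - g**2) / (1 + g**2 - 2 * g * mu) ** 1.5

c = pdf(1.0) / 0.5            # envelope constant: this pdf is maximal at mu = 1
mus = rejection_sample(pdf, c, 5000)
print("acceptance rate ~", 1.0 / c, "; sample mean of cos(theta):", mus.mean())
```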

  6. A multiple model approach to respiratory motion prediction for real-time IGRT

    International Nuclear Information System (INIS)

    Putra, Devi; Haas, Olivier C L; Burnham, Keith J; Mills, John A

    2008-01-01

    Respiration induces significant movement of tumours in the vicinity of thoracic and abdominal structures. Real-time image-guided radiotherapy (IGRT) aims to adapt radiation delivery to tumour motion during irradiation. One of the main problems for achieving this objective is the presence of time lag between the acquisition of tumour position and the radiation delivery. Such time lag causes significant beam positioning errors and affects the dose coverage. A method to solve this problem is to employ an algorithm that is able to predict future tumour positions from available tumour position measurements. This paper presents a multiple model approach to respiratory-induced tumour motion prediction using the interacting multiple model (IMM) filter. A combination of two models, constant velocity (CV) and constant acceleration (CA), is used to capture respiratory-induced tumour motion. A Kalman filter is designed for each of the local models and the IMM filter is applied to combine the predictions of these Kalman filters for obtaining the predicted tumour position. The IMM filter, likewise the Kalman filter, is a recursive algorithm that is suitable for real-time applications. In addition, this paper proposes a confidence interval (CI) criterion to evaluate the performance of tumour motion prediction algorithms for IGRT. The proposed CI criterion provides a relevant measure for the prediction performance in terms of clinical applications and can be used to specify the margin to accommodate prediction errors. The prediction performance of the IMM filter has been evaluated using 110 traces of 4-minute free-breathing motion collected from 24 lung-cancer patients. The simulation study was carried out for prediction time 0.1-0.6 s with sampling rates 3, 5 and 10 Hz. It was found that the prediction of the IMM filter was consistently better than the prediction of the Kalman filter with the CV or CA model. There was no significant difference of prediction errors for the
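
    The full method blends constant-velocity (CV) and constant-acceleration (CA) Kalman filters through the IMM filter; the sketch below shows only the CV branch, updated with each position sample and then propagated a fixed number of steps ahead to compensate an assumed acquisition-to-delivery latency. The breathing trace, noise covariances and latency are illustrative assumptions.

```python
import numpy as np

def cv_kalman_predictor(measurements, dt, latency_steps, q=1e-3, r=1e-2):
    """Constant-velocity Kalman filter that, after each update, predicts the
    position `latency_steps` samples ahead to compensate the system time lag."""
    F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])                     # only position is measured
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    x, P = np.array([[measurements[0]], [0.0]]), np.eye(2)
    predictions = []
    for z in measurements:
        # update with the current position measurement
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        # look ahead by the delivery latency for beam positioning
        x_ahead = np.linalg.matrix_power(F, latency_steps) @ x
        predictions.append(float(x_ahead[0, 0]))
        # propagate the filter state one step for the next measurement
        x = F @ x
        P = F @ P @ F.T + Q
    return predictions

t = np.arange(0, 60, 0.2)                          # 5 Hz sampling, as in the study
trace = 10 * np.sin(2 * np.pi * t / 4.0)           # synthetic breathing trace [mm]
pred = cv_kalman_predictor(trace, dt=0.2, latency_steps=2)  # 0.4 s look-ahead
print("last prediction [mm]:", pred[-1])
```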

  7. Sample-interpolation timing: an optimized technique for the digital measurement of time of flight for γ rays and neutrons at relatively low sampling rates

    International Nuclear Information System (INIS)

    Aspinall, M D; Joyce, M J; Mackin, R O; Jarrah, Z; Boston, A J; Nolan, P J; Peyton, A J; Hawkes, N P

    2009-01-01

    A unique, digital time pick-off method, known as sample-interpolation timing (SIT), is described. This method demonstrates the possibility of improved timing resolution for the digital measurement of time of flight compared with digital replica-analogue time pick-off methods for signals sampled at relatively low rates. Three analogue timing methods have been replicated in the digital domain (leading-edge, crossover and constant-fraction timing) for pulse data sampled at 8 GSa s⁻¹. Events arising from the ⁷Li(p,n)⁷Be reaction have been detected with an EJ-301 organic liquid scintillator and recorded with a fast digital sampling oscilloscope. Sample-interpolation timing was developed solely for the digital domain and thus performs more efficiently on digital signals compared with analogue time pick-off methods replicated digitally, especially for fast signals that are sampled at rates that current affordable and portable devices can achieve. Sample interpolation can be applied to any analogue timing method replicated digitally and thus also has the potential to exploit the generic capabilities of analogue techniques with the benefits of operating in the digital domain. A threshold in sampling rate with respect to the signal pulse width is observed beyond which further improvements in timing resolution are not attained. This advance is relevant to many applications in which time-of-flight measurement is essential.
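
    The general idea of interpolating between samples to obtain a sub-sample time pick-off can be sketched as below; this is an illustrative constant-fraction-style interpolation, not the SIT algorithm itself, and the pulse shape and digitiser rate are assumptions.

```python
import numpy as np

def interpolated_pickoff_time(samples, dt, fraction=0.5):
    """Return a sub-sample estimate of when the pulse first crosses `fraction`
    of its peak amplitude, using linear interpolation between the two samples
    that bracket the crossing."""
    threshold = fraction * samples.max()
    above = np.nonzero(samples >= threshold)[0]
    if len(above) == 0 or above[0] == 0:
        raise ValueError("no leading-edge crossing found")
    i = above[0]
    frac = (threshold - samples[i - 1]) / (samples[i] - samples[i - 1])
    return (i - 1 + frac) * dt

dt = 1.0 / 500e6                                   # assume a 500 MSa/s digitiser
t = np.arange(0, 200e-9, dt)
pulse = np.exp(-((t - 80e-9) / 10e-9) ** 2)        # synthetic detector pulse
print("pick-off time [ns]:", interpolated_pickoff_time(pulse, dt) * 1e9)
```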

  8. Quantum supremacy in constant-time measurement-based computation: A unified architecture for sampling and verification

    Science.gov (United States)

    Miller, Jacob; Sanders, Stephen; Miyake, Akimasa

    2017-12-01

    While quantum speed-up in solving certain decision problems by a fault-tolerant universal quantum computer has been promised, a timely research interest includes how far one can reduce the resource requirement to demonstrate a provable advantage in quantum devices without demanding quantum error correction, which is crucial for prolonging the coherence time of qubits. We propose a model device made of locally interacting multiple qubits, designed such that simultaneous single-qubit measurements on it can output probability distributions whose average-case sampling is classically intractable, under similar assumptions as the sampling of noninteracting bosons and instantaneous quantum circuits. Notably, in contrast to these previous unitary-based realizations, our measurement-based implementation has two distinctive features. (i) Our implementation involves no adaptation of measurement bases, leading output probability distributions to be generated in constant time, independent of the system size. Thus, it could be implemented in principle without quantum error correction. (ii) Verifying the classical intractability of our sampling is done by changing the Pauli measurement bases only at certain output qubits. Our usage of random commuting quantum circuits in place of computationally universal circuits allows a unique unification of sampling and verification, so they require the same physical resource requirements in contrast to the more demanding verification protocols seen elsewhere in the literature.

  9. Hierarchical sampling of multiple strata: an innovative technique in exposure characterization

    International Nuclear Information System (INIS)

    Ericson, J.E.; Gonzalez, Elisabeth J.

    2003-01-01

    Sampling of multiple strata, or hierarchical sampling of various exposure sources and activity areas, has been tested and is suggested as a method to sample (or to locate) areas with a high prevalence of elevated blood lead in children. Hierarchical sampling was devised to supplement traditional soil lead sampling of a single stratum, either residential or fixed point source, using a multistep strategy. Blood lead (n=1141) and soil lead (n=378) data collected under the USEPA/UCI Tijuana Lead Project (1996-1999) were analyzed to evaluate the usefulness of sampling soil lead from background sites, schools and parks, point sources, and residences. Results revealed that industrial emissions have been a contributing factor to soil lead contamination in Tijuana. At the regional level, point source soil lead was associated with mean blood lead levels and concurrent high background, and point source soil lead levels were predictive of a high percentage of subjects with blood lead equal to or greater than 10 μg/dL (pe 10). Significant relationships were observed between mean blood lead level and fixed point source soil lead (r=0.93; R²=0.72 using a quadratic model) and between residential soil lead and fixed point source soil lead (r=0.90; R²=0.86 using a cubic model). This study suggests that point sources alone are not sufficient for predicting the relative risk of exposure to lead in the urban environment. These findings will be useful in defining regions for targeted or universal soil lead sampling by site type. Point sources have been observed to be predictive of mean blood lead at the regional level; however, this relationship alone was not sufficient to predict pe 10. It is concluded that when apparently undisturbed sites reveal high soil lead levels in addition to local point sources, dispersion of lead is widespread and will be associated with a high prevalence of elevated blood lead in children. Multiple strata sampling was shown to be useful in

  10. The multiple disk chopper neutron time-of-flight spectrometer at NIST

    International Nuclear Information System (INIS)

    Altorfer, F.B.; Cook, J.C.; Copley, J.R.D.

    1995-01-01

    A highly versatile multiple disk chopper neutron time-of-flight spectrometer is being installed at the Cold Neutron Research Facility of the National Institute of Standards and Technology. This new instrument will fill an important gap in the portfolio of neutron inelastic scattering spectrometers in North America. It will be used for a wide variety of experiments such as studies of magnetic and vibrational excitations, tunneling spectroscopy, and quasielastic neutron scattering investigations of local and translational diffusion. The instrument uses disk choppers to monochromate and pulse the incident beam, and the energy changes of scattered neutrons are determined from their times-of-flight to a large array of detectors. The disks and the guide have been designed to make the instrument readily adaptable to the specific performance requirements of experimenters. The authors present important aspects of the design, as well as estimated values of the flux at the sample and the energy resolution for elastic scattering. The instrument should be operational in 1996.

  11. Sampled-data and discrete-time H2 optimal control

    NARCIS (Netherlands)

    Trentelman, Harry L.; Stoorvogel, Anton A.

    1993-01-01

    This paper deals with the sampled-data H2 optimal control problem. Given a linear time-invariant continuous-time system, the problem of minimizing the H2 performance over all sampled-data controllers with a fixed sampling period can be reduced to a pure discrete-time H2 optimal control problem. This

  12. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions.

    Science.gov (United States)

    Tao, Guohua; Miller, William H

    2011-07-14

    An efficient time-dependent importance sampling method is developed for the Monte Carlo calculation of time correlation functions via the initial value representation (IVR) of semiclassical (SC) theory. A prefactor-free time-dependent sampling function weights the importance of a trajectory based on the magnitude of its contribution to the time correlation function, and global trial moves are used to facilitate efficient sampling of the phase space of initial conditions. The method can be generally applied to sampling rare events efficiently while avoiding being trapped in a local region of the phase space. Results presented in the paper for two system-bath models demonstrate the efficiency of this new importance sampling method for full SC-IVR calculations.
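
    The underlying importance-sampling idea, stripped of the semiclassical machinery, is to draw initial conditions from a sampling function concentrated where the contribution to the observable is large and to correct each sample with the ratio of target to sampling densities. The toy one-dimensional example below is only an illustration of that idea, not of the SC-IVR time-dependent sampling function.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy observable whose main contribution comes from a "rare" region near x = 3.
def contribution(x):
    return np.exp(-(x - 3.0) ** 2 / 0.02)

# Target density of initial conditions: standard normal (most mass far from x = 3).
def target_pdf(x):
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

n = 100_000

# Plain Monte Carlo: sample directly from the target density.
x_plain = rng.standard_normal(n)
est_plain = contribution(x_plain).mean()

# Importance sampling: draw from a density centred where the contribution is large,
# and reweight each sample by target_pdf / sampling_pdf.
mu_s, sig_s = 3.0, 0.3
x_is = rng.normal(mu_s, sig_s, n)
sampling_pdf = np.exp(-(x_is - mu_s) ** 2 / (2 * sig_s**2)) / (sig_s * np.sqrt(2 * np.pi))
weights = target_pdf(x_is) / sampling_pdf
est_is = (weights * contribution(x_is)).mean()

print("plain MC estimate:      ", est_plain)
print("importance sampling est:", est_is)
```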

  13. GENESIS 1.1: A hybrid-parallel molecular dynamics simulator with enhanced sampling algorithms on multiple computational platforms.

    Science.gov (United States)

    Kobayashi, Chigusa; Jung, Jaewoon; Matsunaga, Yasuhiro; Mori, Takaharu; Ando, Tadashi; Tamura, Koichi; Kamiya, Motoshi; Sugita, Yuji

    2017-09-30

    GENeralized-Ensemble SImulation System (GENESIS) is a software package for molecular dynamics (MD) simulation of biological systems. It is designed to extend limitations in system size and accessible time scale by adopting highly parallelized schemes and enhanced conformational sampling algorithms. In this new version, GENESIS 1.1, new functions and advanced algorithms have been added. The all-atom and coarse-grained potential energy functions used in AMBER and GROMACS packages now become available in addition to CHARMM energy functions. The performance of MD simulations has been greatly improved by further optimization, multiple time-step integration, and hybrid (CPU + GPU) computing. The string method and replica-exchange umbrella sampling with flexible collective variable choice are used for finding the minimum free-energy pathway and obtaining free-energy profiles for conformational changes of a macromolecule. These new features increase the usefulness and power of GENESIS for modeling and simulation in biological research. © 2017 Wiley Periodicals, Inc.

  14. Identification of multiple mRNA and DNA sequences from small tissue samples isolated by laser-assisted microdissection.

    Science.gov (United States)

    Bernsen, M R; Dijkman, H B; de Vries, E; Figdor, C G; Ruiter, D J; Adema, G J; van Muijen, G N

    1998-10-01

    Molecular analysis of small tissue samples has become increasingly important in biomedical studies. Using a laser dissection microscope and modified nucleic acid isolation protocols, we demonstrate that multiple mRNA as well as DNA sequences can be identified from a single-cell sample. In addition, we show that the specificity of procurement of tissue samples is not compromised by smear contamination resulting from scraping of the microtome knife during sectioning of lesions. The procedures described herein thus allow for efficient RT-PCR or PCR analysis of multiple nucleic acid sequences from small tissue samples obtained by laser-assisted microdissection.

  15. Multiple category-lot quality assurance sampling: a new classification system with application to schistosomiasis control.

    Directory of Open Access Journals (Sweden)

    Casey Olives

    Originally a binary classifier, Lot Quality Assurance Sampling (LQAS) has proven to be a useful tool for classification of the prevalence of Schistosoma mansoni into multiple categories (≤10%, >10% and <50%, ≥50%), and semi-curtailed sampling has been shown to effectively reduce the number of observations needed to reach a decision. To date the statistical underpinnings for Multiple Category-LQAS (MC-LQAS) have not received full treatment. We explore the analytical properties of MC-LQAS, and validate its use for the classification of S. mansoni prevalence in multiple settings in East Africa. We outline MC-LQAS design principles and formulae for operating characteristic curves. In addition, we derive the average sample number for MC-LQAS when utilizing semi-curtailed sampling and introduce curtailed sampling in this setting. We also assess the performance of MC-LQAS designs with maximum sample sizes of n=15 and n=25 via a weighted kappa-statistic using S. mansoni data collected in 388 schools from four studies in East Africa. Overall performance of MC-LQAS classification was high (kappa-statistic of 0.87). In three of the studies, the kappa-statistic for a design with n=15 was greater than 0.75. In the fourth study, where these designs performed poorly (kappa-statistic less than 0.50), the majority of observations fell in regions where potential error is known to be high. Employment of semi-curtailed and curtailed sampling further reduced the sample size by as many as 0.5 and 3.5 observations per school, respectively, without increasing classification error. This work provides the needed analytics to understand the properties of MC-LQAS for assessing the prevalence of S. mansoni and shows that in most settings a sample size of 15 children provides a reliable classification of schools.
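
    A minimal three-category LQAS decision rule and its operating characteristics can be sketched as below. The decision thresholds d1 and d2 are invented for illustration and are not the designs evaluated in the paper; curtailed and semi-curtailed stopping are also omitted.

```python
from scipy.stats import binom

# Illustrative MC-LQAS rule (thresholds are assumptions): with n sampled children
# and x observed positives, classify the true prevalence as
#   "low"  (<=10%)          if x <= d1
#   "mid"  (>10% and <50%)  if d1 < x <= d2
#   "high" (>=50%)          if x > d2
n, d1, d2 = 15, 2, 6

def classify(x: int) -> str:
    return "low" if x <= d1 else ("mid" if x <= d2 else "high")

def operating_characteristics(p: float) -> dict:
    """Probability of each classification when the true prevalence is p."""
    return {
        "low":  binom.cdf(d1, n, p),
        "mid":  binom.cdf(d2, n, p) - binom.cdf(d1, n, p),
        "high": 1.0 - binom.cdf(d2, n, p),
    }

for p in (0.05, 0.30, 0.60):
    print(p, operating_characteristics(p))
```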

  16. Leisure time activities of Iranian patients with multiple sclerosis: a qualitative study.

    Science.gov (United States)

    Hosseini, Seyed Mohammad Sadegh; Asgari, Ali; Rassafiani, Mehdi; Yazdani, Farzaneh; Mazdeh, Mehrdokht

    2016-01-01

    Leisure time is one of the most important aspects of life, especially for people with chronic diseases. The concept and types of leisure have frequently been evaluated in different socio-cultural populations. The aim of this study was to identify the nature of leisure activities among a sample of Iranian patients with multiple sclerosis (MS) and classify the identified types of activities in the context of Iranian culture. In this qualitative study, semi-structured interviews were used to gather data from 34 MS patients who were selected through purposive sampling. The interviews were continued up to the point of saturation. Content analysis was used to explore the experiences of the interviewees regarding their leisure activities. Six categories of leisure activities were extracted for the studied patients with MS, i.e. physical, social, individual, art/cultural, educational and spiritual/religious. The results represented the range and heterogeneity of leisure activities amongst the MS patients. Considering participation in spiritual/religious and social activities as a leisure time undertaking might reflect cultural diversity in the perception and use of time for recreation. For mental health promotion purposes, paying special attention to the types of activities that people of different socio-cultural backgrounds choose for their refreshment could help health care providers in giving tailored advice to patients with MS and other chronic debilitating diseases.

  17. Leisure time activities of Iranian patients with multiple sclerosis: a qualitative study

    Science.gov (United States)

    Hosseini, Seyed Mohammad Sadegh; Asgari, Ali; Rassafiani, Mehdi; Yazdani, Farzaneh; Mazdeh, Mehrdokht

    2016-01-01

    Background: Leisure time is one of the most important aspects of life, especially for people with chronic diseases. The concept and types of leisure have frequently been evaluated in different socio-cultural populations. The aim of this study was to identify the nature of leisure activities among a sample of Iranian patients with multiple sclerosis (MS) and classify the identified types of activities in the context of Iranian culture. Methods: In this qualitative study, semi-structured interviews were used to gather data from 34 MS patients who were selected through purposive sampling. The interviews were continued up to the point of saturation. Content analysis was used to explore the experiences of the interviewees regarding their leisure activities. Results: Six categories of leisure activities were extracted for the studied patients with MS, i.e. physical, social, individual, art/cultural, educational and spiritual/religious. Conclusion: The results represented the range and heterogeneity of leisure activities amongst the MS patients. Considering participation in spiritual/religious and social activities as a leisure time undertaking might reflect cultural diversity in the perception and use of time for recreation. For mental health promotion purposes, paying special attention to the types of activities that people of different socio-cultural backgrounds choose for their refreshment could help health care providers in giving tailored advice to patients with MS and other chronic debilitating diseases. PMID:27123437

  18. Sampling returns for realized variance calculations: tick time or transaction time?

    NARCIS (Netherlands)

    Griffin, J.E.; Oomen, R.C.A.

    2008-01-01

    This article introduces a new model for transaction prices in the presence of market microstructure noise in order to study the properties of the price process on two different time scales, namely, transaction time where prices are sampled with every transaction and tick time where prices are

  19. In-well time-of-travel approach to evaluate optimal purge duration during low-flow sampling of monitoring wells

    Science.gov (United States)

    Harte, Philip T.

    2017-01-01

    A common assumption with groundwater sampling is that low (time until inflow from the high hydraulic conductivity part of the screened formation can travel vertically in the well to the pump intake. Therefore, the length of the time needed for adequate purging prior to sample collection (called optimal purge duration) is controlled by the in-well, vertical travel times. A preliminary, simple analytical model was used to provide information on the relation between purge duration and capture of formation water for different gross levels of heterogeneity (contrast between low and high hydraulic conductivity layers). The model was then used to compare these time–volume relations to purge data (pumping rates and drawdown) collected at several representative monitoring wells from multiple sites. Results showed that computation of time-dependent capture of formation water (as opposed to capture of preexisting screen water), which were based on vertical travel times in the well, compares favorably with the time required to achieve field parameter stabilization. If field parameter stabilization is an indicator of arrival time of formation water, which has been postulated, then in-well, vertical flow may be an important factor at wells where low-flow sampling is the sample method of choice.

  20. Efficient Round-Trip Time Optimization for Replica-Exchange Enveloping Distribution Sampling (RE-EDS).

    Science.gov (United States)

    Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina

    2017-06-13

    Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, i.e., smoothness parameter(s) and energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE) or parallel tempering is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter choice problem could be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a certain replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate for the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single RE-EDS simulation 10 ns in length. Thus, RE-EDS presents an efficient method for the estimation of relative binding free energies.

  1. MicroRNA expression profiles of multiple system atrophy from formalin-fixed paraffin-embedded samples.

    Science.gov (United States)

    Wakabayashi, Koichi; Mori, Fumiaki; Kakita, Akiyoshi; Takahashi, Hitoshi; Tanaka, Shinya; Utsumi, Jun; Sasaki, Hidenao

    2016-12-02

    MicroRNAs (miRNAs) are small noncoding RNAs that regulate gene expression. Recently, we have shown that informative miRNA data can be derived from archived formalin-fixed paraffin-embedded (FFPE) samples from postmortem cases of amyotrophic lateral sclerosis and normal controls. miRNA analysis has now been performed on FFPE samples from affected brain regions in patients with multiple system atrophy (MSA) and the same areas in neurologically normal controls. We evaluated 50 samples from patients with MSA (n=13) and controls (n=13). Twenty-six samples were selected for miRNA analysis on the basis of the criteria reported previously: (i) a formalin fixation time of less than 4 weeks, (ii) a total RNA yield per sample of more than 500ng, and (iii) sufficient quality of the RNA electrophoresis pattern. These included 11 cases of MSA and 5 controls. Thus, the success rate for analysis of RNA from FFPE samples was 52% (26 of 50). For MSA, a total of 395 and 383 miRNAs were identified in the pons and cerebellum, respectively; 5 were up-regulated and 33 were down-regulated in the pons and 5 were up-regulated and 18 were down-regulated in the cerebellum. Several miRNAs down-regulated in the pons (miR-129-2-3p and miR-129-5p) and cerebellum (miR-129-2-3p, miR-129-5p and miR-132-3p) had already been identified in frozen cerebellum from MSA patients. These findings suggest that archived FFPE postmortem samples can be a valuable source for miRNA profiling in MSA. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  2. A permutation-based multiple testing method for time-course microarray experiments

    Directory of Open Access Journals (Sweden)

    George Stephen L

    2009-10-01

    Abstract Background Time-course microarray experiments are widely used to study the temporal profiles of gene expression. Storey et al. (2005) developed a method for analyzing time-course microarray studies that can be applied to discovering genes whose expression trajectories change over time within a single biological group, or those that follow different time trajectories among multiple groups. They estimated the expression trajectories of each gene using natural cubic splines under the null (no time-course) and alternative (time-course) hypotheses, and used a goodness of fit test statistic to quantify the discrepancy. The null distribution of the statistic was approximated through a bootstrap method. Gene expression levels in microarray data are often correlated in complicated ways. An accurate type I error control adjusting for multiple testing requires the joint null distribution of test statistics for a large number of genes. For this purpose, permutation methods have been widely used because of computational ease and their intuitive interpretation. Results In this paper, we propose a permutation-based multiple testing procedure based on the test statistic used by Storey et al. (2005). We also propose an efficient computation algorithm. Extensive simulations are conducted to investigate the performance of the permutation-based multiple testing procedure. The application of the proposed method is illustrated using the Caenorhabditis elegans dauer developmental data. Conclusion Our method is computationally efficient and applicable for identifying genes whose expression levels are time-dependent in a single biological group and for identifying the genes for which the time-profile depends on the group in a multi-group setting.
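
    The permutation idea for joint (family-wise) error control can be sketched with a single-step maxT procedure: group labels are permuted, the maximum statistic across genes is recorded for each permutation, and adjusted p-values are read off that joint null distribution. The sketch below uses a simple two-group mean-difference statistic on synthetic data rather than the spline goodness-of-fit statistic of Storey et al. (2005).

```python
import numpy as np

rng = np.random.default_rng(4)

def permutation_maxT(stats_fn, data, groups, n_perm=1000):
    """Family-wise-error-adjusted p-values via the permutation distribution of the
    maximum test statistic across genes (single-step maxT)."""
    observed = stats_fn(data, groups)                   # one statistic per gene
    max_null = np.empty(n_perm)
    for b in range(n_perm):
        perm = rng.permutation(groups)                  # permute the group labels
        max_null[b] = stats_fn(data, perm).max()
    adj_p = (1 + (max_null[None, :] >= observed[:, None]).sum(axis=1)) / (1 + n_perm)
    return observed, adj_p

def abs_mean_diff(data, groups):
    """|mean difference| between the two groups, computed gene-wise."""
    return np.abs(data[:, groups == 1].mean(axis=1) - data[:, groups == 0].mean(axis=1))

# Synthetic expression matrix: 200 genes x 12 samples, 5 genes truly different.
genes, samples = 200, 12
groups = np.array([0] * 6 + [1] * 6)
data = rng.standard_normal((genes, samples))
data[:5, groups == 1] += 2.0
stat, p_adj = permutation_maxT(abs_mean_diff, data, groups)
print("genes significant at FWER 0.05:", np.nonzero(p_adj < 0.05)[0])
```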

  3. A novel sampling method for multiple multiscale targets from scattering amplitudes at a fixed frequency

    Science.gov (United States)

    Liu, Xiaodong

    2017-08-01

    A sampling method using the scattering amplitude is proposed for shape and location reconstruction in inverse acoustic scattering problems. Only matrix multiplication is involved in the computation, so the novel sampling method is very easy and simple to implement. With the help of the factorization of the far field operator, we establish an inf-criterion for characterization of the underlying scatterers. This result is then used to give a lower bound of the proposed indicator functional for sampling points inside the scatterers. For sampling points outside the scatterers, we show that the indicator functional decays like the Bessel functions as the sampling point moves away from the boundary of the scatterers. We also show that the proposed indicator functional depends continuously on the scattering amplitude, which further implies that the novel sampling method is extremely stable with respect to errors in the data. In contrast to classical sampling methods such as the linear sampling method or the factorization method, from the numerical point of view the novel indicator takes its maximum near the boundary of the underlying target and decays like the Bessel functions as the sampling points move away from the boundary. The numerical simulations also show that the proposed sampling method can deal with the multiple multiscale case, even when the different components are close to each other.

  4. Multiple sample setup for testing the hydrothermal stability of adsorbents in thermal energy storage applications

    International Nuclear Information System (INIS)

    Fischer, Fabian; Laevemann, Eberhard

    2015-01-01

    Thermal energy storage based on adsorption and desorption of water on an adsorbent can achieve high energy storage densities. Many adsorbents lose adsorption capacity when operated under unfavourable hydrothermal conditions during adsorption and desorption. The stability of an adsorbent against stressing hydrothermal conditions is a key issue for its usability in adsorption thermal energy storage. We built an experimental setup that simultaneously controls the hydrothermal conditions of 16 samples arranged in a matrix of four temperatures and four water vapour pressures. This setup allows the testing of potential adsorbents between temperatures of 50 °C and 350 °C and water vapour pressures of up to 32 kPa. A measurement procedure that allows the detection of the hydrothermal stability of an adsorbent after defined time spans has been designed. We verified the functionality of the multiple sample measurements with a microporous adsorbent, a zeolite NaMSX. The hydrothermal stability of this zeolite is tested by water uptake measurements. A standard deviation lower than 1% of the 16 samples for detecting the hydrothermal stability enables setting different conditions in each sample cell. Further, we compared the water uptake measurements by measuring their adsorption isotherms with the volumetric device BELSORP Aqua 3 from Bel Japan. (paper)

  5. Multiple time step integrators in ab initio molecular dynamics

    International Nuclear Information System (INIS)

    Luehr, Nathan; Martínez, Todd J.; Markland, Thomas E.

    2014-01-01

    Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy

  6. Dynamic Bus Travel Time Prediction Models on Road with Multiple Bus Routes.

    Science.gov (United States)

    Bai, Cong; Peng, Zhong-Ren; Lu, Qing-Chang; Sun, Jian

    2015-01-01

    Accurate and real-time travel time information for buses can help passengers better plan their trips and minimize waiting times. A dynamic travel time prediction model for buses addressing the cases on road with multiple bus routes is proposed in this paper, based on support vector machines (SVMs) and Kalman filtering-based algorithm. In the proposed model, the well-trained SVM model predicts the baseline bus travel times from the historical bus trip data; the Kalman filtering-based dynamic algorithm can adjust bus travel times with the latest bus operation information and the estimated baseline travel times. The performance of the proposed dynamic model is validated with the real-world data on road with multiple bus routes in Shenzhen, China. The results show that the proposed dynamic model is feasible and applicable for bus travel time prediction and has the best prediction performance among all the five models proposed in the study in terms of prediction accuracy on road with multiple bus routes.

  7. Dynamic Bus Travel Time Prediction Models on Road with Multiple Bus Routes

    Science.gov (United States)

    Bai, Cong; Peng, Zhong-Ren; Lu, Qing-Chang; Sun, Jian

    2015-01-01

    Accurate and real-time travel time information for buses can help passengers better plan their trips and minimize waiting times. A dynamic travel time prediction model for buses addressing the cases on road with multiple bus routes is proposed in this paper, based on support vector machines (SVMs) and Kalman filtering-based algorithm. In the proposed model, the well-trained SVM model predicts the baseline bus travel times from the historical bus trip data; the Kalman filtering-based dynamic algorithm can adjust bus travel times with the latest bus operation information and the estimated baseline travel times. The performance of the proposed dynamic model is validated with the real-world data on road with multiple bus routes in Shenzhen, China. The results show that the proposed dynamic model is feasible and applicable for bus travel time prediction and has the best prediction performance among all the five models proposed in the study in terms of prediction accuracy on road with multiple bus routes. PMID:26294903

  8. Integration of paper-based microarray and time-of-flight secondary ion mass spectrometry (ToF-SIMS) for parallel detection and quantification of molecules in multiple samples automatically.

    Science.gov (United States)

    Chu, Kuo-Jui; Chen, Po-Chun; You, Yun-Wen; Chang, Hsun-Yun; Kao, Wei-Lun; Chu, Yi-Hsuan; Wu, Chen-Yi; Shyue, Jing-Jong

    2018-04-16

    With their low-cost fabrication and ease of modification, paper-based analytical devices have developed rapidly in recent years. Microarrays allow automatic analysis of multiple samples or multiple reactions with minimal sample consumption. While cellulose paper is generally used, its high background in spectrometry outside of the visible range has limited its application mostly to colorimetric analysis. In this work, glass-microfiber paper is used as the substrate for a microarray. The glass microfiber is essentially chemically inert SiOₓ, and the lower background from this inorganic microfiber can avoid interference from organic analytes in various spectrometers. However, the generally used wax printing fails to wet glass microfibers to form hydrophobic barriers. Therefore, to prepare the hydrophobic-hydrophilic pattern, the glass-microfiber paper was first modified with an octadecyltrichlorosilane (OTS) self-assembled monolayer (SAM) to make the paper hydrophobic. A hydrophilic microarray was then prepared using a CO₂ laser scriber that selectively removed the OTS layer in a designed pattern. One-microliter aqueous drops of peptides at various concentrations were then dispensed inside the round patterns where the OTS SAM was removed, while the surrounding area with the OTS layer served as a barrier to separate each drop. The resulting specimen of multiple spots was automatically analyzed with a time-of-flight secondary ion mass spectrometer (ToF-SIMS), and all of the secondary ions were collected. Among the various cluster ions that have been developed over the past decade, pulsed C₆₀⁺ was selected as the primary ion because of its high secondary ion intensity in the high mass region, its minimal alteration of the surface when operating within the static limit, and spatial resolution at the ∼μm level. In the resulting spectra, parent ions of various peptides (in the forms [M+H]⁺ and [M+Na]⁺) were readily identified for parallel detection of molecules in a mixture

  9. A digital silicon photomultiplier with multiple time-to-digital converters

    Energy Technology Data Exchange (ETDEWEB)

    Garutti, Erika [University Hamburg (Germany); Silenzi, Alessandro [DESY, Hamburg (Germany); Xu, Chen [DESY, Hamburg (Germany); University Hamburg (Germany)

    2013-07-01

    A silicon photomultiplier (SiPM) with pixel-level signal digitization and column-wise connected time-to-digital converters (TDCs) has been developed for an endoscopic Positron Emission Tomography (PET) detector. In a digital SiPM, each pixel consists of a single photon avalanche diode (SPAD) and circuit elements to optimize overall dark counts and temporal response. Compared with a conventional analog SiPM, the digital SiPM's direct signal route from SPAD to TDC improves the single photon time resolution. In addition, using multiple TDCs enables statistical estimation of the time of arrival when multiple photons are detected, such as in the readout of scintillation crystals. Characterization measurements of the prototype digital SiPM and a Monte Carlo simulation predicting the timing performance of the PET detector are presented.

  10. Evaluation of Multiple-Sampling Function used with a Microtek flatbed scanner for Radiation Dosimetry Calibration of EBT2 Film

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Liyun [Department of Medical Imaging and Radiological Sciences, I-Shou University, Kaohsiung 82445, Taiwan (China); Ho, Sheng-Yow [Department of Nursing, Chang Jung Christian University, Tainan 71101, Taiwan (China); Department of Radiation Oncology, Chi Mei Medical Center, Liouying, Tainan 73657, Taiwan (China); Ding, Hueisch-Jy [Department of Medical Imaging and Radiological Sciences, I-Shou University, Kaohsiung 82445, Taiwan (China); Hwang, Ing-Ming [Department of Medical Imaging and Radiology, Shu Zen College of Medicine and Management, Kaohsiung 82144, Taiwan (China); Chen, Pang-Yu, E-mail: pangyuchen@yahoo.com.tw [Department of Radiation Oncology, Sinlau Christian Hospital, Tainan 70142, Taiwan (China); Lee, Tsair-Fwu, E-mail: tflee@kuas.edu.tw [Medical Physics and Informatics Laboratory, Department of Electronics Engineering, National Kaohsiung University of Applied Sciences, Kaohsiung 80778, Taiwan (China)

    2016-10-01

    The radiochromic EBT2 film is a widely used quality assurance device for radiation therapy. This study evaluated the film calibration performance of the multiple-sampling function, a feature of the ScanWizard Pro scanning software provided by the manufacturer, when used with the Microtek 9800XL plus (9800XL+) flatbed scanner. Using the PDD method, each of the eight EBT2 films, four delivered 290 monitor units (MU) and four delivered 88 MU via 6-MV photon beams, was tightly sandwiched in a 30³-cm³ water-equivalent polystyrene phantom prior to irradiation. Before and after irradiation, all films were scanned using the Microtek 9800XL+ scanner with five different modes of the multiple-sampling function, which generates an image averaged over multiple samplings. The net optical densities (netOD) on the beam central axis of each film were assigned to the corresponding depth doses for calibration. For each sampling mode and either delivered MU, the depth-dose uncertainty of a single film from repeated scans and that of a single scan of the four films were analyzed. Finally, the calibration error and the combined calibration uncertainty between film-determined depth doses and delivered depth doses were calculated and evaluated for each sampling mode. All standard deviations and the calibration error were found to be unrelated to the number of sampling lines. The calibration error of the 2-line and 16-line modes was within 3 cGy and better than that of the other modes. The combined uncertainty of the 2-line mode was the lowest, generally less than 6 cGy except for delivered doses around 100 cGy. The evaluation described herein revealed that EBT2 film calibrated with the 2-line mode has relatively lower error, scanning time, and combined uncertainty. It is therefore recommended for routine EBT2 film calibration and verification of treatment plans.
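
    A minimal sketch of the calibration step described above: compute net optical density from mean pixel values before and after irradiation, then fit a simple dose-versus-netOD curve. The pixel values, doses, and the two-term fit form (with exponent n = 2.5) are illustrative assumptions, not values or the functional form from the study.

      import numpy as np

      pv_before = np.array([42000, 41800, 42100, 41950, 42050], float)   # hypothetical scans
      pv_after  = np.array([30500, 27200, 24100, 21900, 20300], float)
      dose_cGy  = np.array([50, 100, 150, 200, 250], float)              # delivered depth doses

      net_od = np.log10(pv_before / pv_after)

      # Common two-term form: D = a*netOD + b*netOD**n  (n fixed, here n = 2.5 as an assumption)
      n = 2.5
      A = np.column_stack([net_od, net_od**n])
      a, b = np.linalg.lstsq(A, dose_cGy, rcond=None)[0]

      dose_fit = a * net_od + b * net_od**n
      print("max residual (cGy):", np.max(np.abs(dose_fit - dose_cGy)).round(2))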

  11. Evaluation of Multiple-Sampling Function used with a Microtek flatbed scanner for Radiation Dosimetry Calibration of EBT2 Film

    International Nuclear Information System (INIS)

    Chang, Liyun; Ho, Sheng-Yow; Ding, Hueisch-Jy; Hwang, Ing-Ming; Chen, Pang-Yu; Lee, Tsair-Fwu

    2016-01-01

    The radiochromic EBT2 film is a widely used quality assurance device for radiation therapy. This study evaluated the film calibration performance of the multiple-sampling function, a feature of the ScanWizard Pro scanning software provided by the manufacturer, when used with the Microtek 9800XL plus (9800XL+) flatbed scanner. Using the PDD method, each of the eight EBT2 films, four delivered 290 monitor units (MU) and four delivered 88 MU via 6-MV photon beams, was tightly sandwiched in a 30³-cm³ water-equivalent polystyrene phantom prior to irradiation. Before and after irradiation, all films were scanned using the Microtek 9800XL+ scanner with five different modes of the multiple-sampling function, which generates an image averaged over multiple samplings. The net optical densities (netOD) on the beam central axis of each film were assigned to the corresponding depth doses for calibration. For each sampling mode and either delivered MU, the depth-dose uncertainty of a single film from repeated scans and that of a single scan of the four films were analyzed. Finally, the calibration error and the combined calibration uncertainty between film-determined depth doses and delivered depth doses were calculated and evaluated for each sampling mode. All standard deviations and the calibration error were found to be unrelated to the number of sampling lines. The calibration error of the 2-line and 16-line modes was within 3 cGy and better than that of the other modes. The combined uncertainty of the 2-line mode was the lowest, generally less than 6 cGy except for delivered doses around 100 cGy. The evaluation described herein revealed that EBT2 film calibrated with the 2-line mode has relatively lower error, scanning time, and combined uncertainty. It is therefore recommended for routine EBT2 film calibration and verification of treatment plans.

  12. Soil Gas Sample Handling: Evaluation of Water Removal and Sample Ganging

    Energy Technology Data Exchange (ETDEWEB)

    Fritz, Brad G. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Abrecht, David G. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hayes, James C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Mendoza, Donaldo P. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-10-31

    Soil gas sampling is currently conducted in support of Nuclear Test Ban Treaty verification. Soil gas samples are collected and analyzed for isotopes of interest. Issues that can impact the sampling and analysis of these samples include excess moisture and sample processing time. Here we discuss three potential improvements to the current sampling protocol: a desiccant for water removal, use of a molecular sieve to remove CO2 from the sample during collection, and a ganging manifold to allow composite analysis of multiple samples.

  13. Smith predictor-based multiple periodic disturbance compensation for long dead-time processes

    Science.gov (United States)

    Tan, Fang; Li, Han-Xiong; Shen, Ping

    2018-05-01

    Many disturbance rejection methods have been proposed for processes with dead time, but these existing methods may not work well under multiple periodic disturbances. In this paper, a multiple periodic disturbance rejection scheme is proposed under the Smith predictor configuration for processes with long dead time. One feedback loop is added to compensate for periodic disturbances while retaining the advantages of the Smith predictor. With information on the disturbance spectrum, the added feedback loop can remove multiple periodic disturbances effectively, and robust stability can be maintained, as shown through rigorous analysis. Finally, simulation examples demonstrate the effectiveness and robustness of the proposed method for processes with long dead time.

  14. Multiple-reflection time-of-flight mass spectrometry for in situ applications

    Science.gov (United States)

    Dickel, T.; Plaß, W. R.; Lang, J.; Ebert, J.; Geissel, H.; Haettner, E.; Jesch, C.; Lippert, W.; Petrick, M.; Scheidenberger, C.; Yavor, M. I.

    2013-12-01

    Multiple-reflection time-of-flight mass spectrometers (MR-TOF-MS) have recently been installed at different low-energy radioactive ion beam facilities. They are used as isobar separators with high ion capacity and as mass spectrometers with high mass resolving power and accuracy for short-lived nuclei. Furthermore, MR-TOF-MS have a huge potential for applications in other fields, such as chemistry, biology, medicine, space science, and homeland security. The development, commissioning and results of an MR-TOF-MS are presented, which serves as proof of principle that very high mass resolving powers (∼10⁵) can be achieved in a compact device (length ∼30 cm). Based on this work, an MR-TOF-MS for in situ application has been designed. For the first time, this device combines very high mass resolving power (>10⁵), mobility, and an atmospheric pressure inlet in one instrument. It will enable in situ measurements without sample preparation at very high mass accuracy. Envisaged applications of this mobile MR-TOF-MS are discussed.

  15. Compensation Methods for Non-uniform and Incomplete Data Sampling in High Resolution PET with Multiple Scintillation Crystal Layers

    International Nuclear Information System (INIS)

    Lee, Jae Sung; Kim, Soo Mee; Lee, Dong Soo; Hong, Jong Hong; Sim, Kwang Souk; Rhee, June Tak

    2008-01-01

    To establish methods for sinogram formation and correction so that the filtered backprojection (FBP) reconstruction algorithm can be appropriately applied to data acquired with a PET scanner having multiple scintillation crystal layers. Formats for raw PET data storage and methods for converting list-mode data to histograms and sinograms were optimized. To solve the various problems that occurred when the raw histogram was converted into a sinogram, an optimal sampling strategy and a sampling efficiency correction method were investigated. Gap compensation methods that are unique to this system were also investigated. All the sinogram data were reconstructed using a 2D filtered backprojection algorithm and compared to estimate the improvements offered by the correction algorithms. The optimal radial sampling interval and number of angular samples, in terms of the sampling theorem and the sampling efficiency correction algorithm, were pitch/2 and 120, respectively. By applying the sampling efficiency correction and gap compensation, artifacts and background noise in the reconstructed image could be reduced. A conversion method from histogram to sinogram was established for FBP reconstruction of data acquired using multiple scintillation crystal layers. This method will be useful for fast 2D reconstruction of multiple crystal layer PET data.

  16. Optimal time points sampling in pathway modelling.

    Science.gov (United States)

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as the related parameter estimation. However, few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models with only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the selection of time points in an optimal way so as to minimize the variance of the parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulties of selecting good initial values and becoming stuck in local optima that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.

  17. Interplay between multiple length and time scales in complex ...

    Indian Academy of Sciences (India)

    Administrator

    Processes in complex chemical systems, such as macromolecules, electrolytes, interfaces, ... by processes operating on a multiplicity of length ... real time. The design and interpretation of femtosecond experiments has required considerable ...

  18. Passage times of asymmetric anomalous walks with multiple paths

    International Nuclear Information System (INIS)

    Caceres, Manuel O; Insua, G Liliana

    2005-01-01

    We investigate the transient and the long-time behaviour of asymmetric anomalous walks in heterogeneous media. Two types of disorder are worked out explicitly: weak and strong disorder; in addition, the occurrence of disordered multiple paths is considered. We calculate the first passage time distribution of the associated stochastic transport process. We discuss the occurrence of the crossover from a power law to an exponential decay for the long-time behaviour of the distribution of the first passage times of disordered biased walks.

  19. Digital timing: sampling frequency, anti-aliasing filter and signal interpolation filter dependence on timing resolution

    International Nuclear Information System (INIS)

    Cho, Sanghee; Grazioso, Ron; Zhang Nan; Aykac, Mehmet; Schmand, Matthias

    2011-01-01

    The main focus of our study is to investigate how the performance of digital timing methods is affected by sampling rate, anti-aliasing and signal interpolation filters. We used the Nyquist sampling theorem to address some basic questions, such as: what is the minimum sampling frequency? How accurate will the signal interpolation be? How do we validate the timing measurements? The preferred sampling rate would be as low as possible, considering the high cost and power consumption of high-speed analog-to-digital converters. However, when the sampling rate is too low, the aliasing effect produces artifacts in the timing resolution estimates; the shape of the timing profile is distorted and the FWHM values of the profile fluctuate as the source location changes. Anti-aliasing filters are required in this case to avoid the artifacts, but the timing is degraded as a result. When the sampling rate is marginally over the Nyquist rate, proper signal interpolation is important. A sharp roll-off (higher order) filter is required to separate the baseband signal from its replicates to avoid aliasing, but in return the computational cost is higher. We demonstrated the analysis through a digital timing study using fast LSO scintillation crystals as used in time-of-flight PET scanners. From the study, we observed that there is no significant timing resolution degradation down to a 1.3 GHz sampling frequency, and the computation requirement for the signal interpolation is reasonably low. A so-called sliding test is proposed as a validation tool that checks for constant timing resolution of a given timing pick-off method regardless of source location. Lastly, a performance comparison for several digital timing methods is also shown.
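
    As a rough illustration of the interpolation question discussed above, the sketch below resamples a pulse sampled at 1.3 GHz to a finer grid and applies a simple fixed-fraction leading-edge pick-off. The pulse shape, interpolation factor, and threshold are illustrative assumptions, not the paper's timing method.

      import numpy as np
      from scipy.signal import resample

      fs = 1.3e9                                   # sampling frequency (Hz)
      t = np.arange(0, 200e-9, 1 / fs)
      pulse = np.exp(-(t - 60e-9) / 40e-9) * (t >= 60e-9)          # idealized LSO-like decay
      pulse *= (1 - np.exp(-np.clip(t - 60e-9, 0, None) / 2e-9))   # fast rising edge

      factor = 8                                   # interpolation factor
      fine, t_fine = resample(pulse, len(pulse) * factor, t=t)     # band-limited interpolation

      threshold = 0.2 * fine.max()                 # fixed-fraction leading-edge threshold
      idx = np.argmax(fine > threshold)            # first interpolated sample above threshold
      print("pick-off time (ns):", round(t_fine[idx] * 1e9, 3))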

  20. Soil erosion under multiple time-varying rainfall events

    Science.gov (United States)

    Heng, B. C. Peter; Barry, D. Andrew; Jomaa, Seifeddine; Sander, Graham C.

    2010-05-01

    Soil erosion is a function of many factors and process interactions. An erosion event produces changes in surface soil properties such as texture and hydraulic conductivity. These changes in turn alter the erosion response to subsequent events. Laboratory-scale soil erosion studies have typically focused on single independent rainfall events with constant rainfall intensities. This study investigates the effect of multiple time-varying rainfall events on soil erosion using the EPFL erosion flume. The rainfall simulator comprises ten Veejet nozzles mounted on oscillating bars 3 m above a 6 m × 2 m flume. Spray from the nozzles is applied onto the soil surface in sweeps; rainfall intensity is thus controlled by varying the sweeping frequency. Freshly prepared soil with a uniform slope was subjected to five rainfall events at daily intervals. In each 3-h event, rainfall intensity was ramped up linearly to a maximum of 60 mm/h and then stepped down to zero. Runoff samples were collected and analysed for particle size distribution (PSD) as well as total sediment concentration. We investigate whether there is a hysteretic relationship between sediment concentration and discharge within each event and how this relationship changes from event to event. Trends in the PSD of the eroded sediment are discussed and correlated with changes in sediment concentration. Close-up imagery of the soil surface following each event highlights changes in surface soil structure with time. This study enhances our understanding of erosion processes in the field, with corresponding implications for soil erosion modelling.

  1. A comparative proteomics method for multiple samples based on a 18O-reference strategy and a quantitation and identification-decoupled strategy.

    Science.gov (United States)

    Wang, Hongbin; Zhang, Yongqian; Gui, Shuqi; Zhang, Yong; Lu, Fuping; Deng, Yulin

    2017-08-15

    Comparisons across large numbers of samples are frequently necessary in quantitative proteomics. Many quantitative methods used in proteomics are based on stable isotope labeling, but most of these are only useful for comparing two samples. For up to eight samples, the iTRAQ labeling technique can be used. For greater numbers of samples, the label-free method has been used, but this method has been criticized for low reproducibility and accuracy. An ingenious strategy has been introduced that compares each sample against an 18O-labeled reference sample created by pooling equal amounts of all samples. However, it is necessary to use proportion-known protein mixtures to investigate and evaluate this new strategy. Another problem for comparative proteomics of multiple samples is the poor coincidence and reproducibility of protein identification results across samples. In the present study, a method combining the 18O-reference strategy with a quantitation and identification-decoupled strategy was investigated with proportion-known protein mixtures. The results clearly demonstrated that the 18O-reference strategy had greater accuracy and reliability than previously used comparison methods based on transferring comparisons or label-free strategies. In the decoupling strategy, the quantification data acquired by LC-MS and the identification data acquired by LC-MS/MS are matched and correlated to identify differentially expressed proteins according to retention time and accurate mass. This strategy makes protein identification possible for all samples using a single pooled sample, therefore giving good reproducibility in protein identification across multiple samples, and allows peptide identification to be optimized separately so as to identify more proteins. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Adaptive Sampling of Time Series During Remote Exploration

    Science.gov (United States)

    Thompson, David R.

    2012-01-01

    This work deals with the challenge of online adaptive data collection in a time series. A remote sensor or explorer agent adapts its rate of data collection in order to track anomalous events while obeying constraints on time and power. This problem is challenging because the agent has limited visibility (all its datapoints lie in the past) and limited control (it can only decide when to collect its next datapoint). This problem is treated from an information-theoretic perspective, fitting a probabilistic model to collected data and optimizing the future sampling strategy to maximize information gain. The performance characteristics of stationary and nonstationary Gaussian process models are compared. Self-throttling sensors could benefit environmental sensor networks and monitoring as well as robotic exploration. Explorer agents can improve performance by adjusting their data collection rate, preserving scarce power or bandwidth resources during uninteresting times while fully covering anomalous events of interest. For example, a remote earthquake sensor could conserve power by limiting its measurements during normal conditions and increasing its cadence during rare earthquake events. A similar capability could improve sensor platforms traversing a fixed trajectory, such as an exploration rover transect or a deep space flyby. These agents can adapt observation times to improve sample coverage during moments of rapid change. An adaptive sampling approach couples sensor autonomy, instrument interpretation, and sampling. The challenge is addressed as an active learning problem, which already has extensive theoretical treatment in the statistics and machine learning literature. A statistical Gaussian process (GP) model is employed to guide sample decisions that maximize information gain. Nonstationary (e.g., time-varying) covariance relationships permit the system to represent and track local anomalies, in contrast with current GP approaches. Most common GP models
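
    A minimal sketch of the information-gain idea: fit a Gaussian process to the samples collected so far and choose the next sample time where the predictive uncertainty is largest. The kernel, candidate grid, and toy signal are assumptions; the paper's nonstationary covariance models are not reproduced here.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(1)
      signal = lambda t: np.sin(0.3 * t) + (t > 60) * 2.0       # toy series with an "event" at t = 60

      t_obs = np.array([0.0, 20.0, 40.0])                       # times already sampled
      y_obs = signal(t_obs) + rng.normal(0, 0.05, t_obs.size)

      for _ in range(5):                                        # pick 5 more sample times
          gp = GaussianProcessRegressor(RBF(10.0) + WhiteKernel(0.01), normalize_y=True)
          gp.fit(t_obs[:, None], y_obs)
          candidates = np.linspace(0, 100, 501)
          _, std = gp.predict(candidates[:, None], return_std=True)
          t_next = candidates[np.argmax(std)]                   # largest predictive uncertainty
          t_obs = np.append(t_obs, t_next)
          y_obs = np.append(y_obs, signal(t_next) + rng.normal(0, 0.05))

      print(np.sort(t_obs).round(1))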

  3. Comparing cluster-level dynamic treatment regimens using sequential, multiple assignment, randomized trials: Regression estimation and sample size considerations.

    Science.gov (United States)

    NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel

    2017-08-01

    Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.

  4. Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient

    Science.gov (United States)

    Krishnamoorthy, K.; Xia, Yanping

    2008-01-01

    The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…

  5. Evaluation of Multiple Linear Regression-Based Limited Sampling Strategies for Enteric-Coated Mycophenolate Sodium in Adult Kidney Transplant Recipients.

    Science.gov (United States)

    Brooks, Emily K; Tett, Susan E; Isbel, Nicole M; McWhinney, Brett; Staatz, Christine E

    2018-04-01

    Although multiple linear regression-based limited sampling strategies (LSSs) have been published for enteric-coated mycophenolate sodium, none have been evaluated for the prediction of subsequent mycophenolic acid (MPA) exposure. This study aimed to examine the predictive performance of the published LSS for the estimation of future MPA area under the concentration-time curve from 0 to 12 hours (AUC0-12) in renal transplant recipients. Total MPA plasma concentrations were measured in 20 adult renal transplant patients on 2 occasions a week apart. All subjects received concomitant tacrolimus and were approximately 1 month after transplant. Samples were taken at 0, 0.33, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 6, and 8 hours and 0, 0.25, 0.5, 0.75, 1, 1.25, 1.5, 2, 3, 4, 6, 9, and 12 hours after dose on the first and second sampling occasion, respectively. Predicted MPA AUC0-12 was calculated using 19 published LSSs and data from the first or second sampling occasion for each patient and compared with the second occasion full MPA AUC0-12 calculated using the linear trapezoidal rule. Bias (median percentage prediction error) and imprecision (median absolute prediction error) were determined. Median percentage prediction error and median absolute prediction error for the prediction of full MPA AUC0-12 were multiple linear regression-based LSS was not possible without concentrations up to at least 8 hours after the dose.
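
    For context, the sketch below shows how the evaluation quantities named above are typically computed: a trapezoidal AUC0-12 from observed concentrations, and median percentage prediction error (bias) and median absolute prediction error (imprecision) for LSS-predicted AUCs. The concentration and AUC values are illustrative, not study data.

      import numpy as np

      t = np.array([0, 0.5, 1, 2, 3, 4, 6, 9, 12], float)              # sampling times (h)
      conc = np.array([1.2, 8.5, 12.0, 6.1, 4.0, 3.2, 2.5, 1.8, 1.4])  # MPA concentrations (mg/L)

      auc_full = np.trapz(conc, t)                                     # reference AUC0-12 (linear trapezoidal rule)

      def prediction_errors(predicted, observed):
          pe = 100.0 * (np.asarray(predicted) - np.asarray(observed)) / np.asarray(observed)
          return np.median(pe), np.median(np.abs(pe))                  # bias, imprecision

      # e.g. AUCs predicted by some limited sampling equation for several patients
      auc_pred = [41.0, 38.5, 45.2]
      auc_obs  = [43.1, 40.0, 42.8]
      print(round(auc_full, 1), prediction_errors(auc_pred, auc_obs))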

  6. Dynamics of Time Delay-Induced Multiple Synchronous Behaviors in Inhibitory Coupled Neurons

    Science.gov (United States)

    Gu, Huaguang; Zhao, Zhiguo

    2015-01-01

    The inhibitory synapse can induce synchronous behaviors different from the anti-phase synchronous behaviors that have been reported in recent studies. In the present paper, synchronous behaviors are investigated in a motif model composed of reciprocally inhibitory coupled neurons with endogenous bursting and time delay. When the coupling strength is weak, synchronous behavior appears at a single interval of time delay within a bursting period. When the coupling strength is strong, multiple synchronous behaviors appear at different intervals of time delay within a bursting period. The different bursting patterns of the synchronous behaviors, and the time delays and coupling strengths that can induce them, can be well interpreted by the dynamics of the endogenous bursting pattern of an isolated neuron, acquired by the fast-slow dissection method, combined with the inhibitory coupling current. For an isolated neuron, when a negative impulsive current with suitable strength is applied at different phases of the bursting, multiple different bursting patterns can be induced. For a neuron in the motif, the inhibitory coupling current, whose application time and strength are modulated by the time delay and coupling strength, can cause single or multiple synchronous firing patterns, like the negative impulsive current, when the time delay and coupling strength are suitable. The difference from previously reported multiple synchronous behaviors, which appear at time delays longer than a period of the endogenous firing, is discussed. The results present novel examples of synchronous behaviors in neuronal networks with inhibitory synapses and provide a reasonable explanation. PMID:26394224

  7. Sample Size Estimation for Negative Binomial Regression Comparing Rates of Recurrent Events with Unequal Follow-Up Time.

    Science.gov (United States)

    Tang, Yongqiang

    2015-01-01

    A sample size formula is derived for negative binomial regression for the analysis of recurrent events, in which subjects can have unequal follow-up time. We obtain sharp lower and upper bounds on the required size, which are easy to compute. The upper bound is generally only slightly larger than the required size and hence can be used to approximate the sample size. The lower and upper size bounds can be decomposed into two terms. The first term relies on the mean number of events in each group, and the second term depends on two factors that measure, respectively, the extent of between-subject variability in event rates and follow-up time. Simulation studies are conducted to assess the performance of the proposed method. An application of our formulae to a multiple sclerosis trial is provided.

  8. Spatial and Temporal Distribution of Multiple Cropping Indices in the North China Plain Using a Long Remote Sensing Data Time Series

    Directory of Open Access Journals (Sweden)

    Yan Zhao

    2016-04-01

    Full Text Available Multiple cropping provides China with a very important system of intensive cultivation, and can effectively enhance the efficiency of farmland use while improving regional food production and security. A multiple cropping index (MCI), which represents the intensity of multiple cropping and reflects the effects of climate change on agricultural production and cropping systems, often serves as a useful parameter. Therefore, monitoring the dynamic changes in the MCI of farmland over a large area using remote sensing data is essential. For this purpose, nearly 30 years of MCIs for dry land in the North China Plain (NCP) were efficiently extracted from remotely sensed leaf area index (LAI) data from the Global LAnd Surface Satellite (GLASS), and the characteristics of the spatial-temporal change in MCI were analyzed. First, 2162 typical arable sample sites were selected based on a gridded spatial sampling strategy, and the LAI information was extracted for the samples. Second, a Savitzky-Golay filter was used to smooth the LAI time-series data of the samples, and the MCIs of the samples were obtained using a second-order difference algorithm. Finally, the geostatistical kriging method was employed to map the spatial distribution of the MCIs and to obtain a time-series dataset of the MCIs of dry land over the NCP. The results showed that all of the MCIs in the NCP exhibited an increasing trend over the entire study period and increased most rapidly from 1982 to 2002. Spatially, MCIs decreased from south to north, and high MCIs were mainly concentrated in the relatively flat areas. In addition, the partial spatial changes of MCIs had clear geographical characteristics, with the largest change in Henan Province.
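
    A small sketch of the extraction step described above, assuming an annual LAI series: smooth with a Savitzky-Golay filter and count growing-season peaks as the cropping index. The window length, polynomial order, and peak criteria are illustrative assumptions rather than the paper's exact settings.

      import numpy as np
      from scipy.signal import savgol_filter, find_peaks

      rng = np.random.default_rng(2)
      doy = np.arange(0, 365, 8)                                    # 8-day composites
      lai = (np.exp(-((doy - 120) / 30.0) ** 2) * 3.0 +             # winter-wheat-like peak
             np.exp(-((doy - 240) / 30.0) ** 2) * 2.5 +             # summer-maize-like peak
             rng.normal(0, 0.15, doy.size))

      lai_smooth = savgol_filter(lai, window_length=9, polyorder=2)

      # Peaks must be reasonably high and separated to count as separate crops
      peaks, _ = find_peaks(lai_smooth, height=1.0, distance=8)
      print("multiple cropping index:", len(peaks))                  # 2 -> double cropping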

  9. Analysis of multiple single nucleotide polymorphisms (SNP) on DNA traces from plasma and dried blood samples

    NARCIS (Netherlands)

    Catsburg, Arnold; van der Zwet, Wil C.; Morre, Servaas A.; Ouburg, Sander; Vandenbroucke-Grauls, Christina M. J. E.; Savelkoul, Paul H. M.

    2007-01-01

    Reliable analysis of single nucleotide polymorphisms (SNPs) in DNA derived from samples containing low numbers of cells or from suboptimal sources can be difficult. A new procedure to characterize multiple SNPs in traces of DNA from plasma and old dried blood samples was developed. Six SNPs in the

  10. A replica exchange transition interface sampling method with multiple interface sets for investigating networks of rare events

    Science.gov (United States)

    Swenson, David W. H.; Bolhuis, Peter G.

    2014-07-01

    The multiple state transition interface sampling (TIS) framework in principle allows the simulation of a large network of complex rare event transitions, but in practice suffers from convergence problems. To improve convergence, we combine multiple state TIS [J. Rogal and P. G. Bolhuis, J. Chem. Phys. 129, 224107 (2008)] with replica exchange TIS [T. S. van Erp, Phys. Rev. Lett. 98, 268301 (2007)]. In addition, we introduce multiple interface sets, which allow more than one order parameter to be defined for each state. We illustrate the methodology on a model system of multiple independent dimers, each with two states. For reaction networks with up to 64 microstates, we determine the kinetics in the microcanonical ensemble, and discuss the convergence properties of the sampling scheme. For this model, we find that the kinetics depend on the instantaneous composition of the system. We explain this dependence in terms of the system's potential and kinetic energy.

  11. Multiplicity counting from fission detector signals with time delay effects

    Science.gov (United States)

    Nagy, L.; Pázsit, I.; Pál, L.

    2018-03-01

    In recent work, we have developed the theory of using the first three auto- and joint central moments of the currents of up to three fission chambers to extract the singles, doubles and triples count rates of traditional multiplicity counting (Pázsit and Pál, 2016; Pázsit et al., 2016). The objective is to elaborate a method for determining the fissile mass, neutron multiplication, and (α, n) neutron emission rate of an unknown assembly of fissile material from the statistics of the fission chamber signals, analogous to the traditional multiplicity counting methods with detectors in pulse mode. Such a method would be an alternative to He-3 detector systems and would be free from the dead time problems encountered in high counting rate applications, for example the assay of spent nuclear fuel. A significant restriction of our previous work was that all neutrons born in a source event (spontaneous fission) were assumed to be detected simultaneously, which is not fulfilled in reality. In the present work, this restriction is eliminated by assuming an independent, identically distributed random time delay for all neutrons arising from one source event. Expressions are derived for the same auto- and joint central moments of the detector current(s) as in the previous case, expressed with the singles, doubles, and triples (S, D and T) count rates. It is shown that if the time dispersion of neutron detections is of the same order of magnitude as the detector pulse width, as it typically is in measurements of fast neutrons, the multiplicity rates can still be extracted from the moments of the detector current, although with more involved calibration factors. The presented formulae, and hence the performance of the proposed method, are tested with both analytical models of the time delay and numerical simulations. Methods are also suggested for modifying the approach for large time delay effects (for thermalised neutrons).

  12. Timing of multiple overlapping intervals : How many clocks do we have?

    NARCIS (Netherlands)

    van Rijn, Hedderik; Taatgen, Niels A.

    2008-01-01

    Humans perceive and reproduce short intervals of time (e.g. 1-60 s) relatively accurately, and are capable of timing multiple overlapping intervals if these intervals are presented in different modalities [e.g., Rousseau, L., & Rousseau, RL (1996). Stop-reaction time and the internal clock.

  13. Two-dimensional phononic crystals with time-varying properties: a multiple scattering analysis

    International Nuclear Information System (INIS)

    Wright, D W; Cobbold, R S C

    2010-01-01

    Multiple scattering theory is a versatile two- and three-dimensional method for characterizing the acoustic wave transmission through many scatterers. It provides analytical solutions to wave propagation in scattering structures, and its computational complexity grows logarithmically with the number of scatterers. In this paper we show how the 2D method can be adapted to include the effects of time-varying material parameters. Specifically, a new T-matrix is defined to include the effects of frequency modulation that occurs in time-varying phononic crystals. Solutions were verified against finite difference time domain (FDTD) simulations and showed excellent agreement. This new method enables fast characterization of time-varying phononic crystals without the need to resort to lengthy FDTD simulations. Also, the method of combining T-matrices to form the T-supermatrix remains unchanged provided that the new matrix definitions are used. The method is quite compatible with existing implementations of multiple scattering theory and could be readily extended to three-dimensional multiple scattering theory

  14. Time-series analysis of multiple foreign exchange rates using time-dependent pattern entropy

    Science.gov (United States)

    Ishizaki, Ryuji; Inoue, Masayoshi

    2018-01-01

    Time-dependent pattern entropy is a method that reduces variations to binary symbolic dynamics and considers the pattern of symbols in a sliding temporal window. We use this method to analyze the instability of daily variations in multiple foreign exchange rates. The time-dependent pattern entropy of 7 foreign exchange rates (AUD/USD, CAD/USD, CHF/USD, EUR/USD, GBP/USD, JPY/USD, and NZD/USD) was found to be high for a long period after the Lehman shock and low for a long period after March 2012. We also compared the correlation matrices between exchange rates in periods of high and low time-dependent pattern entropy.
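
    A minimal sketch of time-dependent pattern entropy as described above: daily changes are reduced to binary up/down symbols, and the Shannon entropy of length-k symbol patterns is computed within a sliding window. The window length and pattern length k are illustrative choices.

      import numpy as np
      from collections import Counter

      def pattern_entropy(rates, window=100, k=4):
          symbols = (np.diff(rates) > 0).astype(int)               # 1 = rate went up, 0 = down
          out = []
          for end in range(window, symbols.size + 1):
              w = symbols[end - window:end]
              patterns = [tuple(w[i:i + k]) for i in range(window - k + 1)]
              counts = np.array(list(Counter(patterns).values()), float)
              p = counts / counts.sum()
              out.append(-np.sum(p * np.log2(p)))                  # entropy of the pattern distribution
          return np.array(out)

      rng = np.random.default_rng(3)
      fx = np.cumsum(rng.normal(0, 1, 600)) + 100.0                # toy exchange-rate series
      print(pattern_entropy(fx)[:5].round(3))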

  15. Identification of continuous-time systems from samples of input ...

    Indian Academy of Sciences (India)

    Abstract. This paper presents an introductory survey of the methods that have been developed for identification of continuous-time systems from samples of input±output data. The two basic approaches may be described as (i) the indirect method, where first a discrete-time model is estimated from the sampled data and then ...

  16. The potential of TaqMan Array Cards for detection of multiple biological agents by real-time PCR.

    Directory of Open Access Journals (Sweden)

    Phillip A Rachwal

    Full Text Available The TaqMan Array Card architecture, normally used for gene expression studies, was evaluated for its potential to detect multiple bacterial agents by real-time PCR. Ten PCR assays targeting five biological agents (Bacillus anthracis, Burkholderia mallei, Burkholderia pseudomallei, Francisella tularensis, and Yersinia pestis were incorporated onto Array Cards. A comparison of PCR performance of each PCR in Array Card and singleplex format was conducted using DNA extracted from pure bacterial cultures. When 100 fg of agent DNA was added to Array Card channels the following levels of agent detection (where at least one agent PCR replicate returned a positive result were observed: Y. pestis 100%, B. mallei & F. tularensis 93%; B. anthracis 71%; B. pseudomallei 43%. For B. mallei & pseudomallei detection the BPM2 PCR, which detects both species, outperformed PCR assays specific to each organism indicating identification of the respective species would not be reproducible at the 100 fg level. Near 100% levels of detection were observed when 100 fg of DNA was added to each PCR in singleplex format with singleplex PCRs also returning sporadic positives at the 10 fg per PCR level. Before evaluating the use of Array Cards for the testing of environmental and clinical sample types, with potential levels of background DNA and PCR inhibitors, users would therefore have to accept a 10-fold reduction in sensitivity of PCR assays on the Array Card format, in order to benefit for the capacity to test multiple samples for multiple agents. A two PCR per agent strategy would allow the testing of 7 samples for the presence of 11 biological agents or 3 samples for 23 biological agents per card (with negative control channels.

  17. Analytical characterization of high-level mixed wastes using multiple sample preparation treatments

    International Nuclear Information System (INIS)

    King, A.G.; Baldwin, D.L.; Urie, M.W.; McKinley, S.G.

    1994-01-01

    The Analytical Chemistry Laboratory at the Pacific Northwest Laboratory in Richland, Washington, is actively involved in performing analytical characterization of high-level mixed waste from Hanford's single shell and double shell tank characterization programs. A full suite of analyses is typically performed on homogenized tank core samples. These analytical techniques include inductively-coupled plasma-atomic emission spectroscopy, total organic carbon methods and radiochemistry methods, as well as many others, all requiring some type of remote sample-preparation treatment to solubilize the tank sludge material for analysis. Most of these analytical methods typically use a single sample-preparation treatment, inherently providing elemental information only. To better understand and interpret tank chemistry and assist in identifying chemical compounds, selected analytical methods are performed using multiple sample-preparation treatments. The sample preparation treatments used at Pacific Northwest Laboratory for this work with high-level mixed waste include caustic fusion, acid digestion, and water leach. The type of information available by comparing results from different sample-prep treatments includes evidence for the presence of refractory compounds, acid-soluble compounds, or water-soluble compounds. Problems unique to the analysis of Hanford tank wastes are discussed. Selected results from the Hanford single shell ferrocyanide tank, 241-C-109, are presented, and the resulting conclusions are discussed

  18. Multiple Time Series Ising Model for Financial Market Simulations

    International Nuclear Information System (INIS)

    Takaishi, Tetsuya

    2015-01-01

    In this paper we propose an Ising model which simulates multiple financial time series. Our model introduces an interaction that couples each spin to the spins of the other systems. Simulations of our model show that the time series exhibit the volatility clustering often observed in real financial markets. Furthermore, we find non-zero cross-correlations between the volatilities from our model. Thus our model can simulate stock markets in which the volatilities of stocks are mutually correlated.
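
    A toy sketch of the kind of coupled-spin model described above: each "market" is a small Ising system whose spins feel their own mean field plus the magnetizations of the other markets, and a return-like series is read off from the magnetization. Parameters and the returns definition are assumptions for illustration only, not the authors' model.

      import numpy as np

      rng = np.random.default_rng(4)
      M, N, steps = 3, 200, 2000
      J_intra, J_cross, beta = 1.0, 0.3, 0.9
      spins = rng.choice([-1, 1], size=(M, N))
      returns = np.zeros((steps, M))

      for t in range(steps):
          mags = spins.mean(axis=1)
          for m in range(M):
              i = rng.integers(N)
              cross = J_cross * (mags.sum() - mags[m])             # coupling to the other markets
              h = J_intra * mags[m] + cross                        # local field on spin (m, i)
              dE = 2.0 * spins[m, i] * h                           # energy change of flipping that spin
              if dE <= 0 or rng.random() < np.exp(-beta * dE):     # Metropolis acceptance
                  spins[m, i] *= -1
          returns[t] = spins.mean(axis=1)                          # "return" ~ magnetization

      print(np.corrcoef(np.abs(returns).T).round(2))               # cross-correlations of volatility proxies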

  19. Time-resolved photoelectron spectroscopy and ab initio multiple spawning studies of hexamethylcyclopentadiene

    DEFF Research Database (Denmark)

    Wolf, T. J. A.; Kuhlman, Thomas Scheby; Schalk, O.

    2014-01-01

    Time-resolved photoelectron spectroscopy and ab initio multiple spawning were applied to the ultrafast non-adiabatic dynamics of hexamethylcyclopentadiene. The high level of agreement between experiment and theory associates wavepacket motion with a distinct degree of freedom.

  20. Pico-litre Sample Introduction and Acoustic Levitation Systems for Time Resolved Protein Crystallography Experiments at XFELS

    Directory of Open Access Journals (Sweden)

    Peter Docker

    2017-07-01

    Full Text Available The system described in this work is a variant of traditional acoustic levitation, first described by Marzo et al. It uses multiple transducers, eliminating the requirement for a mirror surface and allowing an open geometry: the sound from the multiple transducers combines to generate an acoustic trap configured to catch picolitres of crystal slurry. These acoustic traps also have the significant benefit of eliminating potential beam attenuation due to support structures or microfluidic devices. Additionally, they meet the need to eliminate sample environments when experiments are carried out at an X-ray Free Electron Laser (XFEL) such as the Linac Coherent Light Source (LCLS), as any sample environment would not survive exposure to the X-ray beam. XFELs generate light a billion times brighter than the sun. The application of this system will be to examine turnover in beta-lactamase proteins, which are responsible for bacteria developing antibiotic resistance and are therefore of significant importance to future world health. The system will allow diffraction data to be collected before and after turnover, allowing a better understanding of the underlying processes. The authors first described this work at Nanotech 2017.

  1. Dual Source Time-of-flight Mass Spectrometer and Sample Handling System

    Science.gov (United States)

    Brinckerhoff, W.; Mahaffy, P.; Cornish, T.; Cheng, A.; Gorevan, S.; Niemann, H.; Harpold, D.; Rafeek, S.; Yucht, D.

    We present details of an instrument under development for potential NASA missions to planets and small bodies. The instrument comprises a dual ionization source (laser and electron impact) time-of-flight mass spectrometer (TOF-MS) and a carousel sample handling system for in situ analysis of solid materials acquired by, e.g., a coring drill. This DSTOF instrument could be deployed on a fixed lander or a rover, and has an open design that would accommodate measurements by additional instruments. The sample handling system (SHS) is based on a multi-well carousel, originally designed for Champollion/DS4. Solid samples, in the form of drill cores or as loose chips or fines, are inserted through an access port, sealed in vacuum, and transported around the carousel to a pyrolysis cell and/or directly to the TOF-MS inlet. Samples at the TOF-MS inlet are xy-addressable for the laser or optical microprobe. Cups may be ejected from their holders for analyzing multiple samples or caching them for return. Samples are analyzed with laser desorption and evolved-gas/electron-impact sources. The dual ion source permits studies of elemental, isotopic, and molecular composition of unprepared samples with a single mass spectrometer. Pulsed laser desorption permits the measurement of abundance and isotope ratios of refractory elements, as well as the detection of high-mass organic molecules in solid samples. Evolved gas analysis permits similar measurements of the more volatile species in solids and aerosols. The TOF-MS is based on previous miniature prototypes at JHU/APL that feature high sensitivity and a wide mass range. The laser mode, in which the sample cup is directly below the TOF-MS inlet, permits both ablation and desorption measurements, to cover elemental and molecular species, respectively. In the evolved gas mode, sample cups are raised into a small pyrolysis cell and heated, producing a neutral gas that is electron ionized and pulsed into the TOF-MS. (Any imaging

  2. HMC algorithm with multiple time scale integration and mass preconditioning

    Science.gov (United States)

    Urbach, C.; Jansen, K.; Shindler, A.; Wenger, U.

    2006-01-01

    We present a variant of the HMC algorithm with mass preconditioning (Hasenbusch acceleration) and multiple time scale integration. We have tested this variant for standard Wilson fermions at β=5.6 and at pion masses ranging from 380 to 680 MeV. We show that in this situation its performance is comparable to the recently proposed HMC variant with domain decomposition as preconditioner. We give an update of the "Berlin Wall" figure, comparing the performance of our variant of the HMC algorithm to other published performance data. Advantages of the HMC algorithm with mass preconditioning and multiple time scale integration are that it is straightforward to implement and can be used in combination with a wide variety of lattice Dirac operators.
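
    For illustration, the sketch below shows a generic two-level (Sexton-Weingarten style) leapfrog integrator in which a cheap, stiff force is integrated on a finer time scale inside each coarse step of an expensive, soft force. The toy split potential stands in for the mass-preconditioned fermion/gauge force split and is not the lattice QCD implementation.

      import numpy as np

      def force_slow(q):  return -0.1 * q          # "expensive" soft force (toy stand-in)
      def force_fast(q):  return -25.0 * q         # "cheap" stiff force (toy stand-in)

      def nested_leapfrog(q, p, dt, n_steps, n_inner):
          for _ in range(n_steps):
              p += 0.5 * dt * force_slow(q)                    # half kick on the slow scale
              dti = dt / n_inner
              for _ in range(n_inner):                         # fine-grained inner integrator
                  p += 0.5 * dti * force_fast(q)
                  q += dti * p
                  p += 0.5 * dti * force_fast(q)
              p += 0.5 * dt * force_slow(q)                    # half kick on the slow scale
          return q, p

      q, p = np.array([1.0]), np.array([0.0])
      q, p = nested_leapfrog(q, p, dt=0.1, n_steps=50, n_inner=10)
      energy = 0.5 * p**2 + 0.5 * 0.1 * q**2 + 0.5 * 25.0 * q**2
      print(energy)   # should stay close to the initial value of about 12.55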

  3. Time division multiple access for vehicular communications

    CERN Document Server

    Omar, Hassan Aboubakr

    2014-01-01

    This brief focuses on medium access control (MAC) in vehicular ad hoc networks (VANETs), and presents VeMAC, a novel MAC scheme based on distributed time division multiple access (TDMA) for VANETs. The performance of VeMAC is evaluated via mathematical analysis and computer simulations in comparison with other existing MAC protocols, including the IEEE 802.11p standard. This brief aims at proposing TDMA as a suitable MAC scheme for VANETs, which can support the quality-of-service requirements of high priority VANET applications.

  4. Multiple surveys employing a new sample-processing protocol reveal the genetic diversity of placozoans in Japan.

    Science.gov (United States)

    Miyazawa, Hideyuki; Nakano, Hiroaki

    2018-03-01

    Placozoans, flat free-living marine invertebrates, possess an extremely simple bauplan lacking neurons and muscle cells and represent one of the earliest-branching metazoan phyla. They are widely distributed from temperate to tropical oceans. Based on mitochondrial 16S rRNA sequences, 19 haplotypes forming seven distinct clades have been reported in placozoans to date. In Japan, placozoans have been found at nine locations, but 16S genotyping has been performed at only two of these locations. Here, we propose a new processing protocol, "ethanol-treated substrate sampling," for collecting placozoans from natural environments. We also report the collection of placozoans from three new locations, the islands of Shikine-jima, Chichi-jima, and Haha-jima, and we present the distribution of the 16S haplotypes of placozoans in Japan. Multiple surveys conducted at multiple locations yielded five haplotypes that were not reported previously, revealing high genetic diversity in Japan, especially at Shimoda and Shikine-jima Island. The observed geographic distribution patterns were different among haplotypes; some were widely distributed, while others were sampled only from a single location. However, samplings conducted on different dates at the same sites yielded different haplotypes, suggesting that placozoans of a given haplotype do not inhabit the same site constantly throughout the year. Continued sampling efforts conducted during all seasons at multiple locations worldwide and the development of molecular markers within the haplotypes are needed to reveal the geographic distribution pattern and dispersal history of placozoans in greater detail.

  5. Algae viability over time in a ballast water sample

    Science.gov (United States)

    Gollasch, Stephan; David, Matej

    2018-03-01

    The biology of vessels' ballast water needs to be analysed for several reasons, one of these being performance tests of ballast water management systems. This analysis includes a viability assessment of phytoplankton. To overcome the logistical problems of getting algae sample-processing gear on board a vessel to document algae viability, samples may be transported to land-based laboratories. Concerns were raised about how the storage conditions of the sample may impact algae viability over time and what the most appropriate storage conditions are. Here we answer these questions with a long-term algae viability study with daily sample analysis using Pulse-Amplitude Modulated (PAM) fluorometry. The sample was analysed over 79 days. We tested different storage conditions: fridge and room temperature, with and without light. During the first two weeks of the experiment the viability seemed to remain almost unchanged, with a slight downward trend. In the following period, before the sample was split, a slightly stronger downward viability trend was observed, which continued at a similar rate towards the end of the experiment. After the sample was split, the strongest viability reduction was measured for the sample stored without light at room temperature. We concluded that the storage conditions, especially temperature and light exposure, have a stronger impact on algae viability than the storage duration, and that inappropriate storage conditions reduce algal viability. A sample storage time of up to two weeks in a dark and cool environment has little influence on organism viability. This indicates that a two-week period between sample taking on board a vessel and the viability measurement in a land-based laboratory may not be very critical.

  6. Multiple sample, radioactive particle counting apparatus

    International Nuclear Information System (INIS)

    Reddy, R.R.V.; Kelso, D.M.

    1978-01-01

    An apparatus is described for determining the radioactive particle count emitted from each of several radioactive-particle-containing samples. It includes means for modulating the information on the radioactive particles being emitted from the samples, coded detecting means for sequentially detecting different coded combinations of the radioactive particles emitted from more than one but fewer than all of the samples, and means for processing the modulated information to derive the sample count for each sample. It includes a single light-emitting crystal next to a number of samples and an encoder belt sequentially movable between the crystal and the samples. The encoder belt has a coded array of apertures to provide corresponding modulated light pulses from the crystal, and a photomultiplier tube converts the modulated light pulses to decodable electrical signals for deriving the respective sample counts.

  7. Systematic approach to optimize a pretreatment method for ultrasensitive liquid chromatography with tandem mass spectrometry analysis of multiple target compounds in biological samples.

    Science.gov (United States)

    Togashi, Kazutaka; Mutaguchi, Kuninori; Komuro, Setsuko; Kataoka, Makoto; Yamazaki, Hiroshi; Yamashita, Shinji

    2016-08-01

    In current approaches to new drug development, highly sensitive and robust analytical methods for the determination of test compounds in biological samples are essential. These analytical methods should be optimized for every target compound. However, for biological samples that contain multiple compounds, such as new drug candidates obtained from cassette dosing tests, it would be preferable to develop a single method that allows the determination of all compounds at once. This study aims to establish a systematic approach that enables selection of the most appropriate pretreatment method for multiple target compounds without the use of their chemical information. We investigated the retention times of 27 known compounds under different mobile phase conditions and determined the required pretreatment of human plasma samples using several solid-phase and liquid-liquid extractions. From the relationship between retention time and recovery in a principal component analysis, appropriate pretreatments were categorized into several types. Based on these categories, we optimized a pretreatment method for the identification of three calcium channel blockers in human plasma. Plasma concentrations of these drugs in a cassette-dose clinical study at the microdose level were successfully determined, with a lower limit of quantitation of 0.2 pg/mL for diltiazem, 1 pg/mL for nicardipine, and 2 pg/mL for nifedipine. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Multiple category-lot quality assurance sampling: a new classification system with application to schistosomiasis control.

    Science.gov (United States)

    Olives, Casey; Valadez, Joseph J; Brooker, Simon J; Pagano, Marcello

    2012-01-01

    Originally a binary classifier, Lot Quality Assurance Sampling (LQAS) has proven to be a useful tool for classification of the prevalence of Schistosoma mansoni into multiple categories (≤10%, >10 and LQAS (MC-LQAS) have not received full treatment. We explore the analytical properties of MC-LQAS, and validate its use for the classification of S. mansoni prevalence in multiple settings in East Africa. We outline MC-LQAS design principles and formulae for operating characteristic curves. In addition, we derive the average sample number for MC-LQAS when utilizing semi-curtailed sampling and introduce curtailed sampling in this setting. We also assess the performance of MC-LQAS designs with maximum sample sizes of n=15 and n=25 via a weighted kappa-statistic using S. mansoni data collected in 388 schools from four studies in East Africa. Overall performance of MC-LQAS classification was high (kappa-statistic of 0.87). In three of the studies, the kappa-statistic for a design with n=15 was greater than 0.75. In the fourth study, where these designs performed poorly (kappa-statistic less than 0.50), the majority of observations fell in regions where potential error is known to be high. Employment of semi-curtailed and curtailed sampling further reduced the sample size by as much as 0.5 and 3.5 observations per school, respectively, without increasing classification error. This work provides the needed analytics to understand the properties of MC-LQAS for assessing the prevalence of S. mansoni and shows that in most settings a sample size of 15 children provides a reliable classification of schools.
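
    The sketch below illustrates how operating-characteristic curves for a three-category LQAS rule can be computed from binomial probabilities: with n = 15, classify "low" if the number of positives is at most d1, "high" if it exceeds d2, and "moderate" otherwise. The thresholds d1 and d2 used here are hypothetical, not the decision rules derived in the paper.

      import numpy as np
      from scipy.stats import binom

      n, d1, d2 = 15, 1, 7

      def classification_probs(prev):
          p_low  = binom.cdf(d1, n, prev)                       # P(classified low)
          p_high = binom.sf(d2, n, prev)                        # P(classified high)
          return p_low, 1.0 - p_low - p_high, p_high            # (low, moderate, high)

      for prev in (0.05, 0.10, 0.30, 0.50, 0.70):
          low, mod, high = classification_probs(prev)
          print(f"prev={prev:.2f}  P(low)={low:.2f}  P(mod)={mod:.2f}  P(high)={high:.2f}")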

  9. On the sample transport time of a pneumatic transfer system

    International Nuclear Information System (INIS)

    Kondo, Yoshihide

    1983-01-01

    In cyclic activation experiments with a mechanical sample transfer system, the counts accumulated in the measuring system are affected by variations in the transport time of the sample. With the pneumatic transfer system that has been set up, the transport time varies with several factors: the form, size and weight of the sample, the pneumatic pressure, and so on. Understanding the relationships between the transport time and these variable factors is essential for carrying out experiments with this transfer system. (author)

  10. Noise-induced coherence in bistable systems with multiple time delays

    International Nuclear Information System (INIS)

    Jiang Yu; Dong, Shi-Hai; Lozada-Cassou, M.

    2004-01-01

    We study the correlation properties of noise-driven bistable systems with multiple time-delay feedbacks. For small noise perturbations and feedback magnitudes, we derive the autocorrelation function and the power spectrum based on the two-state model with transition rates depending on the earlier states of the system. A comparison between the single and double time delays reveals that the autocorrelation functions exhibit exponential decay with small undulation for the double time delays, in contrast with the remarkable oscillatory behavior at small time lags for the single time delay.

  11. Examining the Interplay of Processes Across Multiple Time-Scales: Illustration With the Intraindividual Study of Affect, Health, and Interpersonal Behavior (iSAHIB).

    Science.gov (United States)

    Ram, Nilam; Conroy, David E; Pincus, Aaron L; Lorek, Amy; Rebar, Amanda; Roche, Michael J; Coccia, Michael; Morack, Jennifer; Feldman, Josh; Gerstorf, Denis

    Human development is characterized by the complex interplay of processes that manifest at multiple levels of analysis and time-scales. We introduce the Intraindividual Study of Affect, Health and Interpersonal Behavior (iSAHIB) as a model for how multiple time-scale study designs facilitate more precise articulation of developmental theory. Combining age heterogeneity, longitudinal panel, daily diary, and experience sampling protocols, the study made use of smartphone and web-based technologies to obtain intensive longitudinal data from 150 persons age 18-89 years as they completed three 21-day measurement bursts ( t = 426 bursts, t = 8,557 days) wherein they provided reports on their social interactions ( t = 64,112) as they went about their daily lives. We illustrate how multiple time-scales of data can be used to articulate bioecological models of development and the interplay among more 'distal' processes that manifest at 'slower' time-scales (e.g., age-related differences and burst-to-burst changes in mental health) and more 'proximal' processes that manifest at 'faster' time-scales (e.g., changes in context that progress in accordance with the weekly calendar and family influence processes).

  12. Margins Associated with Loss of Assured Safety for Systems with Multiple Time-Dependent Failure Modes.

    Energy Technology Data Exchange (ETDEWEB)

    Helton, Jon C. [Arizona State Univ., Tempe, AZ (United States); Brooks, Dusty Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sallaberry, Cedric Jean-Marie. [Engineering Mechanics Corp. of Columbus, OH (United States)

    2018-02-01

    Representations for margins associated with loss of assured safety (LOAS) for weak link (WL)/strong link (SL) systems involving multiple time-dependent failure modes are developed. The following topics are described: (i) defining properties for WLs and SLs, (ii) background on cumulative distribution functions (CDFs) for link failure time, link property value at link failure, and time at which LOAS occurs, (iii) CDFs for failure time margins defined by (time at which SL system fails) – (time at which WL system fails), (iv) CDFs for SL system property values at LOAS, (v) CDFs for WL/SL property value margins defined by (property value at which SL system fails) – (property value at which WL system fails), and (vi) CDFs for SL property value margins defined by (property value of failing SL at time of SL system failure) – (property value of this SL at time of WL system failure). Included in this presentation is a demonstration of a verification strategy based on defining and approximating the indicated margin results with (i) procedures based on formal integral representations and associated quadrature approximations and (ii) procedures based on algorithms for sampling-based approximations.
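    A sampling-based sketch of the failure-time margin CDF in item (iii) is given below, assuming hypothetical Weibull failure-time distributions for two weak links and two strong links and, purely for illustration, that each system fails at its first link failure. It only indicates how the quadrature-based margin results could be cross-checked by sampling.

    ```python
    # Sampling-based sketch (assumed distributions, not the report's data): approximate
    # the CDF of the margin (time at which the SL system fails) - (time at which the
    # WL system fails), with each link failure time drawn independently.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    wl_times = rng.weibull(2.0, size=(n, 2)) * 10.0    # two weak links (hypothetical scale)
    sl_times = rng.weibull(2.0, size=(n, 2)) * 14.0    # two strong links (hypothetical scale)

    # Assume each system fails at its first link failure.
    margin = sl_times.min(axis=1) - wl_times.min(axis=1)

    for m in (-5.0, 0.0, 5.0, 10.0):                   # empirical CDF at a few probe values
        print(f"P(margin <= {m:5.1f}) ~= {(margin <= m).mean():.3f}")
    ```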

  13. New time-saving predictor algorithm for multiple breath washout in adolescents

    DEFF Research Database (Denmark)

    Grønbæk, Jonathan; Hallas, Henrik Wegener; Arianto, Lambang

    2016-01-01

    BACKGROUND: Multiple breath washout (MBW) is an informative but time-consuming test. This study evaluates the uncertainty of a time-saving predictor algorithm in adolescents. METHODS: Adolescents were recruited from the Copenhagen Prospective Study on Asthma in Childhood (COPSAC2000) birth cohort...

  14. Multiple double cross-section transmission electron microscope sample preparation of specific sub-10 nm diameter Si nanowire devices.

    Science.gov (United States)

    Gignac, Lynne M; Mittal, Surbhi; Bangsaruntip, Sarunya; Cohen, Guy M; Sleight, Jeffrey W

    2011-12-01

    The ability to prepare multiple cross-section transmission electron microscope (XTEM) samples from one XTEM sample of specific sub-10 nm features was demonstrated. Sub-10 nm diameter Si nanowire (NW) devices were initially cross-sectioned using a dual-beam focused ion beam system in a direction running parallel to the device channel. From this XTEM sample, both low- and high-resolution transmission electron microscope (TEM) images were obtained from six separate, specific site Si NW devices. The XTEM sample was then re-sectioned in four separate locations in a direction perpendicular to the device channel: 90° from the original XTEM sample direction. Three of the four XTEM samples were successfully sectioned in the gate region of the device. From these three samples, low- and high-resolution TEM images of the Si NW were taken and measurements of the NW diameters were obtained. This technique demonstrated the ability to obtain high-resolution TEM images in directions 90° from one another of multiple, specific sub-10 nm features that were spaced 1.1 μm apart.

  15. Adaptive control of theophylline therapy: importance of blood sampling times.

    Science.gov (United States)

    D'Argenio, D Z; Khakmahd, K

    1983-10-01

    A two-observation protocol for estimating theophylline clearance during a constant-rate intravenous infusion is used to examine the importance of blood sampling schedules with regard to the information content of resulting concentration data. Guided by a theory for calculating maximally informative sample times, population simulations are used to assess the effect of specific sampling times on the precision of resulting clearance estimates and subsequent predictions of theophylline plasma concentrations. The simulations incorporated noise terms for intersubject variability, dosing errors, sample collection errors, and assay error. Clearance was estimated using Chiou's method, least squares, and a Bayesian estimation procedure. The results of these simulations suggest that clinically significant estimation and prediction errors may result when using the above two-point protocol for estimating theophylline clearance if the time separating the two blood samples is less than one population mean elimination half-life.
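    A hedged illustration of the estimation problem is sketched below: clearance is fitted by least squares to two simulated concentrations from a one-compartment, constant-rate infusion model, with the volume of distribution fixed at an assumed population value. All parameter values are hypothetical; moving the two sampling times closer together relative to the elimination half-life is one way to explore the sensitivity the abstract describes.

    ```python
    # Hedged sketch (not the paper's simulation): least-squares estimation of clearance
    # from two plasma concentrations during a constant-rate infusion, assuming
    # C(t) = (R/CL) * (1 - exp(-(CL/V) * t)) with V fixed at a population value.
    import numpy as np
    from scipy.optimize import curve_fit

    R = 40.0      # infusion rate, mg/h (hypothetical)
    V = 35.0      # volume of distribution, L (assumed population value)

    def conc(t, CL):
        return (R / CL) * (1.0 - np.exp(-(CL / V) * t))

    true_CL = 3.0                                   # L/h, value used to simulate the "patient"
    t_obs = np.array([2.0, 10.0])                   # sampling times, h; well separated
    rng = np.random.default_rng(2)
    c_obs = conc(t_obs, true_CL) * (1 + 0.05 * rng.standard_normal(2))   # 5% assay noise

    CL_hat, _ = curve_fit(conc, t_obs, c_obs, p0=[2.0])
    print(f"estimated clearance: {CL_hat[0]:.2f} L/h (true {true_CL} L/h)")
    ```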

  16. Analysis of oil-pipeline distribution of multiple products subject to delivery time-windows

    Science.gov (United States)

    Jittamai, Phongchai

    This dissertation defines the operational problems of, and develops solution methodologies for, a distribution of multiple products into oil pipeline subject to delivery time-windows constraints. A multiple-product oil pipeline is a pipeline system composing of pipes, pumps, valves and storage facilities used to transport different types of liquids. Typically, products delivered by pipelines are petroleum of different grades moving either from production facilities to refineries or from refineries to distributors. Time-windows, which are generally used in logistics and scheduling areas, are incorporated in this study. The distribution of multiple products into oil pipeline subject to delivery time-windows is modeled as multicommodity network flow structure and mathematically formulated. The main focus of this dissertation is the investigation of operating issues and problem complexity of single-source pipeline problems and also providing solution methodology to compute input schedule that yields minimum total time violation from due delivery time-windows. The problem is proved to be NP-complete. The heuristic approach, a reversed-flow algorithm, is developed based on pipeline flow reversibility to compute input schedule for the pipeline problem. This algorithm is implemented in no longer than O(T·E) time. This dissertation also extends the study to examine some operating attributes and problem complexity of multiple-source pipelines. The multiple-source pipeline problem is also NP-complete. A heuristic algorithm modified from the one used in single-source pipeline problems is introduced. This algorithm can also be implemented in no longer than O(T·E) time. Computational results are presented for both methodologies on randomly generated problem sets. The computational experience indicates that reversed-flow algorithms provide good solutions in comparison with the optimal solutions. Only 25% of the problems tested were more than 30% greater than optimal values and

  17. Simultaneous real-time monitoring of multiple cortical systems.

    Science.gov (United States)

    Gupta, Disha; Jeremy Hill, N; Brunner, Peter; Gunduz, Aysegul; Ritaccio, Anthony L; Schalk, Gerwin

    2014-10-01

    Real-time monitoring of the brain is potentially valuable for performance monitoring, communication, training or rehabilitation. In natural situations, the brain performs a complex mix of various sensory, motor or cognitive functions. Thus, real-time brain monitoring would be most valuable if (a) it could decode information from multiple brain systems simultaneously, and (b) this decoding of each brain system were robust to variations in the activity of other (unrelated) brain systems. Previous studies showed that it is possible to decode some information from different brain systems in retrospect and/or in isolation. In our study, we set out to determine whether it is possible to simultaneously decode important information about a user from different brain systems in real time, and to evaluate the impact of concurrent activity in different brain systems on decoding performance. We study these questions using electrocorticographic signals recorded in humans. We first document procedures for generating stable decoding models given little training data, and then report their use for offline and for real-time decoding from 12 subjects (six for offline parameter optimization, six for online experimentation). The subjects engage in tasks that involve movement intention, movement execution and auditory functions, separately, and then simultaneously. Main Results: Our real-time results demonstrate that our system can identify intention and movement periods in single trials with an accuracy of 80.4% and 86.8%, respectively (where 50% would be expected by chance). Simultaneously, the decoding of the power envelope of an auditory stimulus resulted in an average correlation coefficient of 0.37 between the actual and decoded power envelopes. These decoders were trained separately and executed simultaneously in real time. This study yielded the first demonstration that it is possible to decode simultaneously the functional activity of multiple independent brain systems. Our

  18. Elevated body temperature is linked to fatigue in an Italian sample of relapsing-remitting multiple sclerosis patients.

    Science.gov (United States)

    Leavitt, V M; De Meo, E; Riccitelli, G; Rocca, M A; Comi, G; Filippi, M; Sumowski, J F

    2015-11-01

    Elevated body temperature was recently reported for the first time in patients with relapsing-remitting multiple sclerosis (RRMS) relative to healthy controls. In addition, warmer body temperature was associated with worse fatigue. These findings are highly novel, may indicate a novel pathophysiology for MS fatigue, and therefore warrant replication in a geographically separate sample. Here, we investigated body temperature and its association to fatigue in an Italian sample of 44 RRMS patients and 44 age- and sex-matched healthy controls. Consistent with our original report, we found elevated body temperature in the RRMS sample compared to healthy controls. Warmer body temperature was associated with worse fatigue, thereby supporting the notion of endogenous temperature elevations in patients with RRMS as a novel pathophysiological factor underlying fatigue. Our findings highlight a paradigm shift in our understanding of the effect of heat in RRMS, from exogenous (i.e., Uhthoff's phenomenon) to endogenous. Although randomized controlled trials of cooling treatments (i.e., aspirin, cooling garments) to reduce fatigue in RRMS have been successful, consideration of endogenously elevated body temperature as the underlying target will enhance our development of novel treatments.

  19. Using forbidden ordinal patterns to detect determinism in irregularly sampled time series.

    Science.gov (United States)

    Kulp, C W; Chobot, J M; Niskala, B J; Needhammer, C J

    2016-02-01

    It is known that when symbolizing a time series into ordinal patterns using the Bandt-Pompe (BP) methodology, there will be ordinal patterns called forbidden patterns that do not occur in a deterministic series. The existence of forbidden patterns can be used to identify deterministic dynamics. In this paper, the ability to use forbidden patterns to detect determinism in irregularly sampled time series is tested on data generated from a continuous model system. The study is done in three parts. First, the effects of sampling time on the number of forbidden patterns are studied on regularly sampled time series. The next two parts focus on two types of irregular-sampling, missing data and timing jitter. It is shown that forbidden patterns can be used to detect determinism in irregularly sampled time series for low degrees of sampling irregularity (as defined in the paper). In addition, comments are made about the appropriateness of using the BP methodology to symbolize irregularly sampled time series.
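    The Bandt-Pompe symbolization underlying this approach is straightforward to sketch: each window of D consecutive samples is replaced by the permutation that sorts it, and permutations that never occur are counted as forbidden. The example below, with an illustrative order D = 4, contrasts a deterministic logistic-map series with white noise; it is not the authors' implementation and ignores the irregular-sampling modifications studied in the paper.

    ```python
    # Count forbidden ordinal patterns (Bandt-Pompe symbolization of order D).
    from itertools import permutations
    import numpy as np

    def ordinal_patterns(x, D=4):
        """Ordinal patterns of order D found in series x."""
        return [tuple(int(k) for k in np.argsort(x[i:i + D])) for i in range(len(x) - D + 1)]

    def count_forbidden(x, D=4):
        observed = set(ordinal_patterns(x, D))
        return sum(1 for p in permutations(range(D)) if p not in observed)

    rng = np.random.default_rng(3)
    logistic = np.empty(5000); logistic[0] = 0.4
    for i in range(1, len(logistic)):
        logistic[i] = 4.0 * logistic[i - 1] * (1.0 - logistic[i - 1])   # deterministic series
    noise = rng.random(5000)                                            # stochastic series

    print("forbidden patterns, logistic map:", count_forbidden(logistic))   # > 0 expected
    print("forbidden patterns, white noise:", count_forbidden(noise))       # ~ 0 expected for a long series
    ```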

  20. A New Attribute Control Chart using Multiple Dependent State Repetitive Sampling

    KAUST Repository

    Aldosari, Mansour Sattam; Aslam, Muhammad; Jun, Chi-Hyuck

    2017-01-01

    In this manuscript, a new attribute control chart using multiple dependent state repetitive sampling is designed. The operational procedure and structure of the proposed control chart is given. The required measures to determine the average run length (ARL) for in-control and out-of-control processes are given. Tables of ARLs are reported for various control chart parameters. The proposed control chart is more sensitive in detecting a small shift in the process as compared to the existing attribute control charts. The simulation study shows the efficiency of the proposed chart over the existing charts. An example is given for the illustration purpose.

  1. A New Attribute Control Chart using Multiple Dependent State Repetitive Sampling

    KAUST Repository

    Aldosari, Mansour Sattam

    2017-03-25

    In this manuscript, a new attribute control chart using multiple dependent state repetitive sampling is designed. The operational procedure and structure of the proposed control chart is given. The required measures to determine the average run length (ARL) for in-control and out-of-control processes are given. Tables of ARLs are reported for various control chart parameters. The proposed control chart is more sensitive in detecting a small shift in the process as compared to the existing attribute control charts. The simulation study shows the efficiency of the proposed chart over the existing charts. An example is given for the illustration purpose.
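    For orientation only, the sketch below shows how ARL values of an attribute chart are typically estimated by Monte Carlo simulation. It uses a plain np chart with hypothetical parameters rather than the multiple dependent state repetitive sampling chart proposed above, so the printed values are illustrative, not the ARLs tabulated in the paper.

    ```python
    # Monte Carlo ARL of a plain np attribute chart (illustration only).
    import numpy as np

    rng = np.random.default_rng(4)

    def arl_np_chart(p, n=50, p0=0.10, L=3.0, n_runs=2000):
        """Average number of samples until the chart signals, for true fraction p."""
        centre = n * p0
        sigma = np.sqrt(n * p0 * (1 - p0))
        ucl, lcl = centre + L * sigma, max(centre - L * sigma, 0.0)
        run_lengths = []
        for _ in range(n_runs):
            t = 0
            while True:
                t += 1
                d = rng.binomial(n, p)
                if d > ucl or d < lcl:
                    run_lengths.append(t)
                    break
        return np.mean(run_lengths)

    print("in-control ARL (p = 0.10):", round(arl_np_chart(0.10), 1))
    print("shifted ARL   (p = 0.15):", round(arl_np_chart(0.15), 1))
    ```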

  2. Time-localized wavelet multiple regression and correlation

    Science.gov (United States)

    Fernández-Macho, Javier

    2018-02-01

    This paper extends wavelet methodology to handle comovement dynamics of multivariate time series via moving weighted regression on wavelet coefficients. The concept of wavelet local multiple correlation is used to produce one single set of multiscale correlations along time, in contrast with the large number of wavelet correlation maps that need to be compared when using standard pairwise wavelet correlations with rolling windows. Also, the spectral properties of weight functions are investigated and it is argued that some common time windows, such as the usual rectangular rolling window, are not satisfactory on these grounds. The method is illustrated with a multiscale analysis of the comovements of Eurozone stock markets during this century. It is shown how the evolution of the correlation structure in these markets has been far from homogeneous both along time and across timescales featuring an acute divide across timescales at about the quarterly scale. At longer scales, evidence from the long-term correlation structure can be interpreted as stable perfect integration among Euro stock markets. On the other hand, at intramonth and intraweek scales, the short-term correlation structure has been clearly evolving along time, experiencing a sharp increase during financial crises which may be interpreted as evidence of financial 'contagion'.

  3. Method for Hot Real-Time Sampling of Gasification Products

    Energy Technology Data Exchange (ETDEWEB)

    Pomeroy, Marc D [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-29

    The Thermochemical Process Development Unit (TCPDU) at the National Renewable Energy Laboratory (NREL) is a highly instrumented half-ton/day pilot scale plant capable of demonstrating industrially relevant thermochemical technologies from lignocellulosic biomass conversion, including gasification. Gasification creates primarily Syngas (a mixture of Hydrogen and Carbon Monoxide) that can be utilized with synthesis catalysts to form transportation fuels and other valuable chemicals. Biomass-derived gasification products are a very complex mixture of chemical components that typically contain Sulfur and Nitrogen species that can act as catalysis poisons for tar reforming and synthesis catalysts. Real-time hot online sampling techniques, such as Molecular Beam Mass Spectrometry (MBMS), and Gas Chromatographs with Sulfur and Nitrogen specific detectors can provide real-time analysis, giving operational indicators of performance. Sampling typically requires coated sampling lines to minimize trace sulfur interactions with steel surfaces. Other materials used inline have also shown conversion of sulfur species into new components and must be minimized. Residence time within the sampling lines must also be kept to a minimum to reduce further reaction chemistries. Solids from ash and char contribute to plugging and must be filtered at temperature. Experience at NREL has shown several key factors to consider when designing and installing an analytical sampling system for biomass gasification products. They include minimizing sampling distance, effective filtering as close to the source as possible, proper line sizing, proper line materials or coatings, even heating of all components, minimizing pressure drops, and additional filtering or traps after pressure drops.

  4. THE USE OF MULTIPLE DISPLACEMENT AMPLIFICATION TO INCREASE THE DETECTION AND GENOTYPING OF TRYPANOSOMA SPECIES SAMPLES IMMOBILISED ON FTA FILTERS

    Science.gov (United States)

    MORRISON, LIAM J.; McCORMACK, GILLIAN; SWEENEY, LINDSAY; LIKEUFACK, ANNE C. L.; TRUC, PHILIPPE; TURNER, C. MICHAEL; TAIT, ANDY; MacLEOD, ANNETTE

    2007-01-01

    Whole genome amplification methods are a recently developed tool for amplifying DNA from limited template. We report its application in trypanosome infections, characterised by low parasitaemias. Multiple Displacement Amplification (MDA) amplifies DNA with a simple in vitro step, and was evaluated on mouse blood samples on FTA filter cards with known numbers of Trypanosoma brucei parasites. The data showed a twenty-fold increase in the number of PCRs possible per sample, using primers diagnostic for the multi-copy ribosomal ITS region or 177 bp repeats, and a twenty-fold increase in sensitivity over nested PCR against a single copy microsatellite. Using MDA for microsatellite genotyping caused allele dropout at low DNA concentrations, which was overcome by pooling multiple MDA reactions. The validity of using MDA was established with samples from Human African Trypanosomiasis patients. The use of MDA allows maximal use of finite DNA samples and may prove a valuable tool in studies where multiple reactions are necessary, such as population genetic analyses. PMID:17556624

  5. A latent class multiple constraint multiple discrete-continuous extreme value model of time use and goods consumption.

    Science.gov (United States)

    2016-06-01

    This paper develops a microeconomic theory-based multiple discrete continuous choice model that considers: (a) that both goods consumption and time allocations (to work and non-work activities) enter separately as decision variables in the utility fu...

  6. Real-time colour hologram generation based on ray-sampling plane with multi-GPU acceleration.

    Science.gov (United States)

    Sato, Hirochika; Kakue, Takashi; Ichihashi, Yasuyuki; Endo, Yutaka; Wakunami, Koki; Oi, Ryutaro; Yamamoto, Kenji; Nakayama, Hirotaka; Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2018-01-24

    Although electro-holography can reconstruct three-dimensional (3D) motion pictures, its computational cost is too heavy to allow for real-time reconstruction of 3D motion pictures. This study explores accelerating colour hologram generation using light-ray information on a ray-sampling (RS) plane with a graphics processing unit (GPU) to realise a real-time holographic display system. We refer to an image corresponding to light-ray information as an RS image. Colour holograms were generated from three RS images with resolutions of 2,048 × 2,048; 3,072 × 3,072 and 4,096 × 4,096 pixels. The computational results indicate that the generation of the colour holograms using multiple GPUs (NVIDIA Geforce GTX 1080) was approximately 300-500 times faster than those generated using a central processing unit. In addition, the results demonstrate that 3D motion pictures were successfully reconstructed from RS images of 3,072 × 3,072 pixels at approximately 15 frames per second using an electro-holographic reconstruction system in which colour holograms were generated from RS images in real time.

  7. Addressing small sample size bias in multiple-biomarker trials: Inclusion of biomarker-negative patients and Firth correction.

    Science.gov (United States)

    Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette

    2018-03-01

    In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments is multiple-biomarker trials, which aim to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine if the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. In addition to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further but small improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. MZDASoft: a software architecture that enables large-scale comparison of protein expression levels over multiple samples based on liquid chromatography/tandem mass spectrometry.

    Science.gov (United States)

    Ghanat Bari, Mehrab; Ramirez, Nelson; Wang, Zhiwei; Zhang, Jianqiu Michelle

    2015-10-15

    Without accurate peak linking/alignment, only the expression levels of a small percentage of proteins can be compared across multiple samples in Liquid Chromatography/Mass Spectrometry/Tandem Mass Spectrometry (LC/MS/MS) due to the selective nature of tandem MS peptide identification. This greatly hampers biomedical research that aims at finding biomarkers for disease diagnosis, treatment, and the understanding of disease mechanisms. A recent algorithm, PeakLink, has allowed the accurate linking of LC/MS peaks without tandem MS identifications to their corresponding ones with identifications across multiple samples collected from different instruments, tissues and labs, which greatly enhanced the ability of comparing proteins. However, PeakLink cannot be implemented practically for large numbers of samples based on existing software architectures, because it requires access to peak elution profiles from multiple LC/MS/MS samples simultaneously. We propose a new architecture based on parallel processing, which extracts LC/MS peak features, and saves them in database files to enable the implementation of PeakLink for multiple samples. The software has been deployed in High-Performance Computing (HPC) environments. The core part of the software, MZDASoft Parallel Peak Extractor (PPE), can be downloaded with a user and developer's guide, and it can be run on HPC centers directly. The quantification applications, MZDASoft TandemQuant and MZDASoft PeakLink, are written in Matlab, which are compiled with a Matlab runtime compiler. A sample script that incorporates all necessary processing steps of MZDASoft for LC/MS/MS quantification in a parallel processing environment is available. The project webpage is http://compgenomics.utsa.edu/zgroup/MZDASoft. The proposed architecture enables the implementation of PeakLink for multiple samples. Significantly more (100%-500%) proteins can be compared over multiple samples with better quantification accuracy in test cases. MZDASoft

  9. Analysis of neutron multiplicity measurements with allowance for dead-time losses between time-correlated detections

    International Nuclear Information System (INIS)

    Vincent, C.H.

    1992-01-01

    An exact solution is found for dead-time losses between detections occurring within a gate interval, with constant dead time and with allowance for time correlation between detections from the same spontaneous initial event. This is used to obtain a close approximation to the losses with a multi-channel detection system, with allowance for dead times bridging the gate opening. This is applied, inversely, to calculate the true detection multiplicity rates from the distribution of the recorded counts within that interval. A suggestion is made for a circuit change to give a major reduction in dead-time effects. The unavoidable statistical errors that would remain are calculated. Their minimization and the limits of such minimization are discussed. (orig.)

  10. Transformation-cost time-series method for analyzing irregularly sampled data.

    Science.gov (United States)

    Ozken, Ibrahim; Eroglu, Deniz; Stemler, Thomas; Marwan, Norbert; Bagci, G Baris; Kurths, Jürgen

    2015-06-01

    Irregular sampling of data sets is one of the challenges often encountered in time-series analysis, since traditional methods cannot be applied and the frequently used interpolation approach can corrupt the data and bias the subsequence analysis. Here we present the TrAnsformation-Cost Time-Series (TACTS) method, which allows us to analyze irregularly sampled data sets without degenerating the quality of the data set. Instead of using interpolation we consider time-series segments and determine how close they are to each other by determining the cost needed to transform one segment into the following one. Using a limited set of operations-with associated costs-to transform the time series segments, we determine a new time series, that is our transformation-cost time series. This cost time series is regularly sampled and can be analyzed using standard methods. While our main interest is the analysis of paleoclimate data, we develop our method using numerical examples like the logistic map and the Rössler oscillator. The numerical data allows us to test the stability of our method against noise and for different irregular samplings. In addition we provide guidance on how to choose the associated costs based on the time series at hand. The usefulness of the TACTS method is demonstrated using speleothem data from the Secret Cave in Borneo that is a good proxy for paleoclimatic variability in the monsoon activity around the maritime continent.

  11. Transformation-cost time-series method for analyzing irregularly sampled data

    Science.gov (United States)

    Ozken, Ibrahim; Eroglu, Deniz; Stemler, Thomas; Marwan, Norbert; Bagci, G. Baris; Kurths, Jürgen

    2015-06-01

    Irregular sampling of data sets is one of the challenges often encountered in time-series analysis, since traditional methods cannot be applied and the frequently used interpolation approach can corrupt the data and bias the subsequence analysis. Here we present the TrAnsformation-Cost Time-Series (TACTS) method, which allows us to analyze irregularly sampled data sets without degenerating the quality of the data set. Instead of using interpolation we consider time-series segments and determine how close they are to each other by determining the cost needed to transform one segment into the following one. Using a limited set of operations—with associated costs—to transform the time series segments, we determine a new time series, that is our transformation-cost time series. This cost time series is regularly sampled and can be analyzed using standard methods. While our main interest is the analysis of paleoclimate data, we develop our method using numerical examples like the logistic map and the Rössler oscillator. The numerical data allows us to test the stability of our method against noise and for different irregular samplings. In addition we provide guidance on how to choose the associated costs based on the time series at hand. The usefulness of the TACTS method is demonstrated using speleothem data from the Secret Cave in Borneo that is a good proxy for paleoclimatic variability in the monsoon activity around the maritime continent.
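    A simplified sketch of the transformation-cost idea is given below: two irregularly sampled segments are compared by a dynamic program in which points may be matched (paying for the time shift and the amplitude change) or deleted/created (paying a fixed cost), analogous to an edit distance. The cost weights and the data are illustrative rather than the values recommended in the paper; applying such a cost to consecutive segments of a record yields the regularly sampled cost series that is then analyzed with standard methods.

    ```python
    # Simplified transformation cost between two irregularly sampled segments.
    import numpy as np

    def transformation_cost(seg_a, seg_b, lam_t=1.0, lam_x=1.0, lam_0=1.0):
        """seg_a, seg_b: lists of (time, value) pairs sorted by time."""
        na, nb = len(seg_a), len(seg_b)
        D = np.zeros((na + 1, nb + 1))
        D[:, 0] = lam_0 * np.arange(na + 1)          # delete all remaining points of seg_a
        D[0, :] = lam_0 * np.arange(nb + 1)          # create all points of seg_b
        for i in range(1, na + 1):
            ta, xa = seg_a[i - 1]
            for j in range(1, nb + 1):
                tb, xb = seg_b[j - 1]
                match = D[i - 1, j - 1] + lam_t * abs(ta - tb) + lam_x * abs(xa - xb)
                delete = D[i - 1, j] + lam_0
                create = D[i, j - 1] + lam_0
                D[i, j] = min(match, delete, create)
        return D[na, nb]

    a = [(0.0, 1.2), (0.7, 0.9), (1.9, 1.4)]                 # hypothetical segments
    b = [(0.1, 1.1), (1.0, 1.0), (1.6, 1.3), (2.4, 0.8)]
    print("transformation cost:", round(transformation_cost(a, b), 3))
    ```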

  12. A Multiplexed Assay That Monitors Effects of Multiple Compound Treatment Times Reveals Candidate Immune-Enhancing Compounds.

    Science.gov (United States)

    Zhao, Ziyan; Henowitz, Liza; Zweifach, Adam

    2018-05-01

    We previously developed a flow cytometry assay that monitored lytic granule exocytosis in cytotoxic T lymphocytes stimulated by contacting beads coated with activating anti-CD3 antibodies. That assay was multiplexed in that responses of cells that did or did not receive the activating stimulus were distinguished via changes in light scatter accompanying binding of cells to beads, allowing us to discriminate compounds that activate responses on their own from compounds that enhance responses in cells that received the activating stimulus, all within a single sample. Here we add a second dimension of multiplexing by developing means to assess in a single sample the effects of treating cells with test compounds for different times. Bar-coding cells before adding them to test wells lets us determine compound treatment time while also monitoring activation status and response amplitude at the point of interrogation. This multiplexed assay is suitable for screening 96-well plates. We used it to screen compounds from the National Cancer Institute, identifying several compounds that enhance anti-LAMP1 responses. Multiple-treatment-time (MTT) screening enabled by bar-coding and read via high-throughput flow cytometry may be a generally useful method for facilitating the discovery of compounds of interest.

  13. Multiple histogram method and static Monte Carlo sampling

    NARCIS (Netherlands)

    Inda, M.A.; Frenkel, D.

    2004-01-01

    We describe an approach to use multiple-histogram methods in combination with static, biased Monte Carlo simulations. To illustrate this, we computed the force-extension curve of an athermal polymer from multiple histograms constructed in a series of static Rosenbluth Monte Carlo simulations. From

  14. Sampling and Timing: A Task for the Environmental Process

    NARCIS (Netherlands)

    Hilderink, G.H.; Broenink, Johannes F.

    2003-01-01

    Sampling and timing is considered a responsibility of the environment of controller software. In this paper we will illustrate a concept whereby an environmental process and multi-way events play an important role in applying timing for untimed CSP software architectures. We use this timing concept

  15. Symmetry relationships for multiple scattering of polarized light in turbid spherical samples: theory and a Monte Carlo simulation.

    Science.gov (United States)

    Otsuki, Soichi

    2016-02-01

    This paper presents a theory describing totally incoherent multiple scattering by turbid spherical samples. It is proved that if reciprocity and mirror symmetry hold for single scattering by a particle, they also hold for multiple scattering in spherical samples. Monte Carlo simulations generate a reduced effective scattering Mueller matrix, which virtually satisfies reciprocity and mirror symmetry. The scattering matrix was factorized by using the symmetric decomposition in a predefined form, as well as the Lu-Chipman polar decomposition, approximately into a product of a pure depolarizer and vertically oriented linear retarding diattenuators. The parameters of these components were calculated as a function of the polar angle. While the turbid spherical sample behaves as a pure depolarizer at low polar angles, it increasingly takes on the character of a retarding diattenuator as the polar angle increases.

  16. Detection of Strongylus vulgaris in equine faecal samples by real-time PCR and larval culture - method comparison and occurrence assessment.

    Science.gov (United States)

    Kaspar, A; Pfister, K; Nielsen, M K; Silaghi, C; Fink, H; Scheuerle, M C

    2017-01-11

    Strongylus vulgaris has become a rare parasite in Germany during the past 50 years due to the practice of frequent prophylactic anthelmintic therapy. To date, the emerging development of resistance in Cyathostominae and Parascaris spp. to numerous equine anthelmintics has changed deworming management and the frequency of anthelmintic usage. In this regard, reliable detection of parasitic infections, especially of the highly pathogenic S. vulgaris is essential. In the current study, two diagnostic methods for the detection of infections with S. vulgaris were compared and information on the occurrence of this parasite in German horses was gained. For this purpose, faecal samples of 501 horses were screened for S. vulgaris with real-time PCR and an additional larval culture was performed in samples of 278 horses. A subset of 26 horses underwent multiple follow-up examinations with both methods in order to evaluate both the persistence of S. vulgaris infections and the reproducibility of each diagnostic method. The real-time PCR revealed S. vulgaris-DNA in ten of 501 investigated equine samples (1.9%). The larval culture demonstrated larvae of S. vulgaris in three of the 278 samples (1.1%). A direct comparison of the two methods was possible in 321 samples including 43 follow-up examinations with the result of 11 S. vulgaris-positive samples by real-time PCR and 4 S. vulgaris-positive samples by larval culture. The McNemar's test (p-value = 0.016) revealed a significant difference and the kappa values (0.525) showed a moderate agreement between real-time PCR and larval culture. The real-time PCR detected a significantly higher proportion of positives of S. vulgaris compared to larval culture and should thus be considered as a routine diagnostic method for the detection of S. vulgaris in equine samples.
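    The paired-method statistics reported above (McNemar's test and Cohen's kappa) can be sketched from a 2x2 cross-classification. The table below is a reconstruction chosen to be consistent with the reported marginals (11 PCR-positive and 4 culture-positive of 321 paired samples) and with the reported p-value and kappa; it is not data taken directly from the paper.

    ```python
    # McNemar's exact test and Cohen's kappa from a paired 2x2 table.
    import numpy as np
    from statsmodels.stats.contingency_tables import mcnemar

    #                 culture +   culture -
    table = np.array([[4,           7],       # PCR +
                      [0,         310]])      # PCR -

    print("McNemar exact p-value:", mcnemar(table, exact=True).pvalue)   # ~0.016

    n = table.sum()
    po = np.trace(table) / n                                  # observed agreement
    pe = (table.sum(1) @ table.sum(0)) / n**2                 # chance agreement
    print("kappa:", round((po - pe) / (1 - pe), 3))           # ~0.525
    ```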

  17. The Effect of Cell Phone Conversation on Drivers’ Reaction Time to Audio Stimulus: Investigating the Theory of Multiple Resources and Central Resource of Attention

    Directory of Open Access Journals (Sweden)

    Seyed Kazem Mousavi-Sadati

    2011-01-01

    Full Text Available Objective: This research investigated the theory of multiple resources and of a central resource of attention through secondary-task performance when talking on two types of cell phone while driving. Materials & Methods: Using convenience sampling, 25 male participants were selected, and their reaction times to an auditory stimulus were recorded in three driving conditions (no phone conversation, conversation with a handheld phone, and conversation with a hands-free phone). The order of driving conditions was varied from participant to participant to control for test sequence and familiarity with the test conditions. Results: Analysis with descriptive statistics, Mauchly's test of sphericity, one-factor repeated-measures ANOVA and paired-samples t-tests showed that the driving condition affected reaction time (P<0.001). Conversation on a hands-free phone increased drivers' simple reaction time to the auditory stimulus (P<0.001), and using a handheld phone did not increase drivers' reaction time beyond that observed with the hands-free phone (P<0.001). Conclusion: The results support the four-dimensional multiple resources model of attention as a predictor of dual- and multiple-task performance and suggest that traffic laws concerning handheld phones should also be extended to hands-free phones.

  18. Continuous sampling from distributed streams

    DEFF Research Database (Denmark)

    Graham, Cormode; Muthukrishnan, S.; Yi, Ke

    2012-01-01

    A fundamental problem in data management is to draw and maintain a sample of a large data set, for approximate query answering, selectivity estimation, and query planning. With large, streaming data sets, this problem becomes particularly difficult when the data is shared across multiple distributed sites. The main challenge is to ensure that a sample is drawn uniformly across the union of the data while minimizing the communication needed to run the protocol on the evolving data. At the same time, it is also necessary to make the protocol lightweight, by keeping the space and time costs low for each participant. In this article, we present communication-efficient protocols for continuously maintaining a sample (both with and without replacement) from k distributed streams. These apply to the case when we want a sample from the full streams, and to the sliding window cases of only the W most...
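    The classical single-stream building block for such protocols is reservoir sampling, sketched below (Vitter's Algorithm R, which maintains a uniform sample of fixed size without replacement). The communication-efficient coordination of such samples across k distributed sites, which is the article's contribution, is not reproduced here.

    ```python
    # Reservoir sampling (Algorithm R): uniform sample of size k over a stream.
    import random

    def reservoir_sample(stream, k, seed=0):
        rng = random.Random(seed)
        sample = []
        for i, item in enumerate(stream):
            if i < k:
                sample.append(item)
            else:
                j = rng.randint(0, i)      # keep the new item with probability k/(i+1)
                if j < k:
                    sample[j] = item
        return sample

    print(reservoir_sample(range(1_000_000), k=10))
    ```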

  19. Target Tracking of a Linear Time Invariant System under Irregular Sampling

    Directory of Open Access Journals (Sweden)

    Jin Xue-Bo

    2012-11-01

    Full Text Available Due to event-triggered sampling in a system, or maybe with the aim of reducing data storage, tracking many applications will encounter irregular sampling time. By calculating the matrix exponential using an inverse Laplace transform, this paper transforms the irregular sampling tracking problem to the problem of tracking with time-varying parameters of a system. Using the common Kalman filter, the developed method is used to track a target for the simulated trajectory and video tracking. The results of simulation experiments have shown that it can obtain good estimation performance even at a very high irregular rate of measurement sampling time.

  20. Monitoring and identification of spatiotemporal landscape changes in multiple remote sensing images by using a stratified conditional Latin hypercube sampling approach and geostatistical simulation.

    Science.gov (United States)

    Lin, Yu-Pin; Chu, Hone-Jay; Huang, Yu-Long; Tang, Chia-Hsi; Rouhani, Shahrokh

    2011-06-01

    This study develops a stratified conditional Latin hypercube sampling (scLHS) approach for multiple, remotely sensed, normalized difference vegetation index (NDVI) images. The objective is to sample, monitor, and delineate spatiotemporal landscape changes, including spatial heterogeneity and variability, in a given area. The scLHS approach, which is based on the variance quadtree technique (VQT) and the conditional Latin hypercube sampling (cLHS) method, selects samples in order to delineate landscape changes from multiple NDVI images. The images are then mapped for calibration and validation by using sequential Gaussian simulation (SGS) with the scLHS selected samples. Spatial statistical results indicate that in terms of their statistical distribution, spatial distribution, and spatial variation, the statistics and variograms of the scLHS samples resemble those of multiple NDVI images more closely than those of cLHS and VQT samples. Moreover, the accuracy of simulated NDVI images based on SGS with scLHS samples is significantly better than that of simulated NDVI images based on SGS with cLHS samples and VQT samples, respectively. However, the proposed approach efficiently monitors the spatial characteristics of landscape changes, including the statistics, spatial variability, and heterogeneity of NDVI images. In addition, SGS with the scLHS samples effectively reproduces spatial patterns and landscape changes in multiple NDVI images.

  1. Human mixed lymphocyte cultures. Evaluation of microculture technique utilizing the multiple automated sample harvester (MASH)

    Science.gov (United States)

    Thurman, G. B.; Strong, D. M.; Ahmed, A.; Green, S. S.; Sell, K. W.; Hartzman, R. J.; Bach, F. H.

    1973-01-01

    Use of lymphocyte cultures for in vitro studies such as pretransplant histocompatibility testing has established the need for standardization of this technique. A microculture technique has been developed that has facilitated the culturing of lymphocytes and increased the quantity of cultures feasible, while lowering the variation between replicate samples. Cultures were prepared for determination of tritiated thymidine incorporation using a Multiple Automated Sample Harvester (MASH). Using this system, the parameters that influence the in vitro responsiveness of human lymphocytes to allogeneic lymphocytes have been investigated. PMID:4271568

  2. On the average complexity of sphere decoding in lattice space-time coded multiple-input multiple-output channel

    KAUST Repository

    Abediseid, Walid

    2012-12-21

    The exact average complexity analysis of the basic sphere decoder for general space-time codes applied to the multiple-input multiple-output (MIMO) wireless channel is known to be difficult. In this work, we shed light on the computational complexity of sphere decoding for the quasi-static, lattice space-time (LAST) coded MIMO channel. Specifically, we derive an upper bound on the tail distribution of the decoder's computational complexity. We show that when the computational complexity exceeds a certain limit, this upper bound becomes dominated by the outage probability achieved by LAST coding and sphere decoding schemes. We then calculate the minimum average computational complexity that is required by the decoder to achieve near optimal performance in terms of the system parameters. Our results indicate that there exists a cut-off rate (multiplexing gain) for which the average complexity remains bounded. Copyright © 2012 John Wiley & Sons, Ltd.

  3. Psychosocial risks associated with multiple births resulting from assisted reproduction: a Spanish sample.

    Science.gov (United States)

    Roca de Bes, Montserrat; Gutierrez Maldonado, José; Gris Martínez, José M

    2009-09-01

    To determine the psychosocial risks associated with multiple births (twins or triplets) resulting from assisted reproductive technology (ART). Transverse study. Infertility units of a university hospital and a private hospital. Mothers and fathers of children between 6 months and 4 years conceived by ART (n = 123). The sample was divided into three groups: parents of singletons (n = 77), twins (n = 37), and triplets (n = 9). The questionnaire was self-administered by patients. It was either completed at the hospital or mailed to participants' homes. Scales measured material needs, quality of life, social stigma, depression, stress, and marital satisfaction. Logistic regression models were applied. Significant odds ratios were obtained for the number of children, material needs, social stigma, quality of life, and marital satisfaction. The results were more significant for data provided by mothers than by fathers. The informed consent form handed out at the beginning of ART should include information on the high risk of conceiving twins and triplets and on the possible psychosocial consequences of multiple births. As soon as a multiple pregnancy is confirmed, it would be useful to provide information on support groups and institutions. Psychological advice should also be given to the parents.

  4. Multiple-reflection time-of-flight mass spectrometry for in situ applications

    International Nuclear Information System (INIS)

    Dickel, T.; Plaß, W.R.; Lang, J.; Ebert, J.; Geissel, H.; Haettner, E.; Jesch, C.; Lippert, W.; Petrick, M.; Scheidenberger, C.; Yavor, M.I.

    2013-01-01

    Highlights: • MR-TOF-MS: huge potential in chemistry, medicine, space science, homeland security. • Compact MR-TOF-MS (length ∼30 cm) with very high mass resolving powers (10⁵). • Combination of high resolving power (>10⁵), mobility, API for in situ measurements. • Envisaged applications of mobile MR-TOF-MS. -- Abstract: Multiple-reflection time-of-flight mass spectrometers (MR-TOF-MS) have recently been installed at different low-energy radioactive ion beam facilities. They are used as isobar separators with high ion capacity and as mass spectrometers with high mass resolving power and accuracy for short-lived nuclei. Furthermore, MR-TOF-MS have a huge potential for applications in other fields, such as chemistry, biology, medicine, space science, and homeland security. The development, commissioning and results of an MR-TOF-MS are presented, which serves as proof-of-principle to show that very high mass resolving powers (∼10⁵) can be achieved in a compact device (length ∼30 cm). Based on this work, an MR-TOF-MS for in situ application has been designed. For the first time, this device combines very high mass resolving power (>10⁵), mobility, and an atmospheric pressure inlet in one instrument. It will enable in situ measurements without sample preparation at very high mass accuracy. Envisaged applications of this mobile MR-TOF-MS are discussed.

  5. Multiple-reflection time-of-flight mass spectrometry for in situ applications

    Energy Technology Data Exchange (ETDEWEB)

    Dickel, T., E-mail: t.dickel@gsi.de [II. Physikalisches Institut, Justus-Liebig-Universität Gießen, 35392 Gießen (Germany); GSI Helmholtzzentrum für Schwerionenforschung GmbH, Planckstraße 1, 64291 Darmstadt (Germany); Plaß, W.R. [II. Physikalisches Institut, Justus-Liebig-Universität Gießen, 35392 Gießen (Germany); GSI Helmholtzzentrum für Schwerionenforschung GmbH, Planckstraße 1, 64291 Darmstadt (Germany); Lang, J.; Ebert, J. [II. Physikalisches Institut, Justus-Liebig-Universität Gießen, 35392 Gießen (Germany); Geissel, H.; Haettner, E. [II. Physikalisches Institut, Justus-Liebig-Universität Gießen, 35392 Gießen (Germany); GSI Helmholtzzentrum für Schwerionenforschung GmbH, Planckstraße 1, 64291 Darmstadt (Germany); Jesch, C.; Lippert, W.; Petrick, M. [II. Physikalisches Institut, Justus-Liebig-Universität Gießen, 35392 Gießen (Germany); Scheidenberger, C. [II. Physikalisches Institut, Justus-Liebig-Universität Gießen, 35392 Gießen (Germany); GSI Helmholtzzentrum für Schwerionenforschung GmbH, Planckstraße 1, 64291 Darmstadt (Germany); Yavor, M.I. [Institute for Analytical Instrumentation, Russian Academy of Sciences, 190103 St. Petersburg (Russian Federation)

    2013-12-15

    Highlights: • MR-TOF-MS: huge potential in chemistry, medicine, space science, homeland security. • Compact MR-TOF-MS (length ∼30 cm) with very high mass resolving powers (10⁵). • Combination of high resolving power (>10⁵), mobility, API for in situ measurements. • Envisaged applications of mobile MR-TOF-MS. -- Abstract: Multiple-reflection time-of-flight mass spectrometers (MR-TOF-MS) have recently been installed at different low-energy radioactive ion beam facilities. They are used as isobar separators with high ion capacity and as mass spectrometers with high mass resolving power and accuracy for short-lived nuclei. Furthermore, MR-TOF-MS have a huge potential for applications in other fields, such as chemistry, biology, medicine, space science, and homeland security. The development, commissioning and results of an MR-TOF-MS are presented, which serves as proof-of-principle to show that very high mass resolving powers (∼10⁵) can be achieved in a compact device (length ∼30 cm). Based on this work, an MR-TOF-MS for in situ application has been designed. For the first time, this device combines very high mass resolving power (>10⁵), mobility, and an atmospheric pressure inlet in one instrument. It will enable in situ measurements without sample preparation at very high mass accuracy. Envisaged applications of this mobile MR-TOF-MS are discussed.

  6. Real time operation of a multiple gamma measurement installation

    International Nuclear Information System (INIS)

    Philippot, J.C.; Lefevre, J.

    1980-01-01

    This paper describes a multiple measurement channel facility for fine gamma spectrometry, its real time operation, and the new possibilities which it offers. The installation is presented in its twofold electronic and processing aspects, by considering its architecture, its hardware and software, and its data processing package. Real time operation requires customized general organization, perfect instantaneous knowledge of the status of all the units, and a sound hierarchy between the various participants, operators as well as requestors. The care inherent in the installation itself and in the definition of its operation explains its new possibilities. (Auth.)

  7. A modular method to handle multiple time-dependent quantities in Monte Carlo simulations

    International Nuclear Information System (INIS)

    Shin, J; Faddegon, B A; Perl, J; Schümann, J; Paganetti, H

    2012-01-01

    A general method for handling time-dependent quantities in Monte Carlo simulations was developed to make such simulations more accessible to the medical community for a wide range of applications in radiotherapy, including fluence and dose calculation. To describe time-dependent changes in the most general way, we developed a grammar of functions that we call ‘Time Features’. When a simulation quantity, such as the position of a geometrical object, an angle, a magnetic field, a current, etc, takes its value from a Time Feature, that quantity varies over time. The operation of time-dependent simulation was separated into distinct parts: the Sequence samples time values either sequentially at equal increments or randomly from a uniform distribution (allowing quantities to vary continuously in time), and then each time-dependent quantity is calculated according to its Time Feature. Due to this modular structure, time-dependent simulations, even in the presence of multiple time-dependent quantities, can be efficiently performed in a single simulation with any given time resolution. This approach has been implemented in TOPAS (TOol for PArticle Simulation), designed to make Monte Carlo simulations with Geant4 more accessible to both clinical and research physicists. To demonstrate the method, three clinical situations were simulated: a variable water column used to verify constancy of the Bragg peak of the Crocker Lab eye treatment facility of the University of California, the double-scattering treatment mode of the passive beam scattering system at Massachusetts General Hospital (MGH), where a spinning range modulator wheel accompanied by beam current modulation produces a spread-out Bragg peak, and the scanning mode at MGH, where time-dependent pulse shape, energy distribution and magnetic fields control Bragg peak positions. Results confirm the clinical applicability of the method. (paper)
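    A conceptual sketch of the Time Feature idea (not TOPAS code) is given below: each time-dependent quantity takes its value from a function of time, and a Sequence samples time either sequentially at equal increments or uniformly at random, so several time-dependent quantities stay synchronized within one pass. The two quantities shown, a modulator-wheel angle and a beam current, are hypothetical.

    ```python
    # Conceptual sketch of time features evaluated over a sampled time sequence.
    import math
    import random

    def sequential_times(t_end, n):
        return [i * t_end / n for i in range(n)]

    def random_times(t_end, n, seed=0):
        rng = random.Random(seed)
        return [rng.uniform(0.0, t_end) for _ in range(n)]

    time_features = {
        "wheel_angle_deg": lambda t: (360.0 * t / 0.1) % 360.0,                   # 0.1 s per revolution
        "beam_current_nA": lambda t: 2.0 + 1.5 * math.sin(2 * math.pi * t / 0.1), # modulated current
    }

    for t in sequential_times(t_end=0.1, n=5):
        values = {name: f(t) for name, f in time_features.items()}
        print(f"t = {t:.3f} s  ->  " + ", ".join(f"{k} = {v:7.2f}" for k, v in values.items()))
    ```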

  8. Assessing the precision of a time-sampling-based study among GPs: balancing sample size and measurement frequency.

    Science.gov (United States)

    van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald

    2017-12-04

    Our research is based on a technique for time sampling, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In this study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The required sample size for this study is important for health workforce planners to know if they want to apply this method to target groups who are hard to reach or if fewer resources are available. In this time-sampling method, however, standard power analysis is not sufficient for calculating the required sample size as it accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed. Ninety-five percent CIs were calculated, using equations and simulation techniques, for various numbers of GPs included in the dataset and for various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 h to 3 h as the number of GPs increased from one to 50. Beyond that point, precision continued to improve, but the gain from each additional GP became smaller. Likewise, the analyses showed how the number of participants required decreased if more measurements per participant were taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the
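    The trade-off between the number of participants and the measurement frequency can be sketched with a simple two-level variance model in which the variance of the estimated mean is sigma_b^2/n + sigma_w^2/(n*m). The variance components below are assumed values rather than the study's estimates, so the printed half-widths only illustrate the shape of the trade-off.

    ```python
    # CI half-width for mean weekly hours under a two-level (between/within) variance model.
    import math

    Z = 1.645            # one-tailed 95%, as in the study
    SIGMA_B = 10.0       # between-GP standard deviation of weekly hours (assumed)
    SIGMA_W = 20.0       # within-GP measurement fluctuation (assumed)

    def ci_half_width(n_gps, m_measurements):
        var_mean = SIGMA_B**2 / n_gps + SIGMA_W**2 / (n_gps * m_measurements)
        return Z * math.sqrt(var_mean)

    for n in (1, 10, 50, 100, 300):
        for m in (56, 168):          # one SMS per 3-h slot vs. one per hour over a week
            print(f"n={n:4d} GPs, m={m:3d} measurements: CI half-width = {ci_half_width(n, m):5.2f} h")
    ```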

  9. Non-Cartesian MRI scan time reduction through sparse sampling

    NARCIS (Netherlands)

    Wajer, F.T.A.W.

    2001-01-01

    Non-Cartesian MRI Scan-Time Reduction through Sparse Sampling Magnetic resonance imaging (MRI) signals are measured in the Fourier domain, also called k-space. Samples of the MRI signal cannot be taken at will, but lie along k-space trajectories determined by the magnetic field gradients. MRI

  10. DEP-On-Go for Simultaneous Sensing of Multiple Heavy Metals Pollutants in Environmental Samples

    Directory of Open Access Journals (Sweden)

    Madhu Biyani

    2016-12-01

    Full Text Available We describe a simple and affordable “Disposable Electrode Printed (DEP-On-Go)” sensing platform for the rapid on-site monitoring of trace heavy metal pollutants in environmental samples for early warning, by developing a mobile electrochemical device composed of a palm-sized potentiostat and disposable unmodified screen-printed electrode chips. We present the analytical performance of our device for the sensitive detection of major heavy metal ions, namely, mercury, cadmium, lead, arsenic, zinc, and copper, with detection limits of 1.5, 2.6, 4.0, 5.0, 14.4, and 15.5 μg·L−1, respectively. Importantly, the utility of this device is extended to detect multiple heavy metals simultaneously with well-defined voltammograms and similar sensitivity. Finally, “DEP-On-Go” was successfully applied to detect heavy metals in real environmental samples from groundwater, tap water, house dust, soil, and industrially processed rice and noodle foods. We evaluated the efficiency of this system through a linear correlation with inductively coupled plasma mass spectrometry, and the results suggested that this system can be relied on for on-site screening purposes. On-field applications using real samples of groundwater for drinking in the northern parts of India support the easy-to-use, low-cost (<1 USD), rapid (within 5 min), and reliable (ppb-level detection limits) performance of our device for the on-site detection and monitoring of multiple heavy metals in resource-limited settings.

  11. Simultaneous presence of multiple Campylobacter species in dogs

    NARCIS (Netherlands)

    Koene, M.G.J.; Houwers, D.J.; Dijkstra, J.R.; Duim, B.; Wagenaar, J.A.

    2004-01-01

    The prevalence of coinfection of Campylobacter species in dogs was determined using four isolation methods. In 26% of the positive-testing stools, multiple Campylobacter species were identified. The use of multiple isolation methods as well as the time lapse between sampling and processing are

  12. Uranium mass and neutron multiplication factor estimates from time-correlation coincidence counts

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Wenxiong [China Academy of Engineering Physics, Center for Strategic Studies, Beijing 100088 (China); Li, Jiansheng [China Academy of Engineering Physics, Institute of Nuclear Physics and Chemistry, Mianyang 621900 (China); Zhu, Jianyu [China Academy of Engineering Physics, Center for Strategic Studies, Beijing 100088 (China)

    2015-10-11

    Time-correlation coincidence counts of neutrons are an important means to measure attributes of nuclear material. The main deficiency in the analysis is that an attribute of an unknown component can only be assessed by comparing it with similar known components. There is a lack of a universal method of measurement suitable for the different attributes of the components. This paper presents a new method that uses universal relations to estimate the mass and neutron multiplication factor of any uranium component with known enrichment. Based on numerical simulations and analyses of 64 highly enriched uranium components with different thicknesses and average radii, the relations between mass, multiplication and coincidence spectral features have been obtained by linear regression analysis. To examine the validity of the method in estimating the mass of uranium components with different sizes, shapes, enrichment, and shielding, the features of time-correlation coincidence-count spectra for other objects with similar attributes are simulated. Most of the masses and multiplications for these objects could also be derived by the formulation. Experimental measurements of highly enriched uranium castings have also been used to verify the formulation. The results show that for a well-designed time-dependent coincidence-count measuring system of a uranium attribute, there are a set of relations dependent on the uranium enrichment by which the mass and multiplication of the measured uranium components of any shape and size can be estimated from the features of the source-detector coincidence-count spectrum.
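
    The relations themselves are not given in the record; the sketch below only illustrates the general shape of the approach (fitting linear relations between coincidence-spectrum features and mass or multiplication by least squares) with entirely made-up feature values and coefficients.

        # Hedged sketch: linear regression from coincidence-spectrum features to mass and
        # multiplication. All numbers are invented for illustration.
        import numpy as np

        # Rows: simulated components; columns: illustrative spectral features.
        X = np.array([[1.0, 0.20, 0.030],
                      [1.6, 0.55, 0.110],
                      [2.1, 0.90, 0.230],
                      [2.8, 1.40, 0.420]])
        y_mass = np.array([1.2, 2.5, 3.9, 5.6])       # kg, hypothetical
        y_mult = np.array([1.05, 1.12, 1.20, 1.31])   # multiplication factor, hypothetical

        A = np.hstack([X, np.ones((len(X), 1))])      # add an intercept column
        coef_mass, *_ = np.linalg.lstsq(A, y_mass, rcond=None)
        coef_mult, *_ = np.linalg.lstsq(A, y_mult, rcond=None)

        new_obj = np.array([1.9, 0.75, 0.180, 1.0])
        print("estimated mass:", new_obj @ coef_mass)
        print("estimated multiplication:", new_obj @ coef_mult)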

  13. Multiple-Factor Based Sparse Urban Travel Time Prediction

    Directory of Open Access Journals (Sweden)

    Xinyan Zhu

    2018-02-01

    Full Text Available The prediction of travel time is challenging given the sparseness of real-time traffic data and the uncertainty of travel, because it is influenced by multiple factors on congested urban road networks. In this paper, we propose a three-layer neural network that incorporates multiple factors from large-scale probe-vehicle data to estimate travel time. The procedure includes the following three steps. First, we aggregate data according to the travel time of a single taxi traveling a target link on working days, as traffic flows display similar patterns over a weekly cycle. We then extract feature relationships between target and adjacent links at 30-min intervals; about 224,830,178 records are extracted from probe vehicles. Second, we design a three-layer artificial neural network model: the number of neurons in the input layer is eight, and the number of neurons in the output layer is one. Finally, the trained neural network model is used for link travel time prediction. Different factors are included to examine their influence on the link travel time. Our model is verified using historical data from probe vehicles collected from May to July 2014 in Wuhan, China. The results show that the designed artificial neural network model can predict link travel times and detect the influence of different factors on them.
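
    A compact sketch of the stated architecture (eight inputs, one hidden layer, one output) using scikit-learn; the hidden-layer size, the choice of features and the synthetic data are assumptions, not the paper's configuration.

        # Sketch of a three-layer network (8 inputs -> hidden layer -> 1 output) for link travel time.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.random((2000, 8))                                         # 8 illustrative input factors
        y = 60 + 120 * X[:, 0] + 40 * X[:, 3] + rng.normal(0, 5, 2000)    # synthetic travel time (s)

        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        model.fit(X[:1500], y[:1500])
        print("held-out R^2:", round(model.score(X[1500:], y[1500:]), 3))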

  14. MITIE: Simultaneous RNA-Seq-based transcript identification and quantification in multiple samples.

    Science.gov (United States)

    Behr, Jonas; Kahles, André; Zhong, Yi; Sreedharan, Vipin T; Drewe, Philipp; Rätsch, Gunnar

    2013-10-15

    High-throughput sequencing of mRNA (RNA-Seq) has led to tremendous improvements in the detection of expressed genes and reconstruction of RNA transcripts. However, the extensive dynamic range of gene expression, technical limitations and biases, as well as the observed complexity of the transcriptional landscape, pose profound computational challenges for transcriptome reconstruction. We present the novel framework MITIE (Mixed Integer Transcript IdEntification) for simultaneous transcript reconstruction and quantification. We define a likelihood function based on the negative binomial distribution, use a regularization approach to select a few transcripts collectively explaining the observed read data and show how to find the optimal solution using Mixed Integer Programming. MITIE can (i) take advantage of known transcripts, (ii) reconstruct and quantify transcripts simultaneously in multiple samples, and (iii) resolve the location of multi-mapping reads. It is designed for genome- and assembly-based transcriptome reconstruction. We present an extensive study based on realistic simulated RNA-Seq data. When compared with state-of-the-art approaches, MITIE proves to be significantly more sensitive and overall more accurate. Moreover, MITIE yields substantial performance gains when used with multiple samples. We applied our system to 38 Drosophila melanogaster modENCODE RNA-Seq libraries and estimated the sensitivity of reconstructing omitted transcript annotations and the specificity with respect to annotated transcripts. Our results corroborate that a well-motivated objective paired with appropriate optimization techniques leads to significant improvements over the state-of-the-art in transcriptome reconstruction. MITIE is implemented in C++ and is available from http://bioweb.me/mitie under the GPL license.

  15. Use of the Godin leisure-time exercise questionnaire in multiple sclerosis research: a comprehensive narrative review.

    Science.gov (United States)

    Sikes, Elizabeth Morghen; Richardson, Emma V; Cederberg, Katie J; Sasaki, Jeffer E; Sandroff, Brian M; Motl, Robert W

    2018-01-17

    The Godin Leisure-Time Exercise Questionnaire has been a commonly applied measure of physical activity in research among persons with multiple sclerosis over the past decade. This paper provides a comprehensive description of its application and inclusion in research on physical activity in multiple sclerosis. This comprehensive, narrative review included papers that were published between 1985 and 2017, written in English, involved participants with multiple sclerosis as a primary population, measured physical activity, and cited one of the two original Godin papers. There is a broad scope of research that has included the Godin Leisure-Time Exercise Questionnaire in persons with multiple sclerosis. Overall, 8 papers evaluated its psychometric properties, 21 evaluated patterns of physical activity, 24 evaluated correlates or determinants of physical activity, 28 evaluated outcomes or consequences of physical activity, and 15 evaluated physical activity interventions. The Godin Leisure-Time Exercise Questionnaire is a valid self-report measure of physical activity in persons with multiple sclerosis, and further is an appropriate, simple, and effective tool for describing patterns of physical activity, examining correlates and outcomes of physical activity, and providing a sensitive outcome for measuring change in physical activity after an intervention. Implications for rehabilitation There is increasing interest in physical activity and its benefits in multiple sclerosis. The study of physical activity requires appropriate and standardized measures. The Godin Leisure-Time Exercise Questionnaire is a common self-report measure of physical activity for persons with multiple sclerosis. Godin Leisure-Time Exercise Questionnaire scores are reliable measures of physical activity in persons with multiple sclerosis. The Godin Leisure-Time Exercise Questionnaire further is an appropriate, simple, and effective tool for describing patterns of physical activity, examining
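
    For readers unfamiliar with the instrument: the commonly reported Godin weekly leisure activity score weights the weekly frequencies of strenuous, moderate and light bouts by 9, 5 and 3; the helper below is a hedged illustration and should be checked against the original questionnaire before use.

        # Commonly reported Godin Leisure-Time Exercise Questionnaire scoring (9/5/3 weights);
        # verify against the original instrument before using in a study.
        def godin_score(strenuous, moderate, light):
            return 9 * strenuous + 5 * moderate + 3 * light

        print(godin_score(strenuous=2, moderate=3, light=4))  # 2*9 + 3*5 + 4*3 = 45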

  16. Feasibility Study of Neutron Multiplicity Assay for a Heterogeneous Sludge Sample containing Na, Pu and other Impurities

    International Nuclear Information System (INIS)

    Nakamura, H.; Nakamichi, H.; Mukai, Y.; Yoshimoto, K.; Beddingfield, D.H.

    2010-01-01

    To reduce the radioactivity of liquid waste generated at PCDF, a neutralization-precipitation process of radioactive nuclides by sodium hydroxide is used. We call the precipitate a 'sludge' after calcination. Pu mass in the sludge is normally determined by sampling and DA within the required uncertainty on DIQ. The annual yield of the mass is small, but it accumulates and reaches a few kilograms, so it is declared as retained waste and verified at PIV. An HM-5-based verification is applied for sludge verification. The sludge contains many chemical components, for example Pu (~10 wt%), U, Am, SUS components, halogens, NaNO₃ (main component), residual NaOH, and moisture, mixed together as an impure heterogeneous sludge sample. As a result, there is a large uncertainty in the sampling and DA currently used at PCDF. In order to improve the material accounting, we performed a feasibility study using neutron multiplicity assay for impure sludge samples. We measured selected sludge samples using a multiplicity counter called FCAS (Fast Carton Assay System), designed by JAEA and Canberra. The PCDF sludge materials fall into the category of 'difficult to measure' because of the high levels of impurities, high alpha values and somewhat small Pu mass. For the sludge measurements, it was confirmed that good consistency between the Pu mass in a pure sludge standard (PuO₂-Na₂U₂O₇, alpha = 7) and the DA could be obtained. For unknown samples, using 14-hour measurements, we could obtain quite low statistical uncertainty on the Doubles (~1%) and Triples (~10%) count rates, although the alpha value was extremely high (15-25) and the FCAS efficiency (40%) was relatively low for typical multiplicity counters. Despite the detector efficiency challenges and the material challenges (high alpha, low Pu mass, heterogeneous matrix), we have been able to obtain assay results that greatly exceed the accountancy requirements for retained waste materials. We have

  17. In-Sample Confidence Bands and Out-of-Sample Forecast Bands for Time-Varying Parameters in Observation Driven Models

    NARCIS (Netherlands)

    Blasques, F.; Koopman, S.J.; Lasak, K.A.; Lucas, A.

    2016-01-01

    We study the performances of alternative methods for calculating in-sample confidence and out-of-sample forecast bands for time-varying parameters. The in-sample bands reflect parameter uncertainty, while the out-of-sample bands reflect not only parameter uncertainty, but also innovation

  18. Mixing Time, Inversion and Multiple Emulsion Formation in a Limonene and Water Pickering Emulsion

    Directory of Open Access Journals (Sweden)

    Laura Sawiak

    2018-05-01

    Full Text Available It has previously been demonstrated that particle-stabilized emulsions comprised of limonene, water and fumed silica particles exhibit complex emulsification behavior as a function of composition and the duration of the emulsification step. Most notably the system can invert from being oil-continuous to being water-continuous under prolonged mixing. Here we investigate this phenomenon experimentally for the regime where water is the majority liquid. We prepare samples using a range of different emulsification times and we examine the final properties in bulk and via confocal microscopy. We use the images to quantitatively track the sizes of droplets and clusters of particles. We find that a dense emulsion of water droplets forms initially which is transformed, in time, into a water-in-oil-in-water multiple emulsion with concomitant changes in droplet and cluster sizes. In parallel we carry out rheological studies of water-in-limonene emulsions using different concentrations of fumed silica particles. We unite our observations to propose a mechanism for inversion based on the changes in flow properties and the availability of particles during emulsification.

  19. TimesVector: a vectorized clustering approach to the analysis of time series transcriptome data from multiple phenotypes.

    Science.gov (United States)

    Jung, Inuk; Jo, Kyuri; Kang, Hyejin; Ahn, Hongryul; Yu, Youngjae; Kim, Sun

    2017-12-01

    Identifying biologically meaningful gene expression patterns from time series gene expression data is important to understand the underlying biological mechanisms. To identify significantly perturbed gene sets between different phenotypes, analysis of time series transcriptome data requires consideration of time and sample dimensions. Thus, the analysis of such time series data seeks to search gene sets that exhibit similar or different expression patterns between two or more sample conditions, constituting the three-dimensional data, i.e. gene-time-condition. Computational complexity for analyzing such data is very high, compared to the already difficult NP-hard two dimensional biclustering algorithms. Because of this challenge, traditional time series clustering algorithms are designed to capture co-expressed genes with similar expression pattern in two sample conditions. We present a triclustering algorithm, TimesVector, specifically designed for clustering three-dimensional time series data to capture distinctively similar or different gene expression patterns between two or more sample conditions. TimesVector identifies clusters with distinctive expression patterns in three steps: (i) dimension reduction and clustering of time-condition concatenated vectors, (ii) post-processing clusters for detecting similar and distinct expression patterns and (iii) rescuing genes from unclassified clusters. Using four sets of time series gene expression data, generated by both microarray and high throughput sequencing platforms, we demonstrated that TimesVector successfully detected biologically meaningful clusters of high quality. TimesVector improved the clustering quality compared to existing triclustering tools and only TimesVector detected clusters with differential expression patterns across conditions successfully. The TimesVector software is available at http://biohealth.snu.ac.kr/software/TimesVector/. sunkim.bioinfo@snu.ac.kr. Supplementary data are available at
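
    A minimal sketch of step (i) only: each gene's time-by-condition profile is concatenated into a single vector and the vectors are clustered. Using synthetic data and k-means is an assumption for illustration; the paper's own clustering and post-processing steps are not reproduced.

        # Step (i) sketch: concatenate (conditions x time points) per gene, then cluster the vectors.
        import numpy as np
        from sklearn.cluster import KMeans

        n_genes, n_conditions, n_times = 500, 3, 8
        rng = np.random.default_rng(1)
        expr = rng.normal(size=(n_genes, n_conditions, n_times))      # synthetic expression data

        vectors = expr.reshape(n_genes, n_conditions * n_times)       # time-condition concatenation
        labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(vectors)
        print("genes per cluster:", np.bincount(labels))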

  20. Imaging a Large Sample with Selective Plane Illumination Microscopy Based on Multiple Fluorescent Microsphere Tracking

    Science.gov (United States)

    Ryu, Inkeon; Kim, Daekeun

    2018-04-01

    A typical selective plane illumination microscopy (SPIM) image size is basically limited by the field of view, which is a characteristic of the objective lens. If an image larger than the imaging area of the sample is to be obtained, image stitching, which combines step-scanned images into a single panoramic image, is required. However, accurately registering the step-scanned images is very difficult because the SPIM system uses a customized sample mount where uncertainties for the translational and the rotational motions exist. In this paper, an image registration technique based on multiple fluorescent microsphere tracking is proposed, in which the constellations of, and distances between, at least two fluorescent microspheres embedded in the sample are quantified. Image stitching results are demonstrated for optically cleared large tissue with various staining methods. Compensation for the effect of the sample rotation that occurs during the translational motion in the sample mount is also discussed.

  1. Velocity landscape correlation resolves multiple flowing protein populations from fluorescence image time series.

    Science.gov (United States)

    Pandžić, Elvis; Abu-Arish, Asmahan; Whan, Renee M; Hanrahan, John W; Wiseman, Paul W

    2018-02-16

    Molecular, vesicular and organellar flows are of fundamental importance for the delivery of nutrients and essential components used in cellular functions such as motility and division. With recent advances in fluorescence/super-resolution microscopy modalities we can resolve the movements of these objects at higher spatio-temporal resolutions and with better sensitivity. Previously, spatio-temporal image correlation spectroscopy has been applied to map molecular flows by correlation analysis of fluorescence fluctuations in image series. However, an underlying assumption of this approach is that the sampled time windows contain one dominant flowing component. Although this was true for most of the cases analyzed earlier, in some situations two or more different flowing populations can be present in the same spatio-temporal window. We introduce an approach, termed velocity landscape correlation (VLC), which detects and extracts multiple flow components present in a sampled image region via an extension of the correlation analysis of fluorescence intensity fluctuations. First we demonstrate theoretically how this approach works, test the performance of the method with a range of computer simulated image series with varying flow dynamics. Finally we apply VLC to study variable fluxing of STIM1 proteins on microtubules connected to the plasma membrane of Cystic Fibrosis Bronchial Epithelial (CFBE) cells. Copyright © 2018 Elsevier Inc. All rights reserved.

  2. Simultaneous small-sample comparisons in longitudinal or multi-endpoint trials using multiple marginal models

    DEFF Research Database (Denmark)

    Pallmann, Philip; Ritz, Christian; Hothorn, Ludwig A

    2018-01-01

    Simultaneous inference in longitudinal, repeated-measures, and multi-endpoint designs can be onerous, especially when trying to find a reasonable joint model from which the interesting effects and covariances are estimated. A novel statistical approach known as multiple marginal models greatly simplifies the modelling process: the core idea is to "marginalise" the problem and fit multiple small models to different portions of the data, and then estimate the overall covariance matrix in a subsequent, separate step. Using these estimates guarantees strong control of the family-wise error rate, however only asymptotically. In this paper, we show how to make the approach also applicable to small-sample data problems. Specifically, we discuss the computation of adjusted P values and simultaneous confidence bounds for comparisons of randomised treatment groups as well as for levels...

  3. Adaptive modified function projective synchronization of multiple time-delayed chaotic Rossler system

    International Nuclear Information System (INIS)

    Sudheer, K. Sebastian; Sabir, M.

    2011-01-01

    In this Letter we consider modified function projective synchronization of unidirectionally coupled multiple time-delayed Rossler chaotic systems using adaptive controls. Recently, delay differential equations have attracted much attention in the field of nonlinear dynamics. The high complexity of multiple time-delayed systems can provide a new architecture for enhancing message security in chaos-based encryption systems. Adaptive control can be used for synchronization when the parameters of the system are unknown. Based on Lyapunov stability theory, the adaptive control law and the parameter update law are derived so that the states of the two chaotic systems become modified function projective synchronized. Numerical simulations are presented to demonstrate the effectiveness of the proposed adaptive controllers.

  4. Noise Reduction Effect of Multiple-Sampling-Based Signal-Readout Circuits for Ultra-Low Noise CMOS Image Sensors

    Directory of Open Access Journals (Sweden)

    Shoji Kawahito

    2016-11-01

    Full Text Available This paper discusses the noise reduction effect of multiple-sampling-based signal readout circuits for implementing ultra-low-noise image sensors. The correlated multiple sampling (CMS) technique has recently become an important technology for high-gain column readout circuits in low-noise CMOS image sensors (CISs). This paper reveals how the column CMS circuits, together with a pixel having a high-conversion-gain charge detector and low-noise transistor, realize deep sub-electron read noise levels, based on an analysis of noise components in the signal readout chain from the pixel to the column analog-to-digital converter (ADC). The noise measurement results of experimental CISs are compared with the noise analysis, and the effect of noise reduction as a function of the sampling number is discussed at the deep sub-electron level. Images taken with three CMS gains of two, 16, and 128 show a distinct advantage in image contrast for the gain of 128 (median noise: 0.29 e−rms) when compared with the CMS gain of two (2.4 e−rms) or 16 (1.1 e−rms).
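
    As a rough point of reference, purely thermal read noise would be expected to average down as one over the square root of the sampling number M; the quick check below compares that idealized scaling with the figures quoted above (the measured values also contain 1/f and RTS components that do not follow this law).

        # Idealized 1/sqrt(M) scaling of thermal read noise with CMS sampling number M,
        # anchored to the 2.4 e-rms figure quoted for M = 2. Real sensors deviate from this.
        import math

        noise_at_m2 = 2.4  # e-rms, from the abstract
        for m in (2, 16, 128):
            print(m, round(noise_at_m2 * math.sqrt(2.0 / m), 2))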

  5. Eigenvalue sensitivity of sampled time systems operating in closed loop

    Science.gov (United States)

    Bernal, Dionisio

    2018-05-01

    The use of feedback to create closed-loop eigenstructures with high sensitivity has received some attention in the Structural Health Monitoring field. Although practical implementation is necessarily digital, and thus in sampled time, work thus far has centered on the continuous time framework, both in design and in checking performance. It is shown in this paper that the performance in discrete time, at typical sampling rates, can differ notably from that anticipated in the continuous time formulation, and that discrepancies can be particularly large in the real part of the eigenvalue sensitivities; one consequence is a significant error in the (linear) estimate of the level of damage at which closed-loop stability is lost. As one anticipates, explicit consideration of the sampling rate poses no special difficulties in the closed-loop eigenstructure design, and the relevant expressions are developed in the paper, including a formula for the efficient evaluation of the derivative of the matrix exponential based on the theory of complex perturbations. The paper presents an easily reproduced numerical example showing the level of error that can result when the discrete time implementation of the controller is not considered.
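
    The paper's own formula is not reproduced in the record; as background, a standard complex-step approximation of the derivative of a matrix exponential with respect to a parameter is sketched below (the system matrix and parameter are hypothetical).

        # Complex-step sketch (a standard technique, not necessarily the paper's exact formula):
        # d/dp expm(A(p) * dt) ~= Im( expm(A(p + i*h) * dt) ) / h, free of subtractive cancellation.
        import numpy as np
        from scipy.linalg import expm

        def A(p):
            # Hypothetical parameter-dependent state matrix (p could represent a stiffness change).
            return np.array([[0.0, 1.0],
                             [-(1.0 + p), -0.1]])

        def dexpm_dp(p, dt=0.01, h=1e-20):
            return expm(A(p + 1j * h) * dt).imag / h

        print(dexpm_dp(0.3))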

  6. Weighted statistical parameters for irregularly sampled time series

    Science.gov (United States)

    Rimoldini, Lorenzo

    2014-01-01

    Unevenly spaced time series are common in astronomy because of the day-night cycle, weather conditions, dependence on the source position in the sky, allocated telescope time and corrupt measurements, for example, or inherent to the scanning law of satellites like Hipparcos and the forthcoming Gaia. Irregular sampling often causes clumps of measurements and gaps with no data which can severely disrupt the values of estimators. This paper aims at improving the accuracy of common statistical parameters when linear interpolation (in time or phase) can be considered an acceptable approximation of a deterministic signal. A pragmatic solution is formulated in terms of a simple weighting scheme, adapting to the sampling density and noise level, applicable to large data volumes at minimal computational cost. Tests on time series from the Hipparcos periodic catalogue led to significant improvements in the overall accuracy and precision of the estimators with respect to the unweighted counterparts and those weighted by inverse-squared uncertainties. Automated classification procedures employing statistical parameters weighted by the suggested scheme confirmed the benefits of the improved input attributes. The classification of eclipsing binaries, Mira, RR Lyrae, Delta Cephei and Alpha2 Canum Venaticorum stars employing exclusively weighted descriptive statistics achieved an overall accuracy of 92 per cent, about 6 per cent higher than with unweighted estimators.
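
    A simplified illustration of interval-based weighting, where each observation is weighted by half the span of its neighbouring time gaps so that clumped points are down-weighted; the published scheme also adapts to the noise level, which is not reproduced here.

        # Interpolation-motivated weights for an irregularly sampled series, used for a weighted mean.
        import numpy as np

        t = np.array([0.0, 0.1, 0.15, 0.2, 3.0, 6.0, 6.05, 9.0])   # irregular sampling times
        x = np.sin(2 * np.pi * t / 5.0)                             # values observed at those times

        gaps = np.diff(t)
        w = np.empty_like(t)
        w[0], w[-1] = gaps[0] / 2, gaps[-1] / 2
        w[1:-1] = (gaps[:-1] + gaps[1:]) / 2
        w /= w.sum()

        print("unweighted mean:", x.mean(), " weighted mean:", (w * x).sum())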

  7. Degradation of hydrocarbons in soil samples analyzed within accepted analytical holding times

    International Nuclear Information System (INIS)

    Jackson, J.; Thomey, N.; Dietlein, L.F.

    1992-01-01

    Samples which are collected in conjunction with subsurface investigations at leaking petroleum storage tank sites and petroleum refineries are routinely analyzed for benzene, toluene, ethylbenzene, xylenes (BTEX), and total petroleum hydrocarbons (TPH). Water samples are preserved by the addition of hydrochloric acid and maintained at four degrees centigrade prior to analysis. This is done to prevent bacterial degradation of hydrocarbons. Chemical preservation is not presently performed on soil samples. Instead, the samples are cooled and maintained at four degrees centigrade. This study was done to measure the degree of degradation of hydrocarbons in soil samples which are analyzed within accepted holding times. Soil samples were collected and representative subsamples were prepared from the initial sample. Subsamples were analyzed in triplicate for BTEX and TPH throughout the length of the approved holding times to measure the extent of sample constituent degradation prior to analysis. Findings imply that for sandy soils, BTEX and TPH concentrations can be highly dependent upon the length of time which elapses between sample collection and analysis

  8. Non-Abelian Kubo formula and the multiple time-scale method

    International Nuclear Information System (INIS)

    Zhang, X.; Li, J.

    1996-01-01

    The non-Abelian Kubo formula is derived from the kinetic theory. That expression is compared with the one obtained using the eikonal for a Chern–Simons theory. The multiple time-scale method is used to study the non-Abelian Kubo formula, and the damping rate for longitudinal color waves is computed. copyright 1996 Academic Press, Inc

  9. Column-Parallel Single Slope ADC with Digital Correlated Multiple Sampling for Low Noise CMOS Image Sensors

    NARCIS (Netherlands)

    Chen, Y.; Theuwissen, A.J.P.; Chae, Y.

    2011-01-01

    This paper presents a low noise CMOS image sensor (CIS) using 10/12 bit configurable column-parallel single slope ADCs (SS-ADCs) and digital correlated multiple sampling (CMS). The sensor used is a conventional 4T active pixel with a pinned-photodiode as photon detector. The test sensor was

  10. Hexamethylcyclopentadiene: time-resolved photoelectron spectroscopy and ab initio multiple spawning simulations

    DEFF Research Database (Denmark)

    Wolf, T. J. A.; Kuhlman, Thomas Scheby; Schalk, O.

    2014-01-01

    comparing time-resolved photoelectron spectroscopy (TRPES) with ab initio multiple spawning (AIMS) simulations on the MS-MR-CASPT2 level of theory. We disentangle the relationship between two phenomena that dominate the immediate molecular response upon light absorption: a spectrally dependent delay...

  11. A time warping approach to multiple sequence alignment.

    Science.gov (United States)

    Arribas-Gil, Ana; Matias, Catherine

    2017-04-25

    We propose an approach for multiple sequence alignment (MSA) derived from the dynamic time warping viewpoint and recent techniques of curve synchronization developed in the context of functional data analysis. Starting from pairwise alignments of all the sequences (viewed as paths in a certain space), we construct a median path that represents the MSA we are looking for. We establish a proof of concept that our method could be an interesting ingredient to include in refined MSA techniques. We present a simple synthetic experiment as well as the study of a benchmark dataset, together with comparisons with two widely used MSA software packages.
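
    For context, a minimal dynamic time warping distance between two numeric sequences is sketched below; this is only the textbook DTW recursion, not the authors' median-path construction for multiple sequence alignment.

        # Textbook DTW distance via dynamic programming.
        import numpy as np

        def dtw_distance(a, b):
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        print(dtw_distance([1, 2, 3, 4, 3, 2], [1, 1, 2, 3, 4, 2]))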

  12. A confirmatory holding time study for purgeable VOCs in water samples

    International Nuclear Information System (INIS)

    West, O.R.; Bayne, C.K.; Siegrist, R.L.; Holden, W.H.; Bottrell, D.W.

    1996-01-01

    Analyte stability during pre-analytical storage is essential to the accurate quantification of contaminants in environmental samples. This is particularly true for volatile organic compounds (VOCs), which can easily volatilize and/or degrade during sample storage. Recognizing this, regulatory agencies require that water samples be collected in vials without headspace and stored at 4 degrees C, and that analyses be conducted within 14 days, even if samples are acid-preserved. Since the selection of a 14-day holding time was largely arbitrary, the appropriateness of this requirement must be re-evaluated. The goal of the study described here was to provide regulatory agencies with the necessary data to extend the maximum holding time for properly preserved VOC water samples to 28 days

  13. Detection of bifurcations in noisy coupled systems from multiple time series

    International Nuclear Information System (INIS)

    Williamson, Mark S.; Lenton, Timothy M.

    2015-01-01

    We generalize a method of detecting an approaching bifurcation in a time series of a noisy system from the special case of one dynamical variable to multiple dynamical variables. For a system described by a stochastic differential equation consisting of an autonomous deterministic part with one dynamical variable and an additive white noise term, small perturbations away from the system's fixed point will decay slower the closer the system is to a bifurcation. This phenomenon is known as critical slowing down and all such systems exhibit this decay-type behaviour. However, when the deterministic part has multiple coupled dynamical variables, the possible dynamics can be much richer, exhibiting oscillatory and chaotic behaviour. In our generalization to the multi-variable case, we find additional indicators to decay rate, such as frequency of oscillation. In the case of approaching a homoclinic bifurcation, there is no change in decay rate but there is a decrease in frequency of oscillations. The expanded method therefore adds extra tools to help detect and classify approaching bifurcations given multiple time series, where the underlying dynamics are not fully known. Our generalisation also allows bifurcation detection to be applied spatially if one treats each spatial location as a new dynamical variable. One may then determine the unstable spatial mode(s). This is also something that has not been possible with the single variable method. The method is applicable to any set of time series regardless of its origin, but may be particularly useful when anticipating abrupt changes in the multi-dimensional climate system
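
    One common way to obtain the decay-rate and oscillation-frequency indicators mentioned above is to fit a first-order vector autoregression to the multivariate series and examine the eigenvalues of its coefficient matrix; the sketch below uses synthetic data and is a hedged illustration, not necessarily the authors' estimator.

        # Fit x_{t+1} ~= x_t @ J by least squares; eigenvalues of the propagator give decay rates
        # (how fast perturbations relax) and oscillation frequencies.
        import numpy as np

        rng = np.random.default_rng(0)
        dt = 0.1
        A = np.array([[-0.2, 1.0], [-1.0, -0.2]])        # synthetic noisy oscillator near a fixed point
        x = np.zeros((5000, 2))
        for k in range(4999):
            x[k + 1] = x[k] + dt * (A @ x[k]) + 0.05 * rng.normal(size=2)

        J, *_ = np.linalg.lstsq(x[:-1], x[1:], rcond=None)
        eigvals = np.linalg.eigvals(J.T)
        decay_rates = -np.log(np.abs(eigvals)) / dt                  # smaller -> closer to bifurcation
        frequencies = np.abs(np.angle(eigvals)) / (2 * np.pi * dt)   # cycles per time unit
        print("decay rates:", decay_rates, "frequencies:", frequencies)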

  14. Detection of bifurcations in noisy coupled systems from multiple time series

    Science.gov (United States)

    Williamson, Mark S.; Lenton, Timothy M.

    2015-03-01

    We generalize a method of detecting an approaching bifurcation in a time series of a noisy system from the special case of one dynamical variable to multiple dynamical variables. For a system described by a stochastic differential equation consisting of an autonomous deterministic part with one dynamical variable and an additive white noise term, small perturbations away from the system's fixed point will decay slower the closer the system is to a bifurcation. This phenomenon is known as critical slowing down and all such systems exhibit this decay-type behaviour. However, when the deterministic part has multiple coupled dynamical variables, the possible dynamics can be much richer, exhibiting oscillatory and chaotic behaviour. In our generalization to the multi-variable case, we find additional indicators to decay rate, such as frequency of oscillation. In the case of approaching a homoclinic bifurcation, there is no change in decay rate but there is a decrease in frequency of oscillations. The expanded method therefore adds extra tools to help detect and classify approaching bifurcations given multiple time series, where the underlying dynamics are not fully known. Our generalisation also allows bifurcation detection to be applied spatially if one treats each spatial location as a new dynamical variable. One may then determine the unstable spatial mode(s). This is also something that has not been possible with the single variable method. The method is applicable to any set of time series regardless of its origin, but may be particularly useful when anticipating abrupt changes in the multi-dimensional climate system.

  15. Detection of bifurcations in noisy coupled systems from multiple time series

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, Mark S., E-mail: m.s.williamson@exeter.ac.uk; Lenton, Timothy M. [Earth System Science Group, College of Life and Environmental Sciences, University of Exeter, Laver Building, North Park Road, Exeter EX4 4QE (United Kingdom)

    2015-03-15

    We generalize a method of detecting an approaching bifurcation in a time series of a noisy system from the special case of one dynamical variable to multiple dynamical variables. For a system described by a stochastic differential equation consisting of an autonomous deterministic part with one dynamical variable and an additive white noise term, small perturbations away from the system's fixed point will decay slower the closer the system is to a bifurcation. This phenomenon is known as critical slowing down and all such systems exhibit this decay-type behaviour. However, when the deterministic part has multiple coupled dynamical variables, the possible dynamics can be much richer, exhibiting oscillatory and chaotic behaviour. In our generalization to the multi-variable case, we find additional indicators to decay rate, such as frequency of oscillation. In the case of approaching a homoclinic bifurcation, there is no change in decay rate but there is a decrease in frequency of oscillations. The expanded method therefore adds extra tools to help detect and classify approaching bifurcations given multiple time series, where the underlying dynamics are not fully known. Our generalisation also allows bifurcation detection to be applied spatially if one treats each spatial location as a new dynamical variable. One may then determine the unstable spatial mode(s). This is also something that has not been possible with the single variable method. The method is applicable to any set of time series regardless of its origin, but may be particularly useful when anticipating abrupt changes in the multi-dimensional climate system.

  16. Vehicle Routing Problem with Backhaul, Multiple Trips and Time Window

    Directory of Open Access Journals (Sweden)

    Johan Oscar Ong

    2011-01-01

    Full Text Available Transportation planning is one of the important components for increasing efficiency and effectiveness in the supply chain system; good planning yields savings in the total cost of the supply chain. This paper develops a new VRP variant, the VRP with backhauls, multiple trips, and time windows (VRPBMTTW), along with its solution techniques, using Ant Colony Optimization (ACO) with Sequential Insertion as the initial-solution algorithm. ACO is modified by adding a decoding process in order to determine the number of vehicles, total duration time, and range of duration time regardless of capacity constraints and time windows. The algorithm is tested using sets of random data, verified, and its parameter changes analyzed. The computational results for hypothetical data with 50% backhauls and mixed time windows are reported.

  17. Tightness of M-estimators for multiple linear regression in time series

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Bent

    We show tightness of a general M-estimator for multiple linear regression in time series. The positive criterion function for the M-estimator is assumed lower semi-continuous and sufficiently large for large argument: Particular cases are the Huber-skip and quantile regression. Tightness requires...

  18. Communication: Multiple atomistic force fields in a single enhanced sampling simulation

    Energy Technology Data Exchange (ETDEWEB)

    Hoang Viet, Man [Department of Physics, North Carolina State University, Raleigh, North Carolina 27695-8202 (United States); Derreumaux, Philippe, E-mail: philippe.derreumaux@ibpc.fr [Laboratoire de Biochimie Théorique, UPR 9080, CNRS, Université Denis Diderot, Sorbonne Paris Cité IBPC, 13 rue Pierre et Marie Curie, 75005 Paris (France); Institut Universitaire de France, 103 Bvd Saint-Germain, 75005 Paris (France); Nguyen, Phuong H., E-mail: phuong.nguyen@ibpc.fr [Laboratoire de Biochimie Théorique, UPR 9080, CNRS, Université Denis Diderot, Sorbonne Paris Cité IBPC, 13 rue Pierre et Marie Curie, 75005 Paris (France)

    2015-07-14

    The main concerns of biomolecular dynamics simulations are the convergence of the conformational sampling and the dependence of the results on the force fields. While the first issue can be addressed by employing enhanced sampling techniques such as simulated tempering or replica exchange molecular dynamics, repeating these simulations with different force fields is very time consuming. Here, we propose an automatic method that includes different force fields in a single advanced sampling simulation. Conformational sampling using three all-atom force fields is enhanced by simulated tempering and, by formulating the weight parameters of the simulated tempering method in terms of the energy fluctuations, the system is able to perform a random walk in both temperature and force field spaces. The method is first demonstrated on a 1D system and then validated by the folding of the 10-residue chignolin peptide in explicit water.
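
    A minimal sketch of the acceptance rule for jumping between (temperature, force field) states at a fixed configuration in a generalized simulated-tempering scheme; the weight parameters g are what the authors derive from energy fluctuations, but their expressions are not reproduced here and the numbers below are arbitrary.

        # Generalized simulated-tempering acceptance for a jump from (beta_i, force field i)
        # to (beta_j, force field j) at fixed configuration x; g_i, g_j are the weight parameters.
        import math
        import random

        def accept_jump(beta_i, E_i, g_i, beta_j, E_j, g_j, rng=random.Random(0)):
            """E_i, E_j: potential energies of the same configuration under the two force fields."""
            log_ratio = (beta_i * E_i - beta_j * E_j) + (g_j - g_i)
            return math.log(rng.random()) < min(0.0, log_ratio)

        print(accept_jump(beta_i=1.0 / 2.5, E_i=-120.0, g_i=0.0,
                          beta_j=1.0 / 2.7, E_j=-118.0, g_j=1.5))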

  19. SDG multiple fault diagnosis by real-time inverse inference

    International Nuclear Information System (INIS)

    Zhang Zhaoqian; Wu Chongguang; Zhang Beike; Xia Tao; Li Anfeng

    2005-01-01

    In the past 20 years, the signed directed graph (SDG), one of the qualitative simulation technologies, has been widely applied in the field of chemical fault diagnosis. However, many former researchers assumed a single fault origin. This assumption leads to combinatorial explosion and has limited the realistic application of SDG to real processes, mainly because most former researchers used the forward inference engine of commercial expert system software to carry out the inverse diagnosis inference on the SDG model, which violates the internal principle of the diagnosis mechanism. In this paper, we present a new SDG multiple-fault diagnosis method based on real-time inverse inference. This is a genuinely multiple-fault diagnosis method whose inference engine uses an inverse mechanism. Finally, we give an example of a 65 t/h furnace diagnosis system to demonstrate its applicability and efficiency.

  20. SDG multiple fault diagnosis by real-time inverse inference

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Zhaoqian; Wu Chongguang; Zhang Beike; Xia Tao; Li Anfeng

    2005-02-01

    In the past 20 years, the signed directed graph (SDG), one of the qualitative simulation technologies, has been widely applied in the field of chemical fault diagnosis. However, many former researchers assumed a single fault origin. This assumption leads to combinatorial explosion and has limited the realistic application of SDG to real processes, mainly because most former researchers used the forward inference engine of commercial expert system software to carry out the inverse diagnosis inference on the SDG model, which violates the internal principle of the diagnosis mechanism. In this paper, we present a new SDG multiple-fault diagnosis method based on real-time inverse inference. This is a genuinely multiple-fault diagnosis method whose inference engine uses an inverse mechanism. Finally, we give an example of a 65 t/h furnace diagnosis system to demonstrate its applicability and efficiency.

  1. The significance of sampling time in therapeutic drug monitoring of clozapine

    DEFF Research Database (Denmark)

    Jakobsen, M I; Larsen, J R; Svensson, C K

    2017-01-01

    OBJECTIVE: Therapeutic drug monitoring (TDM) of clozapine is standardized to 12-h postdose samplings. In clinical settings, sampling time often deviates from this time point, although the importance of the deviation is unknown. To this end, serum concentrations (s-) of clozapine and its metabolite N-desmethyl-clozapine (norclozapine) were measured at 12 ± 1 and 2 h postdose. METHOD: Forty-six patients with a diagnosis of schizophrenia, and on stable clozapine treatment, were enrolled for hourly, venous blood sampling at 10-14 h postdose. RESULTS: Minor changes in median percentage values were...

  2. Vipie: web pipeline for parallel characterization of viral populations from multiple NGS samples.

    Science.gov (United States)

    Lin, Jake; Kramna, Lenka; Autio, Reija; Hyöty, Heikki; Nykter, Matti; Cinek, Ondrej

    2017-05-15

    Next generation sequencing (NGS) technology allows laboratories to investigate virome composition in clinical and environmental samples in a culture-independent way. There is a need for bioinformatic tools capable of parallel processing of virome sequencing data by exactly identical methods: this is especially important in studies of multifactorial diseases, or in parallel comparison of laboratory protocols. We have developed a web-based application allowing direct upload of sequences from multiple virome samples using custom parameters. The samples are then processed in parallel using an identical protocol, and can be easily reanalyzed. The pipeline performs de-novo assembly, taxonomic classification of viruses as well as sample analyses based on user-defined grouping categories. Tables of virus abundance are produced from cross-validation by remapping the sequencing reads to a union of all observed reference viruses. In addition, read sets and reports are created after processing unmapped reads against known human and bacterial ribosome references. Secured interactive results are dynamically plotted with population and diversity charts, clustered heatmaps and a sortable and searchable abundance table. The Vipie web application is a unique tool for multi-sample metagenomic analysis of viral data, producing searchable hits tables, interactive population maps, alpha diversity measures and clustered heatmaps that are grouped in applicable custom sample categories. Known references such as human genome and bacterial ribosomal genes are optionally removed from unmapped ('dark matter') reads. Secured results are accessible and shareable on modern browsers. Vipie is a freely available web-based tool whose code is open source.

  3. Optimal Subset Selection of Time-Series MODIS Images and Sample Data Transfer with Random Forests for Supervised Classification Modelling.

    Science.gov (United States)

    Zhou, Fuqun; Zhang, Aining

    2016-10-25

    Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA, and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to efficiently use these time-series MODIS datasets for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2-3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle the two important issues with a case study of land cover mapping using CCRS 10-day MODIS composites with the help of Random Forests' features: variable importance, outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about a half of the variables we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data.
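
    A minimal sketch of the variable-importance part using scikit-learn: rank the time-series variables by Random Forests importance and keep a subset. The synthetic data, the classifier settings and the "top half" cut-off are assumptions for illustration, not the paper's protocol.

        # Rank time-series variables by Random Forests importance and retain a subset.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        n_samples, n_vars = 1000, 36                 # e.g. 36 ten-day composites over a season
        X = rng.random((n_samples, n_vars))
        y = (X[:, 5] + X[:, 20] + 0.1 * rng.random(n_samples) > 1.0).astype(int)   # toy labels

        rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0).fit(X, y)
        order = np.argsort(rf.feature_importances_)[::-1]
        print("OOB accuracy (all variables):", round(rf.oob_score_, 3))
        print("most important variables:", order[: n_vars // 2][:10])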

  4. Robustness analysis of uncertain dynamical neural networks with multiple time delays.

    Science.gov (United States)

    Senan, Sibel

    2015-10-01

    This paper studies the problem of global robust asymptotic stability of the equilibrium point for the class of dynamical neural networks with multiple time delays with respect to the class of slope-bounded activation functions and in the presence of the uncertainties of system parameters of the considered neural network model. By using an appropriate Lyapunov functional and exploiting the properties of the homeomorphism mapping theorem, we derive a new sufficient condition for the existence, uniqueness and global robust asymptotic stability of the equilibrium point for the class of neural networks with multiple time delays. The obtained stability condition basically relies on testing some relationships imposed on the interconnection matrices of the neural system, which can be easily verified by using some certain properties of matrices. An instructive numerical example is also given to illustrate the applicability of our result and show the advantages of this new condition over the previously reported corresponding results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Real-time Multiple Abnormality Detection in Video Data

    DEFF Research Database (Denmark)

    Have, Simon Hartmann; Ren, Huamin; Moeslund, Thomas B.

    2013-01-01

    Automatic abnormality detection in video sequences has recently gained increasing attention within the research community. Although progress has been seen, there are still some limitations in current research. While most systems are designed to detect a specific abnormality, those capable of detecting more than two types of abnormalities rely on heavy computation. Therefore, we provide a framework for detecting abnormalities in video surveillance by using multiple features and cascade classifiers, yet achieving above real-time processing speed. Experimental results on two datasets show that the proposed framework can reliably detect abnormalities in the video sequence, outperforming the current state-of-the-art methods.

  6. Solid-state framing camera with multiple time frames

    Energy Technology Data Exchange (ETDEWEB)

    Baker, K. L.; Stewart, R. E.; Steele, P. T.; Vernon, S. P.; Hsing, W. W.; Remington, B. A. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States)

    2013-10-07

    A high speed solid-state framing camera has been developed which can operate over a wide range of photon energies. This camera measures the two-dimensional spatial profile of the flux incident on a cadmium selenide semiconductor at multiple times. This multi-frame camera has been tested at 3.1 eV and 4.5 keV. The framing camera currently records two frames with a temporal separation between the frames of 5 ps but this separation can be varied between hundreds of femtoseconds up to nanoseconds and the number of frames can be increased by angularly multiplexing the probe beam onto the cadmium selenide semiconductor.

  7. New sampling electronics using CCD for DIOGENE: a high multiplicity, 4 π detector for relativistic heavy ions

    International Nuclear Information System (INIS)

    Babinet, R.P.

    1987-01-01

    DIOGENE is a small time projection chamber which has been developed to study central collisions of relativistic heavy ions. The maximum multiplicity (up to 40 charged particles) that can be accepted by this detector is limited by the present electronics. In view of the heavier-mass ions that should become readily available at the Saturne national facility (France), a new sampling electronics has been tested. The first part of this talk gives a brief description of the current detector, emphasizing the performance actually obtained with α-particle and neon beams; the motivation for and characteristics of a renewed electronic set-up should thus appear more clearly. The second part of the talk is devoted to the results of tests performed using charge-coupled devices. It concludes with the future perspectives opened by these developments

  8. Sampling methods for rumen microbial counts by Real-Time PCR techniques

    Directory of Open Access Journals (Sweden)

    S. Puppo

    2010-02-01

    Full Text Available Fresh rumen samples were withdrawn from 4 cannulated buffalo females fed a fibrous diet in order to quantify bacterial concentrations in the rumen by Real-Time PCR techniques. To obtain DNA of good quality from whole rumen fluid, eight different pre-filtration methods (M1-M8; cheese cloths, glass-fibre and nylon filters) in combination with various centrifugation speeds (1000, 5000 and 14,000 rpm) were tested. Genomic DNA extraction was performed either on fresh or frozen (-20°C) samples. The quantitative bacterial analysis followed a Real-Time PCR procedure for Butyrivibrio fibrisolvens reported in the literature. M5 proved to be the best sampling procedure, yielding genomic DNA of suitable quality. No differences were revealed between fresh and frozen samples.

  9. The effect of albedo neutrons on the neutron multiplication of small plutonium oxide samples in a PNCC chamber

    CERN Document Server

    Bourva, L C A; Weaver, D R

    2002-01-01

    This paper describes how to evaluate the effect of neutrons reflected from parts of a passive neutron coincidence chamber on the neutron leakage self-multiplication, M_L, of a fissile sample. It is shown that albedo neutrons contribute, in the case of small plutonium-bearing samples, to a significant part of M_L, and that their effect has to be taken into account in the relationship between the measured coincidence count rates and the ²⁴⁰Pu effective mass of the sample. A simple one-interaction model has been used to write the balance of neutron gains and losses in the material when exposed to the re-entrant neutron flux. The energy and intensity profiles of the re-entrant flux have been parameterised using Monte Carlo MCNP™ calculations. This technique has been implemented for the On Site Laboratory neutron/gamma counter within the existing MEPL 1.0 code for the determination of the neutron leakage self-multiplication. Benchmark tests of the resulting MEPL 2.0 code with MC...

  10. Making MUSIC: A multiple sampling ionization chamber

    International Nuclear Information System (INIS)

    Shumard, B.; Henderson, D.J.; Rehm, K.E.; Tang, X.D.

    2007-01-01

    A multiple sampling ionization chamber (MUSIC) was developed for use in conjunction with the Atlas scattering chamber (ATSCAT). This chamber was developed to study the (α, p) reaction in stable and radioactive beams. The gas filled ionization chamber is used as a target and detector for both particles in the outgoing channel (p + beam particles for elastic scattering or p + residual nucleus for (α, p) reactions). The MUSIC detector is followed by a Si array to provide a trigger for anode events. The anode events are gated by a gating grid so that only (α, p) reactions where the proton reaches the Si detector result in an anode event. The MUSIC detector is a segmented ionization chamber. The active length of the chamber is 11.95 in. and is divided into 16 equal anode segments (3.5 in. x 0.70 in. with 0.3 in. spacing between pads). The dead area of the chamber was reduced by the addition of a Delrin snout that extends 0.875 in. into the chamber from the front face, to which a mylar window is affixed. 0.5 in. above the anode is a Frisch grid that is held at ground potential. 0.5 in. above the Frisch grid is a gating grid. The gating grid functions as a drift electron barrier, effectively halting the gathering of signals. Setting two sets of alternating wires at differing potentials creates a lateral electric field which traps the drift electrons, stopping the collection of anode signals. The chamber also has a reinforced mylar exit window separating the Si array from the target gas. This allows protons from the (α, p) reaction to be detected. The detection of these protons opens the gating grid to allow the drift electrons released from the ionizing gas during the (α, p) reaction to reach the anode segment below the reaction

  11. Making MUSIC: A multiple sampling ionization chamber

    Science.gov (United States)

    Shumard, B.; Henderson, D. J.; Rehm, K. E.; Tang, X. D.

    2007-08-01

    A multiple sampling ionization chamber (MUSIC) was developed for use in conjunction with the Atlas scattering chamber (ATSCAT). This chamber was developed to study the (α, p) reaction in stable and radioactive beams. The gas filled ionization chamber is used as a target and detector for both particles in the outgoing channel (p + beam particles for elastic scattering or p + residual nucleus for (α, p) reactions). The MUSIC detector is followed by a Si array to provide a trigger for anode events. The anode events are gated by a gating grid so that only (α, p) reactions where the proton reaches the Si detector result in an anode event. The MUSIC detector is a segmented ionization chamber. The active length of the chamber is 11.95 in. and is divided into 16 equal anode segments (3.5 in. × 0.70 in. with 0.3 in. spacing between pads). The dead area of the chamber was reduced by the addition of a Delrin snout that extends 0.875 in. into the chamber from the front face, to which a mylar window is affixed. 0.5 in. above the anode is a Frisch grid that is held at ground potential. 0.5 in. above the Frisch grid is a gating grid. The gating grid functions as a drift electron barrier, effectively halting the gathering of signals. Setting two sets of alternating wires at differing potentials creates a lateral electric field which traps the drift electrons, stopping the collection of anode signals. The chamber also has a reinforced mylar exit window separating the Si array from the target gas. This allows protons from the (α, p) reaction to be detected. The detection of these protons opens the gating grid to allow the drift electrons released from the ionizing gas during the (α, p) reaction to reach the anode segment below the reaction.

  12. Making MUSIC: A multiple sampling ionization chamber

    Energy Technology Data Exchange (ETDEWEB)

    Shumard, B. [Argonne National Laboratory, Building 203 H-113, Argonne, IL 60439 (United States)]. E-mail: shumard@phy.anl.gov; Henderson, D.J. [Argonne National Laboratory, Building 203 H-113, Argonne, IL 60439 (United States); Rehm, K.E. [Argonne National Laboratory, Building 203 H-113, Argonne, IL 60439 (United States); Tang, X.D. [Argonne National Laboratory, Building 203 H-113, Argonne, IL 60439 (United States)

    2007-08-15

    A multiple sampling ionization chamber (MUSIC) was developed for use in conjunction with the Atlas scattering chamber (ATSCAT). This chamber was developed to study the ({alpha}, p) reaction in stable and radioactive beams. The gas filled ionization chamber is used as a target and detector for both particles in the outgoing channel (p + beam particles for elastic scattering or p + residual nucleus for ({alpha}, p) reactions). The MUSIC detector is followed by a Si array to provide a trigger for anode events. The anode events are gated by a gating grid so that only ({alpha}, p) reactions where the proton reaches the Si detector result in an anode event. The MUSIC detector is a segmented ionization chamber. The active length of the chamber is 11.95 in. and is divided into 16 equal anode segments (3.5 in. x 0.70 in. with 0.3 in. spacing between pads). The dead area of the chamber was reduced by the addition of a Delrin snout that extends 0.875 in. into the chamber from the front face, to which a mylar window is affixed. 0.5 in. above the anode is a Frisch grid that is held at ground potential. 0.5 in. above the Frisch grid is a gating grid. The gating grid functions as a drift electron barrier, effectively halting the gathering of signals. Setting two sets of alternating wires at differing potentials creates a lateral electric field which traps the drift electrons, stopping the collection of anode signals. The chamber also has a reinforced mylar exit window separating the Si array from the target gas. This allows protons from the ({alpha}, p) reaction to be detected. The detection of these protons opens the gating grid to allow the drift electrons released from the ionizing gas during the ({alpha}, p) reaction to reach the anode segment below the reaction.

  13. Generation of synthetic time histories compatible with multiple-damping design response spectra

    International Nuclear Information System (INIS)

    Lilhanand, K.; Tseng, W.S.

    1987-01-01

    Seismic design of nuclear power plants as currently practiced requires time history analyses be performed to generate floor response spectra for seismic qualification of piping, equipment, and components. Since design response spectra are normally prescribed in the form of smooth spectra, the generation of synthetic time histories whose response spectra closely match the ''target'' design spectra of multiple damping values, is often required for the seismic time history analysis purpose. Various methods of generation of synthetic time histories compatible with target response spectra have been proposed in the literature. Since the mathematical problem of determining a time history from a given set of response spectral values is not unique, an exact solution is not possible, and all the proposed methods resort to some forms of approximate solutions. In this paper, a new iteration scheme, is described which effectively removes the difficulties encountered by the existing methods. This new iteration scheme can not only improve the accuracy of spectrum matching for a single-damping target spectrum, but also automate the spectrum matching for multiple-damping target spectra. The applicability and limitations as well as the method adopted to improve the numerical stability of this new iteration scheme are presented. The effectiveness of this new iteration scheme is illustrated by two example applications
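
    As an illustration of the kind of spectrum-matching iteration discussed in the abstract, the following Python sketch applies the classical frequency-domain approach (scale the Fourier amplitudes by the target/computed spectral ratio and repeat); it is not the authors' improved iteration scheme, and the flat target spectrum, damping and seed motion are all hypothetical.

        import numpy as np

        def response_spectrum(acc, dt, freqs, zeta=0.05):
            # Pseudo-acceleration response spectrum: each single-degree-of-freedom
            # oscillator is solved in the frequency domain.
            n = len(acc)
            ag = np.fft.rfft(acc)
            w = 2.0 * np.pi * np.fft.rfftfreq(n, dt)
            sa = np.empty(len(freqs))
            for i, f in enumerate(freqs):
                wn = 2.0 * np.pi * f
                x = np.fft.irfft(-ag / (wn**2 - w**2 + 2j * zeta * wn * w), n)
                sa[i] = wn**2 * np.abs(x).max()
            return sa

        def match_spectrum(acc, dt, freqs, target_sa, zeta=0.05, n_iter=15):
            # Classical iteration: rescale the Fourier amplitude at each frequency by
            # the ratio of target to computed spectral ordinate, then repeat.
            for _ in range(n_iter):
                sa = response_spectrum(acc, dt, freqs, zeta)
                ratio = np.interp(np.fft.rfftfreq(len(acc), dt), freqs, target_sa / sa)
                acc = np.fft.irfft(np.fft.rfft(acc) * ratio, len(acc))
            return acc

        # Hypothetical flat 0.5 g target between 1 and 10 Hz, white-noise seed motion.
        dt, freqs = 0.01, np.linspace(1.0, 10.0, 20)
        seed = np.random.default_rng(0).standard_normal(2048)
        matched = match_spectrum(seed, dt, freqs, np.full(20, 0.5 * 9.81))
        print(response_spectrum(matched, dt, freqs) / 9.81)   # should approach ~0.5 g everywhere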

  14. A high-pressure thermal gradient block for investigating microbial activity in multiple deep-sea samples

    DEFF Research Database (Denmark)

    Kallmeyer, J.; Ferdelman, TG; Jansen, KH

    2003-01-01

    Details about the construction and use of a high-pressure thermal gradient block for the simultaneous incubation of multiple samples are presented. Most parts used are moderately priced off-the-shelf components that are easily obtainable. In order to keep the pressure independent of thermal expansion.... Sulfate reduction rates increase with increasing pressure and show maximum values at pressures higher than in situ. (C) 2003 Elsevier Science B.V. All rights reserved....

  15. Occurrence of multiple mental health or substance use outcomes among bisexuals: a respondent-driven sampling study

    Directory of Open Access Journals (Sweden)

    Greta R. Bauer

    2016-06-01

    Full Text Available Abstract Background Bisexual populations have higher prevalence of depression, anxiety, suicidality and substance use than heterosexuals, and often than gay men or lesbians. The co-occurrence of multiple outcomes has rarely been studied. Methods Data were collected from 405 bisexuals using respondent-driven sampling. Weighted analyses were conducted for 387 with outcome data. Multiple outcomes were defined as 3 or more of: depression, anxiety, suicide ideation, problematic alcohol use, or polysubstance use. Results Among bisexuals, 19.0 % had multiple outcomes. We did not find variation in raw frequency of multiple outcomes across sociodemographic variables (e.g. gender, age). After adjustment, gender and sexual orientation identity were associated, with transgender women and those identifying as bisexual only more likely to have multiple outcomes. Social equity factors had a strong impact in both crude and adjusted analysis: controlling for other factors, high mental health/substance use burden was associated with greater discrimination (prevalence risk ratio (PRR) = 5.71; 95 % CI: 2.08, 15.63) and lower education (PRR = 2.41; 95 % CI: 1.06, 5.49), while higher income-to-needs ratio was protective (PRR = 0.44; 0.20, 1.00). Conclusions Mental health and substance use outcomes with high prevalence among bisexuals frequently co-occurred. We find some support for the theory that these multiple outcomes represent a syndemic, defined as co-occurring and mutually reinforcing adverse outcomes driven by social inequity.

  16. A feedback control model for network flow with multiple pure time delays

    Science.gov (United States)

    Press, J.

    1972-01-01

    A control model describing a network flow hindered by multiple pure time (or transport) delays is formulated. Feedbacks connect each desired output with a single control sector situated at the origin. The dynamic formulation invokes the use of differential difference equations. This causes the characteristic equation of the model to consist of transcendental functions instead of a common algebraic polynomial. A general graphical criterion is developed to evaluate the stability of such a problem. A digital computer simulation confirms the validity of such criterion. An optimal decision making process with multiple delays is presented.

  17. Random sampling of evolution time space and Fourier transform processing

    International Nuclear Information System (INIS)

    Kazimierczuk, Krzysztof; Zawadzka, Anna; Kozminski, Wiktor; Zhukov, Igor

    2006-01-01

    Application of the Fourier transform for processing 3D NMR spectra with random sampling of the evolution time space is presented. The 2D FT is calculated for pairs of frequencies, instead of the conventional sequence of one-dimensional transforms. Signal-to-noise ratios and linewidths for different random distributions were investigated by simulations and experiments. The experimental examples include 3D HNCA, HNCACB and 15N-edited NOESY-HSQC spectra of a 13C,15N-labeled ubiquitin sample. The obtained results reveal the general applicability of the proposed method and a significant improvement in resolution in comparison with conventional spectra recorded in the same time.
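
    A minimal numerical sketch of the idea (direct evaluation of the 2D Fourier transform at explicit frequency pairs from randomly sampled evolution times) is given below; the test signal, sampling schedule and frequency axes are hypothetical, and the code only illustrates the processing step rather than reproducing the published experiments.

        import numpy as np

        def nudft2(signal, t1, t2, f1, f2):
            # Direct 2D Fourier transform of an irregularly sampled FID: the spectrum
            # is evaluated for explicit pairs of frequencies (f1, f2) instead of via
            # two sequential 1D FFTs on a regular grid.
            spec = np.zeros((len(f1), len(f2)), dtype=complex)
            for i, w1 in enumerate(f1):
                for j, w2 in enumerate(f2):
                    spec[i, j] = np.sum(signal * np.exp(-2j * np.pi * (w1 * t1 + w2 * t2)))
            return spec

        # Hypothetical demo: one cross peak at (+30 Hz, -20 Hz), 500 random (t1, t2) samples.
        rng = np.random.default_rng(0)
        t1 = rng.uniform(0, 0.05, 500)
        t2 = rng.uniform(0, 0.05, 500)
        fid = np.exp(2j * np.pi * (30.0 * t1 - 20.0 * t2))
        f_axis = np.linspace(-50, 50, 101)
        spectrum = nudft2(fid, t1, t2, f_axis, f_axis)
        print(np.unravel_index(np.abs(spectrum).argmax(), spectrum.shape))  # peak near (+30, -20) Hz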

  18. Real-time Java simulations of multiple interference dielectric filters

    Science.gov (United States)

    Kireev, Alexandre N.; Martin, Olivier J. F.

    2008-12-01

    An interactive Java applet for real-time simulation and visualization of the transmittance properties of multiple interference dielectric filters is presented. The most commonly used interference filters as well as the state-of-the-art ones are embedded in this platform-independent applet which can serve research and education purposes. The Transmittance applet can be freely downloaded from the site http://cpc.cs.qub.ac.uk. Program summary: Program title: Transmittance Catalogue identifier: AEBQ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 5778 No. of bytes in distributed program, including test data, etc.: 90 474 Distribution format: tar.gz Programming language: Java Computer: Developed on PC-Pentium platform Operating system: Any Java-enabled OS. Applet was tested on Windows ME, XP, Sun Solaris, Mac OS RAM: Variable Classification: 18 Nature of problem: Sophisticated wavelength selective multiple interference filters can include some tens or even hundreds of dielectric layers. The spectral response of such a stack is not obvious. On the other hand, there is a strong demand from application designers and students to get a quick insight into the properties of a given filter. Solution method: A Java applet was developed for the computation and the visualization of the transmittance of multilayer interference filters. It is simple to use and the embedded filter library can serve educational purposes. Also, its ability to handle complex structures will be appreciated as a useful research and development tool. Running time: Real-time simulations
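
    The underlying physics of such an applet is the standard characteristic-matrix (transfer-matrix) calculation of stack transmittance. A hedged Python sketch for normal incidence and non-absorbing layers follows; the quarter-wave stack parameters are an invented example, and the code is not derived from the Transmittance applet's source.

        import numpy as np

        def transmittance(wavelengths, n_layers, d_layers, n_in=1.0, n_out=1.52):
            # Normal-incidence transmittance of a dielectric stack via the
            # characteristic (transfer) matrix method, assuming lossless layers.
            T = np.empty_like(wavelengths, dtype=float)
            for i, lam in enumerate(wavelengths):
                M = np.eye(2, dtype=complex)
                for n, d in zip(n_layers, d_layers):
                    delta = 2.0 * np.pi * n * d / lam          # phase thickness of the layer
                    M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                                      [1j * n * np.sin(delta), np.cos(delta)]])
                # amplitude transmission coefficient of the whole stack
                t = 2.0 * n_in / (n_in * M[0, 0] + n_in * n_out * M[0, 1]
                                  + M[1, 0] + n_out * M[1, 1])
                T[i] = (n_out / n_in) * abs(t) ** 2
            return T

        # Hypothetical quarter-wave high/low stack centered at 550 nm on glass.
        lam0 = 550e-9
        nH, nL = 2.35, 1.46
        stack_n = [nH, nL] * 8
        stack_d = [lam0 / (4 * n) for n in stack_n]
        lams = np.linspace(400e-9, 800e-9, 401)
        print(transmittance(lams, stack_n, stack_d).min())   # deep stop band around 550 nm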

  19. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions. II. A simplified implementation.

    Science.gov (United States)

    Tao, Guohua; Miller, William H

    2012-09-28

    An efficient time-dependent (TD) Monte Carlo (MC) importance sampling method has recently been developed [G. Tao and W. H. Miller, J. Chem. Phys. 135, 024104 (2011)] for the evaluation of time correlation functions using the semiclassical (SC) initial value representation (IVR) methodology. In this TD-SC-IVR method, the MC sampling uses information from both time-evolved phase points as well as their initial values, and only the "important" trajectories are sampled frequently. Even though the TD-SC-IVR was shown in some benchmark examples to be much more efficient than the traditional time-independent sampling method (which uses only initial conditions), the calculation of the SC prefactor-which is computationally expensive, especially for large systems-is still required for accepted trajectories. In the present work, we present an approximate implementation of the TD-SC-IVR method that is completely prefactor-free; it gives the time correlation function as a classical-like magnitude function multiplied by a phase function. Application of this approach to flux-flux correlation functions (which yield reaction rate constants) for the benchmark H + H(2) system shows very good agreement with exact quantum results. Limitations of the approximate approach are also discussed.

  20. Multiple time scales in modeling the incidence of infections acquired in intensive care units

    Directory of Open Access Journals (Sweden)

    Martin Wolkewitz

    2016-09-01

    Full Text Available Abstract Background When patients are admitted to an intensive care unit (ICU), their risk of acquiring an infection depends strongly on the length of stay at risk in the ICU. In addition, risk of infection is likely to vary over calendar time as a result of fluctuations in the prevalence of the pathogen on the ward. Hence risk of infection is expected to depend on two time scales (time in ICU and calendar time) as well as competing events (discharge or death) and their spatial location. The purpose of this paper is to develop and apply appropriate statistical models for the risk of ICU-acquired infection accounting for multiple time scales, competing risks and the spatial clustering of the data. Methods A multi-center data base from a Spanish surveillance network was used to study the occurrence of an infection due to Methicillin-resistant Staphylococcus aureus (MRSA). The analysis included 84,843 patient admissions between January 2006 and December 2011 from 81 ICUs. Stratified Cox models were used to study multiple time scales while accounting for spatial clustering of the data (patients within ICUs) and for death or discharge as competing events for MRSA infection. Results Both time scales, time in ICU and calendar time, are highly associated with the MRSA hazard rate and cumulative risk. When using only one basic time scale, the interpretation and magnitude of several patient-individual risk factors differed. Risk factors concerning the severity of illness were more pronounced when using only calendar time. These differences disappeared when using both time scales simultaneously. Conclusions The time-dependent dynamics of infections is complex and should be studied with models allowing for multiple time scales. For patient individual risk-factors we recommend stratified Cox regression models for competing events with ICU time as the basic time scale and calendar time as a covariate. The inclusion of calendar time and stratification by ICU
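
    A minimal sketch of the recommended model (stratification by ICU, time in ICU as the basic time scale, calendar time as a covariate, and death/discharge treated as censoring so that the cause-specific MRSA hazard is estimated) might look as follows, assuming the Python lifelines package; the data frame here is entirely synthetic and the covariates are illustrative only.

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        # Synthetic ICU admissions: one row per admission.
        rng = np.random.default_rng(8)
        n = 300
        df = pd.DataFrame({
            "icu_days": rng.exponential(8, n).round() + 1,   # time in ICU (basic time scale)
            "mrsa_infection": rng.binomial(1, 0.15, n),      # 1 = MRSA, 0 = censored (death/discharge)
            "calendar_month": rng.integers(1, 13, n),        # calendar time entered as a covariate
            "severity": rng.integers(1, 6, n),
            "icu_id": rng.integers(1, 6, n),                 # stratification variable
        })

        cph = CoxPHFitter()
        cph.fit(df, duration_col="icu_days", event_col="mrsa_infection",
                strata=["icu_id"])                           # cause-specific hazard for MRSA
        cph.print_summary()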

  1. Neural-adaptive control of single-master-multiple-slaves teleoperation for coordinated multiple mobile manipulators with time-varying communication delays and input uncertainties.

    Science.gov (United States)

    Li, Zhijun; Su, Chun-Yi

    2013-09-01

    In this paper, adaptive neural network control is investigated for single-master-multiple-slaves teleoperation in consideration of time delays and input dead-zone uncertainties for multiple mobile manipulators carrying a common object in a cooperative manner. Firstly, concise dynamics of teleoperation systems consisting of a single master robot, multiple coordinated slave robots, and the object are developed in the task space. To handle asymmetric time-varying delays in communication channels and unknown asymmetric input dead zones, the nonlinear dynamics of the teleoperation system are transformed into two subsystems through feedback linearization: local master or slave dynamics including the unknown input dead zones and delayed dynamics for the purpose of synchronization. Then, a model reference neural network control strategy based on linear matrix inequalities (LMI) and adaptive techniques is proposed. The developed control approach ensures that the defined tracking errors converge to zero whereas the coordination internal force errors remain bounded and can be made arbitrarily small. Throughout this paper, stability analysis is performed via explicit Lyapunov techniques under specific LMI conditions. The proposed adaptive neural network control scheme is robust against motion disturbances, parametric uncertainties, time-varying delays, and input dead zones, which is validated by simulation studies.

  2. Multiple time-scale methods in particle simulations of plasmas

    International Nuclear Information System (INIS)

    Cohen, B.I.

    1985-01-01

    This paper surveys recent advances in the application of multiple time-scale methods to particle simulation of collective phenomena in plasmas. These methods dramatically improve the efficiency of simulating low-frequency kinetic behavior by allowing the use of a large timestep, while retaining accuracy. The numerical schemes surveyed provide selective damping of unwanted high-frequency waves and preserve numerical stability in a variety of physics models: electrostatic, magneto-inductive, Darwin and fully electromagnetic. The paper reviews hybrid simulation models, the implicit moment-equation method, the direct implicit method, orbit averaging, and subcycling.

  3. A personal tourism navigation system to support traveling multiple destinations with time restrictions

    OpenAIRE

    Maruyama, Atsushi; Shibata, Naoki; Murata, Yoshihiro; Yasumoto, Keiichi; Ito, Minoru

    2004-01-01

    We propose a personal navigation system (called PNS) which navigates a tourist through multiple destinations efficiently. In our PNS, a tourist can specify multiple destinations with desired arrival/stay time and preference degree. The system calculates the route including part of the destinations satisfying tourist's requirements and navigates him/her. For the above route search problem, we have developed an efficient route search algorithm using a genetic algorithm. We have designed and imp...

  4. On an efficient multiple time step Monte Carlo simulation of the SABR model

    NARCIS (Netherlands)

    Leitao Rodriguez, A.; Grzelak, L.A.; Oosterlee, C.W.

    2017-01-01

    In this paper, we will present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl. Math.
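
    For orientation, a plain Euler Monte Carlo discretisation of the SABR dynamics (not the authors' one-step or multiple-time-step scheme) can be sketched in a few lines of Python; all model parameters below are hypothetical.

        import numpy as np

        def sabr_paths(f0, alpha0, beta, rho, nu, T, n_steps, n_paths, seed=0):
            # Plain Euler simulation of dF = a F^beta dW1, da = nu a dW2,
            # corr(dW1, dW2) = rho, with absorption of the forward at zero.
            rng = np.random.default_rng(seed)
            dt = T / n_steps
            f = np.full(n_paths, f0)
            a = np.full(n_paths, alpha0)
            for _ in range(n_steps):
                z1 = rng.standard_normal(n_paths)
                z2 = rho * z1 + np.sqrt(1 - rho ** 2) * rng.standard_normal(n_paths)
                f = np.maximum(f + a * np.maximum(f, 0.0) ** beta * np.sqrt(dt) * z1, 0.0)
                a = a * np.exp(nu * np.sqrt(dt) * z2 - 0.5 * nu ** 2 * dt)   # exact lognormal step
            return f

        # Hypothetical parameters: undiscounted at-the-money forward call estimate.
        strike = 0.05
        terminal = sabr_paths(0.05, 0.2, 0.5, -0.3, 0.4, 1.0, 200, 100_000)
        print(np.maximum(terminal - strike, 0.0).mean())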

  5. Reproducibility of R-fMRI metrics on the impact of different strategies for multiple comparison correction and sample sizes.

    Science.gov (United States)

    Chen, Xiao; Lu, Bin; Yan, Chao-Gan

    2018-01-01

    Concerns regarding reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that the permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found reliability, sensitivity and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%), but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc.

  6. First passage times for multiple particles with reversible target-binding kinetics

    Science.gov (United States)

    Grebenkov, Denis S.

    2017-10-01

    We investigate the first passage problem for multiple particles that diffuse towards a target, partially adsorb there, and then desorb after a finite exponentially distributed residence time. We search for the first time when m particles undergoing such reversible target-binding kinetics are found simultaneously on the target that may trigger an irreversible chemical reaction or a biophysical event. Even if the particles are independent, the finite residence time on the target yields an intricate temporal coupling between particles. We compute analytically the mean first passage time (MFPT) for two independent particles by mapping the original problem to higher-dimensional surface-mediated diffusion and solving the coupled partial differential equations. The respective effects of the adsorption and desorption rates on the MFPT are revealed and discussed.

  7. Comparison of correlation analysis techniques for irregularly sampled time series

    Directory of Open Access Journals (Sweden)

    K. Rehfeld

    2011-06-01

    Full Text Available Geoscientific measurements often provide time series with irregular time sampling, requiring either data reconstruction (interpolation) or sophisticated methods to handle irregular sampling. We compare the linear interpolation technique and different approaches for analyzing the correlation functions and persistence of irregularly sampled time series, such as Lomb-Scargle Fourier transformation and kernel-based methods. In a thorough benchmark test we investigate the performance of these techniques.

    All methods have comparable root mean square errors (RMSEs) for low skewness of the inter-observation time distribution. For high skewness (very irregular data), interpolation bias and RMSE increase strongly. We find a 40 % lower RMSE for the lag-1 autocorrelation function (ACF) for the Gaussian kernel method vs. the linear interpolation scheme, in the analysis of highly irregular time series. For the cross correlation function (CCF) the RMSE is then lower by 60 %. The application of the Lomb-Scargle technique gave results comparable to the kernel methods for the univariate, but poorer results in the bivariate case. Especially the high-frequency components of the signal, where classical methods show a strong bias in ACF and CCF magnitude, are preserved when using the kernel methods.

    We illustrate the performances of interpolation vs. Gaussian kernel method by applying both to paleo-data from four locations, reflecting late Holocene Asian monsoon variability as derived from speleothem δ18O measurements. Cross correlation results are similar for both methods, which we attribute to the long time scales of the common variability. The persistence time (memory is strongly overestimated when using the standard, interpolation-based, approach. Hence, the Gaussian kernel is a reliable and more robust estimator with significant advantages compared to other techniques and suitable for large scale application to paleo-data.
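
    A simplified Python sketch of a Gaussian-kernel cross-correlation estimator for two irregularly sampled series is shown below; the bandwidth, lag grid and synthetic data are arbitrary choices for illustration and do not reproduce the paper's benchmark.

        import numpy as np

        def kernel_cross_correlation(tx, x, ty, y, lags, h):
            # Gaussian-kernel estimator: products of standardized observations are
            # weighted by how close their time difference (ty - tx) is to each lag.
            xs = (x - x.mean()) / x.std()
            ys = (y - y.mean()) / y.std()
            dt = ty[None, :] - tx[:, None]            # all pairwise time differences
            prod = xs[:, None] * ys[None, :]
            ccf = []
            for lag in lags:
                w = np.exp(-0.5 * ((dt - lag) / h) ** 2)
                ccf.append((w * prod).sum() / w.sum())
            return np.array(ccf)

        # Hypothetical demo: y lags x by about 2 time units.
        rng = np.random.default_rng(1)
        tx = np.sort(rng.uniform(0, 200, 300))
        ty = np.sort(rng.uniform(0, 200, 300))
        x = np.sin(0.3 * tx) + 0.2 * rng.standard_normal(300)
        y = np.sin(0.3 * (ty - 2.0)) + 0.2 * rng.standard_normal(300)
        lags = np.arange(-10, 11)
        print(lags[np.argmax(kernel_cross_correlation(tx, x, ty, y, lags, h=1.0))])  # lag near +2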

  8. Time-correlated single-photon counting study of multiple photoluminescence lifetime components of silicon nanoclusters

    Energy Technology Data Exchange (ETDEWEB)

    Diamare, D., E-mail: d.diamare@ee.ucl.ac.uk [Department of Electronic and Electrical Engineering, University College London, Torrington Place, London, WC1E 7JE (United Kingdom); Wojdak, M. [Department of Electronic and Electrical Engineering, University College London, Torrington Place, London, WC1E 7JE (United Kingdom); Lettieri, S. [Institute for Superconductors and Innovative Materials, National Council of Research (CNR-SPIN), Via Cintia 80126, Naples (Italy); Department of Physical Sciences, University of Naples “Federico II”, Via Cintia 80126, Naples (Italy); Kenyon, A.J. [Department of Electronic and Electrical Engineering, University College London, Torrington Place, London, WC1E 7JE (United Kingdom)

    2013-04-15

    We report time-resolved photoluminescence measurements of thin films of silica containing silicon nanoclusters (Si NCs), produced by PECVD and annealed at temperatures between 700 °C and 1150 °C. While the near infrared emission of Si NCs has long been studied, visible light emission has only recently attracted interest due to its very short decay times and its recently-reported redshift with decreasing NC size. We analyse the PL decay dynamics in the range 450–700 nm with picosecond time resolution using Time Correlated Single Photon Counting. In the resultant multi-exponential decays two dominant components can clearly be distinguished: a very short component, in the range of hundreds of picoseconds, and a nanosecond component. In this wavelength range we do not detect the microsecond component generally associated with excitonic recombination. We associate the nanosecond component with defect relaxation: it decreases in intensity in the sample annealed at higher temperature, suggesting that the contribution from defects decreases with increasing temperature. The origin of the very fast PL component (ps time region) is also discussed. We show that it is consistent with the Auger recombination times of multiple excitons. Further work needs to be done in order to assess the contribution of the Auger-controlled recombinations to the defect-assisted mechanism of photoluminescence. -- Highlights: ► We report time-resolved PL measurements of Si-Ncs embedded in SiO{sub 2} matrix. ► A net decrease of PL with increasing annealing temperature has been observed. ► Lifetime distribution analysis revealed a multiexponential decay with ns and ps components. ► The ps components are consistent with the lifetime range of the Auger recombination times. ► No evidence for a fast direct transition at the Brillouin zone centre.
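
    Decay curves of this kind are commonly summarised by multi-exponential fits. The sketch below fits a bi-exponential model plus background to a synthetic TCSPC histogram with scipy; the lifetimes and amplitudes are invented, and the code is not the analysis pipeline used in the paper.

        import numpy as np
        from scipy.optimize import curve_fit

        def two_exp(t, a1, tau1, a2, tau2, bkg):
            # Bi-exponential decay: a fast (sub-ns) and a slower (ns) component
            # plus a constant background, as a function of time in ns.
            return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2) + bkg

        # Hypothetical synthetic decay: 0.3 ns and 4 ns components plus background.
        t = np.linspace(0, 20, 2000)                          # ns
        counts = np.random.default_rng(2).poisson(two_exp(t, 800, 0.3, 200, 4.0, 5))
        p0 = (500, 0.5, 100, 3.0, 1)                          # rough starting guesses
        popt, _ = curve_fit(two_exp, t, counts, p0=p0)
        print(popt[1], popt[3])                               # recovered lifetimes (ns)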

  9. Electromagnetic velocity gauge: use of multiple gauges, time response, and flow perturbations

    International Nuclear Information System (INIS)

    Erickson, L.M.; Johnson, C.B.; Parker, N.L.; Vantine, H.C.; Weingart, R.C.; Lee, R.S.

    1981-01-01

    We have developed an in-situ electromagnetic velocity (EMV) gauge system for use in multiple-gauge studies of initiating and detonating explosives. We have also investigated the risetime of the gauge and the manner in which it perturbs a reactive flow. We report on the special precautions that are necessary in multiple gauge experiments to reduce lead spreading, simplify target fabrication problems and minimize cross talk through the conducting explosive. Agreement between measured stress records and calculations from multiple velocity gauge data give us confidence that our velocity gauges are recording properly. We have used laser velocity interferometry to measure the gauge risetime in polymethyl methacrylate (PMMA). To resolve the difference in the two methods, we have examined hydrodynamic and material rate effects. In addition, we considered the effects of shock tilt, electronic response and magntic diffusion on the gauge's response time

  10. Experimental evaluation of the extended Dytlewski-style dead time correction formalism for neutron multiplicity counting

    Science.gov (United States)

    Lockhart, M.; Henzlova, D.; Croft, S.; Cutler, T.; Favalli, A.; McGahee, Ch.; Parker, R.

    2018-01-01

    Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in reference Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations which have also only recently been formulated. The current paper discusses and presents the experimental evaluation of the practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. In order to assess the DCF dead time correction, the corrected data is compared to traditional dead time correction treatment within INCC. The DCF dead time correction is found to provide adequate dead time treatment for a broad range of count rates available in practical applications.
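
    For context only, the textbook non-paralyzable correction for a singles rate is sketched below; the DCF formalism referred to in the abstract extends dead-time treatment to the higher-order (doubles through pents) rates and is not reproduced here.

        def nonparalyzable_true_rate(measured_rate, dead_time):
            # Classic non-paralyzable dead-time correction for a singles count rate
            # (counts per second); dead_time is in seconds.
            return measured_rate / (1.0 - measured_rate * dead_time)

        # Hypothetical example: 120 kcps observed with a 100 ns dead time.
        print(nonparalyzable_true_rate(1.2e5, 100e-9))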

  11. Multiple imputation for multivariate data with missing and below-threshold measurements: time-series concentrations of pollutants in the Arctic.

    Science.gov (United States)

    Hopke, P K; Liu, C; Rubin, D B

    2001-03-01

    Many chemical and environmental data sets are complicated by the existence of fully missing values or censored values known to lie below detection thresholds. For example, week-long samples of airborne particulate matter were obtained at Alert, NWT, Canada, between 1980 and 1991, where some of the concentrations of 24 particulate constituents were coarsened in the sense of being either fully missing or below detection limits. To facilitate scientific analysis, it is appealing to create complete data by filling in missing values so that standard complete-data methods can be applied. We briefly review commonly used strategies for handling missing values and focus on the multiple-imputation approach, which generally leads to valid inferences when faced with missing data. Three statistical models are developed for multiply imputing the missing values of airborne particulate matter. We expect that these models are useful for creating multiple imputations in a variety of incomplete multivariate time series data sets.
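
    A heavily simplified Python sketch of the multiple-imputation idea for a single left-censored, log-normally distributed analyte is given below; the paper's actual models are multivariate time-series models, and the detection limit and concentrations used here are hypothetical.

        import numpy as np
        from scipy import stats

        def impute_below_dl(values, detection_limit, n_imputations=5, seed=0):
            # Very simplified multiple imputation: the observed (uncensored) values
            # estimate a log-normal distribution, and each imputation replaces the
            # censored entries (NaN) with random draws truncated below the limit.
            rng = np.random.default_rng(seed)
            obs = np.log(values[~np.isnan(values)])
            mu, sigma = obs.mean(), obs.std(ddof=1)
            p_dl = stats.norm.cdf(np.log(detection_limit), mu, sigma)
            completed = []
            for _ in range(n_imputations):
                filled = values.copy()
                n_mis = np.isnan(filled).sum()
                u = rng.uniform(0, p_dl, n_mis)              # quantiles below the detection limit
                filled[np.isnan(filled)] = np.exp(stats.norm.ppf(u, mu, sigma))
                completed.append(filled)
            return completed

        conc = np.array([0.8, 1.2, np.nan, 0.5, np.nan, 2.1, 0.9])  # NaN = below the 0.3 limit
        for dataset in impute_below_dl(conc, 0.3):
            print(dataset.mean())        # analyze each completed dataset, then pool the results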

  12. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    Science.gov (United States)

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2018-01-01

    For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, were not studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys and conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
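
    Once the stratum variances of the IST estimator are available, the allocation step itself reduces to textbook Neyman allocation; the following sketch shows only that step, with invented stratum sizes and standard deviations rather than the paper's IST-specific variance expressions.

        import numpy as np

        def neyman_allocation(n_total, stratum_sizes, stratum_sds):
            # Neyman allocation: sample size per stratum proportional to N_h * S_h,
            # which minimizes the variance of the stratified estimator for fixed n.
            weights = np.asarray(stratum_sizes) * np.asarray(stratum_sds)
            return np.round(n_total * weights / weights.sum()).astype(int)

        # Hypothetical: three strata of students with differing response variability.
        print(neyman_allocation(500, [1200, 800, 2000], [4.0, 9.0, 6.5]))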

  13. Time's up. Descriptive epidemiology of multi-morbidity and time spent on health related activity by older Australians: a time use survey.

    Directory of Open Access Journals (Sweden)

    Tanisha Jowsey

    Full Text Available Most Western health systems remain single illness orientated despite the growing prevalence of multi-morbidity. Identifying how much time people with multiple chronic conditions spend managing their health will help policy makers and health service providers make decisions about areas of patient need for support. This article presents findings from an Australian study concerning the time spent on health related activity by older adults (aged 50 years and over), most of whom had multiple chronic conditions. A recall questionnaire was developed, piloted, and adjusted. Sampling was undertaken through three bodies; the Lung Foundation Australia (COPD sub-sample), the National Diabetes Services Scheme (Diabetes sub-sample) and National Seniors Australia (Seniors sub-sample). Questionnaires were mailed out during 2011 to 10,600 older adults living in Australia. 2540 survey responses were received and analysed. Descriptive analyses were completed to obtain median values for the hours spent on each activity per month. The mean number of chronic conditions was 3.7 in the COPD sub-sample, 3.4 in the Diabetes sub-sample and 2.0 in the NSA sub-sample. The study identified a clear trend of increased time use associated with increased number of chronic conditions. Median monthly time use was 5-16 hours per month overall for our three sub-samples. For respondents in the top decile with five or more chronic conditions the median time use was equivalent to two to three hours per day, and if exercise is included in the calculations, respondents spent from between five and eight hours per day: an amount similar to full-time work. Multi-morbidity imposes considerable time burdens on patients. Ageing is associated with increasing rates of multi-morbidity. Many older adults are facing high demands on their time to manage their health in the face of decreasing energy and mobility. Their time use must be considered in health service delivery and health system reform.

  14. Method for Hot Real-Time Sampling of Pyrolysis Vapors

    Energy Technology Data Exchange (ETDEWEB)

    Pomeroy, Marc D [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-29

    Biomass pyrolysis has been an increasingly active topic of research, in particular as a replacement for crude oil. This process utilizes moderate temperatures to thermally deconstruct the biomass into vapors, which are then condensed into a mixture of liquid oxygenates to be used as fuel precursors. Pyrolysis oils contain more than 400 compounds, up to 60 percent of which do not re-volatilize for subsequent chemical analysis. Vapor chemical composition is also complicated as additional condensation reactions occur during the condensation and collection of the product. Due to the complexity of the pyrolysis oil, and a desire to catalytically upgrade the vapor composition before condensation, online real-time analytical techniques such as Molecular Beam Mass Spectrometry (MBMS) are of great use. However, in order to properly sample hot pyrolysis vapors, many challenges must be overcome. Sampling must occur within a narrow range of temperatures to reduce product composition changes from overheating or partial condensation or plugging of lines from condensed products. Residence times must be kept at a minimum to reduce further reaction chemistries. Pyrolysis vapors also form aerosols that are carried far downstream and can pass through filters resulting in build-up in downstream locations. The co-produced bio-char and ash from the pyrolysis process can lead to plugging of the sample lines, and must be filtered out at temperature, even with the use of cyclonic separators. Practical considerations for sampling system design, as well as lessons learned, are integrated into the hot analytical sampling system of the National Renewable Energy Laboratory's (NREL) Thermochemical Process Development Unit (TCPDU) to provide industrially relevant demonstrations of thermochemical transformations of biomass feedstocks at the pilot scale.

  15. Effect of immersion time on in vitro multiplication of Bambusa vulgaris Schrader ex Wendland in RITA® TIS

    Directory of Open Access Journals (Sweden)

    Mallelyn González González

    2013-01-01

    Full Text Available The TIS provides solutions to the constraints affecting in vitro propagation of bamboos and increases the quality of in vitro propagated plants and their survival under greenhouse and field conditions. This study aimed to determine the effect of immersion time on the multiplication of B. vulgaris shoots grown in TIS (RITA). Morphological, physiological and biochemical variables such as the number of shoots per plant, length of shoots, number of leaves per shoot, water contents, and lignin phenols were analyzed. It was demonstrated that the immersion time influenced the in vitro multiplication of B. vulgaris. The explants treated with a one-minute immersion time developed the greatest number of shoots (5). These shoots showed dark green coloration, 92.1% water and 13% lignin. However, increasing the immersion time to three minutes increased the water content of the shoots and decreased their lignin content, which affected their morphological response and multiplication in the TIS (RITA). Analysis of the morphological, physiological and biochemical variables allowed one minute to be defined as the optimum immersion time for shoot multiplication of B. vulgaris in temporary immersion systems (RITA). The method of in vitro propagation of B. vulgaris described offers the advantage of using liquid culture media and automated systems. Key words: bamboo, in vitro multiplication, morphological variables, temporary immersion systems

  16. A spreadsheet template compatible with Microsoft Excel and iWork Numbers that returns the simultaneous confidence intervals for all pairwise differences between multiple sample means.

    Science.gov (United States)

    Brown, Angus M

    2010-04-01

    The objective of the method described in this paper is to develop a spreadsheet template for the purpose of comparing multiple sample means. An initial analysis of variance (ANOVA) test on the data returns F--the test statistic. If F is larger than the critical F value drawn from the F distribution at the appropriate degrees of freedom, convention dictates rejection of the null hypothesis and allows subsequent multiple comparison testing to determine where the inequalities between the sample means lie. A variety of multiple comparison methods are described that return the 95% confidence intervals for differences between means using an inclusive pairwise comparison of the sample means. 2009 Elsevier Ireland Ltd. All rights reserved.
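
    The same workflow (one-way ANOVA followed by simultaneous confidence intervals for all pairwise mean differences) can be reproduced outside a spreadsheet; the sketch below uses the Tukey HSD implementation in Python's statsmodels on invented data, and is not the spreadsheet template described in the paper.

        import numpy as np
        from statsmodels.stats.multicomp import pairwise_tukeyhsd

        # Hypothetical measurements from three treatment groups.
        rng = np.random.default_rng(3)
        values = np.concatenate([rng.normal(10, 2, 12),
                                 rng.normal(12, 2, 12),
                                 rng.normal(15, 2, 12)])
        groups = np.repeat(["A", "B", "C"], 12)

        # 95 % simultaneous confidence intervals for all pairwise mean differences.
        result = pairwise_tukeyhsd(values, groups, alpha=0.05)
        print(result.summary())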

  17. Multiple-Symbol Decision-Feedback Space-Time Differential Decoding in Fading Channels

    Directory of Open Access Journals (Sweden)

    Wang Xiaodong

    2002-01-01

    Full Text Available Space-time differential coding (STDC) is an effective technique for exploiting transmitter diversity while it does not require the channel state information at the receiver. However, like conventional differential modulation schemes, it exhibits an error floor in fading channels. In this paper, we develop an STDC decoding technique based on multiple-symbol detection and decision-feedback, which makes use of the second-order statistic of the fading processes and has a very low computational complexity. This decoding method can significantly lower the error floor of the conventional STDC decoding algorithm, especially in fast fading channels. The application of the proposed multiple-symbol decision-feedback STDC decoding technique in orthogonal frequency-division multiplexing (OFDM) systems is also discussed.

  18. Detecting Renibacterium salmoninarum in wild brown trout by use of multiple organ samples and diagnostic methods

    Science.gov (United States)

    Guomundsdottir, S.; Applegate, Lynn M.; Arnason, I.O.; Kristmundsson, A.; Purcell, Maureen K.; Elliott, Diane G.

    2017-01-01

    Renibacterium salmoninarum, the causative agent of salmonid bacterial kidney disease (BKD), is endemic in many wild trout species in northerly regions. The aim of the present study was to determine the optimal R. salmoninarum sampling/testing strategy for wild brown trout (Salmo trutta L.) populations in Iceland. Fish were netted in a lake and multiple organs—kidney, spleen, gills, oesophagus and mid-gut—were sampled and subjected to five detection tests i.e. culture, polyclonal enzyme-linked immunosorbent assay (pELISA) and three different PCR tests. The results showed that each fish had encountered R. salmoninarum but there were marked differences between results obtained depending on organ and test. The bacterium was not cultured from any kidney sample while all kidney samples were positive by pELISA. At least one organ from 92.9% of the fish tested positive by PCR. The results demonstrated that the choice of tissue and diagnostic method can dramatically influence the outcome of R. salmoninarum surveys.

  19. Sleep Management on Multiple Machines for Energy and Flow Time

    DEFF Research Database (Denmark)

    Chan, Sze-Hang; Lam, Tak-Wah; Lee, Lap Kei

    2011-01-01

    In large data centers, determining the right number of operating machines is often non-trivial, especially when the workload is unpredictable. Using too many machines would waste energy, while using too few would affect the performance. This paper extends the traditional study of online flow-time scheduling on multiple machines to take sleep management and energy into consideration. Specifically, we study online algorithms that can determine dynamically when and which subset of machines should wake up (or sleep), and how jobs are dispatched and scheduled. We consider schedules whose objective is to minimize the sum of flow time and energy, and obtain O(1)-competitive algorithms for two settings: one assumes machines running at a fixed speed, and the other allows dynamic speed scaling to further optimize energy usage. Like the previous work on the tradeoff between flow time and energy, the analysis...

  20. COMPARISON OF IMMUNOASSAY AND GAS CHROMATOGRAPHY/MASS SPECTROMETRY METHODS FOR MEASURING 3,5,6-TRICHLORO-2-PYRIDINOL IN MULTIPLE SAMPLE MEDIA

    Science.gov (United States)

    Two enzyme-linked immunosorbent assay (ELISA) methods were evaluated for the determination of 3,5,6-trichloro-2-pyridinol (3,5,6-TCP) in multiple sample media (dust, soil, food, and urine). The dust and soil samples were analyzed by a commercial RaPID immunoassay testing kit. ...

  1. Semi-blind identification of wideband MIMO channels via stochastic sampling

    OpenAIRE

    Andrieu, Christophe; Piechocki, Robert J.; McGeehan, Joe P.; Armour, Simon M.

    2003-01-01

    In this paper we address the problem of wide-band multiple-input multiple-output (MIMO) channel (multidimensional time invariant FIR filter) identification using Markov chains Monte Carlo methods. Towards this end we develop a novel stochastic sampling technique that produces a sequence of multidimensional channel samples. The method is semi-blind in the sense that it uses a very short training sequence. In such a framework the problem is no longer analytically tractable; hence we resort to s...

  2. Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models

    Science.gov (United States)

    Price, Larry R.

    2012-01-01

    The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…

  3. The real-time fitting of radioactive decay curves. Pt. 3. Counting during sampling

    International Nuclear Information System (INIS)

    Hartley, B.M.

    1994-01-01

    An analysis of a least-squares method for the real-time fitting of the theoretical total count function to the actual total count from radioactive decays has been given previously for the case where counting takes place after a sample is taken. The counting may be done in a number of different counting systems which distinguish between different types or energies of radiation emitted from the sample. The method would allow real-time determination of the numbers of atoms and hence activities of the individual isotopes present and has been designated the Time Evolved Least-Squares method (TELS). If the radioactivity which is to be measured exists as an aerosol or in a form where a sample is taken at a constant rate it may be possible to count during sampling and by so doing reduce the total time required to determine the activity of the individual isotopes present. The TELS method is extended here to the case where counting and the evaluation of the activity takes place concurrently with the sampling. The functions which need to be evaluated are derived and the calculations required to implement the method are discussed. As with the TELS method of counting after sampling the technique of counting during sampling and the simultaneous evaluation of activity could be achieved in real-time. Results of testing the method by computer simulation for two counting schemes for the descendants of radon are presented. ((orig.))
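
    The core idea of fitting a theoretical total-count function to the accumulated counts can be sketched as an ordinary least-squares problem; the example below does this offline for two species with half-lives of roughly 3.05 and 26.8 min (typical radon-progeny values, used here only as an assumption), and it does not implement the real-time TELS recursion or the counting-during-sampling extension.

        import numpy as np
        from scipy.optimize import curve_fit

        def cumulative_counts(t, n1, n2,
                              lam1=np.log(2) / 3.05, lam2=np.log(2) / 26.8):
            # Expected total counts registered by time t (minutes) from two decaying
            # species with known decay constants and unknown initial numbers n1, n2
            # (detection efficiency folded into n1, n2 for simplicity).
            return n1 * (1 - np.exp(-lam1 * t)) + n2 * (1 - np.exp(-lam2 * t))

        # Hypothetical counting record: total counts accumulated every 0.5 min.
        t = np.arange(0.5, 30.5, 0.5)
        observed = np.random.default_rng(4).poisson(cumulative_counts(t, 4000, 9000))
        (n1_hat, n2_hat), _ = curve_fit(cumulative_counts, t, observed, p0=(1000, 1000))
        print(n1_hat, n2_hat)      # estimated initial numbers (and hence activities) of each species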

  4. Multiple Time Series Forecasting Using Quasi-Randomized Functional Link Neural Networks

    Directory of Open Access Journals (Sweden)

    Thierry Moudiki

    2018-03-01

    Full Text Available We are interested in obtaining forecasts for multiple time series, by taking into account the potential nonlinear relationships between their observations. For this purpose, we use a specific type of regression model on an augmented dataset of lagged time series. Our model is inspired by dynamic regression models (Pankratz 2012), with the response variable’s lags included as predictors, and is known as Random Vector Functional Link (RVFL) neural networks. The RVFL neural networks have been successfully applied in the past, to solving regression and classification problems. The novelty of our approach is to apply an RVFL model to multivariate time series, under two separate regularization constraints on the regression parameters.
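
    A bare-bones Random Vector Functional Link sketch in Python is shown below: lagged values of all series form the inputs, a fixed random hidden layer is appended to them (the direct link), and only the output weights are learned by ridge regression. The network size, regularization strength and data are arbitrary, and the two separate regularization constraints of the paper are not reproduced.

        import numpy as np

        def rvfl_fit_predict(X_train, y_train, X_test, n_hidden=50, ridge=1e-2, seed=0):
            # Fixed random hidden layer + direct link; output weights by ridge regression.
            rng = np.random.default_rng(seed)
            W = rng.uniform(-1, 1, (X_train.shape[1], n_hidden))
            b = rng.uniform(-1, 1, n_hidden)
            def features(X):
                return np.hstack([X, np.tanh(X @ W + b)])   # direct link + hidden activations
            Phi = features(X_train)
            beta = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(Phi.shape[1]),
                                   Phi.T @ y_train)
            return features(X_test) @ beta

        # Hypothetical example: one-step-ahead forecast from 3 lags of two random-walk series.
        rng = np.random.default_rng(5)
        series = np.cumsum(rng.standard_normal((200, 2)), axis=0)
        lags = np.hstack([series[i:len(series) - 3 + i] for i in range(3)])
        target = series[3:]
        print(rvfl_fit_predict(lags[:-1], target[:-1], lags[-1:]), target[-1])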

  5. Cone penetrometer testing and discrete-depth groundwater sampling techniques: A cost-effective method of site characterization in a multiple-aquifer setting

    International Nuclear Information System (INIS)

    Zemo, D.A.; Pierce, Y.G.; Gallinatti, J.D.

    1992-01-01

    Cone penetrometer testing (CPT), combined with discrete-depth groundwater sampling methods, can significantly reduce the time and expense required to characterize large sites that have multiple aquifers. Results from the screening site characterization can be used to design and install a cost-effective monitoring well network. At a site in northern California, it was necessary to characterize the stratigraphy and the distribution of volatile organic compounds (VOCs) to a depth of 80 feet within a 1/2 mile-by-1/4-mile residential and commercial area in a complex alluvial fan setting. To expedite site characterization, a five-week field screening program was implemented that consisted of a shallow groundwater survey, CPT soundings, and discrete-depth groundwater sampling. Based on continuous lithologic information provided by the CPT soundings, four coarse-grained water-yielding sedimentary packages were identified. Eighty-three discrete-depth groundwater samples were collected using shallow groundwater survey techniques, the BAT Enviroprobe, or the QED HydroPunch 1, depending on subsurface conditions. A 20-well monitoring network was designed and installed to monitor critical points within each sedimentary package. Understanding the vertical VOC distribution and concentrations produced substantial cost savings by minimizing the number of permanent monitoring wells and reducing the number of costly conductor casings to be installed. Significant long-term cost savings will result from reduced sampling costs. Where total VOC concentrations exceeded 20 µg/l in the screening samples, a good correlation was found between the discrete-depth screening data and data from monitoring wells. Using a screening program to characterize the site before installing monitoring wells resulted in an estimated 50-percent reduction in costs for site characterization, 65-percent reduction in time for site characterization, and 50-percent reduction in long-term monitoring costs.

  6. Discrete-Time Mixing Receiver Architecture for RF-Sampling Software-Defined Radio

    NARCIS (Netherlands)

    Ru, Z.; Klumperink, Eric A.M.; Nauta, Bram

    2010-01-01

    Abstract—A discrete-time (DT) mixing architecture for RF-sampling receivers is presented. This architecture makes RF sampling more suitable for software-defined radio (SDR) as it achieves wideband quadrature demodulation and wideband harmonic rejection. The paper consists of two parts. In the first

  7. Synchronizing data from irregularly sampled sensors

    Science.gov (United States)

    Uluyol, Onder

    2017-07-11

    A system and method include receiving a set of sampled measurements for each of multiple sensors, wherein the sampled measurements are at irregular intervals or different rates, re-sampling the sampled measurements of each of the multiple sensors at a higher rate than one of the sensor's set of sampled measurements, and synchronizing the sampled measurements of each of the multiple sensors.
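
    One simple way to realise this kind of synchronization in software is to interpolate every channel onto a common, higher-rate time grid; the pandas-based sketch below assumes time-indexed sensor series and is only an illustration of the idea, not the patented method.

        import numpy as np
        import pandas as pd

        def synchronize(sensors, rate_hz):
            # Re-sample several irregularly sampled sensors onto one common, higher-rate
            # time base by time interpolation, so all channels are aligned sample-for-sample.
            # Grid points before a sensor's first reading remain NaN.
            start = min(s.index[0] for s in sensors.values())
            end = max(s.index[-1] for s in sensors.values())
            grid = pd.date_range(start, end, freq=pd.Timedelta(seconds=1.0 / rate_hz))
            out = {}
            for name, s in sensors.items():
                out[name] = s.reindex(s.index.union(grid)).interpolate("time").reindex(grid)
            return pd.DataFrame(out)

        # Hypothetical: two sensors with irregular timestamps at different rates.
        rng = np.random.default_rng(6)
        t0 = pd.Timestamp("2024-01-01")
        ts_a = t0 + pd.to_timedelta(np.sort(rng.uniform(0, 10, 25)), unit="s")
        ts_b = t0 + pd.to_timedelta(np.sort(rng.uniform(0, 10, 60)), unit="s")
        a = pd.Series(np.sin(np.linspace(0, 3, 25)), index=ts_a)
        b = pd.Series(np.cos(np.linspace(0, 3, 60)), index=ts_b)
        print(synchronize({"a": a, "b": b}, rate_hz=20).head())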

  8. Stochastic resonance in a time-delayed mono-stable system with correlated multiplicative and additive white noise

    International Nuclear Information System (INIS)

    Zhou Yu-Rong

    2011-01-01

    This paper considers the stochastic resonance for a time-delayed mono-stable system, driven by correlated multiplicative and additive white noise. It finds that the output signal-to-noise ratio (SNR) varies non-monotonically with the delayed times. The SNR varies non-monotonically with the increase of the intensities of the multiplicative and additive noise, with the increase of the correlation strength between the two noises, as well as with the system parameter. (general)

  9. Real-Time Observation of Ultrafast Intraband Relaxation and Exciton Multiplication in PbS Quantum Dots

    KAUST Repository

    El-Ballouli, Ala’a O.

    2014-03-19

    We examine ultrafast intraconduction band relaxation and multiple-exciton generation (MEG) in PbS quantum dots (QDs) using transient absorption spectroscopy with 120 fs temporal resolution. The intraconduction band relaxation can be directly and excellently resolved spectrally and temporally by applying broadband pump-probe spectroscopy to excite and detect the wavelengths around the exciton absorption peak, which is located in the near-infrared region. The time-resolved data unambiguously demonstrate that the intraband relaxation time progressively increases as the pump-photon energy increases. Moreover, the relaxation time becomes much shorter as the size of the QDs decreases, indicating the crucial role of spatial confinement in the intraband relaxation process. Additionally, our results reveal the systematic scaling of the intraband relaxation time with both excess energy above the effective energy band gap and QD size. We also assess MEG in QDs of different sizes. Under the condition of high-energy photon excitation, which is well above the MEG energy threshold, ultrafast bleach recovery due to the nonradiative Auger recombination of the multiple electron-hole pairs provides conclusive experimental evidence for the presence of MEG. For instance, we achieved quantum efficiencies of 159, 129 and 106% per single-absorbed photon at pump photoexcitation of three times the band gap for QDs with band gaps of 880 nm (1.41 eV), 1000 nm (1.24 eV) and 1210 nm (1.0 eV), respectively. These findings demonstrate clearly that the efficiency of transferring excess photon energy to carrier multiplication is significantly increased in smaller QDs compared with larger ones. Finally, we discuss the Auger recombination dynamics of the multiple electron-hole pairs as a function of QD size.

  10. Fission time-scale in experiments and in multiple initiation model

    Energy Technology Data Exchange (ETDEWEB)

    Karamian, S. A., E-mail: karamian@nrmail.jinr.ru [Joint Institute for Nuclear Research (Russian Federation)

    2011-12-15

    The rate of fission for highly-excited nuclei is affected by the viscous character of the system motion in deformation coordinates, as was reported for very heavy nuclei with Z{sub C} > 90. The long time-scale of fission can be described in a model of 'fission by diffusion' that includes an assumption of the overdamped diabatic motion. The fission-to-spallation ratio at intermediate proton energy could be influenced by the viscosity, as well. Within the novel approach of the present work, the cross examination of the fission probability, time-scales, and pre-fission neutron multiplicities results in a consistent interpretation of the whole set of observables. Earlier, different aspects could be reproduced in partial simulations without careful coordination.

  11. A prediction method based on wavelet transform and multiple models fusion for chaotic time series

    International Nuclear Information System (INIS)

    Zhongda, Tian; Shujiang, Li; Yanhong, Wang; Yi, Sha

    2017-01-01

    In order to improve the prediction accuracy of chaotic time series, a prediction method based on wavelet transform and multiple models fusion is proposed. The chaotic time series is decomposed and reconstructed by wavelet transform, and approximate components and detail components are obtained. According to the different characteristics of each component, a least squares support vector machine (LSSVM) is used as the predictive model for the approximation components. At the same time, an improved free search algorithm is utilized for predictive model parameter optimization. An auto-regressive integrated moving average (ARIMA) model is used as the predictive model for the detail components. The predictive values of the multiple models are fused by the Gauss–Markov algorithm; the error variance of the fused predictions is smaller than that of any single model, so the prediction accuracy is improved. The simulation results are compared using two typical chaotic time series, the Lorenz time series and the Mackey–Glass time series. The simulation results show that the prediction method in this paper achieves better prediction accuracy.
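
    The overall structure (wavelet decomposition, a separate forecaster per component, then fusion of the component forecasts) can be sketched with PyWavelets; for brevity the sketch below uses simple least-squares AR models for every component and plain summation in place of the paper's LSSVM/ARIMA models and Gauss–Markov fusion.

        import numpy as np
        import pywt

        def ar_forecast(x, order=5):
            # One-step-ahead forecast from an AR(order) model fitted by least squares.
            X = np.column_stack([x[i:len(x) - order + i] for i in range(order)])
            y = x[order:]
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            return x[-order:] @ coef

        def wavelet_fusion_forecast(series, wavelet="db4", level=3):
            # Decompose the series, reconstruct each approximation/detail component,
            # forecast each component separately, and sum the component forecasts.
            coeffs = pywt.wavedec(series, wavelet, level=level)
            parts = []
            for k in range(len(coeffs)):
                keep = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
                parts.append(pywt.waverec(keep, wavelet)[:len(series)])
            return sum(ar_forecast(p) for p in parts)

        # Hypothetical test signal with two oscillatory scales plus noise.
        t = np.arange(2000)
        x = np.sin(0.02 * t) + 0.5 * np.sin(0.31 * t) \
            + 0.1 * np.random.default_rng(7).standard_normal(2000)
        print(wavelet_fusion_forecast(x[:-1]), x[-1])   # forecast vs. actual last value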

  12. Presence and significant determinants of cognitive impairment in a large sample of patients with multiple sclerosis.

    Directory of Open Access Journals (Sweden)

    Martina Borghi

    Full Text Available OBJECTIVES: To investigate the presence and the nature of cognitive impairment in a large sample of patients with Multiple Sclerosis (MS), and to identify clinical and demographic determinants of cognitive impairment in MS. METHODS: 303 patients with MS and 279 healthy controls were administered the Brief Repeatable Battery of Neuropsychological tests (BRB-N); measures of pre-morbid verbal competence and neuropsychiatric measures were also administered. RESULTS: Patients and healthy controls were matched for age, gender, education and pre-morbid verbal Intelligence Quotient. Patients presenting with cognitive impairment were 108/303 (35.6%). In the overall group of participants, the significant predictors of the most sensitive BRB-N scores were: presence of MS, age, education, and Vocabulary. The significant predictors when considering MS patients only were: course of MS, age, education, vocabulary, and depression. Using logistic regression analyses, significant determinants of the presence of cognitive impairment in relapsing-remitting MS patients were: duration of illness (OR = 1.053, 95% CI = 1.010-1.097, p = 0.015), Expanded Disability Status Scale score (OR = 1.247, 95% CI = 1.024-1.517, p = 0.028), and vocabulary (OR = 0.960, 95% CI = 0.936-0.984, p = 0.001), while in the smaller group of progressive MS patients these predictors did not play a significant role in determining the cognitive outcome. CONCLUSIONS: Our results corroborate the evidence about the presence and the nature of cognitive impairment in a large sample of patients with MS. Furthermore, our findings identify significant clinical and demographic determinants of cognitive impairment in a large sample of MS patients for the first time. Implications for further research and clinical practice were discussed.

  13. Electro-optic sampling for time resolving relativistic ultrafast electron diffraction

    International Nuclear Information System (INIS)

    Scoby, C. M.; Musumeci, P.; Moody, J.; Gutierrez, M.; Tran, T.

    2009-01-01

    The Pegasus laboratory at UCLA features a state-of-the-art electron photoinjector capable of producing ultrashort (<100 fs) high-brightness electron bunches at energies of 3.75 MeV. These beams recently have been used to produce static diffraction patterns from scattering off thin metal foils, and it is foreseen to take advantage of the ultrashort nature of these bunches in future pump-probe time-resolved diffraction studies. In this paper, single shot 2-d electro-optic sampling is presented as a potential technique for time of arrival stamping of electron bunches used for diffraction. Effects of relatively low bunch charge (a few 10's of pC) and modestly relativistic beams are discussed and background compensation techniques to obtain high signal-to-noise ratio are explored. From these preliminary tests, electro-optic sampling is suitable to be a reliable nondestructive time stamping method for relativistic ultrafast electron diffraction at the Pegasus lab.

  14. An improved sampling method of complex network

    Science.gov (United States)

    Gao, Qi; Ding, Xintong; Pan, Feng; Li, Weixing

    2014-12-01

    Sampling a subnet is an important topic in complex network research. Sampling methods influence the structure and characteristics of the subnet. Random multiple snowball with Cohen (RMSC) process sampling, which combines the advantages of random sampling and snowball sampling, is proposed in this paper. It has the ability to explore global information and discover the local structure at the same time. The experiments indicate that this novel sampling method can keep the similarity between the sampled subnet and the original network in terms of degree distribution, connectivity rate and average shortest path. This method is applicable to situations where prior knowledge about the degree distribution of the original network is not sufficient.
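
    The RMSC algorithm itself is not reproduced here, but a minimal sketch of the underlying idea, snowball expansion from random seeds interleaved with random jumps, might look like the following; the seed count and restart probability are illustrative assumptions, not parameters from the paper.

```python
import random
import networkx as nx

def random_snowball_sample(G, target_size, n_seeds=3, restart_p=0.2, seed=0):
    """Grow a subnet by snowball expansion from random seed nodes, occasionally
    jumping to a fresh random node (the random-sampling component)."""
    rng = random.Random(seed)
    sampled = set(rng.sample(list(G.nodes()), n_seeds))
    frontier = list(sampled)
    while len(sampled) < target_size:
        if rng.random() < restart_p or not frontier:
            node = rng.choice(list(G.nodes()))          # random jump
            if node not in sampled:
                sampled.add(node)
                frontier.append(node)
        else:
            node = rng.choice(frontier)                 # snowball step
        for v in G.neighbors(node):
            if v not in sampled:
                sampled.add(v)
                frontier.append(v)
                if len(sampled) >= target_size:
                    break
    return G.subgraph(sampled).copy()

G = nx.barabasi_albert_graph(2000, 3)
subnet = random_snowball_sample(G, target_size=200)
print(subnet.number_of_nodes(), subnet.number_of_edges())
```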

  15. Electro-optical time gating based on Mach-Zehnder modulator for multiple access interference elimination in optical code-division multiple access networks

    Science.gov (United States)

    Chen, Yinfang; Wang, Rong; Fang, Tao; Pu, Tao; Xiang, Peng; Zheng, Jilin; Zhu, Huatao

    2014-05-01

    An electro-optical time gating technique, which is based on an electrical return-to-zero (RZ) pulse driven Mach-Zehnder modulator (MZM), for eliminating multiple access interference (MAI) in optical code-division multiple access (OCDMA) networks is proposed. This technique is successfully simulated in an eight-user two-dimensional wavelength-hopping time-spreading system, as well as in a three-user temporal phase encoding system. Results show that in both systems the MAI noise is efficiently removed and the average received power penalty is improved; both achieve error-free transmission at a bit rate of 2.5 Gb/s. In addition, we also individually discuss the effects of parameters in the two systems, such as the extinction ratio of the MZM, the duty cycle of the driving RZ pulse, and the time misalignment between the driving pulse and the decoded autocorrelation peak, on the output bit error rate performance. Our work shows that employing a common MZM as a thresholder provides another possibility and an interesting cost-effective choice for a small-size, low-energy, and less complex thresholding technique for integrated detection in OCDMA networks.

  16. Characterizing multiple solutions to the time-energy canonical commutation relation via internal symmetries

    International Nuclear Information System (INIS)

    Caballar, Roland Cristopher F.; Ocampo, Leonard R.; Galapon, Eric A.

    2010-01-01

    Internal symmetries can be used to classify multiple solutions to the time-energy canonical commutation relation (TE-CCR). The dynamical behavior of solutions to the TE-CCR possessing particular internal symmetries involving time reversal differ significantly from solutions to the TE-CCR without those particular symmetries, implying a connection between the internal symmetries of a quantum system, its internal unitary dynamics, and the TE-CCR.

  17. Effect of Sample Storage Temperature and Time Delay on Blood Gases, Bicarbonate and pH in Human Arterial Blood Samples.

    Science.gov (United States)

    Mohammadhoseini, Elham; Safavi, Enayat; Seifi, Sepideh; Seifirad, Soroush; Firoozbakhsh, Shahram; Peiman, Soheil

    2015-03-01

    Results of arterial blood gas analysis can be biased by pre-analytical factors, such as the time interval before analysis, the temperature during storage and the syringe type. The aim was to investigate the effects of sample storage temperature and time delay on blood gas, bicarbonate and pH results in human arterial blood samples. 2.5 mL arterial blood samples were drawn from 45 patients via an indwelling intra-arterial catheter. Each sample was divided into five equal samples and stored in multipurpose tuberculin plastic syringes. Blood gas analysis was performed on one of the five samples as soon as possible. The four other samples were divided into two groups stored at 22°C and 0°C, and blood gas analyses were repeated 30 and 60 minutes after sampling. PaO2 of the samples stored at 0°C was increased significantly after 60 minutes (P = 0.007). The PaCO2 of the samples kept for 30 and 60 minutes at 22°C was significantly higher than the primary result (P = 0.04). In samples stored at 22°C, pH decreased significantly after 30 and 60 minutes (P = 0.017, P = 0.001). There were no significant differences in the other results of samples stored at 0°C or 22°C after 30 or 60 minutes. In samples stored in plastic syringes, overestimation of PaO2 levels should be noted if samples are cooled before analysis, and it is not necessary to store samples in iced water when analysis is delayed by up to one hour.

  18. Estimation of time-delayed mutual information and bias for irregularly and sparsely sampled time-series

    International Nuclear Information System (INIS)

    Albers, D.J.; Hripcsak, George

    2012-01-01

    Highlights: ► Time-delayed mutual information for irregularly sampled time-series. ► Estimation bias for the time-delayed mutual information calculation. ► Fast, simple, PDF estimator independent, time-delayed mutual information bias estimate. ► Quantification of data-set-size limits of the time-delayed mutual information calculation. - Abstract: A method to estimate the time-dependent correlation via an empirical bias estimate of the time-delayed mutual information for a time-series is proposed. In particular, the bias of the time-delayed mutual information is shown to often be equivalent to the mutual information between two distributions of points from the same system separated by infinite time. Thus, intuitively, estimation of the bias is reduced to estimation of the mutual information between distributions of data points separated by large time intervals. The proposed bias estimation techniques are shown to work for Lorenz equations data and glucose time series data of three patients from the Columbia University Medical Center database.
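
    A rough numerical sketch of the approach, estimating the time-delayed mutual information with a plug-in histogram estimator and taking the value at a very large delay as the bias estimate, could look like this; the test series, bin count and delays are illustrative choices rather than those used in the paper.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in (histogram) estimate of the mutual information I(X;Y), in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def delayed_mi(series, delay, bins=16):
    """Time-delayed mutual information I(x_t ; x_{t+delay})."""
    return mutual_information(series[:-delay], series[delay:], bins=bins)

# Illustrative series: an AR(1) process standing in for a physiological record.
rng = np.random.default_rng(1)
n = 5000
x = np.zeros(n)
for i in range(1, n):
    x[i] = 0.9 * x[i - 1] + rng.standard_normal()

tdmi = delayed_mi(x, delay=10)
bias = delayed_mi(x, delay=2000)   # far-separated points are ~independent; leftover MI is bias
print(f"TDMI at lag 10: {tdmi:.3f} nats, bias estimate: {bias:.3f} nats")
```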

  19. Optimizing the diagnostic power with gastric emptying scintigraphy at multiple time points

    Directory of Open Access Journals (Sweden)

    Gajewski Byron J

    2011-05-01

    Full Text Available Abstract Background Gastric Emptying Scintigraphy (GES) at intervals over 4 hours after a standardized radio-labeled meal is commonly regarded as the gold standard for diagnosing gastroparesis. The objectives of this study were: 1) to investigate the best time point and the best combination of multiple time points for diagnosing gastroparesis with repeated GES measures, and 2) to contrast and cross-validate Fisher's Linear Discriminant Analysis (LDA), a rank-based Distribution Free (DF) approach, and the Classification And Regression Tree (CART) model. Methods A total of 320 patients with GES measures at 1, 2, 3, and 4 hours (h) after a standard meal using a standardized method were retrospectively collected. Area under the Receiver Operating Characteristic (ROC) curve and the rate of false classification through jackknife cross-validation were used for model comparison. Results Due to strong correlation and an abnormality in the data distribution, no substantial improvement in diagnostic power was found with the best linear combination by the LDA approach, even with data transformation. With the DF method, the linear combination of the 4-h and 3-h measures increased the Area Under the Curve (AUC) and decreased the number of false classifications (0.87; 15.0%) over the individual time points (0.83, 0.82; 15.6%, 25.3%, for 4-h and 3-h, respectively) at a higher sensitivity level (sensitivity = 0.9). The CART model using the 4 hourly GES measurements along with patient's age was the most accurate diagnostic tool (AUC = 0.88, false classification = 13.8%). Patients having a 4-h gastric retention value >10% were 5 times more likely to have gastroparesis (179/207 = 86.5%) than those with ≤10% (18/113 = 15.9%). Conclusions With a mixed group of patients either referred with suspected gastroparesis or investigated for other reasons, the CART model is more robust than the LDA and DF approaches, capable of accommodating covariate effects and can be generalized for cross institutional applications, but
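
    For readers who want to reproduce the flavor of this comparison, a hedged scikit-learn sketch using synthetic stand-in data is shown below; the retention values, AUCs and tree depth are fabricated for demonstration and are not the study's results.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in data: 4-h gastric retention (%) and age for 320 "patients".
rng = np.random.default_rng(0)
n = 320
gastroparesis = rng.random(n) < 0.35
retention_4h = np.where(gastroparesis,
                        rng.normal(25, 12, n),    # cases tend to retain more at 4 h
                        rng.normal(5, 4, n)).clip(0, 100)
age = rng.normal(45, 15, n)

# Threshold rule analogous to the ">10% retention at 4 h" criterion.
rule_positive = retention_4h > 10
print("fraction flagged by the 10% rule:", rule_positive.mean())
print("AUC of the 4-h value alone:", round(roc_auc_score(gastroparesis, retention_4h), 3))

# Shallow CART-style tree combining the 4-h value with age as a covariate.
X = np.column_stack([retention_4h, age])
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, gastroparesis)
print("AUC of the tree:", round(roc_auc_score(gastroparesis, tree.predict_proba(X)[:, 1]), 3))
```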

  20. Simulation of multi-photon emission isotopes using time-resolved SimSET multiple photon history generator

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Chih-Chieh; Lin, Hsin-Hon; Lin, Chang-Shiun; Chuang, Keh-Shih [Department of Biomedical Engineering and Environmental Sciences, National Tsing-Hua University, Hsinchu, Taiwan (China); Jan, Meei-Ling [Health Physics Division, Institute of Nuclear Energy Research, Atomic Energy Council, Taoyuan, Taiwan (China)

    2015-07-01

    Multiple-photon emitters, such as In-111 or Se-75, have enormous potential in the field of nuclear medicine imaging. For example, Se-75 can be used to investigate bile acid malabsorption and measure bile acid pool loss. The simulation system for emission tomography (SimSET) is a well-known Monte Carlo simulation (MCS) code in nuclear medicine, valued for its high computational efficiency. However, the current SimSET cannot simulate these isotopes because it lacks a model of the complex decay scheme and the time-dependent decay process. To extend the versatility of SimSET to the simulation of such multi-photon emission isotopes, a time-resolved multiple photon history generator based on the SimSET code is developed in the present study. For the time-resolved SimSET (trSimSET) with radionuclide decay process, the new MCS model introduces new features, including decay time information and photon time-of-flight information. The half-lives of energy states were tabulated from the Evaluated Nuclear Structure Data File (ENSDF) database. The MCS results indicate that the overall percent difference is less than 8.5% for all simulation trials as compared to GATE. To sum up, we demonstrated that the time-resolved SimSET multiple photon history generator can achieve accuracy comparable to GATE while keeping better computational efficiency. The new MCS code is very useful for studying multi-photon imaging of novel isotopes that requires simulation of lifetimes and time-of-flight measurements. (authors)

  1. Learning Bounds of ERM Principle for Sequences of Time-Dependent Samples

    Directory of Open Access Journals (Sweden)

    Mingchen Yao

    2015-01-01

    Full Text Available Many generalization results in learning theory are established under the assumption that samples are independent and identically distributed (i.i.d.). However, numerous learning tasks in practical applications involve time-dependent data. In this paper, we propose a theoretical framework to analyze the generalization performance of the empirical risk minimization (ERM) principle for sequences of time-dependent samples (TDS). In particular, we first present the generalization bound of the ERM principle for TDS. By introducing some auxiliary quantities, we also give a further analysis of the generalization properties and the asymptotic behavior of the ERM principle for TDS.

  2. Specification and testing of Multiplicative Time-Varying GARCH models with applications

    DEFF Research Database (Denmark)

    Amado, Cristina; Teräsvirta, Timo

    2017-01-01

    In this article, we develop a specification technique for building multiplicative time-varying GARCH models of Amado and Teräsvirta (2008, 2013). The variance is decomposed into an unconditional and a conditional component such that the unconditional variance component is allowed to evolve smooth...... is illustrated in practice with two real examples: an empirical application to daily exchange rate returns and another one to daily coffee futures returns....

  3. Formulation of an explicit-multiple-time-step time integration method for use in a global primitive equation grid model

    Science.gov (United States)

    Chao, W. C.

    1982-01-01

    With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.
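
    The mode-splitting idea, advancing fast terms with small substeps inside each large step used for the slow terms, can be sketched for a toy ODE as follows; the right-hand sides and step sizes are illustrative and do not represent the UCLA model.

```python
import numpy as np

def step_multirate(u, dt_slow, n_sub, f_slow, f_fast):
    """One large step of a simple multiple-time-step integrator: the slow
    tendency is evaluated once and held fixed over dt_slow, while the fast
    tendency is subcycled with n_sub smaller forward-Euler steps."""
    slow_tendency = f_slow(u)
    dt_fast = dt_slow / n_sub
    for _ in range(n_sub):
        u = u + dt_fast * (slow_tendency + f_fast(u))
    return u

# Toy system: slow relaxation plus a fast oscillation (a stand-in for gravity waves).
f_slow = lambda u: -0.05 * u
f_fast = lambda u: np.array([u[1], -25.0 * u[0]])   # oscillator with omega = 5

u = np.array([1.0, 0.0])
for _ in range(100):
    u = step_multirate(u, dt_slow=0.1, n_sub=10, f_slow=f_slow, f_fast=f_fast)
print(u)
```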

  4. Multiple time scale analysis of pressure oscillations in solid rocket motors

    Science.gov (United States)

    Ahmed, Waqas; Maqsood, Adnan; Riaz, Rizwan

    2018-03-01

    In this study, acoustic pressure oscillations for single and coupled longitudinal acoustic modes in a Solid Rocket Motor (SRM) are investigated using the Multiple Time Scales (MTS) method. Two independent time scales are introduced: the oscillations occur on the fast time scale, whereas the amplitude and phase change on the slow time scale. Hopf bifurcation is employed to investigate the properties of the solution, and the supercritical bifurcation phenomenon is observed for the linearly unstable system. The amplitude of the oscillations results from equal energy gain and loss rates of the longitudinal acoustic modes. The effects of linear instability and of the frequency of the longitudinal modes on the amplitude and phase of the oscillations are determined for both single and coupled modes. In both cases, the maximum amplitude of the oscillations decreases with the frequency of the acoustic mode and the linear instability of the SRM. The comparison of the analytical MTS results and numerical simulations demonstrates excellent agreement.

  5. A high-pressure thermal gradient block for investigating microbial activity in multiple deep-sea samples

    DEFF Research Database (Denmark)

    Kallmeyer, J.; Ferdelman, TG; Jansen, KH

    2003-01-01

    Details about the construction and use of a high-pressure thermal gradient block for the simultaneous incubation of multiple samples are presented. Most parts used are moderately priced off-the-shelf components that are easily obtainable. In order to keep the pressure independent of thermal expansion...... range of temperatures and pressures and can easily be modified to accommodate different experiments, either biological or chemical. As an application, we present measurements of bacterial sulfate reduction rates in hydrothermal sediments from Guaymas Basin over a wide range of temperatures and pressures

  6. It's Deja Vu All over Again: Using Multiple-Spell Discrete-Time Survival Analysis.

    Science.gov (United States)

    Willett, John B.; Singer, Judith D.

    1995-01-01

    The multiple-spell discrete-time survival analysis method is introduced and illustrated using longitudinal data on exit from and reentry into the teaching profession. The method is applicable to many educational problems involving the sequential occurrence of disparate events or episodes. (SLD)

  7. Finite-time tracking control for multiple non-holonomic mobile robots based on visual servoing

    Science.gov (United States)

    Ou, Meiying; Li, Shihua; Wang, Chaoli

    2013-12-01

    This paper investigates the finite-time tracking control problem for multiple non-holonomic mobile robots via visual servoing. It is assumed that the pinhole camera is fixed to the ceiling and that the camera parameters are unknown. The desired reference trajectory is represented by a virtual leader whose states are available to only a subset of the followers, and the followers have only local interactions. First, the camera-objective visual kinematic model is introduced by utilising the pinhole camera model for each mobile robot. Second, a unified tracking error system between the camera-objective visual servoing model and the desired reference trajectory is introduced. Third, based on the neighbour rule and by using the finite-time control method, continuous distributed cooperative finite-time tracking control laws are designed for each mobile robot with unknown camera parameters, where the communication topology among the multiple mobile robots is assumed to be a directed graph. A rigorous proof shows that the group of mobile robots converges to the desired reference trajectory in finite time. A simulation example illustrates the effectiveness of our method.

  8. The influence of single neuron dynamics and network topology on time delay-induced multiple synchronous behaviors in inhibitory coupled network

    International Nuclear Information System (INIS)

    Zhao, Zhiguo; Gu, Huaguang

    2015-01-01

    Highlights: • Time delay-induced multiple synchronous behaviors were simulated in neuronal networks. • Multiple behaviors appear at time delays shorter than a bursting period of neurons. • The more spikes per burst of bursting, the more synchronous regions of time delay. • From regular to random via small-world networks, the synchronous degree becomes weak. • An interpretation of the multiple behaviors and the influence of the network are provided. - Abstract: Time delay-induced multiple synchronous behaviors are simulated in a neuronal network composed of many inhibitory neurons and appear at different time delays shorter than a period of the endogenous bursting of individual neurons. This differs from previous investigations wherein only one of multiple synchronous behaviors appears at a time delay shorter than a period of endogenous firing and the others appear at time delays longer than the period duration. The bursting patterns of the synchronous behaviors are identified based on the dynamics of an individual neuron stimulated by a signal similar to the inhibitory coupling current, which is applied at the decaying branch of a spike and at a suitable phase within the quiescent state of the endogenous bursting. If a burst of endogenous bursting contains more spikes, the synchronous behaviors appear in more regions of time delay. As the coupling strength increases, the multiple synchronous behaviors appear in a sequence because different thresholds of coupling current or strength are needed to achieve synchronous behaviors. From regular, to small-world, and to random networks, the synchronous degree of the multiple synchronous behaviors becomes weak, and synchronous bursting patterns with fewer spikes per burst disappear, which is properly interpreted by the difference in coupling current between neurons induced by different degrees and by the high threshold of coupling current required to achieve synchronization for the absent synchronous bursting patterns. The results of the influence of

  9. REAL-TIME PCR DETECTION OF LISTERIA MONOCYTOGENES IN FOOD SAMPLES OF ANIMAL ORIGIN

    Directory of Open Access Journals (Sweden)

    Jaroslav Pochop

    2013-02-01

    Full Text Available The aim of this study was to follow the contamination of food with Listeria monocytogenes by using Step One real-time polymerase chain reaction (PCR). We used the PrepSEQ Rapid Spin Sample Preparation Kit for isolation of DNA and the SensiFAST SYBR Hi-ROX Kit for the real-time PCR performance. Of 24 samples of food of animal origin analysed without incubation, strains of Listeria monocytogenes were detected in 15 samples (swabs); nine samples were negative. Our results indicated that the real-time PCR assay developed in this study could sensitively detect Listeria monocytogenes in food of animal origin without incubation. This could prevent infection caused by Listeria monocytogenes, and could also benefit food manufacturing companies by extending their products' shelf-life as well as saving the cost of warehousing their food products while awaiting pathogen testing results. The rapid real-time PCR-based method performed very well compared to the conventional method. It is a fast, simple, specific and sensitive way to detect nucleic acids, which could be used in clinical diagnostic tests in the future.

  10. Entropic multiple-relaxation-time multirange pseudopotential lattice Boltzmann model for two-phase flow

    Science.gov (United States)

    Qin, Feifei; Mazloomi Moqaddam, Ali; Kang, Qinjun; Derome, Dominique; Carmeliet, Jan

    2018-03-01

    An entropic multiple-relaxation-time lattice Boltzmann approach is coupled to a multirange Shan-Chen pseudopotential model to study the two-phase flow. Compared with previous multiple-relaxation-time multiphase models, this model is stable and accurate for the simulation of a two-phase flow in a much wider range of viscosity and surface tension at a high liquid-vapor density ratio. A stationary droplet surrounded by equilibrium vapor is first simulated to validate this model using the coexistence curve and Laplace's law. Then, two series of droplet impact behavior, on a liquid film and a flat surface, are simulated in comparison with theoretical or experimental results. Droplet impact on a liquid film is simulated for different Reynolds numbers at high Weber numbers. With the increase of the Sommerfeld parameter, onset of splashing is observed and multiple secondary droplets occur. The droplet spreading ratio agrees well with the square root of time law and is found to be independent of Reynolds number. Moreover, shapes of simulated droplets impacting hydrophilic and superhydrophobic flat surfaces show good agreement with experimental observations through the entire dynamic process. The maximum spreading ratio of a droplet impacting the superhydrophobic flat surface is studied for a large range of Weber numbers. Results show that the rescaled maximum spreading ratios are in good agreement with a universal scaling law. This series of simulations demonstrates that the proposed model accurately captures the complex fluid-fluid and fluid-solid interfacial physical processes for a wide range of Reynolds and Weber numbers at high density ratios.

  11. A test of alternative estimators for volume at time 1 from remeasured point samples

    Science.gov (United States)

    Francis A. Roesch; Edwin J. Green; Charles T. Scott

    1993-01-01

    Two estimators for volume at time 1 for use with permanent horizontal point samples are evaluated. One estimator, used traditionally, uses only the trees sampled at time 1, while the second estimator, originally presented by Roesch and coauthors (F.A. Roesch, Jr., E.J. Green, and C.T. Scott. 1989. For. Sci. 35(2):281-293). takes advantage of additional sample...

  12. An Improved Clutter Suppression Method for Weather Radars Using Multiple Pulse Repetition Time Technique

    Directory of Open Access Journals (Sweden)

    Yingjie Yu

    2017-01-01

    Full Text Available This paper describes the implementation of an improved clutter suppression method for the multiple pulse repetition time (PRT) technique based on simulated radar data. The suppression method is constructed using maximum likelihood methodology in the time domain and is called the parametric time domain method (PTDM). The procedure relies on the assumption that precipitation and clutter signal spectra follow a Gaussian functional form. The multiple interleaved pulse repetition frequencies (PRFs) used in this work are set to four PRFs (952, 833, 667, and 513 Hz). Based on radar simulation, it is shown that the new method can provide accurate retrieval of Doppler velocity even in the case of strong clutter contamination. The obtained velocity is nearly unbiased over the whole Nyquist velocity interval. The performance of the method is also illustrated on simulated radar data for a plan position indicator (PPI) scan. Compared with staggered 2-PRT transmission schemes with PTDM, the proposed method presents better estimation accuracy under certain clutter situations.

  13. Estimating time to pregnancy from current durations in a cross-sectional sample

    DEFF Research Database (Denmark)

    Keiding, Niels; Kvist, Kajsa; Hartvig, Helle

    2002-01-01

    A new design for estimating the distribution of time to pregnancy is proposed and investigated. The design is based on recording current durations in a cross-sectional sample of women, leading to statistical problems similar to estimating renewal time distributions from backward recurrence times....

  14. The relative importance of perceptual and memory sampling processes in determining the time course of absolute identification.

    Science.gov (United States)

    Guest, Duncan; Kent, Christopher; Adelman, James S

    2018-04-01

    In absolute identification, the extended generalized context model (EGCM; Kent & Lamberts, 2005, 2016) proposes that perceptual processing determines systematic response time (RT) variability; all other models of RT emphasize response selection processes. In the EGCM-RT the bow effect in RTs (longer responses for stimuli in the middle of the range) occurs because these middle stimuli are less isolated, and as perceptual information is accumulated, the evidence supporting a correct response grows more slowly than for stimuli at the ends of the range. More perceptual information is therefore accumulated in order to increase certainty in response for middle stimuli, lengthening RT. According to the model reducing perceptual sampling time should reduce the size of the bow effect in RT. We tested this hypothesis in 2 pitch identification experiments. Experiment 1 found no effect of stimulus duration on the size of the RT bow. Experiment 2 used multiple short stimulus durations as well as manipulating set size and stimulus spacing. Contrary to EGCM-RT predictions, the bow effect on RTs was large for even very short durations. A new version of the EGCM-RT could only capture this, alongside the effect of stimulus duration on accuracy, by including both a perceptual and a memory sampling process. A modified version of the selective attention, mapping, and ballistic accumulator model (Brown, Marley, Donkin, & Heathcote, 2008) could also capture the data, by assuming psychophysical noise diminishes with increased exposure duration. This modeling suggests systematic variability in RT in absolute identification is largely determined by memory sampling and response selection processes. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  15. Effects of holding time and measurement error on culturing Legionella in environmental water samples.

    Science.gov (United States)

    Flanders, W Dana; Kirkland, Kimberly H; Shelton, Brian G

    2014-10-01

    Outbreaks of Legionnaires' disease require environmental testing of water samples from potentially implicated building water systems to identify the source of exposure. A previous study reports a large impact on Legionella sample results due to shipping and delays in sample processing. Specifically, this same study, without accounting for measurement error, reports that more than half of shipped samples tested had Legionella levels that arbitrarily changed up or down by one or more logs, and the authors attribute this result to shipping time. Accordingly, we conducted a study to determine the effects of sample holding/shipping time on Legionella sample results while taking into account measurement error, which has previously not been addressed. We analyzed 159 samples, each split into 16 aliquots, of which one-half (8) were processed promptly after collection. The remaining half (8) were processed the following day to assess the impact of holding/shipping time. A total of 2544 samples were analyzed, including replicates. After accounting for inherent measurement error, we found that the effect of holding time on observed Legionella counts was small and should have no practical impact on interpretation of results. Holding samples increased the root mean squared error by only about 3-8%. Notably, for only one of 159 samples did the average of the 8 replicate counts change by 1 log. Thus, our findings do not support the hypothesis of frequent, significant (≥1 log10 unit) Legionella colony count changes due to holding. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
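
    A minimal sketch of this kind of replicate comparison, per-sample mean log10 counts of promptly processed versus held aliquots and the change in root mean squared error, is given below with synthetic placeholder numbers rather than the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_reps = 159, 8

# Synthetic log10 Legionella counts: replicate scatter plays the role of
# measurement error, and a small shift mimics a systematic holding effect.
true_log = rng.uniform(1.0, 4.0, n_samples)
prompt = true_log[:, None] + rng.normal(0, 0.15, (n_samples, n_reps))
held = true_log[:, None] - 0.03 + rng.normal(0, 0.15, (n_samples, n_reps))

# Compare per-sample mean counts and flag changes of one log10 unit or more.
diff = held.mean(axis=1) - prompt.mean(axis=1)
print("samples changing by >= 1 log10:", int(np.sum(np.abs(diff) >= 1.0)))
print("RMS prompt-vs-held difference:", round(float(np.sqrt(np.mean(diff ** 2))), 3))
```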

  16. Use of multiple tobacco products in a national sample of persons enrolled in addiction treatment.

    Science.gov (United States)

    Guydish, Joseph; Tajima, Barbara; Pramod, Sowmya; Le, Thao; Gubner, Noah R; Campbell, Barbara; Roman, Paul

    2016-09-01

    To explore use of tobacco products in relationship to marketing exposure among persons in addiction treatment. A random sample of treatment programs was drawn from the National Institute on Drug Abuse (NIDA) Clinical Trials Network (CTN). Participants in each program completed surveys concerning use of tobacco products (N=1113). Exposure to tobacco marketing and counter-marketing, advertising receptivity, and perceived health risks of smoking were tested for their association with use of multiple tobacco products. Prevalence of combustible cigarette use was 77.9%. Weekly or greater use of other products was: e-cigarettes (17.7%), little filtered cigars (8.6%), smokeless tobacco (5.2%), and standard cigars (4.6%), with 24.4% using multiple tobacco products. Compared to single product users, multiple product users smoked more cigarettes per day (OR=1.03, 95% CI 1.01-1.05), were more receptive to advertising for products other than combustible cigarettes (OR=1.93, CI 1.35-2.75), and were more likely to report exposure to tobacco counter-marketing (OR=1.70, 95% CI: 1.09-2.63, p=0.019). Heavier smokers and those trying to quit may be more likely to use e-cigarettes, little filtered cigars, or smokeless tobacco and have greater susceptibility to their advertising. This highlights the importance of regulating advertising related to smoking cessation, as their effectiveness for this purpose has not been demonstrated. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. Synchronization of Hierarchical Time-Varying Neural Networks Based on Asynchronous and Intermittent Sampled-Data Control.

    Science.gov (United States)

    Xiong, Wenjun; Patel, Ragini; Cao, Jinde; Zheng, Wei Xing

    In this brief, our purpose is to apply asynchronous and intermittent sampled-data control methods to achieve the synchronization of hierarchical time-varying neural networks. The asynchronous and intermittent sampled-data controllers are proposed for two reasons: 1) the controllers may not transmit the control information simultaneously and 2) the controllers cannot always exist at any time. The synchronization is then discussed for a kind of hierarchical time-varying neural networks based on the asynchronous and intermittent sampled-data controllers. Finally, the simulation results are given to illustrate the usefulness of the developed criteria.

  18. Using the Direct Sampling Multiple-Point Geostatistical Method for Filling Gaps in Landsat 7 ETM+ SLC-off Imagery

    KAUST Repository

    Yin, Gaohong

    2016-05-01

    Since the failure of the Scan Line Corrector (SLC) instrument on Landsat 7, observable gaps occur in the acquired Landsat 7 imagery, impacting the spatial continuity of the observed imagery. Due to the high geometric and radiometric accuracy provided by Landsat 7, a number of approaches have been proposed to fill the gaps. However, all proposed approaches have evident constraints for universal application. The main issues in gap-filling are the inability to describe continuity features such as meandering streams or roads, or to maintain the shape of small objects when filling gaps in heterogeneous areas. The aim of the study is to validate the feasibility of using the Direct Sampling multiple-point geostatistical method, which has been shown to reconstruct complicated geological structures satisfactorily, to fill Landsat 7 gaps. The Direct Sampling method uses a conditional stochastic resampling of known locations within a target image to fill gaps and can generate multiple reconstructions for one simulation case. The Direct Sampling method was examined across a range of land cover types including deserts, sparse rural areas, dense farmlands, urban areas, braided rivers and coastal areas to demonstrate its capacity to recover gaps accurately for various land cover types. The prediction accuracy of the Direct Sampling method was also compared with other gap-filling approaches, which have previously been demonstrated to offer satisfactory results, under both homogeneous-area and heterogeneous-area situations. The studies show that the Direct Sampling method provides sufficiently accurate prediction results for a variety of land cover types from homogeneous areas to heterogeneous land cover types. Likewise, it exhibits superior performance when used to fill gaps in heterogeneous land cover types without an input image, or with an input image that is temporally far from the target image, in comparison with other gap-filling approaches.
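
    A very reduced sketch of the Direct Sampling idea, scanning randomly chosen informed pixels and pasting the value whose neighborhood pattern best matches the gap pixel's neighborhood, is given below for a single-band image; the window size, scan count and distance threshold are illustrative, and the published method involves considerably more machinery.

```python
import numpy as np

def direct_sampling_fill(img, mask, half_win=2, n_scan=500, threshold=0.05, seed=0):
    """Fill masked (gap) pixels of a 2D array by conditional resampling of
    informed locations: for each gap pixel, randomly scan known pixels and
    paste the value whose surrounding pattern is closest to the gap's pattern."""
    rng = np.random.default_rng(seed)
    out = img.astype(float).copy()
    known = ~mask                                       # boolean map of informed pixels
    h, w = img.shape
    for (i, j) in zip(*np.nonzero(mask)):
        rows, cols = np.nonzero(known)                  # currently informed pixels
        best_val, best_d = out[rows[0], cols[0]], np.inf
        for k in rng.integers(0, rows.size, n_scan):    # random scan of informed locations
            r, c = rows[k], cols[k]
            d, cnt = 0.0, 0
            for di in range(-half_win, half_win + 1):   # compare neighborhood patterns
                for dj in range(-half_win, half_win + 1):
                    ti, tj, si, sj = i + di, j + dj, r + di, c + dj
                    if (0 <= ti < h and 0 <= tj < w and 0 <= si < h and 0 <= sj < w
                            and known[ti, tj] and known[si, sj]):
                        d += (out[ti, tj] - out[si, sj]) ** 2
                        cnt += 1
            d = d / cnt if cnt else np.inf
            if d < best_d:
                best_d, best_val = d, out[r, c]
                if best_d <= threshold:                 # accept the first good-enough match
                    break
        out[i, j] = best_val
        known[i, j] = True                              # simulated pixel becomes conditioning data
    return out
```

    Repeating the whole pass with different random seeds yields multiple, equally plausible reconstructions for one simulation case, which is how this family of methods characterizes the uncertainty of the filled gaps.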

  19. A novel PMT test system based on waveform sampling

    Science.gov (United States)

    Yin, S.; Ma, L.; Ning, Z.; Qian, S.; Wang, Y.; Jiang, X.; Wang, Z.; Yu, B.; Gao, F.; Zhu, Y.; Wang, Z.

    2018-01-01

    Compared with the traditional test system based on a QDC, TDC and scaler, a test system based on waveform sampling is constructed for signal sampling of the 8" R5912 and the 20" R12860 Hamamatsu PMTs in different energy states, from single to multiple photoelectrons. In order to achieve high throughput and to reduce the dead time in data processing, data acquisition software based on LabVIEW is developed and runs with a parallel mechanism. The analysis algorithm is realized in LabVIEW, and the spectra of charge, amplitude, signal width and rising time are analyzed offline. The results from the Charge-to-Digital Converter, Time-to-Digital Converter and waveform sampling are discussed in a detailed comparison.
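
    For the offline analysis stage mentioned above, a minimal sketch of extracting amplitude, charge, signal width and rise time from one digitized pulse might look like the following; the sampling period, baseline window and negative-going pulse shape are assumptions made for the example, not details of the actual system.

```python
import numpy as np

def analyze_pulse(samples, dt_ns=1.0, baseline_samples=50):
    """Extract basic quantities from one digitized, negative-going PMT pulse.
    samples : 1D array of voltages; dt_ns : sampling period in nanoseconds."""
    baseline = samples[:baseline_samples].mean()
    pulse = baseline - samples                   # flip so that the pulse is positive
    amplitude = float(pulse.max())
    peak = int(pulse.argmax())
    charge = float(pulse.sum()) * dt_ns          # integral in V*ns (divide by R for charge)
    above = np.nonzero(pulse > 0.5 * amplitude)[0]
    width_ns = float(above[-1] - above[0]) * dt_ns if above.size else 0.0
    lead = pulse[:peak + 1]                      # leading edge only
    t10 = int(np.argmax(lead > 0.1 * amplitude))
    t90 = int(np.argmax(lead > 0.9 * amplitude))
    rise_ns = (t90 - t10) * dt_ns                # 10%-90% rise time
    return dict(amplitude=amplitude, charge=charge, width_ns=width_ns, rise_ns=rise_ns)

# Illustrative waveform: flat baseline with noise plus one Gaussian-shaped pulse.
rng = np.random.default_rng(5)
t = np.arange(400)
wave = 0.002 * rng.standard_normal(400) - 0.05 * np.exp(-0.5 * ((t - 200) / 8.0) ** 2)
print(analyze_pulse(wave, dt_ns=1.0))
```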

  20. Multiple-relaxation-time lattice Boltzmann model for compressible fluids

    International Nuclear Information System (INIS)

    Chen Feng; Xu Aiguo; Zhang Guangcai; Li Yingjun

    2011-01-01

    We present an energy-conserving multiple-relaxation-time finite difference lattice Boltzmann model for compressible flows. The collision step is first calculated in the moment space and then mapped back to the velocity space. The moment space and corresponding transformation matrix are constructed according to group representation theory. Equilibria of the nonconserved moments are chosen according to the need of recovering the compressible Navier-Stokes equations through the Chapman-Enskog expansion. Numerical experiments showed that compressible flows with strong shocks can be well simulated by the present model. The new model works for both low- and high-speed compressible flows. It contains more physical information and has better numerical stability and accuracy than its single-relaxation-time version. - Highlights: → We present an energy-conserving MRT finite-difference LB model. → The moment space is constructed according to the group representation theory. → The new model works for both low- and high-speed compressible flows. → It has better numerical stability and wider applicable range than its SRT version.

  1. [Construction and analysis of a monitoring system with remote real-time multiple physiological parameters based on cloud computing].

    Science.gov (United States)

    Zhu, Lingyun; Li, Lianjie; Meng, Chunyan

    2014-12-01

    The existing multiple physiological parameter real-time monitoring systems have had problems, such as insufficient server capacity for physiological data storage and analysis so that data consistency cannot be guaranteed, poor real-time performance, and other issues caused by the growing scale of data. We therefore proposed a new solution for multiple physiological parameters, with clustered background data storage and processing based on cloud computing. Through our studies, a batch processing for longitudinal analysis of patients' historical data was introduced. The work covered the resource virtualization of the IaaS layer of the cloud platform, the construction of the real-time computing platform of the PaaS layer, the reception and analysis of the data stream at the SaaS layer, and the bottleneck problem of multi-parameter data transmission. The result was real-time transmission, storage and analysis of a large amount of physiological data. The simulation test results showed that the remote multiple physiological parameter monitoring system based on the cloud platform had obvious advantages in processing time and load balancing over the traditional server model. This architecture solved the problems of long turnaround time, poor real-time analysis performance, lack of extensibility and other issues that exist in traditional remote medical services. Technical support was thus provided for a "wearable wireless sensor plus mobile wireless transmission plus cloud computing service" mode of home health monitoring of multiple physiological parameters.

  2. An empirical comparison of respondent-driven sampling, time location sampling, and snowball sampling for behavioral surveillance in men who have sex with men, Fortaleza, Brazil.

    Science.gov (United States)

    Kendall, Carl; Kerr, Ligia R F S; Gondim, Rogerio C; Werneck, Guilherme L; Macena, Raimunda Hermelinda Maia; Pontes, Marta Kerr; Johnston, Lisa G; Sabin, Keith; McFarland, Willi

    2008-07-01

    Obtaining samples of populations at risk for HIV challenges surveillance, prevention planning, and evaluation. Methods used include snowball sampling, time location sampling (TLS), and respondent-driven sampling (RDS). Few studies have made side-by-side comparisons to assess their relative advantages. We compared snowball, TLS, and RDS surveys of men who have sex with men (MSM) in Fortaleza, Brazil, with a focus on comparing the socio-economic status (SES) and risk behaviors of the samples to each other, to known AIDS cases and to the general population. RDS produced a sample with wider inclusion of lower SES than snowball sampling or TLS, a finding of health significance given that the majority of AIDS cases reported among MSM in the state were of low SES. RDS also achieved the sample size faster and at lower cost. For reasons of inclusion and cost-efficiency, RDS is the sampling methodology of choice for HIV surveillance of MSM in Fortaleza.

  3. Laboratory sample turnaround times: do they cause delays in the ED?

    Science.gov (United States)

    Gill, Dipender; Galvin, Sean; Ponsford, Mark; Bruce, David; Reicher, John; Preston, Laura; Bernard, Stephani; Lafferty, Jessica; Robertson, Andrew; Rose-Morris, Anna; Stoneham, Simon; Rieu, Romelie; Pooley, Sophie; Weetch, Alison; McCann, Lloyd

    2012-02-01

    Blood tests are requested for approximately 50% of patients attending the emergency department (ED). The time taken to obtain the results is perceived as a common reason for delay. The objective of this study was therefore to investigate the turnaround time (TAT) for blood results and whether this affects patient length of stay (LOS) and to identify potential areas for improvement. A time-in-motion study was performed at the ED of the John Radcliffe Hospital (JRH), Oxford, UK. The duration of each of the stages leading up to receipt of 101 biochemistry and haematology results was recorded, along with the corresponding patient's LOS. The findings reveal that the mean time for haematology results to become available was 1 hour 6 minutes (95% CI: 29 minutes to 2 hours 13 minutes), while biochemistry samples took 1 hour 42 minutes (95% CI: 1 hour 1 minute to 4 hours 21 minutes), with some positive correlation noted with the patient LOS, but no significant variation between different days or shifts. With the fastest 10% of samples being reported within 35 minutes (haematology) and 1 hour 5 minutes (biochemistry) of request, our study showed that delays can be attributable to laboratory TAT. Given the limited ability to further improve laboratory processes, the solutions to improving TAT need to come from a collaborative and integrated approach that includes strategies before samples reach the laboratory and downstream review of results. © 2010 Blackwell Publishing Ltd.
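
    A small sketch of the kind of summary such a time-in-motion audit produces, mean turnaround time, fastest-decile turnaround time and the correlation with length of stay, is shown below with made-up numbers; none of the values correspond to the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

# Made-up paired observations: lab turnaround time (TAT) and ED length of stay, minutes.
rng = np.random.default_rng(3)
tat_min = rng.gamma(shape=3.0, scale=25.0, size=101)         # skewed TAT distribution
los_min = 60 + 0.8 * tat_min + rng.normal(0, 40, size=101)   # LOS loosely tied to TAT

print("mean TAT (min):", round(float(tat_min.mean()), 1))
print("fastest 10% reported within (min):", round(float(np.percentile(tat_min, 10)), 1))
r, p = pearsonr(tat_min, los_min)
print(f"TAT vs LOS correlation: r = {r:.2f}, p = {p:.3g}")
```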

  4. Influence of sampling depth and post-sampling analysis time on the ...

    African Journals Online (AJOL)

    Bacteriological analysis was carried out for samples taken at water depth and at 1, 6, 12 and 24 hours post-sampling. It was observed that the total and faecal coliform bacteria were significantly higher in the 3 m water depth samples than in the surface water samples (ANOVA, F = 59.41, 26.751, 9.82 (T.C); 46.41, 26.81, ...

  5. Simple DNA extraction of urine samples: Effects of storage temperature and storage time.

    Science.gov (United States)

    Ng, Huey Hian; Ang, Hwee Chen; Hoe, See Ying; Lim, Mae-Lynn; Tai, Hua Eng; Soh, Richard Choon Hock; Syn, Christopher Kiu-Choong

    2018-06-01

    Urine samples are commonly analysed in cases with suspected illicit drug consumption. In events of alleged sample mishandling, urine sample source identification may be necessary. A simple DNA extraction procedure suitable for STR typing of urine samples was established on the Promega Maxwell® 16 paramagnetic silica bead platform. A small sample volume of 1.7 mL was used. Samples were stored at room temperature, 4°C and -20°C for 100 days to investigate the influence of storage temperature and time on extracted DNA quantity and success rate of STR typing. Samples stored at room temperature exhibited a faster decline in DNA yield with time and lower typing success rates as compared to those at 4°C and -20°C. This trend can likely be attributed to DNA degradation. In conclusion, this study presents a quick and effective DNA extraction protocol from a small urine volume stored for up to 100 days at 4°C and -20°C. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. Use of multiple age tracers to estimate groundwater residence times and long-term recharge rates in arid southern Oman

    International Nuclear Information System (INIS)

    Müller, Th.; Osenbrück, K.; Strauch, G.; Pavetich, S.; Al-Mashaikhi, K.-S.; Herb, C.; Merchel, S.; Rugel, G.; Aeschbach, W.; Sanford, W.

    2016-01-01

    Multiple age tracers were measured to estimate groundwater residence times in the regional aquifer system underlying southwestern Oman. This area, known as the Najd, is one of the most arid areas in the world and is planned to be the main agricultural center of the Sultanate of Oman in the near future. The three isotopic age tracers 4He, 14C and 36Cl were measured in waters collected from wells along a line that extended roughly from the Dhofar Mountains near the Arabian Sea northward 400 km into the Empty Quarter of the Arabian Peninsula. The wells sampled were mostly open to the Umm Er Radhuma confined aquifer, although some were completed in the mostly unconfined Rus aquifer. The combined results from the three tracers indicate the age of the confined groundwater is < 100 ka in the central section north of the mountains, and up to and > one Ma in the Empty Quarter. The 14C data were used to help calibrate the 4He and 36Cl data. Mixing models suggest that long open boreholes north of the mountains compromise 14C-only interpretations there, in contrast to 4He and 36Cl calculations that are less sensitive to borehole mixing. Thus, only the latter two tracers from these more distant wells were considered reliable. In addition to the age tracers, δ2H and δ18O data suggest that seasonal monsoon and infrequent tropical cyclones are both substantial contributors to the recharge. The study highlights the advantages of using multiple chemical and isotopic data when estimating groundwater travel times and recharge rates, and when differentiating recharge mechanisms. - Highlights: • Multiple age tracers are required for the interpretation of the groundwater system. • Different tracers are applicable along different sections of the flowpath. • Groundwater residence times >1 Ma have been determined for the northern Najd area.
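
    For the radiocarbon component of such a multi-tracer study, the piston-flow relation between measured activity and residence time is t = -(t_1/2 / ln 2) ln(A / A_0). A one-function sketch is given below; it ignores the geochemical dilution corrections that real interpretations require, and the example activities are arbitrary.

```python
import math

def radiocarbon_piston_flow_age(a_measured_pmc, a_initial_pmc=100.0, half_life_yr=5730.0):
    """Apparent (uncorrected) groundwater age in years from 14C activity,
    assuming simple piston flow and no geochemical dilution of the 14C signal."""
    if not 0.0 < a_measured_pmc <= a_initial_pmc:
        raise ValueError("measured activity must be positive and <= initial activity")
    return -(half_life_yr / math.log(2.0)) * math.log(a_measured_pmc / a_initial_pmc)

# Example: 25 pmc measured against an assumed 100 pmc initial activity (~two half-lives).
print(round(radiocarbon_piston_flow_age(25.0)), "years")
```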

  7. Study on auto-plating process time versus recovery for polonium, Po-210 in environmental sample

    International Nuclear Information System (INIS)

    Jalal Sharib; Zaharudin Ahmad; Abdul Kadir Ishak; Norfaizal Mohamed; Ahmad Sanadi Abu Bakar; Yii Mei Wo; Kamarozaman Ishak; Siti Aminah Yusoff

    2008-08-01

    This study was carried out to evaluate the time effectiveness and recovery of 16 samples from 4 Kuala Muda stations during the auto-plating process procedure for determination of the polonium (Po-210) activity concentration in environmental samples. The study was performed using Kuala Muda sediment as the sample, with the same methodology throughout. The auto-plating process was run for 4, 12, 24 and 30 hours on a silver disc for 4 samples from each station, which were then counted for one (1) day using an alpha spectrometry counting system. The objective of this study is to justify the time duration of the auto-plating process, as it affects the chemical yield of Po-209. The results showed that recovery increases with time and becomes constant at 24 hours of auto-plating. This means that 24 hours is the optimum time for the auto-plating process for determination of the polonium (Po-210) activity concentration in environmental samples. (Author)

  8. Multi-Locus Next-Generation Sequence Typing of DNA Extracted From Pooled Colonies Detects Multiple Unrelated Candida albicans Strains in a Significant Proportion of Patient Samples

    Directory of Open Access Journals (Sweden)

    Ningxin Zhang

    2018-06-01

    Full Text Available The yeast Candida albicans is an important opportunistic human pathogen. For C. albicans strain typing or drug susceptibility testing, a single colony recovered from a patient sample is normally used. This is insufficient when multiple strains are present at the site sampled; how often this is the case is unclear. Previous studies, confined to oral, vaginal and vulvar samples, have yielded conflicting results and have assessed too small a number of colonies per sample to reliably detect the presence of multiple strains. We developed a next-generation sequencing (NGS) modification of the highly discriminatory C. albicans MLST (multilocus sequence typing) method, 100+1 NGS-MLST, for detection and typing of multiple strains in clinical samples. In 100+1 NGS-MLST, DNA is extracted from a pool of colonies from a patient sample and also from one of the colonies. MLST amplicons from both DNA preparations are analyzed by high-throughput sequencing. Using base call frequencies, our bespoke DALMATIONS software determines the MLST type of the single colony. If base call frequency differences between pool and single colony indicate the presence of an additional strain, the differences are used to computationally infer the second MLST type without the need for MLST of additional individual colonies. In mixes of previously typed pairs of strains, 100+1 NGS-MLST reliably detected a second strain. Inferred MLST types of second strains were always more similar to their real MLST types than to those of any of 59 other isolates (22 of 31 inferred types were identical to the real type). Using 100+1 NGS-MLST we found that 7/60 human samples, including three superficial candidiasis samples, contained two unrelated strains. In addition, at least one sample contained two highly similar variants of the same strain. The probability of samples containing unrelated strains appears to differ considerably between body sites. Our findings indicate the need for wider surveys to

  9. Effects of Long-Term Storage Time and Original Sampling Month on Biobank Plasma Protein Concentrations

    Directory of Open Access Journals (Sweden)

    Stefan Enroth

    2016-10-01

    Full Text Available The quality of clinical biobank samples is crucial to their value for life sciences research. A number of factors related to the collection and storage of samples may affect the biomolecular composition. We have studied the effect of long-term freezer storage, chronological age at sampling, and season and month of the year on the abundance levels of 108 proteins in 380 plasma samples collected from 106 Swedish women. Storage time affected 18 proteins and explained 4.8–34.9% of the observed variance. Chronological age at sample collection, after adjustment for storage time, affected 70 proteins and explained 1.1–33.5% of the variance. Seasonal variation had an effect on 15 proteins, and month (number of sun hours) affected 36 proteins and explained up to 4.5% of the variance after adjustment for storage time and age. The results show that freezer storage time and collection date (month and season) exerted effect sizes similar to those of age on the protein abundance levels. This implies that information on the sample handling history, in particular storage time, should be regarded as equally prominent covariates as age or gender and needs to be included in epidemiological studies involving protein levels.

  10. Scheduling sampling to maximize information about time dependence in experiments with limited resources

    DEFF Research Database (Denmark)

    Græsbøll, Kaare; Christiansen, Lasse Engbo

    2013-01-01

    Looking for periodicity in sampled data requires that periods (lags) of different length are represented in the sampling plan. We here present a method to assist in planning of temporal studies with sparse resources, which optimizes the number of observed time lags for a fixed amount of samples w...
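
    A greedy sketch of that idea, placing a fixed number of sampling times so that as many distinct time lags as possible are represented, might look like this; the horizon and sample budget are arbitrary choices for the example and the method in the paper may differ in detail.

```python
def greedy_lag_design(n_samples, horizon):
    """Greedily choose integer sampling times in [0, horizon] so that the set
    of pairwise lags they generate is as large as possible."""
    times = [0, horizon]                              # always keep the endpoints
    while len(times) < n_samples:
        covered = {abs(a - b) for a in times for b in times if a != b}
        best_t, best_gain = None, -1
        for t in range(1, horizon):
            if t in times:
                continue
            gain = len({abs(t - s) for s in times} - covered)
            if gain > best_gain:
                best_t, best_gain = t, gain
        times.append(best_t)
    return sorted(times)

times = greedy_lag_design(n_samples=6, horizon=20)
lags = sorted({abs(a - b) for a in times for b in times if a != b})
print("sampling times:", times)
print("distinct lags covered:", lags)
```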

  11. Relaxation time measurements of white and grey matter in multiple sclerosis patients

    International Nuclear Information System (INIS)

    Rinck, P.A.; Appel, B.; Moens, E.; Academisch Ziekenhuis Middelheim, Antwerp

    1987-01-01

    In a patient population of some 450 with definite, probable, and possible multiple sclerosis referred to us for MRI, some 40 suffering from definite MS were chosen randomly for relaxation time measurements of plaque-free grey and white matter. T1 values could not be used for diagnostic purposes owing to their broad standard deviation. Overall white matter T2 was slightly higher in MS patients than in a non-MS population (94 ms versus 89 ms). Because these changes are not visible in MR images, relaxation time measurements may prove valuable for differential diagnosis. (orig.) [de

  12. Surgery for Pulmonary Multiple Ground Glass Opacities

    Directory of Open Access Journals (Sweden)

    Qun WANG

    2016-06-01

    Full Text Available The incidence of pulmonary ground glass opacity (GGO) has been increasing in recent years, with a great number of patients having multiple GGOs. Unfortunately, the management of multiple GGOs is still controversial. Pulmonary GGO is a radiological term covering different pathological types, and some GGOs are early-stage lung cancer. GGO is an indolent nodule; only a small proportion of GGOs change during observation, which does not influence the efficacy of surgery. The timing of surgery for multiple GGOs mainly depends on the predominant nodule, and surgery is recommended if the solid component of the predominant nodule is >5 mm. Either lobectomy or sub-lobectomy is feasible. GGOs other than the predominant nodule can be left unresected. Multiple GGOs with high risk factors need mediastinal lymph node dissection or sampling.

  13. A sampling approach to constructing Lyapunov functions for nonlinear continuous–time systems

    NARCIS (Netherlands)

    Bobiti, R.V.; Lazar, M.

    2016-01-01

    The problem of constructing a Lyapunov function for continuous-time nonlinear dynamical systems is tackled in this paper via a sampling-based approach. The main idea of the sampling-based method is to verify a Lyapunov-type inequality for a finite number of points (known state vectors) in the
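
    A stripped-down illustration of the sampling idea, checking a Lyapunov-type decrease inequality for a candidate function at a finite grid of states, is given below for a toy two-dimensional system; the dynamics, candidate V, margin and grid are all invented for the example, and the actual method adds bounds that extend the finite check to a continuum guarantee.

```python
import numpy as np

def f(x):
    """Toy continuous-time nonlinear dynamics (illustrative only)."""
    return np.array([-x[0] + x[1] ** 2, -x[1] - x[0] * x[1]])

def V(x):
    """Candidate Lyapunov function."""
    return x[0] ** 2 + x[1] ** 2

def V_dot(x):
    """Derivative of V along the vector field f."""
    grad = np.array([2.0 * x[0], 2.0 * x[1]])
    return float(grad @ f(x))

def sampled_lyapunov_check(radius=1.0, n_per_axis=41, margin=1e-6):
    """Check V_dot(x) < -margin * ||x||^2 at every grid sample in a ball
    around the origin (excluding a tiny neighborhood of the origin itself)."""
    grid = np.linspace(-radius, radius, n_per_axis)
    for x1 in grid:
        for x2 in grid:
            x = np.array([x1, x2])
            r2 = float(x @ x)
            if 1e-4 < r2 <= radius ** 2 and V_dot(x) >= -margin * r2:
                return False, x                     # counterexample found
    return True, None

ok, counterexample = sampled_lyapunov_check()
print("decrease condition holds at all samples:", ok)
```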

  14. Crystallite size variation of TiO_2 samples depending time heat treatment

    International Nuclear Information System (INIS)

    Galante, A.G.M.; Paula, F.R. de; Montanhera, M.A.; Pereira, E.A.; Spada, E.R.

    2016-01-01

    Titanium dioxide (TiO_2) is an oxide semiconductor that may be found in mixed phase or in distinct phases: brookite, anatase and rutile. In this work, the influence of the residence time at a given temperature on the physical properties of TiO_2 powder was studied. After the powder synthesis, the samples were divided and heat treated at 650 °C, with a ramp of up to 3 °C/min and a residence time ranging from 0 to 20 hours, and were subsequently characterized by X-ray diffraction. Analysis of the obtained diffraction patterns showed that, from a 5-hour residence time onwards, two distinct phases coexist: anatase and rutile. The average crystallite size of each sample was also calculated. The results showed an increase in average crystallite size with increasing residence time of the heat treatment. (author)
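
    Average crystallite size from X-ray diffraction peak broadening is commonly obtained with the Scherrer equation, D = K λ / (β cos θ). A small sketch is given below; the shape factor and Cu K-alpha wavelength are the usual textbook values, assumed here rather than taken from the paper.

```python
import math

def scherrer_crystallite_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
    """Average crystallite size D in nm from the Scherrer equation.
    fwhm_deg      : peak full width at half maximum, in degrees 2-theta
    two_theta_deg : peak position, in degrees 2-theta
    wavelength_nm : X-ray wavelength (Cu K-alpha assumed here)
    k             : dimensionless shape factor (~0.9 assumed)"""
    beta = math.radians(fwhm_deg)             # FWHM converted to radians
    theta = math.radians(two_theta_deg / 2)   # Bragg angle
    return k * wavelength_nm / (beta * math.cos(theta))

# Example: an anatase (101) reflection near 25.3 degrees with a 0.4 degree FWHM.
print(round(scherrer_crystallite_size(0.4, 25.3), 1), "nm")
```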

  15. Development and tests of MCP based timing and multiplicity detector for MIPs

    Science.gov (United States)

    Feofilov, G.; Kondratev, V.; Stolyarov, O.; Tulina, T.; Valiev, F.; Vinogradov, L.

    2017-01-01

    We present a summary of technological developments and tests of an MCP-based large area detector aimed at precise timing and charged-particle multiplicity measurements. The results obtained in the course of these developments, including isochronous (simultaneous) precise signal readout, passive summation of 1 ns signals, fast (1 GHz) front-end electronics, and miniature vacuum systems, could be of potential interest for a number of future applications in different fields.

  16. The influence of Multiple Goals on Driving Behavior: the case of Safety, Time Saving, and Fuel Saving

    OpenAIRE

    DOGAN, Ebru; STEG, Linda; DELHOMME, Patricia

    2011-01-01

    Due to the innate complexity of the task, drivers have to manage multiple goals while driving, and the importance of certain goals may vary over time, leading to priority being given to different goals depending on the circumstances. This study aimed to investigate drivers' behavioral regulation while managing multiple goals during driving. To do so, participants drove on urban and rural roads in a driving simulator while trying to manage fuel saving and time saving goals, besides the safety goal...

  17. Sampling rare fluctuations of discrete-time Markov chains

    Science.gov (United States)

    Whitelam, Stephen

    2018-03-01

    We describe a simple method that can be used to sample the rare fluctuations of discrete-time Markov chains. We focus on the case of Markov chains with well-defined steady-state measures, and derive expressions for the large-deviation rate functions (and upper bounds on such functions) for dynamical quantities extensive in the length of the Markov chain. We illustrate the method using a series of simple examples, and use it to study the fluctuations of a lattice-based model of active matter that can undergo motility-induced phase separation.
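
    A naive sketch of the quantity involved, estimating the empirical rate function I(a) ≈ -ln P(A_T ≈ a)/T of a time-averaged observable of a two-state chain by direct simulation, is shown below; it does not implement the paper's sampling scheme, and the transition probability and trajectory counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
p_stay = 0.9                    # probability of keeping the current state (+1 or -1)
T, n_traj = 200, 20000

def time_averaged_state():
    """Time average of the state of a symmetric two-state Markov chain."""
    s, total = 1, 0
    for _ in range(T):
        total += s
        if rng.random() > p_stay:
            s = -s
    return total / T

A = np.array([time_averaged_state() for _ in range(n_traj)])

# Empirical rate function I(a) ~ -ln P(A_T in bin) / T, for bins actually visited.
hist, edges = np.histogram(A, bins=21, range=(-1, 1))
centers = 0.5 * (edges[:-1] + edges[1:])
prob = hist / n_traj
visited = prob > 0
rate = -np.log(prob[visited]) / T
for a, I in zip(centers[visited], rate):
    print(f"a = {a:+.2f}   I(a) ~ {I:.4f}")
```

    Bins far out in the tails are never visited by direct simulation of this kind, which is precisely the regime that dedicated rare-fluctuation sampling methods are designed to reach.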

  18. Classification of Small-Scale Eucalyptus Plantations Based on NDVI Time Series Obtained from Multiple High-Resolution Datasets

    Directory of Open Access Journals (Sweden)

    Hailang Qiao

    2016-02-01

    Full Text Available Eucalyptus, a short-rotation plantation, has been expanding rapidly in southeast China in recent years owing to its short growth cycle and high yield of wood. Effective identification of eucalyptus, therefore, is important for monitoring land use changes and investigating environmental quality. For this article, we used remote sensing images over 15 years (one per year) with a 30-m spatial resolution, including Landsat 5 thematic mapper images, Landsat 7 enhanced thematic mapper images, and HJ 1A/1B images. These data were used to construct a 15-year Normalized Difference Vegetation Index (NDVI) time series for several cities in Guangdong Province, China. Eucalyptus reference NDVI time series sub-sequences were acquired, including one-year-long and two-year-long growing periods, using investigated eucalyptus samples in the study region. In order to compensate for the discontinuity of the NDVI time series that is a consequence of the relatively coarse temporal resolution, we developed an inverted triangle area methodology. Using this methodology, the images were classified on the basis of the matching degree between the NDVI time series and two reference NDVI time series sub-sequences during the growing period of the eucalyptus rotations. Three additional methodologies (Bounding Envelope, City Block, and Standardized Euclidean Distance) were also tested and used as a comparison group. Threshold coefficients for the algorithms were adjusted using commission–omission error criteria. The results show that the triangle area methodology out-performed the other methodologies in classifying eucalyptus plantations. Threshold coefficients and an optimal discriminant function were determined using a mosaic photograph that had been taken by an unmanned aerial vehicle platform. Good stability was found as we performed further validation using multiple-year data from the high-resolution Gaofen Satellite 1 (GF-1) observations of larger regions. Eucalyptus planting dates
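    The inverted triangle area measure itself is not defined in the abstract; the sketch below instead uses a simple standardized-Euclidean-style distance (one of the comparison methods named) to match a pixel's NDVI series against a reference growing-period sub-sequence, with hypothetical NDVI values and threshold.

    ```python
    import numpy as np

    def standardized_distance(window, reference):
        """Distance between an NDVI sub-sequence and a reference of equal length;
        here a single variance weight from the reference is used for illustration,
        whereas a real workflow would need per-date variances from training data."""
        window, reference = np.asarray(window, float), np.asarray(reference, float)
        var = np.var(reference) + 1e-9
        return np.sqrt(np.sum((window - reference) ** 2 / var))

    def classify_pixel(ndvi_series, reference_subsequence, threshold):
        """Slide the reference over the pixel's multi-year NDVI series and flag a
        match if any window falls below the distance threshold (thresholds would be
        tuned via commission-omission error, as in the record above)."""
        m = len(reference_subsequence)
        dists = [standardized_distance(ndvi_series[i:i + m], reference_subsequence)
                 for i in range(len(ndvi_series) - m + 1)]
        best = min(dists)
        return best < threshold, best

    # Hypothetical annual NDVI samples for one pixel and a one-year eucalyptus reference curve.
    pixel = [0.32, 0.45, 0.61, 0.72, 0.70, 0.55, 0.40, 0.62, 0.74, 0.71, 0.58, 0.44]
    reference = [0.42, 0.60, 0.73, 0.70]
    print(classify_pixel(pixel, reference, threshold=0.8))
    ```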

  19. Determine Multiple Elements Simultaneously in the Sera of Umbilical Cord Blood Samples-a Very Simple Method.

    Science.gov (United States)

    Liang, Chunmei; Li, Zhijuan; Xia, Xun; Wang, Qunan; Tao, Ruiwen; Tao, Yiran; Xiang, Haiyun; Tong, Shilu; Tao, Fangbiao

    2017-05-01

    Analyzing the concentrations of heavy metals in the sera of umbilical cord blood samples can provide useful information about prenatal exposure to environmental agents. An analytical method based on ICP-MS to simultaneously determine multiple elements in umbilical cord blood samples was developed for assessing in utero exposure to metallic and metalloid elements. The method required as little as 100 μL of serum, diluted 1:25, for direct analysis. A matrix-matched protocol was used to eliminate matrix interference, and a kinetic energy discrimination mode was used to eliminate polyatomic ion interference. The assay was completed on average within 4 min, with detection limits ranging from 0.0002 to 44.4 μg/L for all targeted elements. The detection rates were 100 % for most elements, with the exception of cadmium (Cd), lead (Pb), and mercury (Hg). The testing results for the certified reference materials were ideal. The method is simple and sensitive, so it is suitable for the monitoring of large quantities of samples.

  20. Automated high-throughput flow-through real-time diagnostic system

    Science.gov (United States)

    Regan, John Frederick

    2012-10-30

    An automated real-time flow-through system capable of processing multiple samples in an asynchronous, simultaneous, and parallel fashion for nucleic acid extraction and purification, followed by assay assembly, genetic amplification, multiplex detection, analysis, and decontamination. The system is able to hold and access an unlimited number of fluorescent reagents that may be used to screen samples for the presence of specific sequences. The apparatus works by associating extracted and purified sample with a series of reagent plugs that have been formed in a flow channel and delivered to a flow-through real-time amplification detector that has a multiplicity of optical windows, to which the sample-reagent plugs are placed in an operative position. The diagnostic apparatus includes sample multi-position valves, a master sample multi-position valve, a master reagent multi-position valve, reagent multi-position valves, and an optical amplification/detection system.

  1. Disposable MoS2-Arrayed MALDI MS Chip for High-Throughput and Rapid Quantification of Sulfonamides in Multiple Real Samples.

    Science.gov (United States)

    Zhao, Yaju; Tang, Minmin; Liao, Qiaobo; Li, Zhoumin; Li, Hui; Xi, Kai; Tan, Li; Zhang, Mei; Xu, Danke; Chen, Hong-Yuan

    2018-04-27

    In this work, we demonstrate, for the first time, the development of a disposable MoS2-arrayed matrix-assisted laser desorption/ionization mass spectrometry (MALDI MS) chip combined with an immunoaffinity enrichment method for high-throughput, rapid, and simultaneous quantitation of multiple sulfonamides (SAs). The disposable MALDI MS chip was designed and fabricated by MoS2 array formation on a commercial indium tin oxide (ITO) glass slide. A series of SAs were analyzed, and clear deprotonated signals were obtained in negative-ion mode. Compared with a MoS2-arrayed commercial steel plate, the prepared MALDI MS chip exhibited comparable LDI efficiency, providing a good alternative and disposable substrate for MALDI MS analysis. Furthermore, an internal standard (IS) was deposited onto the MoS2 array in advance to simplify the experimental process for MALDI MS quantitation. 96 sample spots could be analyzed within 10 min on a single chip to perform quantitative analysis, recovery studies, and real foodstuff detection. Upon targeted extraction and enrichment by antibody-conjugated magnetic beads, five SAs were quantitatively determined by the IS-first method with a linear range of 0.5-10 ng/mL (R2 > 0.990). Good recoveries and repeatability were obtained for spiked pork, egg, and milk samples. SAs in several real foodstuffs were successfully identified and quantified. The developed method may provide a promising tool for the routine analysis of antibiotic residues in real samples.

  2. Relationship between timed 25-foot walk and diffusion tensor imaging in multiple sclerosis.

    Science.gov (United States)

    Klineova, Sylvia; Farber, Rebecca; Saiote, Catarina; Farrell, Colleen; Delman, Bradley N; Tanenbaum, Lawrence N; Friedman, Joshua; Inglese, Matilde; Lublin, Fred D; Krieger, Stephen

    2016-01-01

    The majority of multiple sclerosis patients experience impaired walking ability, which impacts quality of life. The timed 25-foot walk is commonly used to gauge gait impairment, but results can be broadly variable. Objective biological markers that correlate closely with patients' disability are needed. Diffusion tensor imaging, quantifying fiber tract integrity, might provide such information. In this project we analyzed relationships between the timed 25-foot walk and conventional and diffusion tensor imaging magnetic resonance imaging markers. A cohort of gait-impaired multiple sclerosis patients underwent brain and cervical spinal cord magnetic resonance imaging. Diffusion tensor imaging mean diffusivity and fractional anisotropy were measured on the brain corticospinal tracts and on a spinal cord restricted field of view at C2/3. We analyzed relationships between the baseline timed 25-foot walk and conventional and diffusion tensor imaging magnetic resonance imaging markers. Multivariate linear regression analysis showed a statistically significant association between several magnetic resonance imaging and diffusion tensor imaging metrics and the timed 25-foot walk: brain corticospinal tract mean diffusivity (p = 0.004), brain corticospinal tract axial and radial diffusivity (p = 0.004 and 0.02), grey matter volume (p = 0.05), white matter volume (p = 0.03) and normalized brain volume (p = 0.01). The linear regression model containing corticospinal tract mean diffusivity and controlling for gait assistance was the best-fit model (p = 0.004). Our results suggest an association between diffusion tensor imaging metrics and gait impairment, as evidenced by brain corticospinal tract mean diffusivity and the timed 25-foot walk.

  3. Optimizing Ship Speed to Minimize Total Fuel Consumption with Multiple Time Windows

    Directory of Open Access Journals (Sweden)

    Jae-Gon Kim

    2016-01-01

    Full Text Available We study the ship speed optimization problem with the objective of minimizing the total fuel consumption. We consider multiple time windows for each port call as constraints and formulate the problem as a nonlinear mixed integer program. We derive intrinsic properties of the problem and develop an exact algorithm based on the properties. Computational experiments show that the suggested algorithm is very efficient in finding an optimal solution.
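    The record formulates the problem as a nonlinear mixed integer program and solves it exactly; as a toy illustration only, the sketch below brute-forces per-leg speeds for an assumed two-leg instance with cumulative arrival-time windows and an assumed cubic fuel-rate law (so fuel per leg scales with distance times speed squared).

    ```python
    import itertools
    import numpy as np

    # Illustrative instance: two legs (distances in nautical miles), a time window at
    # each port call, and a fuel rate assumed to grow cubically with speed.
    legs_nm = [240.0, 300.0]
    windows_h = [(18.0, 30.0), (45.0, 60.0)]   # (earliest, latest) cumulative arrival time
    c = 0.00012                                 # assumed fuel coefficient (t per nm per kn^2)

    def total_fuel(speeds):
        # Fuel per leg = rate(v) * sailing time = c * v^3 * (d / v) = c * d * v^2.
        return sum(c * d * v ** 2 for d, v in zip(legs_nm, speeds))

    def feasible(speeds):
        t = 0.0
        for d, v, (lo, hi) in zip(legs_nm, speeds, windows_h):
            t += d / v
            if t > hi:
                return False
            t = max(t, lo)   # waiting until the window opens is allowed
        return True

    grid = np.arange(8.0, 20.01, 0.25)          # candidate speeds in knots
    candidates = [s for s in itertools.product(grid, repeat=len(legs_nm)) if feasible(s)]
    best = min(candidates, key=total_fuel)
    print("speeds (kn):", best, " fuel (t):", round(total_fuel(best), 2))
    ```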

  4. A Real-Time PCR Detection of Genus Salmonella in Meat and Milk Samples

    Directory of Open Access Journals (Sweden)

    Jaroslav Pochop

    2013-05-01

    Full Text Available The aim of this study was to follow the contamination of ready-to-eat milk and meat products with Salmonella spp. by using the Step One real-time PCR. Classical microbiological methods for detection of food-borne bacteria involve the use of pre-enrichment and/or specific enrichment, followed by the isolation of the bacteria on solid media and a final confirmation by biochemical and/or serological tests. We used the PrepSEQ Rapid Spin Sample Preparation Kit for isolation of DNA and the SensiFAST SYBR Hi-ROX Kit for the real-time PCR. In the investigated samples without incubation, we could detect a strain of Salmonella sp. in five out of twenty-three samples (swabs). This Step One real-time PCR assay is extremely useful for any laboratory in possession of a real-time PCR instrument. It is a fast, reproducible, simple, specific and sensitive way to detect nucleic acids, which could be used in clinical diagnostic tests in the future. Our results indicated that the Step One real-time PCR assay developed in this study could sensitively detect Salmonella spp. in ready-to-eat food.

  5. CyLineUp: A Cytoscape app for visualizing data in network small multiples.

    Science.gov (United States)

    Costa, Maria Cecília D; Slijkhuis, Thijs; Ligterink, Wilco; Hilhorst, Henk W M; de Ridder, Dick; Nijveen, Harm

    2016-01-01

    CyLineUp is a Cytoscape 3 app for the projection of high-throughput measurement data from multiple experiments/samples on a network or pathway map using "small multiples". This visualization method allows for easy comparison of different experiments in the context of the network or pathway. The user can import various kinds of measurement data and select any appropriate Cytoscape network or WikiPathways pathway map. CyLineUp creates small multiples by replicating the loaded network as many times as there are experiments/samples (e.g. time points, stress conditions, tissues, etc.). The measurement data for each experiment are then mapped onto the nodes (genes, proteins etc.) of the corresponding network using a color gradient. Each step of creating the visualization can be customized to the user's needs. The results can be exported as a high quality vector image.

  6. Modified multiple time scale method for solving strongly nonlinear damped forced vibration systems

    Science.gov (United States)

    Razzak, M. A.; Alam, M. Z.; Sharif, M. N.

    2018-03-01

    In this paper, a modified multiple time scale (MTS) method is employed to solve strongly nonlinear forced vibration systems. Only the first-order approximation is considered in order to avoid complexity. The formulation and the determination of the solution procedure are easy and straightforward. The classical multiple time scale (MS) method and the multiple scales Lindstedt-Poincare (MSLP) method do not give the desired results for strongly nonlinear forced vibration systems with strong damping effects. The main aim of this paper is to remove these limitations. Two examples are considered to illustrate the effectiveness and convenience of the present procedure. The approximate external frequencies and the corresponding approximate solutions are determined by the present method. The results agree well with the corresponding numerical solution (considered to be exact) and are also better than other existing results. For weak nonlinearities with a weak damping effect, the absolute relative error of the first-order approximate external frequency obtained in this paper is only 0.07% when the amplitude A = 1.5, while the relative error given by the MSLP method is as large as 28.81%. Furthermore, for strong nonlinearities with a strong damping effect, the absolute relative error found in this article is only 0.02%, whereas the relative error obtained by the MSLP method is 24.18%. Therefore, the present method is not only valid for weakly nonlinear damped forced systems, but also gives better results for strongly nonlinear systems with both weak and strong damping effects.

  7. Sampling Methodologies for Epidemiologic Surveillance of Men Who Have Sex with Men and Transgender Women in Latin America: An Empiric Comparison of Convenience Sampling, Time Space Sampling, and Respondent Driven Sampling

    OpenAIRE

    Clark, J. L.; Konda, K. A.; Silva-Santisteban, A.; Peinado, J.; Lama, J. R.; Kusunoki, L.; Perez-Brumer, A.; Pun, M.; Cabello, R.; Sebastian, J. L.; Suarez-Ognio, L.; Sanchez, J.

    2014-01-01

    Alternatives to convenience sampling (CS) are needed for HIV/STI surveillance of most-at-risk populations in Latin America. We compared CS, time space sampling (TSS), and respondent driven sampling (RDS) for recruitment of men who have sex with men (MSM) and transgender women (TW) in Lima, Peru. During concurrent 60-day periods from June-August, 2011, we recruited MSM/TW for epidemiologic surveillance using CS, TSS, and RDS. A total of 748 participants were recruited through CS, 233 through T...

  8. Multiple time scales of adaptation in auditory cortex neurons.

    Science.gov (United States)

    Ulanovsky, Nachum; Las, Liora; Farkas, Dina; Nelken, Israel

    2004-11-17

    Neurons in primary auditory cortex (A1) of cats show strong stimulus-specific adaptation (SSA). In probabilistic settings, in which one stimulus is common and another is rare, responses to common sounds adapt more strongly than responses to rare sounds. This SSA could be a correlate of auditory sensory memory at the level of single A1 neurons. Here we studied adaptation in A1 neurons, using three different probabilistic designs. We showed that SSA has several time scales concurrently, spanning many orders of magnitude, from hundreds of milliseconds to tens of seconds. Similar time scales are known for the auditory memory span of humans, as measured both psychophysically and using evoked potentials. A simple model, with linear dependence on both short-term and long-term stimulus history, provided a good fit to A1 responses. Auditory thalamus neurons did not show SSA, and their responses were poorly fitted by the same model. In addition, SSA increased the proportion of failures in the responses of A1 neurons to the adapting stimulus. Finally, SSA caused a bias in the neuronal responses to unbiased stimuli, enhancing the responses to eccentric stimuli. Therefore, we propose that a major function of SSA in A1 neurons is to encode auditory sensory memory on multiple time scales. This SSA might play a role in stream segregation and in binding of auditory objects over many time scales, a property that is crucial for processing of natural auditory scenes in cats and of speech and music in humans.

  9. Optoelectronic time-domain characterization of a 100 GHz sampling oscilloscope

    International Nuclear Information System (INIS)

    Füser, H; Baaske, K; Kuhlmann, K; Judaschke, R; Pierz, K; Bieler, M; Eichstädt, S; Elster, C

    2012-01-01

    We have carried out an optoelectronic measurement of the impulse response of an ultrafast sampling oscilloscope with a nominal bandwidth of 100 GHz within a time window of approximately 100 ps. Our experimental technique also considers frequency components above the cut-off frequency of higher order modes of the 1.0 mm coaxial line, which is shown to be important for the specification of the impulse response of ultrafast sampling oscilloscopes. Additionally, we have measured the reflection coefficient of the sampling head induced by the mismatch of the sampling circuit and the coaxial connector which is larger than 0.5 for certain frequencies. The uncertainty analysis has been performed using the Monte Carlo method of Supplement 1 to the 'Guide to the Expression of Uncertainty in Measurement' and correlations in the estimated impulse response have been determined. Our measurements extend previous work which deals with the characterization of 70 GHz oscilloscopes and the measurement of 100 GHz oscilloscopes up to the cut-off frequency of higher order modes

  10. Digital Droplet Multiple Displacement Amplification (ddMDA) for Whole Genome Sequencing of Limited DNA Samples.

    Directory of Open Access Journals (Sweden)

    Minsoung Rhee

    Full Text Available Multiple displacement amplification (MDA) is a widely used technique for amplification of DNA from samples containing limited amounts of DNA (e.g., uncultivable microbes or clinical samples) before whole genome sequencing. Despite its advantages of high yield and fidelity, it suffers from high amplification bias and non-specific amplification when amplifying sub-nanogram amounts of template DNA. Here, we present a microfluidic digital droplet MDA (ddMDA) technique where partitioning of the template DNA into thousands of sub-nanoliter droplets, each containing a small number of DNA fragments, greatly reduces the competition among DNA fragments for primers and polymerase, thereby greatly reducing amplification bias. Consequently, the ddMDA approach enabled a more uniform coverage of amplification over the entire length of the genome, with significantly lower bias and non-specific amplification than conventional MDA. For a sample containing 0.1 pg/μL of E. coli DNA (equivalent to ~3/1000 of an E. coli genome per droplet), ddMDA achieves a 65-fold increase in coverage in de novo assembly, and more than a 20-fold increase in specificity (percentage of reads mapping to E. coli) compared to the conventional tube MDA. ddMDA offers a powerful method useful for many applications including medical diagnostics, forensics, and environmental microbiology.

  11. Cross-sample entropy of foreign exchange time series

    Science.gov (United States)

    Liu, Li-Zhi; Qian, Xi-Yuan; Lu, Heng-Yao

    2010-11-01

    The correlation of foreign exchange rates in currency markets is investigated based on empirical data for DKK/USD, NOK/USD, CAD/USD, JPY/USD, KRW/USD, SGD/USD, THB/USD and TWD/USD over the period from 1995 to 2002. The cross-SampEn (cross-sample entropy) method is used to compare the returns of every two exchange rate time series to assess their degree of asynchrony. The method for calculating the confidence interval of SampEn is extended and applied to cross-SampEn. The cross-SampEn and its confidence interval for every two of the exchange rate time series are calculated for the periods 1995-1998 (before the Asian currency crisis) and 1999-2002 (after the Asian currency crisis). The results show that the cross-SampEn of every two of these exchange rates becomes higher after the Asian currency crisis, indicating a higher asynchrony between the exchange rates. Especially for Singapore, Thailand and Taiwan, the cross-SampEn values after the Asian currency crisis are significantly higher than those before the crisis. Comparison with the correlation coefficient shows that cross-SampEn is superior in describing the correlation between time series.
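    A minimal implementation of cross-sample entropy along the usual lines (standardize both series, count template matches of length m and m+1 under a Chebyshev tolerance r, and take -ln(A/B)) is sketched below; the exact counting conventions and the confidence-interval extension of the record are not reproduced.

    ```python
    import numpy as np

    def cross_sample_entropy(u, v, m=2, r=0.2):
        """Cross-SampEn of two series: -ln(A/B), where B counts template matches of
        length m and A of length m + 1 between u (templates) and v (targets)."""
        u = (np.asarray(u, float) - np.mean(u)) / np.std(u)
        v = (np.asarray(v, float) - np.mean(v)) / np.std(v)
        n = min(len(u), len(v))

        def count(mm):
            matches = 0
            for i in range(n - mm):
                for j in range(n - mm):
                    if np.max(np.abs(u[i:i + mm] - v[j:j + mm])) <= r:
                        matches += 1
            return matches

        B, A = count(m), count(m + 1)
        return -np.log(A / B) if A > 0 and B > 0 else np.inf

    # Hypothetical daily returns of two partially synchronized exchange-rate series.
    rng = np.random.default_rng(2)
    x = rng.normal(size=300)
    y = 0.6 * x + 0.8 * rng.normal(size=300)
    print(f"cross-SampEn = {cross_sample_entropy(x, y):.3f}")
    ```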

  12. Inverse chaos synchronization in linearly and nonlinearly coupled systems with multiple time-delays

    International Nuclear Information System (INIS)

    Shahverdiev, E.M.; Hashimov, R.H.; Nuriev, R.A.; Hashimova, L.H.; Huseynova, E.M.; Shore, K.A.

    2005-04-01

    We report on inverse chaos synchronization between two unidirectionally linearly and nonlinearly coupled chaotic systems with multiple time-delays and find the existence and stability conditions for different synchronization regimes. We also study the effect of parameter mismatches on synchronization regimes. The method is tested on the famous Ikeda model. Numerical simulations fully support the analytical approach. (author)

  13. Time-of-flight studies of multiple Bragg reflections in cylindrically bent perfect crystals

    Czech Academy of Sciences Publication Activity Database

    Mikula, Pavol; Furusaka, M.; Ohkubob, K.; Šaroun, Jan

    2012-01-01

    Roč. 45, č. 12 (2012), s. 1248-1253 ISSN 0021-8898 R&D Projects: GA AV ČR KJB100480901; GA ČR GAP204/10/0654 Institutional support: RVO:61389005 Keywords : neutron diffraction * time-of-flight method * multiple reflections * bent perfect crystals Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders Impact factor: 3.343, year: 2012

  14. Wuchereria bancrofti in Tanzania: microfilarial periodicity and effect of blood sampling time on microfilarial intensities

    DEFF Research Database (Denmark)

    Simonsen, Poul Erik; Niemann, L.; Meyrowitsch, Dan Wolf

    1997-01-01

    The circadian periodicity of Wuchereria bancrofti microfilarial (mf) intensities in peripheral blood was analysed in a group of infected individuals from an endemic community in north-eastern Tanzania. The mf density was quantified at two-hourly intervals for 24 hours. A clear nocturnal periodic...... of blood sampling before peak time is discussed, and the importance of taking sampling time into consideration when analysing data from epidemiological studies is emphasized. A simple method is devised which can be used to adjust for the influence of time on mf intensities, in studies where accurate...... information on mf intensities is necessary, and where it is impossible to obtain all samples at peak time....

  15. The multiple imputation method: a case study involving secondary data analysis.

    Science.gov (United States)

    Walani, Salimah R; Cleland, Charles M

    2015-05-01

    To illustrate with the example of a secondary data analysis study the use of the multiple imputation method to replace missing data. Most large public datasets have missing data, which need to be handled by researchers conducting secondary data analysis studies. Multiple imputation is a technique widely used to replace missing values while preserving the sample size and sampling variability of the data. The 2004 National Sample Survey of Registered Nurses. The authors created a model to impute missing values using the chained equation method. They used imputation diagnostics procedures and conducted regression analysis of imputed data to determine the differences between the log hourly wages of internationally educated and US-educated registered nurses. The authors used multiple imputation procedures to replace missing values in a large dataset with 29,059 observations. Five multiple imputed datasets were created. Imputation diagnostics using time series and density plots showed that imputation was successful. The authors also present an example of the use of multiple imputed datasets to conduct regression analysis to answer a substantive research question. Multiple imputation is a powerful technique for imputing missing values in large datasets while preserving the sample size and variance of the data. Even though the chained equation method involves complex statistical computations, recent innovations in software and computation have made it possible for researchers to conduct this technique on large datasets. The authors recommend nurse researchers use multiple imputation methods for handling missing data to improve the statistical power and external validity of their studies.
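    A minimal sketch of the workflow described above, using scikit-learn's IterativeImputer as a stand-in for chained-equation imputation and pooling the point estimates across five completed datasets; the variables and data are illustrative, not those of the National Sample Survey of Registered Nurses.

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer
    from sklearn.linear_model import LinearRegression

    # Simulate a dataset with an assumed wage model and inject missingness.
    rng = np.random.default_rng(3)
    n = 1000
    df = pd.DataFrame({
        "years_experience": rng.uniform(0, 30, n),
        "internationally_educated": rng.integers(0, 2, n).astype(float),
    })
    df["log_hourly_wage"] = (3.0 + 0.01 * df["years_experience"]
                             - 0.05 * df["internationally_educated"]
                             + rng.normal(0, 0.1, n))
    df.loc[rng.random(n) < 0.2, "years_experience"] = np.nan

    # Create five imputed datasets (as in the study), fit the same regression on
    # each, and pool the coefficients (Rubin's rule for the point estimate).
    coefs = []
    for seed in range(5):
        imputer = IterativeImputer(random_state=seed, sample_posterior=True)
        completed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
        model = LinearRegression().fit(
            completed[["years_experience", "internationally_educated"]],
            completed["log_hourly_wage"])
        coefs.append(model.coef_)

    print("pooled coefficients:", np.mean(coefs, axis=0))
    ```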

  16. A novel heterogeneous training sample selection method on space-time adaptive processing

    Science.gov (United States)

    Wang, Qiang; Zhang, Yongshun; Guo, Yiduo

    2018-04-01

    The performance of ground target detection with space-time adaptive processing (STAP) decreases when the clutter power becomes non-homogeneous because training samples are contaminated by target-like signals. In order to solve this problem, a novel non-homogeneous training sample selection method based on sample similarity is proposed, which converts the training sample selection into a convex optimization problem. Firstly, the existing deficiencies of sample selection using the generalized inner product (GIP) are analyzed. Secondly, the similarities of different training samples are obtained by calculating the mean-Hausdorff distance, so as to reject contaminated training samples. Thirdly, the cell under test (CUT) and the remaining training samples are projected into the subspace orthogonal to the target in the CUT, and the mean-Hausdorff distances between the projected CUT and the training samples are calculated. Fourthly, the distances are sorted by value, and the training samples with the larger values are preferentially selected to realize the dimension reduction. Finally, simulation results with Mountain-Top data verify the effectiveness of the proposed method.
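    The full procedure involves GIP analysis and projection into the target-orthogonal subspace; the sketch below only illustrates the similarity-screening idea, computing pairwise mean-Hausdorff distances between training samples (each treated as a set of amplitude values, an assumption made purely for illustration) and keeping the mutually most similar ones.

    ```python
    import numpy as np

    def mean_hausdorff(a, b):
        """Symmetric mean Hausdorff distance between two samples, each treated as a
        set of scalar amplitudes (a generic similarity measure, not the projected
        version used in the record above)."""
        a, b = np.abs(np.asarray(a, float)), np.abs(np.asarray(b, float))
        d_ab = np.mean([np.min(np.abs(x - b)) for x in a])
        d_ba = np.mean([np.min(np.abs(x - a)) for x in b])
        return 0.5 * (d_ab + d_ba)

    def select_homogeneous(samples, n_keep):
        """Keep the n_keep training samples most similar (on average) to the others,
        rejecting likely target-contaminated ones."""
        k = len(samples)
        avg = np.zeros(k)
        for i in range(k):
            avg[i] = np.mean([mean_hausdorff(samples[i], samples[j])
                              for j in range(k) if j != i])
        return np.argsort(avg)[:n_keep]

    # Hypothetical range profiles: 18 clutter-only samples and 2 contaminated ones.
    rng = np.random.default_rng(4)
    clutter = [rng.normal(0, 1, 64) for _ in range(18)]
    contaminated = [rng.normal(0, 1, 64) + 5.0 * np.sin(np.arange(64)) for _ in range(2)]
    print("kept indices:", select_homogeneous(clutter + contaminated, n_keep=16))
    ```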

  17. Multiple stage MS in analysis of plasma, serum, urine and in vitro samples relevant to clinical and forensic toxicology.

    Science.gov (United States)

    Meyer, Golo M; Maurer, Hans H; Meyer, Markus R

    2016-01-01

    This paper reviews MS approaches applied to metabolism studies, structure elucidation and qualitative or quantitative screening of drugs (of abuse) and/or their metabolites. Applications in clinical and forensic toxicology were included using blood plasma or serum, urine, in vitro samples, liquids, solids or plant material. Techniques covered are liquid chromatography coupled to low-resolution and high-resolution multiple stage mass analyzers. Only PubMed listed studies published in English between January 2008 and January 2015 were considered. Approaches are discussed focusing on sample preparation and mass spectral settings. Comments on advantages and limitations of these techniques complete the review.

  18. Isobar Separation in a Multiple-Reflection Time-of-Flight Mass Spectrometer by Mass-Selective Re-Trapping

    Science.gov (United States)

    Dickel, Timo; Plaß, Wolfgang R.; Lippert, Wayne; Lang, Johannes; Yavor, Mikhail I.; Geissel, Hans; Scheidenberger, Christoph

    2017-06-01

    A novel method for (ultra-)high-resolution spatial mass separation in time-of-flight mass spectrometers is presented. Ions are injected into a time-of-flight analyzer from a radio frequency (rf) trap, dispersed in time-of-flight according to their mass-to-charge ratios and then re-trapped dynamically in the same rf trap. This re-trapping technique is highly mass-selective and after sufficiently long flight times can provide even isobaric separation. A theoretical treatment of the method is presented and the conditions for optimum performance of the method are derived. The method has been implemented in a multiple-reflection time-of-flight mass spectrometer and mass separation powers (FWHM) in excess of 70,000, and re-trapping efficiencies of up to 35% have been obtained for the protonated molecular ion of caffeine. The isobars glutamine and lysine (relative mass difference of 1/4000) have been separated after a flight time of 0.2 ms only. Higher mass separation powers can be achieved using longer flight times. The method will have important applications, including isobar separation in nuclear physics and (ultra-)high-resolution precursor ion selection in multiple-stage tandem mass spectrometry.

  19. Monte Carlo simulations of multiple scattering effects in ERD measurements

    International Nuclear Information System (INIS)

    Doyle, Barney Lee; Arstila, Kai.; Nordlumd, K.; Knapp, James Arthur

    2003-01-01

    Multiple scattering effects in ERD measurements are studied by comparing two Monte Carlo simulation codes, representing different approaches to obtain acceptable statistics, to experimental spectra measured from a HfO2 sample with a time-of-flight ERD setup. The results show that both codes can reproduce the absolute detection yields and the energy distributions in an adequate way. The effect of the choice of the interatomic potential on multiple scattering effects is also studied. Finally, the capabilities of the MC simulations in the design of new measurement setups are demonstrated by simulating the recoil energy spectra from a WCxNy sample with a low energy heavy ion beam.

  20. Proceedings of the workshop on multiple prompt gamma-ray analysis

    International Nuclear Information System (INIS)

    Ebihara, Mitsuru; Hatsukawa, Yuichi; Oshima, Masumi

    2006-10-01

    The workshop on 'Multiple Prompt Gamma-ray Analysis' was held on March 8, 2006 at Tokai. It is based on a project, 'Developments of real time, non-destructive ultra sensitive elemental analysis using multiple gamma-ray detections and prompt gamma ray analysis and its application to real samples', one of the High Priority Cooperative Research Programs performed by the Japan Atomic Energy Agency and the University of Tokyo. In this workshop, the latest results of the Multiple Prompt Gamma-ray Analysis (MPGA) study were presented, together with those of Neutron Activation Analysis with Multiple Gamma-ray Detection (NAAMG). Nine of the presented papers are indexed individually. (J.P.N.)

  1. Distributed Fusion Estimation for Multisensor Multirate Systems with Stochastic Observation Multiplicative Noises

    Directory of Open Access Journals (Sweden)

    Peng Fangfang

    2014-01-01

    Full Text Available This paper studies the fusion estimation problem for a class of multisensor multirate systems with multiplicative observation noises. The dynamic system is sampled uniformly. The sampling period of each sensor is uniform and an integer multiple of the state update period. Moreover, different sensors have different sampling rates, and the observations of the sensors are subject to stochastic uncertainties in the form of multiplicative noises. At first, local filters at the observation sampling points are obtained based on the observations of each sensor. Further, local estimators at the state update points are obtained by prediction from the local filters at the observation sampling points. They have a reduced computational cost and a good real-time property. Then, the cross-covariance matrices between any two local estimators are derived at the state update points. At last, using the matrix-weighted optimal fusion estimation algorithm in the linear minimum variance sense, the distributed optimal fusion estimator is obtained based on the local estimators and the cross-covariance matrices. An example shows the effectiveness of the proposed algorithms.
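    The distributed fusion step rests on matrix-weighted combination of correlated local estimates; a minimal sketch of the classical two-estimator fusion formula in the linear minimum variance sense is given below, assuming the cross-covariance is already known. The multirate sampling and multiplicative-noise aspects of the record are not reproduced.

    ```python
    import numpy as np

    def fuse_two_estimates(x1, P1, x2, P2, P12):
        """Matrix-weighted fusion of two correlated local estimates in the linear
        minimum-variance sense (classical two-sensor track-fusion form)."""
        P21 = P12.T
        K = (P1 - P12) @ np.linalg.inv(P1 + P2 - P12 - P21)
        x_fused = x1 + K @ (x2 - x1)
        P_fused = P1 - K @ (P1 - P21)
        return x_fused, P_fused

    # Illustrative local estimates of a 2-D state with an assumed cross-covariance.
    x1 = np.array([1.0, 0.5]); P1 = np.array([[0.20, 0.02], [0.02, 0.30]])
    x2 = np.array([1.2, 0.4]); P2 = np.array([[0.25, 0.01], [0.01, 0.20]])
    P12 = np.array([[0.05, 0.00], [0.00, 0.05]])
    xf, Pf = fuse_two_estimates(x1, P1, x2, P2, P12)
    print("fused state:", xf)
    print("fused covariance:\n", Pf)
    ```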

  2. Multiple beam mask writers: an industry solution to the write time crisis

    Science.gov (United States)

    Litt, Lloyd C.

    2010-09-01

    The semiconductor industry is under constant pressure to reduce production costs even as technology complexity increases. Lithography represents the most expensive process due to its high capital equipment costs and the implementation of low-k1 lithographic processes, which has added to the complexity of making masks through the greater use of optical proximity correction, pixelated masks, and double or triple patterning. Each of these mask technologies allows the production of semiconductors at future nodes while extending the utility of current immersion tools. Low k1 patterning complexity combined with increased data due to smaller feature sizes is driving extremely long mask write times. While a majority of the industry is willing to accept mask write times of up to 24 hours, evidence suggests that the write times for many masks at the 22 nm node and beyond will be significantly longer. It has been estimated that $50M+ in non-recurring engineering (NRE) costs will be required to develop a multiple beam mask writer system, yet the business case to recover this kind of investment is not strong. Moreover, funding such a development is a high risk for an individual supplier. The problem is compounded by a disconnect between the tool customer (the mask supplier) and the final mask customer that will bear the increased costs if a high speed writer is not available. Since no individual company will likely risk entering this market, some type of industry-wide funding model will be needed. Because SEMATECH's member companies strongly support a multiple beam technology for mask writers to reduce the write time and cost of 193 nm and EUV masks, SEMATECH plans to pursue an advanced mask writer program in 2011 and 2012. In 2010, efforts will focus on identifying a funding model to address the investment to develop such a technology.

  3. A hybrid procedure for MSW generation forecasting at multiple time scales in Xiamen City, China

    International Nuclear Information System (INIS)

    Xu, Lilai; Gao, Peiqing; Cui, Shenghui; Liu, Chun

    2013-01-01

    Highlights: ► We propose a hybrid model that combines seasonal SARIMA model and grey system theory. ► The model is robust at multiple time scales with the anticipated accuracy. ► At month-scale, the SARIMA model shows good representation for monthly MSW generation. ► At medium-term time scale, grey relational analysis could yield the MSW generation. ► At long-term time scale, GM (1, 1) provides a basic scenario of MSW generation. - Abstract: Accurate forecasting of municipal solid waste (MSW) generation is crucial and fundamental for the planning, operation and optimization of any MSW management system. Comprehensive information on waste generation for month-scale, medium-term and long-term time scales is especially needed, considering the necessity of MSW management upgrade facing many developing countries. Several existing models are available but of little use in forecasting MSW generation at multiple time scales. The goal of this study is to propose a hybrid model that combines the seasonal autoregressive integrated moving average (SARIMA) model and grey system theory to forecast MSW generation at multiple time scales without needing to consider other variables such as demographics and socioeconomic factors. To demonstrate its applicability, a case study of Xiamen City, China was performed. Results show that the model is robust enough to fit and forecast seasonal and annual dynamics of MSW generation at month-scale, medium- and long-term time scales with the desired accuracy. In the month-scale, MSW generation in Xiamen City will peak at 132.2 thousand tonnes in July 2015 – 1.5 times the volume in July 2010. In the medium term, annual MSW generation will increase to 1518.1 thousand tonnes by 2015 at an average growth rate of 10%. In the long term, a large volume of MSW will be output annually and will increase to 2486.3 thousand tonnes by 2020 – 2.5 times the value for 2010. The hybrid model proposed in this paper can enable decision makers to

  4. A hybrid procedure for MSW generation forecasting at multiple time scales in Xiamen City, China

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Lilai, E-mail: llxu@iue.ac.cn [Key Lab of Urban Environment and Health, Institute of Urban Environment, Chinese Academy of Sciences, 1799 Jimei Road, Xiamen 361021 (China); Xiamen Key Lab of Urban Metabolism, Xiamen 361021 (China); Gao, Peiqing, E-mail: peiqing15@yahoo.com.cn [Xiamen City Appearance and Environmental Sanitation Management Office, 51 Hexiangxi Road, Xiamen 361004 (China); Cui, Shenghui, E-mail: shcui@iue.ac.cn [Key Lab of Urban Environment and Health, Institute of Urban Environment, Chinese Academy of Sciences, 1799 Jimei Road, Xiamen 361021 (China); Xiamen Key Lab of Urban Metabolism, Xiamen 361021 (China); Liu, Chun, E-mail: xmhwlc@yahoo.com.cn [Xiamen City Appearance and Environmental Sanitation Management Office, 51 Hexiangxi Road, Xiamen 361004 (China)

    2013-06-15

    Highlights: ► We propose a hybrid model that combines seasonal SARIMA model and grey system theory. ► The model is robust at multiple time scales with the anticipated accuracy. ► At month-scale, the SARIMA model shows good representation for monthly MSW generation. ► At medium-term time scale, grey relational analysis could yield the MSW generation. ► At long-term time scale, GM (1, 1) provides a basic scenario of MSW generation. - Abstract: Accurate forecasting of municipal solid waste (MSW) generation is crucial and fundamental for the planning, operation and optimization of any MSW management system. Comprehensive information on waste generation for month-scale, medium-term and long-term time scales is especially needed, considering the necessity of MSW management upgrade facing many developing countries. Several existing models are available but of little use in forecasting MSW generation at multiple time scales. The goal of this study is to propose a hybrid model that combines the seasonal autoregressive integrated moving average (SARIMA) model and grey system theory to forecast MSW generation at multiple time scales without needing to consider other variables such as demographics and socioeconomic factors. To demonstrate its applicability, a case study of Xiamen City, China was performed. Results show that the model is robust enough to fit and forecast seasonal and annual dynamics of MSW generation at month-scale, medium- and long-term time scales with the desired accuracy. In the month-scale, MSW generation in Xiamen City will peak at 132.2 thousand tonnes in July 2015 – 1.5 times the volume in July 2010. In the medium term, annual MSW generation will increase to 1518.1 thousand tonnes by 2015 at an average growth rate of 10%. In the long term, a large volume of MSW will be output annually and will increase to 2486.3 thousand tonnes by 2020 – 2.5 times the value for 2010. The hybrid model proposed in this paper can enable decision makers to
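    Of the two building blocks of the hybrid procedure in the two records above, the grey GM(1, 1) model is compact enough to sketch directly (the seasonal SARIMA part would typically be fitted with an off-the-shelf routine such as statsmodels' SARIMAX). The annual figures below are hypothetical, not the Xiamen data.

    ```python
    import numpy as np

    def gm11_forecast(x0, n_ahead):
        """Grey GM(1,1) forecast: fit dx1/dt + a*x1 = b on the accumulated series x1,
        then difference the fitted accumulation back to the original scale."""
        x0 = np.asarray(x0, float)
        x1 = np.cumsum(x0)
        z1 = 0.5 * (x1[1:] + x1[:-1])                   # background values
        B = np.column_stack([-z1, np.ones(len(z1))])
        Y = x0[1:]
        a, b = np.linalg.lstsq(B, Y, rcond=None)[0]     # grey development coefficients
        k = np.arange(len(x0) + n_ahead)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
        return x0_hat[len(x0):]

    # Hypothetical annual MSW generation (thousand tonnes).
    annual_msw = [780, 850, 915, 1010, 1105, 1220]
    print("next 3 years:", np.round(gm11_forecast(annual_msw, 3), 1))
    ```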

  5. Defining an optimum pumping-time requirement for sampling ground-water wells on the Hanford site

    International Nuclear Information System (INIS)

    Scharnhorst, N.L.

    1982-04-01

    The objective was to determine the optimum time period necessary to pump water from a well before a representative sample of the ground water can be obtained. It was assumed that a representative sample has been collected if the concentration of chemical parameters is the same in a number of samples taken consecutively, so that the concentration of parameters does not vary with time of collection. Ground-water samples used in this project were obtained by pumping selected wells on the Hanford Site. At each well, samples were taken at two minute intervals, and on each sample various chemical analyses were performed. Samples were checked for pH, sulfate, iron, specific conductivity, chloride, nitrate and alkalinity. The data showed that pH, alkalinity, sulfate and specific conductivity levels stabilized almost immediately after pumping of the well began. In many wells, the chloride and nitrate levels were unstable throughout the 38-minute sampling period. Iron levels, however, did not behave in either fashion. The concentration of iron in the samples was high when pumping began but dropped rapidly as pumping continued. The best explanation for this is that iron is flushed from the sides of the casing into the well when pumping begins. After several minutes of pumping, most of the dissolved iron is washed from the well casing and the iron concentration reaches a stable plateau representative of the iron concentration in the ground water. Since iron concentration takes longest to stabilize, the optimum pumping time for a well is based on the iron stabilization time for that well.

  6. Tracking of multiple objects with time-adjustable composite correlation filters

    Science.gov (United States)

    Ruchay, Alexey; Kober, Vitaly; Chernoskulov, Ilya

    2017-09-01

    An algorithm for tracking of multiple objects in video based on time-adjustable adaptive composite correlation filtering is proposed. For each frame, a bank of composite correlation filters is designed in such a manner as to provide invariance to pose, occlusion, clutter, and illumination changes. The filters are synthesized with the help of an iterative algorithm, which optimizes the discrimination capability for each object. The filters are adapted to changes in the objects online using information from the current and past scene frames. Results obtained with the proposed algorithm on real-life scenes are presented and compared with those obtained with state-of-the-art tracking methods in terms of detection efficiency, tracking accuracy, and speed of processing.
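    The adaptive composite filter bank itself is not specified in the abstract; the sketch below illustrates the underlying idea of frequency-domain correlation filtering with an online (learning-rate) update, in the style of a single-template MOSSE filter, which is an assumption rather than the authors' filter design.

    ```python
    import numpy as np

    def gaussian_peak(shape, center, sigma=2.0):
        """Desired correlation output: a Gaussian centred on the target position."""
        ys, xs = np.indices(shape)
        return np.exp(-((ys - center[0]) ** 2 + (xs - center[1]) ** 2) / (2 * sigma ** 2))

    class MosseLikeFilter:
        """Single-channel frequency-domain correlation filter with a running update,
        illustrating online adaptation, not the composite filter bank of the record."""

        def __init__(self, template, lam=1e-3, lr=0.125):
            self.lam, self.lr = lam, lr
            center = (template.shape[0] // 2, template.shape[1] // 2)
            G = np.fft.fft2(gaussian_peak(template.shape, center))
            F = np.fft.fft2(template)
            self.A = G * np.conj(F)            # filter numerator
            self.B = F * np.conj(F) + lam      # filter denominator (energy + regularizer)

        def respond(self, patch):
            """Correlation response map; its peak gives the estimated target shift."""
            F = np.fft.fft2(patch)
            return np.real(np.fft.ifft2((self.A / self.B) * F))

        def update(self, patch):
            """Blend in the new frame so the filter adapts to appearance changes."""
            center = (patch.shape[0] // 2, patch.shape[1] // 2)
            G = np.fft.fft2(gaussian_peak(patch.shape, center))
            F = np.fft.fft2(patch)
            self.A = (1 - self.lr) * self.A + self.lr * G * np.conj(F)
            self.B = (1 - self.lr) * self.B + self.lr * (F * np.conj(F) + self.lam)

    # Toy check: the response peak follows a (cyclic) shift of the tracked patch.
    rng = np.random.default_rng(5)
    patch = rng.random((32, 32))
    tracker = MosseLikeFilter(patch)
    resp = tracker.respond(np.roll(patch, shift=(3, -2), axis=(0, 1)))
    print("peak at", np.unravel_index(np.argmax(resp), resp.shape), "(centre is (16, 16))")
    ```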

  7. Asymptotic theory for the sample covariance matrix of a heavy-tailed multivariate time series

    DEFF Research Database (Denmark)

    Davis, Richard A.; Mikosch, Thomas Valentin; Pfaffel, Olivier

    2016-01-01

    In this paper we give an asymptotic theory for the eigenvalues of the sample covariance matrix of a multivariate time series. The time series constitutes a linear process across time and between components. The input noise of the linear process has regularly varying tails with index α∈(0,4); in particular, the time series has infinite fourth moment. We derive the limiting behavior for the largest eigenvalues of the sample covariance matrix and show point process convergence of the normalized eigenvalues. The limiting process has an explicit form involving points of a Poisson process and eigenvalues of a non-negative definite matrix. Based on this convergence we derive limit theory for a host of other continuous functionals of the eigenvalues, including the joint convergence of the largest eigenvalues, the joint convergence of the largest eigenvalue and the trace of the sample covariance matrix...

  8. Archival Bone Marrow Samples: Suitable for Multiple Biomarker Analysis?

    DEFF Research Database (Denmark)

    Lund, Bendik; Najmi, A. Laeya; Wesolowska, Agata

    2015-01-01

    biopsies from 18 Danish and Norwegian childhood acute lymphoblastic leukemia patients were included and compared with corresponding blood samples. Samples were grouped according to the age of sample and whether WGA was performed or not. We found that measurements of DNA concentration after DNA extraction...

  9. Microassay for interferon, using [3H]uridine, microculture plates, and a multiple automated sample harvester.

    Science.gov (United States)

    Richmond, J Y; Polatnick, J; Knudsen, R C

    1980-01-01

    A microassay for interferon is described which uses target cells grown in microculture wells, [3H]uridine to measure vesicular stomatitis virus replication in target cells, and a multiple automated sample harvester to collect the radioactively labeled viral ribonucleic acid onto glass fiber filter disks. The disks were placed in minivials, and radioactivity was counted in a liquid scintillation spectrophotometer. Interferon activity was calculated as the reciprocal of the highest titer which inhibited the incorporation of [3H]uridine into viral ribonucleic acid by 50%. Interferon titers determined by the microassay were similar to the plaque reduction assay when 100 plaque-forming units of challenge vesicular stomatitis virus was used. However, it was found that the interferon titers decreased approximately 2-fold for each 10-fold increase in the concentration of challenge vesicular stomatitis virus when tested in the range of 10(2) to 10(5) plaque-forming units. Interferon titers determined by the microassay show a high degree of repeatability, and the assay can be used to measure small and large numbers of interferon samples. PMID:6155105

  10. Evaluation of multiple trauma victims with 16-row multidetector CT (MDCT): a time analysis

    International Nuclear Information System (INIS)

    Heyer, C.M.; Nicolas, V.

    2005-01-01

    Purpose: Description and time analysis of a 16-row MDCT protocol in the evaluation of multiple trauma patients considering transport, time of scanning, patient positioning, image reconstruction, and image interpretation. Materials and methods: Between May and December 2004, 60 multiple trauma patients underwent 16-row MDCT (Sensation, Siemens, Erlangen, Germany). The protocol included serial scanning of the head, spiral scanning of the cervical spine and contrast-enhanced spiral scanning of the thorax/abdomen with multiplanar reformations (MPR) of the thoracic/lumbar spine and the pelvis. All time intervals including transport, patient positioning, scanning, duration of MPR, total time in the examination room, and time to first and final image interpretation were prospectively evaluated. Furthermore, patient characteristics, trauma profiles, and mortality rates were recorded. Results: 46 male and 14 female patients (mean age 43.6 years) were enrolled in the study. Time analysis of 16-row MDCT revealed the following results (mean time ± standard deviation): Emergency room treatment and transport 19.2±6.7 min, patient positioning 16.5±6.5 min, scan duration 8.0±3.3 min, total time in examination room 24.5±7.2 min, image reconstruction including MPR 32.0±16.4 min, and time of first (16.4±4.7 min) and final image interpretation (82.5±30.4 min). Trauma profiles revealed thoracic injuries in 35/60 patients (58.3%), head injuries in 23/60 patients (38.3%), abdominal injuries in 15/60 patients (25.0%), injuries of the cervical (9/60 patients, 15.0%), thoracic (12/60 patients, 20.0%), and lumbar spine (19/60 patients, 31.7%), pelvic injuries in 13/60 patients (21.7%), and injuries of extremities in 39/60 patients (65.0%). The mortality rate was 21.7%. (orig.)

  11. Numeracy of multiple sclerosis patients: A comparison of patients from the PERCEPT study to a German probabilistic sample.

    Science.gov (United States)

    Gaissmaier, Wolfgang; Giese, Helge; Galesic, Mirta; Garcia-Retamero, Rocio; Kasper, Juergen; Kleiter, Ingo; Meuth, Sven G; Köpke, Sascha; Heesen, Christoph

    2018-01-01

    A shared decision-making approach is suggested for multiple sclerosis (MS) patients. To properly evaluate benefits and risks of different treatment options accordingly, MS patients require sufficient numeracy - the ability to understand quantitative information. It is unknown whether MS affects numeracy. Therefore, we investigated whether patients' numeracy was impaired compared to a probabilistic national sample. As part of the larger prospective, observational, multicenter study PERCEPT, we assessed numeracy for a clinical study sample of German MS patients (N=725) with a standard test and compared them to a German probabilistic sample (N=1001), controlling for age, sex, and education. Within patients, we assessed whether disease variables (disease duration, disability, annual relapse rate, cognitive impairment) predicted numeracy beyond these demographics. MS patients showed a comparable level of numeracy as the probabilistic national sample (68.9% vs. 68.5% correct answers, P=0.831). In both samples, numeracy was higher for men and the highly educated. Disease variables did not predict numeracy beyond demographics within patients, and predictability was generally low. This sample of MS patients understood quantitative information on the same level as the general population. There is no reason to withhold quantitative information from MS patients. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Real-time recursive hyperspectral sample and band processing algorithm architecture and implementation

    CERN Document Server

    Chang, Chein-I

    2017-01-01

    This book explores recursive architectures in designing progressive hyperspectral imaging algorithms. In particular, it makes progressive imaging algorithms recursive by introducing the concept of Kalman filtering into algorithm design, so that hyperspectral imagery can be processed not only progressively, sample by sample or band by band, but also recursively via recursive equations. This book can be considered a companion to the author's book Real-Time Progressive Hyperspectral Image Processing, published by Springer in 2016. It explores recursive structures in algorithm architecture; implements algorithmic recursive architecture in conjunction with progressive sample and band processing; derives Recursive Hyperspectral Sample Processing (RHSP) techniques according to the Band-Interleaved Sample/Pixel (BIS/BIP) acquisition format; and develops Recursive Hyperspectral Band Processing (RHBP) techniques according to the Band SeQuential (BSQ) acquisition format for hyperspectral data.

  13. The U-tube sampling methodology and real-time analysis of geofluids

    International Nuclear Information System (INIS)

    Freifeld, Barry; Perkins, Ernie; Underschultz, James; Boreham, Chris

    2009-01-01

    The U-tube geochemical sampling methodology, an extension of the porous cup technique proposed by Wood (1973), provides minimally contaminated aliquots of multiphase fluids from deep reservoirs and allows for accurate determination of dissolved gas composition. The initial deployment of the U-tube during the Frio Brine Pilot CO2 storage experiment, Liberty County, Texas, obtained representative samples of brine and supercritical CO2 from a depth of 1.5 km. A quadrupole mass spectrometer provided real-time analysis of dissolved gas composition. Since the initial demonstration, the U-tube has been deployed for (1) sampling of fluids down gradient of the proposed Yucca Mountain High-Level Waste Repository, Amargosa Valley, Nevada, (2) acquiring fluid samples beneath permafrost in Nunavut Territory, Canada, and (3) a CO2 storage demonstration project within a depleted gas reservoir, Otway Basin, Victoria, Australia. The addition of in-line high-pressure pH and EC sensors allows for continuous monitoring of fluid during sample collection. Difficulties have arisen during U-tube sampling, such as blockage of sample lines from naturally occurring waxes or from freezing conditions; however, workarounds such as solvent flushing or heating have been used to address these problems. The U-tube methodology has proven to be robust, and with careful consideration of the constraints and limitations, can provide high quality geochemical samples.

  14. The Antimicrobial Peptide Lysozyme Is Induced after Multiple Trauma

    Directory of Open Access Journals (Sweden)

    Tim Klüter

    2014-01-01

    Full Text Available The antimicrobial peptide lysozyme is an important factor of innate immunity and exerts high potential of antibacterial activity. In the present study we evaluated the lysozyme expression in serum of multiple injured patients and subsequently analyzed their possible sources and signaling pathways. Expression of lysozyme was examined in blood samples of multiple trauma patients from the day of trauma until 14 days after trauma by ELISA. To investigate major sources of lysozyme, its expression and regulation in serum samples, different blood cells, and tissue samples were analysed by ELISA and real-time PCR. Neutrophils and hepatocytes were stimulated with cytokines and supernatant of Staphylococcus aureus. The present study demonstrates the induction and release of lysozyme in serum of multiple injured patients. The highest lysozyme expression of all tested cells and tissues was detected in neutrophils. Stimulation with trauma-related factors such as interleukin-6 and S. aureus induced lysozyme expression. Liver tissue samples of patients without trauma show little lysozyme expression compared to neutrophils. After stimulation with bacterial fragments, lysozyme expression of hepatocytes is upregulated significantly. Toll-like receptor 2, a classic receptor of Gram-positive bacterial protein, was detected as a possible target for lysozyme induction.

  15. Statistical methods for the time-to-event analysis of individual participant data from multiple epidemiological studies

    DEFF Research Database (Denmark)

    Thompson, Simon; Kaptoge, Stephen; White, Ian

    2010-01-01

    Meta-analysis of individual participant time-to-event data from multiple prospective epidemiological studies enables detailed investigation of exposure-risk relationships, but involves a number of analytical challenges....

  16. Variability and reliability of POP concentrations in multiple breast milk samples collected from the same mothers.

    Science.gov (United States)

    Kakimoto, Risa; Ichiba, Masayoshi; Matsumoto, Akiko; Nakai, Kunihiko; Tatsuta, Nozomi; Iwai-Shimada, Miyuki; Ishiyama, Momoko; Ryuda, Noriko; Someya, Takashi; Tokumoto, Ieyasu; Ueno, Daisuke

    2018-01-13

    Risk assessment of infants using realistic persistent organic pollutant (POP) exposure through breast milk is essential for devising future regulation of POPs. However, recent investigations have demonstrated that POP levels in breast milk collected from the same mother show a wide range of daily and monthly variation. To estimate the appropriate sample size of breast milk from the same mother needed to obtain reliable POP concentrations, breast milk samples were collected from five mothers living in Japan from 2006 to 2012. Milk samples from each mother were collected 3 to 6 times a day over 3 to 7 consecutive days. Food samples, collected by the duplicate method, were obtained from two mothers during the period of breast milk sample collection. These were employed for POP (PCBs, DDTs, chlordanes, and HCB) analysis. PCB concentrations detected in breast milk samples showed a wide range of variation, with relative standard deviations (RSD) of up to 63% and 60% on a lipid and wet weight basis, respectively. The time course of these variations did not show any typical pattern among the mothers. A larger PCB intake through food appeared to affect the concentrations in breast milk (lipid weight basis) about 10 h later. Intraclass correlation coefficient (ICC) analyses indicated that obtaining reproducible POP concentrations in breast milk requires at least two samples per mother, on both a lipid and wet weight basis.
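    The sample-size conclusion rests on intraclass correlation; a minimal one-way random-effects ICC(1,1) computation on repeated samples per mother is sketched below with hypothetical concentration values.

    ```python
    import numpy as np

    def icc_oneway(data):
        """One-way random-effects ICC(1,1); data is an (n_subjects, k_repeats) array."""
        data = np.asarray(data, float)
        n, k = data.shape
        grand = data.mean()
        ms_between = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)
        ms_within = np.sum((data - data.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
        return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

    # Hypothetical PCB concentrations (ng/g lipid) in 4 milk samples from each of 5 mothers.
    conc = np.array([
        [55, 62, 48, 60],
        [120, 98, 110, 133],
        [33, 41, 36, 30],
        [75, 69, 88, 80],
        [150, 170, 140, 160],
    ])
    print(f"ICC(1,1) = {icc_oneway(conc):.2f}")
    ```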

  17. Neural Computations in a Dynamical System with Multiple Time Scales.

    Science.gov (United States)

    Mi, Yuanyuan; Lin, Xiaohan; Wu, Si

    2016-01-01

    Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at the single-neuron level, and short-term facilitation (STF) and depression (STD) at the synapse level. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what is the computational benefit for the brain to have such variability in short-term dynamics. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in its dynamics. Three computational tasks are considered, which are persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms, and hence cannot be implemented by a single dynamical feature or any combination with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions.

  18. Multiple projection optical diffusion tomography with plane wave illumination

    International Nuclear Information System (INIS)

    Markel, Vadim A; Schotland, John C

    2005-01-01

    We describe a new data collection scheme for optical diffusion tomography in which plane wave illumination is combined with multiple projections in the slab imaging geometry. Multiple projection measurements are performed by rotating the slab around the sample. The advantage of the proposed method is that the measured data are more compatible with the dynamic range of most commonly used detectors. At the same time, multiple projections improve image quality by mutually interchanging the depth and transverse directions, and the scanned (detection) and integrated (illumination) surfaces. Inversion methods are derived for image reconstructions with extremely large data sets. Numerical simulations are performed for fixed and rotated slabs

  19. EZH2 and CD79B mutational status over time in B-cell non-Hodgkin lymphomas detected by high-throughput sequencing using minimal samples

    Science.gov (United States)

    Saieg, Mauro Ajaj; Geddie, William R; Boerner, Scott L; Bailey, Denis; Crump, Michael; da Cunha Santos, Gilda

    2013-01-01

    BACKGROUND: Numerous genomic abnormalities in B-cell non-Hodgkin lymphomas (NHLs) have been revealed by novel high-throughput technologies, including recurrent mutations in EZH2 (enhancer of zeste homolog 2) and CD79B (B cell antigen receptor complex-associated protein beta chain) genes. This study sought to determine the evolution of the mutational status of EZH2 and CD79B over time in different samples from the same patient in a cohort of B-cell NHLs, through use of a customized multiplex mutation assay. METHODS: DNA that was extracted from cytological material stored on FTA cards as well as from additional specimens, including archived frozen and formalin-fixed histological specimens, archived stained smears, and cytospin preparations, was submitted to a multiplex mutation assay specifically designed for the detection of point mutations involving EZH2 and CD79B, using MassARRAY spectrometry followed by Sanger sequencing. RESULTS: All 121 samples from 80 B-cell NHL cases were successfully analyzed. Mutations in EZH2 (Y646) and CD79B (Y196) were detected in 13.2% and 8% of the samples, respectively, almost exclusively in follicular lymphomas and diffuse large B-cell lymphomas. In one-third of the positive cases, wild type was detected in a different sample from the same patient during follow-up. CONCLUSIONS: Testing multiple minimal tissue samples using a high-throughput multiplex platform exponentially increases tissue availability for molecular analysis and might facilitate future studies of tumor progression and the related molecular events. Mutational status of EZH2 and CD79B may vary in B-cell NHL samples over time and support the concept that individualized therapy should be based on molecular findings at the time of treatment, rather than on results obtained from previous specimens. Cancer (Cancer Cytopathol) 2013;121:377–386. © 2013 American Cancer Society. PMID:23361872

  20. Replicability of time-varying connectivity patterns in large resting state fMRI samples.

    Science.gov (United States)

    Abrol, Anees; Damaraju, Eswar; Miller, Robyn L; Stephen, Julia M; Claus, Eric D; Mayer, Andrew R; Calhoun, Vince D

    2017-12-01

    The past few years have seen an emergence of approaches that leverage temporal changes in whole-brain patterns of functional connectivity (the chronnectome). In this chronnectome study, we investigate the replicability of the human brain's inter-regional coupling dynamics during rest by evaluating two different dynamic functional network connectivity (dFNC) analysis frameworks using 7 500 functional magnetic resonance imaging (fMRI) datasets. To quantify the extent to which the emergent functional connectivity (FC) patterns are reproducible, we characterize the temporal dynamics by deriving several summary measures across multiple large, independent age-matched samples. Reproducibility was demonstrated through the existence of basic connectivity patterns (FC states) amidst an ensemble of inter-regional connections. Furthermore, application of the methods to conservatively configured (statistically stationary, linear and Gaussian) surrogate datasets revealed that some of the studied state summary measures were indeed statistically significant and also suggested that this class of null model did not explain the fMRI data fully. This extensive testing of reproducibility of similarity statistics also suggests that the estimated FC states are robust against variation in data quality, analysis, grouping, and decomposition methods. We conclude that future investigations probing the functional and neurophysiological relevance of time-varying connectivity assume critical importance. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  1. Usability of a new multiple high-speed pulse time data registration, processing and real-time display system for pulse time interval analysis

    International Nuclear Information System (INIS)

    Yawata, Takashi; Sakaue, Hisanobu; Hashimoto, Tetsuo; Itou, Shigeki

    2006-01-01

    A new high-speed multiple pulse time data registration, processing and real-time display system for time interval analysis (TIA) was developed for counting either β-α or α-α correlated decay events. The TIA method has so far been limited to the selective extraction of successive α-α decay events on the millisecond time scale, owing to the original electronic hardware used. In the present pulse-processing system, three different high-speed α/β(γ) pulses can be fed to an original 32-bit PCI board (ZN-HTS2) within 1 μs. This PCI board consists of a timing-control IC (HTS-A) and a 28-bit counting IC (HTS-B). All channel and pulse time data were stored in FIFO RAM and then transferred into temporary CPU RAM (32 MB) by DMA. Data registration (into main RAM, 200 MB) and the calculation of pulse time intervals, together with a real-time TIA-distribution display, were processed simultaneously using two dedicated software programs. The present system has proven capable of real-time display of the TIA distribution spectrum even when pulses at 1.6×10^5 cps from a pulse generator were fed to the system. By using this new system combined with a liquid scintillation counting (LSC) apparatus, both microsecond-order β-α and millisecond-order α-α correlated decay events could be selectively extracted from a mixture of natural radionuclides. (author)

  2. Development and evaluation of a multiple-plate fraction collector for sample processing: application to radioprofiling in drug metabolism studies.

    Science.gov (United States)

    Barros, Anthony; Ly, Van T; Chando, Theodore J; Ruan, Qian; Donenfeld, Scott L; Holub, David P; Christopher, Lisa J

    2011-04-05

    Microplate scintillation counters are utilized routinely in drug metabolism laboratories for the off-line radioanalysis of fractions collected during HPLC radioprofiling. In this process, the current fraction collection technology is limited by the number of plates that can be used per injection as well as the potential for sample loss due to dripping or spraying as the fraction collector head moves from well to well or between plates. More importantly, sample throughput is limited in the conventional process, since the collection plates must be manually exchanged after each injection. The Collect PAL, an innovative multiple-plate fraction collector, was developed to address these deficiencies and improve overall sample throughput. It employs a zero-loss design and has sub-ambient temperature control. Operation of the system is completely controlled with software and up to 24 (96- or 384-well) fraction collection plates can be loaded in a completely automated run. The system may also be configured for collection into various-sized tubes or vials. At flow rates of 0.5 or 1.0 mL/min and at collection times of 10 or 15s, the system precisely delivered 83-μL fractions (within 4.1% CV) and 250-μL fractions (within 1.4% CV), respectively, of three different mobile phases into 12 mm × 32 mm vials. Similarly, at a flow rate of 1 mL/min and 10s collection times, the system precisely dispensed mobile phase containing a [(14)C]-radiolabeled compound across an entire 96-well plate (% CV was within 5.3%). Triplicate analyses of metabolism test samples containing [(14)C]buspirone and its metabolites, derived from three different matrices (plasma, urine and bile), indicated that the Collect PAL produced radioprofiles that were reproducible and comparable to the current technology; the % CV for 9 selected peaks in the radioprofiles generated with the Collect PAL were within 9.3%. Radioprofiles generated by collecting into 96- and 384-well plates were qualitatively comparable

  3. Correction to the count-rate detection limit and sample/blank time-allocation methods

    International Nuclear Information System (INIS)

    Alvarez, Joseph L.

    2013-01-01

    A common form of count-rate detection limits contains a propagation of uncertainty error. This error originated in methods to minimize uncertainty in the subtraction of the blank counts from the gross sample counts by allocation of blank and sample counting times. Correct uncertainty propagation showed that the time allocation equations have no solution. This publication presents the correct form of count-rate detection limits. -- Highlights: •The paper demonstrated a proper method of propagating uncertainty of count rate differences. •The standard count-rate detection limits were in error. •Count-time allocation methods for minimum uncertainty were in error. •The paper presented the correct form of the count-rate detection limit. •The paper discussed the confusion between count-rate uncertainty and count uncertainty
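
    To make the propagation step concrete, a minimal sketch under standard Poisson counting assumptions is given below. It illustrates generic uncertainty propagation for a net count rate (gross minus blank) and is not the corrected detection-limit formula derived in the record above; the counts and times are illustrative.

```python
import math

def net_rate(gross_counts, t_sample, blank_counts, t_blank):
    """Net count rate and its propagated standard uncertainty.

    Assumes Poisson statistics (var(counts) = counts); this is a generic
    textbook propagation, not the paper's corrected detection limit.
    """
    rate = gross_counts / t_sample - blank_counts / t_blank
    sigma = math.sqrt(gross_counts / t_sample**2 + blank_counts / t_blank**2)
    return rate, sigma

# Example: 480 gross counts in 600 s against 120 blank counts in 600 s
print(net_rate(480, 600.0, 120, 600.0))  # (0.6 cps, ~0.041 cps)
```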

  4. Robust Fault Tolerant Control for a Class of Time-Delay Systems with Multiple Disturbances

    Directory of Open Access Journals (Sweden)

    Songyin Cao

    2013-01-01

    Full Text Available A robust fault tolerant control (FTC approach is addressed for a class of nonlinear systems with time delay, actuator faults, and multiple disturbances. The first part of the multiple disturbances is supposed to be an uncertain modeled disturbance and the second one represents a norm-bounded variable. First, a composite observer is designed to estimate the uncertain modeled disturbance and actuator fault simultaneously. Then, an FTC strategy consisting of disturbance observer based control (DOBC, fault accommodation, and a mixed H2/H∞ controller is constructed to reconfigure the considered systems with disturbance rejection and attenuation performance. Finally, simulations for a flight control system are given to show the efficiency of the proposed approach.

  5. Air exposure and sample storage time influence on hydrogen release from tungsten

    Energy Technology Data Exchange (ETDEWEB)

    Moshkunov, K.A., E-mail: moshkunov@gmail.co [National Research Nuclear University 'MEPhI', Kashirskoe sh. 31, 115409 Moscow (Russian Federation); Schmid, K.; Mayer, M. [Max-Planck-Institut fuer Plasmaphysik, EURATOM Association, Boltzmannstrasse 2, D-85748 Garching (Germany); Kurnaev, V.A.; Gasparyan, Yu.M. [National Research Nuclear University 'MEPhI', Kashirskoe sh. 31, 115409 Moscow (Russian Federation)

    2010-09-30

    In investigations of hydrogen retention in first wall components the influence of the conditions of the implanted target storage prior to analysis and the storage time is often neglected. Therefore we have performed a dedicated set of experiments. The release of hydrogen from samples exposed to ambient air after irradiation was compared to samples kept in vacuum. For air exposed samples significant amounts of HDO and D2O are detected during TDS. Additional experiments have shown that heavy water is formed by recombination of releasing D and H atoms with O on the W surface. This water formation can alter hydrogen retention results significantly, in particular - for low retention cases. In addition to the influence of ambient air exposure also the influence of storage time in vacuum was investigated. After implantation at 300 K the samples were stored in vacuum for up to 1 week during which the retained amount decreased significantly. The subsequently measured TDS spectra showed that D was lost from both the high and low energy peaks during storage at ambient temperature of ~300 K. An attempt to simulate this release from both peaks during room temperature storage by TMAP 7 calculations showed that this effect cannot be explained by conventional diffusion/trapping models.

  6. Air exposure and sample storage time influence on hydrogen release from tungsten

    International Nuclear Information System (INIS)

    Moshkunov, K.A.; Schmid, K.; Mayer, M.; Kurnaev, V.A.; Gasparyan, Yu.M.

    2010-01-01

    In investigations of hydrogen retention in first wall components the influence of the conditions of the implanted target storage prior to analysis and the storage time is often neglected. Therefore we have performed a dedicated set of experiments. The release of hydrogen from samples exposed to ambient air after irradiation was compared to samples kept in vacuum. For air exposed samples significant amounts of HDO and D2O are detected during TDS. Additional experiments have shown that heavy water is formed by recombination of releasing D and H atoms with O on the W surface. This water formation can alter hydrogen retention results significantly, in particular - for low retention cases. In addition to the influence of ambient air exposure also the influence of storage time in vacuum was investigated. After implantation at 300 K the samples were stored in vacuum for up to 1 week during which the retained amount decreased significantly. The subsequently measured TDS spectra showed that D was lost from both the high and low energy peaks during storage at ambient temperature of ∼300 K. An attempt to simulate this release from both peaks during room temperature storage by TMAP 7 calculations showed that this effect cannot be explained by conventional diffusion/trapping models.

  7. Air exposure and sample storage time influence on hydrogen release from tungsten

    Science.gov (United States)

    Moshkunov, K. A.; Schmid, K.; Mayer, M.; Kurnaev, V. A.; Gasparyan, Yu. M.

    2010-09-01

    In investigations of hydrogen retention in first wall components the influence of the conditions of the implanted target storage prior to analysis and the storage time is often neglected. Therefore we have performed a dedicated set of experiments. The release of hydrogen from samples exposed to ambient air after irradiation was compared to samples kept in vacuum. For air exposed samples significant amounts of HDO and D2O are detected during TDS. Additional experiments have shown that heavy water is formed by recombination of releasing D and H atoms with O on the W surface. This water formation can alter hydrogen retention results significantly, in particular - for low retention cases. In addition to the influence of ambient air exposure also the influence of storage time in vacuum was investigated. After implantation at 300 K the samples were stored in vacuum for up to 1 week during which the retained amount decreased significantly. The subsequently measured TDS spectra showed that D was lost from both the high and low energy peaks during storage at ambient temperature of ~300 K. An attempt to simulate this release from both peaks during room temperature storage by TMAP 7 calculations showed that this effect cannot be explained by conventional diffusion/trapping models.

  8. Automated MALDI Matrix Coating System for Multiple Tissue Samples for Imaging Mass Spectrometry

    Science.gov (United States)

    Mounfield, William P.; Garrett, Timothy J.

    2012-03-01

    Uniform matrix deposition on tissue samples for matrix-assisted laser desorption/ionization (MALDI) is key for reproducible analyte ion signals. Current methods often result in nonhomogenous matrix deposition, and take time and effort to produce acceptable ion signals. Here we describe a fully-automated method for matrix deposition using an enclosed spray chamber and spray nozzle for matrix solution delivery. A commercial air-atomizing spray nozzle was modified and combined with solenoid controlled valves and a Programmable Logic Controller (PLC) to control and deliver the matrix solution. A spray chamber was employed to contain the nozzle, sample, and atomized matrix solution stream, and to prevent any interference from outside conditions as well as allow complete control of the sample environment. A gravity cup was filled with MALDI matrix solutions, including DHB in chloroform/methanol (50:50) at concentrations up to 60 mg/mL. Various samples (including rat brain tissue sections) were prepared using two deposition methods (spray chamber, inkjet). A linear ion trap equipped with an intermediate-pressure MALDI source was used for analyses. Optical microscopic examination showed a uniform coating of matrix crystals across the sample. Overall, the mass spectral images gathered from tissues coated using the spray chamber system were of better quality and more reproducible than from tissue specimens prepared by the inkjet deposition method.

  9. Measuring Sulfur Isotope Ratios from Solid Samples with the Sample Analysis at Mars Instrument and the Effects of Dead Time Corrections

    Science.gov (United States)

    Franz, H. B.; Mahaffy, P. R.; Kasprzak, W.; Lyness, E.; Raaen, E.

    2011-01-01

    The Sample Analysis at Mars (SAM) instrument suite comprises the largest science payload on the Mars Science Laboratory (MSL) "Curiosity" rover. SAM will perform chemical and isotopic analysis of volatile compounds from atmospheric and solid samples to address questions pertaining to habitability and geochemical processes on Mars. Sulfur is a key element of interest in this regard, as sulfur compounds have been detected on the Martian surface by both in situ and remote sensing techniques. Their chemical and isotopic composition can help constrain environmental conditions and mechanisms at the time of formation. A previous study examined the capability of the SAM quadrupole mass spectrometer (QMS) to determine sulfur isotope ratios of SO2 gas from a statistical perspective. Here we discuss the development of a method for determining sulfur isotope ratios with the QMS by sampling SO2 generated from heating of solid sulfate samples in SAM's pyrolysis oven. This analysis, which was performed with the SAM breadboard system, also required development of a novel treatment of the QMS dead time to accommodate the characteristics of an aging detector.

  10. Influence of stellar multiplicity on planet formation. I. Evidence of suppressed planet formation due to stellar companions within 20 au and validation of four planets from the Kepler multiple planet candidates

    International Nuclear Information System (INIS)

    Wang, Ji; Fischer, Debra A.; Xie, Ji-Wei; Barclay, Thomas

    2014-01-01

    The planet occurrence rate for multiple stars is important in two aspects. First, almost half of stellar systems in the solar neighborhood are multiple systems. Second, the comparison of the planet occurrence rate for multiple stars to that for single stars sheds light on the influence of stellar multiplicity on planet formation and evolution. We developed a method of distinguishing planet occurrence rates for single and multiple stars. From a sample of 138 bright (Kp < 13.5) Kepler multi-planet candidate systems, we compared the stellar multiplicity rate of these planet host stars to that of field stars. Using dynamical stability analyses and archival Doppler measurements, we find that the stellar multiplicity rate of planet host stars is significantly lower than field stars for semimajor axes less than 20 AU, suggesting that planet formation and evolution are suppressed by the presence of a close-in companion star at these separations. The influence of stellar multiplicity at larger separations is uncertain because of search incompleteness due to a limited Doppler observation time baseline and a lack of high-resolution imaging observation. We calculated the planet confidence for the sample of multi-planet candidates and find that the planet confidences for KOI 82.01, KOI 115.01, KOI 282.01, and KOI 1781.02 are higher than 99.7% and thus validate the planetary nature of these four planet candidates. This sample of bright Kepler multi-planet candidates with refined stellar and orbital parameters, planet confidence estimation, and nearby stellar companion identification offers a well-characterized sample for future theoretical and observational study.

  11. Low-sensitivity H ∞ filter design for linear delta operator systems with sampling time jitter

    Science.gov (United States)

    Guo, Xiang-Gui; Yang, Guang-Hong

    2012-04-01

    This article is concerned with the problem of designing H ∞ filters for a class of linear discrete-time systems with low-sensitivity to sampling time jitter via delta operator approach. Delta-domain model is used to avoid the inherent numerical ill-condition resulting from the use of the standard shift-domain model at high sampling rates. Based on projection lemma in combination with the descriptor system approach often used to solve problems related to delay, a novel bounded real lemma with three slack variables for delta operator systems is presented. A sensitivity approach based on this novel lemma is proposed to mitigate the effects of sampling time jitter on system performance. Then, the problem of designing a low-sensitivity filter can be reduced to a convex optimisation problem. An important consideration in the design of correlation filters is the optimal trade-off between the standard H ∞ criterion and the sensitivity of the transfer function with respect to sampling time jitter. Finally, a numerical example demonstrating the validity of the proposed design method is given.

  12. Analysis of time series and size of equivalent sample

    International Nuclear Information System (INIS)

    Bernal, Nestor; Molina, Alicia; Pabon, Daniel; Martinez, Jorge

    2004-01-01

    In a meteorological context, a first approach to the modeling of time series is to use models of autoregressive type. This allows one to take into account the meteorological persistence or temporal behavior, thereby identifying the memory of the analyzed process. This article seeks to present the concept of the size of an equivalent sample, which helps to identify in the data series sub-periods with a similar structure. Moreover, in this article we examine the alternative of adjusting the variance of the series, keeping in mind its temporal structure, as well as an adjustment to the covariance of two time series. This article presents two examples, the first one corresponding to seven simulated series with autoregressive structure of first order, and the second corresponding to seven meteorological series of anomalies of the air temperature at the surface in two Colombian regions
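
    As an illustration of the equivalent-sample-size idea for a first-order autoregressive series, a common approximation is n' = n(1 - r1)/(1 + r1), where r1 is the lag-1 autocorrelation. The sketch below applies this textbook formula to a simulated AR(1) series; it is an illustration under stated assumptions, not the authors' exact procedure, and all parameter values are invented.

```python
import numpy as np

def ar1_series(n, phi, sigma=1.0, seed=0):
    """Simulate an AR(1) process x_t = phi * x_{t-1} + eps_t."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(scale=sigma)
    return x

def equivalent_sample_size(x):
    """Effective sample size n' = n (1 - r1) / (1 + r1) for an AR(1)-like series."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    r1 = np.dot(x[:-1], x[1:]) / np.dot(x, x)  # lag-1 autocorrelation estimate
    return len(x) * (1 - r1) / (1 + r1)

series = ar1_series(500, phi=0.6)
print(round(equivalent_sample_size(series), 1))  # well below 500 when phi > 0
```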

  13. A neuromorphic implementation of multiple spike-timing synaptic plasticity rules for large-scale neural networks

    Directory of Open Access Journals (Sweden)

    Runchun Mark Wang

    2015-05-01

    Full Text Available We present a neuromorphic implementation of multiple synaptic plasticity learning rules, which include both Spike Timing Dependent Plasticity (STDP) and Spike Timing Dependent Delay Plasticity (STDDP). We present a fully digital implementation as well as a mixed-signal implementation, both of which use a novel dynamic-assignment time-multiplexing approach and support up to 2^26 (64M) synaptic plasticity elements. Rather than implementing dedicated synapses for particular types of synaptic plasticity, we implemented a more generic synaptic plasticity adaptor array that is separate from the neurons in the neural network. Each adaptor performs synaptic plasticity according to the arrival times of the pre- and post-synaptic spikes assigned to it, and sends out a weighted and/or delayed pre-synaptic spike to the target synapse in the neural network. This strategy provides great flexibility for building complex large-scale neural networks, as a neural network can be configured for multiple synaptic plasticity rules without changing its structure. We validate the proposed neuromorphic implementations with measurement results and illustrate that the circuits are capable of performing both STDP and STDDP. We argue that it is practical to scale the work presented here up to 2^36 (64G) synaptic adaptors on a current high-end FPGA platform.
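
    For readers unfamiliar with the pair-based STDP rule such an adaptor can implement, a minimal sketch is given below. The exponential window, amplitudes and time constants are generic textbook choices, not the values realized in the reported circuits.

```python
import math

def stdp_weight_change(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight update; dt = t_post - t_pre in milliseconds.

    Potentiate when the pre-synaptic spike precedes the post-synaptic spike
    (dt > 0), depress otherwise. All parameters are illustrative.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)

print(stdp_weight_change(+5.0))   # potentiation for pre-before-post
print(stdp_weight_change(-5.0))   # depression for post-before-pre
```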

  14. The INEL beryllium multiplication experiment

    International Nuclear Information System (INIS)

    Smith, J.R.; King, J.J.

    1991-03-01

    The experiment to measure the multiplication of 14-MeV neutrons in bulk beryllium has been completed. The experiment consists of determining the ratio of 56Mn activities induced in a large manganese bath by a central 14-MeV neutron source, with and without a beryllium sample surrounding the source. In the manganese bath method a neutron source is placed at the center of a totally-absorbing aqueous solution of MnSO4. The capture of neutrons by Mn produces a 56Mn activity proportional to the emission rate of the source. As applied to the measurement of the multiplication of 14-MeV neutrons in bulk beryllium, the neutron source is a tritium target placed at the end of the drift tube of a small deuteron accelerator. Surrounding the source is a sample chamber. When the sample chamber is empty, the neutrons go directly to the surrounding MnSO4 solution and produce a 56Mn activity proportional to the neutron emission rate. When the chamber contains a beryllium sample, the neutrons first enter the beryllium and multiply through the (n,2n) process. Neutrons escaping from the beryllium enter the bath and produce a 56Mn activity proportional to the neutron emission rate multiplied by the effective value of the multiplication in bulk beryllium. The ratio of the activities with and without the sample present is proportional to the multiplication value. Detailed calculations of the multiplication and all the systematic effects were made with the Monte Carlo program MCNP, utilizing both the Young and Stewart and the ENDF/B-VI evaluations for beryllium. Both data sets produce multiplication values that are in excellent agreement with the measurements for both raw and corrected values of the multiplication. We conclude that there is no real discrepancy between experimental and calculated values for the multiplication of neutrons in bulk beryllium. 12 figs., 11 tabs., 18 refs

  15. Big Data, Small Sample.

    Science.gov (United States)

    Gerlovina, Inna; van der Laan, Mark J; Hubbard, Alan

    2017-05-20

    Multiple comparisons and small sample size, common characteristics of many types of "Big Data" including those that are produced by genomic studies, present specific challenges that affect reliability of inference. Use of multiple testing procedures necessitates calculation of very small tail probabilities of a test statistic distribution. Results based on large deviation theory provide a formal condition that is necessary to guarantee error rate control given practical sample sizes, linking the number of tests and the sample size; this condition, however, is rarely satisfied. Using methods that are based on Edgeworth expansions (relying especially on the work of Peter Hall), we explore the impact of departures of sampling distributions from typical assumptions on actual error rates. Our investigation illustrates how far the actual error rates can be from the declared nominal levels, suggesting potentially widespread problems with error rate control, specifically excessive false positives. This is an important factor that contributes to the "reproducibility crisis". We also review some other commonly used methods (such as permutation and methods based on finite sampling inequalities) in their application to multiple testing/small sample data. We point out that Edgeworth expansions, providing higher order approximations to the sampling distribution, offer a promising direction for data analysis that could improve reliability of studies relying on large numbers of comparisons with modest sample sizes.
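
    To see the scale of the tail probabilities involved, consider a Bonferroni-style per-test level of alpha/m for m simultaneous tests. The snippet below only illustrates how extreme the required quantiles become; it is not the Edgeworth-expansion analysis itself, and the numbers are illustrative.

```python
alpha = 0.05
for m in (1_000, 100_000, 1_000_000):   # number of simultaneous tests
    per_test = alpha / m                 # Bonferroni per-test significance level
    print(f"m = {m:>9,d}  per-test alpha = {per_test:.2e}")
# With modest sample sizes, normal-theory approximations of such extreme
# tail probabilities can be badly miscalibrated, which is the record's point.
```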

  16. Does developmental timing of exposure to child maltreatment predict memory performance in adulthood? Results from a large, population-based sample.

    Science.gov (United States)

    Dunn, Erin C; Busso, Daniel S; Raffeld, Miriam R; Smoller, Jordan W; Nelson, Charles A; Doyle, Alysa E; Luk, Gigi

    2016-01-01

    Although maltreatment is a known risk factor for multiple adverse outcomes across the lifespan, its effects on cognitive development, especially memory, are poorly understood. Using data from a large, nationally representative sample of young adults (Add Health), we examined the effects of physical and sexual abuse on working and short-term memory in adulthood. We examined the association between exposure to maltreatment as well as its timing of first onset after adjusting for covariates. Of our sample, 16.50% of respondents were exposed to physical abuse and 4.36% to sexual abuse by age 17. An analysis comparing unexposed respondents to those exposed to physical or sexual abuse did not yield any significant differences in adult memory performance. However, two developmental time periods emerged as important for shaping memory following exposure to sexual abuse, but in opposite ways. Relative to non-exposed respondents, those exposed to sexual abuse during early childhood (ages 3-5), had better number recall and those first exposed during adolescence (ages 14-17) had worse number recall. However, other variables, including socioeconomic status, played a larger role (than maltreatment) in working and short-term memory. We conclude that a simple examination of "exposed" versus "unexposed" respondents may obscure potentially important within-group differences that are revealed by examining the effects of age at onset to maltreatment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Standardised Resting Time Prior to Blood Sampling and Diurnal Variation Associated with Risk of Patient Misclassification

    DEFF Research Database (Denmark)

    Bøgh Andersen, Ida; Brasen, Claus L.; Christensen, Henry

    2015-01-01

    BACKGROUND: According to current recommendations, blood samples should be taken in the morning after 15 minutes' resting time. Some components exhibit diurnal variation and, in response to pressures to expand opening hours and reduce waiting time, the aims of this study were to investigate the impact of resting time prior to blood sampling and diurnal variation on biochemical components, including albumin, thyrotropin (TSH), total calcium and sodium in plasma. METHODS: All patients referred to an outpatient clinic for blood sampling were included in the period Nov 2011 until June 2014 (opening ...). ... (p = ...9×10-7) and sodium (p = 8.7×10-16). Only TSH and albumin were clinically significantly influenced by diurnal variation. Resting time had no clinically significant effect. CONCLUSIONS: We found no need for resting 15 minutes prior to blood sampling. However, diurnal variation was found to have a significant ...

  18. Impact of collection container material and holding times on sample integrity for mercury and methylmercury in water

    Energy Technology Data Exchange (ETDEWEB)

    Riscassi, Ami L [ORNL; Miller, Carrie L [ORNL; Brooks, Scott C [ORNL

    2014-01-01

    Mercury (Hg) and methylmercury (MeHg) concentrations in streamwater can vary on short timescales (hourly or less) during storm flow and on a diel cycle; the frequency and timing of sampling required to accurately characterize these dynamics may be difficult to accomplish manually. Automated sampling can assist in sample collection; however, its use has been limited for Hg and MeHg analysis due to stability concerns of trace concentrations during extended storage times. We examined the viability of using automated samplers with disposable low-density polyethylene (LDPE) sample bags to collect industrially contaminated streamwater for unfiltered and filtered Hg and MeHg analysis. Specifically we investigated the effect of holding times ranging from hours to days on streamwater collected during baseflow and storm flow. Unfiltered and filtered Hg and MeHg concentrations decreased with increases in time prior to sample processing; holding times of 24 hours or less resulted in concentration changes (mean 11 ± 7% different) similar to variability in duplicates collected manually during analogous field conditions (mean 7 ± 10% different). Comparisons of samples collected with manual and automated techniques throughout a year for a wide range of stream conditions were also found to be similar to differences observed between duplicate grab samples. These results demonstrate that automated sampling into LDPE bags with holding times of 24 hours or less can be effectively used to collect streamwater for Hg and MeHg analysis, and encourage the testing of these materials and methods for implementation in other aqueous systems where high-frequency sampling is warranted.

  19. Effects of Acupuncture on Gait of Patients with Multiple Sclerosis.

    Science.gov (United States)

    Criado, Maria Begoña; Santos, Maria João; Machado, Jorge; Gonçalves, Arminda Manuela; Greten, Henry Johannes

    2017-11-01

    Multiple sclerosis is considered a complex and heterogeneous disease. Approximately 85% of patients with multiple sclerosis indicate impaired gait as one of the major limitations in their daily life. Acupuncture studies found a reduction of spasticity and improvement of fatigue and imbalance in patients with multiple sclerosis, but there is a lack of studies regarding gait. We designed a study of acupuncture treatment, according to the Heidelberg model of Traditional Chinese Medicine (TCM), to investigate if acupuncture can be a useful therapeutic strategy in patients with gait impairment in multiple sclerosis of relapsing-remitting type. The sample consisted of 20 individuals with diagnosis of multiple sclerosis of relapsing-remitting type. Gait impairment was evaluated by the 25-foot walk test. The results showed differences in time to walk 25 feet following true acupuncture. In contrast, there was no difference in time to walk 25 feet following sham acupuncture. When using true acupuncture, 95% of cases showed an improvement in 25-foot walk test, compared with 45% when sham acupuncture was done. Our study protocol provides evidence that acupuncture treatment can be an attractive option for patients with multiple sclerosis, with gait impairment.

  20. Multiple time-reversed guide-sources in shallow water

    Science.gov (United States)

    Gaumond, Charles F.; Fromm, David M.; Lingevitch, Joseph F.; Gauss, Roger C.; Menis, Richard

    2003-10-01

    Detection in a monostatic, broadband, active sonar system in shallow water is degraded by propagation-induced spreading. The detection improvement from multiple spatially separated guide sources (GSs) is presented as a method to mitigate this degradation. The improvement of detection by using information in a set of one-way transmissions from a variety of positions is shown using sea data. The experimental area is south of the Hudson Canyon off the coast of New Jersey. The data were taken using five elements of a time-reversing VLA. The five elements were contiguous and at midwater depth. The target and guide source was an echo repeater positioned at various ranges and at middepth. The transmitted signals were 3.0- to 3.5-kHz LFMs. The data are analyzed to show the amount of information present in the collection, a baseline probability of detection (PD) without using the collection of GS signals, and the improvement in PD from the use of various sets of GS signals. The dependence of the improvement as a function of range is also shown. [The authors acknowledge support from Dr. Jeffrey Simmen, ONR321OS, and the chief scientist Dr. Charles Holland. Work supported by ONR.

  1. Time delay estimation in a reverberant environment by low rate sampling of impulsive acoustic sources

    KAUST Repository

    Omer, Muhammad

    2012-07-01

    This paper presents a new method of time delay estimation (TDE) using low sample rates of an impulsive acoustic source in a room environment. The proposed method finds the time delay from the room impulse response (RIR) which makes it robust against room reverberations. The RIR is considered a sparse phenomenon and a recently proposed sparse signal reconstruction technique called orthogonal clustering (OC) is utilized for its estimation from the low rate sampled received signal. The arrival time of the direct path signal at a pair of microphones is identified from the estimated RIR and their difference yields the desired time delay. Low sampling rates reduce the hardware and computational complexity and decrease the communication between the microphones and the centralized location. The performance of the proposed technique is demonstrated by numerical simulations and experimental results. © 2012 IEEE.
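
    A minimal sketch of the final step is given below, assuming the two room impulse responses have already been estimated (the paper obtains them via orthogonal clustering from low-rate samples; here they are simply given). The direct-path picking rule and all numbers are illustrative.

```python
import numpy as np

def direct_path_index(rir, threshold=0.5):
    """Index of the first tap whose magnitude exceeds `threshold` times the maximum.

    Assumes the direct path is the first dominant arrival in the
    (already estimated) room impulse response.
    """
    rir = np.abs(np.asarray(rir, dtype=float))
    return int(np.argmax(rir >= threshold * rir.max()))

def time_delay(rir_mic1, rir_mic2, fs):
    """Time-delay estimate (seconds) between two microphones from their RIRs."""
    return (direct_path_index(rir_mic2) - direct_path_index(rir_mic1)) / fs

# Toy example: direct paths at taps 12 and 20 plus weaker reflections
fs = 8_000.0
h1 = np.zeros(64); h1[12] = 1.0; h1[30] = 0.3
h2 = np.zeros(64); h2[20] = 1.0; h2[45] = 0.25
print(time_delay(h1, h2, fs))  # 8 samples / 8 kHz = 1 ms
```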

  2. Transition among synchronous schemes in coupled nonidentical multiple time delay systems

    International Nuclear Information System (INIS)

    Thang Manh Hoang

    2009-01-01

    We present the transition among possible synchronous schemes in coupled nonidentical multiple time delay systems, i.e., lag, projective-lag, complete, anticipating and projective-anticipating synchronization. The number of nonlinear transforms in the master's equation can be different from that in the slave's, and the nonlinear transforms can take various forms. The driving signal is the sum of nonlinearly transformed components of the delayed state variable. Moreover, the equation representing the driving signal is constructed exactly so that the difference between the master's and slave's structures is complemented. The sufficient condition for synchronization is considered by the Krasovskii-Lyapunov theory. Specific examples demonstrate and verify the effectiveness of the proposed models.

  3. IAEA Sampling Plan

    Energy Technology Data Exchange (ETDEWEB)

    Geist, William H. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-15

    The objectives for this presentation are to describe the method that the IAEA uses to determine a sampling plan for nuclear material measurements; describe the terms detection probability and significant quantity; list the three nuclear materials measurement types; describe the sampling method applied to an item facility; and describe multiple method sampling.
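
    As a hedged illustration of the attribute-sampling idea behind such plans, the probability of selecting at least one defected item when n of N items are verified and d are defected follows from the hypergeometric distribution. The helper and the numbers below are illustrative; they are not the IAEA's actual parameters or procedure.

```python
from math import comb

def detection_probability(N, d, n):
    """P(at least one defected item in a random sample of n drawn from N containing d)."""
    if n > N - d:
        return 1.0  # the sample cannot avoid every defected item
    return 1.0 - comb(N - d, n) / comb(N, n)

def sample_size_for(N, d, target_dp):
    """Smallest sample size achieving the target detection probability."""
    for n in range(1, N + 1):
        if detection_probability(N, d, n) >= target_dp:
            return n
    return N

print(sample_size_for(N=200, d=5, target_dp=0.95))  # illustrative values only
```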

  4. Multiple-time-stepping generalized hybrid Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Escribano, Bruno, E-mail: bescribano@bcamath.org [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); Akhmatskaya, Elena [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao (Spain); Reich, Sebastian [Universität Potsdam, Institut für Mathematik, D-14469 Potsdam (Germany); Azpiroz, Jon M. [Kimika Fakultatea, Euskal Herriko Unibertsitatea (UPV/EHU) and Donostia International Physics Center (DIPC), P.K. 1072, Donostia (Spain)

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also allow it to outperform the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by the generalized hybrid Monte Carlo improve the stability of MTS and allow larger step sizes to be achieved in the simulation of complex systems.
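
    The force-splitting idea underlying MTS can be sketched with a plain RESPA-style leapfrog in which slow forces are applied with the long step and fast forces are sub-stepped. This is only a generic illustration; it omits the GSHMC machinery described above (generalized momentum updates, shadow Hamiltonians, acceptance tests), and all parameters are made up.

```python
import numpy as np

def mts_leapfrog(q, p, fast_force, slow_force, dt, n_inner, mass=1.0):
    """One outer step of a RESPA-style multiple-time-stepping leapfrog.

    `fast_force` is evaluated n_inner times per outer step of length dt,
    `slow_force` only twice (half-kicks at the step boundaries).
    """
    p = p + 0.5 * dt * slow_force(q)          # slow half-kick
    h = dt / n_inner
    for _ in range(n_inner):                  # inner loop over fast forces
        p = p + 0.5 * h * fast_force(q)
        q = q + h * p / mass
        p = p + 0.5 * h * fast_force(q)
    p = p + 0.5 * dt * slow_force(q)          # slow half-kick
    return q, p

# Toy system: stiff spring (fast) plus weak spring (slow)
fast = lambda q: -100.0 * q
slow = lambda q: -0.5 * q
q, p = np.array([1.0]), np.array([0.0])
for _ in range(1000):
    q, p = mts_leapfrog(q, p, fast, slow, dt=0.05, n_inner=10)
print(q, p)
```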

  5. Interval sampling methods and measurement error: a computer simulation.

    Science.gov (United States)

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
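
    A minimal sketch of this kind of simulation is given below (illustrative parameters, not the authors' program): a behavior stream is simulated at 1-s resolution and each interval method is scored against the true fraction of time the event occurred, which makes the well-known biases of the methods visible.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated observation period at 1-s resolution: 1 = target event occurring
period = 600                                     # seconds
behavior = (rng.random(period) < 0.2).astype(int)
true_fraction = behavior.mean()

interval = 10                                    # interval duration in seconds
intervals = behavior.reshape(-1, interval)

momentary = intervals[:, -1].mean()              # momentary time sampling (last instant)
partial   = (intervals.max(axis=1) == 1).mean()  # partial-interval recording
whole     = (intervals.min(axis=1) == 1).mean()  # whole-interval recording

for name, est in [("MTS", momentary), ("PIR", partial), ("WIR", whole)]:
    print(f"{name}: estimate={est:.2f}  error={est - true_fraction:+.2f}")
```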

  6. Effective use of multibeam antenna and space-time multiple access technology in modern mobile communication systems

    OpenAIRE

    Moskalets, N. V.

    2015-01-01

    The possibility of using the radio-frequency spectrum efficiently, with a corresponding increase in the capacity of a mobile communication system employing space-time multiple access, through the use of a multibeam base-station antenna is considered.

  7. Low-dose multiple-information retrieval algorithm for X-ray grating-based imaging

    International Nuclear Information System (INIS)

    Wang Zhentian; Huang Zhifeng; Chen Zhiqiang; Zhang Li; Jiang Xiaolei; Kang Kejun; Yin Hongxia; Wang Zhenchang; Stampanoni, Marco

    2011-01-01

    The present work proposes a low-dose information retrieval algorithm for the X-ray grating-based multiple-information imaging (GB-MII) method, which can retrieve the attenuation, refraction and scattering information of samples from only three images. The algorithm aims at reducing the exposure time and the dose delivered to the sample. The multiple-information retrieval problem in GB-MII is solved by transforming a set of nonlinear equations into linear ones and exploiting properties of the trigonometric functions. The proposed algorithm is validated by experiments on both a conventional X-ray source and a synchrotron X-ray source, and compared with the traditional multiple-image-based retrieval algorithm. The experimental results show that our algorithm is comparable with the traditional retrieval algorithm and is especially suitable for high signal-to-noise systems.
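
    For context, one generic way to retrieve three quantities from three images is equally spaced phase stepping, in which each intensity sample is modeled as I_k = a + b*cos(psi + 2*pi*k/3). The sketch below shows only this textbook algebra; it is not the linearised low-dose algorithm proposed in the record above, and the usual grating-interferometry convention (mean relates to attenuation, phase to refraction, visibility to scattering) is an assumption rather than a statement from this work.

```python
import numpy as np

def three_step_retrieval(I0, I1, I2):
    """Recover offset a, amplitude b and phase psi from
    I_k = a + b*cos(psi + 2*pi*k/3), k = 0, 1, 2 (generic phase-stepping algebra)."""
    a = (I0 + I1 + I2) / 3.0
    b_cos = (2.0 * I0 - I1 - I2) / 3.0
    b_sin = (I2 - I1) / np.sqrt(3.0)
    b = np.hypot(b_cos, b_sin)
    psi = np.arctan2(b_sin, b_cos)
    return a, b, psi

# Round-trip check with synthetic values
a, b, psi = 1.0, 0.4, 0.7
I = [a + b * np.cos(psi + 2 * np.pi * k / 3) for k in range(3)]
print(three_step_retrieval(*I))   # ~ (1.0, 0.4, 0.7)
```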

  8. Active neutron multiplicity analysis and Monte Carlo calculations

    International Nuclear Information System (INIS)

    Krick, M.S.; Ensslin, N.; Langner, D.G.; Miller, M.C.; Siebelist, R.; Stewart, J.E.; Ceo, R.N.; May, P.K.; Collins, L.L. Jr

    1994-01-01

    Active neutron multiplicity measurements of high-enrichment uranium metal and oxide samples have been made at Los Alamos and Y-12. The data from the measurements of standards at Los Alamos were analyzed to obtain values for neutron multiplication and source-sample coupling. These results are compared to equivalent results obtained from Monte Carlo calculations. An approximate relationship between coupling and multiplication is derived and used to correct doubles rates for multiplication and coupling. The utility of singles counting for uranium samples is also examined

  9. Delay-Dependent Asymptotic Stability of Cohen-Grossberg Models with Multiple Time-Varying Delays

    Directory of Open Access Journals (Sweden)

    Xiaofeng Liao

    2007-01-01

    Full Text Available Dynamical behavior of a class of Cohen-Grossberg models with multiple time-varying delays is studied in detail. Sufficient delay-dependent criteria to ensure local and global asymptotic stabilities of the equilibrium of this network are derived by constructing suitable Lyapunov functionals. The obtained conditions are shown to be less conservative and restrictive than those reported in the known literature. Some numerical examples are included to demonstrate our results.

  10. Sampling times influence the estimate of parameters in the Weibull dissolution model

    Czech Academy of Sciences Publication Activity Database

    Čupera, J.; Lánský, Petr; Šklubalová, Z.

    2015-01-01

    Roč. 78, Oct 12 (2015), s. 171-176 ISSN 0928-0987 Institutional support: RVO:67985823 Keywords : dissolution * Fisher information * rate constant * optimal sampling times Subject RIV: BA - General Mathematics Impact factor: 3.773, year: 2015
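
    The Weibull dissolution profile referred to in this record is commonly written as F(t) = F_inf*(1 - exp(-(t/t_d)^beta)). The sketch below evaluates a crude sensitivity-based proxy for the information two assumed sampling schedules carry about the scale parameter t_d; all parameter values are invented for illustration and this is not the paper's Fisher-information analysis.

```python
import numpy as np

def weibull_dissolution(t, f_inf=100.0, t_d=30.0, beta=1.2):
    """Cumulative percent dissolved at time t (min); parameters are illustrative."""
    t = np.asarray(t, dtype=float)
    return f_inf * (1.0 - np.exp(-(t / t_d) ** beta))

# Two candidate sampling schedules (min), purely hypothetical
early_heavy = np.array([2, 5, 10, 15, 20, 30])
late_heavy  = np.array([10, 30, 60, 90, 120, 180])

for name, times in [("early-heavy", early_heavy), ("late-heavy", late_heavy)]:
    # Finite-difference sensitivity of the profile to the scale parameter t_d;
    # the sum of squared sensitivities is a crude information proxy under
    # equal-variance measurement noise.
    dF = (weibull_dissolution(times, t_d=30.3) - weibull_dissolution(times, t_d=30.0)) / 0.3
    print(name, round(float(np.sum(dF ** 2)), 2))
```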

  11. Effect of 153Sm-EDTMP on survival time in patients with nasopharyngeal carcinoma and multiple bone metastases

    International Nuclear Information System (INIS)

    Fan Wei; Zheng Zongyuan; Xu Guangpu

    2004-01-01

    Objective: To evaluate the effect on survival of Samarium-153-ethylene diamine tetramethylene phosphonate (153Sm-EDTMP) in patients with nasopharyngeal carcinoma (NPC) and multiple bone metastases. Methods: From 1993 to 1999, 160 patients (127 men, 33 women; median age 35 years) presented with NPC and multiple bone metastases. Of these, 40 patients had undergone chemotherapy, and 72 palliative radiotherapy. Patients were randomly divided into four groups: Group 1 (N = 20) received analgesics (control); Groups 2, 3 and 4 (N = 80, 40, and 20, respectively) received one, two or three courses, respectively, of 153Sm-EDTMP (77.7 MBq/kg/course; course interval, 4 wk). Results: Eight patients died of non-cancer-related causes, and 24 were lost to follow-up. The median survival time for Group 1 (7.8 months) was significantly less (p < 0.05) than that of Groups 2, 3 and 4 (11.6, 13.4 and 12.8 months, respectively). Patients given 153Sm-EDTMP who had had previous external radiation survived longer (p < 0.05) than those in the other treatment groups. Conclusions: Internal radiotherapy with 153Sm-EDTMP can extend survival time in patients with nasopharyngeal carcinoma and multiple bone metastases; when combined with external radiotherapy in appropriate patients, its effect on survival time is enhanced. (authors)

  12. Simulation of Cavity Flow by the Lattice Boltzmann Method using Multiple-Relaxation-Time scheme

    International Nuclear Information System (INIS)

    Ryu, Seung Yeob; Kang, Ha Nok; Seo, Jae Kwang; Yun, Ju Hyeon; Zee, Sung Quun

    2006-01-01

    Recently, the lattice Boltzmann method (LBM) has gained much attention for its ability to simulate fluid flows, and for its potential advantages over conventional CFD methods. The key advantages of LBM are (1) suitability for parallel computations, (2) absence of the need to solve the time-consuming Poisson equation for pressure, and (3) the ease with which multiphase flows, complex geometries and interfacial dynamics may be treated. The LBM using a relaxation technique was introduced by Higuera and Jimenez to overcome some drawbacks of lattice gas automata (LGA), such as large statistical noise, a limited range of physical parameters, non-Galilean invariance, and implementation difficulty in three-dimensional problems. The simplest LBM is the lattice Bhatnagar-Gross-Krook (LBGK) equation, which is based on a single-relaxation-time (SRT) approximation. Due to its extreme simplicity, the LBGK equation has become the most popular lattice Boltzmann model in spite of its well-known deficiencies, for example in simulating high-Reynolds-number flows. The multiple-relaxation-time (MRT) LBM was originally developed by D'Humieres. Lallemand and Luo suggest that MRT models are much more stable than LBGK, because the different relaxation times can be individually tuned to achieve 'optimal' stability. A lid-driven cavity flow is selected as the test problem because it is geometrically simple yet has geometrically singular points in the flow. Results are compared with those using the SRT and MRT models in the LBGK method and with previous simulation data using the Navier-Stokes equations for the same flow conditions. In summary, the LBM using the MRT model introduces much less spatial oscillation near geometrical singular points, which is important for the successful simulation of higher Reynolds number flows
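
    For reference, the single-relaxation-time (BGK) update and its multiple-relaxation-time generalisation discussed above are commonly written as follows (standard LBM notation, assumed rather than quoted from this work):

```latex
% Single-relaxation-time (BGK) lattice Boltzmann update
f_i(\mathbf{x} + \mathbf{e}_i \Delta t,\, t + \Delta t) - f_i(\mathbf{x}, t)
  = -\frac{1}{\tau}\,\bigl[ f_i(\mathbf{x}, t) - f_i^{\mathrm{eq}}(\mathbf{x}, t) \bigr]

% Multiple-relaxation-time (MRT) generalisation: collisions act in moment space,
% where M maps distributions to moments and S is a diagonal matrix of
% individually tunable relaxation rates
f_i(\mathbf{x} + \mathbf{e}_i \Delta t,\, t + \Delta t) - f_i(\mathbf{x}, t)
  = -\bigl[\mathbf{M}^{-1}\,\mathbf{S}\,\mathbf{M}\bigr]_{ij}\,
    \bigl[ f_j(\mathbf{x}, t) - f_j^{\mathrm{eq}}(\mathbf{x}, t) \bigr]
```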

  13. Idealized flow patterns and transit times in gas/liquid contacting trays with multiple box downcomers

    International Nuclear Information System (INIS)

    D'Arcy, D.

    1977-08-01

    Trays with multiple box downcomers are often used in chemical process plants nowadays. In order to make a theoretical assessment of the mass transfer efficiency of such trays, knowledge is needed of the time spent by the liquid at various parts of the tray. An idealized but reasonable flow pattern has been assumed and the local velocities and transit times along ten equally-spaced stream lines have been computed. Numerical and graphical results are presented. (author)

  14. Development of a real time multiple target, multi camera tracker for civil security applications

    Science.gov (United States)

    Åkerlund, Hans

    2009-09-01

    A surveillance system has been developed that can use multiple TV-cameras to detect and track personnel and objects in real time in public areas. The document describes the development and the system setup. The system is called NIVS Networked Intelligent Video Surveillance. Persons in the images are tracked and displayed on a 3D map of the surveyed area.

  15. Use of multiple age tracers to estimate groundwater residence times and long-term recharge rates in arid southern Oman

    Science.gov (United States)

    Müller, Th.; Osenbrück, K.; Strauch, G.; Pavetich, S.; Al-Mashaikhi, K.-S.; Herb, C.; Merchel, S.; Rugel, G.; Aeschbach, W.; Sanford, Ward E.

    2016-01-01

    Multiple age tracers were measured to estimate groundwater residence times in the regional aquifer system underlying southwestern Oman. This area, known as the Najd, is one of the most arid areas in the world and is planned to be the main agricultural center of the Sultanate of Oman in the near future. The three isotopic age tracers 4He, 14C and 36Cl were measured in waters collected from wells along a line that extended roughly from the Dhofar Mountains near the Arabian Sea northward 400 km into the Empty Quarter of the Arabian Peninsula. The wells sampled were mostly open to the Umm Er Radhuma confined aquifer, although, some were completed in the mostly unconfined Rus aquifer. The combined results from the three tracers indicate the age of the confined groundwater is  100 ka in the central section north of the mountains, and up to and > one Ma in the Empty Quarter. The 14C data were used to help calibrate the 4He and 36Cl data. Mixing models suggest that long open boreholes north of the mountains compromise 14C-only interpretations there, in contrast to 4He and 36Cl calculations that are less sensitive to borehole mixing. Thus, only the latter two tracers from these more distant wells were considered reliable. In addition to the age tracers, δ2H and δ18O data suggest that seasonal monsoon and infrequent tropical cyclones are both substantial contributors to the recharge. The study highlights the advantages of using multiple chemical and isotopic data when estimating groundwater travel times and recharge rates, and differentiating recharge mechanisms.

  16. Sample pooling for real-time PCR detection and virulence determination of the footrot pathogen Dichelobacter nodosus.

    Science.gov (United States)

    Frosth, Sara; König, Ulrika; Nyman, Ann-Kristin; Aspán, Anna

    2017-09-01

    Dichelobacter nodosus is the principal cause of ovine footrot, and strain virulence is an important factor in disease severity. Therefore, detection and virulence determination of D. nodosus is important for proper diagnosis of the disease. Today this is possible using real-time PCR analysis. Analysis of large numbers of samples is costly and laborious; therefore, pooling of individual samples is common in surveillance programs. However, pooling can reduce the sensitivity of the method. The aim of this study was to develop a pooling method for real-time PCR analysis that would allow sensitive detection and simultaneous virulence determination of D. nodosus. A total of 225 sheep from 17 flocks were sampled using ESwabs within the Swedish Footrot Control Program in 2014. Samples were first analysed individually and then in pools of five by real-time PCR assays targeting the 16S rRNA and aprV2/B2 genes of D. nodosus. Each pool consisted of four negative samples and one D. nodosus-positive sample with varying amounts of the bacterium. In the individual analysis, 61 (27.1%) samples were positive in the 16S rRNA and the aprV2/B2 PCR assays and 164 (72.9%) samples were negative. All samples positive in the aprV2/B2 PCR assay were of the aprB2 variant. The pooled analysis showed that all 41 pools were also positive for D. nodosus 16S rRNA and the aprB2 variant. The diagnostic sensitivity for pooled and individual samples was therefore similar. Our method includes concentration of the bacteria before DNA extraction, which may account for the maintenance of diagnostic sensitivity. Diagnostic sensitivity in the real-time PCR assays of the pooled samples was comparable to the sensitivity obtained for individually analysed samples. Even sub-clinical infections could be detected in the pooled PCR samples, which is important for control of the disease. This method may therefore be implemented in footrot control programs, where it can replace analysis of individual samples.
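
    A rough back-of-the-envelope check of why a pool of five need not cost much qPCR sensitivity (purely illustrative; the assay's actual cut-offs are not stated here): diluting one positive swab five-fold shifts the expected Ct by about log2(5), roughly 2.3 cycles, at 100% amplification efficiency.

```python
import math

def ct_shift(pool_size, efficiency=1.0):
    """Expected Ct increase when one positive swab is pooled with
    (pool_size - 1) negative swabs, assuming perfect mixing.

    efficiency = 1.0 means the template doubles every cycle.
    """
    return math.log(pool_size, 1.0 + efficiency)

print(round(ct_shift(5), 2))   # ~2.32 cycles for a pool of five
```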

  17. Utilizing the Zero-One Linear Programming Constraints to Draw Multiple Sets of Matched Samples from a Non-Treatment Population as Control Groups for the Quasi-Experimental Design

    Science.gov (United States)

    Li, Yuan H.; Yang, Yu N.; Tompkins, Leroy J.; Modarresi, Shahpar

    2005-01-01

    The statistical technique, "Zero-One Linear Programming," that has successfully been used to create multiple tests with similar characteristics (e.g., item difficulties, test information and test specifications) in the area of educational measurement, was deemed to be a suitable method for creating multiple sets of matched samples to be…

  18. Real-time multiple human perception with color-depth cameras on a mobile robot.

    Science.gov (United States)

    Zhang, Hao; Reardon, Christopher; Parker, Lynne E

    2013-10-01

    The ability to perceive humans is an essential requirement for safe and efficient human-robot interaction. In real-world applications, the need for a robot to interact in real time with multiple humans in a dynamic, 3-D environment presents a significant challenge. The recent availability of commercial color-depth cameras allow for the creation of a system that makes use of the depth dimension, thus enabling a robot to observe its environment and perceive in the 3-D space. Here we present a system for 3-D multiple human perception in real time from a moving robot equipped with a color-depth camera and a consumer-grade computer. Our approach reduces computation time to achieve real-time performance through a unique combination of new ideas and established techniques. We remove the ground and ceiling planes from the 3-D point cloud input to separate candidate point clusters. We introduce the novel information concept, depth of interest, which we use to identify candidates for detection, and that avoids the computationally expensive scanning-window methods of other approaches. We utilize a cascade of detectors to distinguish humans from objects, in which we make intelligent reuse of intermediary features in successive detectors to improve computation. Because of the high computational cost of some methods, we represent our candidate tracking algorithm with a decision directed acyclic graph, which allows us to use the most computationally intense techniques only where necessary. We detail the successful implementation of our novel approach on a mobile robot and examine its performance in scenarios with real-world challenges, including occlusion, robot motion, nonupright humans, humans leaving and reentering the field of view (i.e., the reidentification challenge), human-object and human-human interaction. We conclude with the observation that the incorporation of the depth information, together with the use of modern techniques in new ways, we are able to create an

  19. Adaptive multiple importance sampling for Gaussian processes

    Czech Academy of Sciences Publication Activity Database

    Xiong, X.; Šmídl, Václav; Filippone, M.

    2017-01-01

    Roč. 87, č. 8 (2017), s. 1644-1665 ISSN 0094-9655 R&D Projects: GA MŠk(CZ) 7F14287 Institutional support: RVO:67985556 Keywords : Gaussian Process * Bayesian estimation * Adaptive importance sampling Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability Impact factor: 0.757, year: 2016 http://library.utia.cas.cz/separaty/2017/AS/smidl-0469804.pdf

  20. Optimization of Sample Preparation and Instrumental Parameters for the Rapid Analysis of Drugs of Abuse in Hair samples by MALDI-MS/MS Imaging

    Science.gov (United States)

    Flinders, Bryn; Beasley, Emma; Verlaan, Ricky M.; Cuypers, Eva; Francese, Simona; Bassindale, Tom; Clench, Malcolm R.; Heeren, Ron M. A.

    2017-08-01

    Matrix-assisted laser desorption/ionization-mass spectrometry imaging (MALDI-MSI) has been employed to rapidly screen longitudinally sectioned drug user hair samples for cocaine and its metabolites using continuous raster imaging. Optimization of the spatial resolution and raster speed was performed on intact cocaine-contaminated hair samples. The optimized settings (100 × 150 μm at 0.24 mm/s) were subsequently used to examine longitudinally sectioned drug user hair samples. The MALDI-MS/MS images showed the distribution of the most abundant cocaine product ion at m/z 182. Using the optimized settings, multiple hair samples obtained from two users were analyzed in approximately 3 h: six times faster than the standard spot-to-spot acquisition method. Quantitation was achieved using longitudinally sectioned control hair samples sprayed with a cocaine dilution series. A multiple reaction monitoring (MRM) experiment was also performed using the 'dynamic pixel' imaging method to screen for cocaine and a range of its metabolites, in order to differentiate between contaminated hairs and drug users. Cocaine, benzoylecgonine, and cocaethylene were detectable, in agreement with analyses carried out using the standard LC-MS/MS method.

  1. A lab-on-a-chip system with integrated sample preparation and loop-mediated isothermal amplification for rapid and quantitative detection of Salmonella spp. in food samples

    DEFF Research Database (Denmark)

    Sun, Yi; Than Linh, Quyen; Hung, Tran Quang

    2015-01-01

    ... and usually take a few hours to days to complete. In response to the demand for rapid on-line or at-site detection of pathogens, in this study, we describe for the first time an eight-chamber lab-on-a-chip (LOC) system with integrated magnetic beads-based sample preparation and loop-mediated isothermal amplification (LAMP) for rapid and quantitative detection of Salmonella spp. in food samples. The whole diagnostic procedure, including DNA isolation, isothermal amplification, and real-time detection, was accomplished in a single chamber. Up to eight samples could be handled simultaneously, and the system was capable of detecting Salmonella at a concentration of 50 cells per test within 40 min. The simple design, together with the high level of integration, isothermal amplification, and quantitative analysis of multiple samples in a short time, will greatly enhance the practical applicability of the LOC system for rapid...

  2. Time-Efficiency of Sorting Chironomidae Surface-Floating Pupal Exuviae Samples from Urban Trout Streams in Northeast Minnesota, USA

    Directory of Open Access Journals (Sweden)

    Alyssa M Anderson

    2012-10-01

    Full Text Available Collections of Chironomidae surface-floating pupal exuviae (SFPE) provide an effective means of assessing water quality in streams. Although not widely used in the United States, the technique is not new and has been shown to be more cost-efficient than traditional dip-net sampling techniques in an organically enriched stream in an urban landscape. The intent of this research was to document the efficiency of sorting SFPE samples relative to dip-net samples in trout streams with catchments varying in amount of urbanization and in impervious surface. Samples were collected with both SFPE and dip-net methods from 17 sample sites located on 12 trout streams in Duluth, MN, USA. We quantified the time needed to sort subsamples of 100 macroinvertebrates from dip-net samples, and subsamples of less than or greater than 100 chironomid exuviae from SFPE samples. For larger SFPE samples, the time required to subsample up to 300 exuviae was also recorded. The average time to sort subsamples of 100 specimens was 22.5 minutes for SFPE samples, compared to 32.7 minutes for 100 macroinvertebrates in dip-net samples. The average time to sort up to 300 exuviae was 37.7 minutes. These results indicate that sorting SFPE samples is more time-efficient than traditional dip-net techniques in trout streams with varying catchment characteristics. doi: 10.5324/fn.v31i0.1380. Published online: 17 October 2012.

  3. Time-delay-induced dynamical behaviors for an ecological vegetation growth system driven by cross-correlated multiplicative and additive noises.

    Science.gov (United States)

    Wang, Kang-Kang; Ye, Hui; Wang, Ya-Jun; Li, Sheng-Hong

    2018-05-14

    In this paper, the modified potential function, the stationary probability distribution function (SPDF), the mean growth time and the mean degeneration time for a vegetation growth system with time delay are investigated, where the vegetation system is assumed to be disturbed by cross-correlated multiplicative and additive noises. The results reveal that the multiplicative and additive noises can both reduce the stability and speed up the decline of the vegetation system, while the strength of the noise correlation and the time delay can both enhance the stability of the vegetation and slow down the depression process of the ecological system. On the other hand, with regard to the impacts of noises and time delay on the mean development and degeneration processes of the ecological system, it is found that (1) in the development process of the vegetation population, increasing the noise correlation strength and the time delay restrains the regime shift from the barren state to the boom one, while increasing the additive noise can lead to a fast regime shift from the barren state to the boom one; (2) conversely, in the depression process of the ecological system, increasing the strength of the correlated noise and the time delay prevents the regime shift from the boom state to the barren one. Comparatively, increasing the additive and multiplicative noises can accelerate the regime shift from the boom state to the barren state.
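    For orientation, the class of models the entry refers to can be summarised in generic form as a delayed Langevin equation driven by cross-correlated multiplicative and additive Gaussian white noises; the notation below is a common textbook form and an assumption on our part, not the authors' exact system.

        % Generic delayed vegetation-growth dynamics with correlated noises (illustrative)
        \[
          \frac{\mathrm{d}V(t)}{\mathrm{d}t} = f\!\big(V(t),\,V(t-\tau)\big)
            + g\!\big(V(t)\big)\,\xi(t) + \eta(t),
        \]
        \[
          \langle \xi(t)\xi(t')\rangle = 2D\,\delta(t-t'),\qquad
          \langle \eta(t)\eta(t')\rangle = 2Q\,\delta(t-t'),\qquad
          \langle \xi(t)\eta(t')\rangle = 2\lambda\sqrt{DQ}\,\delta(t-t'),
        \]
        % where D and Q are the multiplicative and additive noise strengths, lambda is the
        % cross-correlation strength, and tau is the time delay.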

  4. Observation of diffusion phenomena of liquid phase with multiple components

    International Nuclear Information System (INIS)

    Eguchi, Wataru

    1979-01-01

    The diffusion phenomena of a liquid phase with multiple components were directly observed, and the factors contributing to complex material transfer were investigated in comparison with earlier experimental results. The best method used so far for observing the diffusion behavior of a liquid phase is to trace the time history of the concentration distribution of each component in an unsteady diffusion process. Methods of directly observing the concentration distribution are usually classified into the analysis of diffused samples, the use of radioactive isotope tracers, and the measurement of light refraction and transmission. The most suitable of these is to trace the time history using a position-scanning spectrophotometer, and an improved spectrophotometer of this type was manufactured for trial. The outline of the measuring system and the details of the optical system of this new spectrophotometer are explained, and the resolving power for position measurement is described with numerical calculations. As observation examples of the diffusion phenomena of a liquid phase with multiple components, the diffusion of multiple electrolytes in aqueous solution, the observation of material transfer phenomena accompanied by heterogeneous and single-phase chemical reactions, and the observation of the concentration distribution in the liquid diaphragm of a reaction absorption system are described. For each experimental item, the test apparatus, the sample material, the test process, the test results and the evaluation are explained in detail, and the diffusion phenomena of a liquid phase with multiple components were well elucidated. (Nakai, Y.)

  5. One-sample determination of glomerular filtration rate (GFR) in children. An evaluation based on 75 consecutive patients

    DEFF Research Database (Denmark)

    Henriksen, Ulrik Lütken; Kanstrup, Inge-Lis; Henriksen, Jens Henrik Sahl

    2013-01-01

    the plasma radioactivity curve. The one-sample clearance was determined from a single plasma sample collected at 60, 90 or 120 min after injection according to the one-pool method. Results. The overall accuracy of one-sample clearance was excellent with mean numeric difference to the reference value of 0.......7-1.7 mL/min. In 64 children, the one-sample clearance was within ± 4 mL/min of the multiple-sample value. However, in 11 children the numeric difference exceeded 4 mL/min (4.4-19.5). Analysis of age, body size, distribution volume, indicator retention time, clearance level, curve fitting, and sampling...... fraction (15%) larger discrepancies are found. If an accurate clearance value is essential a multiple-sample determination should be performed....

  6. Multidimensional scaling analysis of financial time series based on modified cross-sample entropy methods

    Science.gov (United States)

    He, Jiayi; Shang, Pengjian; Xiong, Hui

    2018-06-01

    Stocks, as a concrete manifestation of financial time series containing plenty of potential information, are often used in the study of financial time series. In this paper, we use stock data to recognize patterns through the dissimilarity matrix based on modified cross-sample entropy, and then provide three-dimensional perceptual maps of the results through multidimensional scaling. Two modified multidimensional scaling methods are proposed in this paper: multidimensional scaling based on Kronecker-delta cross-sample entropy (MDS-KCSE) and multidimensional scaling based on permutation cross-sample entropy (MDS-PCSE). These two methods use Kronecker-delta-based cross-sample entropy and permutation-based cross-sample entropy to replace the distance or dissimilarity measurement in classical multidimensional scaling (MDS). Multidimensional scaling based on Chebyshev distance (MDSC) is employed to provide a reference for comparison. Our analysis reveals clear clustering both in synthetic data and in 18 indices from diverse stock markets. This implies that time series generated by the same model are more likely to share similar irregularity than others, and that differences between stock indices, caused by country or region and by different financial policies, are reflected in the irregularity of the data. In the synthetic data experiments, not only can time series generated by different models be distinguished, but series generated under different parameters of the same model can also be detected. In the financial data experiment, the stock indices are clearly divided into five groups. Through analysis, we find that they correspond to five regions: Europe, North America, South America, Asia-Pacific (with the exception of mainland China), and mainland China together with Russia. The results also demonstrate that MDS-KCSE and MDS-PCSE provide more effective divisions in these experiments than MDSC.
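    As a rough illustration of the pipeline (pairwise dissimilarities fed to MDS with a precomputed matrix), the sketch below uses a classical template-matching cross-sample entropy as a stand-in dissimilarity; the Kronecker-delta and permutation variants of the paper, the toy data, and all parameter values are replaced by illustrative assumptions.

        import numpy as np
        from sklearn.manifold import MDS

        def cross_sample_entropy(u, v, m=2, r=0.2):
            """Classical cross-sample entropy between two standardized series (illustrative
            stand-in for the paper's Kronecker-delta / permutation variants)."""
            u = (u - u.mean()) / u.std()
            v = (v - v.mean()) / v.std()
            def matches(length):
                tu = np.array([u[i:i + length] for i in range(len(u) - length)])
                tv = np.array([v[i:i + length] for i in range(len(v) - length)])
                d = np.max(np.abs(tu[:, None, :] - tv[None, :, :]), axis=2)  # Chebyshev
                return np.sum(d <= r)
            b, a = matches(m), matches(m + 1)
            return -np.log(a / b) if a > 0 and b > 0 else np.nan

        def dissimilarity_matrix(series):
            n = len(series)
            d = np.zeros((n, n))
            for i in range(n):
                for j in range(i + 1, n):
                    d[i, j] = d[j, i] = cross_sample_entropy(series[i], series[j])
            return d

        rng = np.random.default_rng(0)
        series = [np.cumsum(rng.normal(size=400)) for _ in range(6)]   # toy stand-ins for indices
        coords = MDS(n_components=3, dissimilarity="precomputed",
                     random_state=0).fit_transform(dissimilarity_matrix(series))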

  7. Multivariate survivorship analysis using two cross-sectional samples.

    Science.gov (United States)

    Hill, M E

    1999-11-01

    As an alternative to survival analysis with longitudinal data, I introduce a method that can be applied when one observes the same cohort in two cross-sectional samples collected at different points in time. The method allows for the estimation of log-probability survivorship models that estimate the influence of multiple time-invariant factors on survival over a time interval separating two samples. This approach can be used whenever the survival process can be adequately conceptualized as an irreversible single-decrement process (e.g., mortality, the transition to first marriage among a cohort of never-married individuals). Using data from the Integrated Public Use Microdata Series (Ruggles and Sobek 1997), I illustrate the multivariate method through an investigation of the effects of race, parity, and educational attainment on the survival of older women in the United States.

  8. Social attribution test--multiple choice (SAT-MC) in schizophrenia: comparison with community sample and relationship to neurocognitive, social cognitive and symptom measures.

    Science.gov (United States)

    Bell, Morris D; Fiszdon, Joanna M; Greig, Tamasine C; Wexler, Bruce E

    2010-09-01

    This is the first report on the use of the Social Attribution Task - Multiple Choice (SAT-MC) to assess social cognitive impairments in schizophrenia. The SAT-MC was originally developed for autism research, and consists of a 64-second animation showing geometric figures enacting a social drama, with 19 multiple choice questions about the interactions. Responses from 85 community-dwelling participants and 66 participants with SCID confirmed schizophrenia or schizoaffective disorders (Scz) revealed highly significant group differences. When the two samples were combined, SAT-MC scores were significantly correlated with other social cognitive measures, including measures of affect recognition, theory of mind, self-report of egocentricity and the Social Cognition Index from the MATRICS battery. Using a cut-off score, 53% of Scz were significantly impaired on SAT-MC compared with 9% of the community sample. Most Scz participants with impairment on SAT-MC also had impairment on affect recognition. Significant correlations were also found with neurocognitive measures but with less dependence on verbal processes than other social cognitive measures. Logistic regression using SAT-MC scores correctly classified 75% of both samples. Results suggest that this measure may have promise, but alternative versions will be needed before it can be used in pre-post or longitudinal designs. (c) 2009 Elsevier B.V. All rights reserved.

  9. Multiple single-element transducer photoacoustic computed tomography system

    Science.gov (United States)

    Kalva, Sandeep Kumar; Hui, Zhe Zhi; Pramanik, Manojit

    2018-02-01

    Light absorption by the chromophores (hemoglobin, melanin, water, etc.) present in any biological tissue results in a local temperature rise. This rise in temperature generates pressure waves due to the thermoelastic expansion of the tissue. In a circular scanning photoacoustic computed tomography (PACT) system, these pressure waves can be detected using a single-element ultrasound transducer (SUST) (rotating a full 360° around the sample) or using a circular array transducer. A SUST takes several minutes to acquire the PA data around the sample, whereas a circular array transducer takes only a fraction of a second. Hence, for real-time imaging, circular array transducers are preferred. However, these circular array transducers are custom made, expensive and not easily available on the market, whereas SUSTs are cheap and readily available. Using SUSTs for PACT systems is therefore still cost-effective. In order to reduce the scanning time to a few seconds, instead of using a single SUST (rotating 360°), multiple SUSTs can be used at the same time to acquire the PA data. This reduces the scanning time two-fold in the case of two SUSTs (rotating 180°), or four-fold and eight-fold in the case of four SUSTs (rotating 90°) and eight SUSTs (rotating 45°), respectively. Here we show that with multiple SUSTs, similar PA images (numerical and experimental phantom data) can be obtained as with a single SUST.

  10. A multiple-time-scale approach to the control of ITBs on JET

    International Nuclear Information System (INIS)

    Laborde, L.; Mazon, D.; Moreau, D.; Moreau, D.; Ariola, M.; Cordoliani, V.; Tala, T.

    2005-01-01

    The simultaneous real-time control of the current and temperature gradient profiles could lead to the steady state sustainment of an internal transport barrier (ITB) and so to a stationary optimized plasma regime. Recent experiments in JET have demonstrated significant progress in achieving such a control: different current and temperature gradient target profiles have been reached and sustained for several seconds using a controller based on a static linear model. It's worth noting that the inverse safety factor profile evolves on a slow time scale (resistive time) while the normalized electron temperature gradient reacts on a faster one (confinement time). Moreover these experiments have shown that the controller was sensitive to rapid plasma events such as transient ITBs during the safety factor profile evolution or MHD instabilities which modify the pressure profiles on the confinement time scale. In order to take into account the different dynamics of the controlled profiles and to better react to rapid plasma events the control technique is being improved by using a multiple-time-scale approximation. The paper describes the theoretical analysis and closed-loop simulations using a control algorithm based on two-time-scale state-space model. These closed-loop simulations using the full dynamic but linear model used for the controller design to simulate the plasma response have demonstrated that this new controller allows the normalized electron temperature gradient target profile to be reached faster than the one used in previous experiments. (A.C.)

  11. A multiple-time-scale approach to the control of ITBs on JET

    Energy Technology Data Exchange (ETDEWEB)

    Laborde, L.; Mazon, D.; Moreau, D. [EURATOM-CEA Association (DSM-DRFC), CEA Cadarache, 13 - Saint Paul lez Durance (France); Moreau, D. [Culham Science Centre, EFDA-JET, Abingdon, OX (United Kingdom); Ariola, M. [EURATOM/ENEA/CREATE Association, Univ. Napoli Federico II, Napoli (Italy); Cordoliani, V. [Ecole Polytechnique, 91 - Palaiseau (France); Tala, T. [EURATOM-Tekes Association, VTT Processes (Finland)

    2005-07-01

    The simultaneous real-time control of the current and temperature gradient profiles could lead to the steady state sustainment of an internal transport barrier (ITB) and so to a stationary optimized plasma regime. Recent experiments in JET have demonstrated significant progress in achieving such a control: different current and temperature gradient target profiles have been reached and sustained for several seconds using a controller based on a static linear model. It's worth noting that the inverse safety factor profile evolves on a slow time scale (resistive time) while the normalized electron temperature gradient reacts on a faster one (confinement time). Moreover these experiments have shown that the controller was sensitive to rapid plasma events such as transient ITBs during the safety factor profile evolution or MHD instabilities which modify the pressure profiles on the confinement time scale. In order to take into account the different dynamics of the controlled profiles and to better react to rapid plasma events the control technique is being improved by using a multiple-time-scale approximation. The paper describes the theoretical analysis and closed-loop simulations using a control algorithm based on two-time-scale state-space model. These closed-loop simulations using the full dynamic but linear model used for the controller design to simulate the plasma response have demonstrated that this new controller allows the normalized electron temperature gradient target profile to be reached faster than the one used in previous experiments. (A.C.)

  12. A time-sorting pitfall trap and temperature datalogger for the sampling of surface-active arthropods

    Directory of Open Access Journals (Sweden)

    Marshall S. McMunn

    2017-04-01

    Full Text Available Nearly all arthropods display consistent patterns of activity according to time of day. These patterns of activity often limit the extent of animal co-occurrence in space and time. Quantifying when particular species are active and how activity varies with environmental conditions is difficult without the use of automated devices due to the need for continuous monitoring. Time-sorting pitfall traps passively collect active arthropods into containers with known beginning and end sample times. The trap described here, similar to previous designs, sorts arthropods by the time they fall into the trap using a rotating circular rack of vials. This trap represents a reduction in size, cost, and time of construction, while increasing the number of time windows sampled. The addition of temperature data collection extends functionality, while the use of store-bought components and inclusion of customizable software make the trap easy to reproduce and use.

  13. Sampling Methodologies for Epidemiologic Surveillance of Men Who Have Sex with Men and Transgender Women in Latin America: An Empiric Comparison of Convenience Sampling, Time Space Sampling, and Respondent Driven Sampling

    Science.gov (United States)

    Clark, J. L.; Konda, K. A.; Silva-Santisteban, A.; Peinado, J.; Lama, J. R.; Kusunoki, L.; Perez-Brumer, A.; Pun, M.; Cabello, R.; Sebastian, J. L.; Suarez-Ognio, L.; Sanchez, J.

    2014-01-01

    Alternatives to convenience sampling (CS) are needed for HIV/STI surveillance of most-at-risk populations in Latin America. We compared CS, time space sampling (TSS), and respondent driven sampling (RDS) for recruitment of men who have sex with men (MSM) and transgender women (TW) in Lima, Peru. During concurrent 60-day periods from June–August, 2011, we recruited MSM/TW for epidemiologic surveillance using CS, TSS, and RDS. A total of 748 participants were recruited through CS, 233 through TSS, and 127 through RDS. The TSS sample included the largest proportion of TW (30.7 %) and the lowest percentage of subjects who had previously participated in HIV/STI research (14.9 %). The prevalence of newly diagnosed HIV infection, according to participants’ self-reported previous HIV diagnosis, was highest among TSS recruits (17.9 %) compared with RDS (12.6 %) and CS (10.2 %). TSS identified diverse populations of MSM/TW with higher prevalences of HIV/STIs not accessed by other methods. PMID:24362754

  14. Sampling methodologies for epidemiologic surveillance of men who have sex with men and transgender women in Latin America: an empiric comparison of convenience sampling, time space sampling, and respondent driven sampling.

    Science.gov (United States)

    Clark, J L; Konda, K A; Silva-Santisteban, A; Peinado, J; Lama, J R; Kusunoki, L; Perez-Brumer, A; Pun, M; Cabello, R; Sebastian, J L; Suarez-Ognio, L; Sanchez, J

    2014-12-01

    Alternatives to convenience sampling (CS) are needed for HIV/STI surveillance of most-at-risk populations in Latin America. We compared CS, time space sampling (TSS), and respondent driven sampling (RDS) for recruitment of men who have sex with men (MSM) and transgender women (TW) in Lima, Peru. During concurrent 60-day periods from June-August, 2011, we recruited MSM/TW for epidemiologic surveillance using CS, TSS, and RDS. A total of 748 participants were recruited through CS, 233 through TSS, and 127 through RDS. The TSS sample included the largest proportion of TW (30.7 %) and the lowest percentage of subjects who had previously participated in HIV/STI research (14.9 %). The prevalence of newly diagnosed HIV infection, according to participants' self-reported previous HIV diagnosis, was highest among TSS recruits (17.9 %) compared with RDS (12.6 %) and CS (10.2 %). TSS identified diverse populations of MSM/TW with higher prevalences of HIV/STIs not accessed by other methods.

  15. A hybrid procedure for MSW generation forecasting at multiple time scales in Xiamen City, China.

    Science.gov (United States)

    Xu, Lilai; Gao, Peiqing; Cui, Shenghui; Liu, Chun

    2013-06-01

    Accurate forecasting of municipal solid waste (MSW) generation is crucial and fundamental for the planning, operation and optimization of any MSW management system. Comprehensive information on waste generation at month-scale, medium-term and long-term time scales is especially needed, considering the MSW management upgrades facing many developing countries. Several existing models are available but are of little use in forecasting MSW generation at multiple time scales. The goal of this study is to propose a hybrid model that combines the seasonal autoregressive integrated moving average (SARIMA) model and grey system theory to forecast MSW generation at multiple time scales without needing to consider other variables such as demographics and socioeconomic factors. To demonstrate its applicability, a case study of Xiamen City, China was performed. Results show that the model is robust enough to fit and forecast seasonal and annual dynamics of MSW generation at month-scale, medium- and long-term time scales with the desired accuracy. At the month scale, MSW generation in Xiamen City will peak at 132.2 thousand tonnes in July 2015 - 1.5 times the volume in July 2010. In the medium term, annual MSW generation will increase to 1518.1 thousand tonnes by 2015 at an average growth rate of 10%. In the long term, a large volume of MSW will be output annually, increasing to 2486.3 thousand tonnes by 2020 - 2.5 times the value for 2010. The hybrid model proposed in this paper can enable decision makers to develop integrated policies and measures for waste management over the long term. Copyright © 2013 Elsevier Ltd. All rights reserved.
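    A minimal sketch of such a two-part scheme is given below: a seasonal ARIMA model fitted to a monthly series for month-scale forecasts, and a GM(1,1) grey model fitted to annual totals for the longer horizons. The synthetic data, model orders and horizons are illustrative assumptions, not the values used for Xiamen City.

        import numpy as np
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        def gm11_forecast(x, steps):
            """GM(1,1) grey model fitted to a short annual series x (illustrative)."""
            x1 = np.cumsum(x)                                    # accumulated generating series
            z1 = 0.5 * (x1[1:] + x1[:-1])                        # background (mean) sequence
            B = np.column_stack([-z1, np.ones(len(z1))])
            a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]      # developing coefficient, grey input
            k = np.arange(1, len(x) + steps)
            x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a
            x_hat = np.diff(np.concatenate([[x[0]], x1_hat]))    # restore by first differencing
            return x_hat[len(x) - 1:]                            # keep only the forecast horizon

        # Toy monthly MSW series (thousand tonnes) with trend and yearly seasonality
        rng = np.random.default_rng(1)
        t = np.arange(120)
        monthly = 80 + 0.4 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 3, 120)

        sarima = SARIMAX(monthly, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)
        month_scale_forecast = sarima.forecast(steps=24)         # next two years, monthly

        annual = monthly.reshape(-1, 12).sum(axis=1)             # ten annual totals
        long_term_forecast = gm11_forecast(annual, steps=10)     # next ten annual totals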

  16. Real-time PCR to supplement gold-standard culture-based detection of Legionella in environmental samples.

    Science.gov (United States)

    Collins, S; Jorgensen, F; Willis, C; Walker, J

    2015-10-01

    Culture remains the gold standard for the enumeration of environmental Legionella. However, it has several drawbacks, including long incubation and poor sensitivity, causing delays in response times to outbreaks of Legionnaires' disease. This study aimed to validate real-time PCR assays to quantify Legionella species (ssrA gene), Legionella pneumophila (mip gene) and Leg. pneumophila serogroup 1 (wzm gene) to support culture-based detection in a frontline public health laboratory. Each qPCR assay had 100% specificity, excellent sensitivity (5 GU/reaction) and reproducibility. Comparison of the assays to culture-based enumeration of Legionella from 200 environmental samples showed that they had a negative predictive value of 100%. Thirty-eight samples were positive for Legionella species by culture and qPCR. One hundred samples were negative by both methods, whereas 62 samples were negative by culture but positive by qPCR. The average log10 increase between culture and qPCR for Legionella spp. and Leg. pneumophila was 0.72 (P = 0.0002) and 0.51 (P = 0.006), respectively. The qPCR assays can be conducted on the same 1 L water sample as culture and can thus be used as a supplementary technique to screen out negative samples and give a more rapid indication of positive samples. The assay could prove informative in public health investigations to identify or rule out sources of Legionella, as well as to specifically identify Leg. pneumophila serogroup 1 in a timely manner not possible with culture. © 2015 The Society for Applied Microbiology.
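    Quantification in assays of this kind is typically read off a standard curve of quantification cycle (Cq) against log10 genomic units; the sketch below shows that conversion with made-up standard-curve values, which are assumptions for illustration only and not figures from this study.

        import numpy as np

        # Hypothetical 10-fold dilution series: log10 genomic units (GU) per reaction vs. Cq
        log10_gu = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
        cq       = np.array([36.1, 32.8, 29.4, 26.1, 22.7, 19.3])

        slope, intercept = np.polyfit(log10_gu, cq, 1)           # linear standard curve
        efficiency = 10 ** (-1.0 / slope) - 1.0                  # PCR amplification efficiency

        def quantify(sample_cq):
            """Convert a sample Cq into genomic units per reaction via the standard curve."""
            return 10 ** ((sample_cq - intercept) / slope)

        print(f"efficiency ~ {efficiency:.1%}; Cq 27.5 ~ {quantify(27.5):.0f} GU/reaction")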

  17. Single-shot echo-planar imaging of multiple sclerosis: effects of varying echo time

    International Nuclear Information System (INIS)

    Wolansky, L.J.; Chong, S.; Liu, W.C.; Kang, E.; Simpson, S.W.; Karimi, S.; Akbari, H.

    1999-01-01

    Our aim was to determine the relative merits of short and long echo times (TE) in single-shot echo-planar imaging of cerebral lesions such as those of multiple sclerosis. Seven patients with clinically definite multiple sclerosis were imaged at 1.5 T. Patients were scanned with spin-echo, single-shot echo-planar imaging, using TEs of 45, 75, 105, and 135 ms. Region of interest (ROI) measurements were performed on 36 lesions at or above the level of the corona radiata. The mean image contrast (IC) was highest (231.1) for a TE of 45 ms, followed by 75 ms (218.9), 105 ms (217.9), and 135 ms (191.6). When mean contrast-to-noise ratios (C/N) were compared, the value was again highest (29.7) for TE 45 ms, followed by 75 ms (28.9), 105 ms (28.5), and 135 ms (26.3). In a lesion-by-lesion comparison, TE 45 ms had the highest IC and C/N in the largest number of cases (50% and 47.2%, respectively). IC and C/N for TE 45 ms were superior to those of 75 ms in 64% and 58% of lesions, respectively. These results support the use of relatively short TEs for single-shot echo-planar imaging in the setting of cerebral lesions such as multiple sclerosis. (orig.)

  18. Multiple-Time-Scales Hierarchical Frequency Stability Control Strategy of Medium-Voltage Isolated Microgrid

    DEFF Research Database (Denmark)

    Zhao, Zhuoli; Yang, Ping; Guerrero, Josep M.

    2016-01-01

    In this paper, an islanded medium-voltage (MV) microgrid placed in Dongao Island is presented, which integrates renewable-energy-based distributed generations (DGs), energy storage system (ESS), and local loads. In an isolated microgrid without connection to the main grid to support the frequency......, it is more complex to control and manage. Thus in order to maintain the frequency stability in multiple-time-scales, a hierarchical control strategy is proposed. The proposed control architecture divides the system frequency in three zones: (A) stable zone, (B) precautionary zone and (C) emergency zone...... of Zone B. Theoretical analysis, time-domain simulation and field test results under various conditions and scenarios in the Dongao Island microgrid are presented to prove the validity of the introduced control strategy....

  19. Analysis of volatile organic compounds in compost samples: A potential tool to determine appropriate composting time.

    Science.gov (United States)

    Zhu, Fengxiang; Pan, Zaifa; Hong, Chunlai; Wang, Weiping; Chen, Xiaoyang; Xue, Zhiyong; Yao, Yanlai

    2016-12-01

    Changes in volatile organic compound contents in compost samples during pig manure composting were studied using a headspace solid-phase micro-extraction (HS-SPME) method followed by gas chromatography with mass spectrometric detection (GC/MS). Parameters affecting the SPME procedure were optimized as follows: the coating was carbon molecular sieve/polydimethylsiloxane (CAR/PDMS) fiber, the temperature was 60°C and the extraction time was 30 min. Under these conditions, 87 compounds were identified from 17 composting samples. Most of the volatile components could only be detected before day 22. However, benzenes, alkanes and alkenes increased and eventually stabilized after day 22. Phenol and acid substances, which are important factors for compost quality, were almost undetectable on day 39 in natural compost (NC) samples and on day 13 in maggot-treated compost (MC) samples. Our results indicate that the approach can be effectively used to determine composting times by analysis of volatile substances in compost samples. An appropriate composting time not only ensures the quality of compost and reduces the loss of composting material but also reduces the generation of hazardous substances. The appropriate composting times for MC and NC were approximately 22 days and 40 days, respectively, during the summer in Zhejiang. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Freeze core sampling to validate time-lapse resistivity monitoring of the hyporheic zone.

    Science.gov (United States)

    Toran, Laura; Hughes, Brian; Nyquist, Jonathan; Ryan, Robert

    2013-01-01

    A freeze core sampler was used to characterize hyporheic zone storage during a stream tracer test. The pore water from the frozen core showed tracer lingered in the hyporheic zone after the tracer had returned to background concentration in collocated well samples. These results confirmed evidence of lingering subsurface tracer seen in time-lapse electrical resistivity tomographs. The pore water exhibited brine exclusion (ion concentrations in ice lower than source water) in a sediment matrix, despite the fast freezing time. Although freeze core sampling provided qualitative evidence of lingering tracer, it proved difficult to quantify tracer concentration because the amount of brine exclusion during freezing could not be accurately determined. Nonetheless, the additional evidence for lingering tracer supports using time-lapse resistivity to detect regions of low fluid mobility within the hyporheic zone that can act as chemically reactive zones of importance in stream health. © 2012, The Author(s). GroundWater © 2012, National Ground Water Association.

  1. Global exponential stability for reaction-diffusion recurrent neural networks with multiple time varying delays

    International Nuclear Information System (INIS)

    Lou, X.; Cui, B.

    2008-01-01

    In this paper we consider the problem of exponential stability for recurrent neural networks with multiple time-varying delays and reaction-diffusion terms. The activation functions are supposed to be bounded and globally Lipschitz continuous. By means of a Lyapunov functional, sufficient conditions are derived which guarantee global exponential stability of the delayed neural network. Finally, a numerical example is given to show the correctness of our analysis. (author)
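    For orientation, a generic member of the model class studied here (a recurrent network with reaction-diffusion terms and multiple time-varying delays) can be written as follows; the notation is a common textbook form and is our assumption, not the paper's exact system.

        % Delayed reaction-diffusion recurrent network, generic form (illustrative)
        \[
          \frac{\partial u_i(t,x)}{\partial t}
            = \nabla\!\cdot\!\big(D_i \nabla u_i(t,x)\big) - a_i u_i(t,x)
              + \sum_{j=1}^{n} b_{ij} f_j\big(u_j(t,x)\big)
              + \sum_{j=1}^{n} c_{ij} f_j\big(u_j(t-\tau_{ij}(t),x)\big) + I_i,
        \]
        % with bounded, globally Lipschitz activations f_j and time-varying delays
        % tau_ij(t), as assumed in the abstract.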

  2. Reducing bias in population and landscape genetic inferences: the effects of sampling related individuals and multiple life stages.

    Science.gov (United States)

    Peterman, William; Brocato, Emily R; Semlitsch, Raymond D; Eggert, Lori S

    2016-01-01

    In population or landscape genetics studies, an unbiased sampling scheme is essential for generating accurate results, but logistics may lead to deviations from the sample design. Such deviations may come in the form of sampling multiple life stages. Presently, it is largely unknown what effect sampling different life stages can have on population or landscape genetic inference, or how mixing life stages can affect the parameters being measured. Additionally, the removal of siblings from a data set is considered best-practice, but direct comparisons of inferences made with and without siblings are limited. In this study, we sampled embryos, larvae, and adult Ambystoma maculatum from five ponds in Missouri, and analyzed them at 15 microsatellite loci. We calculated allelic richness, heterozygosity and effective population sizes for each life stage at each pond and tested for genetic differentiation (FST and DC) and isolation-by-distance (IBD) among ponds. We tested for differences in each of these measures between life stages, and in a pooled population of all life stages. All calculations were done with and without sibling pairs to assess the effect of sibling removal. We also assessed the effect of reducing the number of microsatellites used to make inference. No statistically significant differences were found among ponds or life stages for any of the population genetic measures, but patterns of IBD differed among life stages. There was significant IBD when using adult samples, but tests using embryos, larvae, or a combination of the three life stages were not significant. We found that increasing the ratio of larval or embryo samples in the analysis of genetic distance weakened the IBD relationship, and when using DC, the IBD was no longer significant when larvae and embryos exceeded 60% of the population sample. Further, power to detect an IBD relationship was reduced when fewer microsatellites were used in the analysis.
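    Isolation-by-distance of the kind tested here is commonly assessed with a Mantel test between genetic and geographic distance matrices; the sketch below is a generic permutation version on toy five-pond matrices, not the study's data or exact procedure.

        import numpy as np

        def mantel_test(gen_d, geo_d, n_perm=9999, seed=0):
            """Mantel test: correlation between two distance matrices, with significance
            from permuting the rows/columns of one matrix (illustrative)."""
            iu = np.triu_indices_from(gen_d, k=1)
            r_obs = np.corrcoef(gen_d[iu], geo_d[iu])[0, 1]
            rng = np.random.default_rng(seed)
            n, hits = gen_d.shape[0], 0
            for _ in range(n_perm):
                p = rng.permutation(n)
                if np.corrcoef(gen_d[p][:, p][iu], geo_d[iu])[0, 1] >= r_obs:
                    hits += 1
            return r_obs, (hits + 1) / (n_perm + 1)

        # Toy symmetric distance matrices standing in for five ponds
        rng = np.random.default_rng(1)
        xy = rng.uniform(0, 10, size=(5, 2))
        geo = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)
        gen = 0.05 * geo + rng.normal(0, 0.02, size=geo.shape)
        gen = (gen + gen.T) / 2
        np.fill_diagonal(gen, 0)
        print(mantel_test(gen, geo))       # (observed r, one-sided p-value)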

  3. Reducing bias in population and landscape genetic inferences: the effects of sampling related individuals and multiple life stages

    Directory of Open Access Journals (Sweden)

    William Peterman

    2016-03-01

    Full Text Available In population or landscape genetics studies, an unbiased sampling scheme is essential for generating accurate results, but logistics may lead to deviations from the sample design. Such deviations may come in the form of sampling multiple life stages. Presently, it is largely unknown what effect sampling different life stages can have on population or landscape genetic inference, or how mixing life stages can affect the parameters being measured. Additionally, the removal of siblings from a data set is considered best-practice, but direct comparisons of inferences made with and without siblings are limited. In this study, we sampled embryos, larvae, and adult Ambystoma maculatum from five ponds in Missouri, and analyzed them at 15 microsatellite loci. We calculated allelic richness, heterozygosity and effective population sizes for each life stage at each pond and tested for genetic differentiation (FST and DC and isolation-by-distance (IBD among ponds. We tested for differences in each of these measures between life stages, and in a pooled population of all life stages. All calculations were done with and without sibling pairs to assess the effect of sibling removal. We also assessed the effect of reducing the number of microsatellites used to make inference. No statistically significant differences were found among ponds or life stages for any of the population genetic measures, but patterns of IBD differed among life stages. There was significant IBD when using adult samples, but tests using embryos, larvae, or a combination of the three life stages were not significant. We found that increasing the ratio of larval or embryo samples in the analysis of genetic distance weakened the IBD relationship, and when using DC, the IBD was no longer significant when larvae and embryos exceeded 60% of the population sample. Further, power to detect an IBD relationship was reduced when fewer microsatellites were used in the analysis.

  4. Multiple periodic solutions for a discrete time model of plankton allelopathy

    OpenAIRE

    Zhang Jianbao; Fang Hui

    2006-01-01

    We study a discrete time model of the growth of two species of plankton with competitive and allelopathic effects on each other: N1(k+1) = N1(k)exp{r1(k)-a11(k)N1(k)-a12(k)N2(k)-b1(k)N1(k)N2(k)}, N2(k+1) = N2(k)exp{r2(k)-a21(k)N1(k)-a22(k)N2(k)-b2(k)N1(k)N2(k)}. A set of sufficient conditions is obtained for the existence of multiple positive periodic solutions for this model. The approach is based on Mawhin's continuation theorem of coincidence degree theory as well as some a priori estimates. Some...

  5. Characterization of halogenated DBPs and identification of new DBPs trihalomethanols in chlorine dioxide treated drinking water with multiple extractions.

    Science.gov (United States)

    Han, Jiarui; Zhang, Xiangru; Liu, Jiaqi; Zhu, Xiaohu; Gong, Tingting

    2017-08-01

    Chlorine dioxide (ClO2) is a widely used alternative disinfectant due to its high biocidal efficiency and low-level formation of trihalomethanes and haloacetic acids. A major portion of total organic halogen (TOX), a collective parameter for all halogenated DBPs, formed in ClO2-treated drinking water is still unknown. A commonly used pretreatment method for analyzing halogenated DBPs in drinking water is one-time liquid-liquid extraction (LLE), which may lead to a substantial loss of DBPs prior to analysis. In this study, characterization and identification of polar halogenated DBPs in a ClO2-treated drinking water sample were conducted by pretreating the sample with multiple extractions. Compared to one-time LLE, the combined four-time LLEs improved the recovery of TOX by 2.3 times. The developmental toxicity of the drinking water sample pretreated with the combined four-time LLEs was 1.67 times higher than that pretreated with one-time LLE. With the aid of ultra-performance liquid chromatography/electrospray ionization-triple quadrupole mass spectrometry, a new group of polar halogenated DBPs, trihalomethanols, were detected in the drinking water sample pretreated with multiple extractions; two of them, trichloromethanol and bromodichloromethanol, were identified with synthesized standard compounds. Moreover, these trihalomethanols were found to be the transformation products of trihalomethanes formed during ClO2 disinfection. The results indicate that multiple LLEs can significantly improve extraction efficiencies of polar halogenated DBPs and constitute a better pretreatment method for characterizing and identifying new polar halogenated DBPs in drinking water. Copyright © 2017. Published by Elsevier B.V.

  6. Time-correlated pulse-height measurements of low-multiplying nuclear materials

    Energy Technology Data Exchange (ETDEWEB)

    Miller, E.C., E-mail: ericcm@umich.edu [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI (United States); Dolan, J.L.; Clarke, S.D.; Pozzi, S.A. [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI (United States); Tomanin, A.; Peerani, P. [European Commission EC-JRC-IPSC, Ispra (Italy); Marleau, P. [Sandia National Laboratories, Livermore, CA (United States); Mattingly, J.K. [North Carolina State University, Raleigh, NC (United States)

    2013-11-21

    Methods for the determination of the subcritical neutron multiplication of nuclear materials are of interest in the field of nuclear nonproliferation and safeguards. A series of measurements were performed at the Joint Research Center facility in Ispra, Italy to investigate the possibility of using a time-correlated pulse-height (TCPH) analysis to estimate the sub-critical multiplication of nuclear material. The objective of the measurements was to evaluate the effectiveness of this technique, and to benchmark the simulation capabilities of MCNPX-PoliMi/MPPost. In this campaign, two low-multiplication samples were measured: a 1-kg mixed oxide (MOX) powder sample and several low-mass plutonium–gallium (PuGa) disks. The measured results demonstrated that the sensitivity of the TCPH technique could not clearly distinguish samples with very-low levels of multiplication. However, the simulated TCPH distributions agree well with the measured data, within 12% for all cases, validating the simulation capabilities of MCNPX-PoliMi/MPPost. To investigate the potential of the TCPH method for identifying high-multiplication samples, the validated MCNPX-PoliMi/MPPost codes were used to simulate sources of higher multiplications. Lastly, a characterization metric, the cumulative region integral (CRI), was introduced to estimate the level of multiplication in a source. However, this response was shown to be insensitive over the range of multiplications of interest. -- Highlights: •Present results of measurements of MOX fuel and PuGa disks. •Compared measurement results to simulations performed using MCNPX-Polimi and MPPost. •Investigated using correlated γ–n pairs to determine the multiplication of a system.

  7. Closed-Loop Surface Related Multiple Estimation

    NARCIS (Netherlands)

    Lopez Angarita, G.A.

    2016-01-01

    Surface-related multiple elimination (SRME) is one of the most commonly used methods for suppressing surface multiples. However, in order to obtain an accurate surface multiple estimation, dense source and receiver sampling is required. The traditional approach to this problem is performing data

  8. Application of Multiple Evaluation Models in Brazil

    Directory of Open Access Journals (Sweden)

    Rafael Victal Saliba

    2008-07-01

    Full Text Available Based on two different samples, this article tests the performance of a number of Value Drivers commonly used by finance practitioners for evaluating companies, through simple cross-sectional regression models that estimate the parameters associated with each Value Driver, denominated Market Multiples. We are able to diagnose the behavior of several multiples in the period 1994-2004, with an outlook also on the particularities of the economic activities performed by the sample companies (and their impacts on performance) through a subsequent analysis that segregates the sample companies by sector. Extrapolating the simple multiples valuation standards used by analysts at the main financial institutions in Brazil, we find that adjusting the ratio formulation to allow for an intercept does not provide satisfactory results in terms of pricing-error reduction. The results found, in spite of evidencing a certain relative and absolute superiority among the multiples, may not be generically representative, given sample limitations.
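    The two formulations discussed (a pure ratio multiple versus a regression that allows for an intercept) can be compared on synthetic data as sketched below; the value driver, the pricing-error measure and all numbers are illustrative assumptions, not the article's sample or results.

        import numpy as np

        rng = np.random.default_rng(0)
        earnings = rng.lognormal(mean=3.0, sigma=0.6, size=200)          # toy value driver
        prices = 12.0 * earnings * rng.lognormal(0.0, 0.2, size=200)     # toy market prices

        # (a) Ratio form: price = multiple * driver (regression through the origin)
        multiple = np.sum(prices * earnings) / np.sum(earnings ** 2)

        # (b) Adjusted form: price = alpha + beta * driver (intercept allowed)
        beta, alpha = np.polyfit(earnings, prices, 1)

        def pricing_error(pred, actual):
            return np.median(np.abs(pred - actual) / actual)             # median absolute % error

        print("ratio multiple:", round(multiple, 2))
        print("error without intercept:", pricing_error(multiple * earnings, prices))
        print("error with intercept:   ", pricing_error(alpha + beta * earnings, prices))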

  9. Focusing of light energy inside a scattering medium by controlling the time-gated multiple light scattering

    Science.gov (United States)

    Jeong, Seungwon; Lee, Ye-Ryoung; Choi, Wonjun; Kang, Sungsam; Hong, Jin Hee; Park, Jin-Sung; Lim, Yong-Sik; Park, Hong-Gyu; Choi, Wonshik

    2018-05-01

    The efficient delivery of light energy is a prerequisite for the non-invasive imaging and stimulating of target objects embedded deep within a scattering medium. However, the injected waves experience random diffusion by multiple light scattering, and only a small fraction reaches the target object. Here, we present a method to counteract wave diffusion and to focus multiple-scattered waves at the deeply embedded target. To realize this, we experimentally inject light into the reflection eigenchannels of a specific flight time to preferably enhance the intensity of those multiple-scattered waves that have interacted with the target object. For targets that are too deep to be visible by optical imaging, we demonstrate a more than tenfold enhancement in light energy delivery in comparison with ordinary wave diffusion cases. This work will lay a foundation to enhance the working depth of imaging, sensing and light stimulation.

  10. Algebraic Approaches to Space-Time Code Construction for Multiple-Antenna Communication

    OpenAIRE

    Raviteja, U; Sharanappa, I; Vanamali, B; Kumar, Vijay P

    2011-01-01

    A major challenge in wireless communications is overcoming the deleterious effects of fading, a phenomenon largely responsible for the seemingly inevitable dropped call. Multiple-antenna communication systems, commonly referred to as MIMO systems, employ multiple antennas at both transmitter and receiver, thereby creating a multitude of signalling pathways between transmitter and receiver. These multiple pathways give the signal a diversity advantage with which to combat fading. Apart fro...

  11. Transit Timing Observations from Kepler: III. Confirmation of 4 Multiple Planet Systems by a Fourier-Domain Study of Anti-correlated Transit Timing Variations

    Energy Technology Data Exchange (ETDEWEB)

    Steffen, Jason H.; /Fermilab; Fabrycky, Daniel C.; /Lick Observ.; Ford, Eric B.; /Florida U.; Carter, Joshua A.; /Harvard-Smithsonian Ctr. Astrophys.; Fressin, Francois; /Harvard-Smithsonian Ctr. Astrophys.; Holman, Matthew J.; /Harvard-Smithsonian Ctr. Astrophys.; Lissauer, Jack J.; /NASA, Ames; Rowe, Jason F.; /SETI Inst., Mtn. View /NASA, Ames; Ragozzine, Darin; /Harvard-Smithsonian Ctr. Astrophys.; Welsh, William F.; /Caltech; Borucki, William J.; /NASA, Ames /UC, Santa Barbara

    2012-01-01

    We present a method to confirm the planetary nature of objects in systems with multiple transiting exoplanet candidates. This method involves a Fourier-domain analysis of the deviations in the transit times from a constant period that result from dynamical interactions within the system. The combination of observed anticorrelations in the transit times and mass constraints from dynamical stability allow us to claim the discovery of four planetary systems, Kepler-25, Kepler-26, Kepler-27 and Kepler-28, containing eight planets and one additional planet candidate.

  12. Using multiple sampling approaches to measure sexual risk-taking among young people in Haiti: programmatic implications.

    Science.gov (United States)

    Speizer, Ilene S; Beauvais, Harry; Gómez, Anu Manchikanti; Outlaw, Theresa Finn; Roussel, Barbara

    2009-12-01

    No previous published research has examined the applicability of varying methods for identifying young people who are at high risk of experiencing unintended pregnancy and acquiring HIV infection. This study compares three surveys of young people aged 15-24 in Port-au-Prince, Haiti, in terms of their sociodemographic characteristics and sexual behaviors and the surveys' usefulness for identifying young people at high risk and for program planning. The surveys consist of responses from: a representative sample of young people in the 2005-06 Haiti Demographic and Health Survey (HDHS), a 2004 facility-based study, and a 2006-07 venue-based study that used the Priorities for Local AIDS Control Efforts (PLACE) method. The facility-based and PLACE studies included larger proportions of single, sexually experienced young people and people who knew someone with HIV/AIDS than did the HDHS. More respondents in the PLACE sample had multiple sex partners in the past year and received money or gifts in return for sex, compared with respondents in the facility study. At first and last sex, more PLACE respondents used contraceptives, including condoms. Experience of pregnancy was most commonly reported in the data from the facility-based sample; however, more ever-pregnant PLACE respondents than others reported ever having terminated a pregnancy. Program managers seeking to implement prevention activities should consider using facility- or venue-based methods to identify and understand the behaviors of young people at high risk.

  13. Sampling challenges in a study examining refugee resettlement.

    Science.gov (United States)

    Sulaiman-Hill, Cheryl Mr; Thompson, Sandra C

    2011-03-15

    As almost half of all refugees currently under United Nations protection are from Afghanistan or Iraq and significant numbers have already been resettled outside the region of origin, it is likely that future research will examine their resettlement needs. A number of methodological challenges confront researchers working with culturally and linguistically diverse groups; however, few detailed articles are available to inform other studies. The aim of this paper is to outline challenges with sampling and recruitment of socially invisible refugee groups, describing the method adopted for a mixed methods exploratory study assessing mental health, subjective wellbeing and resettlement perspectives of Afghan and Kurdish refugees living in New Zealand and Australia. Sampling strategies used in previous studies with similar refugee groups were considered before determining the approach to recruitment. A snowball approach was adopted for the study, with multiple entry points into the communities being used to choose as wide a range of people as possible to provide further contacts and reduce selection bias. Census data were used to assess the representativeness of the sample. A sample of 193 former refugee participants was recruited in Christchurch (n = 98) and Perth (n = 95); 47% were of Afghan and 53% of Kurdish ethnicity. A good gender balance (males 52%, females 48%) was achieved overall, mainly as a result of the sampling method used. Differences in the demographic composition of groups in each location were observed, especially in relation to the length of time spent in a refugee situation and time since arrival, reflecting variations in national humanitarian quota intakes. Although some measures were problematic, Census data comparison to assess reasonable representativeness of the study sample was generally reassuring. Snowball sampling, with multiple initiation points to reduce selection bias, was necessary to locate and identify participants, provide reassurance and

  14. Sampling challenges in a study examining refugee resettlement

    Directory of Open Access Journals (Sweden)

    Thompson Sandra C

    2011-03-01

    Full Text Available Abstract Background As almost half of all refugees currently under United Nations protection are from Afghanistan or Iraq and significant numbers have already been resettled outside the region of origin, it is likely that future research will examine their resettlement needs. A number of methodological challenges confront researchers working with culturally and linguistically diverse groups; however, few detailed articles are available to inform other studies. The aim of this paper is to outline challenges with sampling and recruitment of socially invisible refugee groups, describing the method adopted for a mixed methods exploratory study assessing mental health, subjective wellbeing and resettlement perspectives of Afghan and Kurdish refugees living in New Zealand and Australia. Sampling strategies used in previous studies with similar refugee groups were considered before determining the approach to recruitment Methods A snowball approach was adopted for the study, with multiple entry points into the communities being used to choose as wide a range of people as possible to provide further contacts and reduce selection bias. Census data was used to assess the representativeness of the sample. Results A sample of 193 former refugee participants was recruited in Christchurch (n = 98 and Perth (n = 95, 47% were of Afghan and 53% Kurdish ethnicity. A good gender balance (males 52%, females 48% was achieved overall, mainly as a result of the sampling method used. Differences in the demographic composition of groups in each location were observed, especially in relation to the length of time spent in a refugee situation and time since arrival, reflecting variations in national humanitarian quota intakes. Although some measures were problematic, Census data comparison to assess reasonable representativeness of the study sample was generally reassuring. Conclusions Snowball sampling, with multiple initiation points to reduce selection bias, was

  15. Real-time tracking and fast retrieval of persons in multiple surveillance cameras of a shopping mall

    NARCIS (Netherlands)

    Bouma, H.; Baan, J.; Landsmeer, S.; Kruszynski, K.J.; Antwerpen, G. van; Dijk, J.

    2013-01-01

    The capability to track individuals in CCTV cameras is important for e.g. surveillance applications at large areas such as train stations, airports and shopping centers. However, it is laborious to track and trace people over multiple cameras. In this paper, we present a system for real-time

  16. Real-time multiple objects tracking on Raspberry-Pi-based smart embedded camera

    Science.gov (United States)

    Dziri, Aziz; Duranton, Marc; Chapuis, Roland

    2016-07-01

    Multiple-object tracking constitutes a major step in several computer vision applications, such as surveillance, advanced driver assistance systems, and automatic traffic monitoring. Because of the number of cameras used to cover a large area, these applications are constrained by the cost of each node, the power consumption, the robustness of the tracking, the processing time, and the ease of deployment of the system. To meet these challenges, the use of low-power and low-cost embedded vision platforms to achieve reliable tracking becomes essential in networks of cameras. We propose a tracking pipeline that is designed for fixed smart cameras and which can handle occlusions between objects. We show that the proposed pipeline reaches real-time processing on a low-cost embedded smart camera composed of a Raspberry-Pi board and a RaspiCam camera. The tracking quality and the processing speed obtained with the proposed pipeline are evaluated on publicly available datasets and compared to the state-of-the-art methods.

  17. Multiple Charging Station Location-Routing Problem with Time Window of Electric Vehicle

    Directory of Open Access Journals (Sweden)

    Wang Li-ying

    2015-11-01

    Full Text Available This paper presents the electric vehicle (EV) multiple charging station location-routing problem with time windows, which optimizes the routing plan of capacitated EVs together with the strategy of charging stations. In particular, the strategy of charging stations includes both infrastructure-type selection and station location decisions. The problem accounts for two critical constraints in logistic practice: the vehicle loading capacity and the customer time windows. A hybrid heuristic that incorporates an adaptive variable neighborhood search (AVNS) with a tabu search algorithm for intensification was developed to address the problem. Specialized neighborhood structures and charging-station selection methods used in the shaking step of the AVNS are also proposed. In comparison with the commercial solver CPLEX, experimental results demonstrate that the algorithm can find nearly optimal solutions on small-scale test instances. The results on large-scale instances also show the effectiveness of the algorithm.

  18. Relationship between single and multiple perpetrator rape perpetration in South Africa: A comparison of risk factors in a population-based sample.

    Science.gov (United States)

    R, Jewkes; Y, Sikweyiya; K, Dunkle; R, Morrell

    2015-07-07

    Studies of rape of women seldom distinguish between men's participation in acts of single and multiple perpetrator rape. Multiple perpetrator rape (MPR) occurs globally with serious consequences for women. In South Africa it is a cultural practice with defined circumstances in which it commonly occurs. Prevention requires an understanding of whether it is a context specific intensification of single perpetrator rape, or a distinctly different practice of different men. This paper aims to address this question. We conducted a cross-sectional household study with a multi-stage, randomly selected sample of 1686 men aged 18-49 who completed a questionnaire administered using an Audio-enhanced Personal Digital Assistant. We attempted to fit an ordered logistic regression model for factors associated with rape perpetration. 27.6 % of men had raped and 8.8 % had perpetrated multiple perpetrator rape (MPR). Thus 31.9 % of men who had ever raped had done so with other perpetrators. An ordered regression model was fitted, showing that the same associated factors, albeit at higher prevalence, are associated with SPR and MPR. Multiple perpetrator rape appears as an intensified form of single perpetrator rape, rather than a different form of rape. Prevention approaches need to be mainstreamed among young men.

  19. SU-E-T-21: A Novel Sampling Algorithm to Reduce Intensity-Modulated Radiation Therapy (IMRT) Optimization Time

    International Nuclear Information System (INIS)

    Tiwari, P; Xie, Y; Chen, Y; Deasy, J

    2014-01-01

    Purpose: The IMRT optimization problem requires substantial computer time to find optimal dose distributions because of the large number of variables and constraints. Voxel sampling reduces the number of constraints and accelerates the optimization process, but usually deteriorates the quality of the dose distributions to the organs. We propose a novel sampling algorithm that accelerates the IMRT optimization process without significantly deteriorating the quality of the dose distribution. Methods: We included all boundary voxels, as well as a sampled fraction of interior voxels of organs in the optimization. We selected a fraction of interior voxels using a clustering algorithm that creates clusters of voxels with similar influence-matrix signatures. A few voxels are selected from each cluster based on the pre-set sampling rate. Results: We ran sampling and no-sampling IMRT plans for de-identified head and neck treatment plans. Testing with different sampling rates, we found that including 10% of inner voxels produced good dose distributions. For this optimal sampling rate, the algorithm accelerated IMRT optimization by a factor of 2–3 with a negligible loss of accuracy that was, on average, 0.3% for common dosimetric planning criteria. Conclusion: We demonstrated that a sampling scheme can be developed that reduces optimization time by more than a factor of 2 without significantly degrading the dose quality.
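
    A minimal sketch of the kind of influence-matrix clustering and per-cluster voxel sampling described above is given below (Python with scikit-learn k-means). The array names, cluster count and the 10% interior sampling rate are illustrative assumptions; the abstract does not specify which clustering algorithm was actually used.

        import numpy as np
        from sklearn.cluster import KMeans

        def sample_voxels(influence, boundary_mask, sampling_rate=0.10, n_clusters=50, seed=0):
            # influence:     (n_voxels, n_beamlets) influence-matrix rows ("signatures")
            # boundary_mask: True for boundary voxels, which are always kept
            rng = np.random.default_rng(seed)
            keep = list(np.flatnonzero(boundary_mask))
            interior = np.flatnonzero(~boundary_mask)
            labels = KMeans(n_clusters=n_clusters, n_init=10,
                            random_state=seed).fit_predict(influence[interior])
            for c in range(n_clusters):
                members = interior[labels == c]
                if members.size == 0:
                    continue
                n_pick = max(1, int(round(sampling_rate * members.size)))
                keep.extend(rng.choice(members, size=n_pick, replace=False))
            return np.unique(np.array(keep))

        # Toy usage: 2000 voxels, 100 beamlets, boundary voxels plus roughly 10% of the interior
        influence = np.random.rand(2000, 100)
        boundary = np.zeros(2000, dtype=bool)
        boundary[:200] = True
        print(sample_voxels(influence, boundary).size)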

  20. [Correlation between demyelinating lesions and executive function decline in a sample of Mexican patients with multiple sclerosis].

    Science.gov (United States)

    Aldrete Cortez, V R; Duriez-Sotelo, E; Carrillo-Mora, P; Pérez-Zuno, J A

    2013-09-01

    Multiple Sclerosis (MS) is characterised by several neurological symptoms including cognitive impairment, which has recently been the subject of considerable study. At present, evidence pointing to a correlation between lesion characteristics and specific cognitive impairment is not conclusive. To investigate the presence of a correlation between the characteristics of demyelinating lesions and performance of basic executive functions in a sample of MS patients. We included 21 adult patients with scores of 0 to 5 on the Kurtzke scale and no exacerbations of the disease in at least 3 months prior to the evaluation date. They completed the Stroop test and the Wisconsin Card Sorting Test (WCST). The location of the lesions was determined using magnetic resonance imaging (MRI) performed by a blinded expert in neuroimaging. Demyelinating lesions were more frequently located in the frontal and occipital lobes. The Stroop test showed that as cognitive demand increased on each of the sections in the test, reaction time and number of errors increased. On the WCST, 33.33% of patients registered as having moderate cognitive impairment. No correlation could be found between demyelinating lesion characteristics (location, size, and number) and patients' scores on the tests. Explanations of the causes of cognitive impairment in MS should examine a variety of biological, psychological, and social factors instead of focusing solely on demyelinating lesions. Copyright © 2012 Sociedad Española de Neurología. Published by Elsevier Espana. All rights reserved.

  1. Highly Reproducible Automated Proteomics Sample Preparation Workflow for Quantitative Mass Spectrometry.

    Science.gov (United States)

    Fu, Qin; Kowalski, Michael P; Mastali, Mitra; Parker, Sarah J; Sobhani, Kimia; van den Broek, Irene; Hunter, Christie L; Van Eyk, Jennifer E

    2018-01-05

    Sample preparation for protein quantification by mass spectrometry requires multiple processing steps including denaturation, reduction, alkylation, protease digestion, and peptide cleanup. Scaling these procedures for the analysis of numerous complex biological samples can be tedious and time-consuming, as there are many liquid transfer steps and timed reactions where technical variations can be introduced and propagated. We established an automated sample preparation workflow with a total processing time for 96 samples of 5 h, including a 2 h incubation with trypsin. Peptide cleanup is accomplished by online diversion during the LC/MS/MS analysis. In a selected reaction monitoring (SRM) assay targeting 6 plasma biomarkers and spiked β-galactosidase, mean intraday and interday coefficients of variation (CVs) were determined for 5 serum and 5 plasma samples over 5 days; samples repeated on 3 separate days had total CVs below 20%. Similar results were obtained when the workflow was transferred to a second site: 93% of peptides had CVs below 20%. An automated trypsin digestion workflow yields uniformly processed samples in less than 5 h. Reproducible quantification of peptides was observed across replicates, days, instruments, and laboratory sites, demonstrating the broad applicability of this approach.

  2. Complex, non-monotonic dose-response curves with multiple maxima: Do we (ever) sample densely enough?

    Science.gov (United States)

    Cvrčková, Fatima; Luštinec, Jiří; Žárský, Viktor

    2015-01-01

    We usually expect the dose-response curves of biological responses to quantifiable stimuli to be simple, either monotonic or exhibiting a single maximum or minimum. Deviations are often viewed as experimental noise. However, detailed measurements in plant primary tissue cultures (stem pith explants of kale and tobacco) exposed to varying doses of sucrose, cytokinins (BA or kinetin) or auxins (IAA or NAA) revealed that growth and several biochemical parameters exhibit multiple reproducible, statistically significant maxima over a wide range of exogenous substance concentrations. This results in complex, non-monotonic dose-response curves, reminiscent of previous reports of analogous observations in both metazoan and plant systems responding to diverse pharmacological treatments. These findings suggest the existence of a hitherto neglected class of biological phenomena resulting in dose-response curves exhibiting periodic patterns of maxima and minima, whose causes remain so far uncharacterized, partly due to insufficient sampling frequency used in many studies.

  3. Sparse-sampling with time-encoded (TICO) stimulated Raman scattering for fast image acquisition

    Science.gov (United States)

    Hakert, Hubertus; Eibl, Matthias; Karpf, Sebastian; Huber, Robert

    2017-07-01

    Modern biomedical imaging modalities aim to provide researchers a multimodal contrast for a deeper insight into a specimen under investigation. A very promising technique is stimulated Raman scattering (SRS) microscopy, which can unveil the chemical composition of a sample with very high specificity. Although the signal intensities are enhanced manifold to achieve faster image acquisition compared to standard Raman microscopy, there is a trade-off between specificity and acquisition speed. Commonly used SRS concepts either probe only very few Raman transitions, as the tuning of the applied laser sources is complicated, or record whole spectra with a spectrometer-based setup. While the first approach is fast, it reduces the specificity, and the spectrometer approach records whole spectra (including energy differences where no Raman information is present), which limits the acquisition speed. Therefore, we present a new approach based on the TICO-Raman concept, which we call sparse-sampling. The TICO-sparse-sampling setup is fully electronically controllable and allows probing of only the characteristic peaks of a Raman spectrum instead of always acquiring a whole spectrum. By reducing the spectral points to the relevant peaks, the acquisition time can be greatly reduced compared to a uniformly, equidistantly sampled Raman spectrum, while the specificity and the signal-to-noise ratio (SNR) are maintained. Furthermore, all laser sources are completely fiber based. The synchronized detection enables a full resolution of the Raman signal, whereas the analogue and digital balancing allows shot-noise-limited detection. First imaging results with polystyrene (PS) and polymethylmethacrylate (PMMA) beads confirm the advantages of TICO sparse-sampling. We achieved a pixel dwell time as low as 35 μs for an image differentiating both species. The mechanical properties of the applied voice coil stage for scanning the sample currently limit even faster acquisition.

  4. A 16-channel real-time digital processor for pulse-shape discrimination in multiplicity assay

    International Nuclear Information System (INIS)

    Joyce, Malcolm J.; Aspinall, M.D.; Cave, F.D.; Lavietes, A.

    2013-06-01

    In recent years, real-time neutron/γ-ray pulse-shape discrimination has become feasible for use with scintillator-based detectors that respond extremely quickly, on the order of 25 ns in terms of pulse width, and their application to a variety of nuclear material assays has been reported. For the in-situ analysis of nuclear materials, measurements are often based on the multiplicity assessment of spontaneous fission events. An example of this is the 240Pu-effective (240Pu_eff) assessment stemming from long-established techniques developed for 3He-based neutron coincidence counters when 3He was abundant and cheap. However, such measurements when using scintillator detectors can be plagued by low detection efficiencies and low orders of coincidence (often limited to triples) if the number of detectors in use is similarly limited to 3-4 detectors. Conversely, an array of >10 detector modules arranged to optimize efficiency and multiplicity sensitivity shifts the emphasis in terms of performance requirement to the real-time digital analyzer and, critically, to the scope remaining in the temporal processing window of these systems. In this paper we report on the design, development and commissioning of a bespoke, 16-channel real-time pulse-shape discrimination analyzer specified for the materials assay challenge summarized above. The analyzer incorporates 16 dedicated and independent high-voltage supplies along with 16 independent digital processing channels offering pulse-shape discrimination at a rate of 3 x 10^6 events per second. These functions are configured from a dedicated graphical user interface, and all settings can be adjusted on-the-fly with the analyzer effectively configured one-time-only (where desired) for subsequent plug-and-play connection, for example to a fuel bundle organic scintillation detector array. (authors)
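
    The abstract does not detail the discrimination algorithm implemented in the analyzer; the following is only a generic charge-comparison pulse-shape discrimination sketch in Python, with the gate lengths and the 0.2 tail-fraction threshold chosen purely for illustration.

        import numpy as np

        def psd_charge_comparison(pulse, baseline_n=8, short_gate=7, long_gate=25):
            # Generic charge-comparison PSD: the fraction of charge in the pulse tail
            # (long gate minus short gate) is larger for neutron recoils than for
            # gamma events in organic scintillators.
            pulse = np.asarray(pulse, dtype=float)
            pulse = pulse - pulse[:baseline_n].mean()        # baseline restoration
            peak = int(np.argmax(pulse))
            total = pulse[peak:peak + long_gate].sum()
            prompt = pulse[peak:peak + short_gate].sum()
            tail_fraction = (total - prompt) / total if total > 0 else 0.0
            return ("neutron" if tail_fraction > 0.2 else "gamma"), tail_fraction

        # Example with a synthetic fast-decaying (gamma-like) pulse
        pulse = np.concatenate([np.zeros(8), 100.0 * np.exp(-np.arange(56) / 3.0)])
        print(psd_charge_comparison(pulse))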

  5. Intensified Sampling in Response to a Salmonella Heidelberg Outbreak Associated with Multiple Establishments Within a Single Poultry Corporation.

    Science.gov (United States)

    Green, Alice; Defibaugh-Chavez, Stephanie; Douris, Aphrodite; Vetter, Danah; Atkinson, Richard; Kissler, Bonnie; Khroustalev, Allison; Robertson, Kis; Sharma, Yudhbir; Becker, Karen; Dessai, Uday; Antoine, Nisha; Allen, Latasha; Holt, Kristin; Gieraltowski, Laura; Wise, Matthew; Schwensohn, Colin

    2018-03-01

    On June 28, 2013, the Food Safety and Inspection Service (FSIS) was notified by the Centers for Disease Control and Prevention (CDC) of an investigation of a multistate cluster of illnesses of Salmonella enterica serovar Heidelberg. Since case-patients in the cluster reported consumption of a variety of chicken products, FSIS used a simple likelihood-based approach using traceback information to focus on intensified sampling efforts. This article describes the multiphased product sampling approach taken by FSIS when epidemiologic evidence implicated chicken products from multiple establishments operating under one corporation. The objectives of sampling were to (1) assess process control of chicken slaughter and further processing and (2) determine whether outbreak strains were present in products from these implicated establishments. As part of the sample collection process, data collected by FSIS personnel to characterize product included category (whole chicken and type of chicken parts), brand, organic or conventional product, injection with salt solutions or flavorings, and whether product was skinless or skin-on. From the period September 9, 2013, through October 31, 2014, 3164 samples were taken as part of this effort. Salmonella percent positive declined from 19.7% to 5.3% during this timeframe as a result of regulatory and company efforts. The results of intensified sampling for this outbreak investigation informed an FSIS regulatory response and corrective actions taken by the implicated establishments. The company noted that a multihurdle approach to reduce Salmonella in products was taken, including on-farm efforts such as environmental testing, depopulation of affected flocks, disinfection of affected houses, vaccination, and use of various interventions within the establishments over the course of several months.

  6. Modification of a two blood sample method used for measurement of GFR with 99mTc-DTPA.

    Science.gov (United States)

    Surma, Marian J; Płachcińska, Anna; Kuśmierek, Jacek

    2018-01-01

    Measurements of GFR may be performed with a slope/intercept method (S/I), using only two blood samples taken at strictly defined time points. The aim of the study was to modify this method in order to extend the time intervals suitable for blood sampling. Modification was based on a variation of a Russel et al. model parameter, selection of time intervals suitable for blood sampling and assessment of uncertainty of calculated results. Archived values of GFR measurements of 169 patients with different renal function, from 5.5 to 179 mL/min, calculated with a multiple blood sample method were used. Concentrations of a radiopharmaceutical in consecutive minutes, from the 60th to the 190th minute after injection, were calculated theoretically, using archived parameters of biexponential functions describing a decrease in 99mTc-DTPA concentration in blood plasma with time. These values, together with injected activities, were treated as measurements and used for S/I clearance calculations. Next, values of S/I clearance were compared with the multiple blood sample method in order to calculate suitable values of the exponent in the Russel et al. model, for every combination of two blood sampling time points. A model was considered accurately fitted to measured values when SEE ≤ 3.6 mL/min. Assessments of uncertainty of obtained results were based on the law of error superposition, taking into account mean square prediction error and also errors introduced by pipetting, time measurement and stochastic radioactive decay. The accepted criteria resulted in extension of time intervals suitable for blood sampling to: between 60 and 90 minutes after injection for the first sample and between 150 and 180 minutes for the second sample. Uncertainty of results was assessed as between 4 mL/min for GFR = 5-10 mL/min and 8 mL/min for GFR = 180 mL/min. Time intervals accepted for blood sampling fully satisfy nuclear medicine staff and ensure proper determination of GFR. Uncertainty of results is entirely
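
    For illustration, an uncorrected slope/intercept (two-sample) clearance can be computed as below, assuming a mono-exponential plasma disappearance curve. The exponent-based correction tuned in the study, body-surface-area normalisation and counting corrections are deliberately omitted, and all numbers in the example are hypothetical.

        import numpy as np

        def slope_intercept_clearance(dose, t1, c1, t2, c2):
            # Assumes C(t) = C0 * exp(-k t) through the two samples
            # (t in minutes, c in counts/mL, dose in the same counting units).
            k = np.log(c1 / c2) / (t2 - t1)      # elimination rate constant (1/min)
            c0 = c1 * np.exp(k * t1)             # back-extrapolated intercept
            v = dose / c0                        # apparent distribution volume (mL)
            return k * v                         # slope/intercept clearance (mL/min)

        # Hypothetical example: samples at 75 and 165 minutes after injection
        print(round(slope_intercept_clearance(dose=1.0e6, t1=75, c1=95.0, t2=165, c2=55.0), 1))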

  7. Time-varying BRDFs.

    Science.gov (United States)

    Sun, Bo; Sunkavalli, Kalyan; Ramamoorthi, Ravi; Belhumeur, Peter N; Nayar, Shree K

    2007-01-01

    The properties of virtually all real-world materials change with time, causing their bidirectional reflectance distribution functions (BRDFs) to be time varying. However, none of the existing BRDF models and databases take time variation into consideration; they represent the appearance of a material at a single time instance. In this paper, we address the acquisition, analysis, modeling, and rendering of a wide range of time-varying BRDFs (TVBRDFs). We have developed an acquisition system that is capable of sampling a material's BRDF at multiple time instances, with each time sample acquired within 36 sec. We have used this acquisition system to measure the BRDFs of a wide range of time-varying phenomena, which include the drying of various types of paints (watercolor, spray, and oil), the drying of wet rough surfaces (cement, plaster, and fabrics), the accumulation of dusts (household and joint compound) on surfaces, and the melting of materials (chocolate). Analytic BRDF functions are fit to these measurements and the model parameters' variations with time are analyzed. Each category exhibits interesting and sometimes nonintuitive parameter trends. These parameter trends are then used to develop analytic TVBRDF models. The analytic TVBRDF models enable us to apply effects such as paint drying and dust accumulation to arbitrary surfaces and novel materials.

  8. Multiple-parameter bifurcation analysis in a Kuramoto model with time delay and distributed shear

    Science.gov (United States)

    Niu, Ben; Zhang, Jiaming; Wei, Junjie

    2018-05-01

    In this paper, the effects of time delay and distributed shear are considered in the Kuramoto model. On the Ott-Antonsen manifold, through analyzing the associated characteristic equation of the reduced functional differential equation, the stability boundary of the incoherent state is derived in multiple-parameter space. Moreover, very rich dynamical behavior such as stability switches inducing synchronization switches can occur in this equation. With the loss of stability, Hopf bifurcating coherent states arise, and the criticality of Hopf bifurcations is determined by applying the normal form theory and the center manifold theorem. On the one hand, theoretical analysis indicates that the width of the shear distribution and time delay can both eliminate synchronization and then lead the Kuramoto model to incoherence. On the other hand, time delay can induce several coexisting coherent states. Finally, some numerical simulations are given to support the obtained results, where several bifurcation diagrams are drawn, and the effect of time delay and shear is discussed.
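
    A minimal Euler simulation of the time-delayed Kuramoto model (without the distributed-shear term analysed above) is sketched below; the parameter values and the Lorentzian frequency distribution are illustrative assumptions rather than the paper's settings.

        import numpy as np

        def delayed_kuramoto(n=200, K=1.5, tau=0.5, dt=0.01, t_end=50.0, seed=0):
            # dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j(t - tau) - theta_i(t))
            rng = np.random.default_rng(seed)
            omega = 0.1 * rng.standard_cauchy(n)                  # heterogeneous frequencies
            steps, delay = int(t_end / dt), int(round(tau / dt))
            theta = np.empty((steps + delay + 1, n))
            theta[:delay + 1] = rng.uniform(0.0, 2 * np.pi, n)    # constant history on [-tau, 0]
            r = np.empty(steps)
            for s in range(steps):
                now, past = theta[s + delay], theta[s]            # theta(t) and theta(t - tau)
                z = np.exp(1j * past).mean()                      # mean field of delayed phases
                theta[s + delay + 1] = now + dt * (omega + K * np.imag(z * np.exp(-1j * now)))
                r[s] = np.abs(np.exp(1j * theta[s + delay + 1]).mean())
            return r

        print(delayed_kuramoto()[-1])   # asymptotic degree of synchronization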

  9. Time dependent response of low velocity impact induced composite conical shells under multiple delamination

    Science.gov (United States)

    Dey, Sudip; Karmakar, Amit

    2014-02-01

    This paper presents the time dependent response of multiple delaminated angle-ply composite pretwisted conical shells subjected to low velocity normal impact. The finite element formulation is based on Mindlin's theory incorporating rotary inertia and effects of transverse shear deformation. An eight-noded isoparametric plate bending element is employed to satisfy the compatibility of deformation and equilibrium of resultant forces and moments at the delamination crack front. A multipoint constraint algorithm is incorporated which leads to asymmetric stiffness matrices. The modified Hertzian contact law which accounts for permanent indentation is utilized to compute the contact force, and the time dependent equations are solved by Newmark's time integration algorithm. Parametric studies are conducted with respect to triggering parameters like laminate configuration, location of delamination, angle of twist, velocity of impactor, and impactor's displacement for centrally impacted shells.
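
    The abstract names Newmark's time integration for the contact-force response; a generic average-acceleration Newmark-beta sketch for M x'' + C x' + K x = F(t) is shown below. The matrices and the half-sine impact load in the example are illustrative and unrelated to the paper's shell finite-element model.

        import numpy as np

        def newmark_beta(M, C, K, F, x0, v0, dt, beta=0.25, gamma=0.5):
            # Implicit Newmark-beta integration; beta=1/4, gamma=1/2 is the
            # unconditionally stable average-acceleration scheme.
            n_steps, n = F.shape
            x, v = np.zeros((n_steps, n)), np.zeros((n_steps, n))
            x[0], v[0] = x0, v0
            a = np.linalg.solve(M, F[0] - C @ v0 - K @ x0)          # initial acceleration
            K_eff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
            for i in range(n_steps - 1):
                rhs = (F[i + 1]
                       + M @ (x[i] / (beta * dt**2) + v[i] / (beta * dt) + (1 / (2 * beta) - 1) * a)
                       + C @ (gamma / (beta * dt) * x[i] + (gamma / beta - 1) * v[i]
                              + dt * (gamma / (2 * beta) - 1) * a))
                x[i + 1] = np.linalg.solve(K_eff, rhs)
                a_new = (x[i + 1] - x[i]) / (beta * dt**2) - v[i] / (beta * dt) - (1 / (2 * beta) - 1) * a
                v[i + 1] = v[i] + dt * ((1 - gamma) * a + gamma * a_new)
                a = a_new
            return x, v

        # Hypothetical 2-DOF example: short half-sine impact on the first DOF
        M = np.diag([1.0, 1.0]); C = 0.02 * np.eye(2)
        K = np.array([[200.0, -100.0], [-100.0, 100.0]])
        dt, n_steps = 1e-3, 2000
        F = np.zeros((n_steps, 2))
        F[:50, 0] = 10.0 * np.sin(np.pi * np.arange(50) / 50)
        x, v = newmark_beta(M, C, K, F, np.zeros(2), np.zeros(2), dt)
        print(np.abs(x).max())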

  10. Clinical time series prediction: Toward a hierarchical dynamical system framework.

    Science.gov (United States)

    Liu, Zitao; Hauskrecht, Milos

    2015-09-01

    Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding of the patient condition, the dynamics of a disease, effect of various patient management interventions and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Our hierarchical dynamical system framework for modeling clinical time series combines advantages of the two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. We tested our framework by first learning the time series model from data for the patients in the training set, and then using it to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when it was compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Clinical time series prediction: towards a hierarchical dynamical system framework

    Science.gov (United States)

    Liu, Zitao; Hauskrecht, Milos

    2014-01-01

    Objective Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding of the patient condition, the dynamics of a disease, effect of various patient management interventions and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Materials and methods Our hierarchical dynamical system framework for modeling clinical time series combines advantages of the two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. Results We tested our framework by first learning the time series model from data for the patients in the training set, and then applying the model in order to predict future time series values on the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when it was compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. Conclusion A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance.

  12. Sampling inspection for the evaluation of time-dependent reliability of deteriorating systems under imperfect defect detection

    International Nuclear Information System (INIS)

    Kuniewski, Sebastian P.; Weide, Johannes A.M. van der; Noortwijk, Jan M. van

    2009-01-01

    The paper presents a sampling-inspection strategy for the evaluation of time-dependent reliability of deteriorating systems, where the deterioration is assumed to initiate at random times and at random locations. After initiation, defects are weakening the system's resistance. The system becomes unacceptable when at least one defect reaches a critical depth. The defects are assumed to initiate at random times modeled as event times of a non-homogeneous Poisson process (NHPP) and to develop according to a non-decreasing time-dependent gamma process. The intensity rate of the NHPP is assumed to be a combination of a known time-dependent shape function and an unknown proportionality constant. When sampling inspection (i.e. inspection of a selected subregion of the system) results in a number of defect initiations, Bayes' theorem can be used to update prior beliefs about the proportionality constant of the NHPP intensity rate to the posterior distribution. On the basis of a time- and space-dependent Poisson process for the defect initiation, an adaptive Bayesian model for sampling inspection is developed to determine the predictive probability distribution of the time to failure. A potential application is, for instance, the inspection of a large vessel or pipeline suffering pitting/localized corrosion in the oil industry. The possibility of imperfect defect detection is also incorporated in the model.
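
    The core Bayesian step described above, updating the unknown proportionality constant of the NHPP intensity after a sampling inspection, reduces under the stated assumptions to a conjugate Gamma-Poisson update. The sketch below illustrates only this step; the gamma deterioration growth and imperfect detection parts of the model are not reproduced, and all numbers are hypothetical.

        import numpy as np

        def update_intensity_constant(a0, b0, k_found, shape_integral, area_fraction):
            # Gamma(a0, b0) prior on the constant c of an initiation intensity c * w(t).
            # The expected count in the inspected subregion is
            #     c * area_fraction * integral of w(t) over the observation window,
            # so the Gamma prior is conjugate and the posterior is again Gamma.
            a_post = a0 + k_found
            b_post = b0 + area_fraction * shape_integral
            return a_post, b_post

        # Hypothetical numbers: 3 initiations found when 20% of a vessel wall was inspected
        a, b = update_intensity_constant(a0=1.0, b0=2.0, k_found=3, shape_integral=5.0, area_fraction=0.2)
        rng = np.random.default_rng(0)
        c_samples = rng.gamma(a, 1.0 / b, size=10_000)          # posterior draws of c
        print(f"posterior mean of c: {c_samples.mean():.2f}")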

  13. Direct sampling during multiple sediment density flows reveals dynamic sediment transport and depositional environment in Monterey submarine canyon

    Science.gov (United States)

    Maier, K. L.; Gales, J. A.; Paull, C. K.; Gwiazda, R.; Rosenberger, K. J.; McGann, M.; Lundsten, E. M.; Anderson, K.; Talling, P.; Xu, J.; Parsons, D. R.; Barry, J.; Simmons, S.; Clare, M. A.; Carvajal, C.; Wolfson-Schwehr, M.; Sumner, E.; Cartigny, M.

    2017-12-01

    Sediment density flows were directly sampled with a coupled sediment trap-ADCP-instrument mooring array to evaluate the character and frequency of turbidity current events through Monterey Canyon, offshore California. This novel experiment aimed to provide links between globally significant sediment density flow processes and their resulting deposits. Eight to ten Anderson sediment traps were repeatedly deployed at 10 to 300 meters above the seafloor on six moorings anchored at 290 to 1850 meters water depth in the Monterey Canyon axial channel during 6-month deployments (October 2015 - April 2017). Anderson sediment traps include a funnel and intervalometer (discs released at set time intervals) above a meter-long tube, which preserves fine-scale stratigraphy and chronology. Photographs, multi-sensor logs, CT scans, and grain size analyses reveal layers from multiple sediment density flow events that carried sediment ranging from fine sand to granules. More sediment accumulation from sediment density flows, and from between flows, occurred in the upper canyon (approximately 300-800 m water depth) compared to the lower canyon (approximately 1300-1850 m water depth). Sediment accumulated in the traps during sediment density flows is sandy and becomes finer down-canyon. In the lower canyon, where sediments directly sampled from density flows are clearly distinguished within the trap tubes, sands have sharp basal contacts, normal grading, and muddy tops that exhibit late-stage pulses. In at least two of the sediment density flows, the simultaneous low velocity and high backscatter measured by the ADCPs suggest that the trap only captured the collapsing end of a sediment density flow event. In the upper canyon, accumulation between sediment density flow events is twice as fast compared to the lower canyon; it is characterized by sub-cm-scale layers in muddy sediment that appear to have accumulated with daily to sub-daily frequency, likely related to known internal tidal dynamics also measured

  14. Determination of Acyclovir in Human Plasma Samples by HPLC Method with UV Detection: Application to Single-Dose Pharmacokinetic Study

    Directory of Open Access Journals (Sweden)

    Dragica Zendelovska

    2015-03-01

    CONCLUSION: Good precision, accuracy, simplicity, sensitivity and shorter time of analysis of the method makes it particularly useful for processing of multiple samples in a limited period of time for pharmacokinetic study of acyclovir.

  15. Tracking multiple objects is limited only by object spacing, not by speed, time, or capacity.

    Science.gov (United States)

    Franconeri, S L; Jonathan, S V; Scimeca, J M

    2010-07-01

    In dealing with a dynamic world, people have the ability to maintain selective attention on a subset of moving objects in the environment. Performance in such multiple-object tracking is limited by three primary factors-the number of objects that one can track, the speed at which one can track them, and how close together they can be. We argue that this last limit, of object spacing, is the root cause of all performance constraints in multiple-object tracking. In two experiments, we found that as long as the distribution of object spacing is held constant, tracking performance is unaffected by large changes in object speed and tracking time. These results suggest that barring object-spacing constraints, people could reliably track an unlimited number of objects as fast as they could track a single object.

  16. A simple method to adapt time sampling of the analog signal

    International Nuclear Information System (INIS)

    Kalinin, Yu.G.; Martyanov, I.S.; Sadykov, Kh.; Zastrozhnova, N.N.

    2004-01-01

    In this paper we briefly describe the time sampling method, which is adapted to the speed of the signal change. Principally, this method is based on a simple idea: the combination of discrete integration with differentiation of the analog signal. This method can be used in nuclear electronics for research into the characteristics of detectors and the shape of the pulse signal, and into the pulse and transient characteristics of inertial signal-processing systems, etc.
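
    The abstract gives only the idea, not the circuit; a software analogue of change-adaptive sampling, in which a sample is kept whenever the accumulated signal change exceeds a quantum, might look as follows. The threshold and the test signal are assumptions for illustration.

        import numpy as np

        def adaptive_resample(x, delta):
            # Keep a sample whenever the integrated absolute change since the last
            # kept sample reaches `delta`, so sampling density follows the rate of change.
            keep = [0]
            accumulated = 0.0
            for i in range(1, len(x)):
                accumulated += abs(x[i] - x[i - 1])
                if accumulated >= delta:
                    keep.append(i)
                    accumulated = 0.0
            return np.asarray(keep)

        # Example: a pulse-like signal is sampled densely on its edges, sparsely elsewhere
        t = np.linspace(0.0, 1.0, 2001)
        x = np.exp(-((t - 0.5) / 0.05) ** 2)
        idx = adaptive_resample(x, delta=0.02)
        print(f"kept {idx.size} of {t.size} samples")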

  17. Replica exchange enveloping distribution sampling (RE-EDS): A robust method to estimate multiple free-energy differences from a single simulation.

    Science.gov (United States)

    Sidler, Dominik; Schwaninger, Arthur; Riniker, Sereina

    2016-10-21

    In molecular dynamics (MD) simulations, free-energy differences are often calculated using free energy perturbation or thermodynamic integration (TI) methods. However, both techniques are only suited to calculate free-energy differences between two end states. Enveloping distribution sampling (EDS) presents an attractive alternative that allows to calculate multiple free-energy differences in a single simulation. In EDS, a reference state is simulated which "envelopes" the end states. The challenge of this methodology is the determination of optimal reference-state parameters to ensure equal sampling of all end states. Currently, the automatic determination of the reference-state parameters for multiple end states is an unsolved issue that limits the application of the methodology. To resolve this, we have generalised the replica-exchange EDS (RE-EDS) approach, introduced by Lee et al. [J. Chem. Theory Comput. 10, 2738 (2014)] for constant-pH MD simulations. By exchanging configurations between replicas with different reference-state parameters, the complexity of the parameter-choice problem can be substantially reduced. A new robust scheme to estimate the reference-state parameters from a short initial RE-EDS simulation with default parameters was developed, which allowed the calculation of 36 free-energy differences between nine small-molecule inhibitors of phenylethanolamine N-methyltransferase from a single simulation. The resulting free-energy differences were in excellent agreement with values obtained previously by TI and two-state EDS simulations.

  18. The influence of multiple goals on driving behavior: the case of safety, time saving, and fuel saving.

    Science.gov (United States)

    Dogan, Ebru; Steg, Linda; Delhomme, Patricia

    2011-09-01

    Due to the innate complexity of the task drivers have to manage multiple goals while driving and the importance of certain goals may vary over time leading to priority being given to different goals depending on the circumstances. This study aimed to investigate drivers' behavioral regulation while managing multiple goals during driving. To do so participants drove on urban and rural roads in a driving simulator while trying to manage fuel saving and time saving goals, besides the safety goals that are always present during driving. A between-subjects design was used with one group of drivers managing two goals (safety and fuel saving) and another group managing three goals (safety, fuel saving, and time saving) while driving. Participants were provided continuous feedback on the fuel saving goal via a meter on the dashboard. The results indicate that even when a fuel saving or time saving goal is salient, safety goals are still given highest priority when interactions with other road users take place and when interacting with a traffic light. Additionally, performance on the fuel saving goal diminished for the group that had to manage fuel saving and time saving together. The theoretical implications for a goal hierarchy in driving tasks and practical implications for eco-driving are discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Real-time tracking and fast retrieval of persons in multiple surveillance cameras of a shopping mall

    Science.gov (United States)

    Bouma, Henri; Baan, Jan; Landsmeer, Sander; Kruszynski, Chris; van Antwerpen, Gert; Dijk, Judith

    2013-05-01

    The capability to track individuals in CCTV cameras is important for e.g. surveillance applications at large areas such as train stations, airports and shopping centers. However, it is laborious to track and trace people over multiple cameras. In this paper, we present a system for real-time tracking and fast interactive retrieval of persons in video streams from multiple static surveillance cameras. This system is demonstrated in a shopping mall, where the cameras are positioned without overlapping fields-of-view and have different lighting conditions. The results show that the system allows an operator to find the origin or destination of a person more efficiently. Misses are reduced by 37%, which is a significant improvement.

  20. Modeling multiple time series annotations as noisy distortions of the ground truth: An Expectation-Maximization approach.

    Science.gov (United States)

    Gupta, Rahul; Audhkhasi, Kartik; Jacokes, Zach; Rozga, Agata; Narayanan, Shrikanth

    2018-01-01

    Studies of time-continuous human behavioral phenomena often rely on ratings from multiple annotators. Since the ground truth of the target construct is often latent, the standard practice is to use ad-hoc metrics (such as averaging annotator ratings). Despite being easy to compute, such metrics may not provide accurate representations of the underlying construct. In this paper, we present a novel method for modeling multiple time series annotations over a continuous variable that computes the ground truth by modeling annotator-specific distortions. We condition the ground truth on a set of features extracted from the data and further assume that the annotators provide their ratings as modifications of the ground truth, with each annotator having specific distortion tendencies. We train the model using an Expectation-Maximization based algorithm and evaluate it on a study involving natural interaction between a child and a psychologist, to predict confidence ratings of the children's smiles. We compare and analyze the model against two baselines where: (i) the ground truth is considered to be the framewise mean of ratings from the various annotators, and (ii) each annotator is assumed to bear a distinct time delay in annotation and their annotations are aligned before computing the framewise mean.
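
    A simplified Expectation-Maximization sketch in this spirit is shown below. It assumes a linear scale-and-offset distortion per annotator and a fixed Gaussian prior on the latent truth; the feature-conditioned ground truth and the annotator time delays of the full model are not modelled here, and all names and values are illustrative.

        import numpy as np

        def em_annotator_fusion(R, n_iter=50, prior_var=1.0):
            # R: (K, T) ratings from K annotators over T frames.
            # Distortion model: r_kt = a_k * y_t + b_k + noise,  y_t ~ N(0, prior_var).
            K, T = R.shape
            a, b, var = np.ones(K), np.zeros(K), np.ones(K)
            for _ in range(n_iter):
                # E-step: Gaussian posterior of y_t given the current distortion parameters
                prec = 1.0 / prior_var + np.sum(a**2 / var)
                mean = np.sum((a / var)[:, None] * (R - b[:, None]), axis=0) / prec
                second = mean**2 + 1.0 / prec                      # E[y_t^2]
                # M-step: per-annotator linear regression onto the latent truth
                S1, S2 = mean.sum(), second.sum()
                A = np.array([[S2, S1], [S1, float(T)]])
                for k in range(K):
                    a[k], b[k] = np.linalg.solve(A, [R[k] @ mean, R[k].sum()])
                    resid = (R[k]**2 - 2 * R[k] * (a[k] * mean + b[k])
                             + a[k]**2 * second + 2 * a[k] * b[k] * mean + b[k]**2)
                    var[k] = max(resid.mean(), 1e-8)
            prec = 1.0 / prior_var + np.sum(a**2 / var)
            mean = np.sum((a / var)[:, None] * (R - b[:, None]), axis=0) / prec
            return mean, a, b, var

        # Toy usage: a smooth truth observed by three annotators with different scale/offset
        rng = np.random.default_rng(1)
        truth = np.sin(np.linspace(0, 6, 300))
        R = np.stack([1.5 * truth + 0.3, 0.7 * truth - 0.2, truth]) + 0.1 * rng.standard_normal((3, 300))
        est, a, b, var = em_annotator_fusion(R)
        print(np.corrcoef(est, truth)[0, 1])   # should be close to 1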

  1. On the Discrete-Time GeoX/G/1 Queues under N-Policy with Single and Multiple Vacations

    Directory of Open Access Journals (Sweden)

    Sung J. Kim

    2013-01-01

    Full Text Available We consider the discrete-time GeoX/G/1 queue under N-policy with single and multiple vacations. In this queueing system, the server takes multiple vacations and a single vacation whenever the system becomes empty and begins to serve customers only if the queue length is at least a predetermined threshold value N. Using the well-known property of stochastic decomposition, we derive the stationary queue-length distributions for both vacation models in a simple and unified manner. In addition, we derive their busy as well as idle-period distributions. Some classical vacation models are considered as special cases.

  2. Veto-Consensus Multiple Kernel Learning

    NARCIS (Netherlands)

    Zhou, Y.; Hu, N.; Spanos, C.J.

    2016-01-01

    We propose Veto-Consensus Multiple Kernel Learning (VCMKL), a novel way of combining multiple kernels such that one class of samples is described by the logical intersection (consensus) of base kernelized decision rules, whereas the other classes by the union (veto) of their complements. The

  3. More Poop, More Precision: Improving Epidemiologic Surveillance of Soil-Transmitted Helminths with Multiple Fecal Sampling using the Kato-Katz Technique.

    Science.gov (United States)

    Liu, Chengfang; Lu, Louise; Zhang, Linxiu; Bai, Yu; Medina, Alexis; Rozelle, Scott; Smith, Darvin Scott; Zhou, Changhai; Zang, Wei

    2017-09-01

    Soil-transmitted helminths, or parasitic intestinal worms, are among the most prevalent and geographically widespread parasitic infections in the world. Accurate diagnosis and quantification of helminth infection are critical for informing and assessing deworming interventions. The Kato-Katz thick smear technique, the most widely used laboratory method to quantitatively assess infection prevalence and infection intensity of helminths, has often been compared with other methods. Only a few small-scale studies, however, have considered ways to improve its diagnostic sensitivity. This study, conducted among 4,985 school-age children in an area of rural China with moderate prevalence of helminth infection, examines the effect on diagnostic sensitivity of the Kato-Katz technique when two fecal samples collected over consecutive days are examined and compared with a single sample. A secondary aim was to consider cost-effectiveness by calculating an estimate of the marginal costs of obtaining an additional fecal sample. Our findings show that analysis of an additional fecal sample led to increases of 23%, 26%, and 100% for Ascaris lumbricoides, Trichuris trichiura , and hookworm prevalence, respectively. The cost of collecting a second fecal sample for our study population was approximately USD4.60 per fecal sample. Overall, the findings suggest that investing 31% more capital in fecal sample collection prevents an underestimation of prevalence by about 21%, and hence improves the diagnostic sensitivity of the Kato-Katz method. Especially in areas with light-intensity infections of soil-transmitted helminths and limited public health resources, more accurate epidemiological surveillance using multiple fecal samples will critically inform decisions regarding infection control and prevention.

  4. Rigid Body Sampling and Individual Time Stepping for Rigid-Fluid Coupling of Fluid Simulation

    Directory of Open Access Journals (Sweden)

    Xiaokun Wang

    2017-01-01

    Full Text Available In this paper, we propose an efficient and simple rigid-fluid coupling scheme with scientific programming algorithms for particle-based fluid simulation and three-dimensional visualization. Our approach samples the surface of rigid bodies with boundary particles that interact with fluids. It contains two procedures, that is, surface sampling and sampling relaxation, which ensure a uniform distribution of particles with fewer iterations. Furthermore, we present a rigid-fluid coupling scheme that integrates individual time stepping into the coupling, which gains an obvious speedup compared to previous methods. The experimental results demonstrate the effectiveness of our approach.

  5. Effects of brief time delays on matching-to-sample abilities in capuchin monkeys (Sapajus spp.).

    Science.gov (United States)

    Truppa, Valentina; De Simone, Diego Antonio; Piano Mortari, Eva; De Lillo, Carlo

    2014-09-01

    Traditionally, studies of delayed matching-to-sample (DMTS) tasks in nonhuman species have focused on the assessment of the limits of the retrieval of information stored in short- and long-term memory systems. However, it is still unclear if visual recognition in these tasks is affected by very brief delay intervals, which are typically used to study rapidly decaying types of visual memory. This study aimed at evaluating if tufted capuchin monkeys' ability to recognise visual stimuli in a DMTS task is affected by (i) the disappearance of the sample stimulus and (ii) the introduction of delay intervals (0.5, 1.0, 2.0 and 3.0s) between the disappearance of the sample and the presentation of the comparison stimuli. The results demonstrated that the simple disappearance of the sample and the introduction of a delay of 0.5s did not affect capuchins' performance either in terms of accuracy or response time. A delay interval of 1.0s produced a significant increase in response time but still did not affect recognition accuracy. By contrast, delays of 2.0 and 3.0s determined a significant increase in response time and a reduction in recognition accuracy. These findings indicate the existence in capuchin monkeys of processes enabling a very accurate retention of stimulus features within time frames comparable to those reported for humans' sensory memory (0.5-1.0s). The extent to which such processes can be considered analogous to the sensory memory processes observed in human visual cognition is discussed. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Mitigation of Beat Noise in Time Wavelength Optical Code-Division Multiple-Access Systems

    Science.gov (United States)

    Bazan, Taher M.; Harle, David; Andonovic, Ivan

    2006-11-01

    This paper presents an analysis of two methods for enhancing the performance of two-dimensional time-wavelength optical code-division multiple-access systems by mitigating the effects of beat noise. The first methodology makes use of an optical hard limiter (OHL) in the receiver prior to the optical correlator; a general formula for the error probability as a function of crosstalk level for systems adopting OHLs is given, and the implications of the OHL's nonideal transfer characteristics are then examined. The second approach adopts pulse-position modulation, and system performance is estimated and compared to that associated with on-off keying.

  7. Improved detection of multiple environmental antibiotics through an optimized sample extraction strategy in liquid chromatography-mass spectrometry analysis.

    Science.gov (United States)

    Yi, Xinzhu; Bayen, Stéphane; Kelly, Barry C; Li, Xu; Zhou, Zhi

    2015-12-01

    A solid-phase extraction/liquid chromatography/electrospray ionization/multi-stage mass spectrometry (SPE-LC-ESI-MS/MS) method was optimized in this study for sensitive and simultaneous detection of multiple antibiotics in urban surface waters and soils. Among the seven classes of tested antibiotics, extraction efficiencies of macrolides, lincosamide, chloramphenicol, and polyether antibiotics were significantly improved under optimized sample extraction pH. Instead of only using acidic extraction as in many existing studies, the results indicated that antibiotics with low pKa values (<7) were extracted more efficiently under acidic conditions, whereas antibiotics with high pKa values (>7) were extracted more efficiently under neutral conditions. The effects of pH were more obvious on polar compounds than those on non-polar compounds. Optimization of extraction pH resulted in significantly improved sample recovery and better detection limits. Compared with reported values in the literature, the average reduction of minimal detection limits obtained in this study was 87.6% in surface waters (0.06-2.28 ng/L) and 67.1% in soils (0.01-18.16 ng/g dry wt). This method was subsequently applied to detect antibiotics in environmental samples in a heavily populated urban city, and macrolides, sulfonamides, and lincomycin were frequently detected. Antibiotics with the highest detected concentrations were sulfamethazine (82.5 ng/L) in surface waters and erythromycin (6.6 ng/g dry wt) in soils. The optimized sample extraction strategy can be used to improve the detection of a variety of antibiotics in environmental surface waters and soils.

  8. Software engineering the mixed model for genome-wide association studies on large samples

    Science.gov (United States)

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample siz...

  9. Evaluation of statistical methods for quantifying fractal scaling in water-quality time series with irregular sampling

    Science.gov (United States)

    Zhang, Qian; Harman, Ciaran J.; Kirchner, James W.

    2018-02-01

    River water-quality time series often exhibit fractal scaling, which here refers to autocorrelation that decays as a power law over some range of scales. Fractal scaling presents challenges to the identification of deterministic trends because (1) fractal scaling has the potential to lead to false inference about the statistical significance of trends and (2) the abundance of irregularly spaced data in water-quality monitoring networks complicates efforts to quantify fractal scaling. Traditional methods for estimating fractal scaling - in the form of spectral slope (β) or other equivalent scaling parameters (e.g., Hurst exponent) - are generally inapplicable to irregularly sampled data. Here we consider two types of estimation approaches for irregularly sampled data and evaluate their performance using synthetic time series. These time series were generated such that (1) they exhibit a wide range of prescribed fractal scaling behaviors, ranging from white noise (β = 0) to Brown noise (β = 2) and (2) their sampling gap intervals mimic the sampling irregularity (as quantified by both the skewness and mean of gap-interval lengths) in real water-quality data. The results suggest that none of the existing methods fully account for the effects of sampling irregularity on β estimation. First, the results illustrate the danger of using interpolation for gap filling when examining autocorrelation, as the interpolation methods consistently underestimate or overestimate β under a wide range of prescribed β values and gap distributions. Second, the widely used Lomb-Scargle spectral method also consistently underestimates β. A previously published modified form, using only the lowest 5 % of the frequencies for spectral slope estimation, has very poor precision, although the overall bias is small. Third, a recent wavelet-based method, coupled with an aliasing filter, generally has the smallest bias and root-mean-squared error among all methods for a wide range of
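
    For reference, the Lomb-Scargle spectral-slope estimator evaluated above can be sketched as follows (SciPy); as the study notes, this plain version tends to underestimate beta for irregular sampling, so the snippet is illustrative rather than a recommended estimator. The frequency-grid choices are assumptions.

        import numpy as np
        from scipy.signal import lombscargle

        def spectral_slope(t, y, n_freq=200):
            # Fit a power law P(f) ~ f**(-beta) to the Lomb-Scargle periodogram
            # of an irregularly sampled, demeaned series; return beta.
            t, y = np.asarray(t, float), np.asarray(y, float)
            y = y - y.mean()
            duration = t[-1] - t[0]
            dt_med = np.median(np.diff(np.sort(t)))
            freqs = np.logspace(np.log10(1.0 / duration), np.log10(0.5 / dt_med), n_freq)
            power = lombscargle(t, y, 2 * np.pi * freqs, normalize=True)
            slope, _ = np.polyfit(np.log10(freqs), np.log10(power), 1)
            return -slope

        # Example with white noise (expected beta close to 0) on irregular sample times
        rng = np.random.default_rng(0)
        t = np.sort(rng.uniform(0, 1000, 500))
        print(round(spectral_slope(t, rng.standard_normal(500)), 2))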

  10. Multiple coherence resonances and synchronization transitions by time delay in adaptive scale-free neuronal networks with spike-timing-dependent plasticity

    International Nuclear Information System (INIS)

    Xie, Huijuan; Gong, Yubing

    2017-01-01

    In this paper, we numerically study the effect of spike-timing-dependent plasticity (STDP) on multiple coherence resonances (MCR) and synchronization transitions (ST) induced by time delay in adaptive scale-free Hodgkin–Huxley neuronal networks. It is found that STDP has a big influence on MCR and ST induced by time delay and on the effect of network average degree on the MCR and ST. MCR is enhanced or suppressed as the adjusting rate A_p of STDP decreases or increases, and there is an optimal A_p by which ST becomes strongest. As the network average degree 〈k〉 increases, ST is enhanced and there is an optimal 〈k〉 at which MCR becomes strongest. Moreover, for a larger A_p value, ST is enhanced more rapidly with increasing 〈k〉 and the optimal 〈k〉 for MCR increases. These results show that STDP can either enhance or suppress MCR, and there is optimal STDP that can most strongly enhance ST induced by time delay in the adaptive neuronal networks. These findings could have potential implications for information processing and transmission in neural systems.

  11. Laser-induced breakdown spectroscopy for the real-time analysis of mixed waste samples containing Sr

    International Nuclear Information System (INIS)

    Barefield, J.E. II; Koskelo, A.C.; Multari, R.A.; Cremers, D.A.; Gamble, T.K.; Han, C.Y.

    1995-01-01

    In this report, the use of laser-induced breakdown spectroscopy to analyze mixed waste samples containing Sr is discussed. The mixed waste samples investigated include vitrified waste glass and contaminated soil. Compared to traditional analysis techniques, the laser-based method is fast (i.e., analysis times on the order of minutes) and essentially waste free, since little or no sample preparation is required. Detection limits on the order of ppm Sr were determined. Detection limits obtained using a fiber optic cable to deliver laser pulses to soil samples containing Cr, Zr, Pb, Be, Cu, and Ni will also be discussed.

  12. Applications of Fast Truncated Multiplication in Cryptography

    Directory of Open Access Journals (Sweden)

    Laszlo Hars

    2006-12-01

    Full Text Available Truncated multiplications compute truncated products, contiguous subsequences of the digits of integer products. For an n-digit multiplication algorithm of time complexity O(n^α), with 1 < α ≤ 2, there is a truncated multiplication algorithm which is a constant factor faster when computing a short enough truncated product. Applying these fast truncated multiplications, several cryptographic long integer arithmetic algorithms are improved, including integer reciprocals, divisions, Barrett and Montgomery multiplications, and 2n-digit modular multiplication on hardware for n-digit half products. For example, Montgomery multiplication is performed in 2.6 times the time of a Karatsuba multiplication.

  13. Development of a real-time PCR to detect Demodex canis DNA in different tissue samples.

    Science.gov (United States)

    Ravera, Ivan; Altet, Laura; Francino, Olga; Bardagí, Mar; Sánchez, Armand; Ferrer, Lluís

    2011-02-01

    The present study reports the development of a real-time polymerase chain reaction (PCR) to detect Demodex canis DNA in different tissue samples. The technique amplifies a 166 bp fragment of the D. canis chitin synthase gene (AB 080667) and has been successfully tested on hairs extracted with their roots and on formalin-fixed, paraffin-embedded skin biopsies. The real-time PCR amplified on the hairs of all 14 dogs with a firm diagnosis of demodicosis and consistently failed to amplify on negative controls. Eleven of 12 skin biopsies with a morphologic diagnosis of canine demodicosis were also positive. Sampling hairs at two skin points (lateral face and interdigital skin), D. canis DNA was detected in nine of 51 healthy dogs (17.6%), a much higher percentage than previously reported with microscopic studies. Furthermore, it is foreseen that if the number of samples were increased, the percentage of positive dogs would probably also grow. Moreover, in four of the six dogs with demodicosis, the samples taken from non-lesioned skin were positive. This finding, if confirmed in further studies, suggests that demodicosis is a generalized phenomenon in canine skin, due to proliferation of local mite populations, even though macroscopic lesions only appear in certain areas. The real-time PCR technique to detect D. canis DNA described in this work is a useful tool to advance our understanding of canine demodicosis.

  14. Comparison of Techniques for Sampling Adult Necrophilous Insects From Pig Carcasses.

    Science.gov (United States)

    Cruise, Angela; Hatano, Eduardo; Watson, David W; Schal, Coby

    2018-02-06

    Studies of the pre-colonization interval and mechanisms driving necrophilous insect ecological succession depend on effective sampling of adult insects and knowledge of their diel and successional activity patterns. The number of insects trapped, their diversity, and diel periodicity were compared with four sampling methods on neonate pigs. Sampling method, time of day and decomposition age of the pigs significantly affected the number of insects sampled from pigs. We also found significant interactions of sampling method and decomposition day, time of sampling and decomposition day. No single method was superior to the other methods during all three decomposition days. Sampling times after noon yielded the largest samples during the first 2 d of decomposition. On day 3 of decomposition however, all sampling times were equally effective. Therefore, to maximize insect collections from neonate pigs, the method used to sample must vary by decomposition day. The suction trap collected the most species-rich samples, but sticky trap samples were the most diverse, when both species richness and evenness were factored into a Shannon diversity index. Repeated sampling during the noon to 18:00 hours period was most effective to obtain the maximum diversity of trapped insects. The integration of multiple sampling techniques would most effectively sample the necrophilous insect community. However, because all four tested methods were deficient at sampling beetle species, future work should focus on optimizing the most promising methods, alone or in combinations, and incorporate hand-collections of beetles. © The Author(s) 2018. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  15. The influence of multiple goals on driving behavior : The case of safety, time saving, and fuel saving

    NARCIS (Netherlands)

    Dogan, Ebru; Steg, Linda; Delhomme, Patricia

    Due to the innate complexity of the task drivers have to manage multiple goals while driving and the importance of certain goals may vary over time leading to priority being given to different goals depending on the circumstances. This study aimed to investigate drivers' behavioral regulation while

  16. Multiple time-scale optimization scheduling for islanded microgrids including PV, wind turbine, diesel generator and batteries

    DEFF Research Database (Denmark)

    Xiao, Zhao xia; Nan, Jiakai; Guerrero, Josep M.

    2017-01-01

    A multiple time-scale optimization scheduling approach, including day-ahead and short-term stages, for an islanded microgrid is presented. In this paper, the microgrid under study includes photovoltaics (PV), wind turbine (WT), diesel generator (DG), batteries, and shiftable loads. The study considers the maximum-efficiency operation area for the diesel engine and the cost of the battery charge/discharge cycle losses. The day-ahead generation scheduling takes into account the minimum operational cost and the maximum load satisfaction as the objective function. Short-term optimal dispatch is based on minimizing

  17. Phase unwinding for dictionary compression with multiple channel transmission in magnetic resonance fingerprinting.

    Science.gov (United States)

    Lattanzi, Riccardo; Zhang, Bei; Knoll, Florian; Assländer, Jakob; Cloos, Martijn A

    2018-06-01

    Magnetic Resonance Fingerprinting reconstructions can become computationally intractable with multiple transmit channels if the B1+ phases are included in the dictionary. We describe a general method that allows the transmit phases to be omitted and show that this enables a straightforward implementation of dictionary compression to further reduce the problem dimensionality. We merged the raw data of each RF source into a single k-space dataset, extracted the transceiver phases from the corresponding reconstructed images, and used them to unwind the phase in each time frame. All phase-unwound time frames were combined into a single set before performing SVD-based compression. We conducted synthetic, phantom, and in-vivo experiments to demonstrate the feasibility of SVD-based compression in the case of two-channel transmission. Unwinding the phases before SVD-based compression yielded artifact-free parameter maps. For fully sampled acquisitions, parameters were accurate with as few as 6 compressed time frames. SVD-based compression performed well in vivo with highly under-sampled acquisitions using 16 compressed time frames, which reduced reconstruction time from 750 to 25 minutes. Our method reduces the dimensions of the dictionary atoms and makes it possible to implement any fingerprint compression strategy in the case of multiple transmit channels. Copyright © 2018 Elsevier Inc.
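
    To make the unwind-then-compress idea concrete, the following NumPy sketch removes an estimated per-voxel transceiver phase from each time frame and then keeps a small number of temporal singular vectors. The array shapes, the synthetic data, and the phase estimate (taken here from the time-averaged image) are assumptions for illustration, not the authors' implementation.

        # Sketch: unwind a per-voxel transceiver phase from each time frame,
        # then apply SVD-based temporal compression. Data are synthetic.
        import numpy as np

        rng = np.random.default_rng(0)
        T, N = 200, 32 * 32                                    # time frames, voxels
        frames = rng.standard_normal((T, N)) + 1j * rng.standard_normal((T, N))

        # Estimate the spatial transceiver phase and unwind it from every frame,
        # so that only the temporal (fingerprint) phase evolution remains.
        transceiver_phase = np.angle(frames.mean(axis=0))
        unwound = frames * np.exp(-1j * transceiver_phase)[None, :]

        # SVD-based compression: keep K temporal singular vectors and project the
        # phase-unwound frames onto that subspace.
        K = 6
        U, s, Vh = np.linalg.svd(unwound, full_matrices=False)
        basis = U[:, :K]                                       # compressed temporal basis
        compressed = basis.conj().T @ unwound                  # (K, N) compressed frames

        print(compressed.shape)                                # -> (6, 1024)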

  18. Choice of Sample Split in Out-of-Sample Forecast Evaluation

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Timmermann, Allan

    Out-of-sample tests of forecast performance depend on how a given data set is split into estimation and evaluation periods, yet no guidance exists on how to choose the split point. Empirical forecast evaluation results can therefore be difficult to interpret, particularly when several values ... while conversely the power of forecast evaluation tests is strongest with long out-of-sample periods. To deal with size distortions, we propose a test statistic that is robust to the effect of considering multiple sample split points. Empirical applications to predictability of stock returns and inflation demonstrate that out-of-sample forecast evaluation results can critically depend on how the sample split is determined.
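
    The dependence on the split point can be illustrated with a small simulation: the sketch below evaluates two simple forecasts out of sample at several candidate splits and reports how the comparison shifts. The data and the two forecast rules are synthetic assumptions, not the authors' test statistic.

        # Illustrative only: out-of-sample MSE of two naive forecasts at several split points.
        import numpy as np

        rng = np.random.default_rng(1)
        y = rng.standard_normal(500).cumsum() * 0.05 + rng.standard_normal(500)

        def oos_mse(y, split):
            errs_mean, errs_rw = [], []
            for t in range(split, len(y) - 1):
                hist = y[:t + 1]
                errs_mean.append((y[t + 1] - hist.mean()) ** 2)  # benchmark: historical mean
                errs_rw.append((y[t + 1] - hist[-1]) ** 2)       # rival: random-walk forecast
            return np.mean(errs_mean), np.mean(errs_rw)

        for split in (100, 250, 400):                            # candidate split points
            m, r = oos_mse(y, split)
            print(f"split={split:3d}  mean-model MSE={m:.3f}  random-walk MSE={r:.3f}")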

  19. Multiple-Relaxation-Time Lattice Boltzmann Approach to Richtmyer-Meshkov Instability

    International Nuclear Information System (INIS)

    Chen Feng; Li Yingjun; Xu Aiguo; Zhang Guangcai

    2011-01-01

    The aims of the present paper are twofold. First, we further study the Multiple-Relaxation-Time (MRT) Lattice Boltzmann (LB) model proposed in [Europhys. Lett. 90 (2010) 54003]. We discuss why the Gram-Schmidt orthogonalization procedure is not needed in the construction of the transformation matrix M and point out why the Kataoka-Tsutahara model [Phys. Rev. E 69 (2004) 035701(R)] is valid only in subsonic flows. A von Neumann stability analysis is performed. Second, we carry out a preliminary quantitative study of the Richtmyer-Meshkov instability using the proposed MRT LB model. When a shock wave travels from a light medium to a heavy one, the simulated growth rate is in qualitative agreement with the perturbation model of Zhang and Sohn; it is about half the value predicted by the impulsive model and is closer to the experimental result. When the shock wave travels from a heavy medium to a light one, our simulation results are also consistent with physical analysis. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)
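
    For readers unfamiliar with the role of the transformation matrix M, the sketch below shows a single MRT collision step on the D2Q9 lattice: distributions are mapped to moment space, each moment is relaxed toward equilibrium at its own rate, and the result is mapped back. The relaxation rates and node state are illustrative assumptions, and the standard (rho, e, eps, jx, qx, jy, qy, pxx, pxy) moment basis is used; this is a generic textbook-style example, not the model of the cited paper.

        # One MRT collision step on a single D2Q9 node: f_post = f - M^-1 S (M f - m_eq).
        # Relaxation rates and node state are illustrative assumptions.
        import numpy as np

        # D2Q9 velocities and weights
        e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                      [1, 1], [-1, 1], [-1, -1], [1, -1]])
        w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

        # Transformation to moments (rho, e, eps, jx, qx, jy, qy, pxx, pxy)
        M = np.array([
            [ 1,  1,  1,  1,  1,  1,  1,  1,  1],
            [-4, -1, -1, -1, -1,  2,  2,  2,  2],
            [ 4, -2, -2, -2, -2,  1,  1,  1,  1],
            [ 0,  1,  0, -1,  0,  1, -1, -1,  1],
            [ 0, -2,  0,  2,  0,  1, -1, -1,  1],
            [ 0,  0,  1,  0, -1,  1,  1, -1, -1],
            [ 0,  0, -2,  0,  2,  1,  1, -1, -1],
            [ 0,  1, -1,  1, -1,  0,  0,  0,  0],
            [ 0,  0,  0,  0,  0,  1, -1,  1, -1]], dtype=float)

        def f_equilibrium(rho, u):
            eu = e @ u
            return w * rho * (1 + 3 * eu + 4.5 * eu**2 - 1.5 * (u @ u))

        def mrt_collide(f, s):
            """Relax each moment at its own rate s_i, then map back to populations."""
            rho = f.sum()
            u = (f @ e) / rho
            m = M @ f
            m_eq = M @ f_equilibrium(rho, u)          # equilibrium moments
            return f - np.linalg.solve(M, np.diag(s) @ (m - m_eq))

        s = np.array([0.0, 1.4, 1.4, 0.0, 1.2, 0.0, 1.2, 1.6, 1.6])  # conserved moments use rate 0
        f0 = f_equilibrium(1.0, np.array([0.05, 0.0])) * (1 + 0.01 * np.arange(9))  # perturbed node
        print(mrt_collide(f0, s))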

  20. Pumping time required to obtain tube well water samples with aquifer characteristic radon concentrations

    International Nuclear Information System (INIS)

    Ricardo, Carla Pereira; Oliveira, Arno Heeren de

    2011-01-01

    Radon is an inert noble gas that comes from the natural radioactive decay of uranium and thorium in soil, rock, and water. Radon isotopes emanated from radium-bearing grains of a rock or soil are released into the pore space, where the radon is partitioned between the gaseous and aqueous phases. Groundwater therefore carries a radon signature from the rock that is characteristic of the aquifer. The characteristic radon concentration of an aquifer, which is mainly related to emanation, is also influenced by the degree of subsurface degassing, especially in the vicinity of a tube well, where the radon concentration is strongly reduced. To determine the pumping time required to obtain a tube well water sample with the characteristic radon concentration of the aquifer, an experiment was conducted in an 80 m deep tube well. After twenty-four hours without extraction, water samples were collected periodically, at roughly ten-minute intervals, over two hours of pumping. The radon concentrations of the samples were determined with the RAD7 electronic radon detector from Durridge Company, a solid-state alpha spectrometric detector. The time needed to reach the maximum radon concentration, that is, the characteristic radon concentration of the aquifer, was found to be about sixty minutes. (author)
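
    One way to turn such a pumping series into a stabilization time (the study simply identifies when the concentration stops rising) is to fit an exponential approach curve and read off when it reaches, say, 95% of its plateau. The sketch below does this on invented data; the model form C(t) = C_aq(1 - exp(-t/tau)) and all numbers are assumptions for illustration, not the study's measurements.

        # Fit an assumed recovery curve to an invented pumping time series.
        import numpy as np
        from scipy.optimize import curve_fit

        t = np.arange(0, 130, 10.0)                      # minutes of pumping
        c = np.array([2.1, 4.8, 7.0, 8.6, 9.5, 10.4, 10.9,
                      11.2, 11.3, 11.4, 11.4, 11.5, 11.5])  # radon concentration (Bq/L), invented

        def recovery(t, c_aq, tau):
            return c_aq * (1 - np.exp(-t / tau))

        (c_aq, tau), _ = curve_fit(recovery, t, c, p0=(11.0, 30.0))
        t95 = tau * np.log(20)                           # time to reach ~95% of c_aq
        print(f"estimated aquifer concentration: {c_aq:.1f} Bq/L, ~95% reached after {t95:.0f} min")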