WorldWideScience

Sample records for sample processing time

  1. Sampling and Timing: A Task for the Environmental Process

    NARCIS (Netherlands)

    Hilderink, G.H.; Broenink, Johannes F.

    2003-01-01

    Sampling and timing is considered a responsibility of the environment of controller software. In this paper we will illustrate a concept whereby an environmental process and multi-way events play an important role in applying timing for untimed CSP software architectures. We use this timing concept

  2. Real-time recursive hyperspectral sample and band processing algorithm architecture and implementation

    CERN Document Server

    Chang, Chein-I

    2017-01-01

    This book explores recursive architectures in designing progressive hyperspectral imaging algorithms. In particular, it makes progressive imaging algorithms recursive by introducing the concept of Kalman filtering into algorithm design, so that hyperspectral imagery can be processed not only progressively, sample by sample or band by band, but also recursively via recursive equations. This book can be considered a companion to the author's book Real-Time Progressive Hyperspectral Image Processing, published by Springer in 2016. It explores recursive structures in algorithm architecture; implements algorithmic recursive architecture in conjunction with progressive sample and band processing; derives Recursive Hyperspectral Sample Processing (RHSP) techniques according to the Band-Interleaved Sample/Pixel (BIS/BIP) acquisition format; and develops Recursive Hyperspectral Band Processing (RHBP) techniques according to the Band SeQuential (BSQ) acquisition format for hyperspectral data.

  3. Random sampling of evolution time space and Fourier transform processing

    International Nuclear Information System (INIS)

    Kazimierczuk, Krzysztof; Zawadzka, Anna; Kozminski, Wiktor; Zhukov, Igor

    2006-01-01

    Application of the Fourier transform for processing 3D NMR spectra with random sampling of the evolution time space is presented. The 2D FT is calculated for pairs of frequencies, instead of the conventional sequence of one-dimensional transforms. Signal-to-noise ratios and linewidths for different random distributions were investigated by simulations and experiments. The experimental examples include 3D HNCA, HNCACB and 15N-edited NOESY-HSQC spectra of a 13C,15N-labeled ubiquitin sample. The results revealed the general applicability of the proposed method and a significant improvement of resolution in comparison with conventional spectra recorded in the same time
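
    The pair-of-frequencies evaluation lends itself to a direct Fourier sum over the randomly sampled (t1, t2) plane. Below is a minimal numpy sketch of that idea on a synthetic one-peak FID; the signal parameters, grids and sample counts are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Direct 2D Fourier sum over randomly sampled evolution times (t1, t2),
# evaluated per frequency pair instead of nested uniform-grid FFTs.
# The synthetic one-peak FID and all parameters are illustrative.
rng = np.random.default_rng(0)
n_pts = 512
t1 = rng.uniform(0, 0.05, n_pts)             # evolution times, s
t2 = rng.uniform(0, 0.05, n_pts)
f1_true, f2_true, r2 = 1200.0, -800.0, 30.0  # Hz, Hz, decay rate
fid = np.exp(2j*np.pi*(f1_true*t1 + f2_true*t2) - r2*(t1 + t2))

def ft2_at(f1, f2):
    # 2D Fourier sum for a single frequency pair (f1, f2)
    return np.sum(fid * np.exp(-2j*np.pi*(f1*t1 + f2*t2)))

freqs = np.linspace(-2000, 2000, 101)        # Hz grid for both axes
spec = np.abs([[ft2_at(a, b) for b in freqs] for a in freqs])
i, j = np.unravel_index(np.argmax(spec), spec.shape)
print(freqs[i], freqs[j])                    # peak near (1200, -800) Hz
```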

  4. Study on auto-plating process time versus recovery for polonium, Po-210 in environmental sample

    International Nuclear Information System (INIS)

    Jalal Sharib; Zaharudin Ahmad; Abdul Kadir Ishak; Norfaizal Mohamed; Ahmad Sanadi Abu Bakar; Yii Mei Wo; Kamarozaman Ishak; Siti Aminah Yusoff

    2008-08-01

    This study was carried out to evaluate the time effectiveness and recovery of 16 samples from 4 Kuala Muda stations during the auto-plating procedure for determination of polonium (Po-210) activity concentration in environmental samples. The study was performed using Kuala Muda sediment as the sample, following the same methodology. The auto-plating process was run for 4, 12, 24 and 30 hours on a silver disc for 4 samples from each station, and the discs were then counted for one (1) day using an alpha spectrometry counting system. The objective of this study is to establish the auto-plating duration that affects the chemical yield of Po-209. The results showed that recovery increases with time and becomes constant at 24 hours of auto-plating. This means that 24 hours is the optimum time for the auto-plating process for determination of Po-210 activity concentration in environmental samples. (Author)

  5. Sampling and sample processing in pesticide residue analysis.

    Science.gov (United States)

    Lehotay, Steven J; Cook, Jo Marie

    2015-05-13

    Proper sampling and sample processing in pesticide residue analysis of food and soil have always been essential to obtaining accurate results, but the subject is becoming a greater concern as approximately 100 mg test portions are being analyzed with automated high-throughput analytical methods by the agrochemical industry and contract laboratories. As global food trade and the importance of monitoring increase, the food industry and regulatory laboratories are also considering miniaturized high-throughput methods. In conjunction with a summary of the symposium "Residues in Food and Feed - Going from Macro to Micro: The Future of Sample Processing in Residue Analytical Methods" held at the 13th IUPAC International Congress of Pesticide Chemistry, this is an opportune time to review sampling theory and sample processing for pesticide residue analysis. If collected samples and test portions do not adequately represent the actual lot from which they came and do not provide meaningful results, then all the costs, time, and effort involved in implementing programs using sophisticated analytical instruments and techniques are wasted and can actually yield misleading results. This paper is designed to briefly review the often-neglected but crucial topic of sample collection and processing and to put the issue into perspective for the future of pesticide residue analysis. It also emphasizes that analysts should demonstrate the validity of their sample processing approaches for the analytes/matrices of interest and encourages further studies on sampling and sample mass reduction to produce a test portion.

  6. A novel heterogeneous training sample selection method on space-time adaptive processing

    Science.gov (United States)

    Wang, Qiang; Zhang, Yongshun; Guo, Yiduo

    2018-04-01

    The ground target detection performance of space-time adaptive processing (STAP) degrades when the clutter power becomes nonhomogeneous because training samples are contaminated by target-like signals. In order to solve this problem, a novel nonhomogeneous training sample selection method based on sample similarity is proposed, which converts training sample selection into a convex optimization problem. Firstly, the existing deficiencies of sample selection using the generalized inner product (GIP) are analyzed. Secondly, the similarities of different training samples are obtained by calculating the mean-Hausdorff distance, so as to reject the contaminated training samples. Thirdly, the cell under test (CUT) and the residual training samples are projected into the orthogonal subspace of the target in the CUT, and mean-Hausdorff distances between the projected CUT and training samples are calculated. Fourthly, the distances are sorted by value and the training samples with the largest values are preferentially selected to realize the dimension reduction. Finally, simulation results with Mountain-Top data verify the effectiveness of the proposed method.
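
    For illustration, the similarity measure named above can be sketched as a symmetric mean-Hausdorff distance between two snapshots treated as point sets. This is one plausible reading of the abstract, not the authors' code; the orthogonal-subspace projection and the convex-optimization step are omitted.

```python
import numpy as np

# Illustrative reading of the similarity measure named above: a
# symmetric mean-Hausdorff distance between two snapshots treated as
# point sets. Data are toy values.
def mean_hausdorff(A, B):
    # for each point, distance to its nearest neighbour in the other set
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return 0.5 * (D.min(axis=1).mean() + D.min(axis=0).mean())

rng = np.random.default_rng(1)
cut = rng.standard_normal((8, 2))                  # cell under test
train = [rng.standard_normal((8, 2)) for _ in range(10)]
order = sorted(range(10), key=lambda i: mean_hausdorff(cut, train[i]))
print(order)                                       # most similar first
```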

  7. Sample and data management process description

    International Nuclear Information System (INIS)

    Kessner, J.H.

    2000-01-01

    The sample and data management process was initiated in 1994 as a result of a process improvement workshop. The purpose of the workshop was to develop a sample and data management process that would reduce cycle time and costs, simplify systems and procedures, and improve customer satisfaction for sampling, analytical services, and data management activities

  8. Sampling returns for realized variance calculations: tick time or transaction time?

    NARCIS (Netherlands)

    Griffin, J.E.; Oomen, R.C.A.

    2008-01-01

    This article introduces a new model for transaction prices in the presence of market microstructure noise in order to study the properties of the price process on two different time scales, namely, transaction time where prices are sampled with every transaction and tick time where prices are

  9. The relative importance of perceptual and memory sampling processes in determining the time course of absolute identification.

    Science.gov (United States)

    Guest, Duncan; Kent, Christopher; Adelman, James S

    2018-04-01

    In absolute identification, the extended generalized context model (EGCM; Kent & Lamberts, 2005, 2016) proposes that perceptual processing determines systematic response time (RT) variability; all other models of RT emphasize response selection processes. In the EGCM-RT, the bow effect in RTs (longer responses for stimuli in the middle of the range) occurs because these middle stimuli are less isolated, and as perceptual information is accumulated, the evidence supporting a correct response grows more slowly than for stimuli at the ends of the range. More perceptual information is therefore accumulated in order to increase certainty in the response for middle stimuli, lengthening RT. According to the model, reducing perceptual sampling time should reduce the size of the bow effect in RT. We tested this hypothesis in 2 pitch identification experiments. Experiment 1 found no effect of stimulus duration on the size of the RT bow. Experiment 2 used multiple short stimulus durations as well as manipulating set size and stimulus spacing. Contrary to EGCM-RT predictions, the bow effect on RTs was large even for very short durations. A new version of the EGCM-RT could only capture this, alongside the effect of stimulus duration on accuracy, by including both a perceptual and a memory sampling process. A modified version of the selective attention, mapping, and ballistic accumulator model (Brown, Marley, Donkin, & Heathcote, 2008) could also capture the data, by assuming psychophysical noise diminishes with increased exposure duration. This modeling suggests systematic variability in RT in absolute identification is largely determined by memory sampling and response selection processes. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  10. ISOLOK VALVE ACCEPTANCE TESTING FOR DWPF SME SAMPLING PROCESS

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, T.; Hera, K.; Coleman, C.; Jones, M.; Wiedenman, B.

    2011-12-05

    Evaluation of the Defense Waste Processing Facility (DWPF) Chemical Process Cell (CPC) cycle time identified several opportunities to improve the CPC processing time. Of the opportunities, a focus area related to optimizing the equipment and efficiency of the sample turnaround time for the DWPF Analytical Laboratory was identified. The Mechanical Systems & Custom Equipment Development (MS&CED) Section of the Savannah River National Laboratory (SRNL) evaluated the possibility of using an Isolok® sampling valve as an alternative to the Hydragard® valve for taking process samples. Previous viability testing was conducted with favorable results using the Isolok sampler and reported in SRNL-STI-2010-00749 (1). This task has the potential to improve operability, reduce maintenance time and decrease CPC cycle time. This report summarizes the results from acceptance testing which was requested in Task Technical Request (TTR) HLW-DWPF-TTR-2010-0036 (2) and which was conducted as outlined in Task Technical and Quality Assurance Plan (TTQAP) SRNL-RP-2011-00145 (3). The Isolok to be tested is the same model which was tested, qualified, and installed in the Sludge Receipt Adjustment Tank (SRAT) sample system. RW-0333P QA requirements apply to this task. This task was to qualify the Isolok sampler for use in the DWPF Slurry Mix Evaporator (SME) sampling process. The Hydragard, which is the current baseline sampling method, was used for comparison to the Isolok sampling data. The Isolok sampler is an air-powered grab sampler used to 'pull' a sample volume from a process line. The operation of the sampler is shown in Figure 1. The image on the left shows the Isolok's spool extended into the process line and the image on the right shows the sampler retracted and then dispensing the liquid into the sampling container. To determine tank homogeneity, a Coliwasa sampler was used to grab samples at a high and low location within the mixing tank. Data from

  11. Sample processing, protocol, and statistical analysis of the time-of-flight secondary ion mass spectrometry (ToF-SIMS) of protein, cell, and tissue samples.

    Science.gov (United States)

    Barreto, Goncalo; Soininen, Antti; Sillat, Tarvo; Konttinen, Yrjö T; Kaivosoja, Emilia

    2014-01-01

    Time-of-flight secondary ion mass spectrometry (ToF-SIMS) is increasingly being used in the analysis of biological samples. For example, it has been applied to distinguish healthy and osteoarthritic human cartilage. This chapter discusses the ToF-SIMS principle and instrumentation, including the three modes of analysis in ToF-SIMS. ToF-SIMS sets certain requirements for the samples to be analyzed; for example, the samples have to be vacuum compatible. Accordingly, sample processing steps for different biological samples, i.e., proteins, cells, frozen and paraffin-embedded tissues, and extracellular matrix, are presented. Multivariate analysis of the ToF-SIMS data and the necessary data preprocessing steps (peak selection, data normalization, mean-centering, and scaling and transformation) are discussed in this chapter.
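
    As a generic illustration of the preprocessing chain listed above (peak selection, normalization, mean-centering, scaling) followed by a multivariate projection, here is a small numpy sketch; the toy data, threshold and auto-scaling variant are assumptions, and PCA stands in for whatever multivariate method a given study uses.

```python
import numpy as np

# Generic sketch of the preprocessing chain named in the abstract:
# peak selection -> normalization -> mean-centering -> scaling -> PCA.
# The data, threshold and auto-scaling choice are illustrative.
rng = np.random.default_rng(2)
X = rng.poisson(5.0, size=(40, 200)).astype(float)  # spectra x peaks

keep = X.sum(axis=0) > 50            # "peak selection": drop weak peaks
X = X[:, keep]
X = X / X.sum(axis=1, keepdims=True) # normalize to total ion counts
X = X - X.mean(axis=0)               # mean-centering
X = X / (X.std(axis=0) + 1e-12)      # scaling (auto-scaling variant)

# PCA via SVD: scores = U * S, loadings = rows of Vt
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores = U[:, :2] * S[:2]
print(scores.shape)                  # (n_spectra, 2) score-plot input
```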

  12. Accurate Sample Time Reconstruction of Inertial FIFO Data

    Directory of Open Access Journals (Sweden)

    Sebastian Stieber

    2017-12-01

    In the context of modern cyber-physical systems, the accuracy of underlying sensor data plays an increasingly important role in sensor data fusion and feature extraction. The raw events of multiple sensors have to be aligned in time to enable high-quality sensor fusion results. However, the growing number of simultaneously connected sensor devices makes energy-saving data acquisition and processing more and more difficult. Hence, most modern sensors offer a first-in-first-out (FIFO) interface to store multiple data samples and to relax timing constraints when handling multiple sensor devices. However, using the FIFO interface increases the negative influence of individual clock drifts (introduced by fabrication inaccuracies, temperature changes and wear-out effects) on the sampling data reconstruction. Furthermore, additional timing offset errors due to communication and software latencies increase with a growing number of sensor devices. In this article, we present an approach for accurate sample time reconstruction independent of the actual clock drift with the help of an internal sensor timer. Such timers are already available in modern sensors manufactured in micro-electromechanical systems (MEMS) technology. The presented approach focuses on calculating accurate time stamps using the sensor FIFO interface in a forward-only processing manner as a robust and energy-saving solution. The proposed algorithm is able to lower the overall standard deviation of reconstructed sampling periods below 40 μs, while run-time savings of up to 42% are achieved, compared to single sample acquisition.
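
    A minimal sketch of the forward-only idea described above: the sensor's internal timer tracks the true mean sampling period across FIFO bursts, and timestamps for each burst are back-filled from it. The smoothing constant, the burst interface and all names are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Sketch of forward-only sample-time reconstruction for FIFO bursts:
# estimate the true sampling period from the sensor's internal timer,
# then back-fill timestamps for the N samples read in one burst.
NOMINAL_PERIOD = 0.01          # 100 Hz sensor, nominal period (s)
alpha = 0.05                   # smoothing constant for drift estimate

period_est = NOMINAL_PERIOD
last_timer = None

def on_fifo_burst(timer_ticks, tick_res, n_samples, host_time):
    """Return reconstructed timestamps for one burst of n_samples.

    timer_ticks: sensor-internal timer value at burst readout
    tick_res:    seconds per timer tick
    host_time:   host arrival time of the newest sample (offset-corrected)
    """
    global period_est, last_timer
    if last_timer is not None:
        elapsed = (timer_ticks - last_timer) * tick_res
        measured = elapsed / n_samples          # actual mean period
        period_est = (1 - alpha) * period_est + alpha * measured
    last_timer = timer_ticks
    # newest sample gets host_time; earlier samples are back-dated
    return host_time - period_est * np.arange(n_samples - 1, -1, -1)

ts = on_fifo_burst(timer_ticks=123456, tick_res=25e-6,
                   n_samples=8, host_time=10.0)
print(np.diff(ts))  # ~period_est between consecutive samples
```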

  13. Adaptive Sampling of Time Series During Remote Exploration

    Science.gov (United States)

    Thompson, David R.

    2012-01-01

    This work deals with the challenge of online adaptive data collection in a time series. A remote sensor or explorer agent adapts its rate of data collection in order to track anomalous events while obeying constraints on time and power. This problem is challenging because the agent has limited visibility (all its datapoints lie in the past) and limited control (it can only decide when to collect its next datapoint). This problem is treated from an information-theoretic perspective, fitting a probabilistic model to collected data and optimizing the future sampling strategy to maximize information gain. The performance characteristics of stationary and nonstationary Gaussian process models are compared. Self-throttling sensors could benefit environmental sensor networks and monitoring as well as robotic exploration. Explorer agents can improve performance by adjusting their data collection rate, preserving scarce power or bandwidth resources during uninteresting times while fully covering anomalous events of interest. For example, a remote earthquake sensor could conserve power by limiting its measurements during normal conditions and increasing its cadence during rare earthquake events. A similar capability could improve sensor platforms traversing a fixed trajectory, such as an exploration rover transect or a deep space flyby. These agents can adapt observation times to improve sample coverage during moments of rapid change. An adaptive sampling approach couples sensor autonomy, instrument interpretation, and sampling. The challenge is addressed as an active learning problem, which already has extensive theoretical treatment in the statistics and machine learning literature. A statistical Gaussian process (GP) model is employed to guide sample decisions that maximize information gain. Nonstationary (e.g., time-varying) covariance relationships permit the system to represent and track local anomalies, in contrast with current GP approaches. Most common GP models
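
    A compact sketch of the core loop: fit a Gaussian process to the data collected so far and place the next sample where the predictive variance (a standard stand-in for information gain) is largest. The RBF kernel, its length scale and the noise level are assumptions; the paper's nonstationary covariance is not reproduced here.

```python
import numpy as np

# Sketch of variance-driven adaptive sampling with a Gaussian process:
# sample next where the GP predictive variance is largest. Kernel,
# length scale and noise level are illustrative assumptions.
def rbf(a, b, ls=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ls**2)

def next_sample_time(t_obs, t_candidates, noise=1e-4):
    # GP predictive variance depends on sampling locations only
    K = rbf(t_obs, t_obs) + noise * np.eye(len(t_obs))
    Ks = rbf(t_obs, t_candidates)
    var = 1.0 - np.einsum('ij,ij->j', Ks, np.linalg.solve(K, Ks))
    return t_candidates[np.argmax(var)]

t_obs = np.array([0.0, 1.0, 4.0])        # times already sampled
cands = np.linspace(0.0, 5.0, 51)
print(next_sample_time(t_obs, cands))    # lands in the 1-4 gap
```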

  14. Stationary and related stochastic processes sample function properties and their applications

    CERN Document Server

    Cramér, Harald

    2004-01-01

    This graduate-level text offers a comprehensive account of the general theory of stationary processes, with special emphasis on the properties of sample functions. Assuming a familiarity with the basic features of modern probability theory, the text develops the foundations of the general theory of stochastic processes, examines processes with a continuous-time parameter, and applies the general theory to procedures key to the study of stationary processes. Additional topics include analytic properties of the sample functions and the problem of time distribution of the intersections between a

  15. Optimal time points sampling in pathway modelling.

    Science.gov (United States)

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as the related parameter estimation. However, few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time-consuming and expensive. Therefore, approximating parameters for models from only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the morass of selecting good initial values or from becoming stuck in local optima, problems that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
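
    To make the "minimize the variance of parameter estimates" criterion concrete, here is a toy sketch for a one-parameter exponential-decay model, where the estimate's variance is inversely proportional to the Fisher information of the chosen time points. Brute-force search stands in for the paper's quantum-inspired evolutionary algorithm, and all numbers are illustrative.

```python
import numpy as np
from itertools import combinations

# Toy variance-minimizing time-point selection for y(t) = exp(-k t):
# Var(k_hat) is inversely proportional to the Fisher information, the
# sum of squared sensitivities dy/dk = -t exp(-k t).
k_guess = 0.5                            # nominal parameter value
grid = np.linspace(0.1, 10.0, 25)        # candidate sampling times

def fisher_info(times, k=k_guess):
    t = np.asarray(times)
    sens = -t * np.exp(-k * t)           # sensitivity dy/dk at each t
    return np.sum(sens**2)

best = max(combinations(grid, 4), key=fisher_info)
print(np.round(best, 2))                 # 4 most informative time points
```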

  16. Non-Contact Conductivity Measurement for Automated Sample Processing Systems

    Science.gov (United States)

    Beegle, Luther W.; Kirby, James P.

    2012-01-01

    A new method has been developed for monitoring and control of automated sample processing and preparation, especially focusing on desalting of samples before analysis (described in more detail in Automated Desalting Apparatus (NPO-45428), NASA Tech Briefs, Vol. 34, No. 8 (August 2010), page 44). The use of non-contact conductivity probes, one at the inlet and one at the outlet of the solid phase sample preparation media, allows monitoring of the process, and acts as a trigger for the start of the next step in the sequence (see figure). At each step of the multi-step process, the system is flushed with low-conductivity water, which sets the system back to an overall low-conductivity state. This measurement then triggers the next stage of sample processing protocols, and greatly minimizes use of consumables. In the case of amino acid sample preparation for desalting, the conductivity measurement will define three key conditions for the sample preparation process. First, when the system is neutralized (low conductivity, by washing with excess de-ionized water); second, when the system is acidified, by washing with a strong acid (high conductivity); and third, when the system is at a basic condition of high pH (high conductivity). Taken together, this non-contact conductivity measurement for monitoring sample preparation will not only facilitate automation of the sample preparation and processing, but will also act as a way to optimize the operational time and use of consumables.
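
    A minimal sketch of how such conductivity readings could gate a multi-step protocol, assuming hypothetical threshold levels and step names (the actual NASA system and its set points are not given in this abstract):

```python
# Conductivity-triggered sequencing sketch. Thresholds and step names
# are illustrative assumptions, not values from the NASA system.
LOW, HIGH = 5.0, 200.0     # microS/cm, hypothetical trigger levels

STEPS = ["load_sample", "acid_wash", "base_wash", "elute"]

def next_step(step_idx, conductivity):
    """Advance the protocol when the expected conductivity state
    (low after a de-ionized-water flush, high after an acid or base
    wash) is reached at the outlet probe."""
    expect_low = STEPS[step_idx] in ("load_sample", "elute")
    reached = conductivity < LOW if expect_low else conductivity > HIGH
    can_advance = reached and step_idx + 1 < len(STEPS)
    return step_idx + 1 if can_advance else step_idx

idx = 0
for reading in [350.0, 4.2, 280.0, 260.0, 3.8]:  # simulated outlet probe
    idx = next_step(idx, reading)
print(STEPS[idx])
```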

  17. Robust nonhomogeneous training samples detection method for space-time adaptive processing radar using sparse-recovery with knowledge-aided

    Science.gov (United States)

    Li, Zhihui; Liu, Hanwei; Zhang, Yongshun; Guo, Yiduo

    2017-10-01

    The performance of space-time adaptive processing (STAP) may degrade significantly when some of the training samples are contaminated by signal-like components (outliers) in nonhomogeneous clutter environments. To remove the training samples contaminated by outliers in nonhomogeneous clutter environments, a robust nonhomogeneous training sample detection method using sparse recovery (SR) with knowledge aiding (KA) is proposed. First, the reduced-dimension (RD) overcomplete spatial-temporal steering dictionary is designed with prior knowledge of the system parameters and the possible target region. Second, the clutter covariance matrix (CCM) of the cell under test is efficiently estimated using a modified focal underdetermined system solver (FOCUSS) algorithm, where the RD overcomplete spatial-temporal steering dictionary is applied. Third, the proposed statistics are formed by combining the estimated CCM with the generalized inner product (GIP) method, and the contaminated training samples can be detected and removed. Finally, several simulation results validate the effectiveness of the proposed KA-SR-GIP method.

  18. Correction of Sample-Time Error for Time-Interleaved Sampling System Using Cubic Spline Interpolation

    Directory of Open Access Journals (Sweden)

    Qin Guo-jie

    2014-08-01

    Sample-time errors can greatly degrade the dynamic range of a time-interleaved sampling system. In this paper, a novel correction technique employing cubic spline interpolation is proposed for inter-channel sample-time error compensation. The cubic spline interpolation compensation filter is developed in the form of a finite-impulse response (FIR) filter structure. The correction method for the interpolation compensation filter coefficients is deduced. A 4 GS/s two-channel, time-interleaved ADC prototype system has been implemented to evaluate the performance of the technique. The experimental results showed that the correction technique is effective in attenuating spurious spurs and improving the dynamic performance of the system.
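
    For intuition, the correction can be prototyped directly with a spline resampler before deriving the equivalent FIR filter: samples captured on the skewed two-channel time grid are re-interpolated onto the ideal uniform grid. The skew magnitude and test tone below are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Prototype of the correction idea: samples captured on the skewed
# interleaved time grid are spline-interpolated back onto the ideal
# uniform grid. Skew and tone values are illustrative assumptions.
fs = 4e9                               # aggregate rate: 4 GS/s
n = 256
t_ideal = np.arange(n) / fs            # ideal uniform sampling times
t_actual = t_ideal.copy()
t_actual[1::2] += 5e-12                # channel-B skew (assumed known)

x = lambda t: np.sin(2 * np.pi * 300e6 * t)    # 300 MHz test tone
captured = x(t_actual)                 # what the interleaved ADC sees

corrected = CubicSpline(t_actual, captured)(t_ideal)
print(np.max(np.abs(corrected - x(t_ideal))))  # small residual error
```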

  19. Sample Handling and Processing on Mars for Future Astrobiology Missions

    Science.gov (United States)

    Beegle, Luther; Kirby, James P.; Fisher, Anita; Hodyss, Robert; Saltzman, Alison; Soto, Juancarlos; Lasnik, James; Roark, Shane

    2011-01-01

    In most analytical investigations, there is a need to process complex field samples for the unique detection of analytes, especially when detecting low-concentration organic molecules that may identify extraterrestrial life. Sample processing for analytical instruments consumes time, resources and manpower in terrestrial laboratories. Every step in this laborious process will have to be automated for in situ life detection. We have developed, and are currently demonstrating, an automated wet chemistry preparation system that can operate autonomously on Earth and is designed to operate under Martian ambient conditions. This will enable a complete wet chemistry laboratory as part of future missions. Our system, namely the Automated Sample Processing System (ASPS), receives fines, extracts organics through solvent extraction, processes the extract by removing non-organic soluble species and delivers the sample to multiple instruments for analysis (including for non-organic soluble species).

  20. Nonuniform sampling and non-Fourier signal processing methods in multidimensional NMR.

    Science.gov (United States)

    Mobli, Mehdi; Hoch, Jeffrey C

    2014-11-01

    Beginning with the introduction of Fourier Transform NMR by Ernst and Anderson in 1966, time domain measurement of the impulse response (the free induction decay, FID) consisted of sampling the signal at a series of discrete intervals. For compatibility with the discrete Fourier transform (DFT), the intervals are kept uniform, and the Nyquist theorem dictates the largest value of the interval sufficient to avoid aliasing. With the proposal by Jeener of parametric sampling along an indirect time dimension, extension to multidimensional experiments employed the same sampling techniques used in one dimension, similarly subject to the Nyquist condition and suitable for processing via the discrete Fourier transform. The challenges of obtaining high-resolution spectral estimates from short data records using the DFT were already well understood, however. Despite techniques such as linear prediction extrapolation, the achievable resolution in the indirect dimensions is limited by practical constraints on measuring time. The advent of non-Fourier methods of spectrum analysis capable of processing nonuniformly sampled data has led to an explosion in the development of novel sampling strategies that avoid the limits on resolution and measurement time imposed by uniform sampling. The first part of this review discusses the many approaches to data sampling in multidimensional NMR, the second part highlights commonly used methods for signal processing of such data, and the review concludes with a discussion of other approaches to speeding up data acquisition in NMR. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Pierre Gy's sampling theory and sampling practice heterogeneity, sampling correctness, and statistical process control

    CERN Document Server

    Pitard, Francis F

    1993-01-01

    Pierre Gy's Sampling Theory and Sampling Practice, Second Edition is a concise, step-by-step guide for process variability management and methods. Updated and expanded, this new edition provides a comprehensive study of heterogeneity, covering the basic principles of sampling theory and its various applications. It presents many practical examples to allow readers to select appropriate sampling protocols and assess the validity of sampling protocols from others. The variability of dynamic process streams using variography is discussed to help bridge sampling theory with statistical process control. Many descriptions of good sampling devices, as well as descriptions of poor ones, are featured to educate readers on what to look for when purchasing sampling systems. The book uses its accessible, tutorial style to focus on professional selection and use of methods. The book will be a valuable guide for mineral processing engineers; metallurgists; geologists; miners; chemists; environmental scientists; and practit...
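
    The quantitative heart of Gy's theory is the fundamental sampling error: for a sample of mass M_S drawn from a lot of mass M_L, its relative variance is sigma^2 = C d^3 (1/M_S - 1/M_L). A small sketch with placeholder numbers:

```python
# Gy's fundamental sampling error (FSE), the quantitative core of the
# sampling theory covered by the book:
#   sigma^2 = C * d^3 * (1/M_S - 1/M_L)
# where C lumps Gy's shape, granulometric, mineralogical and liberation
# factors and d is the top particle size. Inputs below are placeholders.
def fse_rel_variance(C_g_cm3, d_cm, m_sample_g, m_lot_g):
    return C_g_cm3 * d_cm**3 * (1.0 / m_sample_g - 1.0 / m_lot_g)

var = fse_rel_variance(C_g_cm3=50.0, d_cm=0.5,
                       m_sample_g=500.0, m_lot_g=1.0e6)
print(f"relative standard deviation of FSE: {var**0.5:.1%}")
```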

  2. Maxima estimate of non gaussian process from observation of time history samples

    International Nuclear Information System (INIS)

    Borsoi, L.

    1987-01-01

    The problem constitutes a formidable task but is essential for industrial applications: extreme value design, fatigue analysis, etc. Even in the linear Gaussian case, the process ergodicity does not prevent the observation duration from having to be long enough to make reliable estimates. As is well known, this duration is closely related to the process autocorrelation. A subterfuge, which distorts the problem a little, consists in considering a periodic random process and in adjusting the observation duration to a complete period. In the nonlinear case, the stated problem is all the more important since time history simulation is presently the only practicable way of analysing structures. Thus it is always interesting to fit a tractable model to rough time history observations. In some cases this can be done with a Gumbel-Poisson model. The difficulty is then to make reliable estimates of the parameters involved in the model. Unfortunately, it seems that even the use of sophisticated Bayesian methods does not permit reducing the necessary observation duration as much as wanted. One of the difficulties lies in process ergodicity, which is often assumed on the basis of physical considerations but is not always rigorously stated. Another difficulty is the confusion between hidden information, which can be extracted, and missing information, which cannot. Finally, it must be recalled that the obligation to consider sufficiently long time histories is not always a handicap, given the current reduction in computing costs. (orig./HP)

  3. Representative process sampling - in practice

    DEFF Research Database (Denmark)

    Esbensen, Kim; Friis-Pedersen, Hans Henrik; Julius, Lars Petersen

    2007-01-01

    Didactic data sets representing a range of real-world processes are used to illustrate "how to do" representative process sampling and process characterisation. The selected process data lead to diverse variogram expressions with different systematics (no range vs. important ranges; trends and/or periodicity; different nugget effects and process variations ranging from less than one lag to full variogram lag). Variogram data analysis leads to a fundamental decomposition into 0-D sampling vs. 1-D process variances, based on the three principal variogram parameters: range, sill and nugget effect...

  4. Generalization bounds of ERM-based learning processes for continuous-time Markov chains.

    Science.gov (United States)

    Zhang, Chao; Tao, Dacheng

    2012-12-01

    Many existing results on statistical learning theory are based on the assumption that samples are independently and identically distributed (i.i.d.). However, the assumption of i.i.d. samples is not suitable for practical application to problems in which samples are time dependent. In this paper, we are mainly concerned with the empirical risk minimization (ERM) based learning process for time-dependent samples drawn from a continuous-time Markov chain. This learning process covers many kinds of practical applications, e.g., the prediction for a time series and the estimation of channel state information. Thus, it is significant to study its theoretical properties including the generalization bound, the asymptotic convergence, and the rate of convergence. It is noteworthy that, since samples are time dependent in this learning process, the concerns of this paper cannot (at least straightforwardly) be addressed by existing methods developed under the sample i.i.d. assumption. We first develop a deviation inequality for a sequence of time-dependent samples drawn from a continuous-time Markov chain and present a symmetrization inequality for such a sequence. By using the resultant deviation inequality and symmetrization inequality, we then obtain the generalization bounds of the ERM-based learning process for time-dependent samples drawn from a continuous-time Markov chain. Finally, based on the resultant generalization bounds, we analyze the asymptotic convergence and the rate of convergence of the learning process.

  5. Asymptotic theory for the sample covariance matrix of a heavy-tailed multivariate time series

    DEFF Research Database (Denmark)

    Davis, Richard A.; Mikosch, Thomas Valentin; Pfaffel, Olivier

    2016-01-01

    In this paper we give an asymptotic theory for the eigenvalues of the sample covariance matrix of a multivariate time series. The time series constitutes a linear process across time and between components. The input noise of the linear process has regularly varying tails with index α∈(0,4); in particular, the time series has infinite fourth moment. We derive the limiting behavior for the largest eigenvalues of the sample covariance matrix and show point process convergence of the normalized eigenvalues. The limiting process has an explicit form involving points of a Poisson process and eigenvalues of a non-negative definite matrix. Based on this convergence we derive limit theory for a host of other continuous functionals of the eigenvalues, including the joint convergence of the largest eigenvalues, the joint convergence of the largest eigenvalue and the trace of the sample covariance matrix...

  6. Radar Doppler Processing with Nonuniform Sampling.

    Energy Technology Data Exchange (ETDEWEB)

    Doerry, Armin W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-07-01

    Conventional signal processing to estimate radar Doppler frequency often assumes uniform pulse/sample spacing. This is for the convenience of the processing. More recent performance enhancements in processor capability allow optimal processing of nonuniform pulse/sample spacing, thereby overcoming some of the baggage that attends uniform sampling, such as Doppler ambiguity and SNR losses due to sidelobe control measures.
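
    With nonuniform pulse times, a least-squares spectral estimate such as the Lomb-Scargle periodogram is the classic tool; the report's exact processing chain may differ. A short illustrative sketch:

```python
import numpy as np
from scipy.signal import lombscargle

# Doppler frequency estimation from nonuniformly spaced pulses using a
# least-squares (Lomb-Scargle) periodogram. Values are illustrative.
rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 1.0, 128))     # nonuniform pulse times (s)
f_dopp = 37.0                               # Hz, "true" Doppler shift
y = np.cos(2*np.pi*f_dopp*t) + 0.3*rng.standard_normal(t.size)

freqs = np.linspace(1.0, 100.0, 2000)       # Hz search grid
pgram = lombscargle(t, y, 2*np.pi*freqs)    # expects angular frequencies
print(freqs[np.argmax(pgram)])              # ~37 Hz
```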

  7. Practical reporting times for environmental samples

    International Nuclear Information System (INIS)

    Bayne, C.K.; Schmoyer, D.D.; Jenkins, R.A.

    1993-02-01

    Preanalytical holding times for environmental samples are specified because chemical and physical characteristics may change between sampling and chemical analysis. For example, the Federal Register prescribes a preanalytical holding time of 14 days for volatile organic compounds in soil stored at 4 °C. The American Society for Testing and Materials (ASTM) uses a more technical definition: the preanalytical holding time is the day when the analyte concentration for an environmental sample falls below the lower 99% confidence interval on the analyte concentration at day zero. This study reviews various holding time definitions and suggests a new preanalytical holding time approach using acceptable error rates for measuring an environmental analyte. This practical reporting time (PRT) approach has been applied to nineteen volatile organic compounds and four explosives in three environmental soil samples. A PRT nomograph of error rates has been developed to estimate the consequences of missing a preanalytical holding time. This nomograph can be applied to a large class of analytes with concentrations that decay linearly or exponentially with time, regardless of sample matrices and storage conditions.

  8. Practical reporting times for environmental samples

    Energy Technology Data Exchange (ETDEWEB)

    Bayne, C.K.; Schmoyer, D.D.; Jenkins, R.A.

    1993-02-01

    Preanalytical holding times for environmental samples are specified because chemical and physical characteristics may change between sampling and chemical analysis. For example, the Federal Register prescribes a preanalytical holding time of 14 days for volatile organic compounds in soil stored at 4 °C. The American Society for Testing and Materials (ASTM) uses a more technical definition: the preanalytical holding time is the day when the analyte concentration for an environmental sample falls below the lower 99% confidence interval on the analyte concentration at day zero. This study reviews various holding time definitions and suggests a new preanalytical holding time approach using acceptable error rates for measuring an environmental analyte. This practical reporting time (PRT) approach has been applied to nineteen volatile organic compounds and four explosives in three environmental soil samples. A PRT nomograph of error rates has been developed to estimate the consequences of missing a preanalytical holding time. This nomograph can be applied to a large class of analytes with concentrations that decay linearly or exponentially with time, regardless of sample matrices and storage conditions.
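
    For the exponential-decay case mentioned above, the underlying holding-time calculation can be sketched in a few lines: find the holding time at which the decayed concentration crosses the lower 99% confidence bound of the day-zero measurement. The decay rate and measurement SD below are hypothetical.

```python
import numpy as np
from scipy.stats import norm

# Holding-time sketch for an exponentially decaying analyte: find the
# time at which the expected concentration crosses the lower 99%
# confidence bound of the day-zero measurement. Decay rate and
# measurement SD are hypothetical.
c0, sd = 100.0, 4.0            # day-zero concentration and its SD
k = 0.02                       # first-order decay rate, per day

lower99 = c0 - norm.ppf(0.99) * sd      # lower 99% bound at day zero
t_hold = np.log(c0 / lower99) / k       # solve c0*exp(-k*t) == lower99
print(f"{t_hold:.1f} days before decay exceeds the 99% bound")
```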

  9. Method for Hot Real-Time Sampling of Pyrolysis Vapors

    Energy Technology Data Exchange (ETDEWEB)

    Pomeroy, Marc D [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-29

    Biomass Pyrolysis has been an increasing topic of research, in particular as a replacement for crude oil. This process utilizes moderate temperatures to thermally deconstruct the biomass which is then condensed into a mixture of liquid oxygenates to be used as fuel precursors. Pyrolysis oils contain more than 400 compounds, up to 60 percent of which do not re-volatilize for subsequent chemical analysis. Vapor chemical composition is also complicated as additional condensation reactions occur during the condensation and collection of the product. Due to the complexity of the pyrolysis oil, and a desire to catalytically upgrade the vapor composition before condensation, online real-time analytical techniques such as Molecular Beam Mass Spectrometry (MBMS) are of great use. However, in order to properly sample hot pyrolysis vapors, many challenges must be overcome. Sampling must occur within a narrow range of temperatures to reduce product composition changes from overheating or partial condensation or plugging of lines from condensed products. Residence times must be kept at a minimum to reduce further reaction chemistries. Pyrolysis vapors also form aerosols that are carried far downstream and can pass through filters resulting in build-up in downstream locations. The co-produced bio-char and ash from the pyrolysis process can lead to plugging of the sample lines, and must be filtered out at temperature, even with the use of cyclonic separators. A practical approach for considerations and sampling system design, as well as lessons learned are integrated into the hot analytical sampling system of the National Renewable Energy Laboratory's (NREL) Thermochemical Process Development Unit (TCPDU) to provide industrially relevant demonstrations of thermochemical transformations of biomass feedstocks at the pilot scale.

  10. Sampling Strategies and Processing of Biobank Tissue Samples from Porcine Biomedical Models.

    Science.gov (United States)

    Blutke, Andreas; Wanke, Rüdiger

    2018-03-06

    In translational medical research, porcine models have steadily become more popular. Considering the high value of individual animals, particularly of genetically modified pig models, and the often-limited number of available animals of these models, establishment of (biobank) collections of adequately processed tissue samples suited for a broad spectrum of subsequent analysis methods, including analyses not specified at the time point of sampling, represents a meaningful approach to take full advantage of the translational value of the model. With respect to the peculiarities of porcine anatomy, comprehensive guidelines have recently been established for standardized generation of representative, high-quality samples from different porcine organs and tissues. These guidelines are essential prerequisites for the reproducibility of results and their comparability between different studies and investigators. The recording of basic data, such as organ weights and volumes, the determination of the sampling locations and of the numbers of tissue samples to be generated, as well as their orientation, size, processing and trimming directions, are relevant factors determining the generalizability and usability of the specimen for molecular, qualitative, and quantitative morphological analyses. Here, an illustrative, practical, step-by-step demonstration of the most important techniques for generation of representative, multi-purpose biobank specimens from porcine tissues is presented. The methods described here include determination of organ/tissue volumes and densities, the application of a volume-weighted systematic random sampling procedure for parenchymal organs by point-counting, determination of the extent of tissue shrinkage related to histological embedding of samples, and generation of randomly oriented samples for quantitative stereological analyses, such as isotropic uniform random (IUR) sections generated by the "Orientator" and "Isector" methods, and vertical
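
    Two of the steps named above have compact textbook formulas: the Cavalieri point-counting volume estimate and systematic random positioning of slabs. The counts and spacings in this sketch are toy values, not data from the protocol.

```python
import numpy as np

# (1) Cavalieri volume estimate from point counts on equidistant slabs,
# (2) volume-weighted systematic random sampling of slab positions.
# Textbook stereology formulas; all numbers are toy values.
rng = np.random.default_rng(5)

def cavalieri_volume(points_per_slab, slab_spacing_cm, area_per_point_cm2):
    """V = t * a_p * sum(P): slab spacing t, area per grid point a_p."""
    return slab_spacing_cm * area_per_point_cm2 * np.sum(points_per_slab)

counts = [12, 30, 41, 38, 22, 9]        # grid hits per slab (toy data)
print(cavalieri_volume(counts, 0.5, 0.25), "cm^3")

def systematic_random_positions(n_slabs, k):
    """Every k-th slab with a random start: systematic random sampling."""
    start = rng.integers(k)
    return np.arange(start, n_slabs, k)

print(systematic_random_positions(n_slabs=24, k=4))
```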

  11. Method for sampling and analysis of volatile biomarkers in process gas from aerobic digestion of poultry carcasses using time-weighted average SPME and GC-MS.

    Science.gov (United States)

    Koziel, Jacek A; Nguyen, Lam T; Glanville, Thomas D; Ahn, Heekwon; Frana, Timothy S; Hans van Leeuwen, J

    2017-10-01

    A passive sampling method, using retracted solid-phase microextraction (SPME) - gas chromatography-mass spectrometry and time-weighted averaging, was developed and validated for tracking marker volatile organic compounds (VOCs) emitted during aerobic digestion of biohazardous animal tissue. The retracted SPME configuration protects the fragile fiber from buffeting by the process gas stream, and it requires less equipment and is potentially more biosecure than conventional active sampling methods. VOC concentrations predicted via a model based on Fick's first law of diffusion were within 6.6-12.3% of experimentally controlled values after accounting for VOC adsorption to the SPME fiber housing. Method detection limits for five marker VOCs ranged from 0.70 to 8.44 ppbv and were statistically equivalent (p > 0.05) to those for active sorbent-tube-based sampling. A sampling time of 30 min and a fiber retraction of 5 mm were found to be optimal for the tissue digestion process. Copyright © 2017 Elsevier Ltd. All rights reserved.
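
    The Fick's-first-law model mentioned above reduces to a one-line estimate of the time-weighted average concentration from the mass loaded on the fiber, C = n Z / (D A t), with diffusion path length Z (the retraction depth), opening area A and diffusion coefficient D. The numbers here are illustrative, not the paper's calibration.

```python
# One-line Fick's-first-law estimate of the time-weighted average (TWA)
# concentration from the mass n loaded on the retracted fiber:
#   C = n * Z / (D * A * t)
# Z = diffusion path (retraction depth), A = needle opening area,
# D = analyte diffusion coefficient. All numbers are illustrative.
def twa_concentration(n_ng, Z_cm, D_cm2_s, A_cm2, t_s):
    return n_ng * Z_cm / (D_cm2_s * A_cm2 * t_s)      # ng per cm^3

c = twa_concentration(n_ng=2.5,          # mass found on the fiber
                      Z_cm=0.5,          # 5 mm retraction depth
                      D_cm2_s=0.08,      # typical small-VOC D in air
                      A_cm2=8.0e-4,      # hypothetical opening area
                      t_s=30 * 60)       # 30 min sampling
print(f"{c:.2f} ng/cm^3")
```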

  12. Capillary absorption spectrometer and process for isotopic analysis of small samples

    Energy Technology Data Exchange (ETDEWEB)

    Alexander, M. Lizabeth; Kelly, James F.; Sams, Robert L.; Moran, James J.; Newburn, Matthew K.; Blake, Thomas A.

    2018-04-24

    A capillary absorption spectrometer and process are described that provide highly sensitive and accurate stable-isotope absorption measurements of analytes in a sample gas, which may include isotopologues of carbon and oxygen obtained from gas and biological samples. The system further provides isotopic images of microbial communities that allow tracking of nutrients at the single-cell level. It targets naturally occurring variations in carbon and oxygen isotopes, which avoids the need for expensive isotopically labeled mixtures and allows study of samples taken from the field without modification. The process also permits sampling in vivo, permitting real-time ambient studies of microbial communities.

  13. Algae viability over time in a ballast water sample

    Science.gov (United States)

    Gollasch, Stephan; David, Matej

    2018-03-01

    The biology of vessels' ballast water needs to be analysed for several reasons, one of these being performance tests of ballast water management systems. This analysis includes a viability assessment of phytoplankton. To overcome the logistical problems of getting algae sample processing gear on board a vessel to document algae viability, samples may be transported to land-based laboratories. Concerns were raised about how the storage conditions of the sample may impact algae viability over time and what the most appropriate storage conditions are. Here we answer these questions with a long-term algae viability study with daily sample analysis using Pulse-Amplitude Modulated (PAM) fluorometry. The sample was analysed over 79 days. We tested different storage conditions: fridge and room temperature, with and without light. During the first two weeks of the experiment the viability remained almost unchanged, with a slight downward trend. In the following period, before the sample was split, a slightly stronger downward viability trend was observed, which continued at a similar rate towards the end of the experiment. After the sample was split, the strongest viability reduction was measured for the sample stored without light at room temperature. We conclude that the storage conditions, especially temperature and light exposure, have a stronger impact on algae viability than the storage duration, and that inappropriate storage conditions reduce algal viability. A sample storage time of up to two weeks in a dark and cool environment has little influence on organism viability. This indicates that a two-week interval between sample taking on board a vessel and the viability measurement in a land-based laboratory may not be critical.

  14. Effects of holding time and measurement error on culturing Legionella in environmental water samples.

    Science.gov (United States)

    Flanders, W Dana; Kirkland, Kimberly H; Shelton, Brian G

    2014-10-01

    Outbreaks of Legionnaires' disease require environmental testing of water samples from potentially implicated building water systems to identify the source of exposure. A previous study reports a large impact on Legionella sample results due to shipping and delays in sample processing. Specifically, this same study, without accounting for measurement error, reports that more than half of shipped samples tested had Legionella levels that arbitrarily changed up or down by one or more logs, and the authors attribute this result to shipping time. Accordingly, we conducted a study to determine the effects of sample holding/shipping time on Legionella sample results while taking into account measurement error, which has previously not been addressed. We analyzed 159 samples, each split into 16 aliquots, of which one-half (8) were processed promptly after collection. The remaining half (8) were processed the following day to assess the impact of holding/shipping time. A total of 2544 samples were analyzed including replicates. After accounting for inherent measurement error, we found that the effect of holding time on observed Legionella counts was small and should have no practical impact on interpretation of results. Holding samples increased the root mean squared error by only about 3-8%. Notably, for only one of 159 samples did the average of the 8 replicate counts change by 1 log. Thus, our findings do not support the hypothesis of frequent, significant (≥1 log10 unit) Legionella colony count changes due to holding. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  15. A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids

    Energy Technology Data Exchange (ETDEWEB)

    Berres, Anne Sabine [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Adhinarayanan, Vignesh [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Turton, Terece [Univ. of Texas, Austin, TX (United States); Feng, Wu [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Rogers, David Honegger [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-05-12

    Large simulation data requires a lot of time and computational resources to compute, store, analyze, visualize, and run user studies on. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance energy consumption and the cognitive value of visualizations of the resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce the amount of resources, data can be sampled or compressed. While this adds more computation time, the computational overhead is negligible compared to the simulation time. We built a processing pipeline using regular sampling as an example. The reasons for this choice are two-fold: using a simple example reduces unnecessary complexity, as we know what to expect from the results; furthermore, it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test we ran, and we conducted user studies on Amazon Mechanical Turk (AMT) for a range of different results we produced through sampling.
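
    As a concrete stand-in for the pipeline's sampling stage, scattered unstructured-grid values can be resampled onto a regular grid; the grid size and field below are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import griddata

# Stand-in for the pipeline's sampling stage: values on an unstructured
# set of nodes are resampled onto a regular grid, trading accuracy for
# a compact, regularly strided representation. Sizes are illustrative.
rng = np.random.default_rng(4)
pts = rng.uniform(0, 1, size=(5000, 2))           # unstructured nodes
vals = np.sin(6*pts[:, 0]) * np.cos(6*pts[:, 1])  # field at the nodes

nx = ny = 64                                      # regular resolution
gx, gy = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))
sampled = griddata(pts, vals, (gx, gy), method='linear')
print(sampled.shape, np.isnan(sampled).sum())     # NaNs only near hull
```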

  16. Continuous time modelling with individually varying time intervals for oscillating and non-oscillating processes.

    Science.gov (United States)

    Voelkle, Manuel C; Oud, Johan H L

    2013-02-01

    When designing longitudinal studies, researchers often aim at equal intervals. In practice, however, this goal is hardly ever met, with different time intervals between assessment waves and different time intervals between individuals being more the rule than the exception. One of the reasons for the introduction of continuous time models by means of structural equation modelling has been to deal with irregularly spaced assessment waves (e.g., Oud & Delsing, 2010). In the present paper we extend the approach to individually varying time intervals for oscillating and non-oscillating processes. In addition, we show not only that equal intervals are unnecessary but also that it can be advantageous to use unequal sampling intervals, in particular when the sampling rate is low. Two examples are provided to support our arguments. In the first example we compare a continuous time model of a bivariate coupled process with varying time intervals to a standard discrete time model to illustrate the importance of accounting for the exact time intervals. In the second example the effect of different sampling intervals on estimating a damped linear oscillator is investigated by means of a Monte Carlo simulation. We conclude that it is important to account for individually varying time intervals, and encourage researchers to conceive of longitudinal studies with different time intervals within and between individuals as an opportunity rather than a problem. © 2012 The British Psychological Society.
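
    The computational core of the continuous-time approach is that the discrete autoregressive matrix for an interval Δt is the matrix exponential exp(A·Δt) of a single drift matrix A, so unequal intervals pose no special problem. A deterministic-part sketch with an assumed drift matrix (process and measurement noise omitted):

```python
import numpy as np
from scipy.linalg import expm

# Continuous-time modelling with unequal intervals: the discrete
# autoregressive matrix for an interval dt is exp(A*dt) for a single
# continuous drift matrix A, so varying intervals need no special
# handling. Drift values are illustrative; noise terms are omitted.
A = np.array([[-0.8,  0.3],
              [ 0.2, -0.5]])            # drift of a coupled process

x = np.array([1.0, 0.0])                # state at the first wave
for dt in [0.5, 1.25, 0.8]:             # unequal times between waves
    x = expm(A * dt) @ x                # exact discrete-time update
    print(dt, np.round(x, 4))
```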

  17. Method for Hot Real-Time Sampling of Gasification Products

    Energy Technology Data Exchange (ETDEWEB)

    Pomeroy, Marc D [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-29

    The Thermochemical Process Development Unit (TCPDU) at the National Renewable Energy Laboratory (NREL) is a highly instrumented half-ton/day pilot-scale plant capable of demonstrating industrially relevant thermochemical technologies for lignocellulosic biomass conversion, including gasification. Gasification creates primarily syngas (a mixture of hydrogen and carbon monoxide) that can be utilized with synthesis catalysts to form transportation fuels and other valuable chemicals. Biomass-derived gasification products are a very complex mixture of chemical components that typically contain sulfur and nitrogen species that can act as catalyst poisons for tar reforming and synthesis catalysts. Hot online sampling techniques, such as Molecular Beam Mass Spectrometry (MBMS), and gas chromatographs with sulfur- and nitrogen-specific detectors can provide real-time analysis, providing operational indicators of performance. Sampling typically requires coated sampling lines to minimize trace sulfur interactions with steel surfaces. Other materials used inline have also shown conversion of sulfur species into new components and must be minimized. Residence time within the sampling lines must also be kept to a minimum to reduce further reaction chemistries. Solids from ash and char contribute to plugging and must be filtered out at temperature. Experience at NREL has shown several key factors to consider when designing and installing an analytical sampling system for biomass gasification products. They include minimizing sampling distance, effective filtering as close to the source as possible, proper line sizing, proper line materials or coatings, even heating of all components, minimizing pressure drops, and additional filtering or traps after pressure drops.

  18. Advancement of Solidification Processing Technology Through Real Time X-Ray Transmission Microscopy: Sample Preparation

    Science.gov (United States)

    Stefanescu, D. M.; Curreri, P. A.

    1996-01-01

    Two types of samples were prepared for real-time X-ray transmission microscopy (XTM) characterization. In the first series, directional solidification experiments were carried out to evaluate the critical velocity of engulfment of zirconia particles in Al and Al-Ni eutectic matrices under ground (1-g) conditions. The particle distribution in the samples was recorded on video before and after the samples were directionally solidified. In the second series, samples of the above two types of composites were prepared for directional solidification runs to be carried out on the Advanced Gradient Heating Facility (AGHF) aboard the space shuttle during the LMS mission in June 1996. X-ray microscopy proved to be an invaluable tool for characterizing the particle distribution in the metal matrix samples. This kind of analysis helped in accurately determining the critical velocity of engulfment of ceramic particles by the melt interface in the opaque metal matrix composites. The quality of the cast samples with respect to porosity and instrumented thermocouple sheath breakage or shift could be easily viewed, and thus helped in selecting samples for the space shuttle experiments. Summarizing the merits of this technique, it can be stated that it enabled the use of cast metal matrix composite samples, since the particle location was known prior to the experiment.

  19. Challenging genosensors in food samples: The case of gluten determination in highly processed samples.

    Science.gov (United States)

    Martín-Fernández, Begoña; de-los-Santos-Álvarez, Noemí; Martín-Clemente, Juan Pedro; Lobo-Castañón, María Jesús; López-Ruiz, Beatriz

    2016-01-01

    Electrochemical genosensors have undergone enormous development in the last decades, but only very few have achieved quantification of target content in highly processed food samples. The detection of allergens, and particularly gluten, is challenging because legislation establishes a threshold of 20 ppm for labeling as gluten-free, but most genosensors express the results in DNA concentration or DNA copies. This paper describes the first attempt to correlate the genosensor response and the wheat content in real samples, even in the case of highly processed food samples. A sandwich-based format, comprising a capture probe immobilized onto the screen-printed gold electrode and a signaling probe functionalized with fluorescein isothiocyanate (FITC), both hybridizing with the target, was used. The hybridization event was electrochemically monitored by adding an anti-FITC peroxidase (antiFITC-HRP) and its substrate, tetramethylbenzidine. Binary model mixtures, as a reference material, and real samples have been analyzed. DNA from food was extracted and a fragment encoding the immunodominant peptide of α2-gliadin amplified by a tailored PCR. The sensor was able to selectively detect cereals toxic for celiac patients, such as different varieties of wheat, barley, rye and oats, distinguishing them from non-toxic plants. As low as 0.001% (10 mg/kg) of wheat flour in an inert matrix was reliably detected, which directly competes with the current method of choice for DNA detection, real-time PCR. A good correlation with the official immunoassay was found in highly processed food samples. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Low-sampling-rate ultra-wideband digital receiver using equivalent-time sampling

    KAUST Repository

    Ballal, Tarig

    2014-09-01

    In this paper, we propose an all-digital scheme for ultra-wideband symbol detection. In the proposed scheme, the received symbols are sampled many times below the Nyquist rate. It is shown that when the number of symbol repetitions, P, is co-prime with the symbol duration given in Nyquist samples, the receiver can sample the received data P times below the Nyquist rate, without loss of fidelity. The proposed scheme is applied to perform channel estimation and binary pulse position modulation (BPPM) detection. Results are presented for two receivers operating at two different sampling rates that are 10 and 20 times below the Nyquist rate. The feasibility of the proposed scheme is demonstrated in different scenarios, with reasonable bit error rates obtained in most of the cases.
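
    The co-primality condition can be checked numerically in a few lines: with gcd(P, N) = 1, taking every P-th Nyquist sample across P symbol repetitions visits every Nyquist phase exactly once, so the symbol is recovered without loss. The waveform below is a toy stand-in for the UWB signal.

```python
import numpy as np

# Co-primality argument for equivalent-time sampling: if a symbol of N
# Nyquist samples repeats P times and gcd(P, N) = 1, sampling P-fold
# below Nyquist visits every Nyquist phase exactly once.
N, P = 16, 5                        # symbol length (Nyquist samples), repeats
assert np.gcd(P, N) == 1

symbol = np.sin(2*np.pi*np.arange(N)/N)      # toy received waveform
stream = np.tile(symbol, P)                  # P repetitions of the symbol

slow = stream[::P]                           # sample P times below Nyquist
phases = (np.arange(N) * P) % N              # Nyquist index of each sample
reconstructed = np.empty(N)
reconstructed[phases] = slow                 # reorder into one full symbol

print(np.allclose(reconstructed, symbol))    # True: no loss of fidelity
```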

  2. Detectability of Granger causality for subsampled continuous-time neurophysiological processes.

    Science.gov (United States)

    Barnett, Lionel; Seth, Anil K

    2017-01-01

    Granger causality is well established within the neurosciences for inference of directed functional connectivity from neurophysiological data. These data usually consist of time series which subsample a continuous-time biophysiological process. While it is well known that subsampling can lead to imputation of spurious causal connections where none exist, less is known about the effects of subsampling on the ability to reliably detect causal connections which do exist. We present a theoretical analysis of the effects of subsampling on Granger-causal inference. Neurophysiological processes typically feature signal propagation delays on multiple time scales; accordingly, we base our analysis on a distributed-lag, continuous-time stochastic model, and consider Granger causality in continuous time at finite prediction horizons. Via exact analytical solutions, we identify relationships among sampling frequency, underlying causal time scales and detectability of causalities. We reveal complex interactions between the time scale(s) of neural signal propagation and sampling frequency. We demonstrate that detectability decays exponentially as the sample time interval increases beyond causal delay times, identify detectability "black spots" and "sweet spots", and show that downsampling may potentially improve detectability. We also demonstrate that the invariance of Granger causality under causal, invertible filtering fails at finite prediction horizons, with particular implications for inference of Granger causality from fMRI data. Our analysis emphasises that sampling rates for causal analysis of neurophysiological time series should be informed by domain-specific time scales, and that state-space modelling should be preferred to purely autoregressive modelling. On the basis of a very general model that captures the structure of neurophysiological processes, we are able to help identify confounds, and offer practical insights, for successful detection of causal connectivity

  3. Parallel processing method for high-speed real time digital pulse processing for gamma-ray spectroscopy

    International Nuclear Information System (INIS)

    Fernandes, A.M.; Pereira, R.C.; Sousa, J.; Neto, A.; Carvalho, P.; Batista, A.J.N.; Carvalho, B.B.; Varandas, C.A.F.; Tardocchi, M.; Gorini, G.

    2010-01-01

    A new data acquisition (DAQ) system was developed to fulfil the requirements of the gamma-ray spectrometer (GRS) JET-EP2 (Joint European Torus enhancement project 2), providing high-resolution spectroscopy at very high count rates (up to a few MHz). The system is based on the Advanced Telecommunications Computing Architecture (ATCA) and includes a transient record (TR) module with 8 channels of 14-bit resolution at a 400 MSamples/s (MSPS) sampling rate, 4 GB of local memory, and 2 field programmable gate arrays (FPGAs) able to perform real-time algorithms for data reduction and digital pulse processing. Although at 400 MSPS only fast programmable devices such as FPGAs can be used for data processing and data transfer, FPGA resources also present speed limitations at some specific tasks, leading to unavoidable data loss when demanding algorithms are applied. To overcome this problem, and foreseeing an increase in algorithm complexity, a new digital parallel filter was developed, aiming to perform real-time pulse processing in the FPGAs of the TR module at the stated sampling rate. The filter is based on the conventional digital time-invariant trapezoidal shaper operating on parallelized data while performing pulse height analysis (PHA) and pile-up rejection (PUR). The incoming sampled data are successively parallelized and fed into the processing algorithm block at one fourth of the sampling rate; the subsequent data processing and data transfer are also performed at one fourth of the sampling rate. The algorithm based on this data parallelization technique was implemented and tested at the JET facilities, where a spectrum was obtained. In light of the observed results, the PHA algorithm will be improved by implementing pulse pile-up discrimination.
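
    To make the shaping step concrete, here is a minimal serial sketch of a time-invariant trapezoidal shaper of the kind the abstract builds on: a pole-zero deconvolution turns each exponentially decaying pulse into a step, and a difference of moving averages turns each step into a trapezoid whose flat top gives the pulse height. All parameters (rise length k, flat-top m, decay constant tau) are illustrative, and the FPGA implementation would run the same filter on four parallelized sample streams at a quarter of the sampling rate.

```python
import numpy as np

def trapezoidal_shaper(x, k, m, tau):
    """Time-invariant trapezoidal shaper. Pole-zero deconvolution turns each
    exponential pulse (decay constant tau, in samples) into a step; the
    difference of two length-k moving averages separated by a gap of m
    samples shapes each step into a trapezoid whose flat top equals the
    pulse amplitude."""
    p = np.exp(-1.0 / tau)
    step = np.cumsum(x - p * np.concatenate(([0.0], x[:-1])))
    kern = np.concatenate((np.ones(k), np.zeros(m), -np.ones(k))) / k
    return np.convolve(step, kern)[: len(x)]

# Synthetic record with well-separated pulses of known amplitude.
rng = np.random.default_rng(0)
n, tau = 4096, 200.0
t0s = [500, 1200, 1900, 2600, 3300]
amps = rng.uniform(0.5, 1.0, len(t0s))
x = np.zeros(n)
for a, t0 in zip(amps, t0s):
    x[t0:] += a * np.exp(-np.arange(n - t0) / tau)

y = trapezoidal_shaper(x, k=64, m=32, tau=tau)
for a, t0 in zip(amps, t0s):
    print(f"true amplitude {a:.3f} -> shaped flat top {y[t0:t0+160].max():.3f}")
```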

  4. Automatic sample changer control software for automation of neutron activation analysis process in Malaysian Nuclear Agency

    Science.gov (United States)

    Yussup, N.; Ibrahim, M. M.; Rahman, N. A. A.; Mokhtar, M.; Salim, N. A. A.; Soh@Shaari, S. C.; Azman, A.; Lombigit, L.; Azman, A.; Omar, S. A.

    2018-01-01

    Most of the procedures in the neutron activation analysis (NAA) process that have been established at the Malaysian Nuclear Agency (Nuclear Malaysia) since the 1980s were performed manually. These manual procedures, carried out by the NAA laboratory personnel, are time consuming and inefficient, especially the sample counting and measurement processes: the sample needs to be changed and the measurement software set up for every one-hour counting period, and both of these procedures are performed manually for every sample. Hence, an automatic sample changer (ASC) system consisting of hardware and software was developed to automate the sample counting process for up to 30 samples consecutively. This paper describes the ASC control software for the NAA process, which is designed to control the ASC hardware and to call the GammaVision software for sample measurement. The software is developed using the National Instruments LabVIEW development package.

  5. Detecting chaos in irregularly sampled time series.

    Science.gov (United States)

    Kulp, C W

    2013-09-01

    Recently, Wiebe and Virgin [Chaos 22, 013136 (2012)] developed an algorithm which detects chaos by analyzing a time series' power spectrum, computed using the Discrete Fourier Transform (DFT). Their algorithm, like other time series characterization algorithms, requires that the time series be regularly sampled. Real-world data, however, are often irregularly sampled, thus making the detection of chaotic behavior difficult or impossible with those methods. In this paper, a characterization algorithm is presented which effectively detects chaos in irregularly sampled time series. The work presented here is a modification of Wiebe and Virgin's algorithm that uses the Lomb-Scargle Periodogram (LSP) instead of the DFT to compute a series' power spectrum. The DFT is not appropriate for irregularly sampled time series, whereas the LSP is capable of computing the frequency content of irregularly sampled data. Furthermore, a new method of analyzing the power spectrum is developed, which can be useful for differentiating between chaotic and non-chaotic behavior. The new characterization algorithm is successfully applied to irregularly sampled data generated by a model as well as data consisting of observations of variable stars.
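
    A minimal sketch of the core idea using SciPy's lombscargle; the toy signal, frequency grid, and the broadband-versus-peaked heuristic are all illustrative, and the paper's actual characterization statistic is more elaborate:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)

# Irregular observation times and a toy series standing in for real data.
t = np.sort(rng.uniform(0.0, 100.0, 500))
y = np.sin(2 * np.pi * 0.5 * t) + 0.3 * rng.standard_normal(t.size)
y -= y.mean()                        # lombscargle expects a zero-mean series

# Angular-frequency grid on which to evaluate the periodogram.
freqs = np.linspace(0.05, 4 * np.pi, 2000)
power = lombscargle(t, y, freqs)

# Heuristic from the spectrum's character: sharp isolated peaks suggest
# (quasi-)periodicity, while a broadband 'grassy' floor suggests chaos.
print("peak-to-mean power ratio:", power.max() / power.mean())
```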

  6. Studies in astronomical time series analysis: Modeling random processes in the time domain

    Science.gov (United States)

    Scargle, J. D.

    1979-01-01

    Random process models formulated in the time domain are used to analyze astrophysical time series data produced by random processes. A moving average (MA) model represents the data as a sequence of pulses occurring randomly in time, with random amplitudes. An autoregressive (AR) model represents the correlations in the process in terms of a linear function of past values. The best AR model is determined from sampled data and transformed to an MA model for interpretation. The randomness of the pulse amplitudes is maximized by a FORTRAN algorithm which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the optical light curve of the quasar 3C 273 is given.
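
    The AR-fit-then-transform step can be illustrated in a few lines: estimate AR coefficients from sample autocovariances (Yule-Walker) and read off the equivalent MA pulse shape as the impulse response of the fitted AR filter. The AR(2) test series and all parameter values are made up for the demonstration; the paper's FORTRAN amplitude-randomization step is not reproduced.

```python
import numpy as np

def yule_walker(x, p):
    """AR(p) coefficients from sample autocovariances (Yule-Walker)."""
    x = x - x.mean()
    r = np.array([x[: len(x) - k] @ x[k:] for k in range(p + 1)]) / len(x)
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1 : p + 1])

def ar_to_ma(a, n_terms=8):
    """Impulse response of the fitted AR filter = equivalent MA pulse shape."""
    psi = np.zeros(n_terms)
    psi[0] = 1.0
    for j in range(1, n_terms):
        psi[j] = sum(ai * psi[j - i] for i, ai in enumerate(a, start=1) if j >= i)
    return psi

# Synthetic 'light curve': an AR(2) process standing in for real data.
rng = np.random.default_rng(2)
x = np.zeros(2000)
for n in range(2, x.size):
    x[n] = 0.75 * x[n - 1] - 0.5 * x[n - 2] + rng.standard_normal()

a_hat = yule_walker(x, p=2)
print("estimated AR coefficients:", np.round(a_hat, 3))   # ~ [0.75, -0.5]
print("equivalent MA pulse shape:", np.round(ar_to_ma(a_hat), 3))
```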

  7. Study on infrasonic characteristics of coal samples in failure process under uniaxial loading

    Directory of Open Access Journals (Sweden)

    Bing Jia

    Full Text Available To study the infrasonic precursors of coal sample failure, a coal rock stress loading system and an infrasonic wave acquisition system were employed, and infrasonic tests during uniaxial loading were performed on coal samples from the studied area. Wavelet filtering, the fast Fourier transform, and relative infrasonic energy methods were used to analyze the characteristics of the infrasonic waves generated during loading, including their time-domain characteristics and relative energy. The analysis demonstrated that the frequencies of the infrasonic signals generated during loading are mainly distributed within 5-10 Hz, which is significantly different from noise signals. The changes of the infrasonic signals show clear periodic characteristics in the time domain. Meanwhile, the relative energy changes of the infrasonic waves also show periodic characteristics; they are divided into two stages by the yield limit of the coal samples and are clear and easy to recognize, so they can be used as precursory characteristics for recognizing coal sample failure. Moreover, the infrasonic waves generated by coal samples have low frequency and low attenuation, can be collected without coupling, and can be transmitted over long distances. This study provides important support for the further in-situ prediction of coal rock failures. Keywords: Infrasound, Relative energy, Time-frequency analysis, Failure prediction, Identification feature
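
    A sketch of the band-limited relative-energy computation described above, assuming a plain Butterworth band-pass for the 5-10 Hz infrasonic band and fixed one-second windows (the paper uses wavelet filtering, and the synthetic record here merely stands in for a real loading test):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def relative_infrasound_energy(sig, fs, band=(5.0, 10.0), win_s=1.0):
    """Band-pass to the 5-10 Hz infrasonic band, then return the energy of
    each fixed-length window normalised by the total band-limited energy."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    f = filtfilt(b, a, sig)
    win = int(win_s * fs)
    e = np.array([np.sum(f[i:i + win] ** 2)
                  for i in range(0, len(f) - win + 1, win)])
    return e / e.sum()

# Synthetic loading record: a 7 Hz component growing toward 'failure'
# buried in white noise (sampling rate and amplitudes are arbitrary).
fs = 100.0
t = np.arange(0.0, 120.0, 1.0 / fs)
sig = 0.5 * np.random.default_rng(3).standard_normal(t.size)
sig += (t / t.max()) * np.sin(2 * np.pi * 7.0 * t)

energy = relative_infrasound_energy(sig, fs)
print("relative energy, first vs last window:", energy[0], energy[-1])
```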

  8. Sample processing device and method

    DEFF Research Database (Denmark)

    2011-01-01

    A sample processing device is disclosed, which sample processing device comprises a first substrate and a second substrate, where the first substrate has a first surface comprising two area types, a first area type with a first contact angle with water and a second area type with a second contact...... angle with water, the first contact angle being smaller than the second contact angle. The first substrate defines an inlet system and a preparation system in areas of the first type which two areas are separated by a barrier system in an area of the second type. The inlet system is adapted to receive...

  9. Sample processing method for the determination of perchlorate in milk

    International Nuclear Information System (INIS)

    Dyke, Jason V.; Kirk, Andrea B.; Kalyani Martinelango, P.; Dasgupta, Purnendu K.

    2006-01-01

    In recent years, many different water sources and foods have been reported to contain perchlorate. Studies indicate that significant levels of perchlorate are present in both human and dairy milk. The determination of perchlorate in milk is particularly important due to its potential health impact on infants and children. As for many other biological samples, sample preparation is more time consuming than the analysis itself. The concurrent presence of large amounts of fats, proteins, carbohydrates, etc., demands some initial cleanup; otherwise the separation column lifetime and the limit of detection are both greatly compromised. Reported milk processing methods require the addition of chemicals such as ethanol, acetic acid or acetonitrile. Reagent addition is undesirable in trace analysis. We report here an essentially reagent-free sample preparation method for the determination of perchlorate in milk. Milk samples are spiked with isotopically labeled perchlorate and centrifuged to remove lipids. The resulting liquid is placed in a disposable centrifugal ultrafilter device with a molecular weight cutoff of 10 kDa, and centrifuged. Approximately 5-10 ml of clear liquid, ready for analysis, is obtained from a 20 ml milk sample. Both bovine and human milk samples have been successfully processed and analyzed by ion chromatography-mass spectrometry (IC-MS). Standard addition experiments show good recoveries. The repeatability of the analytical result for the same sample in multiple sample cleanup runs ranged from 3 to 6% R.S.D. This processing technique has also been successfully applied for the determination of iodide and thiocyanate in milk

  10. Robotic system for process sampling

    International Nuclear Information System (INIS)

    Dyches, G.M.

    1985-01-01

    A three-axis Cartesian geometry robot for process sampling was developed at the Savannah River Laboratory (SRL) and implemented in one of the site's radioisotope separations facilities. Use of the robot reduces personnel radiation exposure and contamination potential by routinely handling sample containers under operator control in a low-level radiation area. This robot represents the initial phase of a longer-term development program to use robotics for further sample automation. The preliminary design of a second-generation robot with additional capabilities is also described. 8 figs

  11. Rapid DNA analysis for automated processing and interpretation of low DNA content samples.

    Science.gov (United States)

    Turingan, Rosemary S; Vasantgadkar, Sameer; Palombo, Luke; Hogan, Catherine; Jiang, Hua; Tan, Eugene; Selden, Richard F

    2016-01-01

    Casework samples with low DNA content that are subjected to short tandem repeat (STR) analysis include those resulting from the transfer of epithelial cells from the skin to an object (e.g., cells on a water bottle or the brim of a cap), blood spatter stains, and small bone and tissue fragments. Low DNA content (LDC) samples are important in a wide range of settings, including disaster response teams to assist in victim identification and family reunification, military operations to identify friend or foe, criminal forensics to identify suspects and exonerate the innocent, and medical examiner and coroner offices to identify missing persons. Processing LDC samples requires experienced laboratory personnel, isolated workstations, and sophisticated equipment; involves complex procedures; and adds transport time. We present a rapid DNA analysis system designed specifically to generate STR profiles from LDC samples in field-forward settings by non-technical operators. By performing STR analysis in the field, close to the site of collection, rapid DNA analysis has the potential to increase throughput and to provide actionable information in real time. A Low DNA Content BioChipSet (LDC BCS) was developed and manufactured by injection molding. It was designed to function in the fully integrated Accelerated Nuclear DNA Equipment (ANDE) instrument previously designed for analysis of buccal swabs and other high DNA content samples (Investigative Genet. 4(1):1-15, 2013). The LDC BCS performs efficient DNA purification followed by microfluidic ultrafiltration of the purified DNA, maximizing the quantity of DNA available for subsequent amplification and electrophoretic separation and detection of amplified fragments. The system demonstrates accuracy, precision, resolution, signal strength, and peak height ratios appropriate for casework analysis. The LDC rapid DNA analysis system is effective for the generation of STR profiles from a wide range of sample types. The technology broadens the range of sample

  12. Static sampling of dynamic processes - a paradox?

    Science.gov (United States)

    Mälicke, Mirko; Neuper, Malte; Jackisch, Conrad; Hassler, Sibylle; Zehe, Erwin

    2017-04-01

    Environmental systems monitoring aims, at its core, at the detection of spatio-temporal patterns of processes and system states, which is a prerequisite for understanding and explaining their baffling heterogeneity. Most observation networks rely on distributed point sampling of the states and fluxes of interest, combined with proxy variables from either remote sensing or near-surface geophysics. The cardinal question of the appropriate experimental design of such a monitoring network has up to now been answered in many different ways. Suggested approaches range from sampling in a dense regular grid (using, for instance, the so-called green machine), transects along typical catenas, and clustering of several observation sensors in presumed functional units or HRUs, to arrangements of those clusters along presumed lateral flow paths and, last but not least, nested, randomized stratified arrangements of sensors or samples. Common to all these approaches is that they provide a rather static spatial sampling, while state variables and their spatial covariance structure change dynamically in time. It is hence of key interest how much of our still incomplete understanding stems from inappropriate sampling and how much needs to be attributed to an inappropriate analysis of spatial data sets. We suggest that it is much more promising to analyze the spatial variability of processes, for instance changes in soil moisture values, than to investigate the spatial variability of soil moisture states themselves. This is because wetting of the soil, reflected in a soil moisture increase, is caused by a totally different meteorological driver (rainfall) than drying of the soil. We hence propose that the rising and falling limbs of soil moisture time series belong essentially to different ensembles, as they are influenced by different drivers. Positive and negative temporal changes in soil moisture need, hence, to be analyzed separately. We test this idea using the CAOS data set as a benchmark.

  13. Capillary absorption spectrometer and process for isotopic analysis of small samples

    Energy Technology Data Exchange (ETDEWEB)

    Alexander, M. Lizabeth; Kelly, James F.; Sams, Robert L.; Moran, James J.; Newburn, Matthew K.; Blake, Thomas A.

    2016-03-29

    A capillary absorption spectrometer and process are described that provide highly sensitive and accurate stable absorption measurements of analytes in a sample gas, which may include isotopologues of carbon and oxygen obtained from gas and biological samples. The system further provides isotopic images of microbial communities that allow tracking of nutrients at the single-cell level. It targets naturally occurring variations in carbon and oxygen isotopes, which avoids the need for expensive isotopically labeled mixtures and allows the study of samples taken from the field without modification. The method also permits sampling in vivo, enabling real-time ambient studies of microbial communities.

  14. Remote sampling of process fluids in radiochemical plants

    International Nuclear Information System (INIS)

    Sengar, P.B.; Bhattacharya, R.; Ozarde, P. D.; Rana, D.S.

    1990-01-01

    Sampling of process fluids, continuous or periodic, is an essential requirement in any chemical process plant, so as to keep control of process variables. In a radiochemical plant the task of taking and conveying samples is a very tricky affair, because neither the vessels/equipment containing radioactive effluents can be approached for manual sampling nor can the sampled fluids be handled directly. The problems become more acute at higher levels of radioactivity. As such, innovative systems have to be devised to obtain and handle radioactive samples using remote operations. The remote sampling system developed in this Division has some unique features, such as taking only the requisite amount of sample in the microlitre range, a practically maintenance-free design, and avoidance of excess radioactive fluid leaving the process systems. The paper describes in detail the design of the remote sampling system and compares it with existing systems. The design aims for simplicity of operation, homogenized representative samples, and high economy in man-rem expenditure. The performance of a prototype system has also been evaluated. (author). 3 refs

  15. Importance Sampling for Stochastic Timed Automata

    DEFF Research Database (Denmark)

    Jegourel, Cyrille; Larsen, Kim Guldstrand; Legay, Axel

    2016-01-01

    We present an importance sampling framework that combines symbolic analysis and simulation to estimate the probability of rare reachability properties in stochastic timed automata. By means of symbolic exploration, our framework first identifies states that cannot reach the goal. A state-wise cha...

  16. Precise turnaround time measurement of laboratory processes using radiofrequency identification technology.

    Science.gov (United States)

    Mayer, Horst; Brümmer, Jens; Brinkmann, Thomas

    2011-01-01

    To implement Lean Six Sigma in our central laboratory, we conducted a project to measure the single pre-analytical steps influencing the turnaround time (TAT) of emergency department (ED) serum samples. The traditional approach of extracting data from the Laboratory Information System (LIS) for a retrospective calculation of a mean TAT is not suitable for this purpose. Therefore, we used radiofrequency identification (RFID) chips for real-time tracking of individual samples at every pre-analytical step. 1,200 serum tubes were labelled with RFID chips and provided to the emergency department. Three RFID receivers were installed in the laboratory: at the outlet of the pneumatic tube system, at the centrifuge, and in the analyser area. In addition, time stamps of sample entry at the automated sample distributor and of result communication from the analyser were collected from the LIS. 1,023 labelled serum tubes arrived at our laboratory, and 899 RFID tags were used for TAT calculation. The following transfer times were determined (median/95th percentile in min:sec): pneumatic tube system --> centrifuge (01:25/04:48), centrifuge --> sample distributor (14:06/5:33), sample distributor --> analysis system zone (02:39/15:07), analysis system zone --> result communication (12:42/22:21). Total TAT was calculated at 33:19/57:40 min:sec. Manual processes around centrifugation were identified as a major part of the TAT, accounting for 44%/60% (median/95th percentile) of it. RFID is a robust, easy-to-use, and error-free technology that is not susceptible to interference in the laboratory environment. With this study design we were able to measure significant variations in a single manual sample transfer process. We showed that TAT is mainly influenced by the manual steps around the centrifugation process, and we concluded that centrifugation should be integrated into solutions for total laboratory automation.
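
    Given per-sample time stamps of the kind the RFID receivers produce, the median/95th-percentile stage statistics reduce to a few lines. Everything below (stage names, the synthetic gamma-distributed stamps) is hypothetical stand-in data, not the study's measurements:

```python
import numpy as np

# Hypothetical per-sample time stamps (seconds), one row per sample, at:
# [tube arrival, centrifuge, sample distributor, analyser zone, result].
rng = np.random.default_rng(4)
n = 899
stamps = np.cumsum(np.column_stack([
    np.zeros(n),
    rng.gamma(4.0, 25.0, n),     # tube -> centrifuge
    rng.gamma(9.0, 100.0, n),    # centrifuge -> distributor (manual steps)
    rng.gamma(3.0, 60.0, n),     # distributor -> analyser zone
    rng.gamma(8.0, 100.0, n),    # analyser zone -> result
]), axis=1)

def mmss(sec):
    return f"{int(sec // 60):02d}:{int(sec % 60):02d}"

stages = ["tube->centrifuge", "centrifuge->distributor",
          "distributor->analyser", "analyser->result"]
for i, name in enumerate(stages):
    dt = stamps[:, i + 1] - stamps[:, i]
    print(f"{name}: {mmss(np.median(dt))} / {mmss(np.percentile(dt, 95))}")

tat = stamps[:, -1] - stamps[:, 0]
print("total TAT:", mmss(np.median(tat)), "/", mmss(np.percentile(tat, 95)))
```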

  17. Disc valve for sampling erosive process streams

    Science.gov (United States)

    Mrochek, J.E.; Dinsmore, S.R.; Chandler, E.W.

    1986-01-07

    A four-port disc valve is described for sampling erosive, high-temperature process streams. A rotatable disc defining opposed first and second sampling cavities rotates between fired faceplates defining flow passageways positioned to be alternately in axial alignment with the first and second cavities. Silicon carbide inserts and liners composed of α-silicon carbide are provided in the faceplates and in the sampling cavities to limit erosion while providing lubricity for smooth and precise operation under harsh process conditions. 1 fig.

  18. 21 CFR 211.110 - Sampling and testing of in-process materials and drug products.

    Science.gov (United States)

    2010-04-01

    ... PHARMACEUTICALS Production and Process Controls § 211.110 Sampling and testing of in-process materials and drug... capsule weight variation; (2) Disintegration time; (3) Adequacy of mixing to assure uniformity and... production process, e.g., at commencement or completion of significant phases or after storage for long...

  19. Time-Scale and Time-Frequency Analyses of Irregularly Sampled Astronomical Time Series

    Directory of Open Access Journals (Sweden)

    S. Roques

    2005-09-01

    Full Text Available We evaluate the quality of spectral restoration in the case of irregularly sampled signals in astronomy. We study in detail a time-scale method leading to a global wavelet spectrum comparable to the Fourier spectrum, and a time-frequency matching pursuit allowing us to identify the frequencies and to control the error propagation. In both cases, the signals are first resampled with linear interpolation. Both results are compared with those obtained using Lomb's periodogram and using the weighted wavelet Z-transform developed in astronomy for unevenly sampled variable-star observations. These approaches are applied to simulations and to the light variations of four variable stars. This leads to the conclusion that the matching pursuit is more efficient for recovering the spectral content of a pulsating star, even with a preliminary resampling. In particular, the results are almost independent of the quality of the initial irregular sampling.

  20. Real-time progressive hyperspectral image processing endmember finding and anomaly detection

    CERN Document Server

    Chang, Chein-I

    2016-01-01

    The book covers the most crucial parts of real-time hyperspectral image processing: causality and real-time capability. Two new concepts of real-time hyperspectral image processing have recently been introduced, Progressive Hyperspectral Imaging (PHSI) and Recursive Hyperspectral Imaging (RHSI); both can be used to design algorithms and form an integral part of real-time hyperspectral image processing. This book focuses on the progressive nature of algorithms and on their real-time, causal processing implementation in two major applications, endmember finding and anomaly detection, both of which are fundamental tasks in hyperspectral imaging but generally not encountered in multispectral imaging. This book is written particularly to address PHSI in real-time processing, while the book Recursive Hyperspectral Sample and Band Processing: Algorithm Architecture and Implementation (Springer 2016) can be considered its companion. Includes preliminary background which is essential to those who work in hyperspectral ima...

  1. Probabilistic Round Trip Contamination Analysis of a Mars Sample Acquisition and Handling Process Using Markovian Decompositions

    Science.gov (United States)

    Hudson, Nicolas; Lin, Ying; Barengoltz, Jack

    2010-01-01

    A method is developed for evaluating the probability of a Viable Earth Microorganism (VEM) contaminating a sample during the sample acquisition and handling (SAH) process of a potential future Mars Sample Return mission. A scenario where multiple core samples would be acquired using a rotary percussive coring tool, deployed from an arm on a MER-class rover, is analyzed. The analysis is conducted in a structured way by decomposing the sample acquisition and handling process into a series of discrete time steps, and breaking the physical system into a set of relevant components. At each discrete time step, two key functions are defined: the probability of a VEM being released from each component, and the transport matrix, which represents the probability of VEM transport from one component to another. By defining the expected number of VEMs on each component at the start of the sampling process, these decompositions allow the expected number of VEMs on each component at each sampling step to be represented as a Markov chain. This formalism provides a rigorous mathematical framework in which to analyze the probability of a VEM entering the sample chain, and it makes the analysis tractable by breaking the process down into small analyzable steps.
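
    The Markov-chain bookkeeping can be sketched directly: an expected-count vector over components is propagated step by step through a release probability and a transport matrix. Component names, probabilities, and counts below are all invented for illustration:

```python
import numpy as np

# Components of the SAH chain (invented): coring bit, tool housing,
# rover arm, sample tube; v holds the expected VEM count on each.
components = ["bit", "housing", "arm", "tube"]
v = np.array([50.0, 200.0, 500.0, 0.0])

# r[i]: probability a VEM on component i is released in one time step.
# T[i, j]: probability a released VEM travels from i to j (rows need not
# sum to one; the remainder is lost to the environment). All invented.
r = np.array([1e-3, 1e-4, 1e-5, 0.0])
T = np.array([
    [0.0,  0.2,  0.0,  0.10],
    [0.1,  0.0,  0.1,  0.05],
    [0.0,  0.1,  0.0,  0.01],
    [0.0,  0.0,  0.0,  0.0 ],
])

for step in range(10):                  # ten discrete SAH steps
    released = v * r                    # expected VEMs shed this step
    v = v - released + released @ T     # redistribute along the chain

print("expected VEMs in the sample tube:", v[components.index("tube")])
```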

  2. PROCESS TIME OPTIMIZATION IN DEPOSITOR AND FILLER

    Directory of Open Access Journals (Sweden)

    Jesús Iván Ruíz-Ibarra

    2017-07-01

    Full Text Available As in any industry, in soft-drink manufacturing the demands of customer service and production make it essential to keep the production equipment and machines in optimal condition so that the product reaches the consumer without delays. It is therefore important to establish the times of each process, from the moment the syrup is prepared, through packaging and distribution, until the product is purchased by the consumer. After a stopwatch analysis, the most common faults were identified in each analyzed process. In the filler machine the most frequent fault is the accumulation of bottles in the processes upstream and downstream of filling; in general, this accumulation is caused by failures in the other equipment of the production line. In the unloading process the most common faults are boxes jammed in the bump and pusher (box pusher) and boxes fallen on the rollers and conveyor platforms. Based on the observations of each machine, the actions to follow in order to solve the problems that arise are presented. The methodology used to obtain results, analyze data, and make decisions is also described. First, an analysis of operations is performed to understand each machine, supported by the machine manuals and the operators themselves; a stopwatch time study then determines the standard time of each process and reveals the most common faults; finally, observations are made on the machines according to the determined sample size, providing the information necessary to take measurements and carry out the study of production process optimization. The predetermined process times are also analyzed with the MTM and MOST methods. The results for operators with MTM are: Fault Filler = 0.846 minutes, Faultless Filler = 0.61 minutes, Fault Breaker = 0.74 minutes and Fault Flasher = 0.45 minutes. The results for operators with MOST are: Fault Filler = 2.58 minutes, Filler Fails

  3. Sampling inspection for the evaluation of time-dependent reliability of deteriorating systems under imperfect defect detection

    International Nuclear Information System (INIS)

    Kuniewski, Sebastian P.; Weide, Johannes A.M. van der; Noortwijk, Jan M. van

    2009-01-01

    The paper presents a sampling-inspection strategy for the evaluation of the time-dependent reliability of deteriorating systems, where the deterioration is assumed to initiate at random times and at random locations. After initiation, defects weaken the system's resistance. The system becomes unacceptable when at least one defect reaches a critical depth. The defects are assumed to initiate at random times modeled as event times of a non-homogeneous Poisson process (NHPP) and to grow according to a non-decreasing time-dependent gamma process. The intensity rate of the NHPP is assumed to be a combination of a known time-dependent shape function and an unknown proportionality constant. When sampling inspection (i.e. inspection of a selected subregion of the system) reveals a number of defect initiations, Bayes' theorem can be used to update prior beliefs about the proportionality constant of the NHPP intensity rate to the posterior distribution. On the basis of a time- and space-dependent Poisson process for the defect initiation, an adaptive Bayesian model for sampling inspection is developed to determine the predictive probability distribution of the time to failure. A potential application is, for instance, the inspection of a large vessel or pipeline suffering pitting/localized corrosion in the oil industry. The possibility of imperfect defect detection is also incorporated in the model.
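
    The forward (simulation) side of such a model fits in a short Monte Carlo sketch: defect initiations are drawn from an NHPP by thinning, and each defect then deteriorates with gamma-distributed growth. All rates, shapes, and the critical depth below are invented placeholders, and the Bayesian updating of the unknown proportionality constant is not shown:

```python
import numpy as np

rng = np.random.default_rng(5)

def nhpp_times(horizon, lam_max, rate):
    """Event times of a non-homogeneous Poisson process by thinning."""
    t, out = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > horizon:
            return out
        if rng.uniform() < rate(t) / lam_max:
            out.append(t)

# Intensity = unknown proportionality constant c times a known shape w(t).
c = 0.02                                 # illustrative prior point estimate
w = lambda t: 1.0 + 0.1 * t              # known time-dependent shape
horizon, d_crit = 20.0, 5.0              # years; critical defect depth (mm)

failures, n_mc = 0, 5000
for _ in range(n_mc):                    # Monte Carlo over system histories
    for t0 in nhpp_times(horizon, c * w(horizon), lambda t: c * w(t)):
        # Gamma deterioration after initiation: non-decreasing growth whose
        # shape parameter grows with the defect's age (values invented).
        depth = rng.gamma(0.8 * (horizon - t0), 0.6)
        if depth >= d_crit:
            failures += 1
            break

print("P(at least one critical defect by the horizon) ~", failures / n_mc)
```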

  4. Sample-interpolation timing: an optimized technique for the digital measurement of time of flight for γ rays and neutrons at relatively low sampling rates

    International Nuclear Information System (INIS)

    Aspinall, M D; Joyce, M J; Mackin, R O; Jarrah, Z; Boston, A J; Nolan, P J; Peyton, A J; Hawkes, N P

    2009-01-01

    A unique digital time pick-off method, known as sample-interpolation timing (SIT), is described. This method demonstrates the possibility of improved timing resolution for the digital measurement of time of flight, compared with digital replica-analogue time pick-off methods, for signals sampled at relatively low rates. Three analogue timing methods have been replicated in the digital domain (leading-edge, crossover and constant-fraction timing) for pulse data sampled at 8 GSa/s. Events arising from the ⁷Li(p,n)⁷Be reaction have been detected with an EJ-301 organic liquid scintillator and recorded with a fast digital sampling oscilloscope. Sample-interpolation timing was developed solely for the digital domain and thus performs more efficiently on digital signals than analogue time pick-off methods replicated digitally, especially for fast signals sampled at rates that current affordable and portable devices can achieve. Sample interpolation can be applied to any analogue timing method replicated digitally and thus also has the potential to exploit the generic capabilities of analogue techniques with the benefits of operating in the digital domain. A threshold in sampling rate with respect to the signal pulse width is observed, beyond which further improvements in timing resolution are not attained. This advance is relevant to many applications in which time-of-flight measurement is essential
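
    As a toy illustration of the sub-sample idea (not the authors' SIT algorithm itself), a leading-edge pick-off can be refined by linearly interpolating between the two samples that bracket the threshold; the pulse shape, sampling rate, and threshold below are arbitrary:

```python
import numpy as np

def interpolated_crossing(samples, fs, threshold):
    """Leading-edge time pick-off refined by linear interpolation between
    the two samples that bracket the threshold."""
    i = int(np.argmax(samples >= threshold))
    if i == 0:
        raise ValueError("no threshold crossing found")
    y0, y1 = samples[i - 1], samples[i]
    return (i - 1 + (threshold - y0) / (y1 - y0)) / fs

# A fast scintillator-like edge sampled at 1 GSa/s.
fs, true_t0 = 1e9, 20.35e-9
t = np.arange(0.0, 100e-9, 1.0 / fs)
pulse = (np.clip((t - true_t0) / 5e-9, 0, 1)
         * np.exp(-np.clip(t - true_t0, 0, None) / 30e-9))

t_pick = interpolated_crossing(pulse, fs, threshold=0.2)
# The pick-off locates the 0.2-level crossing to sub-sample precision.
print(f"pick-off at {t_pick * 1e9:.3f} ns (edge starts at {true_t0 * 1e9:.2f} ns)")
```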

  5. Sampled-data and discrete-time H2 optimal control

    NARCIS (Netherlands)

    Trentelman, Harry L.; Stoorvogel, Anton A.

    1993-01-01

    This paper deals with the sampled-data H2 optimal control problem. Given a linear time-invariant continuous-time system, the problem of minimizing the H2 performance over all sampled-data controllers with a fixed sampling period can be reduced to a pure discrete-time H2 optimal control problem. This

  6. Renewal processes based on generalized Mittag-Leffler waiting times

    Science.gov (United States)

    Cahoy, Dexter O.; Polito, Federico

    2013-03-01

    The fractional Poisson process has recently attracted experts from several fields of study. As a natural generalization of the ordinary Poisson process, the model is appealing for real-world applications. In this paper, we generalize the standard and fractional Poisson processes through the waiting time distribution, and show their relation to an integral operator with a generalized Mittag-Leffler function in the kernel. The waiting times of the proposed renewal processes have the generalized Mittag-Leffler and stretched-squashed Mittag-Leffler distributions. These generalizations naturally provide greater flexibility in modeling real-life renewal processes. Algorithms to simulate sample paths and to estimate the model parameters are derived; such procedures are necessary to make the models usable in practice. State probabilities and other qualitative or quantitative features of the models are also discussed.
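
    For the plain one-parameter Mittag-Leffler law that underlies the fractional Poisson process, waiting times can be simulated with the well-known Kozubowski-Rachev transform; the paper's generalized and stretched-squashed variants need their own algorithms, so the sketch below only covers the classical case with illustrative parameters:

```python
import numpy as np

def mittag_leffler_waits(beta, scale, size, rng):
    """Mittag-Leffler waiting times via the Kozubowski-Rachev transform;
    beta in (0, 1]. For beta == 1 this reduces to exponential waits,
    i.e. the ordinary Poisson process."""
    u, v = rng.uniform(size=size), rng.uniform(size=size)
    return (-scale * np.log(u)
            * (np.sin(beta * np.pi) / np.tan(beta * np.pi * v)
               - np.cos(beta * np.pi)) ** (1.0 / beta))

rng = np.random.default_rng(6)
waits = mittag_leffler_waits(beta=0.8, scale=1.0, size=100_000, rng=rng)
events = np.cumsum(waits)                  # renewal epochs of the process

print("events in [0, 100]:", int(np.searchsorted(events, 100.0)))
print("heavy-tail check, P(wait > 10):", float(np.mean(waits > 10.0)))
```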

  7. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions.

    Science.gov (United States)

    Tao, Guohua; Miller, William H

    2011-07-14

    An efficient time-dependent importance sampling method is developed for the Monte Carlo calculation of time correlation functions via the initial value representation (IVR) of semiclassical (SC) theory. A prefactor-free time-dependent sampling function weights the importance of a trajectory based on the magnitude of its contribution to the time correlation function, and global trial moves are used to facilitate efficient sampling of the phase space of initial conditions. The method can be applied generally to sample rare events efficiently while avoiding trapping in a local region of the phase space. Results presented in the paper for two system-bath models demonstrate the efficiency of this new importance sampling method for full SC-IVR calculations.

  8. Robowell: An automated process for monitoring ground water quality using established sampling protocols

    Science.gov (United States)

    Granato, G.E.; Smith, K.P.

    1999-01-01

    Robowell is an automated process for monitoring selected ground water quality properties and constituents by pumping a well or multilevel sampler. Robowell was developed and tested to provide a cost-effective monitoring system that meets protocols expected for manual sampling. The process uses commercially available electronics, instrumentation, and hardware, so it can be configured to monitor ground water quality using the equipment, purge protocol, and monitoring well design most appropriate for the monitoring site and the contaminants of interest. A Robowell prototype was installed on a sewage treatment plant infiltration bed that overlies a well-studied unconfined sand and gravel aquifer at the Massachusetts Military Reservation, Cape Cod, Massachusetts, during a time when two distinct plumes of constituents were released. The prototype was operated from May 10 to November 13, 1996, and quality-assurance/quality-control measurements demonstrated that the data obtained by the automated method was equivalent to data obtained by manual sampling methods using the same sampling protocols. Water level, specific conductance, pH, water temperature, dissolved oxygen, and dissolved ammonium were monitored by the prototype as the wells were purged according to U.S Geological Survey (USGS) ground water sampling protocols. Remote access to the data record, via phone modem communications, indicated the arrival of each plume over a few days and the subsequent geochemical reactions over the following weeks. Real-time availability of the monitoring record provided the information needed to initiate manual sampling efforts in response to changes in measured ground water quality, which proved the method and characterized the screened portion of the plume in detail through time. The methods and the case study described are presented to document the process for future use.

  9. Effects of brief time delays on matching-to-sample abilities in capuchin monkeys (Sapajus spp.).

    Science.gov (United States)

    Truppa, Valentina; De Simone, Diego Antonio; Piano Mortari, Eva; De Lillo, Carlo

    2014-09-01

    Traditionally, studies of delayed matching-to-sample (DMTS) tasks in nonhuman species have focused on assessing the limits of the retrieval of information stored in short- and long-term memory systems. However, it is still unclear whether visual recognition in these tasks is affected by very brief delay intervals, which are typically used to study rapidly decaying types of visual memory. This study aimed to evaluate whether tufted capuchin monkeys' ability to recognise visual stimuli in a DMTS task is affected by (i) the disappearance of the sample stimulus and (ii) the introduction of delay intervals (0.5, 1.0, 2.0 and 3.0 s) between the disappearance of the sample and the presentation of the comparison stimuli. The results demonstrated that the simple disappearance of the sample and the introduction of a delay of 0.5 s did not affect the capuchins' performance in terms of either accuracy or response time. A delay interval of 1.0 s produced a significant increase in response time but still did not affect recognition accuracy. By contrast, delays of 2.0 and 3.0 s produced a significant increase in response time and a reduction in recognition accuracy. These findings indicate the existence in capuchin monkeys of processes enabling very accurate retention of stimulus features within time frames comparable to those reported for human sensory memory (0.5-1.0 s). The extent to which such processes can be considered analogous to the sensory memory processes observed in human visual cognition is discussed. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Ergonomic analysis of radiopharmaceuticals samples preparation process

    International Nuclear Information System (INIS)

    Gomes, Luciene Betzler C.; Santos, Isaac Luquetti dos; Fonseca, Antonio Carlos C. da; Pellini, Marcos Pinto; Rebelo, Ana Maria

    2005-01-01

    The doses of radioisotopes to be administered to patients for diagnosis or therapy are prepared in the radiopharmacy sector. The preparation process adopts techniques aimed at reducing the exposure time of the professionals and the absorption of excessive doses by patients. The ergonomic analysis of this process contributes to the prevention of occupational illness and of accident risks during routine work, providing well-being and security to the users involved and conferring an adequate working standard on the process. In this context, studies that analyze the factors pointing toward the solution of problems, and that establish proposals to minimize risks in these activities, are clearly relevant. Through a methodology based on the concepts of ergonomics, the aim is to improve effectiveness and quality and to reduce the difficulties experienced by the workers. The prescribed work, established through codified norms and procedures, is compared with the work effectively carried out (the real work), with a focus on the activities. The objective of this work is to discuss an ergonomic analysis of the radioisotope sample preparation process in the Setor de Radiofarmacia do Hospital Universitario Clementino Fraga Filho da Universidade Federal do Rio de Janeiro (UFRJ). (author)

  11. Quality evaluation of processed clay soil samples.

    Science.gov (United States)

    Steiner-Asiedu, Matilda; Harrison, Obed Akwaa; Vuvor, Frederick; Tano-Debrah, Kwaku

    2016-01-01

    This study assessed the microbial quality of clay samples sold in two of the major Ghanaian markets. It was a cross-sectional study evaluating processed clay and the effects its consumption has on the nutrition of consumers in the political capital of Ghana. The items examined were processed clay soil samples. Staphylococcus spp. and fecal coliforms, including Klebsiella, Escherichia, Shigella and Enterobacter spp., were isolated from the clay samples. Samples from the Kaneshie market in Accra recorded the highest total viable count, 6.5 log cfu/g, and the highest staphylococcal count, 5.8 log cfu/g. For fecal coliforms, Madina market samples had the highest count, 6.5 log cfu/g, and also recorded the highest levels of yeasts and moulds. For Koforidua, the total viable count was highest in samples from the Zongo market, 6.3 log cfu/g. Central market samples had the highest counts of fecal coliforms, 4.6 log cfu/g, and of yeasts and moulds, 6.5 log cfu/g. The "Small" market recorded the highest staphylococcal count, 6.2 log cfu/g. The water activity of the clay samples was low, ranging between 0.65±0.01 and 0.66±0.00 for samples collected from Koforidua and Accra, respectively. The clay samples were found to contain Klebsiella spp., Escherichia, Enterobacter, Shigella spp., Staphylococcus spp., and yeasts and moulds. These findings have health implications when the clay is consumed.

  12. Hierarchical Bayesian modelling of gene expression time series across irregularly sampled replicates and clusters.

    Science.gov (United States)

    Hensman, James; Lawrence, Neil D; Rattray, Magnus

    2013-08-20

    Time course data from microarrays and high-throughput sequencing experiments require simple, computationally efficient and powerful statistical models to extract meaningful biological signal, and for tasks such as data fusion and clustering. Existing methodologies fail to capture either the temporal or the replicated nature of the experiments, and often impose constraints on the data collection process, such as regularly spaced samples or similar sampling schemata across replications. We propose hierarchical Gaussian processes as a general model of gene expression time series, with application to a variety of problems. In particular, we illustrate the method's capacity for missing data imputation, data fusion and clustering. The method can impute data which are missing both systematically and at random: in a hold-out test on real data, performance is significantly better than that of commonly used imputation methods. The method's ability to model inter- and intra-cluster variance leads to more biologically meaningful clusters. The approach removes the necessity for evenly spaced samples, an advantage illustrated on a developmental Drosophila dataset with irregular replications. The hierarchical Gaussian process model provides an excellent statistical basis for several gene-expression time-series tasks. It has only a few additional parameters over a regular GP, has negligible additional complexity, is easily implemented and can be integrated into several existing algorithms. Our experiments were implemented in python, and are available from the authors' website: http://staffwww.dcs.shef.ac.uk/people/J.Hensman/.
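
    The hierarchy can be sketched with a sum-of-kernels covariance: a shared gene-level RBF term plus a replicate-specific RBF term that only couples points from the same replicate. The toy data, kernel hyperparameters, and plain-numpy posterior mean below are illustrative and are not the authors' implementation (their python code is at the URL above):

```python
import numpy as np

def rbf(t1, t2, var, ls):
    d = t1[:, None] - t2[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

def hier_cov(t, r, shared=(1.0, 10.0), rep=(0.3, 5.0)):
    """Covariance = shared gene-level RBF + a replicate-specific RBF that
    only couples observations from the same replicate."""
    K = rbf(t, t, *shared)
    K += rbf(t, t, *rep) * (r[:, None] == r[None, :])
    return K

rng = np.random.default_rng(7)

# Irregular, replicate-specific sampling times: no shared schedule needed.
sizes = (7, 12, 9)
t = np.concatenate([np.sort(rng.uniform(0, 48, m)) for m in sizes])
rep = np.repeat(np.arange(len(sizes)), sizes)
y = np.sin(t / 8.0) + 0.2 * rng.standard_normal(t.size)  # toy expression

# Posterior mean of the shared (gene-level) function on a regular grid:
# the grid covaries with the data only through the shared kernel term.
K = hier_cov(t, rep) + 0.05 * np.eye(t.size)
grid = np.linspace(0, 48, 97)
mean = rbf(grid, t, 1.0, 10.0) @ np.linalg.solve(K, y)
print("shared-function estimate at t = 24 h:", mean[48])
```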

  13. Feel the Time. Time Perception as a Function of Interoceptive Processing

    Directory of Open Access Journals (Sweden)

    Daniele Di Lernia

    2018-03-01

    Full Text Available The nature of time is rooted in our body. Constellations of impulses arising from the flesh constantly create our interoceptive perception and, in turn, the unfolding of these perceptions defines human awareness of time. This study explored the connection between time perception and interoception and proposes the Interoceptive Buffer saturation (IBs) index. IBs evaluates subjects' ability to process salient stimuli from the body by measuring subjective distortions of interoceptive time perception, i.e., the estimated duration of tactile interoceptive stimulations. Thirty healthy female subjects were recruited through consecutive sampling and assessed for common variables related to interoceptive alterations: depressive symptoms (Beck Depression Inventory, BDI-II), eating disorder risk (EDI-3), and anxiety levels (State-Trait Anxiety Inventory, STAI). Interoceptive cardiac accuracy (IAc) was assessed as well. Subjects performed verbal time estimation of interoceptive stimuli (IBs), delivered using a specifically designed interoceptive tactile stimulator, as well as verbal time estimation of visual and auditory stimuli. Results showed that the IBs index correlated positively with IAc and negatively with the EDI-3 Drive for Thinness (DT) risk subscale. Moreover, the IBs index was positively predicted by IAc, and negatively predicted by DT and the somatic factors of depression. Our results suggest that underestimation of interoceptive time is connected to different psychological conditions characterized by diminished processing of highly salient stimuli from the body. Conversely, overestimation of the duration of interoceptive stimuli appears to be a function of subjects' ability to correctly perceive their own bodily information. The evidence supports the IBs index and fosters the concept of interoceptive treatments for clinical purposes.

  14. Representative process sampling for reliable data analysis

    DEFF Research Database (Denmark)

    Julius, Lars Petersen; Esbensen, Kim

    2005-01-01

    (sampling variances) can be reduced greatly however, and sampling biases can be eliminated completely, by respecting a simple set of rules and guidelines provided by TOS. A systematic approach for description of process heterogeneity furnishes in-depth knowledge about the specific variability of any 1-D lot...

  15. Analysis of time series and size of equivalent sample

    International Nuclear Information System (INIS)

    Bernal, Nestor; Molina, Alicia; Pabon, Daniel; Martinez, Jorge

    2004-01-01

    In a meteorological context, a first approach to the modeling of time series is to use models of autoregressive type. This makes it possible to take into account meteorological persistence, or temporal behavior, thereby identifying the memory of the analyzed process. This article seeks to present the concept of the size of an equivalent sample, which helps to identify sub-periods with a similar structure within a data series. Moreover, the article examines the alternative of adjusting the variance of a series, keeping in mind its temporal structure, as well as an adjustment to the covariance of two time series. Two examples are presented: the first corresponds to seven simulated series with a first-order autoregressive structure, and the second to seven meteorological series of surface air temperature anomalies in two Colombian regions.
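
    For a first-order autoregressive series, the equivalent (effective) sample size is commonly estimated as n(1 - r1)/(1 + r1), with r1 the lag-1 autocorrelation; the sketch below applies this textbook formula to a synthetic AR(1) series and is only an illustration of the concept, not the article's procedure:

```python
import numpy as np

def equivalent_sample_size(x):
    """Effective sample size of an AR(1)-like series:
    n_eff = n * (1 - r1) / (1 + r1), r1 = lag-1 autocorrelation."""
    x = x - x.mean()
    r1 = (x[:-1] @ x[1:]) / (x @ x)
    return x.size * (1.0 - r1) / (1.0 + r1)

# AR(1) anomaly-like series with strong persistence (phi = 0.7).
rng = np.random.default_rng(11)
x = np.zeros(1000)
for n in range(1, x.size):
    x[n] = 0.7 * x[n - 1] + rng.standard_normal()

print("nominal n = 1000, equivalent n ~", round(equivalent_sample_size(x)))
```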

  16. Laboratory sample turnaround times: do they cause delays in the ED?

    Science.gov (United States)

    Gill, Dipender; Galvin, Sean; Ponsford, Mark; Bruce, David; Reicher, John; Preston, Laura; Bernard, Stephani; Lafferty, Jessica; Robertson, Andrew; Rose-Morris, Anna; Stoneham, Simon; Rieu, Romelie; Pooley, Sophie; Weetch, Alison; McCann, Lloyd

    2012-02-01

    Blood tests are requested for approximately 50% of patients attending the emergency department (ED). The time taken to obtain the results is perceived as a common reason for delay. The objective of this study was therefore to investigate the turnaround time (TAT) for blood results and whether this affects patient length of stay (LOS) and to identify potential areas for improvement. A time-in-motion study was performed at the ED of the John Radcliffe Hospital (JRH), Oxford, UK. The duration of each of the stages leading up to receipt of 101 biochemistry and haematology results was recorded, along with the corresponding patient's LOS. The findings reveal that the mean time for haematology results to become available was 1 hour 6 minutes (95% CI: 29 minutes to 2 hours 13 minutes), while biochemistry samples took 1 hour 42 minutes (95% CI: 1 hour 1 minute to 4 hours 21 minutes), with some positive correlation noted with the patient LOS, but no significant variation between different days or shifts. With the fastest 10% of samples being reported within 35 minutes (haematology) and 1 hour 5 minutes (biochemistry) of request, our study showed that delays can be attributable to laboratory TAT. Given the limited ability to further improve laboratory processes, the solutions to improving TAT need to come from a collaborative and integrated approach that includes strategies before samples reach the laboratory and downstream review of results. © 2010 Blackwell Publishing Ltd.

  17. SU-E-T-21: A Novel Sampling Algorithm to Reduce Intensity-Modulated Radiation Therapy (IMRT) Optimization Time

    International Nuclear Information System (INIS)

    Tiwari, P; Xie, Y; Chen, Y; Deasy, J

    2014-01-01

    Purpose: The IMRT optimization problem requires substantial computer time to find optimal dose distributions because of the large number of variables and constraints. Voxel sampling reduces the number of constraints and accelerates the optimization process, but usually deteriorates the quality of the dose distributions to the organs. We propose a novel sampling algorithm that accelerates the IMRT optimization process without significantly deteriorating the quality of the dose distribution. Methods: We included all boundary voxels, as well as a sampled fraction of the interior voxels of organs, in the optimization. We selected the fraction of interior voxels using a clustering algorithm that creates clusters of voxels with similar influence-matrix signatures. A few voxels are selected from each cluster according to the preset sampling rate. Results: We ran sampling and no-sampling IMRT plans for de-identified head-and-neck treatment plans. Testing different sampling rates, we found that including 10% of the inner voxels produced good dose distributions. For this optimal sampling rate, the algorithm accelerated IMRT optimization by a factor of 2-3 with a negligible loss of accuracy that was, on average, 0.3% for common dosimetric planning criteria. Conclusion: We demonstrated that a sampling scheme can be developed that reduces optimization time by more than a factor of 2 without significantly degrading dose quality
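
    The voxel-selection step can be sketched as follows: cluster interior voxels by their influence-matrix rows and draw a fixed fraction from each cluster, keeping all boundary voxels. Matrix sizes, the cluster count, and the 10% rate are placeholders, and the clustering algorithm in the abstract is not necessarily k-means:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)
n_vox, n_beamlets = 20_000, 80

# Hypothetical influence matrix: dose to each voxel per unit beamlet weight.
D = rng.random((n_vox, n_beamlets)) * rng.random((n_vox, 1))

boundary = rng.choice(n_vox, 1_500, replace=False)      # always kept
interior = np.setdiff1d(np.arange(n_vox), boundary)

# Cluster interior voxels by influence-matrix signature, then draw ~10%
# of them, proportionally from every cluster.
rate, n_clusters = 0.10, 50
km = KMeans(n_clusters=n_clusters, n_init=5, random_state=0)
labels = km.fit_predict(D[interior])

sampled = []
for cl in range(n_clusters):
    members = interior[labels == cl]
    take = max(1, round(rate * members.size))
    sampled.append(rng.choice(members, min(take, members.size), replace=False))

kept = np.concatenate([boundary, *sampled])
print(f"optimization uses {kept.size} of {n_vox} voxels "
      f"({100 * kept.size / n_vox:.1f}%)")
```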

  18. Progressive sample processing of band selection for hyperspectral imagery

    Science.gov (United States)

    Liu, Keng-Hao; Chien, Hung-Chang; Chen, Shih-Yu

    2017-10-01

    Band selection (BS) is one of the most important topics in hyperspectral image (HSI) processing. The objective of BS is to find a set of representative bands that can represent the whole image with low inter-band redundancy. Many types of BS algorithms have been proposed in the past. However, most of them operate in an off-line manner: they can only be applied to pre-collected data. Such off-line methods are of little use for applications that are time-critical, particularly disaster prevention and target detection. To tackle this issue, a new concept, called progressive sample processing (PSP), was proposed recently. PSP is an "on-line" framework in which a suitable algorithm can process the data as they are being collected, during transmission under the band-interleaved-by-sample/pixel (BIS/BIP) protocol. This paper proposes an on-line BS method that integrates sparse-based BS into the PSP framework, called PSP-BS. In PSP-BS, BS is carried out by updating the BS result recursively, pixel by pixel, in the same way that a Kalman filter updates data information in a recursive fashion. The sparse regression is solved by the orthogonal matching pursuit (OMP) algorithm, and the recursive equations of PSP-BS are derived using matrix decomposition. Experiments conducted on a real hyperspectral image show that PSP-BS can progressively output the BS status with very low computing time. Convergence of the BS results during transmission can be achieved quickly by using a rearranged pixel transmission sequence. This significant advantage allows BS to be implemented in real time as the HSI data are transmitted pixel by pixel.
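
    A toy stand-in for the sparse-regression core (not the paper's PSP-BS recursion): OMP picks the bands whose per-band images best reconstruct a scene summary, and the normal equations it needs can be updated pixel-recursively as new pixels arrive under BIP transmission. Data shapes and the scene-summary choice are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(9)
n_pixels, n_bands, n_select = 500, 200, 10

# Stand-in HSI data received so far: rows = pixels, columns = bands.
X = rng.random((n_pixels, n_bands))

# Sparse self-representation band selection: treat each band's pixel
# profile (a column of X) as a dictionary atom and ask OMP for the few
# bands that best reconstruct a summary of the scene (here, the mean
# image over all bands).
target = X.mean(axis=1)
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_select).fit(X, target)
selected = np.flatnonzero(omp.coef_)
print("selected bands:", selected)

# PSP-style recursion: a newly arrived pixel updates the normal equations
# G = X^T X and c = X^T y in O(n_bands^2), without storing past pixels,
# so the selection can be refreshed while the image is still arriving.
G = X.T @ X
c = X.T @ target
x_new = rng.random(n_bands)          # next pixel under BIP transmission
G += np.outer(x_new, x_new)
c += x_new * x_new.mean()
```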

  19. A simple method to adapt time sampling of the analog signal

    International Nuclear Information System (INIS)

    Kalinin, Yu.G.; Martyanov, I.S.; Sadykov, Kh.; Zastrozhnova, N.N.

    2004-01-01

    In this paper we briefly describe a time sampling method that adapts to the speed of the signal's change. In essence, the method is based on a simple idea: the combination of discrete integration with differentiation of the analog signal. This method can be used in nuclear electronics research into the characteristics of detectors and the shape of the pulse signal, the pulse and transient characteristics of inertial signal-processing systems, etc.
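
    The abstract gives few algorithmic details, so the sketch below is only one plausible reading of "integration combined with differentiation": accumulate the discrete integral of the signal's absolute increments and keep a sample whenever it exceeds a quantum, so fast sections are sampled densely and slow sections sparsely. The quantum and test signal are arbitrary:

```python
import numpy as np

def adaptive_samples(x, quantum):
    """Keep a sample whenever the discrete integral of the signal's absolute
    increments since the last kept sample exceeds `quantum`: fast sections
    get dense sampling, slow sections sparse sampling."""
    kept, acc = [0], 0.0
    for i in range(1, len(x)):
        acc += abs(x[i] - x[i - 1])       # integral of |dx/dt| dt
        if acc >= quantum:
            kept.append(i)
            acc = 0.0
    return np.array(kept)

t = np.linspace(0.0, 1.0, 10_000)
x = np.exp(-((t - 0.5) / 0.01) ** 2)      # fast pulse on a flat baseline
idx = adaptive_samples(x, quantum=0.02)
inside = np.sum((t[idx] > 0.45) & (t[idx] < 0.55))
print(f"kept {idx.size} of {t.size} samples; {inside} fall inside the pulse")
```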

  20. Space-time-modulated stochastic processes

    Science.gov (United States)

    Giona, Massimiliano

    2017-10-01

    Starting from the physical problem associated with the Lorentzian transformation of a Poisson-Kac process in inertial frames, the concept of space-time-modulated stochastic processes is introduced for processes possessing finite propagation velocity. This class of stochastic processes provides a two-way coupling between the stochastic perturbation acting on a physical observable and the evolution of the physical observable itself, which in turn influences the statistical properties of the stochastic perturbation during its evolution. The definition of space-time-modulated processes requires the introduction of two functions: a nonlinear amplitude modulation, controlling the intensity of the stochastic perturbation, and a time-horizon function, which modulates its statistical properties, providing irreducible feedback between the stochastic perturbation and the physical observable influenced by it. The latter property is the peculiar fingerprint of this class of models that makes them suitable for extension to generic curved space-times. Considering Poisson-Kac processes as prototypical examples of stochastic processes possessing finite propagation velocity, the balance equations for the probability density functions associated with their space-time modulations are derived. Several examples highlighting the peculiarities of space-time-modulated processes are thoroughly analyzed.
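
    As background for the construction, here is a minimal simulation of the unmodulated Poisson-Kac (telegraph) process: a velocity of magnitude c whose sign flips at Poisson rate a, giving finite propagation speed. A space-time modulation would make c and a depend on the state through the amplitude-modulation and time-horizon functions; the parameters below are arbitrary:

```python
import numpy as np

def poisson_kac(n_paths, T, dt, c, a, rng):
    """Telegraph (Poisson-Kac) process: dX/dt = c * s(t), where the sign
    s(t) flips at the events of a Poisson process of rate a, so that
    |X(t)| <= c * t always (finite propagation velocity)."""
    s = rng.choice([-1.0, 1.0], size=n_paths)
    x = np.zeros(n_paths)
    for _ in range(int(T / dt)):
        x += c * s * dt
        s[rng.random(n_paths) < a * dt] *= -1.0   # flip prob. in [t, t+dt)
    return x

rng = np.random.default_rng(10)
c, a, T = 1.0, 5.0, 2.0
x_T = poisson_kac(50_000, T, dt=1e-3, c=c, a=a, rng=rng)

print("Var[X(T)]:", x_T.var())       # -> (c**2 / a) * T in the diffusive limit
print("max |X|/(c*T):", np.abs(x_T).max() / (c * T))   # always <= 1
```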

  1. A novel time-domain signal processing algorithm for real time ventricular fibrillation detection

    International Nuclear Information System (INIS)

    Monte, G E; Scarone, N C; Liscovsky, P O; Rotter, P

    2011-01-01

    This paper presents an application of a novel algorithm for the real-time detection of ECG pathologies, especially ventricular fibrillation. It is based on a segmentation and labeling process applied to an oversampled signal. After this treatment, the sequence of segments is analyzed to obtain global signal behaviours, much as a human observer would. The entire process can be seen as morphological filtering after smart data sampling. The algorithm does not require any digital pre-processing of the ECG signal, and its computational cost is low, so it can be embedded into sensors for wearable and permanent applications. The proposed algorithm could provide the input signal description for expert systems or artificial-intelligence software in order to detect other pathologies.
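
    A toy version of the segmentation-and-labeling idea (hedged: the paper's actual labeling rules and VF decision logic are not reproduced here): split the oversampled trace into monotone runs and label each one, yielding the kind of symbolic description that downstream rules can analyze.

```python
import numpy as np

def label_segments(x):
    """Split a signal into monotone runs and label each run.

    Returns (start, end, label) index tuples; a crude stand-in for the
    paper's segmentation/labeling of an oversampled ECG.
    """
    d = np.sign(np.diff(x))
    lab = {1.0: "rise", -1.0: "fall", 0.0: "flat"}
    segments, start = [], 0
    for i in range(1, len(d)):
        if d[i] != d[i - 1]:
            segments.append((start, i, lab[d[i - 1]]))
            start = i
    segments.append((start, len(d), lab[d[-1]]))
    return segments

print(label_segments(np.array([0.0, 1.0, 2.0, 1.5, 1.5, 2.5])))
```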

  3. Digital timing: sampling frequency, anti-aliasing filter and signal interpolation filter dependence on timing resolution

    International Nuclear Information System (INIS)

    Cho, Sanghee; Grazioso, Ron; Zhang Nan; Aykac, Mehmet; Schmand, Matthias

    2011-01-01

    The main focus of our study is to investigate how the performance of digital timing methods is affected by the sampling rate, anti-aliasing and signal interpolation filters. We used the Nyquist sampling theorem to address some basic questions, such as: what is the minimum sampling frequency? how accurate will the signal interpolation be? how do we validate the timing measurements? The preferred sampling rate would be as low as possible, considering the high cost and power consumption of high-speed analog-to-digital converters. However, when the sampling rate is too low, the aliasing effect produces artifacts in the timing resolution estimates: the shape of the timing profile is distorted, and the FWHM values of the profile fluctuate as the source location changes. Anti-aliasing filters are required in this case to avoid the artifacts, but the timing is degraded as a result. When the sampling rate is marginally above the Nyquist rate, proper signal interpolation is important. A sharp roll-off (higher-order) filter is required to separate the baseband signal from its replicates to avoid aliasing, but in return the computational cost is higher. We demonstrate the analysis through a digital timing study using fast LSO scintillation crystals, as used in time-of-flight PET scanners. From the study, we observed no significant timing resolution degradation down to a 1.3 GHz sampling frequency, and the computational requirement for the signal interpolation is reasonably low. A so-called sliding test is proposed as a validation tool to check that a given timing pick-off method yields constant timing resolution regardless of source location. Lastly, a performance comparison of several digital timing methods is also shown.
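
    The interpolation step can be made concrete with a hedged sketch (the pulse shape, threshold, and rate are invented; a direct windowed-sinc reconstruction stands in for the paper's filters): reconstruct the pulse from its samples, then pick off a leading-edge trigger time on the interpolated waveform.

```python
import numpy as np

def sinc_interp(samples, fs, t):
    """Whittaker-Shannon reconstruction of uniformly sampled data."""
    n = np.arange(len(samples))
    return samples @ np.sinc(fs * t[None, :] - n[:, None])

fs = 1.3e9                                          # 1.3 GS/s, as in the study
ts = np.arange(0.0, 60e-9, 1.0 / fs)
pulse = np.exp(-ts / 40e-9) - np.exp(-ts / 7e-9)    # toy bi-exponential pulse
t_fine = np.linspace(0.0, 60e-9, 6000)
fine = sinc_interp(pulse, fs, t_fine)
t_trig = t_fine[np.argmax(fine >= 0.2 * fine.max())]  # 20% leading-edge pick-off
```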

  4. Sample processing procedures and radiocarbon dating

    International Nuclear Information System (INIS)

    Svetlik, Ivo; Tomaskova, Lenka; Dreslerova, Dagmar

    2010-01-01

    The article outlines radiocarbon dating routines and highlights the potential and limitations of this method. The authors' institutions have been jointly running a conventional radiocarbon dating laboratory using the international CRL code. A procedure based on the synthesis of benzene is used. Small samples are sent abroad for dating because no AMS instrumentation is available in the Czech Republic so far. Our laboratory plans to introduce routines for the processing of milligram samples and the preparation of graphitized targets for AMS.

  5. Measuring Sulfur Isotope Ratios from Solid Samples with the Sample Analysis at Mars Instrument and the Effects of Dead Time Corrections

    Science.gov (United States)

    Franz, H. B.; Mahaffy, P. R.; Kasprzak, W.; Lyness, E.; Raaen, E.

    2011-01-01

    The Sample Analysis at Mars (SAM) instrument suite comprises the largest science payload on the Mars Science Laboratory (MSL) "Curiosity" rover. SAM will perform chemical and isotopic analysis of volatile compounds from atmospheric and solid samples to address questions pertaining to habitability and geochemical processes on Mars. Sulfur is a key element of interest in this regard, as sulfur compounds have been detected on the Martian surface by both in situ and remote sensing techniques. Their chemical and isotopic composition can help constrain environmental conditions and mechanisms at the time of formation. A previous study examined the capability of the SAM quadrupole mass spectrometer (QMS) to determine sulfur isotope ratios of SO2 gas from a statistical perspective. Here we discuss the development of a method for determining sulfur isotope ratios with the QMS by sampling SO2 generated from the heating of solid sulfate samples in SAM's pyrolysis oven. This analysis, which was performed with the SAM breadboard system, also required the development of a novel treatment of the QMS dead time to accommodate the characteristics of an aging detector.

  6. Impact of temperature and time storage on the microbial detection of oral samples by Checkerboard DNA-DNA hybridization method.

    Science.gov (United States)

    do Nascimento, Cássio; dos Santos, Janine Navarro; Pedrazzi, Vinícius; Pita, Murillo Sucena; Monesi, Nadia; Ribeiro, Ricardo Faria; de Albuquerque, Rubens Ferreira

    2014-01-01

    Molecular diagnostic methods have been largely used in epidemiological and clinical studies to detect and quantify microbial species that may colonize the oral cavity in health or disease. Preservation of the genetic material from samples remains the major challenge to ensure the feasibility of these methodologies; long-term storage may compromise the final result. The aim of this study was to evaluate the effect of storage temperature and time on the microbial detection of oral samples by Checkerboard DNA-DNA hybridization. Saliva and supragingival biofilm were taken from 10 healthy subjects, aliquoted (n=364), and processed according to the proposed protocols: immediate processing, or processing after 2 or 4 weeks and 6 or 12 months of storage at 4°C, -20°C, and -80°C. Both total and individual microbial counts were lower for samples processed after 12 months of storage, irrespective of the temperature tested. Samples stored for up to 6 months at cold temperatures showed counts similar to those of immediately processed samples. Microbial incidence was also significantly reduced in samples stored for 12 months at all temperatures. The temperature and time of oral sample storage have a relevant impact on the detection and quantification of bacterial and fungal species by the Checkerboard DNA-DNA hybridization method. Samples should be processed immediately after collection, or within 6 months if stored at cold temperatures, to avoid false-negative results.

  7. Improved quantification accuracy for duplex real-time PCR detection of genetically modified soybean and maize in heat processed foods

    Directory of Open Access Journals (Sweden)

    CHENG Fang

    2013-04-01

    Real-time PCR has been widely used for quantitative GMO detection in recent years. The accuracy of GMO quantification based on real-time PCR methods remains a difficult problem, especially for highly processed samples. To develop a suitable and accurate real-time PCR system for highly processed GM samples, we improved several real-time PCR parameters, including a redesigned, shorter target DNA fragment, similar lengths of the amplified endogenous and exogenous gene targets, and similar GC contents and melting temperatures of the PCR primers and TaqMan probes. In addition, a Heat-Treatment Processing Model (HTPM) was established using soybean flour samples containing GM soybean GTS 40-3-2 to validate the effectiveness of the improved real-time PCR system. Test results showed that the quantitative bias of the GM content in heat-processed samples was lowered using the new PCR system. The improved duplex real-time PCR was further validated using processed foods derived from GM soybean, and more accurate GM content values in these foods were also achieved. These results demonstrate that the improved duplex real-time PCR is well suited to the quantitative detection of highly processed food products.

  8. Improved process control through real-time measurement of mineral content

    Energy Technology Data Exchange (ETDEWEB)

    Turler, Daniel; Karaca, Murat; Davis, William B.; Giauque, Robert D.; Hopkins, Deborah

    2001-11-02

    In a highly collaborative research and development project with mining and university partners, sensors and data-analysis tools are being developed for rock-mass characterization and real-time measurement of mineral content. Determining mineralogy prior to mucking in an open-pit mine is important for routing the material to the appropriate processing stream. A possible alternative to lab assay of dust and cuttings obtained from drill holes is continuous on-line sampling and real-time x-ray fluorescence (XRF) spectroscopy. Results presented demonstrate that statistical analyses combined with XRF data can be employed to identify minerals and, possibly, different rock types. The objective is to create a detailed three-dimensional mineralogical map in real time that would improve downstream process efficiency.

  9. Time-variant reliability assessment through equivalent stochastic process transformation

    International Nuclear Information System (INIS)

    Wang, Zequn; Chen, Wei

    2016-01-01

    Time-variant reliability measures the probability that an engineering system successfully performs its intended functions over a certain period of time under various sources of uncertainty. In practice, it is computationally prohibitive to propagate uncertainty in time-variant reliability assessment based on expensive or complex numerical models. This paper presents an equivalent stochastic process transformation approach for cost-effective prediction of reliability deterioration over the life cycle of an engineering system. To reduce the high dimensionality, a time-independent reliability model is developed by translating random processes and time parameters into random parameters in order to equivalently cover all potential failures that may occur during the time interval of interest. With the time-independent reliability model, an instantaneous failure surface is attained by using a Kriging-based surrogate model to identify all potential failure events. To enhance the efficacy of failure surface identification, a maximum confidence enhancement method is utilized to update the Kriging model sequentially. Then, the time-variant reliability is approximated using Monte Carlo simulations of the Kriging model, where system failures over a time interval are predicted by the instantaneous failure surface. The results of two case studies demonstrate that the proposed approach is able to accurately predict the time evolution of system reliability while requiring much less computational effort than the existing analytical approach. - Highlights: • Developed a new approach for time-variant reliability analysis. • Proposed a novel stochastic process transformation procedure to reduce the dimensionality. • Employed Kriging models with a confidence-based adaptive sampling scheme to enhance computational efficiency. • The approach is effective for handling random processes in time-variant reliability analysis. • Two case studies are used to demonstrate the efficacy of the proposed approach.
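
    What "time-variant reliability" computes can be illustrated with plain Monte Carlo on a toy limit state (a sketch only: the paper's equivalent transformation, Kriging surrogate, and adaptive sampling are not reproduced, and the limit-state function below is invented).

```python
import numpy as np

rng = np.random.default_rng(1)

def time_variant_reliability(g, t_grid, n_mc=20000):
    """Estimate P(g(X, t) > 0 for all t in t_grid) by brute force, with
    the random inputs reduced to time-independent parameters X,
    mirroring the paper's equivalent-transformation idea."""
    X = rng.normal(size=(n_mc, 2))             # toy random parameters
    G = np.array([g(X, t) for t in t_grid])    # (n_t, n_mc) limit states
    return (G > 0).all(axis=0).mean()

# toy limit state: capacity degrades linearly in time
g = lambda X, t: 3.0 + X[:, 0] - (1.0 + 0.2 * X[:, 1]) * 0.1 * t
print(time_variant_reliability(g, np.linspace(0.0, 10.0, 50)))
```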

  10. Charge-Domain Signal Processing of Direct RF Sampling Mixer with Discrete-Time Filters in Bluetooth and GSM Receivers

    Directory of Open Access Journals (Sweden)

    Ho Yo-Chuol

    2006-01-01

    Full Text Available RF circuits for multi-GHz frequencies have recently migrated to low-cost digital deep-submicron CMOS processes. Unfortunately, this process environment, which is optimized only for digital logic and SRAM memory, is extremely unfriendly for conventional analog and RF designs. We present fundamental techniques recently developed that transform the RF and analog circuit design complexity to digitally intensive domain for a wireless RF transceiver, so that it enjoys benefits of digital and switched-capacitor approaches. Direct RF sampling techniques allow great flexibility in reconfigurable radio design. Digital signal processing concepts are used to help relieve analog design complexity, allowing one to reduce cost and power consumption in a reconfigurable design environment. The ideas presented have been used in Texas Instruments to develop two generations of commercial digital RF processors: a single-chip Bluetooth radio and a single-chip GSM radio. We further present details of the RF receiver front end for a GSM radio realized in a 90-nm digital CMOS technology. The circuit consisting of low-noise amplifier, transconductance amplifier, and switching mixer offers 32.5 dB dynamic range with digitally configurable voltage gain of 40 dB down to 7.5 dB. A series of decimation and discrete-time filtering follows the mixer and performs a highly linear second-order lowpass filtering to reject close-in interferers. The front-end gains can be configured with an automatic gain control to select an optimal setting to form a trade-off between noise figure and linearity and to compensate the process and temperature variations. Even under the digital switching activity, noise figure at the 40 dB maximum gain is 1.8 dB and +50 dBm IIP2 at the 34 dB gain. The variation of the input matching versus multiple gains is less than 1 dB. The circuit in total occupies 3.1 mm². The LNA, TA, and mixer consume less than 15.3 mA at a supply voltage of 1.4 V.
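
    The decimation-plus-lowpass chain described above behaves, to first order, like cascaded moving averages followed by down-sampling; the sketch below is a discrete-time caricature of such a charge-domain filter, not the TI front end itself.

```python
import numpy as np

def ma_decimate(x, n, stages=2):
    """Cascade `stages` length-n moving averages, then decimate by n.

    Each moving average is a sinc-shaped lowpass; two in cascade give
    roughly second-order rejection of close-in interferers before the
    sample rate is reduced.
    """
    kernel = np.ones(n) / n
    for _ in range(stages):
        x = np.convolve(x, kernel, mode="same")
    return x[::n]
```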

  13. Real-Time Monitoring of Psychotherapeutic Processes: Concept and Compliance

    Science.gov (United States)

    Schiepek, Günter; Aichhorn, Wolfgang; Gruber, Martin; Strunk, Guido; Bachler, Egon; Aas, Benjamin

    2016-01-01

    Objective: The feasibility of a high-frequency real-time monitoring approach to psychotherapy is outlined and tested for patients' compliance, to evaluate its integration into everyday practice. Criteria concern ecological momentary assessment, the assessment of therapy-related cognitions and emotions, equidistant time sampling, real-time nonlinear time series analysis, continuous participative process control by client and therapist, and the application of idiographic (person-specific) surveys. Methods: The process-outcome monitoring is technically realized by an internet-based device for data collection and data analysis, the Synergetic Navigation System. Its feasibility is documented by a compliance study on 151 clients treated in an inpatient and a day-treatment clinic. Results: We found high compliance rates (mean: 78.3%, median: 89.4%) amongst the respondents, independent of the severity of symptoms or the degree of impairment. Compared to other diagnoses, the compliance rate was lower in the group diagnosed with personality disorders. Conclusion: The results support the feasibility of high-frequency monitoring in routine psychotherapy settings. Daily collection of psychological surveys allows for the assessment of highly resolved, equidistant time series data, which gives insight into the nonlinear qualities of therapeutic change processes (e.g., pattern transitions, critical instabilities). PMID:27199837

  14. Off-Policy Reinforcement Learning: Optimal Operational Control for Two-Time-Scale Industrial Processes.

    Science.gov (United States)

    Li, Jinna; Kiumarsi, Bahare; Chai, Tianyou; Lewis, Frank L; Fan, Jialu

    2017-12-01

    Industrial flow lines are composed of unit processes operating on a fast time scale and performance measurements, known as operational indices, measured on a slower time scale. This paper presents a model-free optimal solution to a class of two-time-scale industrial processes using off-policy reinforcement learning (RL). First, the lower-layer unit-process control loop with a fast sampling period and the upper-layer operational index dynamics on a slow time scale are modeled. Second, a general optimal operational control problem is formulated to optimally prescribe the set-points for the unit industrial process. Then, a zero-sum-game off-policy RL algorithm is developed to find the optimal set-points using data measured in real time. Finally, a simulation experiment on an industrial flotation process shows the effectiveness of the proposed method.

  15. Time-changed Ornstein–Uhlenbeck process

    International Nuclear Information System (INIS)

    Gajda, Janusz; Wyłomańska, Agnieszka

    2015-01-01

    The Ornstein–Uhlenbeck process is one of the most popular systems used for financial data description. However, this process has also been examined in the context of many other phenomena. In this paper we consider the so-called time-changed Ornstein–Uhlenbeck process, in which time is replaced by an inverse subordinator with a general infinitely divisible distribution. Time-changed processes nowadays play an important role in various fields of mathematical physics, chemistry, and biology, as well as in finance. In this paper we examine the main characteristics of the time-changed Ornstein–Uhlenbeck process, such as the covariance function. Moreover, we also prove the formula for a generalized fractional Fokker–Planck equation that describes the one-dimensional probability density function of the analyzed system. For three cases of subordinators we show the special forms of the general formulas obtained. Furthermore, we mention how to simulate the trajectory of the Ornstein–Uhlenbeck process delayed by a general inverse subordinator. (paper)
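
    A trajectory of the time-changed process can be sketched as follows (hedged: an alpha-stable subordinator is used as one concrete infinitely divisible example, via Kanter's formula as commonly quoted, and scale conventions are loose): run the OU process in operational time, build the subordinator path, and read the OU value off at the inverse-subordinator time.

```python
import numpy as np

rng = np.random.default_rng(2)

def positive_stable(alpha, size):
    """One-sided alpha-stable variates (0 < alpha < 1), Kanter's formula."""
    U = rng.uniform(0.0, np.pi, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * U) / np.sin(U) ** (1.0 / alpha)
            * (np.sin((1.0 - alpha) * U) / W) ** ((1.0 - alpha) / alpha))

def time_changed_ou(T, n, alpha=0.8, theta=1.0, sigma=0.5):
    """X(t) = Y(E(t)): OU process Y delayed by the inverse E of an
    alpha-stable subordinator S, with E(t) = inf{tau : S(tau) > t}."""
    dtau = T / n
    y = np.zeros(n + 1)                        # Euler-Maruyama OU in operational time
    for k in range(n):
        y[k + 1] = y[k] - theta * y[k] * dtau + sigma * np.sqrt(dtau) * rng.normal()
    incs = dtau ** (1.0 / alpha) * positive_stable(alpha, n)
    S = np.concatenate(([0.0], np.cumsum(incs)))
    t = np.linspace(0.0, 0.9 * S[-1], 500)     # physical-time grid
    return t, y[np.searchsorted(S, t)]         # E(t) counted in units of dtau

t, x = time_changed_ou(T=10.0, n=5000)
```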

  16. Parameter estimation from observations of first-passage times of the Ornstein-Uhlenbeck process and the Feller process

    DEFF Research Database (Denmark)

    Ditlevsen, Susanne Dalager; Ditlevsen, Ove Dalager

    2008-01-01

    a subjective graphical test of the applicability of the OU process or the Feller process when applied to a reasonably large sample of observed first-passage data. These non-stationary processes have several applications in biomedical research, for example as idealized models of the neuron membrane potential...... random time break through to the material surface and become observable. However, the OU process has as a model of physical phenomena the defect of not being bounded to the negative side. This defect is not present for the Feller process, which therefore may provide a useful modeling alternative...

  18. Adaptive interpolation of discrete-time signals that can be modeled as autoregressive processes

    NARCIS (Netherlands)

    Janssen, A.J.E.M.; Veldhuis, Raymond N.J.; Vries, Lodewijk B.

    1986-01-01

    This paper presents an adaptive algorithm for the restoration of lost sample values in discrete-time signals that can locally be described by means of autoregressive processes. The only restrictions are that the positions of the unknown samples should be known and that they should be embedded in a
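
    In the spirit of the algorithm (a sketch: the AR order, the crude Yule-Walker fit, and the iteration count are assumptions here, not the paper's exact procedure), missing samples can be restored by alternating an AR fit with a linear least-squares solve for the unknown samples:

```python
import numpy as np

def ar_interpolate(x, missing, p=8, iters=3):
    """Restore missing samples of a locally-AR(p) signal (sketch).

    Alternates (1) a Yule-Walker AR fit on the current signal estimate
    with (2) the least-squares choice of the missing samples that
    minimizes the energy of the AR prediction error.
    """
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    missing = np.asarray(missing)
    known = np.setdiff1d(np.arange(n), missing)
    x[missing] = np.interp(missing, known, x[known])       # initial guess
    for _ in range(iters):
        xc = x - x.mean()
        r = np.correlate(xc, xc, "full")[n - 1:] / n       # autocorrelation
        R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
        a = np.linalg.lstsq(R, r[1:p + 1], rcond=None)[0]  # Yule-Walker fit
        b = np.concatenate(([1.0], -a))                    # prediction-error filter
        A = np.zeros((n - p, n))
        for i in range(n - p):                             # e[i] = sum_j b[j]*x[i+p-j]
            A[i, i:i + p + 1] = b[::-1]
        x[missing] = np.linalg.lstsq(A[:, missing],
                                     -A[:, known] @ x[known], rcond=None)[0]
    return x

t = np.linspace(0.0, 1.0, 200)
noisy = np.sin(2 * np.pi * 5 * t) + 0.01 * np.random.default_rng(3).normal(size=200)
restored = ar_interpolate(noisy, np.arange(60, 70))
```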

  19. Identification of continuous-time systems from samples of input ...

    Indian Academy of Sciences (India)

    Abstract. This paper presents an introductory survey of the methods that have been developed for identification of continuous-time systems from samples of input-output data. The two basic approaches may be described as (i) the indirect method, where first a discrete-time model is estimated from the sampled data and then ...

  20. Factors affecting medication-order processing time.

    Science.gov (United States)

    Beaman, M A; Kotzan, J A

    1982-11-01

    The factors affecting medication-order processing time at one hospital were studied. The order processing time was determined by directly observing the time to process randomly selected new drug orders on all three work shifts during two one-week periods. An order could list more than one drug for an individual patient. The observer recorded the nature, location, and cost of the drugs ordered, as well as the time to process the order. The time and type of interruptions also were noted. The time to process a drug order was classified as six dependent variables: (1) total time, (2) work time, (3) check time, (4) waiting time I--time from arrival on the dumbwaiter until work was initiated, (5) waiting time II--time between completion of the work and initiation of checking, and (6) waiting time III--time after the check was completed until the order left on the dumbwaiter. The significant predictors of each of the six dependent variables were determined using stepwise multiple regression. The total time to process a prescription order was 58.33 +/- 48.72 minutes; the urgency status of the order was the only significant determinant of total time. Urgency status also significantly predicted the three waiting-time variables. Interruptions and the number of drugs on the order were significant determinants of work time and check time. Each telephone interruption increased the work time by 1.72 minutes. While the results of this study cannot be generalized to other institutions, pharmacy managers can use the method of determining factors that affect medication-order processing time to identify problem areas in their institutions.

  1. When the mean is not enough: Calculating fixation time distributions in birth-death processes.

    Science.gov (United States)

    Ashcroft, Peter; Traulsen, Arne; Galla, Tobias

    2015-10-01

    Studies of fixation dynamics in Markov processes predominantly focus on the mean time to absorption. This may be inadequate if the distribution is broad and skewed. We compute the distribution of fixation times in one-step birth-death processes with two absorbing states. These are expressed in terms of the spectrum of the process, and we provide different representations as forward-only processes in eigenspace. These allow efficient sampling of fixation time distributions. As an application we study evolutionary game dynamics, where invading mutants can reach fixation or go extinct. We also highlight the median fixation time as a possible analog of mixing times in systems with small mutation rates and no absorbing states, whereas the mean fixation time has no such interpretation.
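
    The paper works with the spectral representation; a brute-force baseline against which such distributions can be checked is direct stochastic simulation of a Moran birth-death process (parameters below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

def moran_absorption_time(N, r, i0=1):
    """Sample one absorption time (fixation or extinction) of a mutant
    with relative fitness r in a Moran process of population size N,
    using exponential waiting times between birth-death events."""
    i, t = i0, 0.0
    while 0 < i < N:
        T_plus = r * i * (N - i) / (r * i + N - i) / N   # mutant count +1
        T_minus = i * (N - i) / (r * i + N - i) / N      # mutant count -1
        total = T_plus + T_minus
        t += rng.exponential(1.0 / total)
        i += 1 if rng.random() < T_plus / total else -1
    return t, i == N                                     # (time, fixed?)

samples = [moran_absorption_time(50, 1.1) for _ in range(2000)]
fix_times = [t for t, fixed in samples if fixed]   # full distribution, not just the mean
```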

  2. On-line sample processing methods in flow analysis

    DEFF Research Database (Denmark)

    Miró, Manuel; Hansen, Elo Harald

    2008-01-01

    In this chapter, the state of the art of flow injection and related approaches thereof for automation and miniaturization of sample processing regardless of the aggregate state of the sample medium is overviewed. The potential of the various generation of flow injection for implementation of in...

  3. Low-sampling-rate ultra-wideband channel estimation using equivalent-time sampling

    KAUST Repository

    Ballal, Tarig

    2014-09-01

    In this paper, a low-sampling-rate scheme for ultra-wideband channel estimation is proposed. The scheme exploits multiple observations generated by transmitting multiple pulses. In the proposed scheme, P pulses are transmitted to produce channel impulse response estimates at a desired sampling rate, while the ADC samples at a rate that is P times slower. To avoid loss of fidelity, the number of sampling periods (based on the desired rate) in the inter-pulse interval is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this case, and to achieve an overall good channel estimation performance, without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. It is shown that this estimator is related to the Bayesian linear minimum mean squared error (LMMSE) estimator. Channel estimation performance of the proposed sub-sampling scheme combined with the new estimator is assessed in simulation. The results show that high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in almost all cases, while in the high SNR regime it also outperforms the LMMSE estimator. In addition to channel estimation, a synchronization method is also proposed that utilizes the same pulse sequence used for channel estimation.
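
    The role of the co-prime condition can be seen in a short, idealized sketch (noiseless and drift-free; names are mine): a slow ADC running continuously steps through every fast-grid phase of the periodic response exactly once, so its samples can simply be re-interleaved.

```python
import numpy as np

P, M = 5, 64                  # ADC is P times slower; M fast samples per pulse period
assert np.gcd(M, P) == 1      # the co-prime condition from the paper

h = np.exp(-np.arange(M) / 8.0) * np.cos(0.9 * np.arange(M))  # toy channel response

# running continuously, the slow ADC hits fast-grid phase (n*P) mod M
# at its n-th sample; co-primality makes these phases a permutation
phases = (np.arange(M) * P) % M
assert len(set(phases)) == M

slow = h[phases]              # what the slow ADC records over P pulse periods
recovered = np.empty(M)
recovered[phases] = slow      # re-interleave onto the fast grid
assert np.allclose(recovered, h)
```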

  4. On the sample transport time of a pneumatic transfer system

    International Nuclear Information System (INIS)

    Kondo, Yoshihide

    1983-01-01

    The counts accumulated in the measuring system are affected by variations in the transport time of the sample in cyclic activation experiments with a mechanical sample transfer system. With the pneumatic transfer system that has been set up, the transport time varies with factors such as the form, size, and weight of the samples and the pneumatic pressure. Understanding the relationships between the transport time and these variable factors is essential for experiments with this transfer system. (author)

  5. Imaging systems and algorithms to analyze biological samples in real-time using mobile phone microscopy.

    Science.gov (United States)

    Shanmugam, Akshaya; Usmani, Mohammad; Mayberry, Addison; Perkins, David L; Holcomb, Daniel E

    2018-01-01

    Miniaturized imaging devices have pushed the boundaries of point-of-care imaging, but existing mobile-phone-based imaging systems do not exploit the full potential of smart phones. This work demonstrates the use of simple imaging configurations to deliver superior image quality and the ability to handle a wide range of biological samples. Results presented in this work are from analysis of fluorescent beads under fluorescence imaging, as well as helminth eggs and freshwater mussel larvae under white light imaging. To demonstrate versatility of the systems, real time analysis and post-processing results of the sample count and sample size are presented in both still images and videos of flowing samples.

  6. REMOTE IN-CELL SAMPLING IMPROVEMENTS PROGRAM AT THE SAVANNAH RIVER SITE (SRS) DEFENSE WASTE PROCESSING FACILITY (DWPF)

    International Nuclear Information System (INIS)

    Marzolf, A

    2007-01-01

    Remote Systems Engineering (RSE) of the Savannah River National Laboratory (SRNL), in combination with Defense Waste Processing Facility (DWPF) Engineering and Operations, has evaluated the existing equipment and processes used in the facility sample cells for 'pulling' samples from the radioactive waste stream and performing in-cell equipment repairs/replacements. RSE has designed and tested equipment for improving remote in-cell sampling evolutions and reducing the time required for in-cell maintenance of existing equipment. The equipment within the present process tank sampling system has been in constant use since the facility start-up over 17 years ago. At present, the method for taking samples within the sample cells produces excessive maintenance and downtime due to frequent failures of the sampling station equipment and manipulator. The location and orientation of many sampling stations within the sample cells are not conducive to manipulator operation; the overextension of manipulators required to perform many in-cell operations is a major cause of manipulator failures. To improve sampling operations and reduce downtime due to equipment maintenance, a Portable Sampling Station (PSS), wireless in-cell cameras, and new commercially available sampling technology have been designed, developed and/or adapted, and tested. The uniqueness of the designs, the results of the scoping tests, and the benefits relative to in-cell operation and waste reduction are presented.

  7. Impact of implementing ISO 9001:2008 standard on the Spanish Renal Research Network biobank sample transfer process.

    Science.gov (United States)

    Cortés, M Alicia; Irrazábal, Emanuel; García-Jerez, Andrea; Bohórquez-Magro, Lourdes; Luengo, Alicia; Ortiz-Arduán, Alberto; Calleros, Laura; Rodríguez-Puyol, Manuel

    2014-01-01

    Biobank certification ISO 9001:2008 aims to improve the management of processes performed. This has two objectives: customer satisfaction and continuous improvement. This paper presents the impact of certification ISO 9001:2008 on the sample transfer process in a Spanish biobank specialising in kidney patient samples. The biobank experienced a large increase in the number of samples between 2009 (12,582 vials) and 2010 (37,042 vials). The biobank of the Spanish Renal Research Network (REDinREN), located at the University of Alcalá, has implemented ISO standard 9001:2008 for the effective management of human material given to research centres. Using surveys, we analysed two periods in the “sample transfer” process. During the first period between 1-10-12 and 26-11-12 (8 weeks), minimal changes were made to correct isolated errors. In the second period, between 7-01-13 and 18-02-13 (6 weeks), we carried out general corrective actions. The identification of problems and implementation of corrective actions for certification allowed: a 70% reduction in the process execution time, a significant increase (200%) in the number of samples processed and a 25% improvement in the process. The increase in the number of samples processed was directly related to process improvement. The certification of ISO standard 9001:2008, obtained in July 2013, allowed an improvement of the REDinREN biobank processes to be achieved, which increased quality and customer satisfaction.

  8. Hydrogen determination using secondary processes of recoil proton interaction with sample material

    International Nuclear Information System (INIS)

    Muminov, V.A.; Khajdarov, R.A.; Navalikhin, L.V.; Pardaev, Eh.

    1980-01-01

    The possibility of determining the hydrogen content of different materials from secondary processes of recoil-proton interaction (under fast-neutron irradiation) with the sample material, which result in characteristic X-ray emission, is studied. The excited radiation is recorded with a detector placed in a protective screen and located at a certain distance from the analyzed object and the neutron source. The method is tested on bromine-containing samples (30% Br, 0.5% H) and tungsten dioxide. The detection limit for hydrogen content is 0.05% at a confidence coefficient of 0.9. The neutron flux was 10³ neutrons/(cm²·s), the measurement time 15-20 minutes, and the sample-to-detector distance 12-15 cm.

  9. Adaptive control of theophylline therapy: importance of blood sampling times.

    Science.gov (United States)

    D'Argenio, D Z; Khakmahd, K

    1983-10-01

    A two-observation protocol for estimating theophylline clearance during a constant-rate intravenous infusion is used to examine the importance of blood sampling schedules with regard to the information content of resulting concentration data. Guided by a theory for calculating maximally informative sample times, population simulations are used to assess the effect of specific sampling times on the precision of resulting clearance estimates and subsequent predictions of theophylline plasma concentrations. The simulations incorporated noise terms for intersubject variability, dosing errors, sample collection errors, and assay error. Clearance was estimated using Chiou's method, least squares, and a Bayesian estimation procedure. The results of these simulations suggest that clinically significant estimation and prediction errors may result when using the above two-point protocol for estimating theophylline clearance if the time separating the two blood samples is less than one population mean elimination half-life.
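
    For context, the two-point clearance estimate attributed to Chiou for a constant-rate infusion is commonly quoted in the form below (hedged: reproduced from memory of the pharmacokinetics literature; R0 is the infusion rate and Vd an assumed distribution volume):

```python
def chiou_clearance(R0, c1, c2, t1, t2, Vd):
    """Two-point (Chiou) clearance estimate during a constant infusion R0,
    from concentrations c1 at time t1 and c2 at time t2:
        CL = 2*R0/(c1 + c2) + 2*Vd*(c1 - c2)/((c1 + c2)*(t2 - t1))
    As the abstract suggests, closely spaced t1 and t2 make the second
    term, and hence the estimate, unstable."""
    return 2 * R0 / (c1 + c2) + 2 * Vd * (c1 - c2) / ((c1 + c2) * (t2 - t1))
```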

  10. Impact of collection container material and holding times on sample integrity for mercury and methylmercury in water

    Energy Technology Data Exchange (ETDEWEB)

    Riscassi, Ami L [ORNL; Miller, Carrie L [ORNL; Brooks, Scott C [ORNL

    2014-01-01

    Mercury (Hg) and methylmercury (MeHg) concentrations in streamwater can vary on short timescales (hourly or less) during storm flow and on a diel cycle; the frequency and timing of sampling required to accurately characterize these dynamics may be difficult to accomplish manually. Automated sampling can assist in sample collection; however, its use has been limited for Hg and MeHg analysis due to concerns about the stability of trace concentrations during extended storage times. We examined the viability of using automated samplers with disposable low-density polyethylene (LDPE) sample bags to collect industrially contaminated streamwater for unfiltered and filtered Hg and MeHg analysis. Specifically, we investigated the effect of holding times ranging from hours to days on streamwater collected during baseflow and storm flow. Unfiltered and filtered Hg and MeHg concentrations decreased with increasing time prior to sample processing; holding times of 24 hours or less resulted in concentration changes (mean 11 ± 7% different) similar to the variability in duplicates collected manually during analogous field conditions (mean 7 ± 10% different). Comparisons of samples collected with manual and automated techniques throughout a year, over a wide range of stream conditions, were also similar to the differences observed between duplicate grab samples. These results demonstrate that automated sampling into LDPE bags with holding times of 24 hours or less can be used effectively to collect streamwater for Hg and MeHg analysis, and they encourage the testing of these materials and methods for implementation in other aqueous systems where high-frequency sampling is warranted.

  11. Using forbidden ordinal patterns to detect determinism in irregularly sampled time series.

    Science.gov (United States)

    Kulp, C W; Chobot, J M; Niskala, B J; Needhammer, C J

    2016-02-01

    It is known that when symbolizing a time series into ordinal patterns using the Bandt-Pompe (BP) methodology, there will be ordinal patterns, called forbidden patterns, that do not occur in a deterministic series. The existence of forbidden patterns can be used to identify deterministic dynamics. In this paper, the ability to use forbidden patterns to detect determinism in irregularly sampled time series is tested on data generated from a continuous model system. The study is done in three parts. First, the effects of sampling time on the number of forbidden patterns are studied on regularly sampled time series. The next two parts focus on two types of irregular sampling: missing data and timing jitter. It is shown that forbidden patterns can be used to detect determinism in irregularly sampled time series for low degrees of sampling irregularity (as defined in the paper). In addition, comments are made about the appropriateness of using the BP methodology to symbolize irregularly sampled time series.
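
    Counting forbidden Bandt-Pompe patterns takes only a few lines for regularly sampled data (the order d and series lengths below are arbitrary; the paper's subject, robustness of the count under missing data and jitter, is not reproduced here):

```python
import numpy as np
from itertools import permutations

def forbidden_patterns(x, d=4):
    """Return the order-d ordinal patterns that never occur in x.

    Deterministic series leave some of the d! patterns forbidden;
    sufficiently long noise series visit them all.
    """
    seen = {tuple(np.argsort(x[i:i + d])) for i in range(len(x) - d + 1)}
    return [p for p in permutations(range(d)) if p not in seen]

x = np.empty(5000)
x[0] = 0.4
for i in range(4999):                     # chaotic logistic map (deterministic)
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])
print(len(forbidden_patterns(x)))                                      # > 0
print(len(forbidden_patterns(np.random.default_rng(5).random(5000))))  # ~ 0
```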

  12. Practical experience with IEEE 1588 high precision time synchronization in electrical substation based on IEC 61850 process bus

    Energy Technology Data Exchange (ETDEWEB)

    Moore, R.; Goraj, M.J.; McGhee, J. [RuggedCom Inc., Concord, ON (Canada)

    2010-07-01

    This paper discussed a time synchronization and dynamic multicast filtering procedure conducted on an IEC 61850 process bus. The Institute of Electrical and Electronics Engineers (IEEE) 1588 time synchronization and dynamic multicast filtering procedure was conducted at a substation equipped with non-conventional instrument transformers (NCIT) and intelligent circuit breakers. The process bus interconnected IEDs within a bay that included a real-time sampled value (SV) measurement system. The system was designed to reduce the use of copper wiring and to eliminate high-energy signal processes. Digitized sampled measured values were sent from the electronic instrument transformers to protection and control relays. A merging unit was used to enable the transmission of the digitized current and voltage measurements across an Ethernet network. Two sampling rates were supplied for power system monitoring and protection applications. The merging units continuously sent sampled values of the currents and voltages acquired from the primary equipment. Precision Time Protocol (PTP) systems were discussed, and issues related to time synchronization were reviewed. A network topology was provided. 4 refs., 4 figs.
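
    At its core, IEEE 1588 estimates the slave-clock offset from the four timestamps of a Sync/Delay_Req exchange, assuming a symmetric network path (the standard textbook relations, not vendor-specific code):

```python
def ptp_offset_delay(t1, t2, t3, t4):
    """IEEE 1588 exchange: master sends Sync at t1 (master clock), slave
    receives it at t2 (slave clock); slave sends Delay_Req at t3, master
    receives it at t4.  For a symmetric path:
        offset = ((t2 - t1) - (t4 - t3)) / 2
        delay  = ((t2 - t1) + (t4 - t3)) / 2
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

# slave clock running 1.5 time-units ahead over a 0.5-unit path
print(ptp_offset_delay(t1=0.0, t2=2.0, t3=10.0, t4=9.0))  # (1.5, 0.5)
```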

  13. On a Robust MaxEnt Process Regression Model with Sample-Selection

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2018-04-01

    In a regression analysis, sample-selection bias arises when a dependent variable is only partially observed as a result of the sample selection. This study introduces a Maximum Entropy (MaxEnt) process regression model that assumes a MaxEnt prior distribution for its nonparametric regression function, and finds that the MaxEnt process regression model includes the well-known Gaussian process regression (GPR) model as a special case. This special MaxEnt process regression model, i.e., the GPR model, is then generalized to obtain a robust sample-selection Gaussian process regression (RSGPR) model that deals with non-normal data in the sample selection. Various properties of the RSGPR model are established, including the stochastic representation, distributional hierarchy, and magnitude of the sample-selection bias. These properties are used in the paper to develop a hierarchical Bayesian methodology to estimate the model. This involves a simple and computationally feasible Markov chain Monte Carlo algorithm that avoids analytical or numerical derivatives of the log-likelihood function of the model. The performance of the RSGPR model, in terms of sample-selection bias correction, robustness to non-normality, and prediction, is demonstrated through simulation results that attest to its good finite-sample performance.
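
    The GPR special case that the paper recovers can be sketched in a few lines (a squared-exponential kernel and fixed hyperparameters are assumed; the robust sample-selection machinery of the RSGPR model is not shown):

```python
import numpy as np

def gpr_predict(x_tr, y_tr, x_te, ell=1.0, sf=1.0, noise=1e-2):
    """Posterior mean of plain 1-D Gaussian process regression."""
    def k(a, b):  # squared-exponential covariance
        return sf ** 2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)
    K = k(x_tr, x_tr) + noise * np.eye(len(x_tr))
    return k(x_te, x_tr) @ np.linalg.solve(K, y_tr)

x = np.linspace(0.0, 5.0, 30)
y = np.sin(x) + 0.1 * np.random.default_rng(6).normal(size=30)
print(gpr_predict(x, y, np.array([2.5])))   # close to sin(2.5)
```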

  14. Storm real-time processing cookbook

    CERN Document Server

    Anderson, Quinton

    2013-01-01

    A cookbook with plenty of practical recipes for different uses of Storm. If you are a Java developer with basic knowledge of real-time processing and would like to learn Storm to process unbounded streams of data in real time, then this book is for you.

  15. A novel approach to process carbonate samples for radiocarbon measurements with helium carrier gas

    Energy Technology Data Exchange (ETDEWEB)

    Wacker, L., E-mail: wacker@phys.ethz.ch [Laboratory of Ion Beam Physics, ETH Zurich, 8093 Zurich (Switzerland); Fueloep, R.-H. [Institute of Geology and Mineralogy, University of Cologne, 50674 Cologne (Germany); Hajdas, I. [Laboratory of Ion Beam Physics, ETH Zurich, 8093 Zurich (Switzerland); Molnar, M. [Laboratory of Ion Beam Physics, ETH Zurich, 8093 Zurich (Switzerland); Institute of Nuclear Research, Hungarian Academy of Sciences, 4026 Debrecen (Hungary); Rethemeyer, J. [Institute of Geology and Mineralogy, University of Cologne, 50674 Cologne (Germany)

    2013-01-15

    Most laboratories prepare carbonate samples for radiocarbon analysis by acid decomposition in evacuated glass tubes and subsequent reduction of the evolved CO₂ to graphite in self-made reduction manifolds. This process is time consuming and labor intensive. In this work, we have tested a new approach for the preparation of carbonate samples in which any high-vacuum system is avoided and helium is used as a carrier gas. The liberation of CO₂ from carbonates with phosphoric acid is performed much as in stable isotope ratio mass spectrometry, where CO₂ is released with acid in a septum-sealed tube under a helium atmosphere. The CO₂ formed is then flushed in a helium flow by means of a double-walled needle mounted from the tubes to the zeolite trap of the automated graphitization equipment (AGE), which essentially replaces the elemental analyzer normally used for the combustion of organic samples. The process can be fully automated, from sampling the released CO₂ in the septum-sealed tubes with a commercially available auto-sampler to graphitization with the AGE. The new method yields low sample blanks of about 50,000 years. Results for processed reference materials (IAEA-C2, FIRI-C) are in agreement with their consensus values.

  16. Surface studies of plasma processed Nb samples

    International Nuclear Information System (INIS)

    Tyagi, Puneet V.; Doleans, Marc; Hannah, Brian S.; Afanador, Ralph; Stewart, Stephen; Mammosser, John; Howell, Matthew P; Saunders, Jeffrey W; Degraff, Brian D; Kim, Sang-Ho

    2015-01-01

    Contaminants present at the top surface of superconducting radio frequency (SRF) cavities can act as field emitters and restrict the cavity accelerating gradient. A room-temperature in-situ plasma processing technology for SRF cavities, aimed at removing hydrocarbons from the inner surface of cavities, has recently been developed at the Spallation Neutron Source (SNS). Surface studies of the plasma-processed Nb samples by secondary ion mass spectrometry (SIMS) and scanning Kelvin probe (SKP) showed that the Ne/O₂ plasma processing is very effective at removing carbonaceous contaminants from the top surface and improves the surface work function by 0.5 to 1.0 eV.

  17. A general theory on frequency and time-frequency analysis of irregularly sampled time series based on projection methods - Part 2: Extension to time-frequency analysis

    Science.gov (United States)

    Lenoir, Guillaume; Crucifix, Michel

    2018-03-01

    Geophysical time series are sometimes sampled irregularly along the time axis. The situation is particularly frequent in palaeoclimatology. Yet, there is so far no general framework for handling the continuous wavelet transform when the time sampling is irregular. Here we provide such a framework. To this end, we define the scalogram as the continuous-wavelet-transform equivalent of the extended Lomb-Scargle periodogram defined in Part 1 of this study (Lenoir and Crucifix, 2018). The signal being analysed is modelled as the sum of a locally periodic component in the time-frequency plane, a polynomial trend, and a background noise. The mother wavelet adopted here is the Morlet wavelet classically used in geophysical applications. The background noise model is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, which is more general than the traditional Gaussian white and red noise processes. The scalogram is smoothed by averaging over neighbouring times in order to reduce its variance. The Shannon-Nyquist exclusion zone is however defined as the area corrupted by local aliasing issues. The local amplitude in the time-frequency plane is then estimated with least-squares methods. We also derive an approximate formula linking the squared amplitude and the scalogram. Based on this property, we define a new analysis tool: the weighted smoothed scalogram, which we recommend for most analyses. The estimated signal amplitude also gives access to band and ridge filtering. Finally, we design a test of significance for the weighted smoothed scalogram against the stationary Gaussian CARMA background noise, and provide algorithms for computing confidence levels, either analytically or with Monte Carlo Markov chain methods. All the analysis tools presented in this article are available to the reader in the Python package WAVEPAL.
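
    The amplitude-estimation idea reduces to a short weighted least-squares computation on the irregular grid itself (a simplified stand-in for the WAVEPAL estimator: a Gaussian Morlet-like window, a local mean instead of the polynomial trend, and no CARMA noise model):

```python
import numpy as np

def local_amplitude(t, y, tau, freq, n_cycles=6.0):
    """Least-squares local harmonic amplitude at time tau and frequency
    freq, Gaussian-windowed as in a Morlet analysis; works directly on
    irregular time grids, with no interpolation."""
    s = n_cycles / (2.0 * np.pi * freq)            # window width ~ Morlet scale
    w = np.exp(-0.5 * ((t - tau) / s) ** 2)        # Gaussian weights
    X = np.column_stack([np.cos(2 * np.pi * freq * t),
                         np.sin(2 * np.pi * freq * t),
                         np.ones_like(t)])         # harmonic + local mean
    coef = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)[0]
    return np.hypot(coef[0], coef[1])

rng = np.random.default_rng(7)
t = np.sort(rng.uniform(0.0, 100.0, 400))          # irregular sampling
y = np.sin(2 * np.pi * 0.2 * t) + 0.3 * rng.normal(size=400)
print(local_amplitude(t, y, tau=50.0, freq=0.2))   # ~ 1.0
```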

  18. Recommended practice for process sampling for partial pressure analysis

    International Nuclear Information System (INIS)

    Blessing, James E.; Ellefson, Robert E.; Raby, Bruce A.; Brucker, Gerardo A.; Waits, Robert K.

    2007-01-01

    This Recommended Practice describes and recommends various procedures and types of apparatus for obtaining representative samples of process gases from >10⁻² Pa (10⁻⁴ Torr) for partial pressure analysis using a mass spectrometer. The document was prepared by a subcommittee of the Recommended Practices Committee of the American Vacuum Society. The subcommittee was comprised of vacuum users and manufacturers of mass spectrometer partial pressure analyzers who have practical experience in the sampling of process gas atmospheres

  19. A Real-Time Image Acquisition And Processing System For A RISC-Based Microcomputer

    Science.gov (United States)

    Luckman, Adrian J.; Allinson, Nigel M.

    1989-03-01

    A low cost image acquisition and processing system has been developed for the Acorn Archimedes microcomputer. Using a Reduced Instruction Set Computer (RISC) architecture, the ARM (Acorn Risc Machine) processor provides instruction speeds suitable for image processing applications. The associated improvement in data transfer rate has allowed real-time video image acquisition without the need for frame-store memory external to the microcomputer. The system is comprised of real-time video digitising hardware which interfaces directly to the Archimedes memory, and software to provide an integrated image acquisition and processing environment. The hardware can digitise a video signal at up to 640 samples per video line with programmable parameters such as sampling rate and gain. Software support includes a work environment for image capture and processing with pixel, neighbourhood and global operators. A friendly user interface is provided with the help of the Archimedes Operating System WIMP (Windows, Icons, Mouse and Pointer) Manager. Windows provide a convenient way of handling images on the screen and program control is directed mostly by pop-up menus.

  20. Effect of Temperature, Time, and Material Thickness on the Dehydration Process of Tomato

    Directory of Open Access Journals (Sweden)

    A. F. K. Correia

    2015-01-01

    This study aimed to evaluate the effects of temperature, time, and material thickness on the adiabatic drying of tomato fruits. Dehydration, a simple and inexpensive process compared to other conservation methods, is widely used in the food industry to ensure a long shelf life for the product owing to its low water activity. The goal was to find processing conditions that avoid losses and maintain product quality. Factorial design and response surface methodology were applied to fit predictive mathematical models. In the dehydration of tomatoes through the adiabatic process, temperature, time, and sample thickness, which contribute greatly to the physicochemical and sensory characteristics of the final product, were evaluated. The optimum drying conditions were 60°C with the lowest thickness level and the shortest time.

  1. Direct analysis in real time mass spectrometry, a process analytical technology tool for real-time process monitoring in botanical drug manufacturing.

    Science.gov (United States)

    Wang, Lu; Zeng, Shanshan; Chen, Teng; Qu, Haibin

    2014-03-01

    A promising process analytical technology (PAT) tool has been introduced for batch process monitoring. Direct analysis in real time mass spectrometry (DART-MS), a means of rapid fingerprint analysis, was applied to a percolation process with multi-constituent substances for an anti-cancer botanical preparation. Fifteen batches were carried out, including ten normal operations and five abnormal batches with artificial variations. The multivariate data obtained were analyzed with a multi-way partial least squares (MPLS) model. Control trajectories were derived from eight normal batches, and the qualification was tested by R² and Q². The accuracy and diagnostic capability of the batch model were then validated on the remaining batches. Assisted by high performance liquid chromatography (HPLC) determination, process faults were explained by the corresponding variable contributions. Furthermore, a batch-level model was developed to compare and assess the model performance. The present study has demonstrated that DART-MS is very promising for process monitoring in botanical manufacturing. Compared with general PAT tools, DART-MS offers a particular account of effective compositions and can potentially be used to improve batch quality and process consistency of samples in complex matrices.

  2. Perfluoroalkyl Acid Concentrations in Blood Samples Subjected to Transportation and Processing Delay

    DEFF Research Database (Denmark)

    Bach, Cathrine Carlsen; Henriksen, Tine Brink; Bossi, Rossana

    2015-01-01

    and transportation prior to processing and samples with immediate processing and freezing. METHODS: Pregnant women recruited at Aarhus University Hospital, Denmark, (n = 88) provided paired blood samples. For each pair of samples, one was immediately processed and plasma was frozen, and the other was delayed...... and transported as whole blood before processing and freezing of plasma (similar to the Danish National Birth Cohort). We measured 12 perfluoroalkyl acids and present results for compounds with more than 50% of samples above the lower limit of quantification. RESULTS: For samples taken in the winter, relative...... differences between the paired samples ranged between -77 and +38% for individual perfluoroalkyl acids. In most cases concentrations were lower in the delayed and transported samples, e.g. the relative difference was -29% (95% confidence interval -30; -27) for perfluorooctane sulfonate. For perfluorooctanoate...

  3. Transformation-cost time-series method for analyzing irregularly sampled data.

    Science.gov (United States)

    Ozken, Ibrahim; Eroglu, Deniz; Stemler, Thomas; Marwan, Norbert; Bagci, G Baris; Kurths, Jürgen

    2015-06-01

    Irregular sampling of data sets is one of the challenges often encountered in time-series analysis, since traditional methods cannot be applied and the frequently used interpolation approach can corrupt the data and bias the subsequent analysis. Here we present the TrAnsformation-Cost Time-Series (TACTS) method, which allows us to analyze irregularly sampled data sets without degrading the quality of the data set. Instead of using interpolation, we consider time-series segments and determine how close they are to each other by determining the cost needed to transform one segment into the following one. Using a limited set of operations (with associated costs) to transform the time-series segments, we determine a new time series: our transformation-cost time series. This cost time series is regularly sampled and can be analyzed using standard methods. While our main interest is the analysis of paleoclimate data, we develop our method using numerical examples such as the logistic map and the Rössler oscillator. The numerical data allow us to test the stability of our method against noise and for different irregular samplings. In addition, we provide guidance on how to choose the associated costs based on the time series at hand. The usefulness of the TACTS method is demonstrated using speleothem data from the Secret Cave in Borneo, a good proxy for paleoclimatic variability in the monsoon activity around the maritime continent.
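
    A deliberately simplified transformation cost between two event segments gives the flavor of the method (hedged: the real TACTS cost uses an optimized event matching plus amplitude terms, and the weights here are arbitrary):

```python
import numpy as np

def segment_cost(a, b, lam_shift=1.0, lam_edit=2.0):
    """Toy transformation cost between event segments a and b (arrays of
    event times): order-preserving matched events pay a time-shift cost,
    surplus events a creation/deletion cost."""
    a, b = np.sort(a), np.sort(b)
    m = min(len(a), len(b))
    return lam_shift * np.abs(a[:m] - b[:m]).sum() + lam_edit * abs(len(a) - len(b))

# consecutive segments of an irregular series -> one regularly sampled cost series
print(segment_cost(np.array([0.1, 0.5, 0.9]), np.array([0.2, 0.6])))
```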

  5. Why in situ, real-time characterization of thin film growth processes?

    International Nuclear Information System (INIS)

    Auciello, O.; Krauss, A.R.

    1995-01-01

    Since thin-film growth occurs at the surface, the analytical methods should be highly surface-specific, although subsurface diffusion and chemical processes also affect film properties. Sampling depth and ambient-gas compatibility are key factors which must be considered when choosing in situ probes of thin-film growth phenomena. In most cases, the sampling depth depends on the mean range of the exit species (ion, photon, or electron) in the sample. The techniques discussed in this issue of the MRS Bulletin (1) have been chosen because they may be used for in situ, real-time analysis of film-growth phenomena in vacuum and in the presence of ambient gases resulting either from the deposition process or from a requirement for the production of the desired chemical phase. A second criterion for inclusion is that the instrumentation be sufficiently compact and inexpensive to permit use as a dedicated tool in a thin-film deposition system.

  6. Sampling genetic diversity in the sympatrically and allopatrically speciating Midas cichlid species complex over a 16 year time series

    Directory of Open Access Journals (Sweden)

    Bunje, Paul ME

    2007-02-01

    Background: Speciation often occurs in complex or uncertain temporal and spatial contexts. Processes such as reinforcement, allopatric divergence, and assortative mating can proceed at different rates and with different strengths as populations diverge. The Central American Midas cichlid fish species complex is an important case study for understanding the processes of speciation. Previous analyses have demonstrated that allopatric processes led to species formation among the lakes of Nicaragua, and that sympatric speciation is occurring within at least one crater lake. However, since speciation is an ongoing process and sampling the genetic diversity of such lineages can be biased by collection scheme or random factors, it is important to evaluate the robustness of conclusions drawn from individual time samples. Results: In order to assess the validity and reliability of inferences based on different genetic samples, we have analyzed fish from several lakes in Nicaragua sampled at three different times over 16 years. In addition, this time series allows us to analyze the population genetic changes that have occurred between lakes, where allopatric speciation has operated, as well as between different species within lakes, some of which have originated by sympatric speciation. Focusing on commonly used genetic markers, we have analyzed both DNA sequences from the complete mitochondrial control region and nuclear DNA variation at ten microsatellite loci from these populations, sampled three times in a 16 year period, to develop a robust estimate of the population genetic history of these diversifying lineages. Conclusion: The conclusions from previous work are well supported by our comprehensive analysis. In particular, we find that the genetic diversity of derived crater lake populations is lower than that of the source population regardless of when and how each population was sampled. Furthermore, changes in various estimates of

  7. RAPID PROCESSING OF ARCHIVAL TISSUE SAMPLES FOR PROTEOMIC ANALYSIS USING PRESSURE-CYCLING TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    Vinuth N. Puttamallesh

    2017-06-01

    The advent of mass spectrometry based proteomics has revolutionized our ability to study proteins from biological specimens in a high-throughput manner. Unlike cell line based studies, biomedical research involving tissue specimens is often challenging due to limited sample availability. In addition, investigation of clinically relevant research questions often requires an enormous amount of time for prospective sample collection. Formalin fixed paraffin embedded (FFPE) archived tissue samples are a rich source of specimens for biomedical research. However, there are several challenges associated with analysing FFPE samples; protein cross-linking and degradation particularly affect proteomic analysis. We demonstrate that a barocycler that uses pressure-cycling technology enables efficient protein extraction and processing of small amounts of FFPE tissue samples for proteomic analysis. We identified 3,525 proteins from six 10 µm esophageal squamous cell carcinoma (ESCC) tissue sections. The barocycler allows efficient protein extraction and proteolytic digestion of proteins from FFPE tissue sections on par with conventional methods.

  8. Dynamic failure of dry and fully saturated limestone samples based on incubation time concept

    Directory of Open Access Journals (Sweden)

    Yuri V. Petrov

    2017-02-01

    This paper outlines the results of an experimental study of dynamic rock failure based on the comparison of dry and saturated limestone samples obtained during dynamic compression and split tests. The tests were performed using the Kolsky method and its modifications for dynamic splitting. The mechanical data (e.g. strength, time and energy characteristics) of this material at high strain rates are obtained. It is shown that these characteristics are sensitive to the strain rate. A unified interpretation of these rate effects, based on the structural-temporal approach, is hereby presented. It is demonstrated that the temporal dependence of the dynamic compressive and split tensile strengths of dry and saturated limestone samples can be predicted by the incubation time criterion. Previously discovered possibilities to optimize (minimize) the energy input for the failure process are discussed in connection with industrial rock failure processes. It is shown that the optimal energy input value associated with the critical load, which is required to initiate failure in the rock media, strongly depends on the incubation time and the impact duration. The optimal load shapes, which minimize the momentum for a single failure impact, are demonstrated. Through this investigation, a possible approach to reduce the specific energy required for rock cutting by means of high-frequency vibrations is also discussed.

  9. Perfluoroalkyl Acid Concentrations in Blood Samples Subjected to Transportation and Processing Delay.

    Science.gov (United States)

    Bach, Cathrine Carlsen; Henriksen, Tine Brink; Bossi, Rossana; Bech, Bodil Hammer; Fuglsang, Jens; Olsen, Jørn; Nohr, Ellen Aagaard

    2015-01-01

    In studies of perfluoroalkyl acids, the validity and comparability of measured concentrations may be affected by differences in the handling of biospecimens. We aimed to investigate whether measured plasma levels of perfluoroalkyl acids differed between blood samples subjected to delay and transportation prior to processing and samples with immediate processing and freezing. Pregnant women recruited at Aarhus University Hospital, Denmark, (n = 88) provided paired blood samples. For each pair of samples, one was immediately processed and plasma was frozen, and the other was delayed and transported as whole blood before processing and freezing of plasma (similar to the Danish National Birth Cohort). We measured 12 perfluoroalkyl acids and present results for compounds with more than 50% of samples above the lower limit of quantification. For samples taken in the winter, relative differences between the paired samples ranged between -77 and +38% for individual perfluoroalkyl acids. In most cases concentrations were lower in the delayed and transported samples, e.g. the relative difference was -29% (95% confidence interval -30; -27) for perfluorooctane sulfonate. For perfluorooctanoate there was no difference between the two setups [corresponding estimate 1% (0, 3)]. Differences were negligible in the summer for all compounds. Transport of blood samples and processing delay, similar to conditions applied in some large, population-based studies, may affect measured perfluoroalkyl acid concentrations, mainly when outdoor temperatures are low. Attention to processing conditions is needed in studies of perfluoroalkyl acid exposure in humans.

  10. Target Tracking of a Linear Time Invariant System under Irregular Sampling

    Directory of Open Access Journals (Sweden)

    Jin Xue-Bo

    2012-11-01

    Due to event-triggered sampling, or with the aim of reducing data storage, many tracking applications encounter irregular sampling times. By calculating the matrix exponential using an inverse Laplace transform, this paper transforms the irregular-sampling tracking problem into the problem of tracking a system with time-varying parameters. Using the common Kalman filter, the developed method is applied to target tracking for both a simulated trajectory and video tracking. The simulation experiments show that good estimation performance can be obtained even when the measurement sampling times are highly irregular.
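
    A minimal sketch of the underlying idea follows: with irregular timestamps, the discrete transition matrix becomes time-varying, F_k = exp(A dt_k), after which a standard Kalman filter applies unchanged. The constant-velocity model and noise levels are assumptions, and scipy's expm is used here instead of the paper's inverse-Laplace-transform evaluation.

```python
# Minimal sketch: discretize a continuous-time LTI model with a time-varying
# step so a standard Kalman filter can track under irregular sampling.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # constant-velocity dynamics
H = np.array([[1.0, 0.0]])               # position-only measurements
Q0 = np.diag([0.0, 0.1])                 # continuous-time process noise level
R = np.array([[0.5]])

x, P = np.zeros(2), np.eye(2)
t_prev = 0.0
for t_k, z_k in [(0.9, 0.8), (1.3, 1.4), (2.8, 2.9)]:  # irregular timestamps
    dt = t_k - t_prev
    F = expm(A * dt)                     # exact transition matrix for this gap
    x = F @ x                            # predict
    P = F @ P @ F.T + Q0 * dt            # crude process-noise discretization
    S = H @ P @ H.T + R                  # standard Kalman update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z_k]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    t_prev = t_k
print("state estimate:", x)
```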

  11. An Automated Sample Processing System for Planetary Exploration

    Science.gov (United States)

    Soto, Juancarlos; Lasnik, James; Roark, Shane; Beegle, Luther

    2012-01-01

    An Automated Sample Processing System (ASPS) for wet chemistry processing of organic materials on the surface of Mars has been jointly developed by Ball Aerospace and the Jet Propulsion Laboratory. The mechanism has been built and tested to demonstrate Technology Readiness Level (TRL) 4. This paper describes the function of the system, the mechanism design, lessons learned, and several challenges that were overcome.

  12. Two-Agent Single-Machine Scheduling of Jobs with Time-Dependent Processing Times and Ready Times

    Directory of Open Access Journals (Sweden)

    Jan-Yee Kung

    2013-01-01

    Scheduling involving jobs with time-dependent processing times has recently attracted much research attention. However, multiagent scheduling with simultaneous consideration of jobs with time-dependent processing times and ready times is relatively unexplored. Inspired by this observation, we study a two-agent single-machine scheduling problem in which the jobs have both time-dependent processing times and ready times. We consider the model in which the actual processing time of a job of the first agent is a decreasing function of its scheduled position, while the actual processing time of a job of the second agent is an increasing function of its scheduled position. In addition, each job has a different ready time. The objective is to minimize the total completion time of the jobs of the first agent with the restriction that no tardy job is allowed for the second agent. We propose a branch-and-bound algorithm and several genetic algorithms to obtain optimal and near-optimal solutions for the problem, respectively. We also present extensive computational experiments to test the proposed algorithms and examine the impacts of different problem parameters on their performance.
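
    The sketch below evaluates a single candidate sequence for this kind of model: agent-1 jobs shorten with their scheduled position while agent-2 jobs lengthen, ready times are respected, and the objective pair (agent-1 total completion time, agent-2 tardiness flag) is returned. The particular positional functions pos**a are illustrative assumptions, not the paper's exact model.

```python
# Minimal sketch of evaluating one sequence in a two-agent, position-dependent
# processing-time model with ready times. All parameters are assumptions.
def evaluate(seq, base, ready, due, agent, a1=-0.2, a2=0.1):
    """Return (total completion time of agent-1 jobs, agent-2 tardy flag)."""
    t, total_a, tardy_b = 0.0, 0.0, False
    for pos, j in enumerate(seq, start=1):
        rate = a1 if agent[j] == 1 else a2
        p = base[j] * pos ** rate          # position-dependent actual time
        t = max(t, ready[j]) + p           # machine waits for the ready time
        if agent[j] == 1:
            total_a += t                   # agent 1: sum of completion times
        elif t > due[j]:
            tardy_b = True                 # agent 2: no tardy job is allowed
    return total_a, tardy_b

base  = {0: 4.0, 1: 3.0, 2: 5.0}
ready = {0: 0.0, 1: 1.0, 2: 0.0}
due   = {2: 20.0}                          # only the agent-2 job has a due date
agent = {0: 1, 1: 1, 2: 2}
print(evaluate([0, 1, 2], base, ready, due, agent))
```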

  13. Output Information Based Fault-Tolerant Iterative Learning Control for Dual-Rate Sampling Process with Disturbances and Output Delay

    Directory of Open Access Journals (Sweden)

    Hongfeng Tao

    2018-01-01

    For a class of single-input single-output (SISO) dual-rate sampling processes with disturbances and output delay, this paper presents a robust fault-tolerant iterative learning control algorithm based on output information. Firstly, the dual-rate sampling process with output delay is transformed, using lifting technology, into a discrete state-space model with a slow sampling rate and no time delay; then an output-information-based fault-tolerant iterative learning control scheme is designed, and the control process is turned into an equivalent two-dimensional (2D) repetitive process. Moreover, based on repetitive process stability theory, sufficient conditions for the stability of the system and a design method for the robust controller are given in terms of the linear matrix inequality (LMI) technique. Finally, flow control simulations of two flow tanks in series demonstrate the feasibility and effectiveness of the proposed method.

  14. Assessing the precision of a time-sampling-based study among GPs: balancing sample size and measurement frequency.

    Science.gov (United States)

    van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald

    2017-12-04

    Our research is based on a time-sampling technique, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In that study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The sample size required for this method is important for health workforce planners to know if they want to apply it to target groups who are hard to reach or if fewer resources are available. In this time-sampling method, however, standard power analysis is not sufficient for calculating the required sample size, as it accounts only for sampling fluctuation and not for the fluctuation of the measurements taken from each participant. We investigated the impact of the number of participants and of the frequency of measurements per participant upon the confidence intervals (CIs) for the hours worked per week, performing statistical analyses of the time-use data we obtained from GPs. Ninety-five percent CIs were calculated, using equations and simulation techniques, for various numbers of GPs included in the dataset and various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 to 3 h as the number of GPs increased from one to 50. Beyond that, precision continued to increase with additional GPs, but with diminishing returns. Likewise, the analyses showed how the number of participants required decreases if more measurements per participant are taken. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires only 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the
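
    A small simulation in the spirit of this trade-off is sketched below: confidence-interval half-width for mean weekly working hours as a function of the number of GPs and the number of SMS measurements per GP. All distributional parameters are invented for illustration and do not reproduce the study's estimates.

```python
# Minimal sketch of the sample-size / measurement-frequency trade-off.
import numpy as np

rng = np.random.default_rng(2)

def ci_halfwidth(n_gps, n_msgs, true_mean=45.0, sd_between=8.0, reps=1000):
    halves = []
    for _ in range(reps):
        gp_hours = rng.normal(true_mean, sd_between, size=n_gps)
        p = np.clip(gp_hours / (7 * 24), 0.0, 1.0)  # chance of "working" per SMS
        est = rng.binomial(n_msgs, p) / n_msgs * 7 * 24
        halves.append(1.96 * est.std(ddof=1) / np.sqrt(n_gps))
    return float(np.mean(halves))

for n_gps in (50, 100, 300):
    for n_msgs in (56, 168):   # one SMS per 3 h vs one per hour, over one week
        print(n_gps, n_msgs, round(ci_halfwidth(n_gps, n_msgs), 1))
```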

  15. Non-Cartesian MRI scan time reduction through sparse sampling

    NARCIS (Netherlands)

    Wajer, F.T.A.W.

    2001-01-01

    Magnetic resonance imaging (MRI) signals are measured in the Fourier domain, also called k-space. Samples of the MRI signal cannot be taken at will, but lie along k-space trajectories determined by the magnetic field gradients.

  16. Design of FPGA based high-speed data acquisition and real-time data processing system on J-TEXT tokamak

    International Nuclear Information System (INIS)

    Zheng, W.; Liu, R.; Zhang, M.; Zhuang, G.; Yuan, T.

    2014-01-01

    Highlights: • A data acquisition system for the polarimeter–interferometer diagnostic on the J-TEXT tokamak, based on FPGA and PXIe devices. • The system provides powerful data acquisition and real-time data processing performance. • Users can implement different data processing applications on the FPGA in a short time. • The system supports EPICS and has been integrated into the J-TEXT CODAC system. - Abstract: Tokamak experiments require high-speed data acquisition and processing systems. In traditional data acquisition systems, the sampling rate, channel count and processing speed are limited by bus throughput and CPU speed. This paper presents a data acquisition and processing system based on FPGA. The data can be processed in real time before being passed to the CPU. It provides processing capability for more channels at higher sampling rates than a traditional data acquisition system while ensuring deterministic real-time performance. A working prototype was developed for the newly built polarimeter–interferometer diagnostic on the Joint Texas Experimental Tokamak (J-TEXT). It provides 16 channels with a 120 MHz maximum sampling rate and 16-bit resolution. The onboard FPGA is able to calculate the plasma electron density and the Faraday rotation angle. A RAID 5 storage device providing 700 MB/s read-write speed is adopted to buffer the data to the hard disk continuously for better performance.

  17. Solubility of airborne uranium samples from uranium processing plant

    International Nuclear Information System (INIS)

    Kravchik, T.; Oved, S.; Sarah, R.; Gonen, R.; Paz-Tal, O.; Pelled, O.; German, U.; Tshuva, A.

    2005-01-01

    During the production and machining of uranium metal, aerosols might be released into the air. Inhalation of these aerosols is the main route of internal exposure of workers. To assess the radiation dose from the intake of these uranium compounds it is necessary to know their absorption type, based on their dissolution rate in the extracellular aqueous environment of lung fluid. In the ICRP-66 model, the International Commission on Radiological Protection (ICRP) has assigned UF4 and UO3 to absorption type M (blood absorption with a 10% fraction absorbed with a half-time of 10 minutes and a 90% fraction absorbed with a half-time of 140 days) and UO2 and U3O8 to absorption type S (blood absorption with a half-time of 7000 days). The solubility classification of uranium compounds defined by the ICRP can serve as general guidance. At specific workplaces, differences can be encountered because of differences in compound production processes and the presence of additional compounds with different solubility characteristics. According to ICRP recommendations, material-specific rates of absorption should be preferred to default parameters whenever specific experimental data exist. Solubility profiles of uranium aerosols were determined by performing in vitro chemical solubility tests on air samples taken from uranium production and machining facilities. The dissolution rate was determined over 100 days in a simulant solution of the extracellular airway lining fluid. The filter sample was immersed in a test vial holding 60 ml of simulant fluid, which was maintained at 37 °C inside a thermostatic bath and at a physiological pH of 7.2-7.6. The test vials with the solution were shaken to simulate the conditions inside the extracellular aqueous environment of the lung as closely as possible. The tests indicated that the uranium aerosol samples taken from the metal production and machining facilities at the Nuclear Research Center Negev (NRCN

  18. Event processing time prediction at the CMS experiment of the Large Hadron Collider

    International Nuclear Information System (INIS)

    Cury, Samir; Gutsche, Oliver; Kcira, Dorian

    2014-01-01

    Physics event reconstruction is one of the biggest challenges for the computing of the LHC experiments. Among the different tasks that the computing systems of the CMS experiment perform, reconstruction takes most of the available CPU resources. The reconstruction time of single collisions varies with event complexity. Measurements were made to determine this correlation quantitatively, creating a means to predict reconstruction time based on the data-taking conditions of the input samples. Currently the data processing system splits tasks into groups with the same number of collisions and does not account for variations in processing time. These variations can be large and can lead to a considerable increase in the time it takes for CMS workflows to finish. The goal of this study was to use estimates of processing time to split workflows into jobs more efficiently. By considering the CPU time needed for each job, the spread of the job-length distribution in a workflow is reduced.
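
    A minimal sketch of the splitting idea: given a per-event cost estimate derived from data-taking conditions, close each job when its predicted CPU time would exceed a target, instead of after a fixed number of events. The cost model (a power of pile-up) is an illustrative assumption, not the CMS parametrization.

```python
# Minimal sketch of splitting a workflow by predicted CPU time rather than
# by a fixed number of collision events.
def split_by_time(event_costs, target_seconds):
    jobs, current, acc = [], [], 0.0
    for evt, cost in enumerate(event_costs):
        if current and acc + cost > target_seconds:
            jobs.append(current)           # close the job before it overruns
            current, acc = [], 0.0
        current.append(evt)
        acc += cost
    if current:
        jobs.append(current)
    return jobs

pileup = [20, 25, 60, 22, 58, 61, 21, 24]          # data-taking conditions
costs = [0.01 * pu ** 1.5 for pu in pileup]        # assumed cost ~ pu**1.5
jobs = split_by_time(costs, target_seconds=3.0)
print([[round(costs[e], 2) for e in j] for j in jobs])
```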

  19. Atmospheric vs. anaerobic processing of metabolome samples for the metabolite profiling of a strict anaerobic bacterium, Clostridium acetobutylicum.

    Science.gov (United States)

    Lee, Sang-Hyun; Kim, Sooah; Kwon, Min-A; Jung, Young Hoon; Shin, Yong-An; Kim, Kyoung Heon

    2014-12-01

    Well-established metabolome sample preparation is a prerequisite for reliable metabolomic data. For metabolome sampling of a Gram-positive strict anaerobe, Clostridium acetobutylicum, fast filtration and metabolite extraction with acetonitrile/methanol/water (2:2:1, v/v/v) at -20°C under anaerobic conditions has been commonly used. This anaerobic metabolite processing method is laborious and time-consuming since it is conducted in an anaerobic chamber. Also, there has been no systematic evaluation and development of metabolome sample preparation for strict anaerobes and Gram-positive bacteria. In this study, metabolome sampling and extraction methods were rigorously evaluated and optimized for C. acetobutylicum by using gas chromatography/time-of-flight mass spectrometry-based metabolomics, in which a total of 116 metabolites were identified. When comparing atmospheric (i.e., in air) and anaerobic (i.e., in an anaerobic chamber) processing of metabolome sample preparation, there was no significant difference in the quality and quantity of the metabolomic data. For metabolite extraction, pure methanol at -20°C was a better solvent than the acetonitrile/methanol/water (2:2:1, v/v/v) at -20°C that is frequently used for C. acetobutylicum, and metabolite profiles differed significantly depending on the extraction solvent. This is the first evaluation of metabolome sample preparation under aerobic processing conditions for an anaerobe. The method can be applied conveniently, efficiently, and reliably to metabolome analysis of strict anaerobes in air.

  20. Aggregation and sampling in deterministic chaos: implications for chaos identification in hydrological processes

    Directory of Open Access Journals (Sweden)

    J. D. Salas

    2005-01-01

    A review of the literature reveals conflicting results regarding the existence and inherent nature of chaos in hydrological processes such as precipitation and streamflow, i.e. whether they are low-dimensional chaotic or stochastic. This issue is examined further in this paper, particularly the effect that certain types of transformations, such as aggregation and sampling, may have on the identification of the dynamics of the underlying system. First, we investigate the dynamics of daily streamflows for two rivers in Florida, one with strong surface and groundwater storage contributions and the other with a lesser basin storage contribution. Based on estimates of the delay time, the delay time window, and the correlation integral, our results suggest that the river with the stronger basin storage contribution departs significantly from the behavior of a chaotic system, while the departure is less significant for the river with the smaller basin storage contribution. We pose the hypothesis that the chaotic behavior depicted in continuous precipitation fields or small time-step precipitation series becomes less identifiable as the aggregation (or sampling) time step increases. Similarly, because streamflows result from a complex transformation of precipitation that involves accumulating and routing excess rainfall throughout the basin and adding surface and groundwater flows, the end result may be that streamflows at the outlet of the basin depart from low-dimensional chaotic behavior. We also investigate the effect of aggregation and sampling using series derived from the Lorenz equations and show that, as the aggregation and sampling scales increase, the chaotic behavior deteriorates and eventually ceases to show evidence of low-dimensional determinism.
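
    The aggregation effect is easy to reproduce on a toy system. The sketch below uses the logistic map as the deterministic source (the abstract's Lorenz experiment would work the same way) and a nearest-neighbour forecast error as a crude proxy for low-dimensional determinism; as the aggregation window grows, predictability deteriorates. The window sizes and embedding dimension are assumptions.

```python
# Minimal sketch: aggregation degrades nearest-neighbour predictability
# of a deterministic chaotic series (here, the logistic map).
import numpy as np

def logistic(n, r=4.0, x0=0.3):
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    return x

def nn_forecast_error(x, m=2):
    # Delay-embed in m dimensions; predict each point from its nearest neighbour.
    emb = np.column_stack([x[i:len(x) - m + i] for i in range(m)])
    sq_errors = []
    for i in range(len(emb) - 1):
        d = np.sum((emb - emb[i]) ** 2, axis=1)
        d[i] = np.inf                      # exclude the point itself
        j = int(np.argmin(d))
        sq_errors.append((x[j + m] - x[i + m]) ** 2)
    return float(np.sqrt(np.mean(sq_errors)))

x = logistic(3000)
for k in (1, 2, 5, 10):                    # aggregation window (time step)
    agg = x[: len(x) // k * k].reshape(-1, k).mean(axis=1)
    print(k, round(nn_forecast_error(agg[:500]), 3))
```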

  1. Sample preparation

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    Sample preparation prior to HPLC analysis is certainly one of the most important steps to consider in trace or ultratrace analysis. For many years scientists have tried to simplify the sample preparation process. It is rarely possible to inject a neat liquid sample, and rarely is preparation no more complex than dissolution of the sample in a given solvent. Dissolution alone can remove insoluble materials, which is especially helpful with samples in complex matrices if other interactions do not affect extraction; here it is very likely that a large number of components will not dissolve and are therefore eliminated by a simple filtration step. In most cases, however, sample preparation is not as simple as dissolving the component of interest. At times enrichment is necessary; that is, when the component of interest is present in a very large volume or mass of material, it needs to be concentrated in some manner so that a small volume of the concentrated or enriched sample can be injected into the HPLC system. 88 refs.

  2. Processing of visually presented clock times.

    Science.gov (United States)

    Goolkasian, P; Park, D C

    1980-11-01

    The encoding and representation of visually presented clock times was investigated in three experiments utilizing a comparative judgment task. Experiment 1 explored the effects of comparing times presented in different formats (clock face, digit, or word), and Experiment 2 examined angular distance effects created by varying the positions of the hands on clock faces. In Experiment 3, encoding and processing differences between clock faces and digitally presented times were directly measured. Same/different reactions to digitally presented times were faster than to times presented on a clock face, and this format effect was found to be a result of differences in processing that occurred after encoding. Angular separation also had a limited effect on processing. The findings are interpreted within the framework of theories that refer to the importance of representational codes. The applicability to the data of Banks' semantic-coding theory, Paivio's dual-coding theory, and the levels-of-processing view of memory is discussed.

  3. In-Sample Confidence Bands and Out-of-Sample Forecast Bands for Time-Varying Parameters in Observation Driven Models

    NARCIS (Netherlands)

    Blasques, F.; Koopman, S.J.; Lasak, K.A.; Lucas, A.

    2016-01-01

    We study the performance of alternative methods for calculating in-sample confidence and out-of-sample forecast bands for time-varying parameters. The in-sample bands reflect parameter uncertainty, while the out-of-sample bands reflect not only parameter uncertainty but also innovation uncertainty.

  4. Testing results of Monte Carlo sampling processes in MCSAD

    International Nuclear Information System (INIS)

    Pinnera, I.; Cruz, C.; Abreu, Y.; Leyva, A.; Correa, C.; Demydenko, C.

    2009-01-01

    The Monte Carlo Simulation of Atom Displacements (MCSAD) is a code implemented by the authors to simulate the complete process of atom displacement (AD) formation. This code makes use of the Monte Carlo (MC) method to sample all the processes involved in the transport of gamma and electron radiation through matter. The kernel of the calculations in this code relies on a model based on an algorithm developed by the authors, which first splits off multiple electron elastic scattering events from single events at higher scattering angles and then, from the latter, samples those events leading to AD at high transferred atomic recoil energies. Tests have been developed to check the sampling algorithms against the corresponding theoretical distribution functions. Satisfactory results have been obtained, which indicates the soundness of the methods and subroutines used in the code. (Author)
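
    A minimal sketch of this kind of sampling check: draw from an inverse-transform sampler and compare the draws against the theoretical distribution with a Kolmogorov-Smirnov test. The exponential target is an assumption chosen for illustration; MCSAD's actual scattering distributions are not reproduced here.

```python
# Minimal sketch: validate a Monte Carlo sampler against its theoretical CDF.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def sample_exponential(lam, n):
    u = rng.uniform(size=n)
    return -np.log(1.0 - u) / lam          # inverse CDF of Exp(lam)

draws = sample_exponential(lam=2.0, n=10_000)
ks = stats.kstest(draws, stats.expon(scale=0.5).cdf)  # Exp(2) has scale 1/2
print(f"KS statistic = {ks.statistic:.4f}, p-value = {ks.pvalue:.3f}")
```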

  5. Endophytic bacterial community of grapevine leaves influenced by sampling date and phytoplasma infection process.

    Science.gov (United States)

    Bulgari, Daniela; Casati, Paola; Quaglino, Fabio; Bianco, Piero A

    2014-07-21

    Endophytic bacteria benefit the host plant directly or indirectly, e.g. by biocontrol of pathogens. Up to now, their interactions with the host and with other microorganisms have been poorly understood. Consequently, a crucial step for improving knowledge of those relationships is to determine whether pathogens or the plant growing season influence endophytic bacterial diversity and dynamics. Four healthy, four phytoplasma-diseased and four recovered (symptomatic plants that spontaneously regained a healthy condition) grapevine plants were sampled monthly from June to October 2010 in a vineyard in north-western Italy. Metagenomic DNA was extracted from sterilized leaves, and the dynamics and diversity of the endophytic bacterial community were analyzed by taxon-specific real-time PCR, Length-Heterogeneity PCR and genus-specific PCR. These analyses revealed that both sampling date and phytoplasma infection influenced the endophytic bacterial composition. Interestingly, in June, when the plants are symptomless and the pathogen is undetectable, (i) the endophytic bacterial community associated with diseased grapevines was different from those at the other sampling dates, when the phytoplasmas are detectable inside samples; and (ii) the microbial community associated with recovered plants differed from that living inside healthy and diseased plants. Interestingly, the LH-PCR database identified bacteria previously reported as biocontrol agents in the examined grapevines. Of these, the dynamics of Burkholderia, Methylobacterium and Pantoea were influenced by the phytoplasma infection process and by seasonality. Results indicated that endophytic bacterial community composition in grapevine is correlated with both phytoplasma infection and sampling date. For the first time, the data underlined that, in diseased plants, the pathogen infection process can decrease the impact of seasonality on community dynamics. Moreover, based on experimental evidence, it was reasonable to hypothesize that after recovery the restructured

  6. Influenza virus drug resistance: a time-sampled population genetics perspective.

    Directory of Open Access Journals (Sweden)

    Matthieu Foll

    2014-02-01

    The challenge of distinguishing genetic drift from selection remains a central focus of population genetics. Time-sampled data may provide a powerful tool for distinguishing these processes, and we here propose approximate Bayesian, maximum likelihood, and analytical methods for the inference of demography and selection from time course data. Utilizing these novel statistical and computational tools, we evaluate whole-genome datasets of an influenza A H1N1 strain in the presence and absence of oseltamivir (an inhibitor of neuraminidase) collected at thirteen time points. Results reveal a striking consistency amongst the three estimation procedures developed, showing strongly increased selection pressure in the presence of drug treatment. Importantly, these approaches re-identify the known oseltamivir resistance site, successfully validating the approaches used. Enticingly, a number of previously unknown variants have also been identified as being positively selected. Results are interpreted in the light of Fisher's Geometric Model, allowing for a quantification of the increased distance to the optimum exerted by the presence of drug, and theoretical predictions regarding the distribution of beneficial fitness effects of contending mutations are empirically tested. Further, given the fit to expectations of the Geometric Model, results suggest the ability to predict certain aspects of viral evolution in response to changing host environments and novel selective pressures.
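
    For orientation, the forward model underlying such inferences can be sketched in a few lines: a Wright-Fisher allele-frequency trajectory with selection and drift, observed at a handful of time points. The population size, selection coefficient, and sampling scheme are assumptions; the paper's approximate Bayesian and likelihood machinery is not shown.

```python
# Minimal sketch of a time-sampled Wright-Fisher trajectory with selection.
import numpy as np

rng = np.random.default_rng(4)

def wright_fisher(p0, N, s, generations):
    p, traj = p0, [p0]
    for _ in range(generations):
        w = p * (1 + s) / (p * (1 + s) + (1 - p))   # selection step
        p = rng.binomial(N, w) / N                  # drift step
        traj.append(p)
    return np.array(traj)

traj = wright_fisher(p0=0.05, N=1000, s=0.1, generations=60)
obs_times = [0, 15, 30, 45, 60]                     # the time-sampled data
print([round(float(traj[t]), 3) for t in obs_times])
```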

  7. Eigenvalue sensitivity of sampled time systems operating in closed loop

    Science.gov (United States)

    Bernal, Dionisio

    2018-05-01

    The use of feedback to create closed-loop eigenstructures with high sensitivity has received some attention in the Structural Health Monitoring field. Although practical implementation is necessarily digital, and thus in sampled time, work thus far has centered on the continuous-time framework, both in design and in checking performance. It is shown in this paper that the performance in discrete time, at typical sampling rates, can differ notably from that anticipated in the continuous-time formulation, and that discrepancies can be particularly large in the real part of the eigenvalue sensitivities; a consequence is significant error in the (linear) estimate of the level of damage at which closed-loop stability is lost. As one anticipates, explicit consideration of the sampling rate poses no special difficulties in the closed-loop eigenstructure design, and the relevant expressions are developed in the paper, including a formula for the efficient evaluation of the derivative of the matrix exponential based on the theory of complex perturbations. The paper presents an easily reproduced numerical example showing the level of error that can result when the discrete-time implementation of the controller is not considered.
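
    One well-known way to differentiate the matrix exponential through complex perturbations is the complex-step rule, sketched below: for a real parameter p entering A(p), d expm(A(p))/dp is approximately Im(expm(A(p + ih)))/h for tiny h. Whether this matches the paper's exact formula is not established here; the system matrix and the perturbed entry are assumptions.

```python
# Minimal sketch: complex-step derivative of the matrix exponential.
import numpy as np
from scipy.linalg import expm

def dexpm_dp(A, dA_dp, h=1e-20):
    # Complex-step differentiation: no subtractive cancellation, so h can be
    # taken extremely small. Valid for real A with a real parameter p.
    return expm(A + 1j * h * dA_dp).imag / h

A = np.array([[0.0, 1.0], [-4.0, -0.4]])    # a lightly damped oscillator (assumed)
dA = np.zeros((2, 2))
dA[1, 1] = 1.0                              # sensitivity w.r.t. the damping term
print(dexpm_dp(A, dA))
```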

  8. Time processing in dyscalculia

    Directory of Open Access Journals (Sweden)

    Marinella Cappelletti

    2011-12-01

    To test whether atypical number development may affect other types of quantity processing, we investigated temporal discrimination in adults with developmental dyscalculia (DD). This also allowed us to test whether (1) number and time may be subserved by a common quantity system or decision mechanism, in which case both may be impaired, or (2) number and time are distinct, and may therefore dissociate. Participants judged which of two successively presented horizontal lines was longer in duration, the first line being preceded by either a small or a large number prime ('1' or '9') or by a neutral symbol ('#'), or, in a third task, decided which of two Arabic numbers ('1', '5' or '9') lasted longer. Results showed that (i) DD participants' temporal discriminability was normal as long as numbers were not part of the experimental design, even as task-irrelevant stimuli; however, (ii) task-irrelevant numbers dramatically disrupted DD participants' temporal discriminability, the more so as their salience increased, although the actual magnitude of the numbers had no effect; and, in contrast, (iii) controls' time perception was robust to the presence of numbers but was modulated by numerical quantity, such that small number primes or numerical stimuli made durations appear shorter than veridical, and the opposite for large number primes or numerical stimuli. This study is the first to investigate a continuous quantity such as time in a population with a congenital number impairment and to show that atypical development of numerical competence leaves continuous quantity processing spared. Our data support the idea of a partially shared quantity system across numerical and temporal dimensions, which allows dissociations and interactions among dimensions; furthermore, they suggest that impaired number processing in DD is unlikely to originate in systems initially dedicated to continuous quantity processing such as time.

  9. Weighted statistical parameters for irregularly sampled time series

    Science.gov (United States)

    Rimoldini, Lorenzo

    2014-01-01

    Unevenly spaced time series are common in astronomy because of the day-night cycle, weather conditions, dependence on the source position in the sky, allocated telescope time and corrupt measurements, for example, or are inherent to the scanning law of satellites like Hipparcos and the forthcoming Gaia. Irregular sampling often causes clumps of measurements and gaps with no data, which can severely disrupt the values of estimators. This paper aims at improving the accuracy of common statistical parameters when linear interpolation (in time or phase) can be considered an acceptable approximation of a deterministic signal. A pragmatic solution is formulated in terms of a simple weighting scheme, adapting to the sampling density and noise level, applicable to large data volumes at minimal computational cost. Tests on time series from the Hipparcos periodic catalogue led to significant improvements in the overall accuracy and precision of the estimators with respect to the unweighted counterparts and those weighted by inverse-squared uncertainties. Automated classification procedures employing statistical parameters weighted by the suggested scheme confirmed the benefits of the improved input attributes. The classification of eclipsing binaries, Mira, RR Lyrae, Delta Cephei and Alpha2 Canum Venaticorum stars employing exclusively weighted descriptive statistics achieved an overall accuracy of 92 per cent, about 6 per cent higher than with unweighted estimators.
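
    As a hedged illustration of density-adaptive weighting, the sketch below weights each observation by half the time span to its neighbours, so clumps of measurements do not dominate the weighted mean and variance. This particular weight choice is an assumption in the spirit of the abstract, not Rimoldini's exact scheme.

```python
# Minimal sketch of interval-based weights for irregularly sampled series.
import numpy as np

def interval_weights(t):
    t = np.asarray(t, dtype=float)
    gaps = np.diff(t)
    w = np.empty_like(t)
    w[0], w[-1] = gaps[0], gaps[-1]          # edge points get their single gap
    w[1:-1] = 0.5 * (gaps[:-1] + gaps[1:])   # interior: half of adjacent gaps
    return w / w.sum()

t = np.array([0.0, 0.1, 0.15, 0.2, 5.0, 10.0, 10.05])   # clumps and gaps
x = np.sin(t)
w = interval_weights(t)
wmean = float(np.sum(w * x))
wvar = float(np.sum(w * (x - wmean) ** 2))
print(round(wmean, 3), round(wvar, 3), "vs unweighted mean", round(x.mean(), 3))
```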

  10. Time Series, Stochastic Processes and Completeness of Quantum Theory

    International Nuclear Information System (INIS)

    Kupczynski, Marian

    2011-01-01

    Most physical experiments are usually described as repeated measurements of some random variables. Experimental data registered by on-line computers form time series of outcomes. The frequencies of different outcomes are compared with the probabilities provided by the algorithms of quantum theory (QT). In spite of the statistical character of QT's predictions, a claim was made that it provides the most complete description of the data and of the underlying physical phenomena. This claim could easily be rejected if some fine structures, averaged out in standard descriptive statistical analysis, were found in time series of experimental data. To search for these structures one has to use more subtle statistical tools, which were developed to study time series produced by various stochastic processes. In this talk we review some of these tools. As an example we show how standard descriptive statistical analysis of the data is unable to reveal a fine structure in a simulated sample of an AR(2) stochastic process. We emphasize once again that the violation of Bell inequalities gives no information on the completeness or the non-locality of QT. The appropriate way to test the completeness of quantum theory is to search for fine structures in time series of the experimental data by means of purity tests or by studying the autocorrelation and partial autocorrelation functions.
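
    The autocorrelation diagnostic mentioned above is easy to demonstrate: simulate an AR(2) sample and inspect its autocorrelation at several lags, structure that a plain histogram of outcomes would average out. The AR coefficients below are arbitrary illustrative choices.

```python
# Minimal sketch: fine structure in an AR(2) sample revealed by autocorrelation.
import numpy as np

rng = np.random.default_rng(5)

def simulate_ar2(phi1, phi2, n):
    x = np.zeros(n)
    for i in range(2, n):
        x[i] = phi1 * x[i - 1] + phi2 * x[i - 2] + rng.normal()
    return x

def acf(x, lag):
    xc = x - x.mean()
    return float(np.dot(xc[:-lag], xc[lag:]) / np.dot(xc, xc))

x = simulate_ar2(0.5, -0.3, 5000)
print([round(acf(x, k), 3) for k in range(1, 6)])   # lags 1..5
```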

  11. Modeling nonhomogeneous Markov processes via time transformation.

    Science.gov (United States)

    Hubbard, R A; Inoue, L Y T; Fann, J R

    2008-09-01

    Longitudinal studies are a powerful tool for characterizing the course of chronic disease. These studies are usually carried out with subjects observed at periodic visits giving rise to panel data. Under this observation scheme the exact times of disease state transitions and sequence of disease states visited are unknown and Markov process models are often used to describe disease progression. Most applications of Markov process models rely on the assumption of time homogeneity, that is, that the transition rates are constant over time. This assumption is not satisfied when transition rates depend on time from the process origin. However, limited statistical tools are available for dealing with nonhomogeneity. We propose models in which the time scale of a nonhomogeneous Markov process is transformed to an operational time scale on which the process is homogeneous. We develop a method for jointly estimating the time transformation and the transition intensity matrix for the time transformed homogeneous process. We assess maximum likelihood estimation using the Fisher scoring algorithm via simulation studies and compare performance of our method to homogeneous and piecewise homogeneous models. We apply our methodology to a study of delirium progression in a cohort of stem cell transplantation recipients and show that our method identifies temporal trends in delirium incidence and recovery.
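
    A minimal sketch of the time-transformation idea: if the process has intensity Q g'(t), then on the operational time scale u = g(t) it is homogeneous, and transition probabilities over [t0, t1] are expm(Q (g(t1) - g(t0))). The intensity matrix and the parametric choice g(t) = t**alpha below are illustrative assumptions, not the fitted model from the delirium study.

```python
# Minimal sketch: nonhomogeneous Markov process via an operational time scale.
import numpy as np
from scipy.linalg import expm

Q = np.array([[-0.3, 0.3, 0.0],     # state 0 -> state 1
              [0.2, -0.5, 0.3],     # state 1 -> states 0 or 2
              [0.0, 0.0, 0.0]])     # state 2 is absorbing
alpha = 0.7                          # decelerating operational time (assumed)

def transition_probabilities(t0, t1):
    du = t1 ** alpha - t0 ** alpha   # elapsed operational time u = g(t)
    return expm(Q * du)              # homogeneous solution on the u scale

print(np.round(transition_probabilities(1.0, 4.0), 3))
```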

  12. In Situ Visualization of the Phase Behavior of Oil Samples Under Refinery Process Conditions.

    Science.gov (United States)

    Laborde-Boutet, Cedric; McCaffrey, William C

    2017-02-21

    To help address production issues in refineries caused by the fouling of process units and lines, we have developed a setup as well as a method to visualize the behavior of petroleum samples under process conditions. The experimental setup relies on a custom-built micro-reactor fitted with a sapphire window at the bottom, which is placed over the objective of an inverted microscope equipped with a cross-polarizer module. Using reflection microscopy enables the visualization of opaque samples, such as petroleum vacuum residues, or asphaltenes. The combination of the sapphire window from the micro-reactor with the cross-polarizer module of the microscope on the light path allows high-contrast imaging of isotropic and anisotropic media. While observations are carried out, the micro-reactor can be heated to the temperature range of cracking reactions (up to 450 °C), can be subjected to H2 pressure relevant to hydroconversion reactions (up to 16 MPa), and can stir the sample by magnetic coupling. Observations are typically carried out by taking snapshots of the sample under cross-polarized light at regular time intervals. Image analyses may not only provide information on the temperature, pressure, and reactive conditions yielding phase separation, but may also give an estimate of the evolution of the chemical (absorption/reflection spectra) and physical (refractive index) properties of the sample before the onset of phase separation.

  13. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Rath, N., E-mail: Nikolaus@rath.org; Levesque, J. P.; Mauel, M. E.; Navratil, G. A.; Peng, Q. [Department of Applied Physics and Applied Mathematics, Columbia University, 500 W 120th St, New York, New York 10027 (United States); Kato, S. [Department of Information Engineering, Nagoya University, Nagoya (Japan)

    2014-04-15

    Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules.

  15. Located actions in process algebra with timing

    NARCIS (Netherlands)

    Bergstra, J.A.; Middelburg, C.A.

    2004-01-01

    We propose a process algebra obtained by adapting the process algebra with continuous relative timing from Baeten and Middelburg [Process Algebra with Timing, Springer, 2002, Chap. 4] to spatially located actions. This process algebra makes it possible to deal with the behaviour of systems with a

  16. Adaptive multiple importance sampling for Gaussian processes

    Czech Academy of Sciences Publication Activity Database

    Xiong, X.; Šmídl, Václav; Filippone, M.

    2017-01-01

    Roč. 87, č. 8 (2017), s. 1644-1665 ISSN 0094-9655 R&D Projects: GA MŠk(CZ) 7F14287 Institutional support: RVO:67985556 Keywords : Gaussian Process * Bayesian estimation * Adaptive importance sampling Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability Impact factor: 0.757, year: 2016 http://library.utia.cas.cz/separaty/2017/AS/smidl-0469804.pdf

  17. Degradation of hydrocarbons in soil samples analyzed within accepted analytical holding times

    International Nuclear Information System (INIS)

    Jackson, J.; Thomey, N.; Dietlein, L.F.

    1992-01-01

    Samples which are collected in conjunction with subsurface investigations at leaking petroleum storage tank sites and petroleum refineries are routinely analyzed for benzene, toluene, ethylbenzene, xylenes (BTEX), and total petroleum hydrocarbons (TPH). Water samples are preserved by the addition of hydrochloric acid and maintained at four degrees centigrade prior to analysis. This is done to prevent bacterial degradation of hydrocarbons. Chemical preservation is not presently performed on soil samples. Instead, the samples are cooled and maintained at four degrees centigrade. This study was done to measure the degree of degradation of hydrocarbons in soil samples which are analyzed within accepted holding times. Soil samples were collected and representative subsamples were prepared from the initial sample. Subsamples were analyzed in triplicate for BTEX and TPH throughout the length of the approved holding times to measure the extent of sample constituent degradation prior to analysis. Findings imply that for sandy soils, BTEX and TPH concentrations can be highly dependent upon the length of time which elapses between sample collection and analysis

  18. ACTINIDE REMOVAL PROCESS SAMPLE ANALYSIS, CHEMICAL MODELING, AND FILTRATION EVALUATION

    Energy Technology Data Exchange (ETDEWEB)

    Martino, C.; Herman, D.; Pike, J.; Peters, T.

    2014-06-05

    Filtration within the Actinide Removal Process (ARP) currently limits the throughput in interim salt processing at the Savannah River Site. In this process, batches of salt solution with Monosodium Titanate (MST) sorbent are concentrated by crossflow filtration. The filtrate is subsequently processed to remove cesium in the Modular Caustic Side Solvent Extraction Unit (MCU) followed by disposal in saltstone grout. The concentrated MST slurry is washed and sent to the Defense Waste Processing Facility (DWPF) for vitrification. During recent ARP processing, there has been a degradation of filter performance manifested as the inability to maintain high filtrate flux throughout a multi-batch cycle. The objectives of this effort were to characterize the feed streams, to determine if solids (in addition to MST) are precipitating and causing the degraded performance of the filters, and to assess the particle size and rheological data to address potential filtration impacts. Equilibrium modeling with OLI Analyzer™ and OLI ESP™ was performed to determine chemical components at risk of precipitation and to simulate the ARP process. The performance of ARP filtration was evaluated to review potential causes of the observed filter behavior. Task activities for this study included extensive physical and chemical analysis of samples from the Late Wash Pump Tank (LWPT) and the Late Wash Hold Tank (LWHT) within ARP as well as samples of the tank farm feed from Tank 49H. The samples from the LWPT and LWHT were obtained from several stages of processing of Salt Batch 6D, Cycle 6, Batch 16.

  19. Real-time colour hologram generation based on ray-sampling plane with multi-GPU acceleration.

    Science.gov (United States)

    Sato, Hirochika; Kakue, Takashi; Ichihashi, Yasuyuki; Endo, Yutaka; Wakunami, Koki; Oi, Ryutaro; Yamamoto, Kenji; Nakayama, Hirotaka; Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2018-01-24

    Although electro-holography can reconstruct three-dimensional (3D) motion pictures, its computational cost is too heavy to allow for real-time reconstruction of 3D motion pictures. This study explores accelerating colour hologram generation using light-ray information on a ray-sampling (RS) plane with a graphics processing unit (GPU) to realise a real-time holographic display system. We refer to an image corresponding to light-ray information as an RS image. Colour holograms were generated from three RS images with resolutions of 2,048 × 2,048; 3,072 × 3,072 and 4,096 × 4,096 pixels. The computational results indicate that the generation of the colour holograms using multiple GPUs (NVIDIA Geforce GTX 1080) was approximately 300-500 times faster than those generated using a central processing unit. In addition, the results demonstrate that 3D motion pictures were successfully reconstructed from RS images of 3,072 × 3,072 pixels at approximately 15 frames per second using an electro-holographic reconstruction system in which colour holograms were generated from RS images in real time.

  20. Sample path analysis and distributions of boundary crossing times

    CERN Document Server

    Zacks, Shelemyahu

    2017-01-01

    This monograph is focused on the derivations of exact distributions of first boundary crossing times of Poisson processes, compound Poisson processes, and more general renewal processes. The content is limited to the distributions of first boundary crossing times and their applications to various stochastic models. This book provides the theory and techniques for exact computations of distributions and moments of level crossing times. In addition, these techniques could replace simulations in many cases, thus providing more insight about the phenomena studied. This book takes a general approach for studying telegraph processes and is based on nearly thirty published papers by the author and collaborators over the past twenty-five years. No prior knowledge of advanced probability is required, making the book widely available to students and researchers in applied probability, operations research, applied physics, and applied mathematics.

  1. It's about time: revisiting temporal processing deficits in dyslexia.

    Science.gov (United States)

    Casini, Laurence; Pech-Georgel, Catherine; Ziegler, Johannes C

    2018-03-01

    Temporal processing in French children with dyslexia was evaluated in three tasks: a word identification task requiring implicit temporal processing, and two explicit temporal bisection tasks, one in the auditory and one in the visual modality. Normally developing children matched on chronological age and reading level served as a control group. Children with dyslexia exhibited robust deficits in temporal tasks whether they were explicit or implicit and whether they involved the auditory or the visual modality. First, they presented larger perceptual variability when performing temporal tasks, whereas they showed no such difficulties when performing the same task on a non-temporal dimension (intensity). This dissociation suggests that their difficulties were specific to temporal processing and could not be attributed to lapses of attention, reduced alertness, faulty anchoring, or overall noisy processing. In the framework of cognitive models of time perception, these data point to a dysfunction of the 'internal clock' of dyslexic children. These results are broadly compatible with the recent temporal sampling theory of dyslexia. © 2017 John Wiley & Sons Ltd.

  2. Sample vial inserts: A better approach for sampling heterogeneous slurry samples in the SRS Defense Waste Processing Facility

    International Nuclear Information System (INIS)

    Coleman, C.J.; Goode, S.R.

    1996-01-01

    A convenient and effective new approach for analyzing DWPF samples involves the use of inserts with volumes of 1.5-3 ml placed in the neck of 14 ml sample vials. The inserts have rims that conform to the rim of the vials so that they sit straight and stable in the vial. The DWPF tank sampling system fills the pre-weighed insert rather than the entire vial, so the vial functions only as the insert holder. The shielded cell operator then removes the vial cap and decants the insert containing the sample into a plastic bottle, crucible, etc., for analysis. Inert materials such as Teflon, plastic, and zirconium are used for the insert, so it is unnecessary to separate the insert from the sample for most analyses. The key technical advantage of using inserts to take DWPF samples, versus filling sample vials, is that it provides a convenient and almost foolproof way of obtaining and handling small volumes of slurry samples in a shielded cell without corrupting the sample. Since the insert allows the entire sample to be analyzed, this approach eliminates the errors inherent in subsampling the heterogeneous slurries that comprise DWPF samples. Slurry samples can then be analyzed with confidence. Analysis times are dramatically reduced by eliminating the drying and vitrification steps normally used to produce a homogeneous solid sample. Direct dissolution and elemental analysis of slurry samples are achieved in 8 hours or less, compared with 40 hours for analysis of vitrified slurry samples. Comparison of samples taken in inserts versus full vials indicates that the insert does not significantly affect sample composition.

  3. Design aspects of automation system for initial processing of fecal samples

    International Nuclear Information System (INIS)

    Sawant, Pramilla D.; Prabhu, Supreetha P.; Suja, A.; Wankhede, Sonal; Chaudhary, Seema; Rao, D.D.; Pradeepkumar, K.S.; Das, A.P.; Badodkar, B.D.

    2014-01-01

    The procedure for initial handling of fecal samples at the Bioassay Lab., Trombay is as follows: overnight fecal samples are collected from the worker in a kit consisting of a polythene bag placed in a wide-mouth polythene container closed with an inner lid and a screw cap. The occupational worker collects the sample in the polythene bag. On receiving the sample, the polythene container along with the sample is weighed; the polythene bag containing the fecal sample is then lifted out of the container using a pair of tongs, placed inside a crucible and ashed inside a muffle furnace at 450 °C. After complete ashing, the crucible containing white ash is taken up for further radiochemical processing. This paper describes the various steps in developing a prototype automated system for the initial handling of fecal samples, which is proposed to automate the above procedure. The system, once developed, will help eliminate manual intervention up to the ashing stage and reduce the biological hazard involved in handling such samples.

  4. Learning process mapping heuristics under stochastic sampling overheads

    Science.gov (United States)

    Ieumwananonthachai, Arthur; Wah, Benjamin W.

    1991-01-01

    A statistical method was developed previously for improving process mapping heuristics. The method systematically explores the space of possible heuristics under a specified time constraint. Its goal is to obtain the best possible heuristics while trading off the solution quality of the process mapping heuristics against their execution time. The statistical selection method is extended to take into consideration the variations in the amount of time used to evaluate heuristics on a problem instance. The improvement in performance under this more realistic assumption is presented, along with some methods that alleviate the additional complexity.

  5. A confirmatory holding time study for purgeable VOCs in water samples

    International Nuclear Information System (INIS)

    West, O.R.; Bayne, C.K.; Siegrist, R.L.; Holden, W.H.; Bottrell, D.W.

    1996-01-01

    Analyte stability during pre-analytical storage is essential to the accurate quantification of contaminants in environmental samples. This is particularly true for volatile organic compounds (VOCs), which can easily volatilize and/or degrade during sample storage. Recognizing this, regulatory agencies require that water samples be collected in vials without headspace and stored at 4 degrees C, and that analyses be conducted within 14 days, even if samples are acid-preserved. Since the selection of a 14-day holding time was largely arbitrary, the appropriateness of this requirement must be re-evaluated. The goal of the study described here was to provide regulatory agencies with the necessary data to extend the maximum holding time for properly preserved VOC water samples to 28 days.

  6. Study of building materials impregnation processes by quasi-real-time neutron radiography

    International Nuclear Information System (INIS)

    Nemec, T.; Rant, J.; Apih, V.; Glumac, B.

    1999-01-01

    Neutron radiography (NR) is a useful non-destructive method for determining hydrogen content in various building and technical materials. Monitoring of transport processes of moisture and hydrogenous liquids in porous building materials is enabled by fast, quasi-real-time NR methods based on novel imaging plate neutron detectors (IP-NDs). Hydrogen content in the samples is determined by quantitative analysis of measured profiles of neutron attenuation in the samples. A detailed description of the quantitative NR method is presented by the authors in an accompanying contribution at this conference. Deterioration of building materials originates from different processes that all require the presence of water; it is therefore essential to limit or prevent the transport of water through the porous material. In this presentation, results of a study of clay brick impregnation by silicone-based hydrophobic agents will be presented. Quantitative results obtained by NR imaging successfully explained the processes that occur during the impregnation of porous materials. The efficiency of hydrophobic treatment was quantitatively evaluated.

  7. On incomplete sampling under birth-death models and connections to the sampling-based coalescent.

    Science.gov (United States)

    Stadler, Tanja

    2009-11-07

    The constant rate birth-death process is used as a stochastic model for many biological systems, for example phylogenies or disease transmission. As the biological data are usually not fully available, it is crucial to understand the effect of incomplete sampling. In this paper, we analyze the constant rate birth-death process with incomplete sampling. We derive the density of the bifurcation events for trees on n leaves which evolved under this birth-death-sampling process. This density is used for calculating prior distributions in Bayesian inference programs and for efficiently simulating trees. We show that the birth-death-sampling process can be interpreted as a birth-death process with reduced rates and complete sampling. This shows that joint inference of birth rate, death rate and sampling probability is not possible. The birth-death-sampling process is compared to the sampling-based population genetics model, the coalescent. It is shown that despite many similarities between these two models, the distribution of bifurcation times remains different even in the case of very large population sizes. We illustrate these findings on a hepatitis C virus dataset from Egypt. We show that the transmission time estimates differ significantly: the widely used Gamma statistic even changes its sign from negative to positive when switching from the coalescent to the birth-death process.

  8. A new formulation of the linear sampling method: spatial resolution and post-processing

    International Nuclear Information System (INIS)

    Piana, M; Aramini, R; Brignone, M; Coyle, J

    2008-01-01

    A new formulation of the linear sampling method is described, which requires the regularized solution of a single functional equation set in a direct sum of L^2 spaces. This new approach presents the following notable advantages: it is computationally more effective than the traditional implementation, since time-consuming samplings of the Tikhonov minimum problem and of the generalized discrepancy equation are avoided; it allows a quantitative estimate of the spatial resolution achievable by the method; and it facilitates a post-processing procedure for the optimal selection of the scatterer profile by means of edge detection techniques. The formulation is described in a two-dimensional framework and in the case of obstacle scattering, although generalizations to three dimensions and to penetrable inhomogeneities are straightforward.

  9. Residence time modeling of hot melt extrusion processes.

    Science.gov (United States)

    Reitz, Elena; Podhaisky, Helmut; Ely, David; Thommes, Markus

    2013-11-01

    The hot melt extrusion process is a widespread technique to mix viscous melts. The residence time of material in the process frequently determines the product properties. An experimental setup and a corresponding mathematical model were developed to evaluate residence time and residence time distribution in twin screw extrusion processes. The extrusion process was modeled as the convolution of a mass transport process described by a Gaussian probability function, and a mixing process represented by an exponential function. The residence time of the extrusion process was determined by introducing a tracer at the extruder inlet and measuring the tracer concentration at the die. These concentrations were fitted to the residence time model, and an adequate correlation was found. Different parameters were derived to characterize the extrusion process, including the dead time, the apparent mixing volume, and a transport-related axial mixing. A 2^3 design of experiments was performed to evaluate the effect of powder feed rate, screw speed, and melt viscosity of the material on the residence time. All three parameters affect the residence time of material in the extruder. In conclusion, a residence time model was developed to interpret experimental data and to gain insight into the hot melt extrusion process. Copyright © 2013 Elsevier B.V. All rights reserved.
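
    As a worked illustration of the convolution model described above, the sketch below builds a residence time distribution as a Gaussian transport term convolved with an exponential mixing term and fits it to synthetic tracer data. It is a minimal sketch under stated assumptions: all parameter names and values are hypothetical, and the paper's exact parameterization may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def rtd_model(t, t_delay, sigma, tau):
    """Residence time distribution as the convolution of a Gaussian
    transport term (delay t_delay, spread sigma) and an exponential
    mixing term (time constant tau)."""
    dt = t[1] - t[0]
    transport = np.exp(-0.5 * ((t - t_delay) / sigma) ** 2)
    transport /= transport.sum() * dt            # normalize to unit area
    mixing = np.exp(-t / tau) / tau              # exponential mixing kernel
    return np.convolve(transport, mixing)[: len(t)] * dt

# Hypothetical tracer response at the die after a pulse at the inlet.
t = np.linspace(0, 300, 601)                     # seconds
rng = np.random.default_rng(0)
c_obs = rtd_model(t, 60.0, 8.0, 25.0) + rng.normal(0, 2e-4, t.size)

popt, _ = curve_fit(rtd_model, t, c_obs, p0=(50.0, 5.0, 20.0))
print("dead time ~%.1f s, axial spread ~%.1f s, mixing time ~%.1f s" % tuple(popt))
```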

  10. Time Optimal Run-time Evaluation of Distributed Timing Constraints in Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.; Kristensen, C.H.

    1993-01-01

    This paper considers run-time evaluation of an important class of constraints: timing constraints. These appear extensively in process control systems. Timing constraints are considered in distributed systems, i.e. systems consisting of multiple autonomous nodes...

  11. Relative entropy and waiting time for continuous-time Markov processes

    NARCIS (Netherlands)

    Chazottes, J.R.; Giardinà, C.; Redig, F.H.J.

    2006-01-01

    For discrete-time stochastic processes, there is a close connection between return (resp. waiting) times and entropy (resp. relative entropy). Such a connection cannot be straightforwardly extended to the continuous-time setting. Contrary to the discrete-time case, one needs a reference measure on

  12. An Improved Phase Gradient Autofocus Algorithm Used in Real-time Processing

    Directory of Open Access Journals (Sweden)

    Qing Ji-ming

    2015-10-01

    The Phase Gradient Autofocus (PGA) algorithm can remove high-order phase error effectively, which is of great significance for obtaining high-resolution images in real-time processing. However, PGA usually requires iteration, which entails long processing times. In addition, the performance of the algorithm is not stable across different scene applications. This severely constrains the application of PGA in real-time processing. Isolated scatterer selection and windowing are two important algorithmic steps of the Phase Gradient Autofocus algorithm. Therefore, this paper presents an isolated scatterer selection method based on the sample mean and a windowing method based on the pulse envelope. These two methods are highly adaptable to the data, which makes the algorithm more stable and reduces the number of iterations needed. The adaptability of the improved PGA is demonstrated with experimental results on real radar data.

  13. The significance of sampling time in therapeutic drug monitoring of clozapine

    DEFF Research Database (Denmark)

    Jakobsen, M I; Larsen, J R; Svensson, C K

    2017-01-01

    OBJECTIVE: Therapeutic drug monitoring (TDM) of clozapine is standardized to 12-h postdose samplings. In clinical settings, sampling time often deviates from this time point, although the importance of the deviation is unknown. To this end, serum concentrations (s-) of clozapine and its metabolite...... N-desmethyl-clozapine (norclozapine) were measured at 12 ± 1 and 2 h postdose. METHOD: Forty-six patients with a diagnosis of schizophrenia, and on stable clozapine treatment, were enrolled for hourly, venous blood sampling at 10-14 h postdose. RESULTS: Minor changes in median percentage values were...

  14. Influence of volume of sample processed on detection of Chlamydia trachomatis in urogenital samples by PCR

    NARCIS (Netherlands)

    Goessens, W H; Kluytmans, J A; den Toom, N; van Rijsoort-Vos, T H; Niesters, B G; Stolz, E; Verbrugh, H A; Quint, W G

    In the present study, it was demonstrated that the sensitivity of the PCR for the detection of Chlamydia trachomatis is influenced by the volume of the clinical sample which is processed in the PCR. An adequate sensitivity for PCR was established by processing at least 4%, i.e., 80 microliters, of

  15. Data Validation Package May 2016 Groundwater Sampling at the Lakeview, Oregon, Processing Site August 2016

    Energy Technology Data Exchange (ETDEWEB)

    Linard, Joshua [USDOE Office of Legacy Management, Washington, DC (United States); Hall, Steve [Navarro Research and Engineering, Inc., Oak Ridge, TN (United States)

    2016-08-01

    This biennial event includes sampling five groundwater locations (four monitoring wells and one domestic well) at the Lakeview, Oregon, Processing Site. For this event, the domestic well (location 0543) could not be sampled because no one was in residence during the sampling event (note: notification was provided to the resident prior to the event). Per Appendix A of the Groundwater Compliance Action Plan, sampling is conducted to monitor groundwater quality on a voluntary basis. Sampling and analyses were conducted as specified in the Sampling and Analysis Plan for U.S. Department of Energy Office of Legacy Management Sites (LMS/PRO/S04351, continually updated). One duplicate sample was collected from location 0505. Water levels were measured at each sampled monitoring well. The constituents monitored at the Lakeview site are manganese and sulfate. Monitoring locations that exceeded the U.S. Environmental Protection Agency (EPA) Secondary Maximum Contaminant Levels for these constituents are listed in Table 1. Review of the time-concentration graphs included in this report indicates that manganese and sulfate concentrations are consistent with historical measurements.

  16. Sampling Operations on Big Data

    Science.gov (United States)

    2015-11-29

    Sampling methods fall into several categories. These include edge sampling methods, where edges are selected by a predetermined criterion, and snowball sampling methods, where algorithms start... Big data systems must process and disseminate information for discovery and exploration under real-time constraints. Common signal processing operations such as sampling and

  17. Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.

    Science.gov (United States)

    Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham

    2017-12-01

    During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at 2 stages, that is, the blend stage and the tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for process performance qualification (PPQ) and continued process verification (CPV) stages by linking UDU to potential formulation and process risk factors. The Bayes success run theorem appeared to be the most appropriate approach among the various methods considered in this work for computing the sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that at a low defect rate the confidence of detecting out-of-specification units decreases, which must be compensated for by an increase in sample size to enhance the confidence in the estimate. Based on the level of knowledge acquired during PPQ and the level of knowledge further required to comprehend the process, the sample size for CPV was calculated using Bayesian statistics to accomplish a reduced sampling design. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
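
    The quoted sample sizes can be reproduced with the classical success run relation, 1 - R^n >= C, at 95% confidence; a minimal sketch follows. The paper's exact Bayesian treatment may differ in detail, but for C = 0.95 this formula yields exactly the 299, 59 and 29 reported above.

```python
import math

def success_run_sample_size(reliability, confidence=0.95):
    """Zero-failure sample size n such that passing all n units
    demonstrates the given reliability at the given confidence:
    1 - reliability**n >= confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

for risk, r in [("high", 0.99), ("medium", 0.95), ("low", 0.90)]:
    print(f"{risk}-risk (R = {r:.2f}): n = {success_run_sample_size(r)}")
# -> 299, 59 and 29, matching the values quoted in the abstract
```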

  18. Metrology Sampling Strategies for Process Monitoring Applications

    KAUST Repository

    Vincent, Tyrone L.

    2011-11-01

    Shrinking process windows in very large scale integration semiconductor manufacturing have already necessitated the development of control systems capable of addressing sub-lot-level variation. Within-wafer control is the next milestone in the evolution of advanced process control from lot-based and wafer-based control. In order to adequately comprehend and control within-wafer spatial variation, inline measurements must be performed at multiple locations across the wafer. At the same time, economic pressures prompt a reduction in metrology, for both capital and cycle-time reasons. This paper explores the use of modeling and minimum-variance prediction as a method to select the sites for measurement on each wafer. The models are developed using the standard statistical tools of principal component analysis and canonical correlation analysis. The proposed selection method is validated using real manufacturing data, and results indicate that it is possible to significantly reduce the number of measurements with little loss in the information obtained for the process control systems. © 2011 IEEE.
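
    As one plausible reading of minimum-variance site selection, the sketch below estimates a site covariance from historical wafer maps and greedily picks the measurement sites that minimize the conditional variance of the unmeasured sites under a joint-Gaussian assumption. This is a simplified stand-in for the paper's PCA/CCA-based models; all names and data are hypothetical.

```python
import numpy as np

def select_sites(X, k):
    """Greedily pick k measurement sites minimizing the total conditional
    variance of the remaining sites, with the site covariance estimated
    from history. X: (n_wafers, n_sites) historical measurements."""
    S = np.cov(X, rowvar=False) + 1e-9 * np.eye(X.shape[1])
    chosen = []
    for _ in range(k):
        best, best_score = None, np.inf
        for j in range(S.shape[0]):
            if j in chosen:
                continue
            idx = chosen + [j]
            rest = [i for i in range(S.shape[0]) if i not in idx]
            # Conditional covariance of unmeasured sites given measured ones.
            cond = (S[np.ix_(rest, rest)]
                    - S[np.ix_(rest, idx)]
                    @ np.linalg.solve(S[np.ix_(idx, idx)], S[np.ix_(idx, rest)]))
            if np.trace(cond) < best_score:
                best, best_score = j, np.trace(cond)
        chosen.append(best)
    return chosen

# Hypothetical history: 200 wafers, 49 candidate measurement sites.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 49)) @ rng.normal(size=(49, 49)) * 0.1
print(select_sites(X, k=5))
```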

  19. Development and Validation of a Real-Time PCR Assay for Rapid Detection of Candida auris from Surveillance Samples.

    Science.gov (United States)

    Leach, L; Zhu, Y; Chaturvedi, S

    2018-02-01

    Candida auris is an emerging multidrug-resistant yeast causing invasive health care-associated infection with high mortality worldwide. Rapid identification of C. auris is of primary importance for the implementation of public health measures to control the spread of infection. To achieve these goals, we developed and validated a TaqMan-based real-time PCR assay targeting the internal transcribed spacer 2 (ITS2) region of the ribosomal gene. The assay was highly specific, reproducible, and sensitive, with a detection limit of 1 C. auris CFU/PCR. The performance of the C. auris real-time PCR assay was evaluated using 623 surveillance samples, including 365 patient swabs and 258 environmental sponges. Real-time PCR yielded positive results from 49 swab and 58 sponge samples, with 89% and 100% clinical sensitivity with regard to their respective culture-positive results. The real-time PCR also detected C. auris DNA in 1% and 12% of swab and sponge samples with culture-negative results, indicating the presence of dead or culture-impaired C. auris. The real-time PCR yielded results within 4 h of sample processing, compared to 4 to 14 days for culture, reducing turnaround time significantly. The new real-time PCR assay allows for accurate and rapid screening of C. auris and can increase effective control and prevention of this emerging multidrug-resistant fungal pathogen in health care facilities. Copyright © 2018 Leach et al.

  20. Sampling methods for rumen microbial counts by Real-Time PCR techniques

    Directory of Open Access Journals (Sweden)

    S. Puppo

    2010-02-01

    Fresh rumen samples were withdrawn from 4 cannulated buffalo females fed a fibrous diet in order to quantify bacteria concentration in the rumen by Real-Time PCR techniques. To obtain DNA of good quality from whole rumen fluid, eight (M1-M8) different pre-filtration methods (cheesecloth, glass-fibre and nylon filters) in combination with various centrifugation speeds (1000, 5000 and 14,000 rpm) were tested. Genomic DNA extraction was performed either on fresh or frozen samples (-20°C). The quantitative bacteria analysis was carried out according to a Real-Time PCR procedure for Butyrivibrio fibrisolvens reported in the literature. M5 proved to be the best sampling procedure, yielding genomic DNA of suitable quality. No differences were revealed between fresh and frozen samples.

  1. Effect of Sampling Frequency for Real-Time Tablet Coating Monitoring Using Near Infrared Spectroscopy.

    Science.gov (United States)

    Igne, Benoît; Arai, Hiroaki; Drennen, James K; Anderson, Carl A

    2016-09-01

    While the sampling of pharmaceutical products typically follows well-defined protocols, the parameterization of spectroscopic methods and their associated sampling frequency is not standard. Whereas for blending the sampling frequency is limited by the nature of the process, in other processes, such as tablet film coating, practitioners must determine the best approach to collecting spectral data. The present article studied how sampling practices affected the interpretation of the results provided by a near-infrared spectroscopy method for the monitoring of tablet moisture and coating weight gain during a pan-coating experiment. Several coating runs were monitored with different sampling frequencies (with or without co-adds, also known as sub-samples) and with spectral averaging corresponding to processing cycles (1 to 15 pan rotations). Beyond integrating the sensor into the equipment, the present work demonstrated that it is necessary to have a good sense of the underlying phenomena that have the potential to affect the quality of the signal. The effects of co-adds and averaging were significant with respect to the quality of the spectral data. However, the type of output obtained from a sampling method dictates the type of information that one can gain on the dynamics of a process. Thus, different sampling frequencies may be needed at different stages of process development. © The Author(s) 2016.

  2. High speed real-time wavefront processing system for a solid-state laser system

    Science.gov (United States)

    Liu, Yuan; Yang, Ping; Chen, Shanqiu; Ma, Lifang; Xu, Bing

    2008-03-01

    A high-speed real-time wavefront processing system for a solid-state laser beam cleanup system has been built. This system consists of a Core 2 industrial PC (IPC) running Linux and real-time Linux (RT-Linux) operating systems (OS), a PCI image grabber, and a D/A card. More often than not, the phase aberrations of the output beam from solid-state lasers vary rapidly with intracavity thermal effects and environmental influences. To compensate for the phase aberrations of solid-state lasers successfully, a high-speed real-time wavefront processing system is presented. Compared with former systems, this system improves the speed considerably. In the new system, the acquisition of image data, the output of control voltage data, and the implementation of the reconstructor control algorithm are treated as real-time tasks in kernel space, while the display of wavefront information and man-machine interaction are treated as non-real-time tasks in user space. Parallel processing of real-time tasks in Symmetric Multi-Processor (SMP) mode is the main strategy for improving the speed. In this paper, the performance and efficiency of this wavefront processing system are analyzed. The open-loop experimental results show that the sampling frequency of this system is up to 3300 Hz, and that this system can deal well with phase aberrations from solid-state lasers.

  3. Bayesian Travel Time Inversion adopting Gaussian Process Regression

    Science.gov (United States)

    Mauerberger, S.; Holschneider, M.

    2017-12-01

    A major application in seismology is the determination of seismic velocity models. Travel time measurements put an integral constraint on the velocity between source and receiver. We provide insight into travel time inversion from a correlation-based Bayesian point of view. To that end, the concept of Gaussian process regression is adopted to estimate a velocity model. The non-linear travel time integral is approximated by a 1st-order Taylor expansion. A heuristic covariance describes correlations among observations and the a priori model. This approach enables us to assess a proxy of the Bayesian posterior distribution at ordinary computational cost; neither multi-dimensional numerical integration nor excessive sampling is necessary. Instead of stacking the data, we suggest progressively building the posterior distribution. Incorporating only a single piece of evidence at a time accounts for the deficiencies of linearization. As a result, the most probable model is given by the posterior mean, whereas uncertainties are described by the posterior covariance. As a proof of concept, a purely 1-D synthetic model is addressed, with a single source accompanied by multiple receivers on top of a model comprising a discontinuity. We consider travel times of both phases, the direct and the reflected wave, corrupted by noise. The regions left and right of the interface are assumed independent, with the squared exponential kernel serving as covariance.
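
    The progressive, one-observation-at-a-time assimilation described above amounts to repeated rank-one Gaussian conditioning. The sketch below illustrates this with a squared exponential kernel; for brevity the linearized travel-time integral is replaced by direct pointwise velocity observations, and all grids, values and names are illustrative assumptions rather than the authors' setup.

```python
import numpy as np

def sq_exp(x1, x2, amp=1.0, ell=10.0):
    """Squared exponential covariance kernel."""
    return amp**2 * np.exp(-0.5 * (x1[:, None] - x2[None, :])**2 / ell**2)

# Prior over a 1-D velocity profile on a grid (units are illustrative).
x = np.linspace(0.0, 100.0, 201)
m = np.full(x.size, 3.0)              # prior mean velocity
K = sq_exp(x, x)                      # prior covariance
noise = 0.05**2                       # observation noise variance

# Progressively assimilate one observation at a time (rank-one update).
# A full travel time inversion would replace the pointwise observation
# with the 1st-order Taylor approximation of the travel time integral.
observations = [(50, 3.4), (120, 2.8), (180, 3.1)]   # (grid index, value)
for i, y in observations:
    k = K[:, i]
    s = K[i, i] + noise
    m = m + k * (y - m[i]) / s        # posterior mean update
    K = K - np.outer(k, k) / s        # posterior covariance update

print("velocity at x=25: %.2f +/- %.2f" % (m[50], np.sqrt(K[50, 50])))
```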

  4. Evaluation of standard methods for collecting and processing fuel moisture samples

    Science.gov (United States)

    Sally M. Haase; José Sánchez; David R. Weise

    2016-01-01

    A variety of techniques for collecting and processing samples to determine the moisture content of wildland fuels in support of fire management activities were evaluated. The effects of using a chainsaw or handsaw to collect samples of large-diameter wood, containers for storing and transporting collected samples, and quick-response ovens for estimating moisture content...

  5. A review of blood sample handling and pre-processing for metabolomics studies.

    Science.gov (United States)

    Hernandes, Vinicius Veri; Barbas, Coral; Dudzik, Danuta

    2017-09-01

    Metabolomics has been found to be applicable to a wide range of clinical studies, bringing a new era for improving clinical diagnostics, early disease detection, therapy prediction and treatment efficiency monitoring. A major challenge in metabolomics, particularly untargeted studies, is the extremely diverse and complex nature of biological specimens. Despite great advances in the field there still exist fundamental needs to consider pre-analytical variability that can introduce bias into the subsequent analytical process, decrease the reliability of the results and, moreover, confound final research outcomes. Many researchers are mainly focused on the instrumental aspects of the biomarker discovery process, and sample-related variables sometimes seem to be overlooked. To bridge the gap, critical information and standardized protocols regarding experimental design and sample handling and pre-processing are highly desired. Characterization of the range of variation among sample collection methods is necessary to prevent misinterpretation of results and to ensure that observed differences are not due to an experimental bias caused by inconsistencies in sample processing. Herein, a systematic discussion of pre-analytical variables affecting metabolomics studies based on blood-derived samples is performed. Furthermore, we provide a set of recommendations concerning experimental design, collection, pre-processing procedures and storage conditions as a practical review that can guide and serve for the standardization of protocols and the reduction of undesirable variation. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. High peak power processing up to 100 MV/M on various metallic samples

    International Nuclear Information System (INIS)

    Luong, M.; Bonin, B.; Safa, H.

    1996-01-01

    The high peak power processing (HPPP) is a well-established way to reduce electronic field emission from radiofrequency (RF) metallic surfaces. The processing occurs because some kind of instability destroys the emitter, but the basic physical mechanism at work has not yet been clearly identified. RF processing experiments on samples of restricted area are described, with well-localized artificial emitting sites (protrusions from scratches on the sample surface). In order to disentangle the roles of thermal and mechanical effects in the processing, the samples were made from metals with different melting temperatures and tensile strengths. (author)

  7. High peak power processing up to 100 MV/m on various metallic samples

    International Nuclear Information System (INIS)

    Luong, M.; Bonin, B.; Safa, H.; Le Goff, A.

    1996-01-01

    The high peak power processing (HPPP) is a well-established way to reduce electronic field emission from radiofrequency (RF) metallic surfaces. The processing occurs because some kind of instability destroys the emitter, but the basic physical mechanism at work has not yet been clearly identified. The present study describes RF processing experiments on samples of restricted area, with well-localized artificial emitting sites (protrusions from scratches on the sample surface). In order to disentangle the roles of thermal and mechanical effects in the processing, the samples were made from metals with different melting temperatures and tensile strengths. (author)

  8. Rare behavior of growth processes via umbrella sampling of trajectories

    Science.gov (United States)

    Klymko, Katherine; Geissler, Phillip L.; Garrahan, Juan P.; Whitelam, Stephen

    2018-03-01

    We compute probability distributions of trajectory observables for reversible and irreversible growth processes. These results reveal a correspondence between reversible and irreversible processes, at particular points in parameter space, in terms of their typical and atypical trajectories. Thus key features of growth processes can be insensitive to the precise form of the rate constants used to generate them, recalling the insensitivity to microscopic details of certain equilibrium behavior. We obtained these results using a sampling method, inspired by the "s-ensemble" large-deviation formalism, that amounts to umbrella sampling in trajectory space. The method is a simple variant of existing approaches, and applies to ensembles of trajectories controlled by the total number of events. It can be used to determine large-deviation rate functions for trajectory observables in or out of equilibrium.

  9. Corganiser: a web-based software tool for planning time-sensitive sampling of whole rounds during scientific drilling

    DEFF Research Database (Denmark)

    Marshall, Ian

    2014-01-01

    Corganiser is a software tool developed to simplify the process of preparing whole-round sampling plans for time-sensitive microbiology and geochemistry sampling during scientific drilling. It was developed during the Integrated Ocean Drilling Program (IODP) Expedition 347, but is designed to work with a wide range of core and section configurations and can thus be used in future drilling projects. Corganiser is written in the Python programming language and is implemented both as a graphical web interface and a command-line interface. It can be accessed online at http://130.226.247.137/.

  10. Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks.

    Science.gov (United States)

    Vestergaard, Christian L; Génois, Mathieu

    2015-10-01

    Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling.
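
    A minimal sketch of the core idea, assuming an SIS process on a discrete-time contact sequence: a single unit-rate exponential waiting time is consumed as the time-varying total rate accumulates over timesteps. This simplified version permits at most one event per timestep and is not the authors' C++ implementation; all names and parameters are illustrative.

```python
import random

def temporal_gillespie_sis(contacts, beta, mu, seed, n_steps, dt=1.0):
    """Minimal SIS spreading on a temporal network. A single Exp(1)
    waiting time is consumed as the time-varying total rate accumulates
    over timesteps; for simplicity at most one event fires per timestep
    (a valid approximation when total_rate * dt is small).
    contacts: function t -> iterable of (u, v) edges active at step t."""
    infected = {seed}
    tau = random.expovariate(1.0)      # waiting time in integrated-rate units
    for t in range(n_steps):
        events = [(mu, ("recover", i)) for i in infected]
        for u, v in contacts(t):
            if (u in infected) != (v in infected):
                events.append((beta, ("infect", v if u in infected else u)))
        total = sum(rate for rate, _ in events)
        if total > 0 and tau < total * dt:     # an event fires this step
            pick = random.uniform(0.0, total)  # choose an event by its rate
            for rate, (kind, node) in events:
                pick -= rate
                if pick <= 0.0:
                    break
            if kind == "recover":
                infected.discard(node)
            else:
                infected.add(node)
            tau = random.expovariate(1.0)      # draw a fresh waiting time
        else:
            tau -= total * dt                  # consume the accumulated rate
    return infected

# Toy usage: three nodes whose single contact alternates over time.
edges = [[(0, 1)], [(1, 2)]]
print(temporal_gillespie_sis(lambda t: edges[t % 2], beta=0.5, mu=0.1,
                             seed=0, n_steps=100))
```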

  11. Application of CRAFT (complete reduction to amplitude frequency table) in nonuniformly sampled (NUS) 2D NMR data processing.

    Science.gov (United States)

    Krishnamurthy, Krish; Hari, Natarajan

    2017-09-15

    The recently published CRAFT (complete reduction to amplitude frequency table) technique converts the raw FID data (i.e., time-domain data) into a table of frequencies, amplitudes, decay rate constants, and phases. It offers an alternate approach to decimate time-domain data, with a minimal preprocessing step. It has been shown that application of the CRAFT technique to process the t1 dimension of 2D data significantly improved the detectable resolution through its ability to analyze without the ubiquitous apodization of extensively zero-filled data. It was noted earlier that CRAFT did not resolve sinusoids that were not already resolvable in the time domain (i.e., t1max-dependent resolution). We present a combined NUS-IST-CRAFT approach wherein the NUS acquisition technique (sparse sampling) increases the intrinsic resolution in the time domain (by increasing t1max), IST fills the gaps in the sparse sampling, and CRAFT processing extracts the information without loss due to severe apodization. NUS and CRAFT are thus complementary techniques for improving intrinsic and usable resolution. We show that significant improvement can be achieved with this combination over conventional NUS-IST processing. With reasonable sensitivity, the models can be extended to significantly higher t1max to generate an indirect-DEPT spectrum that rivals the direct-observe counterpart. Copyright © 2017 John Wiley & Sons, Ltd.

  12. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions. II. A simplified implementation.

    Science.gov (United States)

    Tao, Guohua; Miller, William H

    2012-09-28

    An efficient time-dependent (TD) Monte Carlo (MC) importance sampling method has recently been developed [G. Tao and W. H. Miller, J. Chem. Phys. 135, 024104 (2011)] for the evaluation of time correlation functions using the semiclassical (SC) initial value representation (IVR) methodology. In this TD-SC-IVR method, the MC sampling uses information from both time-evolved phase points as well as their initial values, and only the "important" trajectories are sampled frequently. Even though the TD-SC-IVR was shown in some benchmark examples to be much more efficient than the traditional time-independent sampling method (which uses only initial conditions), the calculation of the SC prefactor, which is computationally expensive especially for large systems, is still required for accepted trajectories. In the present work, we present an approximate implementation of the TD-SC-IVR method that is completely prefactor-free; it gives the time correlation function as a classical-like magnitude function multiplied by a phase function. Application of this approach to flux-flux correlation functions (which yield reaction rate constants) for the benchmark H + H2 system shows very good agreement with exact quantum results. Limitations of the approximate approach are also discussed.

  13. Timed Comparisons of Semi-Markov Processes

    DEFF Research Database (Denmark)

    Pedersen, Mathias Ruggaard; Larsen, Kim Guldstrand; Bacci, Giorgio

    2018-01-01

    We consider semi-Markov processes, and investigate the question of how to compare two semi-Markov processes with respect to their time-dependent behaviour. To this end, we introduce the relation of being "faster than" between processes and study its algorithmic complexity. Through a connection to probabilistic automata we obtain...

  14. 32 CFR 806.27 - Samples of Air Force FOIA processing documents.

    Science.gov (United States)

    2010-07-01

    ... ADMINISTRATION AIR FORCE FREEDOM OF INFORMATION ACT PROGRAM § 806.27 Samples of Air Force FOIA processing... determination within 20 workdays, we have instituted multitrack processing of requests. Based on the information... source; responsive records were part of the Air Force's decision-making process, and the prerelease...

  15. Processing scarce biological samples for light and transmission electron microscopy

    Directory of Open Access Journals (Sweden)

    P Taupin

    2008-06-01

    Light microscopy (LM) and transmission electron microscopy (TEM) aim at understanding the structure-function relationship. With advances in biology, isolation and purification of scarce populations of cells or subcellular structures may not yield enough biological material for processing for LM and TEM. A protocol for the preparation of scarce biological samples is presented. It is based on pre-embedding the biological samples, suspensions or pellets, in bovine serum albumin (BSA) and bis-acrylamide (BA), cross-linked and polymerized. This preparation provides a simple and reproducible technique to process biological materials, present in limited quantities that cannot be amplified, for light and transmission electron microscopy.

  16. Finite element simulation of the T-shaped ECAP processing of round samples

    Science.gov (United States)

    Shaban Ghazani, Mehdi; Fardi-Ilkhchy, Ali; Binesh, Behzad

    2018-05-01

    Grain refinement is the only mechanism that increases the yield strength and toughness of materials simultaneously. Severe plastic deformation is one of the promising methods for refining the microstructure of materials. Among the different severe plastic deformation processes, T-shaped equal channel angular pressing (T-ECAP) is a relatively new technique. In the present study, finite element analysis was conducted to evaluate the deformation behavior of metals during the T-ECAP process. The study focused mainly on flow characteristics, plastic strain distribution and its homogeneity, damage development, and pressing force, which are among the most important factors governing the sound and successful processing of nanostructured materials by severe plastic deformation techniques. The results showed that plastic strain is localized in the bottom side of the sample and that uniform deformation is not possible with T-ECAP processing. The friction coefficient between the sample and the die channel wall has little effect on the strain distributions in the mirror and transverse planes of the deformed sample. Damage analysis showed that superficial cracks may initiate from the bottom side of the sample and that their propagation will be limited due to the compressive state of stress. It was demonstrated that a V-shaped deformation zone exists in the T-ECAP process and that the pressing load needed to execute the deformation process increases with friction.

  17. Solar Ion Processing of Itokawa Grains: Reconciling Model Predictions with Sample Observations

    Science.gov (United States)

    Christoffersen, Roy; Keller, L. P.

    2014-01-01

    Analytical TEM observations of Itokawa grains reported to date show complex solar wind ion processing effects in the outer 30-100 nm of pyroxene and olivine grains. The effects include loss of long-range structural order, formation of isolated internal cavities or "bubbles", and other nanoscale compositional/microstructural variations. None of the effects so far described have, however, included complete ion-induced amorphization. To link the array of observed relationships to grain surface exposure times, we have adapted our previous numerical model for progressive solar ion processing effects in lunar regolith grains to the Itokawa samples. The model uses SRIM ion collision damage and implantation calculations within the framework of a constant-deposited-energy model for amorphization. Inputs include experimentally measured amorphization fluences, a π steradian variable ion incidence geometry required for a rotating asteroid, and a numerical flux-versus-velocity solar wind spectrum.

  18. Comparison of correlation analysis techniques for irregularly sampled time series

    Directory of Open Access Journals (Sweden)

    K. Rehfeld

    2011-06-01

    Geoscientific measurements often provide time series with irregular time sampling, requiring either data reconstruction (interpolation) or sophisticated methods to handle the irregular sampling. We compare the linear interpolation technique and different approaches for analyzing the correlation functions and persistence of irregularly sampled time series, such as Lomb-Scargle Fourier transformation and kernel-based methods. In a thorough benchmark test we investigate the performance of these techniques.

    All methods have comparable root mean square errors (RMSEs) for low skewness of the inter-observation time distribution. For high skewness, i.e. very irregular data, interpolation bias and RMSE increase strongly. We find a 40 % lower RMSE for the lag-1 autocorrelation function (ACF) for the Gaussian kernel method vs. the linear interpolation scheme in the analysis of highly irregular time series. For the cross-correlation function (CCF) the RMSE is then lower by 60 %. The application of the Lomb-Scargle technique gave results comparable to the kernel methods in the univariate case, but poorer results in the bivariate case. Especially the high-frequency components of the signal, where classical methods show a strong bias in ACF and CCF magnitude, are preserved when using the kernel methods.

    We illustrate the performance of interpolation vs. the Gaussian kernel method by applying both to paleo-data from four locations, reflecting late Holocene Asian monsoon variability as derived from speleothem δ18O measurements. Cross-correlation results are similar for both methods, which we attribute to the long time scales of the common variability. The persistence time (memory) is strongly overestimated when using the standard, interpolation-based approach. Hence, the Gaussian kernel is a reliable and more robust estimator with significant advantages compared to other techniques and is suitable for large-scale application to paleo-data.
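
    A minimal sketch of a Gaussian-kernel correlation estimator of the kind discussed above: instead of interpolating, every pair of observations is weighted by how closely its time difference matches the requested lag. The kernel width and all data below are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np

def gaussian_kernel_xcf(tx, x, ty, y, lags, sigma=None):
    """Cross-correlation of two irregularly sampled series without
    interpolation: pairs of observations are weighted by a Gaussian
    kernel on the mismatch between their time difference and the lag."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    d = ty[None, :] - tx[:, None]            # all pairwise time differences
    if sigma is None:
        sigma = 0.25 * np.mean(np.diff(tx))  # kernel width ~ mean spacing (assumed)
    out = []
    for h in lags:
        w = np.exp(-0.5 * ((d - h) / sigma) ** 2)
        out.append(np.sum(w * np.outer(x, y)) / np.sum(w))
    return np.array(out)

# Toy data: two noisy sinusoids at irregular times, y lagging x by ~3.
rng = np.random.default_rng(2)
tx = np.sort(rng.uniform(0, 100, 120))
ty = np.sort(rng.uniform(0, 100, 150))
x = np.sin(2 * np.pi * tx / 20) + 0.3 * rng.normal(size=tx.size)
y = np.sin(2 * np.pi * (ty - 3) / 20) + 0.3 * rng.normal(size=ty.size)
lags = np.arange(-10, 11)
print("peak correlation at lag", lags[np.argmax(gaussian_kernel_xcf(tx, x, ty, y, lags))])
```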

  19. Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering

    Science.gov (United States)

    Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki

    2018-03-01

    We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
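
    The combination rule at the heart of multiple importance sampling is the balance heuristic, in which each sample is weighted by its strategy's share of the total sampling density. The sketch below shows the generic estimator on a 1-D toy integrand; it is illustrative only and not the paper's SSAO shader code, and all functions and densities are invented for the example.

```python
import random, math

def mis_estimate(f, pdf_a, sample_a, pdf_b, sample_b, n):
    """Multiple importance sampling with the balance heuristic: combine a
    stratified strategy (a) and an importance strategy (b); each sample is
    weighted by its pdf's share of the total pdf at that point."""
    total = 0.0
    for i in range(n):
        xa = sample_a(i, n)                  # stratified draw
        wa = pdf_a(xa) / (pdf_a(xa) + pdf_b(xa))
        total += wa * f(xa) / pdf_a(xa)
        xb = sample_b()                      # importance draw
        wb = pdf_b(xb) / (pdf_a(xb) + pdf_b(xb))
        total += wb * f(xb) / pdf_b(xb)
    return total / n

# Toy integrand on [0, 1]: a sharp peak plus a smooth base.
f = lambda x: math.exp(-200 * (x - 0.6) ** 2) + 0.1
pdf_a = lambda x: 1.0                        # uniform density, stratified draws
sample_a = lambda i, n: (i + random.random()) / n
pdf_b = lambda x: 2 * x                      # importance density favoring large x
sample_b = lambda: math.sqrt(random.random())
print(mis_estimate(f, pdf_a, sample_a, pdf_b, sample_b, 20000))  # ~0.225
```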

  20. Influences of sampling effort on detected patterns and structuring processes of a Neotropical plant-hummingbird network.

    Science.gov (United States)

    Vizentin-Bugoni, Jeferson; Maruyama, Pietro K; Debastiani, Vanderlei J; Duarte, L da S; Dalsgaard, Bo; Sazima, Marlies

    2016-01-01

    Virtually all empirical ecological interaction networks to some extent suffer from undersampling. However, how limitations imposed by sampling incompleteness affect our understanding of ecological networks is still poorly explored, which may hinder further advances in the field. Here, we use a plant-hummingbird network with unprecedented sampling effort (2716 h of focal observations) from the Atlantic Rainforest in Brazil, to investigate how sampling effort affects the description of network structure (i.e. widely used network metrics) and the relative importance of distinct processes (i.e. species abundances vs. traits) in determining the frequency of pairwise interactions. By dividing the network into time slices representing a gradient of sampling effort, we show that quantitative metrics, such as interaction evenness, specialization (H2'), weighted nestedness (wNODF) and modularity (Q; QuanBiMo algorithm), were less biased by sampling incompleteness than binary metrics. Furthermore, the significance of some network metrics changed along the sampling effort gradient. Nevertheless, the higher importance of traits in structuring the network was apparent even with small sampling effort. Our results (i) warn against using very poorly sampled networks as this may bias our understanding of networks, both their patterns and structuring processes, (ii) encourage the use of quantitative metrics little influenced by sampling when performing spatio-temporal comparisons and (iii) indicate that in networks strongly constrained by species traits, such as plant-hummingbird networks, even small sampling is sufficient to detect their relative importance for the frequencies of interactions. Finally, we argue that similar effects of sampling are expected for other highly specialized subnetworks. © 2015 The Authors. Journal of Animal Ecology © 2015 British Ecological Society.

  1. High throughput sample processing and automated scoring

    Directory of Open Access Journals (Sweden)

    Gunnar eBrunborg

    2014-10-01

    The comet assay is a sensitive and versatile method for assessing DNA damage in cells. In the traditional version of the assay, there are many manual steps involved and few samples can be treated in one experiment. High-throughput modifications have been developed during recent years, and they are reviewed and discussed here. These modifications include accelerated scoring of comets; other important elements that have been studied and adapted to high throughput are cultivation and manipulation of cells or tissues before and after exposure, and freezing of treated samples until comet analysis and scoring. High-throughput methods save time and money, but they are useful also for other reasons: large-scale experiments may be performed which are otherwise not practicable (e.g., analysis of many organs from exposed animals, and human biomonitoring studies), and automation gives more uniform sample treatment and less dependence on operator performance. The high-throughput modifications now available vary largely in their versatility, capacity, complexity and costs. The bottleneck for further increase of throughput appears to be the scoring.

  2. A Process For Performance Evaluation Of Real-Time Systems

    Directory of Open Access Journals (Sweden)

    Andrew J. Kornecki

    2003-12-01

    Real-time developers and engineers must not only meet the system functional requirements, but also stringent timing requirements. One of the critical decisions leading to meeting these timing requirements is the selection of an operating system under which the software will be developed and run. Although there is ample documentation on real-time systems performance and evaluation, little can be found that combines such information into an efficient process for use by developers. As the software industry moves towards clearly defined processes, the creation of appropriate guidelines describing a process for performance evaluation of real-time systems would greatly benefit real-time developers. This technology transition research focuses on developing such a process. PROPERT (PROcess for Performance Evaluation of Real Time systems), the process described in this paper, is based upon established techniques for evaluating real-time systems. It organizes already existing real-time performance criteria and assessment techniques in a manner consistent with a well-formed process, based on the Personal Software Process concepts.

  3. Data Set for the manuscript entitled, "Sample Processing Approach for Detection of Ricin in Surface Samples."

    Data.gov (United States)

    U.S. Environmental Protection Agency — Figure. This dataset is associated with the following publication: Shah, S., S. Kane, A.M. Erler, and T. Alfaro. Sample Processing Approach for Detection of Ricin in...

  4. Signal processing of data from short sample tests for the projection of conductor performance in ITER magnets

    International Nuclear Information System (INIS)

    Martovetsky, Nicolai

    2008-01-01

    Qualification of the ITER conductor is absolutely necessary. Testing large-scale conductors is expensive and time-consuming. Testing 3-4 m long straight samples in the bore of a split solenoid is relatively economical in comparison with fabricating a coil to be tested in the bore of a background-field solenoid. However, testing short samples may give ambiguous results due to different constraints on current redistribution in the cable, or other end effects which are not present in the large magnet. This paper discusses the processes taking place in the ITER conductor, the conditions under which conductor performance could be distorted, and possible signal processing to deduce the behaviour of ITER conductors in ITER magnets from the test data.

  5. Real-time radar signal processing using GPGPU (general-purpose graphic processing unit)

    Science.gov (United States)

    Kong, Fanxing; Zhang, Yan Rockee; Cai, Jingxiao; Palmer, Robert D.

    2016-05-01

    This study introduces a practical approach to developing a real-time signal processing chain for a general phased array radar on NVIDIA GPUs (graphics processing units) using CUDA (Compute Unified Device Architecture) libraries such as cuBLAS and cuFFT, which are adopted from open-source libraries and optimized for NVIDIA GPUs. The processed results are rigorously verified against those from the CPUs. Performance, benchmarked in computation time for various input data cube sizes, is compared across GPUs and CPUs. Through the analysis, it is demonstrated that GPGPU (general-purpose GPU) real-time processing of array radar data is possible with relatively low-cost commercial GPUs.
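
    A sketch of one such operation, pulse compression via FFT-based matched filtering, is shown below using CuPy, whose FFT routines dispatch to cuFFT. CuPy is a substitution for illustration; the study programmed the CUDA libraries directly, and the array sizes and the chirp are hypothetical.

```python
import numpy as np
import cupy as cp  # GPU arrays; cp.fft dispatches to cuFFT under the hood

def pulse_compress(echoes, chirp):
    """Matched-filter a batch of radar echoes against the transmit chirp.
    echoes: (n_pulses, n_range) complex array already on the GPU."""
    n = echoes.shape[-1]
    ref = cp.conj(cp.fft.fft(chirp, n))          # matched filter spectrum
    return cp.fft.ifft(cp.fft.fft(echoes, axis=-1) * ref, axis=-1)

# Hypothetical data cube: 64 pulses x 4096 range gates.
rng = np.random.default_rng(3)
cube = rng.normal(size=(64, 4096)) + 1j * rng.normal(size=(64, 4096))
t = np.arange(128)
chirp = np.exp(1j * np.pi * 0.01 * t**2)         # linear FM reference pulse
out = pulse_compress(cp.asarray(cube), cp.asarray(chirp))
result = cp.asnumpy(out)                         # copy back for CPU verification
print(result.shape)
```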

  6. Time delay of quantum scattering processes

    International Nuclear Information System (INIS)

    Martin, P.A.

    1981-01-01

    The author presents various aspects of the theory of the time delay of scattering processes, mainly studying non-relativistic two-body scattering processes and first summarizing briefly the theory of simple scattering systems. (Auth.)

  7. Off-line real-time FTIR analysis of a process step in imipenem production

    Science.gov (United States)

    Boaz, Jhansi R.; Thomas, Scott M.; Meyerhoffer, Steven M.; Staskiewicz, Steven J.; Lynch, Joseph E.; Egan, Richard S.; Ellison, Dean K.

    1992-08-01

    We have developed an FT-IR method, using a Spectra-Tech Monit-IR 400 system, to monitor off-line the completion of a reaction in real time. The reaction is moisture-sensitive and analysis by more conventional methods (normal-phase HPLC) is difficult to reproduce. The FT-IR method is based on the shift of a diazo band when a conjugated beta-diketone is transformed into a silyl enol ether during the reaction. The reaction mixture is examined directly by IR and does not require sample workup. Data acquisition time is less than one minute. The method has been validated for specificity, precision and accuracy. The results obtained by the FT-IR method for known mixtures and in-process samples compare favorably with those from a normal-phase HPLC method.

  8. The psychophysiology of real-time financial risk processing.

    Science.gov (United States)

    Lo, Andrew W; Repin, Dmitry V

    2002-04-01

    A longstanding controversy in economics and finance is whether financial markets are governed by rational forces or by emotional responses. We study the importance of emotion in the decision-making process of professional securities traders by measuring their physiological characteristics (e.g., skin conductance, blood volume pulse, etc.) during live trading sessions while simultaneously capturing real-time prices from which market events can be detected. In a sample of 10 traders, we find statistically significant differences in mean electrodermal responses during transient market events relative to no-event control periods, and statistically significant mean changes in cardiovascular variables during periods of heightened market volatility relative to normal-volatility control periods. We also observe significant differences in these physiological responses across the 10 traders that may be systematically related to the traders' levels of experience.

  9. Multi-time-over-threshold technique for photomultiplier signal processing: Description and characterization of the SCOTT ASIC

    International Nuclear Information System (INIS)

    Ferry, S.; Guilloux, F.; Anvar, S.; Chateau, F.; Delagnes, E.; Gautard, V.; Louis, F.; Monmarthe, E.; Le Provost, H.; Russo, S.; Schuller, J.-P.; Stolarczyk, Th.; Vallage, B.; Zonca, E.

    2012-01-01

    KM3NeT aims to build a cubic-kilometer scale neutrino telescope in the Mediterranean Sea based on a 3D array of photomultiplier tubes. A dedicated ASIC, named SCOTT, has been developed for the readout electronics of the PMTs: it uses up to 16 adjustable thresholds to digitize the signals with the multi-time-over-threshold technique. Digital outputs of the discriminators feed a circular sampling memory and a "first in, first out" digital memory for derandomization. At the end of the data processing, the ASIC produces a digital waveform sampled at 800 MHz. A specific study was carried out to process the PMT data and has shown that five specifically chosen thresholds suffice to reach the required timing precision. A dedicated method based on the duration of the signal over a given threshold allows an equivalent timing precision at any charge. A charge estimator using the information from the thresholds allows charge determination to within less than 20% up to 60 photoelectrons (pe).

  10. Multi-time-over-threshold technique for photomultiplier signal processing: Description and characterization of the SCOTT ASIC

    Science.gov (United States)

    Ferry, S.; Guilloux, F.; Anvar, S.; Chateau, F.; Delagnes, E.; Gautard, V.; Louis, F.; Monmarthe, E.; Le Provost, H.; Russo, S.; Schuller, J.-P.; Stolarczyk, Th.; Vallage, B.; Zonca, E.; Representing the KM3NeT Consortium

    2012-12-01

    KM3NeT aims to build a cubic-kilometer scale neutrino telescope in the Mediterranean Sea based on a 3D array of photomultiplier tubes. A dedicated ASIC, named SCOTT, has been developed for the readout electronics of the PMTs: it uses up to 16 adjustable thresholds to digitize the signals with the multi-time-over-threshold technique. Digital outputs of the discriminators feed a circular sampling memory and a "first in, first out" digital memory for derandomization. At the end of the data processing, the ASIC produces a digital waveform sampled at 800 MHz. A specific study was carried out to process the PMT data and has shown that five specifically chosen thresholds suffice to reach the required timing precision. A dedicated method based on the duration of the signal over a given threshold allows an equivalent timing precision at any charge. A charge estimator using the information from the thresholds allows charge determination to within less than 20% up to 60 photoelectrons (pe).

  11. Real-Time Processing Library for Open-Source Hardware Biomedical Sensors.

    Science.gov (United States)

    Molina-Cantero, Alberto J; Castro-García, Juan A; Lebrato-Vázquez, Clara; Gómez-González, Isabel M; Merino-Monge, Manuel

    2018-03-29

    Applications involving data acquisition from sensors need samples at a preset frequency rate, the filtering out of noise, and/or analysis of certain frequency components. We propose a novel software architecture based on open-source hardware platforms which allows programmers to create data streams from input channels and easily implement filters and frequency analysis objects. The performance of the different classes, in terms of allocated memory and execution time (number of clock cycles), was analyzed on the low-cost platform Arduino Genuino. In addition, 11 people took part in an experiment in which they had to implement several exercises and complete a usability test. Sampling rates under 250 Hz (typical for many biomedical applications) make it feasible to implement filters, sliding windows and Fourier analysis operating in real time. Participants rated software usability at 70.2 out of 100, and the ease of use when implementing several signal processing applications was rated at just over 4.4 out of 5. Participants showed their intention of using this software because it was perceived as useful and very easy to use. The performance of the library showed that it may be appropriate for implementing small biomedical real-time applications or for human movement monitoring, even on a simple open-source hardware device like the Arduino Genuino.
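
    The stream-of-processing-objects architecture described above can be sketched as follows. This is an illustrative Python rendering, not the library's actual C++/Arduino API; all class and parameter names are invented for the example.

```python
from collections import deque
import math

class MovingAverage:
    """Simple FIR smoothing filter over the last k samples."""
    def __init__(self, k):
        self.buf = deque(maxlen=k)
    def process(self, x):
        self.buf.append(x)
        return sum(self.buf) / len(self.buf)

class GoertzelBin:
    """Power of one frequency component over a sliding window; a cheap
    alternative to a full FFT when only a single bin is needed."""
    def __init__(self, freq, fs, n):
        self.coeff = 2 * math.cos(2 * math.pi * freq / fs)
        self.window = deque(maxlen=n)
    def process(self, x):
        self.window.append(x)
        s_prev = s_prev2 = 0.0
        for v in self.window:                 # Goertzel recurrence
            s = v + self.coeff * s_prev - s_prev2
            s_prev2, s_prev = s_prev, s
        return s_prev**2 + s_prev2**2 - self.coeff * s_prev * s_prev2

# Chain the objects on a 250 Hz input stream, as in the described library.
fs = 250.0
stages = [MovingAverage(5), GoertzelBin(freq=10.0, fs=fs, n=50)]
for i in range(500):
    x = math.sin(2 * math.pi * 10.0 * i / fs)  # synthetic 10 Hz signal
    for stage in stages:
        x = stage.process(x)
print("10 Hz bin power:", x)
```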

  12. MODIS Time Series to Detect Anthropogenic Interventions and Degradation Processes in Tropical Pasture

    Directory of Open Access Journals (Sweden)

    Daniel Alves Aguiar

    2017-01-01

    The unavoidable diet change in emerging countries, projected for the coming years, will significantly increase the global consumption of animal protein. It is expected that Brazilian livestock production, responsible for close to 15% of global production, will be prepared to meet the increasing demand for beef. Consequently, the evaluation of pasture quality at regional scale is important to inform public policies towards a rational land use strategy directed at improving livestock productivity in the country. Our hypothesis is that MODIS images can be used to evaluate the processes of degradation, restoration and renovation of tropical pastures. To test this hypothesis, two field campaigns were performed covering a route of approximately 40,000 km through nine Brazilian states. To characterize the sampled pastures, biophysical parameters were measured and observations about the pastures, the adopted management and the landscape were collected. Each sampled pasture was evaluated using a time series of MODIS EVI2 images from 2000-2012, according to a new protocol based on seven phenological metrics, 14 Boolean criteria and two numerical criteria. The theoretical basis of this protocol was derived from interviews with producers and livestock experts during a third field campaign. The analysis of the MODIS EVI2 time series provided valuable historical information on the type of intervention and on the biological degradation process of the sampled pastures. Of the 782 pastures sampled, 26.6% experienced some type of intervention, 19.1% were under biological degradation, and 54.3% presented neither intervention nor a trend of biomass decrease during the period analyzed.
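
    As a toy illustration of how a degradation trend can be flagged from an EVI2 series, the sketch below fits a least-squares linear trend and tests its slope. This is a deliberately simplified stand-in for the paper's protocol of seven phenological metrics and 16 Boolean/numerical criteria; the threshold and data are hypothetical.

```python
import numpy as np

def degradation_flag(evi2, years, slope_thresh=-0.005):
    """Flag a pasture as a degradation candidate if its EVI2 series shows
    a persistent negative linear trend (least-squares slope per year
    below slope_thresh)."""
    slope, _intercept = np.polyfit(years, evi2, deg=1)
    return slope < slope_thresh, slope

# Hypothetical 2000-2012 series: mean annual EVI2 slowly declining.
years = np.arange(2000, 2013, dtype=float)
rng = np.random.default_rng(4)
evi2 = 0.45 - 0.008 * (years - 2000) + 0.01 * rng.normal(size=years.size)
flag, slope = degradation_flag(evi2, years)
print(f"degradation candidate: {flag}, trend = {slope:.4f} EVI2/yr")
```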

  13. Process evaluation of treatment times in a large radiotherapy department

    International Nuclear Information System (INIS)

    Beech, R.; Burgess, K.; Stratford, J.

    2016-01-01

    Purpose/objective: The Department of Health (DH) recognises access to appropriate and timely radiotherapy (RT) services as crucial in improving cancer patient outcomes, especially when facing a predicted increase in cancer diagnoses. There is a lack of ‘real-time’ data regarding the daily demand on a linear accelerator, the impact of increasingly complex techniques on treatment times, and whether current scheduling reflects the time needed for RT delivery, all of which would be valuable in describing current RT provision. Material/methods: A systematic quantitative process evaluation was undertaken in a large regional cancer centre, including a satellite centre, between January and April 2014. Data collected included treatment room-occupancy time, RT site, RT and verification technique and patient mobility status. Data were analysed descriptively; average room-occupancy times were calculated for RT techniques and compared to historical standardised treatment times within the department. Results: Room-occupancy was recorded for over 1300 fractions, over 50% of which overran their allotted treatment time. In a focused sample of 16 common techniques, 10 overran their allocated timeslots. Verification increased room-occupancy by six minutes (50%) over non-imaging. Treatments for patients requiring mobility assistance took four minutes (29%) longer. Conclusion: The majority of treatments overran their standardised timeslots. Although technique advancement has reduced RT delivery time, room-occupancy has not necessarily decreased. Verification increases room-occupancy and needs to be considered when moving towards adaptive techniques. Mobility affects room-occupancy and will become increasingly significant in an ageing population. This evaluation assesses the validity of current treatment times in this department, and can be modified and repeated as necessary. - Highlights: • A process evaluation examined room-occupancy for various radiotherapy techniques. • Appointment lengths

  14. Flow time prediction for a single-server order picking workstation using aggregate process times

    NARCIS (Netherlands)

    Andriansyah, R.; Etman, L.F.P.; Rooda, J.E.

    2010-01-01

    In this paper we propose a simulation modeling approach based on aggregate process times for the performance analysis of order picking workstations in automated warehouses. The aggregate process time distribution is calculated from tote arrival and departure times. We refer to the aggregate process

  15. Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models

    Science.gov (United States)

    Price, Larry R.

    2012-01-01

    The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…

  16. Highly oriented Bi-system bulk sample prepared by a decomposition-crystallization process

    International Nuclear Information System (INIS)

    Xi Zhengping; Zhou Lian; Ji Chunlin

    1992-01-01

    A decomposition-crystallization method for preparing highly oriented Bi-system bulk samples is reported. The effects of the processing parameters (decomposition temperature, cooling rate and post-treatment conditions) on texture and superconductivity are investigated. The method has successfully prepared highly textured Bi-system bulk samples. High-temperature annealing does not destroy the growing texture, but the cooling rate has some effect on texture and superconductivity. Annealing in an N2/O2 atmosphere can improve the superconductivity of the textured sample. The superconductivity of the Bi(Pb)-Sr-Ca-Cu-O bulk material has been studied in numerous papers. Research on Jc concentrates on tapes containing the 2223 phase, with very few studies on the Jc of bulk samples; the reason for this lack of studies is that the behaviour of the superconducting phases at high temperatures was not known. The authors have previously reported that the 2212 phase melts incongruently at about 875 degrees C and orients its c-axis perpendicular to the surface during crystallization of the 2212 phase. Based on that result, a decomposition-crystallization method was proposed to prepare highly oriented Bi-system bulk samples. In this paper, the process is described in detail and the effects of the processing parameters on texture and superconductivity are reported.

  17. Soil Gas Sample Handling: Evaluation of Water Removal and Sample Ganging

    Energy Technology Data Exchange (ETDEWEB)

    Fritz, Brad G. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Abrecht, David G. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hayes, James C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Mendoza, Donaldo P. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-10-31

    Soil gas sampling is currently conducted in support of Nuclear Test Ban Treaty verification. Soil gas samples are collected and analyzed for isotopes of interest. Some issues that can impact the sampling and analysis of these samples are excess moisture and sample processing time. Here we discuss three potential improvements to the current sampling protocol: a desiccant for water removal, use of a molecular sieve to remove CO2 from the sample during collection, and a ganging manifold to allow composite analysis of multiple samples.

  18. The real-time fitting of radioactive decay curves. Pt. 3. Counting during sampling

    International Nuclear Information System (INIS)

    Hartley, B.M.

    1994-01-01

    An analysis of a least-squares method for the real-time fitting of the theoretical total count function to the actual total count from radioactive decays has been given previously for the case where counting takes place after a sample is taken. The counting may be done in a number of different counting systems which distinguish between different types or energies of radiation emitted from the sample. The method allows real-time determination of the numbers of atoms, and hence the activities, of the individual isotopes present, and has been designated the Time Evolved Least-Squares (TELS) method. If the radioactivity to be measured exists as an aerosol, or in a form where a sample is taken at a constant rate, it may be possible to count during sampling and thereby reduce the total time required to determine the activity of the individual isotopes present. The TELS method is extended here to the case where counting and the evaluation of the activity take place concurrently with the sampling. The functions which need to be evaluated are derived and the calculations required to implement the method are discussed. As with the TELS method of counting after sampling, the technique of counting during sampling and simultaneous evaluation of activity could be achieved in real time. Results of testing the method by computer simulation for two counting schemes for the descendants of radon are presented. ((orig.))
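
    For a single isotope counted after sampling, the expected cumulative count is C(t) = (A0/lambda)(1 - exp(-lambda*t)) for initial activity A0 and decay constant lambda. A minimal sketch of fitting this total-count function by least squares (a single-isotope illustration, not the multi-isotope TELS formulation itself):

        # Least-squares fit of the cumulative count curve for one isotope.
        import numpy as np
        from scipy.optimize import curve_fit

        def total_count(t, A0, lam):
            """Expected cumulative counts from an isotope of initial activity A0 (Bq)."""
            return (A0 / lam) * (1.0 - np.exp(-lam * t))

        lam_true, A0_true = np.log(2) / 600.0, 50.0   # 10-min half-life, 50 Bq
        t = np.linspace(0, 1800, 60)                  # 30 min of measurements
        counts = np.random.poisson(total_count(t, A0_true, lam_true))

        (A0_fit, lam_fit), _cov = curve_fit(total_count, t, counts, p0=[30.0, 1e-3])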

  19. Data processing system for real-time control

    International Nuclear Information System (INIS)

    Oasa, K.; Mochizuki, O.; Toyokawa, R.; Yahiro, K.

    1983-01-01

    Real-time control of the large tokamak JT-60 requires various data processing between the diagnostic devices and the control system. This processing must be fast enough to provide the information necessary for feedback control during discharges. The architecture of this system is therefore a hierarchical structure of processors, connected to each other by CAMAC modules and an optical communication network, a 5 Mbyte/s CAMAC serial highway. The system has two kinds of intelligence for this purpose. One is the ACM-PU pairs in some torus hall crates, each consisting of a microcomputerized auxiliary controller and a preprocessing unit. The other is the real-time processor, which has a minicomputer and a preprocessing unit. Most of the real-time processing, for example Abel inversion, is specific to the diagnostic devices; such processing is carried out by an ACM-PU pair in the crate dedicated to the diagnostic device. Some processing, however, computes secondary parameters as functions of primary parameters. A typical example is Zeff, which is a function of Te, Ne and bremsstrahlung intensity. The real-time processor is equipped for such secondary processing and transfers the results. The preprocessing unit (PU) attached to the ACM and to the real-time processor contains a signal processor, which executes in parallel such functions as move, add and multiply during one 200 ns micro-instruction cycle. As the experiment progressed, higher-speed processing was required, so the authors developed the PU-X module, which contains multiple signal processors. After a shot, the inter-shot processor, which consists of general-purpose computers, gathers the data into the database, analyzes them, and improves these processes to be more effective.
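
    As an illustration of such a secondary computation: visible bremsstrahlung emissivity scales roughly as Zeff*ne^2/sqrt(Te), so Zeff can be backed out from the measured intensity. The calibration constant and Gaunt-factor handling below are assumptions for illustration, not JT-60's actual algorithm:

        # Hedged sketch: Zeff from bremsstrahlung intensity, electron density and
        # temperature. C_CAL is a hypothetical instrument calibration constant.
        import math

        C_CAL = 5.0e-36   # hypothetical calibration (detector- and wavelength-specific)

        def z_eff(i_brem, n_e, t_e, gaunt=1.0):
            """Effective charge from bremsstrahlung emissivity i_brem (W m^-3),
            electron density n_e (m^-3) and electron temperature t_e (eV)."""
            return i_brem * math.sqrt(t_e) / (C_CAL * gaunt * n_e ** 2)

        # Example with illustrative plasma parameters: prints roughly 2.
        print(z_eff(i_brem=2.0e2, n_e=3.0e19, t_e=2.0e3))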

  20. Discrete-Time Mixing Receiver Architecture for RF-Sampling Software-Defined Radio

    NARCIS (Netherlands)

    Ru, Z.; Klumperink, Eric A.M.; Nauta, Bram

    2010-01-01

    A discrete-time (DT) mixing architecture for RF-sampling receivers is presented. This architecture makes RF sampling more suitable for software-defined radio (SDR) as it achieves wideband quadrature demodulation and wideband harmonic rejection. The paper consists of two parts. In the first

  1. Advanced radar-interpretation of InSAR time series for mapping and characterization of geological processes

    OpenAIRE

    Cigna, F.; Del Ventisette, C.; Liguori, V.; Casagli, N.

    2011-01-01

    We present a new post-processing methodology for the analysis of multi-temporal InSAR (Synthetic Aperture Radar Interferometry) measurements, based on the temporal under-sampling of displacement time series, the identification of potential changes occurring during the monitoring period and, eventually, the classification of different deformation behaviours. The potential of this approach for the analysis of geological processes was tested on the case study of Naro (Italy), specifically selected...

  2. Relationships between processing delay and microbial load of broiler neck skin samples.

    Science.gov (United States)

    Lucianez, A; Holmes, M A; Tucker, A W

    2010-01-01

    The measurable microbial load on poultry carcasses during processing is determined by a number of factors, including farm of origin, processing hygiene, and external temperature. This study investigated associations between carcass microbial load and progressive delays to processing. A total of 30 carcasses were delayed immediately after defeathering and before evisceration in a commercial abattoir, in groups of five, and were held at ambient temperature for 1, 2, 3, 4, 6, and 8 h. Delayed carcasses were reintroduced to the processing line, and quantitative assessment of total viable count, coliforms, Staphylococcus aureus, and Pseudomonas spp. was undertaken on neck skin flap samples collected after carcass chilling and then pooled for each group. Sampling was repeated on 5 separate days, and the data were combined. Significant increases in total viable count (P = 0.001) and coliforms (P = 0.004), but not in S. aureus or Pseudomonas loads, were observed across the 8-h period of delay. In line with previous studies, there was significant variation in microbiological data according to sampling day. In conclusion, there is a significant and measurable decline in the microbiological status of uneviscerated but defeathered poultry carcasses after an 8-h delay, but the variability of sampling results, reflecting the wide range of factors that affect microbial load, means that it is not possible to determine maximum or minimum acceptable periods of processing delay based on this criterion alone.

  3. Acoustic Sample Deposition MALDI-MS (ASD-MALDI-MS): A Novel Process Flow for Quality Control Screening of Compound Libraries.

    Science.gov (United States)

    Chin, Jefferson; Wood, Elizabeth; Peters, Grace S; Drexler, Dieter M

    2016-02-01

    In the early stages of drug discovery, high-throughput screening (HTS) of compound libraries against pharmaceutical targets is a common method to identify potential lead molecules. For these HTS campaigns to be efficient and successful, continuous quality control of the compound collection is necessary and crucial. However, the large number of compound samples and the limited sample amount pose unique challenges. Presented here is a proof-of-concept study for a novel process flow for the quality control screening of small-molecule compound libraries that consumes only minimal amounts of samples and affords compound-specific molecular data. This process employs an acoustic sample deposition (ASD) technique for the offline sample preparation by depositing nanoliter volumes in an array format onto microscope glass slides followed by matrix-assisted laser desorption/ionization mass spectrometric (MALDI-MS) analysis. An initial study of a 384-compound array employing the ASD-MALDI-MS workflow resulted in a 75% first-pass positive identification rate with an analysis time of <1 s per sample. © 2015 Society for Laboratory Automation and Screening.

  4. Time processing in dyscalculia.

    Science.gov (United States)

    Cappelletti, Marinella; Freeman, Elliot D; Butterworth, Brian L

    2011-01-01

    To test whether atypical number development may affect other types of quantity processing, we investigated temporal discrimination in adults with developmental dyscalculia (DD). This also allowed us to test whether number and time are subserved by a common quantity system or decision mechanisms: if they are, both should be impaired in dyscalculia, but if number and time are distinct they should dissociate. Participants judged which of two successively presented horizontal lines was longer in duration, the first line being preceded by either a small or a large number prime ("1" or "9") or by a neutral symbol ("#"); in a third task participants decided which of two Arabic numbers (either "1," "5," "9") lasted longer. Results showed that (i) DDs' temporal discriminability was normal as long as numbers were not part of the experimental design, even as task-irrelevant stimuli; however (ii) task-irrelevant numbers dramatically disrupted DDs' temporal discriminability the more their salience increased, though the actual magnitude of the numbers had no effect; in contrast (iii) controls' time perception was robust to the presence of numbers but modulated by numerical quantity: small number primes or numerical stimuli made durations appear shorter than veridical, whereas larger ones made them appear longer. This study is the first to show spared temporal discrimination - a dimension of continuous quantity - in a population with a congenital number impairment. Our data reinforce the idea of a partially shared quantity system across numerical and temporal dimensions, which supports both dissociations and interactions among dimensions; however, they suggest that impaired number processing in DD is unlikely to originate in systems initially dedicated to continuous quantity processing such as time.

  5. Sampling phased array a new technique for signal processing and ultrasonic imaging

    OpenAIRE

    Bulavinov, A.; Joneit, D.; Kröning, M.; Bernus, L.; Dalichow, M.H.; Reddy, K.M.

    2006-01-01

    Different signal processing and image reconstruction techniques are applied in ultrasonic non-destructive material evaluation. In recent years, rapid development in the fields of microelectronics and computer engineering has led to the wide application of phased array systems. A new phased array technique, called the "Sampling Phased Array", has been developed at the Fraunhofer Institute for Non-Destructive Testing. It realizes a unique approach to the measurement and processing of ultrasonic signals. The sampling...

  6. Determination of the Isotope Ratio for Metal Samples Using a Laser Ablation/Ionization Time-of-flight Mass Spectrometry

    International Nuclear Information System (INIS)

    Song, Kyu Seok; Cha, Hyung Ki; Kim, Duk Hyeon; Min, Ki Hyun

    2004-01-01

    Laser ablation/ionization time-of-flight mass spectrometry is applied to the isotopic analysis of solid samples using a home-made instrument. The technique is convenient for solid sample analysis due to the one-step process of vaporization and ionization of the samples. The analyzed samples were lead, cadmium, molybdenum, and ytterbium. To optimize the analytical conditions of the technique, several parameters were varied: laser energy, laser wavelength, size of the laser beam on the sample surface, and the high voltages applied to the ion source electrodes. Low laser energy was necessary to obtain the optimal mass resolution of the spectra. The 532 nm light generated mass spectra with a higher signal-to-noise ratio than the 355 nm light. The best mass resolution obtained in the present study is ∼1,500 for ytterbium.
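
    In a linear TOF analyzer the flight time scales as the square root of the mass-to-charge ratio, t = t0 + a*sqrt(m/z), so two known calibrant peaks suffice to solve for the offset t0 and the slope a. A minimal sketch (the flight times below are invented example values, not data from this instrument):

        # Two-point TOF mass calibration: t = t0 + a*sqrt(m/z).
        import math

        def calibrate(t1, mz1, t2, mz2):
            """Solve t = t0 + a*sqrt(m/z) from two known calibrant peaks."""
            a = (t2 - t1) / (math.sqrt(mz2) - math.sqrt(mz1))
            t0 = t1 - a * math.sqrt(mz1)
            return t0, a

        def mass_from_time(t, t0, a):
            return ((t - t0) / a) ** 2

        # Hypothetical calibrant flight times (microseconds) for 111Cd+ and 208Pb+.
        t0, a = calibrate(14.80, 110.904, 20.25, 207.977)
        print(mass_from_time(18.50, t0, a))   # m/z of an unknown peak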

  7. 12 CFR 404.5 - Time for processing.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 4 2010-01-01 Time for processing. 404.5 Section 404.5 Banks and Banking EXPORT-IMPORT BANK OF THE UNITED STATES INFORMATION DISCLOSURE Procedures for Disclosure of Records Under the Freedom of Information Act. § 404.5 Time for processing. (a) General. Ex-Im Bank...

  8. Role of aging time on the magnetic properties of Sm2Co17 permanent magnets processed through cold isostatic pressing

    Science.gov (United States)

    Ramudu, M.; Rajkumar, D. M.

    2018-04-01

    The effect of aging time on the magnetic properties of Sm2Co17 permanent magnets processed through a novel method of cold isostatic pressing was investigated. Sintered Sm2Co17 samples were subjected to different aging times in the range of 10-30 h and their respective microstructures were correlated with the magnetic properties obtained. The values of remanent magnetization (Br) were observed to be constant in samples aged from 10 to 20 h, beyond which a gradual decrease in Br values was observed. The values of coercivity (Hc) displayed a sharp increase in samples aged from 10 to 20 h, beyond which the coercivity values showed only marginal improvement. Hence a good combination of magnetic properties could be achieved in samples aged for 20 h. A maximum energy product of 27 MGOe was achieved in the 20 h aged sample processed through this novel route.

  9. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    Science.gov (United States)

    Almutairy, Meznah; Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.
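
    A minimal sketch of the two sampling schemes being compared (the window parameter names are mine): fixed sampling keeps every s-th k-mer start position, while minimizer sampling keeps the lexicographically smallest k-mer in each window of w consecutive k-mers.

        # Fixed sampling vs. minimizer sampling of k-mer start positions.
        def fixed_sampling(seq, k, s):
            """Keep every s-th k-mer start position."""
            return set(range(0, len(seq) - k + 1, s))

        def minimizer_sampling(seq, k, w):
            """Keep the position of the smallest k-mer in each window of w k-mers."""
            kept = set()
            for win_start in range(len(seq) - k - w + 2):
                positions = range(win_start, win_start + w)
                kept.add(min(positions, key=lambda p: seq[p:p + k]))
            return kept

        seq = "ACGTACGGTCACGTTACGATTACCGGTA"
        print(len(fixed_sampling(seq, k=5, s=4)), len(minimizer_sampling(seq, k=5, w=4)))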

  12. Expectation propagation for continuous time stochastic processes

    International Nuclear Information System (INIS)

    Cseke, Botond; Schnoerr, David; Sanguinetti, Guido; Opper, Manfred

    2016-01-01

    We consider the inverse problem of reconstructing the posterior measure over the trajectories of a diffusion process from discrete time observations and continuous time constraints. We cast the problem in a Bayesian framework and derive approximations to the posterior distributions of single time marginals using variational approximate inference, giving rise to an expectation propagation type algorithm. For non-linear diffusion processes, this is achieved by leveraging moment closure approximations. We then show how the approximation can be extended to a wide class of discrete-state Markov jump processes by making use of the chemical Langevin equation. Our empirical results show that the proposed method is computationally efficient and provides good approximations for these classes of inverse problems. (paper)

  13. Real-time measurement and control at JET signal processing and physics analysis for diagnostics

    International Nuclear Information System (INIS)

    Felton, R.; Joffrin, E.; Murari, A.

    2005-01-01

    To meet the requirements of the scientific programme, the EFDA JET real-time measurement and control project has developed an integrated set of real-time plasma measurements, experiment control and communication facilities. Traditional experiments collected instrument data during the plasma pulse and calculated physics data after the pulse. The challenge for continuous tokamak operation is to calculate the physics data in real time, keeping up with the evolution of the plasma. In JET, many plasma diagnostics have been augmented with extra data acquisition and signal-processing systems so that they can both capture instrument data for conventional post-pulse analysis and calculate calibrated, validated physics results in real time. During the pulse, the systems send sampled data sets into a network, which distributes the data to several destinations. The receiving systems may do further analysis, integrating data from several measurements, or may control the plasma scenario by heating or fuelling. The simplest real-time diagnostic systems apply scale factors to the signals, as with the electron cyclotron emission (ECE) diagnostic's 96 tuned radiometer channels, giving the electron temperature profile. In various spectroscopy diagnostics, spectral features are least-squares-fitted to measured spectra from several lines of sight within 50 ms; ion temperatures and rotation speeds can be calculated from the line widths and shifts. For diagnostics using modulation techniques, the systems implement digital-signal-processing phase trackers, lock-in amplifiers and filters; for example, the far infrared (FIR) interferometer samples 15 channels at 400 kHz for 30 s, i.e., six million samples per second. Diagnostics have specific lines of sight, spatial channels, and various sampling rates. The heating/fuelling systems have relatively coarse spatial localisation. Analysis systems have been developed to integrate the basic physics data into smaller sets of controllable parameters on a
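
    For the modulation-based diagnostics, a digital lock-in amplifier is conceptually simple: multiply the signal by in-phase and quadrature references at the modulation frequency and low-pass filter the products. A minimal sketch (a whole-record average stands in for whatever filter such a system actually uses):

        # Digital lock-in amplifier sketch: recover amplitude and phase of a
        # component at a known modulation frequency f_mod.
        import numpy as np

        def lock_in(signal, fs, f_mod):
            t = np.arange(len(signal)) / fs
            i = signal * np.cos(2 * np.pi * f_mod * t)   # in-phase product
            q = signal * np.sin(2 * np.pi * f_mod * t)   # quadrature product
            x, y = 2 * i.mean(), 2 * q.mean()            # crude low-pass filter
            return np.hypot(x, y), np.arctan2(-y, x)     # amplitude, phase

        fs, f_mod = 400e3, 10e3                          # 400 kHz sampling, 10 kHz mod
        t = np.arange(40000) / fs
        sig = 0.7 * np.cos(2 * np.pi * f_mod * t + 0.3) + np.random.normal(0, 0.5, t.size)
        amp, phase = lock_in(sig, fs, f_mod)             # recovers ~0.7 and ~0.3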

  14. Electro-optic sampling for time resolving relativistic ultrafast electron diffraction

    International Nuclear Information System (INIS)

    Scoby, C. M.; Musumeci, P.; Moody, J.; Gutierrez, M.; Tran, T.

    2009-01-01

    The Pegasus laboratory at UCLA features a state-of-the-art electron photoinjector capable of producing ultrashort (<100 fs) high-brightness electron bunches at energies of 3.75 MeV. These beams have recently been used to produce static diffraction patterns from scattering off thin metal foils, and it is foreseen to take advantage of the ultrashort nature of these bunches in future pump-probe time-resolved diffraction studies. In this paper, single-shot 2-d electro-optic sampling is presented as a potential technique for time-of-arrival stamping of electron bunches used for diffraction. The effects of relatively low bunch charge (a few tens of pC) and modestly relativistic beams are discussed, and background compensation techniques to obtain a high signal-to-noise ratio are explored. From these preliminary tests, electro-optic sampling appears suitable as a reliable, nondestructive time-stamping method for relativistic ultrafast electron diffraction at the Pegasus lab.

  15. Down sampled signal processing for a B Factory bunch-by-bunch feedback system

    International Nuclear Information System (INIS)

    Hindi, H.; Hosseini, W.; Briggs, D.; Fox, J.; Hutton, A.

    1992-03-01

    A bunch-by-bunch feedback scheme is studied for damping coupled-bunch synchrotron oscillations in the proposed PEP II B Factory. The quasi-linear feedback system design incorporates a phase detector to provide a quantized measure of bunch phase, digital signal processing to compute an error correction signal, and a kicker system to correct the energy of the bunches. A farm of digital processors, operating in parallel, is proposed to compute correction signals for the 1658 bunches of the B Factory. This paper studies the use of down-sampled processing to reduce the computational complexity of the feedback system. We present simulation results showing the effect of down sampling on beam dynamics. The results show that down-sampled processing can reduce the scale of the processing task by a factor of 10.
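
    A minimal sketch of the down-sampling idea (the decimation factor, gain and hold scheme are illustrative assumptions, not the PEP II design): average blocks of D consecutive turns of the bunch phase signal, compute a correction on the slow stream, and hold it for D turns.

        # Down-sampled feedback sketch: decimate the bunch phase signal by D,
        # compute a proportional correction at the slow rate, then hold it.
        import numpy as np

        def downsampled_correction(phase_by_turn, D, gain):
            n_blocks = len(phase_by_turn) // D
            blocks = phase_by_turn[:n_blocks * D].reshape(n_blocks, D)
            slow = blocks.mean(axis=1)            # decimated phase estimate
            corr = -gain * slow                   # correction at the slow rate
            return np.repeat(corr, D)             # zero-order hold back to turn rate

        turns = np.sin(2 * np.pi * 0.01 * np.arange(1000))   # synchrotron oscillation
        kick = downsampled_correction(turns, D=10, gain=0.1)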

  16. Advanced Map For Real-Time Process Control

    Science.gov (United States)

    Shiobara, Yasuhisa; Matsudaira, Takayuki; Sashida, Yoshio; Chikuma, Makoto

    1987-10-01

    MAP, a communications protocol for factory automation proposed by General Motors [1], has been accepted by users throughout the world and is rapidly becoming a user standard. In fact, it is now a LAN standard for factory automation. MAP is intended to interconnect different devices, such as computers and programmable devices, made by different manufacturers, enabling them to exchange information. It is based on the OSI intercomputer communications protocol standard under development by the ISO. With progress and standardization, MAP is being investigated for application to process control fields other than factory automation [2]. The transmission response time of the network system and centralized management of data exchanged with various devices for distributed control are important in the case of real-time process control with programmable controllers, computers, and instruments connected to a LAN system. MAP/EPA and MINI MAP aim at reduced overhead in protocol processing and enhanced transmission response. If applied to real-time process control, a protocol based on point-to-point and request-response transactions limits throughput and transmission response. This paper describes an advanced MAP LAN system applied to real-time process control by adding a new data transmission control that performs multicasting communication voluntarily and periodically in the priority order of the data to be exchanged.

  17. On the record process of time-reversible spectrally-negative Markov additive processes

    NARCIS (Netherlands)

    J. Ivanovs; M.R.H. Mandjes (Michel)

    2009-01-01

    We study the record process of a spectrally-negative Markov additive process (MAP). Assuming time-reversibility, a number of key quantities can be given explicitly. It is shown how these key quantities can be used when analyzing the distribution of the all-time maximum attained by MAPs.

  18. Design and relevant sample calculations for a neutral particle energy diagnostic based on time of flight

    Energy Technology Data Exchange (ETDEWEB)

    Cecconello, M

    1999-05-01

    Extrap T2 will be equipped with a neutral particle energy diagnostic based on the time-of-flight technique. In this report, the expected neutral fluxes for Extrap T2 are estimated and discussed in order to determine the feasibility and the limits of such a diagnostic. These estimates are based on a 1D model of the plasma. The input parameters of the model are the density and temperature radial profiles of electrons and ions and the density of neutrals at the edge and in the centre of the plasma. The atomic processes included in the model are charge-exchange and electron-impact ionization. The results indicate that the plasma attenuation length varies from a/5 to a, a being the minor radius. Differential neutral fluxes, as well as the estimated power losses due to CX processes (2% of the input power), are in agreement with experimental results obtained in similar devices. The expected impurity influxes vary from 10^14 to 10^11 cm^-2 s^-1. The neutral particle detection and acquisition systems are discussed. The maximum detectable energy varies from 1 to 3 keV depending on the flight distance d. The time resolution is 0.5 ms. Output signals from the waveform recorder are foreseen in the range 0-200 mV. An 8-bit waveform recorder with a 2 MHz sampling frequency and a memory capacity of 100K samples is the minimum requirement for the acquisition system. 20 refs, 19 figs.
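
    The underlying relation is the non-relativistic kinetic energy of a neutral traversing a flight distance d in a time t, E = (1/2) m (d/t)^2. A small sketch (the flight distance is an assumed example value):

        # Neutral particle energy from time of flight: E = 0.5*m*(d/t)^2.
        M_H = 1.6726e-27      # hydrogen (proton) mass, kg
        EV = 1.6022e-19       # joules per electronvolt

        def energy_ev(d, t, m=M_H):
            """Kinetic energy (eV) of a neutral flying distance d (m) in time t (s)."""
            return 0.5 * m * (d / t) ** 2 / EV

        # Example: over a 1 m flight path, a ~1 keV hydrogen atom arrives in ~2.3 us.
        print(energy_ev(d=1.0, t=2.3e-6))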

  19. Gaussian process based intelligent sampling for measuring nano-structure surfaces

    Science.gov (United States)

    Sun, L. J.; Ren, M. J.; Yin, Y. H.

    2016-09-01

    Nanotechnology is the science and engineering of manipulating matter at the nano scale, which can be used to create many new materials and devices with a vast range of applications. As nanotech products increasingly enter the commercial marketplace, nanometrology becomes a stringent and enabling technology for the manipulation and quality control of nanotechnology. However, many measuring instruments, for instance scanning probe microscopes, are limited to relatively small areas of hundreds of micrometers with very low efficiency. Therefore intelligent sampling strategies are required to improve the scanning efficiency for measuring large areas. This paper presents a Gaussian process based intelligent sampling method to address this problem. The method uses Gaussian process based Bayesian regression as a mathematical foundation to represent the surface geometry, and the posterior estimate of the Gaussian process is computed by combining the prior probability distribution with the likelihood function. Each sampling point is then adaptively selected by determining the position that is most likely to lie outside the required tolerance zone among the candidates, and is inserted to update the model iteratively. Simulations on both nominal and manufactured nano-structure surfaces have been conducted to verify the validity of the proposed method. The results imply that the proposed method significantly improves the measurement efficiency in measuring large-area structured surfaces.
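
    A minimal sketch of this kind of adaptive selection using scikit-learn (a plain maximum-predictive-uncertainty criterion stands in for the paper's tolerance-zone criterion, and the test surface is invented):

        # Adaptive sampling sketch: repeatedly measure where the GP is least certain.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def surface(x):                      # stand-in for the real measurement
            return np.sin(3 * x) + 0.5 * np.sin(7 * x)

        candidates = np.linspace(0, 2, 200).reshape(-1, 1)
        X = np.array([[0.0], [1.0], [2.0]])  # initial sample positions
        y = surface(X).ravel()

        gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-6)
        for _ in range(10):
            gpr.fit(X, y)
            _mean, std = gpr.predict(candidates, return_std=True)
            x_next = candidates[np.argmax(std)]          # most uncertain position
            X = np.vstack([X, x_next])
            y = np.append(y, surface(x_next)[0])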

  20. Effect of Sample Storage Temperature and Time Delay on Blood Gases, Bicarbonate and pH in Human Arterial Blood Samples.

    Science.gov (United States)

    Mohammadhoseini, Elham; Safavi, Enayat; Seifi, Sepideh; Seifirad, Soroush; Firoozbakhsh, Shahram; Peiman, Soheil

    2015-03-01

    Results of arterial blood gas analysis can be biased by pre-analytical factors, such as the time interval before analysis, the temperature during storage and the syringe type. To investigate the effects of sample storage temperature and time delay on blood gas, bicarbonate and pH results in human arterial blood samples, 2.5 mL arterial blood samples were drawn from 45 patients via an indwelling intraarterial catheter. Each sample was divided into five equal samples and stored in multipurpose tuberculin plastic syringes. Blood gas analysis was performed on one of the five samples as soon as possible. The four other samples were divided into two groups stored at 22°C and 0°C, and blood gas analyses were repeated 30 and 60 minutes after sampling. The PaO2 of the samples stored at 0°C was increased significantly after 60 minutes (P = 0.007). The PaCO2 of the samples kept for 30 and 60 minutes at 22°C was significantly higher than the primary result (P = 0.04 and P < 0.05, respectively). In samples stored at 22°C, pH decreased significantly after 30 and 60 minutes (P = 0.017, P = 0.001). There were no other significant differences in the results of samples stored at 0°C or 22°C after 30 or 60 minutes. In samples stored in plastic syringes, overestimation of PaO2 levels should be expected if samples are cooled before analysis; it is not necessary to store samples in iced water when analysis is delayed by up to one hour.

  1. Estimation of time-delayed mutual information and bias for irregularly and sparsely sampled time-series

    International Nuclear Information System (INIS)

    Albers, D.J.; Hripcsak, George

    2012-01-01

    Highlights: ► Time-delayed mutual information for irregularly sampled time series. ► Estimation bias for the time-delayed mutual information calculation. ► A fast, simple, PDF-estimator-independent bias estimate for the time-delayed mutual information. ► Quantification of the data-set-size limits of the time-delayed mutual information calculation. - Abstract: A method to estimate the time-dependent correlation via an empirical bias estimate of the time-delayed mutual information for a time series is proposed. In particular, the bias of the time-delayed mutual information is shown to often be equivalent to the mutual information between two distributions of points from the same system separated by infinite time. Thus, intuitively, estimation of the bias reduces to estimation of the mutual information between distributions of data points separated by large time intervals. The proposed bias estimation techniques are shown to work for Lorenz equations data and for glucose time series data of three patients from the Columbia University Medical Center database.
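
    The practical recipe this implies: estimate the bias as the mutual information between points separated by a delay far longer than any correlation time, then subtract it. A sketch using histogram-based MI (the binning choices and toy series are mine):

        # Bias estimate for time-delayed mutual information: MI between points
        # separated by a delay much longer than the correlation time.
        import numpy as np
        from sklearn.metrics import mutual_info_score

        def delayed_mi(x, delay, bins=16):
            a, b = x[:-delay], x[delay:]
            a_d = np.digitize(a, np.histogram_bin_edges(a, bins))
            b_d = np.digitize(b, np.histogram_bin_edges(b, bins))
            return mutual_info_score(a_d, b_d)

        x = np.cumsum(np.random.normal(size=5000))   # toy correlated series
        mi_signal = delayed_mi(x, delay=10)
        mi_bias = delayed_mi(x, delay=4000)          # "infinite"-delay bias estimate
        print(mi_signal - mi_bias)                   # bias-corrected TDMI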

  2. Investigation of Bicycle Travel Time Estimation Using Bluetooth Sensors for Low Sampling Rates

    Directory of Open Access Journals (Sweden)

    Zhenyu Mei

    2014-10-01

    Full Text Available Filtering the bicycle travel time data collected by Bluetooth sensors is crucial to the estimation of link travel times on a corridor. The current paper describes an adaptive filtering algorithm for estimating bicycle travel times using Bluetooth data, with consideration of low sampling rates. Bicycle travel time data from Bluetooth sensors have two characteristics. First, the bicycle flow contains stable and unstable conditions. Second, the collected data have low sampling rates (less than 1%). To avoid erroneous inference, filters are introduced to "purify" multiple time series. The valid data are identified within a dynamically varying validity window with the use of a robust data-filtering procedure. The size of the validity window varies based on the number of preceding sampling intervals without a Bluetooth record. Applications of the proposed algorithm to the dataset from Genshan East Road and Moganshan Road in Hangzhou demonstrate its ability to track typical variations in bicycle travel time efficiently, while suppressing high-frequency noise signals.
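
    A minimal sketch of a validity-window filter in this spirit (the thresholds and window-growth rule are assumptions, not the paper's algorithm): accept an observation only if it lies within a tolerance band around the median of recently accepted values, and widen the band after empty sampling intervals.

        # Adaptive validity-window filter for sparse travel-time observations.
        from statistics import median

        def filter_travel_times(obs, base_tol=0.3, growth=0.1, warmup=3):
            """obs: list of (gap, travel_time); gap = empty intervals since last record.
            Accept a value if within a tolerance band around the running median."""
            valid = []
            for gap, t in obs:
                if len(valid) < warmup:
                    valid.append(t)
                    continue
                m = median(valid[-10:])                  # median of recent valid data
                tol = base_tol + growth * gap            # widen band after gaps
                if abs(t - m) <= tol * m:
                    valid.append(t)
            return valid

        obs = [(0, 120), (0, 118), (1, 125), (4, 240), (0, 122), (2, 119)]
        print(filter_travel_times(obs))   # the 240 s outlier is rejected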

  3. Real time loss detection for SNM in process

    International Nuclear Information System (INIS)

    Candy, J.V.; Dunn, D.R.; Gavel, D.T.

    1980-01-01

    This paper discusses the basis of a design for real-time special nuclear material (SNM) loss detectors. The design utilizes process measurements and signal processing techniques to produce a timely estimate of material loss. A state estimator is employed as the primary signal processing algorithm. Material loss is indicated by changes in the states or in the process innovations (residuals). The design philosophy is discussed in the context of these changes.
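
    A minimal sketch of innovation-based loss detection with a one-state Kalman filter (the plant model, noise levels and threshold are illustrative assumptions, not the paper's design): a statistically large normalized innovation signals an unexplained inventory change.

        # One-state Kalman filter; flag loss when a normalized innovation is large.
        import numpy as np

        def detect_loss(measurements, q=0.01, r=1.0, threshold=3.0):
            x, p = measurements[0], 1.0          # state estimate and its variance
            alarms = []
            for k, z in enumerate(measurements[1:], start=1):
                p += q                           # predict (inventory assumed constant)
                innov = z - x                    # innovation (residual)
                s = p + r                        # innovation variance
                if abs(innov) / np.sqrt(s) > threshold:
                    alarms.append(k)             # statistically unexpected change
                gain = p / s
                x += gain * innov                # update
                p *= (1 - gain)
            return alarms

        inv = np.concatenate([np.full(50, 100.0), np.full(50, 92.0)])  # 8-unit loss
        print(detect_loss(inv + np.random.normal(0, 1.0, inv.size)))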

  4. Learning Bounds of ERM Principle for Sequences of Time-Dependent Samples

    Directory of Open Access Journals (Sweden)

    Mingchen Yao

    2015-01-01

    Full Text Available Many generalization results in learning theory are established under the assumption that samples are independent and identically distributed (i.i.d.). However, numerous learning tasks in practical applications involve time-dependent data. In this paper, we propose a theoretical framework to analyze the generalization performance of the empirical risk minimization (ERM) principle for sequences of time-dependent samples (TDS). In particular, we first present the generalization bound of the ERM principle for TDS. By introducing some auxiliary quantities, we also give a further analysis of the generalization properties and the asymptotic behavior of the ERM principle for TDS.

  5. Automated processing of whole blood samples for the determination of immunosuppressants by liquid chromatography tandem-mass spectrometry.

    Science.gov (United States)

    Vogeser, Michael; Spöhrer, Ute

    2006-01-01

    Liquid chromatography tandem-mass spectrometry (LC-MS/MS) is an efficient technology for the routine determination of immunosuppressants in whole blood; however, time-consuming manual sample preparation remains a significant limitation of this technique. Using a commercially available robotic pipetting system (Tecan Freedom EVO), we developed an automated sample-preparation protocol for quantification of tacrolimus in whole blood by LC-MS/MS. Barcode reading, sample resuspension, transfer of whole blood aliquots into a deep-well plate, addition of internal standard solution, mixing, and protein precipitation by addition of an organic solvent are performed by the robotic system. After centrifugation of the plate, the deproteinized supernatants are submitted to on-line solid phase extraction, using column switching prior to LC-MS/MS analysis. The only manual actions within the entire process are decapping of the tubes, and transfer of the deep-well plate from the robotic system to a centrifuge and finally to the HPLC autosampler. Whole blood pools were used to assess the reproducibility of the entire analytical system for measuring tacrolimus concentrations. A total coefficient of variation of 1.7% was found for the entire automated analytical process (n=40; mean tacrolimus concentration, 5.3 microg/L). Close agreement between tacrolimus results obtained after manual and automated sample preparation was observed. The analytical system described here, comprising automated protein precipitation, on-line solid phase extraction and LC-MS/MS analysis, is convenient and precise, and minimizes hands-on time and the risk of mistakes in the quantification of whole blood immunosuppressant concentrations compared to conventional methods.

  6. Sample registration software for process automation in the Neutron Activation Analysis (NAA) Facility in Malaysia nuclear agency

    Energy Technology Data Exchange (ETDEWEB)

    Rahman, Nur Aira Abd, E-mail: nur-aira@nuclearmalaysia.gov.my; Yussup, Nolida; Ibrahim, Maslina Bt. Mohd; Mokhtar, Mukhlis B.; Soh Shaari, Syirrazie Bin Che; Azman, Azraf B. [Technical Support Division, Malaysian Nuclear Agency, 43000, Kajang, Selangor (Malaysia); Salim, Nazaratul Ashifa Bt. Abdullah [Division of Waste and Environmental Technology, Malaysian Nuclear Agency, 43000, Kajang, Selangor (Malaysia); Ismail, Nadiah Binti [Fakulti Kejuruteraan Elektrik, UiTM Pulau Pinang, 13500 Permatang Pauh, Pulau Pinang (Malaysia)

    2015-04-29

    Neutron Activation Analysis (NAA) has been established in Nuclear Malaysia since the 1980s. Most of the established procedures were carried out manually, including sample registration. Samples were recorded manually in a logbook and given an ID number; then all samples, standards, SRMs and blanks were recorded on the irradiation vial and on several forms prior to irradiation. These manual procedures carried out by the NAA laboratory personnel were time-consuming and inefficient. Sample registration software was developed as part of the IAEA/CRP project ‘Development of Process Automation in the Neutron Activation Analysis (NAA) Facility in Malaysia Nuclear Agency (RC17399)’. The objective of the project is to create PC-based data entry software for the sample preparation stage. This is an effective method to replace the redundant manual data entries that need to be completed by laboratory personnel. The software automatically generates a sample code for each sample in a batch, creates printable registration forms for administration purposes, and stores selected parameters that are passed to the sample analysis program. The software is developed using National Instruments LabVIEW 8.6.
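
    A minimal sketch of the batch-registration idea in Python (the code format and fields are invented for illustration; the actual software is a LabVIEW application):

        # Hypothetical batch sample-code generator in the spirit described above.
        from datetime import date

        KIND_CODES = {"sample": "S", "standard": "T", "SRM": "R", "blank": "B"}

        def register_batch(batch_no, items):
            """items: list of (kind, description). Returns registration records."""
            stamp = date.today().strftime("%y%m%d")
            return [{"code": f"NAA-{stamp}-{batch_no:03d}-{KIND_CODES[kind]}{i:02d}",
                     "kind": kind, "description": desc}
                    for i, (kind, desc) in enumerate(items, start=1)]

        batch = register_batch(7, [("sample", "river sediment"),
                                   ("standard", "Au flux monitor"),
                                   ("blank", "empty vial")])
        for r in batch:
            print(r["code"], "-", r["description"])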

  8. Double Shell Tank (DST) Process Waste Sampling Subsystem Definition Report

    International Nuclear Information System (INIS)

    RASMUSSEN, J.H.

    2000-01-01

    This report defines the Double-Shell Tank (DST) Process Waste Sampling Subsystem (PWSS). This subsystem definition report fully describes and identifies the system boundaries of the PWSS. This definition provides a basis for developing functional, performance, and test requirements (i.e., subsystem specification), as necessary, for the PWSS. The resultant PWSS specification will include the sampling requirements to support the transfer of waste from the DSTs to the Privatization Contractor during Phase 1 of Waste Feed Delivery

  9. First Passage Time Intervals of Gaussian Processes

    Science.gov (United States)

    Perez, Hector; Kawabata, Tsutomu; Mimaki, Tadashi

    1987-08-01

    The first passage time problem of a stationary Gaussian process is theoretically and experimentally studied. Renewal functions are derived for a time-dependent boundary and numerically calculated for a Gaussian process having a seventh-order Butterworth spectrum. The results show a multipeak property not only for the constant boundary but also for a linearly increasing boundary. The first passage time distribution densities were experimentally determined for a constant boundary. The renewal functions were shown to be a fairly good approximation to the distribution density over a limited range.

  10. Inferring the demographic history from DNA sequences: An importance sampling approach based on non-homogeneous processes.

    Science.gov (United States)

    Ait Kaci Azzou, S; Larribe, F; Froda, S

    2016-10-01

    In Ait Kaci Azzou et al. (2015) we introduced an Importance Sampling (IS) approach for estimating the demographic history of a sample of DNA sequences, the skywis plot. More precisely, we proposed a new nonparametric estimate of a population size that changes over time. We showed on simulated data that the skywis plot can work well in typical situations where the effective population size does not undergo very steep changes. In this paper, we introduce an iterative procedure which extends the previous method and gives good estimates under such rapid variations. In the iterative calibrated skywis plot we approximate the effective population size by a piecewise constant function, whose values are re-estimated at each step. These piecewise constant functions are used to generate the waiting times of non-homogeneous Poisson processes related to a coalescent process with mutation under a variable population size model. Moreover, the present IS procedure is based on a modified version of the Stephens and Donnelly (2000) proposal distribution. Finally, we apply the iterative calibrated skywis plot method to a simulated data set from a rapidly expanding exponential model, and we show that the method based on this new IS strategy correctly reconstructs the demographic history. Copyright © 2016. Published by Elsevier Inc.

  11. A Non-Uniformly Under-Sampled Blade Tip-Timing Signal Reconstruction Method for Blade Vibration Monitoring

    Directory of Open Access Journals (Sweden)

    Zheng Hu

    2015-01-01

    Full Text Available High-speed blades are often prone to fatigue due to severe blade vibrations. In particular, synchronous vibrations can cause irreversible damage to the blade. Blade tip-timing (BTT) methods have become a promising way to monitor blade vibrations. However, synchronous vibrations cannot be adequately monitored by uniform BTT sampling, so non-equally mounted probes have been used, which results in a non-uniform sampling signal. Since under-sampling is an intrinsic drawback of BTT methods, analyzing non-uniformly under-sampled BTT signals is a major challenge. In this paper, a novel reconstruction method for non-uniformly under-sampled BTT data is presented. The method is based on the periodically non-uniform sampling theorem. First, a mathematical model of the non-uniform BTT sampling process is built; it can be treated as the sum of several uniform sample streams, and for each stream an interpolating function is required to prevent aliasing in the reconstructed signal. Second, simultaneous equations for all interpolating functions in each sub-band are built, and the corresponding solutions are derived to remove unwanted replicas of the original signal caused by the sampling, which may overlay the original signal. Finally, numerical simulations and experiments are carried out to validate the feasibility of the proposed method. The results demonstrate that the accuracy of the reconstructed signal depends on the sampling frequency, the blade vibration frequency, the blade vibration bandwidth, the probe static offset and the number of samples. In practice, both types of blade vibration signals can be reconstructed from non-uniform BTT data acquired from only two probes.

  12. Photonics-based real-time ultra-high-range-resolution radar with broadband signal generation and processing.

    Science.gov (United States)

    Zhang, Fangzheng; Guo, Qingshui; Pan, Shilong

    2017-10-23

    Real-time and high-resolution target detection is highly desirable in modern radar applications. Electronic techniques have encountered grave difficulties in the development of such radars, which strictly rely on a large instantaneous bandwidth. In this article, a photonics-based real-time high-range-resolution radar is proposed with optical generation and processing of broadband linear frequency modulation (LFM) signals. A broadband LFM signal is generated in the transmitter by photonic frequency quadrupling, and the received echo is de-chirped to a low-frequency signal by photonic frequency mixing. The system can operate at a high frequency and a large bandwidth while enabling real-time processing by low-speed analog-to-digital conversion and digital signal processing. A conceptual radar is established. Real-time processing of an 8-GHz LFM signal is achieved with a sampling rate of 500 MSa/s. Accurate distance measurement is implemented with a maximum error of 4 mm within a range of ~3.5 meters. Detection of two targets is demonstrated with a range resolution as high as 1.875 cm. We believe the proposed radar architecture is a reliable solution to overcome the limitations of current radars in operation bandwidth and processing speed, and it can hopefully be used in future radars for real-time, high-resolution target detection and imaging.
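
    The de-chirping arithmetic behind these numbers: a target at range R produces a beat frequency f_b = 2RB/(cT) after mixing with the reference chirp (bandwidth B, duration T), and the range resolution is c/(2B), which for B = 8 GHz gives the quoted 1.875 cm. A small sketch (the chirp duration is an assumed example value):

        # De-chirped LFM radar arithmetic: beat frequency <-> range.
        C = 299_792_458.0                 # speed of light, m/s

        def range_from_beat(f_beat, bandwidth, duration):
            """Target range from the de-chirped beat frequency (linear FM radar)."""
            return C * f_beat * duration / (2.0 * bandwidth)

        def range_resolution(bandwidth):
            return C / (2.0 * bandwidth)

        B, T = 8e9, 100e-6                   # 8 GHz chirp; 100 us duration (assumed)
        print(range_resolution(B))           # ~0.0187 m, the quoted 1.875 cm
        print(range_from_beat(2.8e6, B, T))  # a 2.8 MHz beat -> ~5.25 m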

  13. Particle Sampling and Real Time Size Distribution Measurement in H2/O2/TEOS Diffusion Flame

    International Nuclear Information System (INIS)

    Ahn, K.H.; Jung, C.H.; Choi, M.; Lee, J.S.

    2001-01-01

    Growth characteristics of silica particles have been studied experimentally using an in situ particle sampling technique in an H 2 /O 2 /tetraethylorthosilicate (TEOS) diffusion flame with a carefully devised sampling probe. Particle morphology and size are compared between particles sampled by the local thermophoretic method from inside the flame and by the electrostatic collector method after the dilution sampling probe. The Transmission Electron Microscope (TEM) image-processed data from these two sampling techniques are compared with Scanning Mobility Particle Sizer (SMPS) measurements and show good agreement. The effects of flame conditions and TEOS flow rates on silica particle size distributions are also investigated using the new particle dilution sampling probe. It is found that the particle size distribution characteristics and morphology are mostly governed by the coagulation and sintering processes in the flame. As the flame temperature increases, coalescence or sintering becomes an important particle growth mechanism that counteracts the coagulation process. However, if the flame temperature is not high enough to sinter the aggregated particles, coagulation is the dominant particle growth mechanism. Under certain flame conditions, secondary particle formation is observed, which results in a bimodal particle size distribution.

  14. Demonstration of the efficiency and robustness of an acid leaching process to remove metals from various CCA-treated wood samples.

    Science.gov (United States)

    Coudert, Lucie; Blais, Jean-François; Mercier, Guy; Cooper, Paul; Janin, Amélie; Gastonguay, Louis

    2014-01-01

    In recent years, an efficient and economically attractive leaching process has been developed to remove metals from copper-based treated wood wastes. This study explored the applicability of this leaching process to chromated copper arsenate (CCA) treated wood samples with different initial metal loadings and different elapsed times between wood preservation treatment and remediation. The sulfuric acid leaching process resulted in the solubilization of more than 87% of the As, 70% of the Cr, and 76% of the Cu from CCA chips, and in the solubilization of more than 96% of the As, 78% of the Cr and 91% of the Cu from CCA sawdust. The results showed that the performance of this leaching process might be influenced by the initial metal loading of the treated wood wastes and the elapsed time between preservation treatment and remediation. The effluents generated during the leaching steps were treated by precipitation-coagulation to satisfy the regulations for effluent discharge into municipal sewers. Precipitation using ferric chloride and sodium hydroxide was highly efficient, removing more than 99% of the As, Cr, and Cu. It appears that this leaching process can be successfully applied to remove metals from different CCA-treated wood samples and subsequently from the resulting effluents. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Quality evaluation of processed clay soil samples | Steiner-Asiedu ...

    African Journals Online (AJOL)

    Introduction: This study assessed the microbial quality of clay samples sold on two of the major Ghanaian markets. Methods: The study was a cross-sectional assessment of processed clay and of the effects it has on the nutrition of consumers in the capital of Ghana. The items for the examination were ...

  16. REAL-TIME PCR DETECTION OF LISTERIA MONOCYTOGENES IN FOOD SAMPLES OF ANIMAL ORIGIN

    Directory of Open Access Journals (Sweden)

    Jaroslav Pochop

    2013-02-01

    The aim of this study was to follow the contamination of food with Listeria monocytogenes by using StepOne real-time polymerase chain reaction (PCR). We used the PrepSEQ Rapid Spin Sample Preparation Kit for isolation of DNA and the SensiFAST SYBR Hi-ROX Kit for the real-time PCR performance. Of 24 samples (swabs) of food of animal origin analysed without incubation, strains of Listeria monocytogenes were detected in 15; nine samples were negative. Our results indicated that the real-time PCR assay developed in this study can sensitively detect Listeria monocytogenes in food of animal origin without incubation. This could prevent infection caused by Listeria monocytogenes, and could also benefit food manufacturing companies by extending their products' shelf-life as well as saving the cost of warehousing their food products while awaiting pathogen testing results. The rapid real-time PCR-based method performed very well compared to the conventional method. It is a fast, simple, specific and sensitive way to detect nucleic acids, which could be used in clinical diagnostic tests in the future.

  17. Time Clustered Sampling Can Inflate the Inferred Substitution Rate in Foot-And-Mouth Disease Virus Analyses.

    Science.gov (United States)

    Pedersen, Casper-Emil T; Frandsen, Peter; Wekesa, Sabenzia N; Heller, Rasmus; Sangula, Abraham K; Wadsworth, Jemma; Knowles, Nick J; Muwanika, Vincent B; Siegismund, Hans R

    2015-01-01

    With the emergence of analytical software for the inference of viral evolution, a number of studies have focused on estimating important parameters such as the substitution rate and the time to the most recent common ancestor (tMRCA) for rapidly evolving viruses. Coupled with an increasing abundance of sequence data sampled under widely different schemes, an effort to keep results consistent and comparable is needed. This study emphasizes commonly disregarded problems in the inference of evolutionary rates in viral sequence data when sampling is unevenly distributed on a temporal scale, through a study of the foot-and-mouth disease (FMD) virus serotypes SAT 1 and SAT 2. Our study shows that clustered temporal sampling in phylogenetic analyses of FMD viruses will strongly bias the inferences of substitution rates and tMRCA because the inferred rates in such data sets reflect a rate closer to the mutation rate than to the substitution rate. Estimating evolutionary parameters from viral sequences should be performed with due consideration of the differences between short-term and longer-term evolutionary processes occurring within sets of temporally sampled viruses, and studies should carefully consider how samples are combined.

  18. High-speed optical signal processing using time lenses

    DEFF Research Database (Denmark)

    Galili, Michael; Hu, Hao; Guan, Pengyu

    2015-01-01

    This paper will discuss time lenses and their broad range of applications. A number of recent demonstrations of complex high-speed optical signal processing using time lenses will be outlined with focus on the operating principle.

  19. DSMC multicomponent aerosol dynamics: Sampling algorithms and aerosol processes

    Science.gov (United States)

    Palaniswaamy, Geethpriya

    The post-accident nuclear reactor primary and containment environments can be characterized by high temperatures and pressures, and by fission products and nuclear aerosols. These aerosols evolve via natural transport processes as well as under the influence of engineered safety features. They can be hazardous and may pose a risk to the public if released into the environment. Computations of their evolution, movement and distribution involve the study of various processes such as coagulation, deposition and condensation, and are influenced by factors such as particle shape, charge, radioactivity and spatial inhomogeneity. These many factors make the numerical study of nuclear aerosol evolution computationally very complicated. The focus of this research is the use of the Direct Simulation Monte Carlo (DSMC) technique to elucidate the role of the various phenomena that influence nuclear aerosol evolution. In this research, several aerosol processes, such as coagulation, deposition, condensation and source reinforcement, are explored for a multi-component aerosol dynamics problem in a spatially homogeneous medium. Among the various sampling algorithms explored, the Metropolis sampling algorithm was found to be effective and fast. Several test problems and test cases are simulated using the DSMC technique. The DSMC results obtained are verified against analytical and sectional results for appropriate test problems. Results show that the assumption of a single mean density is not appropriate, due to the complicated effect of component densities on the aerosol processes. The methods developed and the insights gained will also be helpful in future research on the challenges associated with the description of fission product and aerosol releases.

  20. A test of alternative estimators for volume at time 1 from remeasured point samples

    Science.gov (United States)

    Francis A. Roesch; Edwin J. Green; Charles T. Scott

    1993-01-01

    Two estimators for volume at time 1 for use with permanent horizontal point samples are evaluated. One estimator, used traditionally, uses only the trees sampled at time 1, while the second estimator, originally presented by Roesch and coauthors (F.A. Roesch, Jr., E.J. Green, and C.T. Scott. 1989. For. Sci. 35(2):281-293), takes advantage of additional sample...

  1. Automated CBED processing: Sample thickness estimation based on analysis of zone-axis CBED pattern

    Energy Technology Data Exchange (ETDEWEB)

    Klinger, M., E-mail: klinger@post.cz; Němec, M.; Polívka, L.; Gärtnerová, V.; Jäger, A.

    2015-03-15

    An automated processing of convergent beam electron diffraction (CBED) patterns is presented. The proposed methods are used in an automated tool for estimating the thickness of transmission electron microscopy (TEM) samples by matching an experimental zone-axis CBED pattern with a series of patterns simulated for known thicknesses. The proposed tool detects CBED disks, localizes a pattern in the detected disks and unifies the coordinate system of the experimental pattern with the simulated one. The experimental pattern is then compared disk-by-disk with a series of simulated patterns, each corresponding to a different known thickness. The thickness of the most similar simulated pattern is then taken as the thickness estimate. The tool was tested on [0 1 1] Si, [0 1 0] α-Ti and [0 1 1] α-Ti samples prepared using different techniques. Results of the presented approach were compared with thickness estimates based on analysis of CBED patterns in two-beam conditions. The mean difference between these two methods was 4.1% for the FIB-prepared silicon samples, 5.2% for the electro-chemically polished titanium and 7.9% for Ar+ ion-polished titanium. The proposed techniques can also be employed in other established CBED analyses. Apart from thickness estimation, the approach can potentially be used to quantify lattice deformation, structure factors, symmetry, defects or extinction distance. Highlights: • Automated TEM sample thickness estimation using zone-axis CBED is presented. • Computer vision and artificial intelligence are employed in CBED processing. • This approach reduces operator effort and analysis time and increases repeatability. • Individual parts can be employed in other analyses of CBED/diffraction patterns.
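
    The matching step described above reduces to scoring the experimental pattern against simulations for a ladder of known thicknesses and keeping the best score. A minimal sketch of that selection logic (the correlation score and the dictionary layout are our illustrative assumptions, not the authors' implementation):

        import numpy as np

        def estimate_thickness(experimental, simulated_by_thickness):
            """Return the thickness whose simulated CBED pattern best matches
            the experimental one, scored by the Pearson correlation of the
            (already disk-aligned) intensity arrays."""
            e = (experimental - experimental.mean()) / experimental.std()
            best_t, best_score = None, -np.inf
            for thickness, sim in simulated_by_thickness.items():
                s = (sim - sim.mean()) / sim.std()
                score = float(np.mean(e * s))
                if score > best_score:
                    best_t, best_score = thickness, score
            return best_t, best_score

        # usage, with 2D intensity arrays on a unified coordinate system:
        # t_nm, score = estimate_thickness(exp_pattern, {50: sim_50, 60: sim_60})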

  2. Estimating time to pregnancy from current durations in a cross-sectional sample

    DEFF Research Database (Denmark)

    Keiding, Niels; Kvist, Kajsa; Hartvig, Helle

    2002-01-01

    A new design for estimating the distribution of time to pregnancy is proposed and investigated. The design is based on recording current durations in a cross-sectional sample of women, leading to statistical problems similar to estimating renewal time distributions from backward recurrence times....

  3. A Monte Carlo Sampling Technique for Multi-phonon Processes

    Energy Technology Data Exchange (ETDEWEB)

    Hoegberg, Thure

    1961-12-15

    A sampling technique for selecting scattering angle and energy gain in Monte Carlo calculations of neutron thermalization is described. It is supposed that the scattering is separated into processes involving different numbers of phonons. The number of phonons involved is first determined. Scattering angle and energy gain are then chosen by using special properties of the multi-phonon term.
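
    In outline, the technique is a two-stage draw: first select the number of phonons involved, then draw the scattering angle and energy gain from that term's conditional distribution. A schematic sketch under assumed per-term distributions (the report's actual multi-phonon kernels are not reproduced here):

        import numpy as np

        rng = np.random.default_rng(0)

        def sample_scattering(phonon_probs, samplers):
            """Two-stage draw: phonon number first, then (angle cosine,
            energy gain) from that term's conditional distribution."""
            n = rng.choice(len(phonon_probs), p=phonon_probs)
            mu, de = samplers[n]()
            return n + 1, mu, de

        # assumed illustrative term weights (1-, 2-, 3-phonon) and samplers
        probs = [0.6, 0.3, 0.1]
        samplers = [(lambda k=k: (rng.uniform(-1.0, 1.0),
                                  rng.exponential(0.01 * (k + 1))))
                    for k in range(3)]
        print(sample_scattering(probs, samplers))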

  4. Special study for the manual transfer of process samples from CPP [Chemical Processing Plant] 601 to RAL [Remote Analytical Laboratory]

    International Nuclear Information System (INIS)

    Marts, D.J.

    1987-05-01

    A study of alternative methods to manually transport radioactive samples from their glove boxes to the Remote Analytical Laboratory (RAL) was conducted at the Idaho National Engineering Laboratory. The study was performed to mitigate the effects of a potential loss of sampling capability if a malfunction in the Pneumatic Transfer System (PTS) occurred. Samples are required to be taken from the cell glove boxes and analyzed at the RAL regardless of the operational status of the PTS. This paper documents the conclusions of the study and how a decision was reached on the best scenarios for manually transporting 15 mL vials of liquid process samples from the K, W, U, WG, or WH cell glove boxes in the Chemical Processing Plant (CPP) 601 to the RAL. This study of methods to manually remove the samples from the glove boxes, package them for safe shipment, transport them by the safest route, receive them at the RAL, and safely unload them was conducted by EG and G Idaho, Inc., for Westinghouse Idaho Nuclear Company as part of the Glove Box Sampling and Transfer System Project for the Fuel Processing Facilities Upgrade, Task 10, Subtask 2. The study focused on the safest and most reliable scenarios that could be implemented using existing equipment. Hardware modifications and new hardware proposals were identified, and their impact on the handling scenario was evaluated. It was concluded that, using the existing facility hardware, these samples can be safely transported manually from the sampling stations in CPP 601 to the RAL, and that additional hardware could facilitate the transportation process even further.

  5. Biases in emotional processing are associated with vulnerability to eating disorders over time.

    Science.gov (United States)

    Pringle, A; Harmer, C J; Cooper, M J

    2011-01-01

    Biases in emotional processing are thought to play a role in the maintenance of eating disorders (EDs). In a previous study (Pringle et al., 2010), we were able to demonstrate that biases in the processing of negative self-beliefs (a self-schema processing task), facial expressions of emotion (a facial expression recognition task) and information relating to eating, shape and weight (an emotional Stroop) were all predictive of the level of subclinical ED symptoms (used here as a measure of risk), cross-sectionally, in a vulnerable sample of dieters. The present study was a 12-month follow-up of the participants from Pringle et al. (2010). Longitudinally, greater endorsement of ED-relevant and depression-relevant negative self-beliefs in the self-schema processing task at time 1 was related to subclinical ED symptoms (level of risk) 12 months later at time 2. Compared to the cross-sectional study, there was no clear relationship between performance on the facial expression recognition task or the emotional Stroop task and level of risk 12 months later. Although these findings are preliminary, one tentative interpretation may be that whilst biases in the processing of ED-specific stimuli are predictive of level of risk at a given moment, over time less specific stimuli relating to beliefs about the self, including mood-related variables, are more closely related to level of risk. Copyright © 2010 Elsevier Ltd. All rights reserved.

  6. Synchronization of Hierarchical Time-Varying Neural Networks Based on Asynchronous and Intermittent Sampled-Data Control.

    Science.gov (United States)

    Xiong, Wenjun; Patel, Ragini; Cao, Jinde; Zheng, Wei Xing

    In this brief, our purpose is to apply asynchronous and intermittent sampled-data control methods to achieve the synchronization of hierarchical time-varying neural networks. The asynchronous and intermittent sampled-data controllers are proposed for two reasons: 1) the controllers may not transmit the control information simultaneously and 2) the controllers cannot always exist at any time. The synchronization is then discussed for a kind of hierarchical time-varying neural networks based on the asynchronous and intermittent sampled-data controllers. Finally, simulation results are given to illustrate the usefulness of the developed criteria.

  7. First-Passage-Time Distribution for Variable-Diffusion Processes

    Science.gov (United States)

    Barney, Liberty; Gunaratne, Gemunu H.

    2017-05-01

    First-passage-time distribution, which presents the likelihood of a stock reaching a pre-specified price at a given time, is useful in establishing the value of financial instruments and in designing trading strategies. The first-passage-time distribution for Wiener processes has a single peak, while that for stocks exhibits a notable second peak within a trading day. This feature has only been discussed sporadically; it is often dismissed as due to insufficient or incorrect data, or circumvented by conversion to tick time, and to the best of our knowledge it has not been explained in terms of the underlying stochastic process. It was shown previously that intra-day variations in the market can be modeled by a stochastic process containing two variable-diffusion processes (Hua et al., Physica A 419:221-233, 2015). We show here that the first-passage-time distribution of this two-stage variable-diffusion model does exhibit a behavior similar to the empirical observation. In addition, we find that an extended model incorporating overnight price fluctuations exhibits intra- and inter-day behavior similar to those of empirical first-passage-time distributions.
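
    A first-passage-time distribution of this kind is straightforward to estimate by direct simulation: evolve each path, record the first time it crosses the pre-specified level, and histogram the results. A minimal Euler-Maruyama sketch for a generic diffusion (the constant volatility used here is a placeholder, not the paper's two-stage variable-diffusion model):

        import numpy as np

        rng = np.random.default_rng(1)

        def first_passage_times(x0, barrier, sigma, dt, t_max, n_paths):
            """First times at which diffusion paths starting at x0 reach `barrier`."""
            n_steps = int(t_max / dt)
            times = np.full(n_paths, np.nan)
            for i in range(n_paths):
                x = x0
                for k in range(n_steps):
                    x += sigma * np.sqrt(dt) * rng.standard_normal()
                    if x >= barrier:
                        times[i] = (k + 1) * dt
                        break
            return times[~np.isnan(times)]

        fpt = first_passage_times(x0=0.0, barrier=1.0, sigma=1.0,
                                  dt=1e-3, t_max=5.0, n_paths=2000)
        print(f"{fpt.size} of 2000 paths hit the barrier; median FPT = {np.median(fpt):.3f}")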

  8. Automated sampling and data processing derived from biomimetic membranes

    International Nuclear Information System (INIS)

    Perry, M; Vissing, T; Hansen, J S; Nielsen, C H; Boesen, T P; Emneus, J

    2009-01-01

    Recent advances in biomimetic membrane systems have resulted in an increase in membrane lifetimes from hours to days and months. Long-lived membrane systems demand the development of both new automated monitoring equipment capable of measuring electrophysiological membrane characteristics and new data processing software to analyze and organize the large amounts of data generated. In this work, we developed an automated instrumental voltage clamp solution based on a custom-designed software controller application (the WaveManager), which enables automated on-line voltage clamp data acquisition applicable to long time-series experiments. We designed another software program for off-line data processing. The automation of the on-line voltage clamp data acquisition and off-line processing was furthermore integrated with a searchable database (DiscoverySheet(TM)) for efficient data management. The combined solution provides a cost-efficient and fast way to acquire, process and administer large amounts of voltage clamp data that may be too laborious and time-consuming to handle manually. (communication)

  9. Automated sampling and data processing derived from biomimetic membranes

    Energy Technology Data Exchange (ETDEWEB)

    Perry, M; Vissing, T; Hansen, J S; Nielsen, C H [Aquaporin A/S, Diplomvej 377, DK-2800 Kgs. Lyngby (Denmark); Boesen, T P [Xefion ApS, Kildegaardsvej 8C, DK-2900 Hellerup (Denmark); Emneus, J, E-mail: Claus.Nielsen@fysik.dtu.d [DTU Nanotech, Technical University of Denmark, DK-2800 Kgs. Lyngby (Denmark)

    2009-12-15

    Recent advances in biomimetic membrane systems have resulted in an increase in membrane lifetimes from hours to days and months. Long-lived membrane systems demand the development of both new automated monitoring equipment capable of measuring electrophysiological membrane characteristics and new data processing software to analyze and organize the large amounts of data generated. In this work, we developed an automated instrumental voltage clamp solution based on a custom-designed software controller application (the WaveManager), which enables automated on-line voltage clamp data acquisition applicable to long time-series experiments. We designed another software program for off-line data processing. The automation of the on-line voltage clamp data acquisition and off-line processing was furthermore integrated with a searchable database (DiscoverySheet(TM)) for efficient data management. The combined solution provides a cost-efficient and fast way to acquire, process and administer large amounts of voltage clamp data that may be too laborious and time-consuming to handle manually. (communication)

  10. In-process weld sampling during hot end welds of type W overpacks

    International Nuclear Information System (INIS)

    Barnes, G.A.

    1998-01-01

    This document establishes the criteria and process controls to be used in obtaining, testing, and evaluating in-process weld samples during the hot end welding of Type W overpack capsules used to overpack CsCl capsules for storage at WESF.

  11. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    Science.gov (United States)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods to collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and, by extension, metadynamics, thus allowing simulations employing these methods to use similarly large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.

  12. Enhanced Time Out: An Improved Communication Process.

    Science.gov (United States)

    Nelson, Patricia E

    2017-06-01

    An enhanced time out is an improved communication process initiated to prevent such surgical errors as wrong-site, wrong-procedure, or wrong-patient surgery. The enhanced time out at my facility mandates participation from all members of the surgical team and requires designated members to respond to specified time out elements on the surgical safety checklist. The enhanced time out incorporated at my facility expands upon the safety measures from the World Health Organization's surgical safety checklist and ensures that all personnel involved in a surgical intervention perform a final check of relevant information. Initiating the enhanced time out at my facility was intended to improve communication and teamwork among surgical team members and provide a highly reliable safety process to prevent wrong-site, wrong-procedure, and wrong-patient surgery. Copyright © 2017 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  13. Microfabricated Devices for Sample Extraction, Concentrations, and Related Sample Processing Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Gang; Lin, Yuehe

    2006-12-01

    This is an invited book chapter. As with other analytical techniques, sample pretreatment, sample extraction, sample introduction, and related techniques are of extreme importance for micro-electro-mechanical systems (MEMS). Bio-MEMS devices and systems start with a sampling step. The biological sample then usually undergoes some kind of sample preparation before the actual analysis. These steps may involve extracting the target sample from its matrix, removing interferences from the sample, derivatizing the sample to detectable species, or performing a sample preconcentration step. The integration of components for sample pretreatment into microfluidic devices represents one of the remaining bottlenecks toward achieving true miniaturized total analysis systems (µTAS). This chapter provides a thorough state-of-the-art review of the developments in this field to date.

  14. Generalized Fractional Processes with Long Memory and Time Dependent Volatility Revisited

    Directory of Open Access Journals (Sweden)

    M. Shelton Peiris

    2016-09-01

    In recent years, fractionally-differenced processes have received a great deal of attention due to their flexibility in financial applications with long memory. This paper revisits the class of generalized fractionally-differenced processes generated by Gegenbauer polynomials and the ARMA structure (GARMA), with both long memory and time-dependent innovation variance. We establish the existence and uniqueness of second-order solutions. We also extend this family with innovations that follow GARCH and stochastic volatility (SV) processes. Under certain regularity conditions, we give asymptotic results for the approximate maximum likelihood estimator for the GARMA-GARCH model. We discuss a Monte Carlo likelihood method for the GARMA-SV model and investigate finite-sample properties via Monte Carlo experiments. Finally, we illustrate the usefulness of this approach using monthly inflation rates for France, Japan and the United States.
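
    For reference, the GARMA family discussed here is conventionally written with a Gegenbauer long-memory filter. A standard formulation (our transcription, with B the backshift operator, μ the mean, and φ, θ the AR and MA polynomials) is

        \phi(B)\,(1 - 2uB + B^{2})^{d}\,(X_t - \mu) = \theta(B)\,\varepsilon_t,
        \qquad |u| \le 1,

    where the innovations ε_t are allowed a time-dependent variance σ_t² (GARCH or SV), and long memory arises for suitable 0 < d < 1/2.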

  15. Discrete time process algebra and the semantics of SDL

    NARCIS (Netherlands)

    J.A. Bergstra; C.A. Middelburg; Y.S. Usenko (Yaroslav)

    1998-01-01

    We present an extension of discrete time process algebra with relative timing where recursion, propositional signals and conditions, a counting process creation operator, and the state operator are combined. Except for the counting process creation operator, which subsumes the original

  16. An empirical comparison of respondent-driven sampling, time location sampling, and snowball sampling for behavioral surveillance in men who have sex with men, Fortaleza, Brazil.

    Science.gov (United States)

    Kendall, Carl; Kerr, Ligia R F S; Gondim, Rogerio C; Werneck, Guilherme L; Macena, Raimunda Hermelinda Maia; Pontes, Marta Kerr; Johnston, Lisa G; Sabin, Keith; McFarland, Willi

    2008-07-01

    Obtaining samples of populations at risk for HIV challenges surveillance, prevention planning, and evaluation. Methods used include snowball sampling, time location sampling (TLS), and respondent-driven sampling (RDS). Few studies have made side-by-side comparisons to assess their relative advantages. We compared snowball, TLS, and RDS surveys of men who have sex with men (MSM) in Fortaleza, Brazil, with a focus on the socio-economic status (SES) and risk behaviors of the samples relative to each other, to known AIDS cases, and to the general population. RDS produced a sample with wider inclusion of lower SES than snowball sampling or TLS, a finding of health significance given that the majority of AIDS cases reported among MSM in the state were of low SES. RDS also achieved the sample size faster and at lower cost. For reasons of inclusion and cost-efficiency, RDS is the sampling methodology of choice for HIV surveillance of MSM in Fortaleza.

  17. Influence of sampling depth and post-sampling analysis time on the ...

    African Journals Online (AJOL)

    Bacteriological analysis was carried out on samples taken at depth and analysed at 1, 6, 12 and 24 hours post-sampling. It was observed that the total and faecal coliform bacteria were significantly higher in the 3 m water depth samples than in the surface water samples (ANOVA, F = 59.41, 26.751, 9.82 (T.C); 46.41, 26.81, ...

  18. Simple DNA extraction of urine samples: Effects of storage temperature and storage time.

    Science.gov (United States)

    Ng, Huey Hian; Ang, Hwee Chen; Hoe, See Ying; Lim, Mae-Lynn; Tai, Hua Eng; Soh, Richard Choon Hock; Syn, Christopher Kiu-Choong

    2018-06-01

    Urine samples are commonly analysed in cases with suspected illicit drug consumption. In events of alleged sample mishandling, urine sample source identification may be necessary. A simple DNA extraction procedure suitable for STR typing of urine samples was established on the Promega Maxwell® 16 paramagnetic silica bead platform. A small sample volume of 1.7 mL was used. Samples were stored at room temperature, 4 °C and -20 °C for 100 days to investigate the influence of storage temperature and time on extracted DNA quantity and the success rate of STR typing. Samples stored at room temperature exhibited a faster decline in DNA yield with time and lower typing success rates than those stored at 4 °C and -20 °C. This trend can likely be attributed to DNA degradation. In conclusion, this study presents a quick and effective DNA extraction protocol for a small urine volume stored for up to 100 days at 4 °C and -20 °C. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. Can an inadequate cervical cytology sample in ThinPrep be converted to a satisfactory sample by processing it with a SurePath preparation?

    Science.gov (United States)

    Sørbye, Sveinung Wergeland; Pedersen, Mette Kristin; Ekeberg, Bente; Williams, Merete E Johansen; Sauer, Torill; Chen, Ying

    2017-01-01

    The Norwegian Cervical Cancer Screening Program recommends screening every 3 years for women between 25 and 69 years of age. There is a large difference in the percentage of unsatisfactory samples between laboratories that use different brands of liquid-based cytology. We wished to examine whether inadequate ThinPrep samples could be made satisfactory by processing them with the SurePath protocol. A total of 187 inadequate ThinPrep specimens from the Department of Clinical Pathology at University Hospital of North Norway were sent to Akershus University Hospital for conversion to SurePath medium. Ninety-one (48.7%) were processed through the automated "gynecologic" application for cervix cytology samples, and 96 (51.3%) were processed with the "nongynecological" automatic program. Of the 187 samples that had been unsatisfactory by ThinPrep, 93 (49.7%) were satisfactory after being converted to SurePath. The rate of satisfactory cytology was 36.6% and 62.5% for samples run through the "gynecology" program and "nongynecology" program, respectively. Of the 93 samples that became satisfactory after conversion from ThinPrep to SurePath, 80 (86.0%) were screened as normal while 13 (14.0%) were given an abnormal diagnosis, which included 5 atypical squamous cells of undetermined significance, 5 low-grade squamous intraepithelial lesions, 2 atypical glandular cells not otherwise specified, and 1 atypical squamous cells cannot exclude high-grade squamous intraepithelial lesion. A total of 2.1% (4/187) of the women received a diagnosis of cervical intraepithelial neoplasia 2 or higher at later follow-up. Converting cytology samples from ThinPrep to SurePath processing can reduce the number of unsatisfactory samples. The samples should be run through the "nongynecology" program to ensure an adequate number of cells.

  20. Effects of Long-Term Storage Time and Original Sampling Month on Biobank Plasma Protein Concentrations

    Directory of Open Access Journals (Sweden)

    Stefan Enroth

    2016-10-01

    The quality of clinical biobank samples is crucial to their value for life sciences research. A number of factors related to the collection and storage of samples may affect their biomolecular composition. We have studied the effects of long-term freezer storage, chronological age at sampling, and season and month of the year on the abundance levels of 108 proteins in 380 plasma samples collected from 106 Swedish women. Storage time affected 18 proteins and explained 4.8-34.9% of the observed variance. Chronological age at sample collection, after adjustment for storage time, affected 70 proteins and explained 1.1-33.5% of the variance. Seasonal variation had an effect on 15 proteins, and month (number of sun hours) affected 36 proteins and explained up to 4.5% of the variance after adjustment for storage time and age. The results show that freezer storage time and collection date (month and season) exerted effect sizes similar to that of age on the protein abundance levels. This implies that information on the sample handling history, in particular storage time, should be regarded as an equally prominent covariate as age or gender and needs to be included in epidemiological studies involving protein levels.

  1. Scheduling sampling to maximize information about time dependence in experiments with limited resources

    DEFF Research Database (Denmark)

    Græsbøll, Kaare; Christiansen, Lasse Engbo

    2013-01-01

    Looking for periodicity in sampled data requires that periods (lags) of different lengths are represented in the sampling plan. We present a method to assist in the planning of temporal studies with sparse resources, which optimizes the number of observed time lags for a fixed amount of samples w...

  2. Order–disorder–reorder process in thermally treated dolomite samples

    DEFF Research Database (Denmark)

    Zucchini, Azzurra; Comodi, Paola; Katerinopoulou, Anna

    2012-01-01

    A combined powder and single-crystal X-ray diffraction analysis of dolomite [CaMg(CO3)2] heated to 1,200 °C at 3 GPa was made to study the order-disorder-reorder process. The order/disorder transition is inferred to start below 1,100 °C, and complete disorder is attained at approximately 1,200 °C. Twinned crystals characterized by high internal order were found in samples annealed over 1,100 °C, and their fraction was found to increase with temperature. Evidence of twinning domains combined with probable remaining disordered portions of the structure implies that reordering processes occur during...

  3. Marine sediment sample pre-processing for macroinvertebrates metabarcoding: mechanical enrichment and homogenization

    Directory of Open Access Journals (Sweden)

    Eva Aylagas

    2016-10-01

    Metabarcoding is an accurate and cost-effective technique that allows for simultaneous taxonomic identification of multiple environmental samples. Application of this technique to marine benthic macroinvertebrate biodiversity assessment for biomonitoring purposes requires standardization of laboratory and data analysis procedures. In this context, protocols for the creation and sequencing of amplicon libraries and their related bioinformatics analysis have recently been published. However, a standardized protocol describing all the preceding steps (i.e., processing and manipulation of environmental samples for macroinvertebrate community characterization) has been lacking. Here, we provide detailed procedures for benthic environmental sample collection, processing, enrichment for macroinvertebrates, homogenization, and subsequent DNA extraction for metabarcoding analysis. Since this is the first protocol of its kind, it should be of use to any researcher in this field, and it has the potential for improvement.

  4. In-line near real time monitoring of fluid streams in separation processes for used nuclear fuel - 5146

    International Nuclear Information System (INIS)

    Nee, K.; Nilsson, M.

    2015-01-01

    Applying spectroscopic tools to chemical processes has been intensively studied in various industries owing to their capacity for rapid, non-destructive detection of chemical components and determination of physical characteristics in a process stream. The general complexity of separation processes for used nuclear fuel (e.g., chemical speciation and temperature variations) and prominent process security and safety concerns require a well-secured and robust monitoring system that provides precise information on the process streams in real time and without interference. Multivariate analysis accompanied by spectral measurements is a powerful statistical technique that can be used to monitor this complex chemical system. In this work, chemometric models that respond to the chemical components in the fluid samples were calibrated and validated to establish an in-line near real time monitoring system. The models show good prediction accuracy using partial least squares regression analysis on the spectral data obtained from UV/Vis/NIR spectroscopy. The models were tested on a solvent extraction process using a single-stage centrifugal contactor in our laboratory to determine the performance of an in-line near real time monitoring system. (authors)
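
    The chemometric core here, partial least squares regression from spectra to component concentrations, can be sketched in a few lines. This is a generic illustration on synthetic spectra, not the authors' calibration models:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(42)

        # synthetic "spectra": 200 samples x 500 wavelength channels, with
        # absorbance driven by two latent component concentrations
        conc = rng.uniform(0.0, 1.0, size=(200, 2))
        bands = np.stack([np.exp(-0.5 * ((np.arange(500) - c) / 30.0) ** 2)
                          for c in (150, 350)])
        spectra = conc @ bands + 0.01 * rng.standard_normal((200, 500))

        X_tr, X_te, y_tr, y_te = train_test_split(spectra, conc, random_state=0)
        pls = PLSRegression(n_components=2).fit(X_tr, y_tr)
        print(f"held-out R^2 = {pls.score(X_te, y_te):.3f}")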

  5. Design and development of a highly sensitive, field portable plasma source instrument for on-line liquid stream monitoring and real-time sample analysis

    International Nuclear Information System (INIS)

    Duan, Yixiang; Su, Yongxuan; Jin, Zhe; Abeln, Stephen P.

    2000-01-01

    The development of a highly sensitive, field-portable, low-powered instrument for on-site, real-time liquid waste stream monitoring is described in this article. A series of factors, such as system sensitivity and portability, plasma source, sample introduction, desolvation system, power supply, and instrument configuration, were carefully considered in the design of the portable instrument. A newly designed, miniature, modified microwave plasma source was selected as the emission source for spectroscopic measurement, and an integrated small spectrometer with a charge-coupled device detector was installed for signal processing and detection. An innovative beam collection system with optical fibers was designed and used for emission signal collection. The microwave plasma can be sustained with various gases at relatively low power, and it possesses high detection capabilities for both metal and nonmetal pollutants, making it desirable for on-site, real-time monitoring of liquid waste streams. An effective in situ sampling system was coupled with a high-efficiency desolvation device for directly sampling liquid samples into the plasma. A portable computer control system is used for data processing. The new, integrated instrument can easily be used for on-site, real-time monitoring in the field. The system possesses a series of advantages, including high sensitivity for metal and nonmetal elements, in situ sampling, compact structure, low cost, and ease of operation and handling. These advantages significantly overcome the limitations of previous monitoring techniques and contribute to environmental restoration and monitoring.

  6. Simulation of Simple Controlled Processes with Dead-Time.

    Science.gov (United States)

    Watson, Keith R.; And Others

    1985-01-01

    The determination of the closed-loop response of processes containing dead-time is typically not covered in undergraduate process control, possibly because the solution by Laplace transforms requires the use of a Padé approximation for the dead-time, which makes the procedure lengthy and tedious. A computer-aided method is described which simplifies the…
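
    The computer-aided route is easy to reproduce: replace the dead-time term e^(-θs) with a rational Padé approximant and simulate numerically. A minimal scipy sketch for a first-order process under proportional-only unity feedback (all parameter values are illustrative):

        import numpy as np
        from scipy import signal

        theta, tau, Kc = 1.0, 5.0, 2.0   # dead time, time constant, gain (illustrative)

        # First-order Pade approximation: e^(-theta*s) ~ (1 - theta*s/2) / (1 + theta*s/2)
        num_dead = [-theta / 2, 1.0]
        den_dead = [theta / 2, 1.0]

        # Open loop: Kc * Pade(s) / (tau*s + 1)
        num_ol = Kc * np.array(num_dead)
        den_ol = np.polymul(den_dead, [tau, 1.0])

        # Unity-feedback closed loop: G_cl = num_ol / (den_ol + num_ol)
        den_cl = np.polyadd(den_ol, num_ol)
        t, y = signal.step(signal.TransferFunction(num_ol, den_cl))
        # settles near Kc / (1 + Kc), the expected offset for P-only control
        print(f"closed-loop step settles near {y[-1]:.3f}")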

  7. A sampling approach to constructing Lyapunov functions for nonlinear continuous–time systems

    NARCIS (Netherlands)

    Bobiti, R.V.; Lazar, M.

    2016-01-01

    The problem of constructing a Lyapunov function for continuous-time nonlinear dynamical systems is tackled in this paper via a sampling-based approach. The main idea of the sampling-based method is to verify a Lyapunov-type inequality for a finite number of points (known state vectors) in the
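
    The sampling-based idea can be illustrated directly: draw points from the state region of interest and check a Lyapunov-type decrease condition, here V̇(x) = ∇V(x)·f(x) < 0, at each sample. A toy sketch (the example system, the candidate V, and the omission of the inter-sample certification step are our simplifications of the paper's method):

        import numpy as np

        def f(x):
            """Example nonlinear dynamics with a stabilizing cubic term."""
            x1, x2 = x
            return np.array([-x1 + x2, -x1 - x2 - x2 ** 3])

        def V(x):                      # candidate Lyapunov function: V(x) = |x|^2
            return float(x @ x)

        def V_dot(x):                  # derivative of V along f: 2 x . f(x)
            return float(2 * x @ f(x))

        rng = np.random.default_rng(3)
        pts = rng.uniform(-2.0, 2.0, size=(5000, 2))
        pts = pts[np.linalg.norm(pts, axis=1) > 1e-2]   # exclude the origin
        print("V_dot < 0 at all samples:", all(V_dot(p) < 0 for p in pts))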

  8. Immunosuppressant therapeutic drug monitoring by LC-MS/MS: workflow optimization through automated processing of whole blood samples.

    Science.gov (United States)

    Marinova, Mariela; Artusi, Carlo; Brugnolo, Laura; Antonelli, Giorgia; Zaninotto, Martina; Plebani, Mario

    2013-11-01

    Although, due to its high specificity and sensitivity, LC-MS/MS is an efficient technique for the routine determination of immunosuppressants in whole blood, it involves time-consuming manual sample preparation. The aim of the present study was therefore to develop an automated sample-preparation protocol for the quantification of sirolimus, everolimus and tacrolimus by LC-MS/MS using a liquid handling platform. Six-level commercially available blood calibrators were used for assay development, while four quality control materials and three blood samples from patients under immunosuppressant treatment were employed for the evaluation of imprecision. Barcode reading, sample re-suspension, transfer of whole blood samples into 96-well plates, addition of internal standard solution, mixing, and protein precipitation were performed with a liquid handling platform. After plate filtration, the deproteinised supernatants were submitted for on-line SPE. The only manual steps in the entire process were de-capping of the tubes and transfer of the well plates to the HPLC autosampler. Calibration curves were linear throughout the selected ranges. The imprecision and accuracy data for all analytes were highly satisfactory. The agreement between the results obtained with manual and those obtained with automated sample preparation was optimal (n=390, r=0.96). In daily routine (100 patient samples) the typical overall turnaround time was less than 6 h. Our findings indicate that the proposed analytical system is suitable for routine analysis, since it is straightforward and precise. Furthermore, it incurs less manual workload and less risk of error in the quantification of whole blood immunosuppressant concentrations than conventional methods. © 2013.

  9. Crystallite size variation of TiO_2 samples depending time heat treatment

    International Nuclear Information System (INIS)

    Galante, A.G.M.; Paula, F.R. de; Montanhera, M.A.; Pereira, E.A.; Spada, E.R.

    2016-01-01

    Titanium dioxide (TiO_2) is an oxide semiconductor that may be found in mixed phase or in distinct phases: brookite, anatase and rutile. In this work, the influence of the residence time at a given temperature on the physical properties of TiO_2 powder was studied. After the powder synthesis, the samples were divided and heat treated at 650 °C with a ramp of up to 3 °C/min and a residence time ranging from 0 to 20 hours, and subsequently characterized by X-ray diffraction. Analysis of the obtained diffraction patterns showed that, from a 5-hour residence time onward, two distinct phases coexist: anatase and rutile. The average crystallite size of each sample was also calculated. The results showed an increase in average crystallite size with increasing residence time of the heat treatment. (author)
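
    Average crystallite sizes of this kind are commonly extracted from X-ray diffraction peak broadening via the Scherrer equation τ = Kλ/(β cos θ); the abstract does not state the method used, so the standard calculation below is our assumption, with illustrative peak values:

        import numpy as np

        def scherrer_size_nm(beta_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
            """Scherrer crystallite size: tau = K * lambda / (beta * cos(theta)),
            with beta the peak FWHM (in degrees here, converted to radians) and
            theta the Bragg angle. Defaults: Cu K-alpha wavelength, K = 0.9."""
            beta = np.deg2rad(beta_deg)
            theta = np.deg2rad(two_theta_deg / 2.0)
            return K * wavelength_nm / (beta * np.cos(theta))

        # illustrative anatase (101) reflection near 2-theta = 25.3 degrees
        print(f"{scherrer_size_nm(beta_deg=0.4, two_theta_deg=25.3):.1f} nm")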

  10. Real-time multiparameter pulse processing with decision tables

    International Nuclear Information System (INIS)

    Hull, K.; Griffin, H.

    1986-01-01

    Decision tables offer several advantages over other real-time multiparameter data processing techniques. These include very high collection rates, a minimum number of computer instructions, rates independent of the number of conditions applied per parameter, ease of adding or removing conditions during a session, and simplicity of implementation. Decision table processing is important in multiparameter nuclear spectroscopy and coincidence experiments, and multiparameter pulse processing (HgI2 resolution enhancement, pulse discrimination, timing spectroscopy) and other applications can be easily implemented. (orig.)

  11. Sampling rare fluctuations of discrete-time Markov chains

    Science.gov (United States)

    Whitelam, Stephen

    2018-03-01

    We describe a simple method that can be used to sample the rare fluctuations of discrete-time Markov chains. We focus on the case of Markov chains with well-defined steady-state measures, and derive expressions for the large-deviation rate functions (and upper bounds on such functions) for dynamical quantities extensive in the length of the Markov chain. We illustrate the method using a series of simple examples, and use it to study the fluctuations of a lattice-based model of active matter that can undergo motility-induced phase separation.
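
    For a finite-state chain, the large-deviation rate function of such an extensive dynamical quantity can be computed exactly from the largest eigenvalue of a tilted transition matrix, which is a useful benchmark for any sampling scheme. A small sketch for a two-state chain (the transition matrix and the jump-counting observable are illustrative):

        import numpy as np

        # two-state Markov chain and an additive observable g(x, y)
        P = np.array([[0.9, 0.1],
                      [0.2, 0.8]])
        g = np.array([[0.0, 1.0],    # count jumps 0 -> 1
                      [0.0, 0.0]])

        def scgf(s):
            """Scaled cumulant generating function: log of the largest
            eigenvalue of the tilted matrix P_xy * exp(s * g(x, y))."""
            tilted = P * np.exp(s * g)
            return np.log(np.max(np.real(np.linalg.eigvals(tilted))))

        # rate function via a numerical Legendre transform: I(a) = max_s (s*a - scgf(s))
        s_grid = np.linspace(-5.0, 5.0, 1001)
        lam = np.array([scgf(s) for s in s_grid])
        a = 0.05                                   # target jumps per step
        print(f"I({a}) = {np.max(s_grid * a - lam):.4f}")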

  12. Recent results of the investigation of a micro-fluidic sampling chip and sampling system for hot cell aqueous processing streams

    International Nuclear Information System (INIS)

    Tripp, J.; Smith, T.; Law, J.

    2013-01-01

    A Fuel Cycle Research and Development project has investigated an innovative sampling method that could evolve into the next-generation sampling and analysis system for metallic elements present in aqueous processing streams. Initially, sampling technologies were evaluated, and micro-fluidic sampling chip technology was selected and tested. A conceptual design for a fully automated microcapillary-based system was completed, and a robotic automated sampling system was fabricated. The mechanical and sampling operation of the completed sampling system was investigated. Different sampling volumes were tested. It appears that the 10 μl volume produced data with much smaller relative standard deviations than the 2 μl volume. In addition, the production of a less expensive, mass-produced sampling chip was investigated to avoid chip reuse, thus increasing sampling reproducibility and accuracy. The mechanical elements of the micro-fluidic-based robotic sampling system were tested to ensure analytical reproducibility and optimum robotic handling of the micro-fluidic sampling chips. (authors)

  13. Synchronization of a Class of Memristive Stochastic Bidirectional Associative Memory Neural Networks with Mixed Time-Varying Delays via Sampled-Data Control

    Directory of Open Access Journals (Sweden)

    Manman Yuan

    2018-01-01

    The paper addresses the issue of synchronization of memristive bidirectional associative memory neural networks (MBAMNNs) with mixed time-varying delays and stochastic perturbation via a sampled-data controller. First, we propose a new model of MBAMNNs with mixed time-varying delays. In the proposed approach, the mixed delays include time-varying distributed delays and discrete delays. Second, we design a new method of sampled-data control for the stochastic MBAMNNs. Traditional control methods lack the capability of reflecting variable synaptic weights. In this paper, the methods are carefully designed to ensure that the synchronization processes are suitable for the features of the memristor. Third, sufficient criteria guaranteeing the synchronization of the systems are derived based on the drive-response concept. Finally, the effectiveness of the proposed mechanism is validated with numerical experiments.

  14. Delayed matching-to-sample: A tool to assess memory and other cognitive processes in pigeons.

    Science.gov (United States)

    Zentall, Thomas R; Smith, Aaron P

    2016-02-01

    Delayed matching-to-sample is a versatile task that has been used to assess the nature of animal memory. Although memory was once thought to be a relatively passive process, matching research has demonstrated considerable flexibility in how animals actively represent events in memory. Delayed matching can also demonstrate how animals fail to maintain representations in memory when they are cued that they will not be tested (directed forgetting) and how the expected outcome can serve as a choice cue. When pigeons have shown divergent retention functions following training without a delay, it has been taken as evidence of the use of a single-code/default coding strategy, but in many cases an alternative account may be involved. Delayed matching has also been used to investigate equivalence learning (how animals represent stimuli when they learn that the same comparison response is correct following the presentation of two different samples) and to test for metamemory (the ability of pigeons to indicate that they understand what they know) by allowing animals to decline to be tested when they are uncertain that they remember a stimulus. How animals assess the passage of time has also been studied using the matching task. There is also evidence that, when memory for the sample is impaired by a delay, rather than using the probability of being correct for the choice of each comparison stimulus, pigeons tend to choose based on the overall sample frequency (base-rate neglect). Overall, matching to sample has provided an excellent methodology for assessing an assortment of cognitive processes in animals. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Discovering biological progression underlying microarray samples.

    Directory of Open Access Journals (Sweden)

    Peng Qiu

    2011-04-01

    In biological systems that undergo processes such as differentiation, a clear concept of progression exists. We present a novel computational approach, called Sample Progression Discovery (SPD), to discover patterns of biological progression underlying microarray gene expression data. SPD assumes that individual samples of a microarray dataset are related by an unknown biological process (i.e., differentiation, development, cell cycle, disease progression), and that each sample represents one unknown point along the progression of that process. SPD aims to organize the samples in a manner that reveals the underlying progression and to simultaneously identify subsets of genes that are responsible for that progression. We demonstrate the performance of SPD on a variety of microarray datasets that were generated by sampling a biological process at different points along its progression, without providing SPD any information about the underlying process. When applied to a cell cycle time series microarray dataset, SPD was not provided any prior knowledge of the samples' time order or of which genes are cell-cycle regulated, yet it recovered the correct time order and identified many genes that have been associated with the cell cycle. When applied to B-cell differentiation data, SPD recovered the correct order of the stages of normal B-cell differentiation and the linkage of preB-ALL tumor cells with their cells of origin, preB cells. When applied to mouse embryonic stem cell differentiation data, SPD uncovered a landscape of ESC differentiation into various lineages and genes that represent both generic and lineage-specific processes. When applied to a prostate cancer microarray dataset, SPD identified gene modules that reflect a progression consistent with disease stages. SPD may be best viewed as a novel tool for synthesizing biological hypotheses because it provides a likely biological progression underlying a microarray dataset and, perhaps more importantly, the

  16. The Maia Spectroscopy Detector System: Engineering for Integrated Pulse Capture, Low-Latency Scanning and Real-Time Processing

    International Nuclear Information System (INIS)

    Kirkham, R.; Siddons, D.; Dunn, P.A.; Kuczewski, A.J.; Dodanwela, R.; Moorhead, G.F.; Ryan, C.G.; De Geronimo, G.; Beuttenmuller, R.; Pinelli, D.; Pfeffer, M.; Davey, P.; Jensen, M.; de Jonge, M.D.; Howard, D.L.; Kusel, M.; McKinlay, J.

    2010-01-01

    The Maia detector system is engineered for energy-dispersive X-ray fluorescence spectroscopy and elemental imaging at photon rates exceeding 10^7/s, integrated scanning of samples with pixel transit times as small as 50 µs, high-definition images of 10^8 pixels, and real-time processing of detected events for spectral deconvolution and online display of pure elemental images. The system, developed by CSIRO and BNL, combines a planar silicon 384-detector array, application-specific integrated circuits for pulse shaping, peak detection and sampling, and optical data transmission to an FPGA-based pipelined, parallel processor. This paper describes the system and the underpinning engineering solutions.

  17. Development of time sensitivity and information processing speed.

    Directory of Open Access Journals (Sweden)

    Sylvie Droit-Volet

    The aim of this study was to examine whether age-related changes in the speed of information processing are the best predictors of the increase in sensitivity to time throughout childhood. Children aged 5 and 8 years old, as well as adults, were given two temporal bisection tasks, one with short (0.5/1 s) and the other with longer (4/8 s) anchor durations. In addition, the participants' scores on different neuropsychological tests assessing both information processing speed and other dimensions of cognitive control (short-term memory, working memory, selective attention) were calculated. The results showed that the best predictor of individual variance in sensitivity to time was information processing speed, although working memory also accounted for some of the individual differences in time sensitivity, albeit to a lesser extent. In sum, the faster the information processing speed of the participants, the higher their sensitivity to time. These results are discussed in light of the idea that the development of temporal capacities has its roots in the maturation of the dynamic functioning of the brain.

  18. Time optimization of 90Sr measurements: Sequential measurement of multiple samples during ingrowth of 90Y

    International Nuclear Information System (INIS)

    Holmgren, Stina; Tovedal, Annika; Björnham, Oscar; Ramebäck, Henrik

    2016-01-01

    The aim of this paper is to contribute to a more rapid determination of a series of samples containing 90Sr by making the Cherenkov measurement of the daughter nuclide 90Y more time efficient. There are many instances in which an optimization of the measurement method is favorable, such as situations requiring rapid results in order to make urgent decisions or, on the other hand, maximizing the throughput of samples in a limited available time span. In order to minimize the total analysis time, a mathematical model was developed which calculates the time of ingrowth as well as individual measurement times for n samples in a series. This work is focused on the measurement of 90Y during ingrowth, after an initial chemical separation of strontium, in which it is assumed that no other radioactive strontium isotopes are present. By using a fixed minimum detectable activity (MDA) and iterating the measurement time for each consecutive sample, the total analysis time will be less than when using the same measurement time for all samples. It was found that, by optimization, the total analysis time for 10 samples can be decreased greatly, from 21 h to 6.5 h, assuming an MDA of 1 Bq/L and a background count rate of approximately 0.8 cpm. Highlights: • An approach roughly a factor of three more efficient than an un-optimized method. • The optimization gives a more efficient use of instrument time. • The efficiency increase ranges from a factor of three to 10, for 10 to 40 samples.
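
    The quantity being traded off is the ingrowth of 90Y after the strontium separation, which follows 1 - exp(-λt) toward equilibrium with the 90Sr activity. A short sketch of that ingrowth arithmetic (the 64.1 h half-life of 90Y is a standard nuclear-data value; the target fraction is illustrative):

        import numpy as np

        half_life_y90_h = 64.1                    # 90Y half-life in hours (nuclear data)
        lam = np.log(2) / half_life_y90_h

        def ingrowth_fraction(t_hours):
            """Fraction of equilibrium 90Y activity present t hours after separation."""
            return 1.0 - np.exp(-lam * t_hours)

        for t in (6, 24, 48, 96):
            print(f"{t:3d} h: {100 * ingrowth_fraction(t):5.1f} % of equilibrium 90Y")

        # time needed to reach, e.g., 90% ingrowth
        target = 0.90
        print(f"90% ingrowth after {-np.log(1 - target) / lam:.1f} h")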

  19. Histogram bin width selection for time-dependent Poisson processes

    International Nuclear Information System (INIS)

    Koyama, Shinsuke; Shinomoto, Shigeru

    2004-01-01

    In constructing a time histogram of the event sequences derived from a nonstationary point process, we wish to determine the bin width such that the mean squared error of the histogram from the underlying rate of occurrence is minimized. We find that the optimal bin widths obtained for a doubly stochastic Poisson process and a sinusoidally regulated Poisson process exhibit different scaling relations with respect to the number of sequences, time scale and amplitude of rate modulation, but both diverge under similar parametric conditions. This implies that under these conditions, no determination of the time-dependent rate can be made. We also apply the kernel method to these point processes, and find that the optimal kernels do not exhibit any critical phenomena, unlike the time histogram method

  20. Histogram bin width selection for time-dependent Poisson processes

    Energy Technology Data Exchange (ETDEWEB)

    Koyama, Shinsuke; Shinomoto, Shigeru [Department of Physics, Graduate School of Science, Kyoto University, Sakyo-ku, Kyoto 606-8502 (Japan)

    2004-07-23

    In constructing a time histogram of the event sequences derived from a nonstationary point process, we wish to determine the bin width such that the mean squared error of the histogram from the underlying rate of occurrence is minimized. We find that the optimal bin widths obtained for a doubly stochastic Poisson process and a sinusoidally regulated Poisson process exhibit different scaling relations with respect to the number of sequences, time scale and amplitude of rate modulation, but both diverge under similar parametric conditions. This implies that under these conditions, no determination of the time-dependent rate can be made. We also apply the kernel method to these point processes, and find that the optimal kernels do not exhibit any critical phenomena, unlike the time histogram method.
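
    A practical recipe closely related to this mean-squared-error analysis is the Shimazaki-Shinomoto rule: for each candidate bin width Δ, compute the mean and variance of the bin counts and minimize C(Δ) = (2·mean - variance)/Δ². A sketch on synthetic data from a sinusoidally regulated Poisson process (the attribution to the companion Shimazaki-Shinomoto method, rather than to this paper, is ours):

        import numpy as np

        rng = np.random.default_rng(7)

        # synthetic event times from a sinusoidally modulated Poisson process (thinning)
        t_max, rate_max = 100.0, 10.0
        cand = rng.uniform(0, t_max, rng.poisson(rate_max * t_max))
        rate = 5.0 + 4.0 * np.sin(2 * np.pi * cand / 20.0)
        events = np.sort(cand[rng.uniform(0, rate_max, cand.size) < rate])

        def cost(width):
            """Shimazaki-Shinomoto cost: (2*mean - var) / width^2, to be minimized."""
            edges = np.arange(0, t_max + width, width)
            counts, _ = np.histogram(events, bins=edges)
            return (2 * counts.mean() - counts.var()) / width ** 2

        widths = np.linspace(0.5, 10.0, 40)
        best = widths[np.argmin([cost(w) for w in widths])]
        print(f"optimal bin width ~ {best:.2f} time units")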

  1. Impact of sample processing on the measurement of circulating microparticles: storage and centrifugation parameters.

    Science.gov (United States)

    Vila-Liante, Virtudes; Sánchez-López, Verónica; Martínez-Sales, Vicenta; Ramón-Nuñez, Luis A; Arellano-Orden, Elena; Cano-Ruiz, Alejandra; Rodríguez-Martorell, Francisco J; Gao, Lin; Otero-Candelera, Remedios

    2016-11-01

    Microparticles (MPs) have been shown to be markers of cellular activation and interactions. Pre-analytical conditions such as the centrifugation protocol and sample storage conditions represent an important source of variability in determining MP values. The objectives of this study were to evaluate the influence of sample storage conditions and of centrifugation speed and temperature on the determination of MPs in plasma. Citrate-anticoagulated blood samples obtained from 21 healthy subjects were centrifuged under four different protocols involving different speeds (2500 g or 1500 g) and temperatures (4 °C or 20 °C) to isolate platelet-poor plasma (PPP). The number of MPs in fresh and frozen-thawed PPP was analyzed by flow cytometry, and MP-mediated procoagulant activity was determined by a thrombin generation test and phospholipid-dependent procoagulant tests. The number of MPs and their procoagulant activity were affected by freeze-thaw cycling and centrifugation speed but not by centrifugation temperature. Sample freezing increased MP number (six-fold) and thrombin generation (four-fold), and decreased clotting time (two-fold). Low centrifugation speed caused an increase in MP number and a parallel increase in MP-mediated procoagulant activity. Sample storage conditions and centrifugation speed are important processing conditions affecting MP number and activity. Before any study, the protocol for MP isolation should be optimized to ensure a reliable characterization of MPs, which could provide important information for diagnostic purposes and for understanding the pathogenesis of diseases.

  2. Collection and processing of plant, animal and soil samples from Bikini, Enewetak and Rongelap Atolls

    Energy Technology Data Exchange (ETDEWEB)

    Stuart, M.L.

    1995-09-01

    The United States used the Marshall Islands as the testing site for its nuclear weapons program from 1946 to 1958. The BRAVO test was detonated at Bikini Atoll on March 1, 1954. Due to shifting wind conditions at the time of the nuclear detonation, many of the surrounding atolls became contaminated with fallout (radionuclides carried by the wind currents). Lawrence Livermore National Laboratory's (LLNL) Marshall Islands Project has been responsible for collecting, processing, and analyzing food crops, vegetation, soil, water, animals, and marine species to characterize the radionuclides in the environment and to estimate dose at atolls that may have been contaminated. Tropical agriculture experiments on reducing the uptake of 137Cs have been conducted on Bikini Atoll. The Marshall Islands field team and laboratory processing team play an important role in the overall scheme of the Marshall Islands Dose Assessment and Radioecology Project. This report gives a general description of the Marshall Islands field sampling and laboratory processing procedures currently used by our staff.

  3. Collection and processing of plant, animal and soil samples from Bikini, Enewetak and Rongelap Atolls

    International Nuclear Information System (INIS)

    Stuart, M.L.

    1995-09-01

    The United States used the Marshall Islands as the testing site for its nuclear weapons program from 1946 to 1958. The BRAVO test was detonated at Bikini Atoll on March 1, 1954. Due to shifting wind conditions at the time of the nuclear detonation, many of the surrounding atolls became contaminated with fallout (radionuclides carried by the wind currents). Lawrence Livermore National Laboratory's (LLNL) Marshall Islands Project has been responsible for collecting, processing, and analyzing food crops, vegetation, soil, water, animals, and marine species to characterize the radionuclides in the environment and to estimate dose at atolls that may have been contaminated. Tropical agriculture experiments on reducing the uptake of 137Cs have been conducted on Bikini Atoll. The Marshall Islands field team and laboratory processing team play an important role in the overall scheme of the Marshall Islands Dose Assessment and Radioecology Project. This report gives a general description of the Marshall Islands field sampling and laboratory processing procedures currently used by our staff.

  4. A real-time dashboard for managing pathology processes

    Directory of Open Access Journals (Sweden)

    Fawaz Halwani

    2016-01-01

    Context: The Eastern Ontario Regional Laboratory Association (EORLA) is a newly established association of all the laboratory and pathology departments of Eastern Ontario that currently includes facilities from eight hospitals. All surgical specimens for EORLA are processed in one central location, the Department of Pathology and Laboratory Medicine (DPLM) at The Ottawa Hospital (TOH), where the rapid growth and influx of surgical and cytology specimens has created many challenges in ensuring the timely processing of cases and reports. Although the entire process is maintained and tracked in a clinical information system, this system lacks pre-emptive warnings that can help management address issues as they arise. Aims: Dashboard technology provides automated, real-time visual cues that could be used to alert management when a case or specimen is not being processed within predefined time frames. We describe the development of a dashboard helping pathology clinical management to make informed decisions on specimen allocation and tracking. Methods: The dashboard was designed and developed in two phases, following a prototyping approach. The first prototype of the dashboard helped monitor and manage pathology processes at the DPLM. Results: The use of this dashboard helped to uncover operational inefficiencies and contributed to an improvement of turn-around time within The Ottawa Hospital's DPLM. It also allowed the discovery of additional requirements, leading to a second prototype that provides finer-grained, real-time information about individual cases and specimens. Conclusion: We successfully developed a dashboard that enables managers to address delays and bottlenecks in specimen allocation and tracking. This support ensures that pathology reports are provided within time frame standards required for high-quality patient care. Given the importance of rapid diagnostics for a number of diseases, the use of real-time dashboards within pathology departments could contribute to improving the quality of patient care.

  5. A real-time dashboard for managing pathology processes.

    Science.gov (United States)

    Halwani, Fawaz; Li, Wei Chen; Banerjee, Diponkar; Lessard, Lysanne; Amyot, Daniel; Michalowski, Wojtek; Giffen, Randy

    2016-01-01

    The Eastern Ontario Regional Laboratory Association (EORLA) is a newly established association of all the laboratory and pathology departments of Eastern Ontario that currently includes facilities from eight hospitals. All surgical specimens for EORLA are processed in one central location, the Department of Pathology and Laboratory Medicine (DPLM) at The Ottawa Hospital (TOH), where the rapid growth and influx of surgical and cytology specimens has created many challenges in ensuring the timely processing of cases and reports. Although the entire process is maintained and tracked in a clinical information system, this system lacks pre-emptive warnings that can help management address issues as they arise. Dashboard technology provides automated, real-time visual cues that could be used to alert management when a case or specimen is not being processed within predefined time frames. We describe the development of a dashboard helping pathology clinical management to make informed decisions on specimen allocation and tracking. The dashboard was designed and developed in two phases, following a prototyping approach. The first prototype of the dashboard helped monitor and manage pathology processes at the DPLM. The use of this dashboard helped to uncover operational inefficiencies and contributed to an improvement of turn-around time within The Ottawa Hospital's DPLM. It also allowed the discovery of additional requirements, leading to a second prototype that provides finer-grained, real-time information about individual cases and specimens. We successfully developed a dashboard that enables managers to address delays and bottlenecks in specimen allocation and tracking. This support ensures that pathology reports are provided within time frame standards required for high-quality patient care. Given the importance of rapid diagnostics for a number of diseases, the use of real-time dashboards within pathology departments could contribute to improving the quality of patient care.

  6. A Real-Time PCR Detection of Genus Salmonella in Meat and Milk Samples

    Directory of Open Access Journals (Sweden)

    Jaroslav Pochop

    2013-05-01

    The aim of this study was to follow the contamination of ready-to-eat milk and meat products with Salmonella spp. by using the Step One real-time PCR. Classical microbiological methods for the detection of food-borne bacteria involve the use of pre-enrichment and/or specific enrichment, followed by the isolation of the bacteria on solid media and a final confirmation by biochemical and/or serological tests. We used the PrepSEQ Rapid Spin Sample Preparation Kit for isolation of DNA and the SensiFAST SYBR Hi-ROX Kit for the real-time PCR performance. In the investigated samples without incubation we could detect strains of Salmonella sp. in five out of twenty-three samples (swabs). This Step One real-time PCR assay is extremely useful for any laboratory in possession of a real-time PCR instrument. It is a fast, reproducible, simple, specific and sensitive way to detect nucleic acids, which could be used in clinical diagnostic tests in the future. Our results indicated that the Step One real-time PCR assay developed in this study could sensitively detect Salmonella spp. in ready-to-eat food.

  7. Sampling Methodologies for Epidemiologic Surveillance of Men Who Have Sex with Men and Transgender Women in Latin America: An Empiric Comparison of Convenience Sampling, Time Space Sampling, and Respondent Driven Sampling

    OpenAIRE

    Clark, J. L.; Konda, K. A.; Silva-Santisteban, A.; Peinado, J.; Lama, J. R.; Kusunoki, L.; Perez-Brumer, A.; Pun, M.; Cabello, R.; Sebastian, J. L.; Suarez-Ognio, L.; Sanchez, J.

    2014-01-01

    Alternatives to convenience sampling (CS) are needed for HIV/STI surveillance of most-at-risk populations in Latin America. We compared CS, time space sampling (TSS), and respondent driven sampling (RDS) for recruitment of men who have sex with men (MSM) and transgender women (TW) in Lima, Peru. During concurrent 60-day periods from June-August, 2011, we recruited MSM/TW for epidemiologic surveillance using CS, TSS, and RDS. A total of 748 participants were recruited through CS, 233 through T...

  8. Optoelectronic time-domain characterization of a 100 GHz sampling oscilloscope

    International Nuclear Information System (INIS)

    Füser, H; Baaske, K; Kuhlmann, K; Judaschke, R; Pierz, K; Bieler, M; Eichstädt, S; Elster, C

    2012-01-01

    We have carried out an optoelectronic measurement of the impulse response of an ultrafast sampling oscilloscope with a nominal bandwidth of 100 GHz within a time window of approximately 100 ps. Our experimental technique also considers frequency components above the cut-off frequency of higher order modes of the 1.0 mm coaxial line, which is shown to be important for the specification of the impulse response of ultrafast sampling oscilloscopes. Additionally, we have measured the reflection coefficient of the sampling head induced by the mismatch of the sampling circuit and the coaxial connector, which is larger than 0.5 for certain frequencies. The uncertainty analysis has been performed using the Monte Carlo method of Supplement 1 to the 'Guide to the Expression of Uncertainty in Measurement', and correlations in the estimated impulse response have been determined. Our measurements extend previous work which deals with the characterization of 70 GHz oscilloscopes and the measurement of 100 GHz oscilloscopes up to the cut-off frequency of higher order modes.

  9. Cross-sample entropy of foreign exchange time series

    Science.gov (United States)

    Liu, Li-Zhi; Qian, Xi-Yuan; Lu, Heng-Yao

    2010-11-01

    The correlation of foreign exchange rates in currency markets is investigated based on empirical data for DKK/USD, NOK/USD, CAD/USD, JPY/USD, KRW/USD, SGD/USD, THB/USD and TWD/USD over the period from 1995 to 2002. The cross-SampEn (cross-sample entropy) method is used to compare the returns of every two exchange rate time series to assess their degree of asynchrony. The calculation method for the confidence interval of SampEn is extended and applied to cross-SampEn. The cross-SampEn and its confidence interval for every two of the exchange rate time series in the periods 1995-1998 (before the Asian currency crisis) and 1999-2002 (after the Asian currency crisis) are calculated. The results show that the cross-SampEn of every two of these exchange rates becomes higher after the Asian currency crisis, indicating a higher asynchrony between the exchange rates. Especially for Singapore, Thailand and Taiwan, the cross-SampEn values after the Asian currency crisis are significantly higher than those before it. Comparison with the correlation coefficient shows that cross-SampEn is superior for describing the correlation between time series.
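
    As a quick illustration of the statistic itself (the standard definition, independent of this article's data), cross-SampEn counts template matches of length m and m+1 across the two series and returns the negative logarithm of their ratio. A minimal sketch:

        import numpy as np

        def cross_sampen(u, v, m=2, r=0.2):
            """Cross-sample entropy of two equal-length series.
            Chebyshev distance; tolerance r is a fraction of the pooled SD.
            Lower values indicate more synchrony between the series."""
            u, v = np.asarray(u, float), np.asarray(v, float)
            tol = r * np.std(np.concatenate([u, v]))
            N = len(u)

            def matches(length):
                xu = np.array([u[i:i + length] for i in range(N - m)])
                xv = np.array([v[i:i + length] for i in range(N - m)])
                d = np.max(np.abs(xu[:, None, :] - xv[None, :, :]), axis=2)
                return np.sum(d <= tol)

            B, A = matches(m), matches(m + 1)
            return -np.log(A / B)        # assumes at least one (m+1)-match

        # Two correlated return series: the entropy drops as coupling rises
        rng = np.random.default_rng(1)
        x = rng.standard_normal(500)
        y = 0.7 * x + 0.3 * rng.standard_normal(500)
        print(cross_sampen(x, y))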

  10. Time-series panel analysis (TSPA): multivariate modeling of temporal associations in psychotherapy process.

    Science.gov (United States)

    Ramseyer, Fabian; Kupper, Zeno; Caspar, Franz; Znoj, Hansjörg; Tschacher, Wolfgang

    2014-10-01

    Processes occurring in the course of psychotherapy are characterized by the simple fact that they unfold in time and that the multiple factors engaged in change processes vary highly between individuals (idiographic phenomena). Previous research, however, has neglected the temporal perspective by its traditional focus on static phenomena, which were mainly assessed at the group level (nomothetic phenomena). To support a temporal approach, the authors introduce time-series panel analysis (TSPA), a statistical methodology explicitly focusing on the quantification of temporal, session-to-session aspects of change in psychotherapy. TSPA models are initially built at the level of individuals and are subsequently aggregated at the group level, thus allowing the exploration of prototypical models. TSPA is based on vector auto-regression (VAR), an extension of univariate auto-regression models to multivariate time-series data. The application of TSPA is demonstrated in a sample of 87 outpatient psychotherapy patients who were monitored by postsession questionnaires. Prototypical mechanisms of change were derived from the aggregation of individual multivariate models of psychotherapy process. In a second step, the associations between mechanisms of change (TSPA) and pre- to postsymptom change were explored. TSPA allowed a prototypical process pattern to be identified, where the patient's alliance and self-efficacy were linked by a temporal feedback loop. Furthermore, the therapist's stability over time in both mastery and clarification interventions was positively associated with better outcomes. TSPA is a statistical tool that sheds new light on temporal mechanisms of change. Through this approach, clinicians may gain insight into prototypical patterns of change in psychotherapy.
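
    The VAR core of TSPA is easy to sketch. Below is a minimal least-squares fit of a VAR(1) model, with variable names and toy data of our own choosing (the paper's actual models, estimation details and diagnostics are richer):

        import numpy as np

        def fit_var1(X):
            """Least-squares fit of a VAR(1) model X[t] = c + A @ X[t-1] + e[t].
            X has shape (T, k): T sessions, k process variables."""
            Y = X[1:]
            Z = np.hstack([np.ones((len(X) - 1, 1)), X[:-1]])
            B, *_ = np.linalg.lstsq(Z, Y, rcond=None)
            return B[0], B[1:].T        # intercept c, lag-1 matrix A

        # Two variables (e.g., alliance and self-efficacy) coupled by a
        # cross-lagged feedback loop, as in the prototypical TSPA pattern
        rng = np.random.default_rng(2)
        X = np.zeros((40, 2))
        for t in range(1, 40):
            X[t] = 0.4 * X[t - 1] + 0.3 * X[t - 1][::-1] \
                   + 0.1 * rng.standard_normal(2)
        c, A = fit_var1(X)
        print("lag-1 coefficient matrix:\n", A)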

  11. Wuchereria bancrofti in Tanzania: microfilarial periodicity and effect of blood sampling time on microfilarial intensities

    DEFF Research Database (Denmark)

    Simonsen, Poul Erik; Niemann, L.; Meyrowitsch, Dan Wolf

    1997-01-01

    The circadian periodicity of Wuchereria bancrofti microfilarial (mf) intensities in peripheral blood was analysed in a group of infected individuals from an endemic community in north-eastern Tanzania. The mf density was quantified at two-hourly intervals for 24 hours. A clear nocturnal periodic...... of blood sampling before peak time is discussed, and the importance of taking sampling time into consideration when analysing data from epidemiological studies is emphasized. A simple method is devised which can be used to adjust for the influence of time on mf intensities, in studies where accurate...... information on mf intensities is necessary, and where it is impossible to obtain all samples at peak time....

  12. Reducing Design Cycle Time and Cost Through Process Resequencing

    Science.gov (United States)

    Rogers, James L.

    2004-01-01

    In today's competitive environment, companies are under enormous pressure to reduce the time and cost of their design cycle. One method for reducing both time and cost is to develop an understanding of the flow of the design processes and the effects of the iterative subcycles that are found in complex design projects. Once these aspects are understood, the design manager can make decisions that take advantage of decomposition, concurrent engineering, and parallel processing techniques to reduce the total time and the total cost of the design cycle. One software tool that can aid in this decision-making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). The DeMAID software minimizes the feedback couplings that create iterative subcycles, groups processes into iterative subcycles, and decomposes the subcycles into a hierarchical structure. The real benefits of producing the best design in the least time and at a minimum cost are obtained from sequencing the processes in the subcycles.
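
    The record does not reproduce DeMAID's algorithm; as a toy stand-in for the sequencing idea, the sketch below greedily reorders a design structure matrix (DSM) so that tasks with the fewest inputs from still-unscheduled tasks run first, pushing feedback couplings below the diagonal:

        def reorder_dsm(dsm):
            """Greedy DSM sequencing: repeatedly schedule the task with the
            fewest inputs from unscheduled tasks, minimizing feedback
            (a toy stand-in for DeMAID's decomposition, not its algorithm)."""
            remaining, order = set(range(len(dsm))), []
            while remaining:
                best = min(remaining,
                           key=lambda j: sum(dsm[j][i] for i in remaining if i != j))
                order.append(best)
                remaining.remove(best)
            return order

        # dsm[j][i] == 1 means task j needs input from task i
        dsm = [[0, 1, 0, 0],
               [0, 0, 0, 1],
               [1, 0, 0, 0],
               [0, 0, 0, 0]]
        print(reorder_dsm(dsm))   # [3, 1, 0, 2]: no feedback remains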

  13. Real-time electrical impedimetric monitoring of blood coagulation process under temperature and hematocrit variations conducted in a microfluidic chip.

    Directory of Open Access Journals (Sweden)

    Kin Fong Lei

    Blood coagulation is an extremely complicated and dynamic physiological process. Monitoring of blood coagulation is essential to predict the risk of hemorrhage and thrombosis during cardiac surgical procedures. In this study, a high-throughput microfluidic chip has been developed for the investigation of the blood coagulation process under temperature and hematocrit variations. The electrical impedance of whole blood was continuously recorded by on-chip electrodes in contact with the blood sample during coagulation. The impedance change of the blood was analyzed to characterize the coagulation process, and the starting time of blood coagulation was defined from it. Blood coagulation times measured under temperature and hematocrit variations showed good agreement with previous clinical reports. Electrical impedance measurement thus provides a fast and easy technique for characterizing the blood coagulation process. The microfluidic chip was shown to be a sensitive and promising device for monitoring the blood coagulation process under a variety of conditions. It should prove valuable for the development of point-of-care coagulation testing devices that utilize whole blood samples in microliter quantities.
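
    The record defines a coagulation starting time from the impedance change but does not give the rule. A hypothetical criterion, purely for illustration: take the first instant at which the smoothed impedance slope exceeds a threshold (the window and threshold below are invented, not from the paper).

        import numpy as np

        def coagulation_start(t, z, window=5, slope_thresh=0.2):
            """Hypothetical rule: first time the moving-average-smoothed
            impedance slope exceeds slope_thresh (ohm/s)."""
            kernel = np.ones(window) / window
            z_smooth = np.convolve(z, kernel, mode="same")
            slope = np.gradient(z_smooth, t)
            idx = np.argmax(slope > slope_thresh)      # first True, else 0
            return t[idx] if slope[idx] > slope_thresh else None

        # Synthetic trace: flat baseline, then impedance rises as blood clots
        t = np.linspace(0.0, 600.0, 601)               # seconds
        z = 100 + 30 / (1 + np.exp(-(t - 240) / 20))   # ohms (illustrative)
        print("estimated coagulation start:", coagulation_start(t, z), "s")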

  14. Insights into explosion dynamics at Stromboli in 2009 from ash samples collected in real-time

    Science.gov (United States)

    Taddeucci, J.; Lautze, N.; Andronico, D.; D'Auria, L.; Niemeijer, A.; Houghton, B.; Scarlato, P.

    2012-04-01

    Rapid characterization of tephra during explosive eruptions can provide valuable insights into eruptive mechanisms, complementing other monitoring systems. Here we offer a perspective on Stromboli's conduit processes by linking ash textures to geophysical estimates of eruption parameters for observed explosions. A three-day campaign at Stromboli was undertaken by Italy's Istituto Nazionale di Geofisica e Vulcanologia (INGV) in October 2009. At this time activity was moderately intense, with an average of 4 to 5 explosions per hour, both ash-rich and ash-poor, at each of the SW and NE vents. A total of fifteen ash samples were collected in real time. We used binocular and scanning electron microscopes to analyze the components, grain size and morphology distributions, and surface chemistry of ash particles within eight selected samples. In addition, the INGV monitoring network provided visual, thermal, and seismic information on the explosions that generated the sampled ash. In each sample, the proportion of fluidal, glassy sideromelane (as opposed to blocky, microcrystalline tachylite plus lithics), the degree of "chemical freshness" (as opposed to chemical alteration), and the average size of particles appear to correlate directly with the maximum height and the seismic amplitude of the corresponding explosion, and inversely with the amount of ash erupted, as estimated from monitoring videos. These observations suggest that more violent explosions (i.e., those driven by the release of larger and more pressurized gas volumes) produce ash via the fragmentation of hotter, more fluid magma, while weaker ones mostly erupt ash-sized particles derived from the fragmentation of colder magma and incorporation of conduit wall debris. The formation of fluidal ash particles (up to Pele's hairs) requires aerodynamic deformation of a relatively low-viscosity magma, in agreement with the strong acceleration imposed upon fragmented magma clots by the rapid expansion of the driving gas.

  15. Survey of real-time processing systems for big data

    DEFF Research Database (Denmark)

    Liu, Xiufeng; Lftikhar, Nadeem; Xie, Xike

    2014-01-01

    In recent years, real-time processing and analytics systems for big data, in the context of Business Intelligence (BI), have received growing attention. Traditional BI platforms that perform regular updates on a daily, weekly or monthly basis are no longer adequate for fast-changing business environments. However, due to the nature of big data, it has become a challenge to achieve the real-time capability using traditional technologies. The recent distributed computing technology, MapReduce, provides off-the-shelf high scalability that can significantly shorten the processing time for big data; its open-source implementation Hadoop has become the de-facto standard for processing big data. However, Hadoop has the limitation of not supporting real-time updates. The improvements in Hadoop for the real-time capability, and the other alternative real-time frameworks, have been...

  16. Contribution to the stochastically studies of space-time dependable hydrological processes

    International Nuclear Information System (INIS)

    Kjaevski, Ivancho

    2002-12-01

    Hydrology is one of the foundations of modern water-resources planning and management. Historically, the science of hydrology has followed the development of water-management systems, which over time have evolved from single-purpose schemes into complex, multipurpose systems. The dynamics of modern society have increased the demand for clean water while, at the same time, the natural resources of clean water are being reduced. Under such conditions, water-management systems must solve ever more complicated problems of managing water resources. Solving these problems has driven the development and application of new methods and technologies for planning and managing water resources and water-management systems, such as systems analysis, operations research, hierarchical decision-making, expert systems, and computer technology. Planning and management of water resources require historical records of hydro-meteorological processes. In our country such records cover a period of 50-70 years, while in some European countries records of more than 100 years exist. Basic statistical techniques, such as sampling, probability distribution functions, and correlation and regression, are used for simple, single-purpose water-management problems. Solving new water-management problems requires space-time stochastic techniques and modern mathematical and statistical methods for the simulation and optimization of complex water systems. Three phases of development of these techniques are needed to obtain reliable hydrological models: (i) assessing the quality of hydro-meteorological data and analyzing their consistency and homogeneity; (ii) structural analysis of hydro-meteorological processes; and (iii) mathematical models for modeling hydro-meteorological processes. Very often, the third phase is applied for analyzing and modeling of hydro-meteorological processes.

  17. Neutron-activation analysis of routine mineral-processing samples

    International Nuclear Information System (INIS)

    Watterson, J.; Eddy, B.; Pearton, D.

    1974-01-01

    Instrumental neutron-activation analysis was applied to a suite of typical mineral-processing samples to establish which elements can be rapidly determined in them by this technique. A total of 35 elements can be determined with precisions (from the counting statistics) ranging from better than 1 per cent to approximately 20 per cent. The elements that can be determined have been tabulated together with the experimental conditions, the precision from the counting statistics, and the estimated number of analyses possible per day. With an automated system, this number can be as high as 150 in the most favourable cases.

  18. Pippi — Painless parsing, post-processing and plotting of posterior and likelihood samples

    Science.gov (United States)

    Scott, Pat

    2012-11-01

    Interpreting samples from likelihood or posterior probability density functions is rarely as straightforward as it seems it should be. Producing publication-quality graphics of these distributions is often similarly painful. In this short note I describe pippi, a simple, publicly available package for parsing and post-processing such samples, as well as generating high-quality PDF graphics of the results. Pippi is easily and extensively configurable and customisable, both in its options for parsing and post-processing samples, and in the visual aspects of the figures it produces. I illustrate some of these using an existing supersymmetric global fit, performed in the context of a gamma-ray search for dark matter. Pippi can be downloaded and followed at http://github.com/patscott/pippi.

  19. Effects of statistical quality, sampling rate and temporal filtering techniques on the extraction of functional parameters from the left ventricular time-activity curves

    Energy Technology Data Exchange (ETDEWEB)

    Guignard, P.A.; Chan, W. (Royal Melbourne Hospital, Parkville (Australia). Dept. of Nuclear Medicine)

    1984-09-01

    Several techniques for the processing of a series of curves derived from two left ventricular time-activity curves acquired at rest and during exercise with a nuclear stethoscope were evaluated: three and five point time smoothing, Fourier filtering preserving one to four harmonics (H), truncated curve Fourier filtering, and third degree polynomial curve fitting. Each filter's ability to recover, with fidelity, systolic and diastolic function parameters was evaluated under increasingly 'noisy' conditions and at several sampling rates. Third degree polynomial curve fitting and truncated Fourier filters exhibited very high sensitivity to noise. Three and five point time smoothing had moderate sensitivity to noise, but was highly affected by sampling rate. Fourier filtering preserving 2H or 3H produced the best compromise, with high resilience to noise and independence of sampling rate as far as the recovery of these functional parameters is concerned.
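
    Fourier filtering that preserves a fixed number of harmonics is straightforward to reproduce; a minimal sketch (the example curve is synthetic, not the nuclear stethoscope data):

        import numpy as np

        def fourier_filter(curve, n_harmonics=2):
            """Keep only the DC term and the first n_harmonics of a periodic
            time-activity curve (e.g., 2H or 3H as recommended above)."""
            spec = np.fft.rfft(curve)
            spec[n_harmonics + 1:] = 0.0     # zero everything above harmonic n
            return np.fft.irfft(spec, n=len(curve))

        # Noisy one-cycle ventricular volume curve, 32 samples per cycle
        rng = np.random.default_rng(3)
        t = np.linspace(0, 2 * np.pi, 32, endpoint=False)
        curve = 100 - 35 * np.sin(t) + 5 * rng.standard_normal(32)
        smooth = fourier_filter(curve, n_harmonics=3)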

  20. Effects of statistical quality, sampling rate and temporal filtering techniques on the extraction of functional parameters from the left ventricular time-activity curves

    International Nuclear Information System (INIS)

    Guignard, P.A.; Chan, W.

    1984-01-01

    Several techniques for the processing of a series of curves derived from two left ventricular time-activity curves acquired at rest and during exercise with a nuclear stethoscope were evaluated: three and five point time smoothing, Fourier filtering preserving one to four harmonics (H), truncated curve Fourier filtering, and third degree polynomial curve fitting. Each filter's ability to recover, with fidelity, systolic and diastolic function parameters was evaluated under increasingly 'noisy' conditions and at several sampling rates. Third degree polynomial curve fitting and truncated Fourier filters exhibited very high sensitivity to noise. Three and five point time smoothing had moderate sensitivity to noise, but was highly affected by sampling rate. Fourier filtering preserving 2H or 3H produced the best compromise, with high resilience to noise and independence of sampling rate as far as the recovery of these functional parameters is concerned.

  1. Defining an optimum pumping-time requirement for sampling ground-water wells on the Hanford site

    International Nuclear Information System (INIS)

    Scharnhorst, N.L.

    1982-04-01

    The objective was to determine the optimum time period necessary to pump water from a well before a representative sample of the ground water can be obtained. It was assumed that a representative sample has been collected if the concentration of chemical parameters is the same in a number of samples taken consecutively, so that the concentration of parameters does not vary with time of collection. Ground-water samples used in this project were obtained by pumping selected wells on the Hanford Site. At each well, samples were taken at two minute intervals, and on each sample various chemical analyses were performed. Samples were checked for pH, sulfate, iron, specific conductivity, chloride, nitrate and alkalinity. The data showed that pH, alkalinity, sulfate and specific conductivity levels stabilized almost immediately after pumping of the well began. In many wells, the chloride and nitrate levels were unstable throughout the 38-minute sampling period. Iron levels, however, did not behave in either fashion. The concentration of iron in the samples was high when pumping began but dropped rapidly as pumping continued. The best explanation for this is that iron is flushed from the sides of the casing into the well when pumping begins. After several minutes of pumping, most of the dissolved iron is washed from the well casing and the iron concentration reaches a stable plateau representative of the iron concentration in the ground water. Since iron concentration takes longest to stabilize, the optimum pumping time for a well is based on the iron stabilization time for that well.
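
    The stabilization criterion is not spelled out in the record; a hypothetical version, for illustration, declares the ground water representative once a window of consecutive readings stays within a small relative tolerance of its mean:

        import numpy as np

        def stabilization_time(t, conc, window=5, rel_tol=0.02):
            """Hypothetical criterion: earliest time from which a window of
            consecutive readings stays within rel_tol of the window mean."""
            c = np.asarray(conc, float)
            for i in range(len(c) - window + 1):
                block = c[i:i + window]
                if np.all(np.abs(block - block.mean()) <= rel_tol * block.mean()):
                    return t[i]
            return None

        # Iron-like behaviour: high at the start of pumping, decaying to a plateau
        t = np.arange(0, 38, 2)                   # minutes, 2-min intervals
        fe = 0.3 + 1.5 * np.exp(-t / 6.0)         # mg/L (illustrative)
        print("stable after", stabilization_time(t, fe), "minutes of pumping")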

  2. The Identity Process in Times of Ruptures

    DEFF Research Database (Denmark)

    Awad, Sarah H.

    2016-01-01

    This is a longitudinal study of the identity process through times of dramatic social change. Using a narrative psychological approach, this research follows the life stories of five Egyptian bloggers as they write their stories on online blogs over the course of the three years following the 2011 revolution, at which time Egypt has witnessed major social and political changes. The aim is to understand the identity process of individuals as they develop and adapt through changing social contexts and how they create alternative social relations as they engage in prefigurative politics. The findings ... of agency in individuals' ability to create new meanings of their world in spite of the socio-cultural and political constraints. This study presents narratives as an informing methodological resource that connects identity process with social representations and emphasizes the value of storytelling...

  3. Planck 2015 results: VII. High Frequency Instrument data processing: Time-ordered information and beams

    International Nuclear Information System (INIS)

    Adam, R.; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.

    2016-01-01

    The Planck High Frequency Instrument (HFI) has observed the full sky at six frequencies (100, 143, 217, 353, 545, and 857 GHz) in intensity and at four frequencies in linear polarization (100, 143, 217, and 353 GHz). In order to obtain sky maps, the time-ordered information (TOI) containing the detector and pointing samples must be processed and the angular response must be assessed. The full mission TOI is included in the Planck 2015 release. This study describes the HFI TOI and beam processing for the 2015 release. HFI calibration and map making are described in a companion paper. The main pipeline has been modified since the last release (2013 nominal mission in intensity only) by including a correction for the nonlinearity of the warm readout and by improving the model of the bolometer time response. The beam processing is an essential tool that derives the angular response used in all the Planck science papers, and we report an improvement in the effective beam window function uncertainty of more than a factor of 10 relative to the 2013 release. Noise correlations introduced by pipeline filtering are assessed using dedicated simulations. Finally, angular cross-power spectra using data sets that are decorrelated in time are immune to the main systematic effects.

  4. Option Pricing with Time-changed Lévy Processes

    DEFF Research Database (Denmark)

    Klingler, Sven; Kim, Young Shin; Rachev, Svetlozar T.

    2013-01-01

    In this article, we introduce two new six-parameter processes based on time-changing tempered stable distributions and develop an option pricing model based on these processes. This model provides a good fit to observed option prices. To demonstrate the advantages of the new processes, we conduct...
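
    The record is truncated; for orientation, the standard time-change construction behind such models (textbook material, not taken from the article) subordinates a Lévy process L, with characteristic exponent psi, to an independent increasing process T, so the characteristic function reduces to the Laplace transform of the time change:

        % Time change: X_t = L_{T_t}, with L a Levy process and T an
        % independent increasing process (the stochastic clock).
        \mathbb{E}\left[e^{\,iuL_t}\right] = e^{-t\psi(u)}
        \qquad\Longrightarrow\qquad
        \mathbb{E}\left[e^{\,iuX_t}\right]
            = \mathbb{E}\left[e^{-T_t\psi(u)}\right]
            = \mathcal{L}_{T_t}\!\left(\psi(u)\right).

    Option prices then follow by Fourier inversion of this characteristic function, so tractability hinges on a closed-form Laplace transform of the time change.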

  5. Dry sample storage system for an analytical laboratory supporting plutonium processing

    International Nuclear Information System (INIS)

    Treibs, H.A.; Hartenstein, S.D.; Griebenow, B.L.; Wade, M.A.

    1990-01-01

    The Special Isotope Separation (SIS) plant is designed to provide removal of undesirable isotopes in fuel grade plutonium by the atomic vapor laser isotope separation (AVLIS) process. The AVLIS process involves evaporation of plutonium metal and passage of an intense beam of light from a laser through the plutonium vapor. The laser beam consists of several discrete wavelengths, tuned to the precise wavelengths required to ionize the undesired isotopes. These ions are attracted to charged plates, leaving the bulk of the plutonium vapor, enriched in the desired isotopes, to be collected on a cold plate. Major portions of the process consist of pyrochemical processes, including direct reduction of the plutonium oxide feed material with calcium metal, and aqueous processes for purification of plutonium in residues. The analytical laboratory for the plant is called the Material and Process Control Laboratory (MPCL), and provides for the analysis of solid and liquid process samples.

  6. Stability of arsenic compounds in seafood samples during processing and storage by freezing

    DEFF Research Database (Denmark)

    Dahl, Lisbeth; Molin, Marianne; Amlund, Heidi

    2010-01-01

    ... was observed after processing or after storage by freezing. The content of tetramethylarsonium ion was generally low in all sample types, but increased significantly in all fried samples of both fresh and frozen seafood. Upon storage by freezing, the arsenobetaine content was reduced significantly, but only...

  7. A Formal Approach to Run-Time Evaluation of Real-Time Behaviour in Distributed Process Control Systems

    DEFF Research Database (Denmark)

    Kristensen, C.H.

    This thesis advocates a formal approach to run-time evaluation of real-time behaviour in distributed process control systems, motivated by a growing interest in applying the increasingly popular formal methods in the application area of distributed process control systems. We propose to evaluate... because the real-time aspects of distributed process control systems are considered to be among the hardest and most interesting to handle.

  8. Dimethyl adipimidate/Thin film Sample processing (DTS); A simple, low-cost, and versatile nucleic acid extraction assay for downstream analysis.

    Science.gov (United States)

    Shin, Yong; Lim, Swee Yin; Lee, Tae Yoon; Park, Mi Kyoung

    2015-09-15

    Sample processing, especially that involving nucleic acid extraction, is a prerequisite step for the isolation of high quantities of relatively pure DNA for downstream analyses in many life science and biomedical engineering studies. However, existing methods still have major problems, including labor-intensive, time-consuming procedures and high costs, as well as requirements for a centrifuge and the complex fabrication of filters and membranes. Here, we first report a versatile Dimethyl adipimidate/Thin film based Sample processing (DTS) procedure without the limitations of existing methods. This procedure is useful for the extraction of DNA from a variety of sources, including 6 eukaryotic cell types, 6 bacterial cell types, and 2 body fluids, in a single step. Specifically, the DTS procedure does not require a centrifuge and has improved time efficiency (30 min), affordability, and sensitivity in downstream analysis. We validated the DTS procedure for the extraction of DNA from human body fluids and confirmed that the quality and quantity of the extracted DNA were sufficient to allow robust detection of genetic and epigenetic biomarkers in downstream analysis.

  9. Adaptive Rate Sampling and Filtering Based on Level Crossing Sampling

    Directory of Open Access Journals (Sweden)

    Saeed Mian Qaisar

    2009-01-01

    The recent sophistication of mobile systems and sensor networks demands more and more processing resources. In order to maintain system autonomy, energy saving has become one of the most difficult industrial challenges in mobile computing. Most efforts to achieve this goal focus on improving embedded systems design and battery technology, but very few studies exploit the time-varying nature of the input signal. This paper aims to achieve power efficiency by intelligently adapting the processing activity to the local characteristics of the input signal. This is done by completely rethinking the processing chain, adopting a non-conventional sampling scheme and adaptive rate filtering. The proposed approach, based on the LCSS (Level Crossing Sampling Scheme), presents two filtering techniques able to adapt their sampling rate and filter order by analyzing the input signal variations online. Indeed, the principle is to intelligently exploit the signal's local characteristics, which are usually never considered, in order to filter only the relevant signal parts by employing filters of the relevant order. This idea leads to a drastic gain in computational efficiency, and hence in processing power, when compared to the classical techniques.
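
    The core of the LCSS idea fits in a few lines: samples are triggered by amplitude crossings, so an idle signal generates almost no samples. A minimal sketch (the levels and test signal are our own):

        import numpy as np

        def level_crossing_sample(t, x, levels):
            """Level-crossing sampling: emit a (time, level) pair whenever the
            signal crosses one of the predefined amplitude levels, instead of
            sampling on a fixed clock."""
            samples = []
            for i in range(1, len(x)):
                for lv in levels:
                    if (x[i - 1] - lv) * (x[i] - lv) < 0:    # sign change = crossing
                        frac = (lv - x[i - 1]) / (x[i] - x[i - 1])
                        samples.append((t[i - 1] + frac * (t[i] - t[i - 1]), lv))
            return sorted(samples)

        # Bursty test signal: LCSS concentrates samples in the active burst
        t = np.linspace(0.0, 1.0, 2000)
        x = np.where((t > 0.4) & (t < 0.6),
                     np.sin(2 * np.pi * 50 * t),             # active part
                     0.05 * np.sin(2 * np.pi * t))           # quiet part
        pts = level_crossing_sample(t, x, levels=np.linspace(-0.9, 0.9, 7))
        print(len(pts), "samples, almost all inside the burst")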

  10. Sampling system for in vivo ultrasound images

    DEFF Research Database (Denmark)

    Jensen, Jorgen Arendt; Mathorne, Jan

    1991-01-01

    Newly developed algorithms for processing medical ultrasound images use the high frequency sampled transducer signal. This paper describes demands imposed on a sampling system suitable for acquiring such data and gives details about a prototype constructed. It acquires full clinical images...... at a sampling frequency of 20 MHz with a resolution of 12 bits. The prototype can be used for real time image processing. An example of a clinical in vivo image is shown and various aspects of the data acquisition process are discussed....

  11. X-Ray Computed Tomography: The First Step in Mars Sample Return Processing

    Science.gov (United States)

    Welzenbach, L. C.; Fries, M. D.; Grady, M. M.; Greenwood, R. C.; McCubbin, F. M.; Zeigler, R. A.; Smith, C. L.; Steele, A.

    2017-01-01

    The Mars 2020 rover mission will collect and cache samples from the martian surface for possible retrieval and subsequent return to Earth. If the samples are returned, that mission would likely present an opportunity to analyze returned Mars samples within a geologic context on Mars. In addition, it may provide definitive information about the existence of past or present life on Mars. Mars sample return presents unique challenges for the collection, containment, transport, curation and processing of samples [1]. Foremost in the processing of returned samples are the closely paired considerations of life detection and Planetary Protection. In order to achieve Mars Sample Return (MSR) science goals, reliable analyses will depend on overcoming some challenging signal/noise-related issues, where sparse martian organic compounds must be reliably analyzed against the contamination background. While reliable analyses will depend on initial clean acquisition and robust documentation of all aspects of developing and managing the cache [2], there needs to be a reliable sample handling and analysis procedure that accounts for a variety of materials which may or may not contain evidence of past or present martian life. A recent report [3] suggests that a defined set of measurements should be made to effectively inform both science and Planetary Protection, when applied in the context of the two competing null hypotheses: 1) that there is no detectable life in the samples; or 2) that there is martian life in the samples. The defined measurements would include a phased approach that would be accepted by the community to preserve the bulk of the material but provide unambiguous science data that can be used and interpreted by various disciplines. Foremost is the concern that the initial steps would ensure the pristine nature of the samples. Preliminary, non-invasive techniques such as computed X-ray tomography (XCT) have been suggested as the first method to interrogate and characterize the samples.

  12. Unsupervised Ensemble Anomaly Detection Using Time-Periodic Packet Sampling

    Science.gov (United States)

    Uchida, Masato; Nawata, Shuichi; Gu, Yu; Tsuru, Masato; Oie, Yuji

    We propose an anomaly detection method for finding patterns in network traffic that do not conform to legitimate (i.e., normal) behavior. The proposed method trains a baseline model describing the normal behavior of network traffic without using manually labeled traffic data. The trained baseline model is used as the basis for comparison with the audit network traffic. This anomaly detection works in an unsupervised manner through the use of time-periodic packet sampling, which is used in a manner that differs from its intended purpose — the lossy nature of packet sampling is used to extract normal packets from the unlabeled original traffic data. Evaluation using actual traffic traces showed that the proposed method has false positive and false negative rates in the detection of anomalies regarding TCP SYN packets comparable to those of a conventional method that uses manually labeled traffic data to train the baseline model. Performance variation due to the probabilistic nature of sampled traffic data is mitigated by using ensemble anomaly detection that collectively exploits multiple baseline models in parallel. Alarm sensitivity is adjusted for the intended use by using maximum- and minimum-based anomaly detection that effectively take advantage of the performance variations among the multiple baseline models. Testing using actual traffic traces showed that the proposed anomaly detection method performs as well as one using manually labeled traffic data and better than one using randomly sampled (unlabeled) traffic data.
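
    A schematic reading of the method, with all names, features and thresholds invented for illustration: each sampling offset yields one baseline model of a traffic feature, and the max/min rules aggregate the per-model deviation scores.

        import numpy as np

        def baseline_models(feature, period, offsets):
            """One baseline (mean, std) per offset: time-periodic sampling
            keeps every period-th observation of the unlabeled series."""
            return [(feature[o::period].mean(), feature[o::period].std())
                    for o in offsets]

        def anomaly_scores(value, models):
            """Deviation of an audited value from each baseline, in std units."""
            return [(value - m) / s for m, s in models]

        rng = np.random.default_rng(4)
        history = rng.poisson(50, 5000).astype(float)   # e.g. SYN packets/interval
        models = baseline_models(history, period=10, offsets=range(10))

        scores = anomaly_scores(120.0, models)          # audited interval value
        # Max-based rule: alarm if ANY baseline fires (sensitive);
        # min-based rule: alarm only if ALL baselines fire (conservative).
        print("max rule:", max(scores) > 3.0, "| min rule:", min(scores) > 3.0)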

  13. A combined approach of generalized additive model and bootstrap with small sample sets for fault diagnosis in fermentation process of glutamate.

    Science.gov (United States)

    Liu, Chunbo; Pan, Feng; Li, Yun

    2016-07-29

    Glutamate is of great importance in the food and pharmaceutical industries, yet there is still a lack of effective statistical approaches for fault diagnosis in the fermentation process of glutamate. To date, a statistical approach based on the generalized additive model (GAM) and bootstrap has not been used for fault diagnosis in fermentation processes, much less in the fermentation process of glutamate with small sample sets. A combined approach of GAM and bootstrap was therefore developed for online fault diagnosis in the fermentation process of glutamate with small sample sets. GAM was first used to model the relationship between glutamate production and different fermentation parameters, using online data from four normal fermentation experiments of glutamate. The fitted GAM with fermentation time, dissolved oxygen, oxygen uptake rate and carbon dioxide evolution rate captured 99.6 % of the variance of glutamate production during the fermentation process. Bootstrap was then used to quantify the uncertainty of the estimated production of glutamate from the fitted GAM, using a 95 % confidence interval. The proposed approach was then used for online fault diagnosis in abnormal fermentation processes of glutamate, a fault being defined as the estimated production of glutamate falling outside the 95 % confidence interval. The online fault diagnosis based on the proposed approach identified not only the start of a fault in the fermentation process, but also its end, when the fermentation conditions were back to normal. The proposed approach required only a small sample set from normal fermentation experiments to establish the model, and then only online recorded data on fermentation parameters for fault diagnosis in the fermentation process of glutamate. The proposed approach based on GAM and bootstrap provides a new and effective way of diagnosing faults in the fermentation process of glutamate with small sample sets.
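
    A minimal sketch of the confidence-band logic, substituting a polynomial fit for the paper's GAM (so the smoother, the residual bootstrap scheme and all numbers here are simplifications of ours):

        import numpy as np

        def bootstrap_band(t, y, degree=3, n_boot=500, alpha=0.05):
            """Residual bootstrap around a polynomial fit (a stand-in for the
            paper's GAM); returns a pointwise 95 % confidence band."""
            rng = np.random.default_rng(5)
            coef = np.polyfit(t, y, degree)
            fit = np.polyval(coef, t)
            resid = y - fit
            preds = np.empty((n_boot, len(t)))
            for b in range(n_boot):
                y_b = fit + rng.choice(resid, size=len(t), replace=True)
                preds[b] = np.polyval(np.polyfit(t, y_b, degree), t)
            lo, hi = np.percentile(preds, [100 * alpha / 2,
                                           100 * (1 - alpha / 2)], axis=0)
            return lo, hi

        # Band from a normal run; a lagging batch is flagged where it exits it
        rng = np.random.default_rng(6)
        t = np.linspace(0.0, 30.0, 31)                     # fermentation hours
        y = 100 / (1 + np.exp(-(t - 15) / 3)) + rng.normal(0, 2, 31)
        lo, hi = bootstrap_band(t, y)
        lagging = 100 / (1 + np.exp(-(t - 18) / 3))        # abnormal batch
        print("fault flagged at hours:", t[(lagging < lo) | (lagging > hi)])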

  14. Preliminary time-phased TWRS process model results

    International Nuclear Information System (INIS)

    Orme, R.M.

    1995-01-01

    This report documents the first phase of efforts to model the retrieval and processing of Hanford tank waste within the constraints of an assumed tank farm configuration. This time-phased approach simulates a first try at a retrieval sequence, the batching of waste through retrieval facilities, the batching of retrieved waste through enhanced sludge washing, the batching of liquids through pretreatment and low-level waste (LLW) vitrification, and the batching of pretreated solids through high-level waste (HLW) vitrification. The results reflect the outcome of an assumed retrieval sequence that has not been tailored with respect to accepted measures of performance. The batch data, composition variability, and final waste volume projections in this report should be regarded as tentative. Nevertheless, the results provide interesting insights into time-phased processing of the tank waste. Inspection of the composition variability, for example, suggests modifications to the retrieval sequence that will further improve the uniformity of feed to the vitrification facilities. This model will be a valuable tool for evaluating suggested retrieval sequences and establishing a time-phased processing baseline. An official recommendation on tank retrieval sequence will be made in September, 1995.

  15. Effect of tumbling time, injection rate and k-carrageenan addition on processing, textural and color characteristics of pork Biceps femoris muscle

    Directory of Open Access Journals (Sweden)

    Livia PATRAŞCU

    2013-08-01

    The effects of tumbling time (0-9 hours), injection rate (20-50%) and k-carrageenan addition (0.25-0.5%) on the quality characteristics of cooked pork Biceps femoris muscle have been studied. Properties of injected and tumbled meat samples were determined by measuring processing characteristics (tumbling yield, cooking yield and expressible moisture), color (L*, a*, b*, hue angle and chroma) and texture (firmness, toughness, adhesiveness, work of adhesion and fracturability). Increasing tumbling time up to 9 h led to better hydration properties and increased the cooking yield for all samples, at both 0.25% and 0.5% k-carrageenan addition. It also decreased the firmness and toughness of the evaluated samples. Biceps femoris samples containing the higher level of k-carrageenan were more tender than those containing less polysaccharide. Neither injection rate nor tumbling time affected the color components of the analyzed samples.

  16. The U-tube sampling methodology and real-time analysis of geofluids

    International Nuclear Information System (INIS)

    Freifeld, Barry; Perkins, Ernie; Underschultz, James; Boreham, Chris

    2009-01-01

    The U-tube geochemical sampling methodology, an extension of the porous cup technique proposed by Wood (1973), provides minimally contaminated aliquots of multiphase fluids from deep reservoirs and allows for accurate determination of dissolved gas composition. The initial deployment of the U-tube during the Frio Brine Pilot CO2 storage experiment, Liberty County, Texas, obtained representative samples of brine and supercritical CO2 from a depth of 1.5 km. A quadrupole mass spectrometer provided real-time analysis of dissolved gas composition. Since the initial demonstration, the U-tube has been deployed for (1) sampling of fluids down gradient of the proposed Yucca Mountain High-Level Waste Repository, Amargosa Valley, Nevada; (2) acquiring fluid samples beneath permafrost in Nunavut Territory, Canada; and (3) sampling at a CO2 storage demonstration project within a depleted gas reservoir, Otway Basin, Victoria, Australia. The addition of in-line high-pressure pH and EC sensors allows for continuous monitoring of the fluid during sample collection. Difficulties have arisen during U-tube sampling, such as blockage of sample lines by naturally occurring waxes or by freezing conditions; however, workarounds such as solvent flushing or heating have been used to address these problems. The U-tube methodology has proven to be robust and, with careful consideration of the constraints and limitations, can provide high-quality geochemical samples.

  17. Defining And Characterizing Sample Representativeness For DWPF Melter Feed Samples

    Energy Technology Data Exchange (ETDEWEB)

    Shine, E. P.; Poirier, M. R.

    2013-10-29

    Representative sampling is important throughout the Defense Waste Processing Facility (DWPF) process, and the demonstrated success of the DWPF process to achieve glass product quality over the past two decades is a direct result of the quality of information obtained from the process. The objective of this report was to present sampling methods that the Savannah River Site (SRS) used to qualify waste being dispositioned at the DWPF. The goal was to emphasize the methodology, not a list of outcomes from those studies. This methodology includes proven methods for taking representative samples, the use of controlled analytical methods, and data interpretation and reporting that considers the uncertainty of all error sources. Numerous sampling studies were conducted during the development of the DWPF process and still continue to be performed in order to evaluate options for process improvement. Study designs were based on use of statistical tools applicable to the determination of uncertainties associated with the data needs. Successful designs are apt to be repeated, so this report chose only to include prototypic case studies that typify the characteristics of frequently used designs. Case studies have been presented for studying in-tank homogeneity, evaluating the suitability of sampler systems, determining factors that affect mixing and sampling, comparing the final waste glass product chemical composition and durability to that of the glass pour stream sample and other samples from process vessels, and assessing the uniformity of the chemical composition in the waste glass product. Many of these studies efficiently addressed more than one of these areas of concern associated with demonstrating sample representativeness and provide examples of statistical tools in use for DWPF. The time when many of these designs were implemented was in an age when the sampling ideas of Pierre Gy were not as widespread as they are today. Nonetheless, the engineers and

  18. Defining And Characterizing Sample Representativeness For DWPF Melter Feed Samples

    International Nuclear Information System (INIS)

    Shine, E. P.; Poirier, M. R.

    2013-01-01

    Representative sampling is important throughout the Defense Waste Processing Facility (DWPF) process, and the demonstrated success of the DWPF process to achieve glass product quality over the past two decades is a direct result of the quality of information obtained from the process. The objective of this report was to present sampling methods that the Savannah River Site (SRS) used to qualify waste being dispositioned at the DWPF. The goal was to emphasize the methodology, not a list of outcomes from those studies. This methodology includes proven methods for taking representative samples, the use of controlled analytical methods, and data interpretation and reporting that considers the uncertainty of all error sources. Numerous sampling studies were conducted during the development of the DWPF process and still continue to be performed in order to evaluate options for process improvement. Study designs were based on use of statistical tools applicable to the determination of uncertainties associated with the data needs. Successful designs are apt to be repeated, so this report chose only to include prototypic case studies that typify the characteristics of frequently used designs. Case studies have been presented for studying in-tank homogeneity, evaluating the suitability of sampler systems, determining factors that affect mixing and sampling, comparing the final waste glass product chemical composition and durability to that of the glass pour stream sample and other samples from process vessels, and assessing the uniformity of the chemical composition in the waste glass product. Many of these studies efficiently addressed more than one of these areas of concern associated with demonstrating sample representativeness and provide examples of statistical tools in use for DWPF. The time when many of these designs were implemented was in an age when the sampling ideas of Pierre Gy were not as widespread as they are today. Nonetheless, the engineers and

  19. Real-time SHVC software decoding with multi-threaded parallel processing

    Science.gov (United States)

    Gudumasu, Srinivas; He, Yuwen; Ye, Yan; He, Yong; Ryu, Eun-Seok; Dong, Jie; Xiu, Xiaoyu

    2014-09-01

    This paper proposes a parallel decoding framework for scalable HEVC (SHVC). Various optimization technologies are implemented on the basis of the SHVC reference software SHM-2.0 to achieve real-time decoding speed for the two-layer spatial scalability configuration. SHVC decoder complexity is analyzed with profiling information. The decoding process at each layer and the up-sampling process are designed in parallel and scheduled by a high-level application task manager. Within each layer, multi-threaded decoding is applied to accelerate the layer decoding speed. Entropy decoding, reconstruction, and in-loop processing are pipeline designed with multiple threads based on groups of coding tree units (CTU). A group of CTUs is treated as a processing unit in each pipeline stage to achieve a better trade-off between parallelism and synchronization. Motion compensation, inverse quantization, and inverse transform modules are further optimized with SSE4 SIMD instructions. Simulations on a desktop with an Intel i7-2600 processor running at 3.4 GHz show that the parallel SHVC software decoder is able to decode 1080p spatial 2x at up to 60 fps (frames per second) and 1080p spatial 1.5x at up to 50 fps for those bitstreams generated with SHVC common test conditions in the JCT-VC standardization group. The decoding performance at various bitrates with different optimization technologies and different numbers of threads is compared in terms of decoding speed and resource usage, including processor and memory.
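
    A toy illustration of the queue-connected pipeline design described above (not the SHM-2.0 code; the stages and work units are placeholders):

        import queue, threading

        def stage(fn, q_in, q_out):
            """One pipeline stage: pull a work unit (here, a group of CTUs),
            process it, and pass it downstream. None is the shutdown sentinel."""
            while (item := q_in.get()) is not None:
                q_out.put(fn(item))
            q_out.put(None)

        # Toy stand-ins for entropy decoding and reconstruction of a CTU group
        entropy_decode = lambda g: f"parsed({g})"
        reconstruct = lambda g: f"pixels({g})"

        q0, q1, q2 = queue.Queue(), queue.Queue(), queue.Queue()
        workers = [threading.Thread(target=stage, args=(entropy_decode, q0, q1)),
                   threading.Thread(target=stage, args=(reconstruct, q1, q2))]
        for w in workers:
            w.start()
        for g in range(4):                 # four CTU groups of one frame
            q0.put(f"ctu-group-{g}")
        q0.put(None)                       # flush the pipeline
        for w in workers:
            w.join()
        while (out := q2.get()) is not None:
            print(out)                     # pixels(parsed(ctu-group-0)) ...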

  20. Simulating the complex output of rainfall and hydrological processes using the information contained in large data sets: the Direct Sampling approach.

    Science.gov (United States)

    Oriani, Fabio

    2017-04-01

    The unpredictable nature of rainfall makes its estimation as difficult as it is essential to hydrological applications. Stochastic simulation is often considered a convenient approach to assess the uncertainty of rainfall processes, but preserving their irregular behavior and variability at multiple scales is a challenge even for the most advanced techniques. In this presentation, an overview of the Direct Sampling technique [1] and its recent application to rainfall and hydrological data simulation [2, 3] is given. The algorithm, having its roots in multiple-point statistics, makes use of a training data set to simulate the outcome of a process without inferring any explicit probability measure: the data are simulated in time or space by sampling the training data set where a sufficiently similar group of neighbor data exists. This approach allows complex statistical dependencies at different scales to be preserved with a good approximation, while reducing the parameterization to the minimum. The strengths and weaknesses of the Direct Sampling approach are shown through a series of applications to rainfall and hydrological data: from time-series simulation to spatial rainfall fields conditioned by elevation or a climate scenario. In the era of vast databases, is this data-driven approach a valid alternative to parametric simulation techniques? [1] Mariethoz G., Renard P., and Straubhaar J. (2010), The Direct Sampling method to perform multiple-point geostatistical simulations, Water Resour. Res., 46(11), http://dx.doi.org/10.1029/2008WR007621 [2] Oriani F., Straubhaar J., Renard P., and Mariethoz G. (2014), Simulation of rainfall time series from different climatic regions using the direct sampling technique, Hydrol. Earth Syst. Sci., 18, 3015-3031, http://dx.doi.org/10.5194/hess-18-3015-2014 [3] Oriani F., Borghi A., Straubhaar J., Mariethoz G., Renard P. (2016), Missing data simulation inside flow rate time-series using multiple-point statistics, Environ. Model
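
    A minimal sketch of Direct Sampling for a single time series, under our own simplifications (fixed neighborhood size, mean absolute distance, scan of a random fraction of the training data):

        import numpy as np

        def direct_sampling_series(train, n_sim, n_neigh=5, tol=0.05, frac=0.5):
            """Direct Sampling for one time series: scan a random fraction of
            the training data for a position whose preceding n_neigh values
            are within tol (normalized distance) of the current simulation
            tail, then copy the value that followed that position."""
            rng = np.random.default_rng(7)
            sim = list(train[:n_neigh])             # seed with the training head
            scale = float(train.max() - train.min())
            positions = np.arange(n_neigh, len(train))
            for _ in range(n_sim):
                tail = np.asarray(sim[-n_neigh:])
                scan = rng.permutation(positions)[: int(frac * len(positions))]
                best, best_d = train[scan[0]], np.inf
                for i in scan:
                    d = np.mean(np.abs(train[i - n_neigh:i] - tail)) / scale
                    if d < best_d:
                        best, best_d = train[i], d
                    if d <= tol:                    # first acceptable match wins
                        break
                sim.append(best)
            return np.array(sim)

        # Daily rainfall-like training series: dry spells and wet bursts
        rng = np.random.default_rng(8)
        wet = rng.random(3650) < 0.3
        train = np.where(wet, rng.gamma(2.0, 5.0, 3650), 0.0)
        sim = direct_sampling_series(train, n_sim=365)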

  1. InSAR Deformation Time Series Processed On-Demand in the Cloud

    Science.gov (United States)

    Horn, W. B.; Weeden, R.; Dimarchi, H.; Arko, S. A.; Hogenson, K.

    2017-12-01

    During this past year, ASF has developed a cloud-based on-demand processing system known as HyP3 (http://hyp3.asf.alaska.edu/), the Hybrid Pluggable Processing Pipeline, for Synthetic Aperture Radar (SAR) data. The system makes it easy for a user who doesn't have the time or inclination to install and use complex SAR processing software to leverage SAR data in their research or operations. One such processing algorithm is generation of a deformation time series product, which is a series of images representing ground displacements over time and can be computed from a time series of interferometric SAR (InSAR) products. The set of software tools necessary to generate this useful product are difficult to install, configure, and use. Moreover, for a long time series with many images, the processing of just the interferograms can take days. Principally built by three undergraduate students at the ASF DAAC, the deformation time series processing relies on the new Amazon Batch service, which enables processing of jobs with complex interconnected dependencies in a straightforward and efficient manner. In the case of generating a deformation time series product from a stack of single-look complex SAR images, the system uses Batch to serialize the up-front processing, interferogram generation, optional tropospheric correction, and deformation time series generation. The most time-consuming portion is the interferogram generation, because even for a fairly small stack of images many interferograms need to be processed. By using AWS Batch, the interferograms are all generated in parallel; the entire process completes in hours rather than days. Additionally, the individual interferograms are saved in Amazon's cloud storage, so that when new data is acquired in the stack, an updated time series product can be generated with minimal additional processing. This presentation will focus on the development techniques and enabling technologies that were used in developing the time
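
    The fan-out/fan-in dependency graph described above can be expressed quite compactly with the AWS Batch API. A sketch via boto3 follows; the queue and job definition names are placeholders, not ASF's actual resources, and since Batch caps a job at 20 direct dependencies, a full-size stack would instead use array jobs or batched dependency lists.

    ```python
    import boto3

    batch = boto3.client("batch")

    def submit(name, depends_on=()):
        """Submit one Batch job, optionally dependent on earlier job ids."""
        resp = batch.submit_job(
            jobName=name,
            jobQueue="insar-queue",            # placeholder queue name
            jobDefinition="insar-processing",  # placeholder job definition
            dependsOn=[{"jobId": j} for j in depends_on],
        )
        return resp["jobId"]

    prep = submit("stack-preparation")                    # serial up-front step
    ifgs = [submit(f"ifg-{i:03d}", depends_on=[prep])     # interferograms fan out
            for i in range(10)]                           # and run in parallel
    tropo = submit("tropospheric-correction", depends_on=ifgs)
    submit("time-series-generation", depends_on=[tropo])  # final serial step
    ```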

  2. Extreme robustness of scaling in sample space reducing processes explains Zipf’s law in diffusion on directed networks

    International Nuclear Information System (INIS)

    Corominas-Murtra, Bernat; Hanel, Rudolf; Thurner, Stefan

    2016-01-01

    It has been shown recently that a specific class of path-dependent stochastic processes, which reduce their sample space as they unfold, lead to exact scaling laws in frequency and rank distributions. Such sample space reducing processes (SSRPs) offer an alternative new mechanism to understand the emergence of scaling in countless processes. The corresponding power law exponents were shown to be related to noise levels in the process. Here we show that the emergence of scaling is not limited to the simplest SSRPs, but holds for a huge domain of stochastic processes that are characterised by non-uniform prior distributions. We demonstrate mathematically that in the absence of noise the scaling exponents converge to −1 (Zipf’s law) for almost all prior distributions. As a consequence it becomes possible to fully understand targeted diffusion on weighted directed networks and its associated scaling laws in node visit distributions. The presence of cycles can be properly interpreted as playing the same role as noise in SSRPs and, accordingly, determines the scaling exponents. The result that Zipf’s law emerges as a generic feature of diffusion on networks, regardless of its details, and that the exponent of visiting times is related to the amount of cycles in a network could be relevant for a series of applications in traffic, transport and supply chain management. (paper)
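
    The noise-free case is simple to simulate. The sketch below (sizes chosen arbitrarily) counts state visits of a sample space reducing process; since the visiting probability of state i scales as 1/i, the printed ratios should come out near 1/10, 1/100 and 1/1000.

    ```python
    import numpy as np
    from collections import Counter

    rng = np.random.default_rng(1)

    def ssr_visits(n_states=10_000, n_runs=5_000):
        """Noiseless SSRP: start at the top state and repeatedly jump to a
        uniformly chosen lower state until state 1 is reached; count visits."""
        visits = Counter()
        for _ in range(n_runs):
            state = n_states
            while state > 1:
                visits[state] += 1
                state = rng.integers(1, state)   # uniform on {1, ..., state-1}
            visits[1] += 1
        return visits

    v = ssr_visits()
    for s in (10, 100, 1000):
        print(s, v[s] / v[1])   # close to 1/s, i.e. Zipf's law
    ```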

  3. Detecting spatial patterns of rivermouth processes using a geostatistical framework for near-real-time analysis

    Science.gov (United States)

    Xu, Wenzhao; Collingsworth, Paris D.; Bailey, Barbara; Carlson Mazur, Martha L.; Schaeffer, Jeff; Minsker, Barbara

    2017-01-01

    This paper proposes a geospatial analysis framework and software to interpret water-quality sampling data from towed undulating vehicles in near-real time. The framework includes data quality assurance and quality control processes, automated kriging interpolation along undulating paths, and local hotspot and cluster analyses. These methods are implemented in an interactive Web application developed using the Shiny package in the R programming environment to support near-real time analysis along with 2- and 3-D visualizations. The approach is demonstrated using historical sampling data from an undulating vehicle deployed at three rivermouth sites in Lake Michigan during 2011. The normalized root-mean-square error (NRMSE) of the interpolation averages approximately 10% in 3-fold cross validation. The results show that the framework can be used to track river plume dynamics and provide insights on mixing, which could be related to wind and seiche events.
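
    The cross-validated interpolation step might look roughly like the sketch below, which uses a Gaussian process regressor (the standard statistical formulation of kriging) on synthetic tow-path data. The coordinates, response variable and kernel are illustrative assumptions; the actual framework additionally performs QA/QC and kriging along the undulating path.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(2)
    X = rng.uniform(0, 10, size=(300, 3))        # x, y, depth along the tow path
    y = np.sin(X[:, 0]) + 0.1 * X[:, 2] + rng.normal(0, 0.05, 300)

    errors = []
    for train, test in KFold(n_splits=3, shuffle=True, random_state=0).split(X):
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel())
        gp.fit(X[train], y[train])
        pred = gp.predict(X[test])
        rmse = np.sqrt(np.mean((pred - y[test]) ** 2))
        errors.append(rmse / (y.max() - y.min()))  # normalise by the data range
    print(f"3-fold NRMSE: {np.mean(errors):.1%}")  # the paper reports ~10%
    ```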

  4. Microfluidic devices for sample clean-up and screening of biological samples

    NARCIS (Netherlands)

    Tetala, K.K.R.

    2009-01-01

    Analytical chemistry plays an important role in the separation and identification of analytes from raw samples (e.g. plant extracts, blood), but the whole analytical process is tedious, difficult to automate and time consuming. To overcome these drawbacks, the concept of μTAS (miniaturized total

  5. Malaria diagnosis from pooled blood samples: comparative analysis of real-time PCR, nested PCR and immunoassay as a platform for the molecular and serological diagnosis of malaria on a large scale

    Directory of Open Access Journals (Sweden)

    Giselle FMC Lima

    2011-09-01

    Malaria diagnosis has traditionally been made using thick blood smears, but more sensitive and faster techniques are required to process large numbers of samples in clinical and epidemiological studies and in blood donor screening. Here, we evaluated molecular and serological tools to build a screening platform for pooled samples aimed at reducing both the time and the cost of these diagnoses. Positive and negative samples were analysed in individual and pooled experiments using real-time polymerase chain reaction (PCR), nested PCR and an immunochromatographic test. For the individual tests, 46/49 samples were positive by real-time PCR, 46/49 were positive by nested PCR and 32/46 were positive by the immunochromatographic test. For the assays performed using pooled samples, 13/15 samples were positive by real-time PCR and nested PCR and 11/15 were positive by the immunochromatographic test. These molecular methods demonstrated sensitivity and specificity for both the individual and pooled samples. Due to the advantages of real-time PCR, such as fast processing and the closed system, this method should be indicated as the first choice for use in large-scale diagnosis, with nested PCR used for species differentiation. However, additional field isolates should be tested to confirm the results achieved using cultured parasites, and the serological test should only be adopted as a complementary method for malaria diagnosis.

  6. Data Validation Package - April and July 2015 Groundwater and Surface Water Sampling at the Gunnison, Colorado, Processing Site

    Energy Technology Data Exchange (ETDEWEB)

    Linard, Joshua [Dept. of Energy (DOE), Washington, DC (United States). Office of Legacy Management; Campbell, Sam [Navarro Research and Engineering, Inc., Oak Ridge, TN (United States)

    2016-02-01

    This event included annual sampling of groundwater and surface water locations at the Gunnison, Colorado, Processing Site. Sampling and analyses were conducted as specified in the Sampling and Analysis Plan for U.S. Department of Energy Office of Legacy Management Sites. Samples were collected from 28 monitoring wells, three domestic wells, and six surface locations in April at the processing site as specified in the 2010 Ground Water Compliance Action Plan for the Gunnison, Colorado, Processing Site. Domestic wells 0476 and 0477 were sampled in July because the homes were unoccupied in April, and the wells were not in use. Duplicate samples were collected from locations 0113, 0248, and 0477. One equipment blank was collected during this sampling event. Water levels were measured at all monitoring wells that were sampled. No issues were identified during the data validation process that require additional action or follow-up.

  7. Correction to the count-rate detection limit and sample/blank time-allocation methods

    International Nuclear Information System (INIS)

    Alvarez, Joseph L.

    2013-01-01

    A common form of count-rate detection limits contains a propagation of uncertainty error. This error originated in methods to minimize uncertainty in the subtraction of the blank counts from the gross sample counts by allocation of blank and sample counting times. Correct uncertainty propagation showed that the time allocation equations have no solution. This publication presents the correct form of count-rate detection limits. -- Highlights: •The paper demonstrated a proper method of propagating uncertainty of count rate differences. •The standard count-rate detection limits were in error. •Count-time allocation methods for minimum uncertainty were in error. •The paper presented the correct form of the count-rate detection limit. •The paper discussed the confusion between count-rate uncertainty and count uncertainty
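
    For concreteness, the correctly propagated uncertainty of a net count rate follows directly from Poisson statistics; this is the standard result the correction rests on, not a reproduction of the paper's final detection-limit expression:

    ```latex
    % Gross counts N_g in counting time t_g, blank counts N_b in time t_b.
    % The net count rate and its propagated standard uncertainty are
    R = \frac{N_g}{t_g} - \frac{N_b}{t_b},
    \qquad
    \sigma_R = \sqrt{\frac{N_g}{t_g^{2}} + \frac{N_b}{t_b^{2}}}
    ```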

  8. Quantification of Parvovirus B19 DNA Using COBAS AmpliPrep Automated Sample Preparation and LightCycler Real-Time PCR

    Science.gov (United States)

    Schorling, Stefan; Schalasta, Gunnar; Enders, Gisela; Zauke, Michael

    2004-01-01

    The COBAS AmpliPrep instrument (Roche Diagnostics GmbH, D-68305 Mannheim, Germany) automates the entire sample preparation process of nucleic acid isolation from serum or plasma for polymerase chain reaction analysis. We report the analytical performance of the LightCycler Parvovirus B19 Quantification Kit (Roche Diagnostics) using nucleic acids isolated with the COBAS AmpliPrep instrument. Nucleic acids were extracted using the Total Nucleic Acid Isolation Kit (Roche Diagnostics) and amplified with the LightCycler Parvovirus B19 Quantification Kit. The kit combination processes 72 samples per 8-hour shift. The lower detection limit is 234 IU/ml at a 95% hit-rate, the linear range approximately 10⁴-10¹⁰ IU/ml, and overall precision 16 to 40%. Relative sensitivity and specificity in routine samples from pregnant women are 100% and 93%, respectively. Identification of a persistent parvovirus B19-infected individual by the polymerase chain reaction among 51 anti-parvovirus B19 IgM-negative samples underlines the importance of additional nucleic acid testing in pregnancy and its superiority to serology in identifying the risk of parvovirus B19 transmission via blood or blood products. Combination of the Total Nucleic Acid Isolation Kit on the COBAS AmpliPrep instrument with the LightCycler Parvovirus B19 Quantification Kit provides a reliable and time-saving tool for sensitive and accurate detection of parvovirus B19 DNA. PMID:14736825

  9. Wavelet data processing of micro-Raman spectra of biological samples

    Science.gov (United States)

    Camerlingo, C.; Zenone, F.; Gaeta, G. M.; Riccio, R.; Lepore, M.

    2006-02-01

    A wavelet multi-component decomposition algorithm is proposed for processing data from micro-Raman spectroscopy (μ-RS) of biological tissue. The μ-RS has been recently recognized as a promising tool for the biopsy test and in vivo diagnosis of degenerative human tissue pathologies, due to the high chemical and structural information contents of this spectroscopic technique. However, measurements of biological tissues are usually hampered by typically low-level signals and by the presence of noise and background components caused by light diffusion or fluorescence processes. In order to overcome these problems, a numerical method based on the discrete wavelet transform is used for the analysis of data from μ-RS measurements performed in vitro on animal (pig and chicken) tissue samples and, in a preliminary form, on human skin and oral tissue biopsies from normal subjects. Visible light μ-RS was performed using a He-Ne laser and a monochromator with a liquid nitrogen cooled charge coupled device equipped with a grating of 1800 grooves mm⁻¹. The validity of the proposed data procedure has been tested on the well-characterized Raman spectra of reference acetylsalicylic acid samples.
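
    A sketch of the kind of discrete-wavelet split the abstract describes, using PyWavelets: the coarsest approximation stands in for the fluorescence background while the finest details are treated as noise. The wavelet, level and threshold choices are assumptions, not the paper's published settings.

    ```python
    import numpy as np
    import pywt

    def wavelet_clean(spectrum, wavelet="sym8", level=6):
        """Split a Raman spectrum into background, signal and noise components
        via a discrete wavelet decomposition, returning the background-free,
        soft-thresholded signal."""
        coeffs = pywt.wavedec(spectrum, wavelet, level=level)
        # Coarsest approximation ~ slowly varying fluorescence background
        background = pywt.waverec(
            [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], wavelet
        )[: len(spectrum)]
        # Universal threshold estimated from the finest-scale coefficients
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thr = sigma * np.sqrt(2 * np.log(len(spectrum)))
        denoised = pywt.waverec(
            [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]],
            wavelet,
        )[: len(spectrum)]
        return denoised - background

    x = np.linspace(0, 1, 2048)
    band = np.exp(-((x - 0.4) / 0.01) ** 2)              # one synthetic Raman band
    spectrum = band + 2 * x + np.random.default_rng(3).normal(0, 0.05, x.size)
    clean = wavelet_clean(spectrum)
    ```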

  10. Air exposure and sample storage time influence on hydrogen release from tungsten

    Energy Technology Data Exchange (ETDEWEB)

    Moshkunov, K.A., E-mail: moshkunov@gmail.co [National Research Nuclear University 'MEPhI', Kashirskoe sh. 31, 115409 Moscow (Russian Federation)]; Schmid, K.; Mayer, M. [Max-Planck-Institut fuer Plasmaphysik, EURATOM Association, Boltzmannstrasse 2, D-85748 Garching (Germany)]; Kurnaev, V.A.; Gasparyan, Yu.M. [National Research Nuclear University 'MEPhI', Kashirskoe sh. 31, 115409 Moscow (Russian Federation)]

    2010-09-30

    In investigations of hydrogen retention in first wall components the influence of the conditions of the implanted target storage prior to analysis and the storage time is often neglected. Therefore we have performed a dedicated set of experiments. The release of hydrogen from samples exposed to ambient air after irradiation was compared to samples kept in vacuum. For air exposed samples significant amounts of HDO and D₂O are detected during TDS. Additional experiments have shown that heavy water is formed by recombination of releasing D and H atoms with O on the W surface. This water formation can alter hydrogen retention results significantly, in particular - for low retention cases. In addition to the influence of ambient air exposure also the influence of storage time in vacuum was investigated. After implantation at 300 K the samples were stored in vacuum for up to 1 week during which the retained amount decreased significantly. The subsequently measured TDS spectra showed that D was lost from both the high and low energy peaks during storage at ambient temperature of ~300 K. An attempt to simulate this release from both peaks during room temperature storage by TMAP 7 calculations showed that this effect cannot be explained by conventional diffusion/trapping models.

  11. Air exposure and sample storage time influence on hydrogen release from tungsten

    International Nuclear Information System (INIS)

    Moshkunov, K.A.; Schmid, K.; Mayer, M.; Kurnaev, V.A.; Gasparyan, Yu.M.

    2010-01-01

    In investigations of hydrogen retention in first wall components the influence of the conditions of the implanted target storage prior to analysis and the storage time is often neglected. Therefore we have performed a dedicated set of experiments. The release of hydrogen from samples exposed to ambient air after irradiation was compared to samples kept in vacuum. For air exposed samples significant amounts of HDO and D₂O are detected during TDS. Additional experiments have shown that heavy water is formed by recombination of releasing D and H atoms with O on the W surface. This water formation can alter hydrogen retention results significantly, in particular - for low retention cases. In addition to the influence of ambient air exposure also the influence of storage time in vacuum was investigated. After implantation at 300 K the samples were stored in vacuum for up to 1 week during which the retained amount decreased significantly. The subsequently measured TDS spectra showed that D was lost from both the high and low energy peaks during storage at ambient temperature of ~300 K. An attempt to simulate this release from both peaks during room temperature storage by TMAP 7 calculations showed that this effect cannot be explained by conventional diffusion/trapping models.

  12. Air exposure and sample storage time influence on hydrogen release from tungsten

    Science.gov (United States)

    Moshkunov, K. A.; Schmid, K.; Mayer, M.; Kurnaev, V. A.; Gasparyan, Yu. M.

    2010-09-01

    In investigations of hydrogen retention in first wall components the influence of the conditions of the implanted target storage prior to analysis and the storage time is often neglected. Therefore we have performed a dedicated set of experiments. The release of hydrogen from samples exposed to ambient air after irradiation was compared to samples kept in vacuum. For air exposed samples significant amounts of HDO and D₂O are detected during TDS. Additional experiments have shown that heavy water is formed by recombination of releasing D and H atoms with O on the W surface. This water formation can alter hydrogen retention results significantly, in particular - for low retention cases. In addition to the influence of ambient air exposure also the influence of storage time in vacuum was investigated. After implantation at 300 K the samples were stored in vacuum for up to 1 week during which the retained amount decreased significantly. The subsequently measured TDS spectra showed that D was lost from both the high and low energy peaks during storage at ambient temperature of ~300 K. An attempt to simulate this release from both peaks during room temperature storage by TMAP 7 calculations showed that this effect cannot be explained by conventional diffusion/trapping models.

  13. Process Mining Methodology for Health Process Tracking Using Real-Time Indoor Location Systems

    Science.gov (United States)

    Fernandez-Llatas, Carlos; Lizondo, Aroa; Monton, Eduardo; Benedi, Jose-Miguel; Traver, Vicente

    2015-01-01

    The definition of efficient and accurate health processes in hospitals is crucial for ensuring an adequate quality of service. Knowing and improving the behavior of the surgical processes in a hospital can increase the number of patients that can be operated on using the same resources. However, these processes are usually measured in an obtrusive way, forcing nurses to record information and time data, interfering with the process itself and generating inaccurate data due to human error during the stressful workday of health staff in the operating theater. Indoor location systems can capture timing information about the process in an unobtrusive way, freeing nurses and allowing them to engage in purely welfare work. However, it is necessary to present these data in an understandable way for health professionals, who cannot deal with large amounts of historical localization log data. Process mining techniques can address this problem by offering an easily understandable view of the process. In this paper, we present a tool and a process mining-based methodology that, using indoor location systems, enables health staff not only to represent the process, but to obtain precise information about its deployment in an unobtrusive and transparent way. We have successfully tested this tool in a real surgical area with 3613 patients during February, March and April of 2015. PMID:26633395

  14. Process Mining Methodology for Health Process Tracking Using Real-Time Indoor Location Systems.

    Science.gov (United States)

    Fernandez-Llatas, Carlos; Lizondo, Aroa; Monton, Eduardo; Benedi, Jose-Miguel; Traver, Vicente

    2015-11-30

    The definition of efficient and accurate health processes in hospitals is crucial for ensuring an adequate quality of service. Knowing and improving the behavior of the surgical processes in a hospital can increase the number of patients that can be operated on using the same resources. However, these processes are usually measured in an obtrusive way, forcing nurses to record information and time data, interfering with the process itself and generating inaccurate data due to human error during the stressful workday of health staff in the operating theater. Indoor location systems can capture timing information about the process in an unobtrusive way, freeing nurses and allowing them to engage in purely welfare work. However, it is necessary to present these data in an understandable way for health professionals, who cannot deal with large amounts of historical localization log data. Process mining techniques can address this problem by offering an easily understandable view of the process. In this paper, we present a tool and a process mining-based methodology that, using indoor location systems, enables health staff not only to represent the process, but to obtain precise information about its deployment in an unobtrusive and transparent way. We have successfully tested this tool in a real surgical area with 3613 patients during February, March and April of 2015.

  15. Automating data analysis for two-dimensional gas chromatography/time-of-flight mass spectrometry non-targeted analysis of comparative samples.

    Science.gov (United States)

    Titaley, Ivan A; Ogba, O Maduka; Chibwe, Leah; Hoh, Eunha; Cheong, Paul H-Y; Simonich, Staci L Massey

    2018-03-16

    Non-targeted analysis of environmental samples, using comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC × GC/ToF-MS), poses significant data analysis challenges due to the large number of possible analytes. Non-targeted data analysis of complex mixtures is prone to human bias and is laborious, particularly for comparative environmental samples such as contaminated soil pre- and post-bioremediation. To address this research bottleneck, we developed OCTpy, a Python™ script that acts as a data reduction filter to automate GC × GC/ToF-MS data analysis from LECO ® ChromaTOF ® software and facilitates selection of analytes of interest based on peak area comparison between comparative samples. We used data from polycyclic aromatic hydrocarbon (PAH) contaminated soil, pre- and post-bioremediation, to assess the effectiveness of OCTpy in facilitating the selection of analytes that have formed or degraded following treatment. Using datasets from the soil extracts pre- and post-bioremediation, OCTpy selected, on average, 18% of the initially suggested analytes generated by the LECO ® ChromaTOF ® software Statistical Compare feature. Based on this list, 63-100% of the candidate analytes identified by a highly trained individual were also selected by OCTpy. This process was accomplished in several minutes per sample, whereas manual data analysis took several hours per sample. OCTpy automates the analysis of complex mixtures of comparative samples, reduces the potential for human error during heavy data handling and decreases data analysis time by at least tenfold. Copyright © 2018 Elsevier B.V. All rights reserved.
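
    The core of such a data reduction filter, a fold-change comparison of peak areas between comparative samples, is easy to sketch with pandas. The column names and threshold below are illustrative, and OCTpy itself additionally parses ChromaTOF exports and aligns analytes across runs.

    ```python
    import pandas as pd

    pre  = pd.DataFrame({"analyte": ["PAH-1", "PAH-2", "PAH-3"],
                         "area":    [9.0e5,   4.0e4,   2.0e5]})
    post = pd.DataFrame({"analyte": ["PAH-1", "PAH-2", "PAH-3"],
                         "area":    [1.0e5,   3.9e4,   8.0e5]})

    def flag_changed(pre, post, fold=3.0):
        """Keep only analytes whose peak area changed by more than `fold`
        between pre- and post-treatment samples (formed or degraded)."""
        merged = pre.merge(post, on="analyte", suffixes=("_pre", "_post"))
        ratio = merged["area_post"] / merged["area_pre"]
        return merged[(ratio > fold) | (ratio < 1 / fold)]

    print(flag_changed(pre, post))   # PAH-1 degraded, PAH-3 formed
    ```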

  16. Chromosomal radiosensitivity of human leucocytes in relation to sampling time

    International Nuclear Information System (INIS)

    Buul, P.P.W. van; Natarajan, A.T.

    1980-01-01

    Frequencies of chromosomal aberrations after irradiation with X-rays of peripheral blood lymphocytes in vitro were determined at different times after initiation of cultures. In each culture, the kinetics of cell multiplication was followed by using BrdU labelling and differential staining of chromosomes. The results indicate that the mixing up of first and second cell cycle cells at later sampling times cannot explain the observed variation in the frequencies of chromosomal aberrations but that donor-to-donor variation is a predominant factor influencing yields of aberrations. The condition of a donor seems to be most important because repeats on the same donor also showed marked variability. (orig.)

  17. Carotene location in processed food samples measured by cryo In-SEM Raman.

    Science.gov (United States)

    Lopez-Sanchez, Patricia; Schumm, Stephan; Pudney, Paul D A; Hazekamp, Johan

    2011-09-21

    Cryo In-SEM Raman has been used for the first time to localise carotene compounds in a food matrix. Raman spectra of lycopene and β-carotene were obtained by sampling oil droplets and plant cell structures visualised with cryo-SEM in tomato and carrot based emulsions containing 5% oil. It was possible to identify the carotenoids in both the oil droplets and the cell walls. Furthermore, our results gave some indication that the carotenoids were in the non-crystalline state. It has been suggested that a higher amount of carotenes solubilised into the oil phase of the food matrix would lead to a higher bioaccessibility; thus, understanding the effect of processing conditions on micronutrient distribution in a food matrix might help the design of plant-based food products with better nutritional quality. Combining the structural characterisation capabilities of cryo-SEM with the molecular sensitivity of Raman spectroscopy is therefore a promising approach for complex biological problems.

  18. Selection of internal control genes for quantitative real-time RT-PCR studies during tomato development process

    Directory of Open Access Journals (Sweden)

    Borges-Pérez Andrés

    2008-12-01

    Background: The elucidation of gene expression patterns leads to a better understanding of biological processes. Real-time quantitative RT-PCR has become the standard method for in-depth studies of gene expression. A biologically meaningful reporting of target mRNA quantities requires accurate and reliable normalization in order to identify real gene-specific variation. The purpose of normalization is to control several variables such as different amounts and quality of starting material, variable enzymatic efficiencies of retrotranscription from RNA to cDNA, or differences between tissues or cells in overall transcriptional activity. The validity of a housekeeping gene as endogenous control relies on the stability of its expression level across the sample panel being analysed. In the present report we describe the first systematic evaluation of potential internal controls during the tomato development process to identify which are the most reliable for transcript quantification by real-time RT-PCR. Results: In this study, we assess the expression stability of 7 traditional and 4 novel housekeeping genes in a set of 27 samples representing different tissues and organs of tomato plants at different developmental stages. First, we designed, tested and optimized amplification primers for real-time RT-PCR. Then, expression data from each candidate gene were evaluated with three complementary approaches based on different statistical procedures. Our analysis suggests that the SGN-U314153 (CAC), SGN-U321250 (TIP41), SGN-U346908 ("Expressed") and SGN-U316474 (SAND) genes provide superior transcript normalization in tomato development studies. We recommend different combinations of these exceptionally stable housekeeping genes for suited normalization of different developmental series, including the complete tomato development process. Conclusion: This work constitutes the first effort for the selection of optimal endogenous controls for quantitative real-time RT-PCR studies during the tomato development process.

  19. Precise timing correlation in telemetry recording and processing systems

    Science.gov (United States)

    Pickett, R. B.; Matthews, F. L.

    1973-01-01

    Independent PCM telemetry data signals received from missiles must be correlated to within ±100 microseconds for comparison with radar data. Tests have been conducted to determine RF antenna receiving system delays; delays associated with wideband analog tape recorders used in the recording, dubbing and reproducing processes; and uncertainties associated with computer-processed time tag data. Several methods used in the recording of timing are evaluated. Through the application of a special time tagging technique, the cumulative timing bias from all sources is determined and the bias removed from final data. Conclusions show that relative time differences in receiving, recording, playback and processing of two telemetry links can be determined with ±4 microseconds accuracy. In addition, the absolute time tag error (with respect to UTC) can be reduced to less than 15 microseconds. This investigation is believed to be the first attempt to identify the individual error contributions within the telemetry system and to describe the methods of error reduction and correction.

  20. Low-sensitivity H∞ filter design for linear delta operator systems with sampling time jitter

    Science.gov (United States)

    Guo, Xiang-Gui; Yang, Guang-Hong

    2012-04-01

    This article is concerned with the problem of designing H∞ filters for a class of linear discrete-time systems with low sensitivity to sampling time jitter via the delta operator approach. The delta-domain model is used to avoid the inherent numerical ill-conditioning resulting from the use of the standard shift-domain model at high sampling rates. Based on the projection lemma, in combination with the descriptor system approach often used to solve problems related to delay, a novel bounded real lemma with three slack variables for delta operator systems is presented. A sensitivity approach based on this novel lemma is proposed to mitigate the effects of sampling time jitter on system performance. The problem of designing a low-sensitivity filter can then be reduced to a convex optimisation problem. An important consideration in the design is the optimal trade-off between the standard H∞ criterion and the sensitivity of the transfer function with respect to sampling time jitter. Finally, a numerical example demonstrating the validity of the proposed design method is given.

  1. Influence of salt content and processing time on sensory characteristics of cooked "lacón".

    Science.gov (United States)

    Purriños, Laura; Bermúdez, Roberto; Temperán, Sara; Franco, Daniel; Carballo, Javier; Lorenzo, José M

    2011-04-01

    The influence of salt content and processing time on the sensory properties of cooked "lacón" was determined. "Lacón" is a traditional dry-cured and ripened meat product made in the north-west of Spain from the fore leg of the pig, following a process similar to that of dry-cured ham. Six batches of "lacón" were salted with different amounts of salt (LS (3 days of salting), MS (4 days of salting) and HS (5 days of salting)) and ripened for two different times (56 and 84 days of dry-ripening). Cured odour in all batches studied, red colour and rancid odour in the MS and HS batches, flavour intensity in the MS batch, and fat yellowness, rancid flavour and hardness in the HS batch differed significantly with processing time. Appearance, odour, flavour and texture were not significantly affected by the salt content (P>0.05). However, the saltiness score showed significant differences with respect to the salt levels in all studied batches (56 and 84 days of processing). Principal component analysis showed that physicochemical traits were the most important ones concerning the quality of dry-cured "lacón" and offered a good separation of the mean samples according to the dry-ripening days and salt level. © 2010 The American Meat Science Association. Published by Elsevier Ltd. All rights reserved.

  2. Generation and Analysis of Constrained Random Sampling Patterns

    DEFF Research Database (Denmark)

    Pierzchlewski, Jacek; Arildsen, Thomas

    2016-01-01

    Random sampling is a technique for signal acquisition which is gaining popularity in practical signal processing systems. Nowadays, event-driven analog-to-digital converters make random sampling feasible in practical applications. A process of random sampling is defined by a sampling pattern, which indicates the signal sampling points in time. Practical random sampling patterns are constrained by ADC characteristics and application requirements. In this paper, we introduce statistical methods which evaluate random sampling pattern generators with emphasis on practical applications. Furthermore, we propose an algorithm which generates random sampling patterns dedicated to event-driven ADCs better than existing sampling pattern generators. Finally, implementation issues of random sampling patterns are discussed.
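
    One simple constraint of this kind, a minimum spacing between sampling instants such as an event-driven ADC might impose, can be satisfied constructively rather than by rejection. A sketch follows; the constraint and the gap distribution are illustrative, not the paper's evaluation framework.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def constrained_pattern(n_samples, t_total, t_min):
        """Additive random sampling with a minimum-spacing constraint: each
        inter-sample gap is t_min plus a random excess, rescaled so that the
        whole pattern spans exactly t_total."""
        excess = rng.exponential(1.0, size=n_samples)
        gaps = t_min + excess / excess.sum() * (t_total - n_samples * t_min)
        return np.cumsum(gaps) - gaps[0]        # first sample at t = 0

    pattern = constrained_pattern(n_samples=50, t_total=1.0, t_min=0.005)
    print(pattern[:5], "min gap:", np.diff(pattern).min())
    ```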

  3. Processing a Complex Architectural Sampling with Meshlab: the Case of Piazza della Signoria

    Science.gov (United States)

    Callieri, M.; Cignoni, P.; Dellepiane, M.; Ranzuglia, G.; Scopigno, R.

    2011-09-01

    The paper presents a recent 3D scanning project performed with long-range scanning technology, showing how a complex sampled dataset can be processed with the features available in MeshLab, an open source tool. MeshLab is an open source mesh processing system: a portable and extensible system aimed at helping the processing of the typical not-so-small unstructured models that arise in 3D scanning, providing a set of tools for editing, cleaning, processing, inspecting, rendering and converting meshes. The MeshLab system started in late 2005 as part of a university course and has evolved considerably since then, thanks to the effort of the Visual Computing Lab and the support of several funded EC projects. MeshLab has so far gained excellent visibility and distribution, with several thousand downloads every month, and continues to evolve. The aim of this scanning campaign was to sample the façades of the buildings located in Piazza della Signoria (Florence, Italy). This digital 3D model was required, in the framework of a Regional Project, as a basic background model for presenting a complex set of images using a virtual navigation metaphor (following the PhotoSynth approach). Processing of complex datasets, such as the ones produced by long-range scanners, often requires specialized, difficult-to-use and costly software packages. We show in the paper how it is possible to process this kind of data inside an open source tool, thanks to the many new features recently introduced in MeshLab for the management of large sets of sampled points.

  4. A mixed signal ECG processing platform with an adaptive sampling ADC for portable monitoring applications.

    Science.gov (United States)

    Kim, Hyejung; Van Hoof, Chris; Yazicioglu, Refet Firat

    2011-01-01

    This paper describes a mixed-signal ECG processing platform with a 12-bit ADC architecture that can adapt its sampling rate according to the input signal's rate of change. This enables the sampling of ECG signals at a significantly reduced data rate without loss of information. The presented adaptive sampling scheme reduces the ADC power consumption, enables the processing of ECG signals with lower power consumption, and reduces the power consumption of the radio while streaming the ECG signals. The test results show that running a CWT-based R peak detection algorithm on the adaptively sampled ECG signals consumes only 45.6 μW and leads to 36% less overall system power consumption.
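
    The adaptive sampling idea can be emulated in software: keep every sample where the local rate of change is high (e.g. around QRS complexes) and decimate where the signal is slowly varying. The step sizes, threshold and toy ECG below are assumptions, not the chip's actual parameters.

    ```python
    import numpy as np

    def adaptive_sample(signal, fs, slope_thresh, high_step=1, low_step=8):
        """Keep sample i when the distance since the last kept sample reaches
        the step size dictated by the local slope (fast or slow region)."""
        keep = []
        for i in range(1, len(signal)):
            slope = abs(signal[i] - signal[i - 1]) * fs
            step = high_step if slope > slope_thresh else low_step
            if not keep or i - keep[-1] >= step:
                keep.append(i)
        return np.asarray(keep)

    fs = 500                                   # Hz, assumed input sampling rate
    t = np.arange(0, 2, 1 / fs)
    ecg = np.where((t % 1.0) < 0.02, 1.0, 0.05 * np.sin(2 * np.pi * t))  # toy ECG
    idx = adaptive_sample(ecg, fs, slope_thresh=5.0)
    print(f"kept {len(idx)}/{len(ecg)} samples")
    ```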

  5. Standardised Resting Time Prior to Blood Sampling and Diurnal Variation Associated with Risk of Patient Misclassification

    DEFF Research Database (Denmark)

    Bøgh Andersen, Ida; Brasen, Claus L.; Christensen, Henry

    2015-01-01

    BACKGROUND: According to current recommendations, blood samples should be taken in the morning after 15 minutes' resting time. Some components exhibit diurnal variation and, in response to pressures to expand opening hours and reduce waiting time, the aims of this study were to investigate the impact of resting time prior to blood sampling and diurnal variation on biochemical components, including albumin, thyrotropin (TSH), total calcium and sodium in plasma. METHODS: All patients referred to an outpatient clinic for blood sampling were included in the period Nov 2011 until June 2014 (opening …). … (p = …9×10⁻⁷) and sodium (p = 8.7×10⁻¹⁶). Only TSH and albumin were clinically significantly influenced by diurnal variation. Resting time had no clinically significant effect. CONCLUSIONS: We found no need for resting 15 minutes prior to blood sampling. However, diurnal variation was found to have a significant …

  6. Time delay estimation in a reverberant environment by low rate sampling of impulsive acoustic sources

    KAUST Repository

    Omer, Muhammad

    2012-07-01

    This paper presents a new method of time delay estimation (TDE) using low sample rates of an impulsive acoustic source in a room environment. The proposed method finds the time delay from the room impulse response (RIR) which makes it robust against room reverberations. The RIR is considered a sparse phenomenon and a recently proposed sparse signal reconstruction technique called orthogonal clustering (OC) is utilized for its estimation from the low rate sampled received signal. The arrival time of the direct path signal at a pair of microphones is identified from the estimated RIR and their difference yields the desired time delay. Low sampling rates reduce the hardware and computational complexity and decrease the communication between the microphones and the centralized location. The performance of the proposed technique is demonstrated by numerical simulations and experimental results. © 2012 IEEE.
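
    Once the two room impulse responses have been estimated (the step the paper performs with sparse reconstruction), extracting the delay is straightforward. In the sketch below the direct path is simply taken as the strongest tap, an assumption that holds only when the direct path dominates the early reflections:

    ```python
    import numpy as np

    def time_delay(rir1, rir2, fs):
        """Time delay as the difference between the direct-path arrival
        times identified in two estimated room impulse responses."""
        t1 = np.argmax(np.abs(rir1))
        t2 = np.argmax(np.abs(rir2))
        return (t2 - t1) / fs

    fs = 8000                                   # Hz, assumed sampling rate
    rir1 = np.zeros(512); rir1[40] = 1.0; rir1[200] = 0.3   # direct path + echo
    rir2 = np.zeros(512); rir2[55] = 1.0; rir2[215] = 0.3
    print(time_delay(rir1, rir2, fs))           # 15 samples -> 1.875 ms
    ```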

  7. Testing the causality of Hawkes processes with time reversal

    Science.gov (United States)

    Cordi, Marcus; Challet, Damien; Muni Toke, Ioane

    2018-03-01

    We show that univariate and symmetric multivariate Hawkes processes are only weakly causal: the true log-likelihoods of real and reversed event time vectors are almost equal, thus parameter estimation via maximum likelihood only weakly depends on the direction of the arrow of time. In ideal (synthetic) conditions, tests of goodness of parametric fit unambiguously reject backward event times, which implies that inferring kernels from time-symmetric quantities, such as the autocovariance of the event rate, only rarely produce statistically significant fits. Finally, we find that fitting financial data with many-parameter kernels may yield significant fits for both arrows of time for the same event time vector, sometimes favouring the backward time direction. This goes to show that a significant fit of Hawkes processes to real data with flexible kernels does not imply a definite arrow of time unless one tests it.
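
    The comparison at the heart of the paper, evaluating the Hawkes log-likelihood for an event time vector and for its time-reversed copy, can be sketched for the exponential kernel, where the likelihood admits an O(n) recursion. The event times below are a uniform stand-in rather than a true Hawkes sample, and the parameters are arbitrary:

    ```python
    import numpy as np

    def hawkes_loglik(times, mu, alpha, beta):
        """Exact log-likelihood of a univariate Hawkes process with kernel
        alpha*exp(-beta*t), using the standard recursion for the intensity."""
        T = times[-1]
        A, ll, prev = 0.0, 0.0, None
        for t in times:
            if prev is not None:
                A = np.exp(-beta * (t - prev)) * (A + alpha)
            ll += np.log(mu + A)                # intensity at each event
            prev = t
        # subtract the compensator, the integrated intensity over [0, T]
        ll -= mu * T + (alpha / beta) * np.sum(1 - np.exp(-beta * (T - times)))
        return ll

    rng = np.random.default_rng(5)
    events = np.sort(rng.uniform(0, 100, 500))       # stand-in event times
    reversed_events = np.sort(events[-1] - events)   # time-reversed vector
    print(hawkes_loglik(events, 4.0, 0.5, 1.0),
          hawkes_loglik(reversed_events, 4.0, 0.5, 1.0))
    ```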

  8. Real-time monitoring of clinical processes using complex event processing and transition systems.

    Science.gov (United States)

    Meinecke, Sebastian

    2014-01-01

    Dependencies between tasks in clinical processes are often complex and error-prone. Our aim is to describe a new approach for the automatic derivation of clinical events, identified via the behaviour of IT systems, using Complex Event Processing. Furthermore, we map these events onto transition systems to monitor crucial clinical processes in real time, preventing and detecting erroneous situations.

  9. Environmental sampling accounting at the Savannah River Plant

    International Nuclear Information System (INIS)

    Zeigler, C.C.; Wood, M.B.

    1978-06-01

    At the Savannah River Plant Environmental Monitoring Laboratories, a computer-based systematic accounting method was developed to ensure that all scheduled samples are collected, processed through the laboratory, and counted without delay. The system employs an IBM 360/195 computer with a magnetic tape master file, an on-line disk file, and cathode ray tube (CRT) terminals. Scheduling and accounting are accomplished by using computer-generated schedules, collection labels, and output/input cards. For each scheduled sample and analysis, a printed card is issued for collection, laboratory analysis, and counting. The cards also contain information needed by personnel performing the jobs, such as sample location, aliquot to be processed, or procedure number. Manual entries are made on the cards when each step in the process is completed. Additional pertinent data are also manually entered on the cards; e.g., entries are made explaining why a sample is not collected, the sample aliquot in the event a nonstandard aliquot is processed, field measurement results, and analytical results. These manually entered data are keypunched and read into the computer files. The computer files are audited daily, and summaries of samples not processed in pre-established normal time intervals are issued. The progress of sample analyses can also be readily determined at any time using the CRT terminal. Historic data are also maintained on magnetic tape and workload summaries are issued showing the number of samples and number of determinations per month

  10. Ultrasonic monitoring of fish thawing process: optimal time of thawing and effect of freezing/thawing.

    Science.gov (United States)

    El Kadi, Youssef Ait; Moudden, Ali; Faiz, Bouazza; Maze, Gerard; Decultot, Dominique

    2013-01-01

    Fish quality is traditionally controlled by chemical and microbiological analysis. Non-destructive control is of enormous professional interest thanks to the technical contribution and precision of the analysis to which it leads. This paper presents the results obtained from a characterisation of the fish thawing process by an ultrasonic technique, monitoring the thermal processing from the frozen to the defrosted state. The study was carried out on fish of the red drum and salmon types, cut into fillets of 15 mm thickness. After being frozen at -20°C, the sample is enclosed in a plexiglas vessel with parallel walls at an ambient temperature of 30°C and excited at perpendicular incidence at 0.5 MHz by an ultrasonic pulser-receiver Sofranel 5052PR. The measurement technique consists of studying the signals reflected by the fish during its thawing; specific signal processing techniques are implemented to deduce information characterizing the state of the fish and its thawing process, by examining the evolution of the positions of the echoes reflected by the sample and the viscoelastic parameters of the fish during thawing. The results show a relationship between the thermal state of the fish and its acoustic properties, which allowed the optimal time of the first thawing to be deduced in order to restrict the growth of microbial flora. For salmon, the results show a decrease of 36% in the time of the second thawing and an increase of 10.88% in the phase velocity, with a decrease of 65.5% in the peak-to-peak voltage of the reflected signal, and thus a decrease of the acoustic impedance. This study shows an optimal time and an evolution rate of thawing specific to each type of fish, and a correlation between the acoustic behavior of the fish and its thermal state, which confirms that this ultrasonic monitoring technique can substitute for destructive chemical analysis in monitoring the thawing process and determining whether a fish has suffered accidental thawing.

  11. The relationship between time perspective and self-regulatory processes, abilities and outcomes: a protocol for a meta-analytical review.

    Science.gov (United States)

    Baird, Harriet M; Webb, Thomas L; Martin, Jilly; Sirois, Fuschia M

    2017-07-05

    Both theoretical and empirical evidence suggests that time perspective is likely to influence self-regulatory processes and outcomes. Despite the theoretical and practical significance of such relations, the relationship between time perspective and self-regulatory processes and outcomes across different measures, samples and life domains, including health, has yet to be explored. The proposed review will develop a taxonomy for classifying measures according to the self-regulatory process, ability or outcome that they are likely to reflect. Electronic scientific databases will be searched, along with relevant conference abstract booklets and citation lists. Additionally, a call for unpublished data will be submitted to relevant bodies. To be eligible for inclusion, studies must include a measure of time perspective and a measure of at least one self-regulatory process, ability and/or outcome. Eligibility will not be restricted by publication date, language, type of sample or setting. The bivariate correlations will be extracted (or calculated) and submitted to a random-effects meta-analysis. The sample-weighted average effect size, heterogeneity, risk of bias and publication bias will be calculated, and the effects of categorical and continuous moderator variables on the effect sizes will be determined. The proposed meta-analysis will synthesise previously conducted research; thus, ethical approval is not required. The findings will be submitted for publication in an international peer-reviewed journal and reported as part of the first author's PhD thesis. The findings will also be disseminated to the research community and, where appropriate, to other interested parties through presentations at relevant academic and non-academic conferences. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  12. A general theory on frequency and time-frequency analysis of irregularly sampled time series based on projection methods - Part 1: Frequency analysis

    Science.gov (United States)

    Lenoir, Guillaume; Crucifix, Michel

    2018-03-01

    We develop a general framework for the frequency analysis of irregularly sampled time series. It is based on the Lomb-Scargle periodogram, but extended to algebraic operators accounting for the presence of a polynomial trend in the model for the data, in addition to a periodic component and a background noise. Special care is devoted to the correlation between the trend and the periodic component. This new periodogram is then cast into the Welch overlapping segment averaging (WOSA) method in order to reduce its variance. We also design a test of significance for the WOSA periodogram against the background noise. The model for the background noise is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, more general than the classical Gaussian white or red noise processes. CARMA parameters are estimated following a Bayesian framework. We provide algorithms that compute the confidence levels for the WOSA periodogram and fully take into account the uncertainty in the CARMA noise parameters. Alternatively, a theory using point estimates of CARMA parameters provides analytical confidence levels for the WOSA periodogram, which are more accurate than Markov chain Monte Carlo (MCMC) confidence levels and, below some threshold for the number of data points, less costly in computing time. We then estimate the amplitude of the periodic component with least-squares methods, and derive an approximate proportionality between the squared amplitude and the periodogram. This proportionality leads to a new extension for the periodogram: the weighted WOSA periodogram, which we recommend for most frequency analyses with irregularly sampled data. The estimated signal amplitude also permits filtering in a frequency band. Our results generalise and unify methods developed in the fields of geosciences, engineering, astronomy and astrophysics. They also constitute the starting point for an extension to the continuous wavelet transform developed in a companion paper.
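
    As a point of contrast with the framework above, the classical shortcut, detrending first and then computing a plain Lomb-Scargle periodogram, takes a few lines with SciPy; the paper's contribution is precisely to treat the trend, the periodic component and the CARMA noise jointly instead:

    ```python
    import numpy as np
    from scipy.signal import lombscargle

    rng = np.random.default_rng(6)
    t = np.sort(rng.uniform(0, 100, 400))          # irregular sampling instants
    y = 0.02 * t + np.sin(2 * np.pi * 0.2 * t) + rng.normal(0, 0.3, t.size)

    # Classical two-step shortcut: remove a linear trend, then compute the
    # periodogram of the residuals on a frequency grid.
    coef = np.polynomial.polynomial.polyfit(t, y, 1)
    resid = y - np.polynomial.polynomial.polyval(t, coef)

    freqs = np.linspace(0.01, 1.0, 2000)           # cycles per time unit
    pgram = lombscargle(t, resid, 2 * np.pi * freqs, normalize=True)
    print(freqs[np.argmax(pgram)])                 # ~0.2, the injected frequency
    ```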

  13. Time and activity sequence prediction of business process instances

    DEFF Research Database (Denmark)

    Polato, Mirko; Sperduti, Alessandro; Burattin, Andrea

    2018-01-01

    The ability to know in advance the trend of running process instances with respect to different features, such as the expected completion time, would allow business managers to counteract undesired situations in a timely manner, in order to prevent losses. Therefore, the ability to accurately predict future features of running business process instances would be a very helpful aid when managing processes, especially under service level agreement constraints. However, making such accurate forecasts is not easy: many factors may influence the predicted features. Many approaches have been proposed...

  14. Injury and time studies of working processes in fishing

    DEFF Research Database (Denmark)

    Jensen, Olaf Chresten

    2006-01-01

    Epidemiological studies of occupational injury document the incidence rates of the main structures such as type of workplace and the work departments. The work processes within the departments represent an internal structure where the injury rates have not been given much attention before. The purpose of the present study was to relate the length of the working time to the number of injuries for the specific working processes in fishing. Time measurements were performed during participation in fishing trips with four different kinds of vessels. Risk index numbers for the specific working processes were calculated by dividing the number of injuries within a 5-year period by the total sum of minutes used for each working process as measured during one fishing trip for each type of fishing. The highest risk index numbers were found for embarking and disembarking the vessel, which only takes a minimum of time...

  15. Real-time monitoring and chemical profiling of a cultivation process

    DEFF Research Database (Denmark)

    Mortensen, Peter P.; Bro, Rasmus

    2006-01-01

    …they are known to reflect important properties of the fermentation process. Focus is also on important sampling issues, mainly structurally sub-optimal primary sampling methods affecting the representativity obtainable relative to the lot characteristics. Several different calibration approaches are investigated. An enzyme marker profile as well as a tryptophan (protein marker) profile is identified. (c) 2006 Elsevier B.V. All rights reserved.

  16. Automated processing of forensic casework samples using robotic workstations equipped with nondisposable tips: contamination prevention.

    Science.gov (United States)

    Frégeau, Chantal J; Lett, C Marc; Elliott, Jim; Yensen, Craig; Fourney, Ron M

    2008-05-01

    An automated process has been developed for the analysis of forensic casework samples using TECAN Genesis RSP 150/8 or Freedom EVO liquid handling workstations equipped exclusively with nondisposable tips. Robot tip cleaning routines have been incorporated strategically within the DNA extraction process as well as at the end of each session. Alternative options were examined for cleaning the tips and different strategies were employed to verify cross-contamination. A 2% sodium hypochlorite wash (1/5th dilution of the 10.8% commercial bleach stock) proved to be the best overall approach for preventing cross-contamination of samples processed using our automated protocol. The bleach wash steps do not adversely impact the short tandem repeat (STR) profiles developed from DNA extracted robotically and allow for major cost savings through the implementation of fixed tips. We have demonstrated that robotic workstations equipped with fixed pipette tips can be used with confidence with properly designed tip washing routines to process casework samples using an adapted magnetic bead extraction protocol.

  17. Data Validation Package - November 2015 Groundwater and Surface Water Sampling at the Old and New Rifle, Colorado, Processing Sites, February 2016

    Energy Technology Data Exchange (ETDEWEB)

    Bush, Richard [USDOE Office of Legacy Management, Washington, DC (United States); Lemke, Peter [Navarro Research and Engineering, Inc., Oak Ridge, TN (United States)

    2016-02-01

    Water samples were collected from 36 locations at the New Rifle and Old Rifle, Colorado, Processing Sites. Duplicate samples were collected from New Rifle locations 0659 and 0855, and Old Rifle location 0304. One equipment blank was collected after decontamination of non-dedicated equipment used to collect one surface water sample. Sampling and analyses were conducted as specified in the Sampling and Analysis Plan for U.S. Department of Energy Office of Legacy Management Sites (LMS/PRO/S04351, continually updated). New Rifle Site: Samples were collected at the New Rifle site from 16 monitoring wells and 7 surface locations in compliance with the December 2008 Groundwater Compliance Action Plan [GCAP] for the New Rifle, Colorado, Processing Site (LMS/RFN/S01920), with one exception: New Rifle location 0635 could not be sampled because it was inaccessible; a fence installed by the Colorado Department of Transportation prevents access to this location. DOE is currently negotiating access with the Colorado Department of Transportation. Analytes measured at the New Rifle site included contaminants of concern (COCs) (arsenic, molybdenum, nitrate + nitrite as nitrogen, selenium, uranium, and vanadium), ammonia as nitrogen, major cations, and major anions. Field measurements of total alkalinity, oxidation-reduction potential, pH, specific conductance, turbidity, and temperature were made at each location, and the water level was measured at each sampled well. A proposed alternate concentration limit (ACL) for vanadium of 50 milligrams per liter (mg/L), specific to the point of compliance (POC) wells (RFN-0217, -0659, -0664, and -0669), is included in the New Rifle GCAP. Vanadium concentrations in the POC wells were below the proposed ACL, as shown in the time-concentration graphs in the Data Presentation section (Attachment 2). Time-concentration graphs from all other locations sampled are also included in Attachment 2. Sampling location RFN-0195 was misidentified for the June/August 2014

  18. Evaluation of SRAT Sampling Data in Support of a Six Sigma Yellow Belt Process Improvement Project

    International Nuclear Information System (INIS)

    Edwards, Thomas B.

    2005-01-01

    As part of the Six Sigma continuous improvement initiatives at the Defense Waste Processing Facility (DWPF), a Yellow Belt team was formed to evaluate the frequency and types of samples required for the Sludge Receipt and Adjustment Tank (SRAT) receipt in the DWPF. The team asked, via a technical task request, that the Statistical Consulting Section (SCS), in concert with the Immobilization Technology Section (ITS) (both groups within the Savannah River National Laboratory (SRNL)), conduct a statistical review of recent SRAT receipt results to determine if there is enough consistency in these measurements to allow for less frequent sampling. As part of this review process, key decisions made by DWPF Process Engineering that are based upon the SRAT sample measurements are outlined in this report. For a reduction in SRAT sampling to be viable, these decisions must not be overly sensitive to the additional variation that will be introduced as a result of such a reduction. Measurements from samples of SRAT receipt batches 314 through 323 were reviewed as part of this investigation into the frequency of SRAT sampling. The associated acid calculations for these batches were also studied as part of this effort. The results from this investigation showed no indication of a statistically significant relationship between the tank solids and the acid additions for these batches. One would expect that as the tank solids increase there would be a corresponding increase in acid requirements. There was, however, an indication that the predicted reduction/oxidation (REDOX) ratio (the ratio of Fe²⁺ to the total Fe in the glass product) that was targeted by the acid calculations based on the SRAT receipt samples for these batches was on average 0.0253 larger than the predicted REDOX based upon Slurry Mix Evaporator (SME) measurements. This is a statistically significant difference (at the 5% significance level), and the study also suggested that the difference was due to

  19. Large sample hydrology in NZ: Spatial organisation in process diagnostics

    Science.gov (United States)

    McMillan, H. K.; Woods, R. A.; Clark, M. P.

    2013-12-01

    A key question in hydrology is how to predict the dominant runoff generation processes in any given catchment. This knowledge is vital for a range of applications in forecasting hydrological response and related processes such as nutrient and sediment transport. A step towards this goal is to map dominant processes in locations where data is available. In this presentation, we use data from 900 flow gauging stations and 680 rain gauges in New Zealand, to assess hydrological processes. These catchments range in character from rolling pasture, to alluvial plains, to temperate rainforest, to volcanic areas. By taking advantage of so many flow regimes, we harness the benefits of large-sample and comparative hydrology to study patterns and spatial organisation in runoff processes, and their relationship to physical catchment characteristics. The approach we use to assess hydrological processes is based on the concept of diagnostic signatures. Diagnostic signatures in hydrology are targeted analyses of measured data which allow us to investigate specific aspects of catchment response. We apply signatures which target the water balance, the flood response and the recession behaviour. We explore the organisation, similarity and diversity in hydrological processes across the New Zealand landscape, and how these patterns change with scale. We discuss our findings in the context of the strong hydro-climatic gradients in New Zealand, and consider the implications for hydrological model building on a national scale.

  20. Process mining techniques: an application to time management

    Science.gov (United States)

    Khowaja, Ali Raza

    2018-04-01

    In any working environment, people must ensure that all of their work is completed within a given time and to the required quality. To achieve the real promise of process mining, one needs to understand all of these processes in detail. Personal information and communication have always been prominent topics on the internet, but in everyday life information and communication tools now record daily schedules, location data and environmental context; more generally, social media applications make such data available for analysis through event logs, supporting process analysis that combines environmental and location information. Process mining can exploit these real-life processes with the help of event logs already available in such datasets, whether user-censored or user-labeled. These techniques can be used to redesign a user's flow and to understand the underlying processes in more detail. To increase the quality of the processes we go through in our daily lives, one should look closely at each process and, after analyzing it, make changes to get better results. Concretely, we applied process mining techniques to a dataset of seven subjects collected in Korea. Above all, the following paper comments on the efficiency of the processes in the event logs with respect to time management's sphere of influence.
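
    As a minimal illustration of this kind of event-log analysis (the schema and values below are hypothetical, not taken from the Korean dataset), per-activity time budgets can be derived from the gaps between consecutive timestamped events of a case:

```python
import pandas as pd

# Hypothetical event log: one row per event (case id, activity, timestamp).
log = pd.DataFrame({
    "case": [1, 1, 1, 2, 2],
    "activity": ["sleep", "commute", "work", "sleep", "work"],
    "timestamp": pd.to_datetime([
        "2018-04-01 23:00", "2018-04-02 08:00", "2018-04-02 09:00",
        "2018-04-02 23:30", "2018-04-03 09:00"]),
})
log = log.sort_values(["case", "timestamp"])
# Duration of each activity = gap until the next event in the same case.
log["duration"] = log.groupby("case")["timestamp"].diff(-1).abs()
print(log.groupby("activity")["duration"].mean())  # average time per activity
```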

  1. Real-time synchronization of batch trajectories for on-line multivariate statistical process control using Dynamic Time Warping

    OpenAIRE

    González Martínez, Jose María; Ferrer Riquelme, Alberto José; Westerhuis, Johan A.

    2011-01-01

    This paper addresses the real-time monitoring of batch processes with multiple different local time trajectories of variables measured during the process run. For Unfold Principal Component Analysis (U-PCA)—or Unfold Partial Least Squares (U-PLS)-based on-line monitoring of batch processes, batch runs need to be synchronized, not only to have the same time length, but also such that key events happen at the same time. An adaptation from Kassidas et al.'s approach [1] will be introduced to ach...
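
    A minimal sketch of the dynamic-programming core that DTW-based synchronization builds on (the quadratic textbook algorithm, not Kassidas et al.'s adaptation referenced above):

```python
import numpy as np

def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance between two
    1-D trajectories, allowing key events to align despite different batch
    durations."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

ref   = np.sin(np.linspace(0, 3, 100))         # reference batch trajectory
batch = np.sin(np.linspace(0, 3, 130)) + 0.01  # same events, longer duration
print(dtw(ref, batch))
```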

  2. Sampling times influence the estimate of parameters in the Weibull dissolution model

    Czech Academy of Sciences Publication Activity Database

    Čupera, J.; Lánský, Petr; Šklubalová, Z.

    2015-01-01

    Vol. 78, Oct 12 (2015), pp. 171–176. ISSN 0928-0987. Institutional support: RVO:67985823. Keywords: dissolution * Fisher information * rate constant * optimal sampling times. Subject RIV: BA - General Mathematics. Impact factor: 3.773, year: 2015
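
    The record's point is that the placement of the sampling times governs how well the Weibull parameters can be estimated. As a rough, self-contained illustration (the parameterization and all numbers below are assumptions, not taken from the paper), one can fit a common form of the Weibull dissolution profile and inspect the parameter standard errors, which change with the choice of sampling times:

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_release(t, fmax, tau, beta):
    # One common parameterization of the Weibull dissolution profile.
    return fmax * (1.0 - np.exp(-(t / tau) ** beta))

t = np.array([5, 10, 15, 20, 30, 45, 60.0])   # sampling times (min), illustrative
f = np.array([12, 28, 41, 52, 68, 82, 90.0])  # percent dissolved, illustrative
popt, pcov = curve_fit(weibull_release, t, f, p0=(100, 20, 1.0))
print(popt)                   # fitted (fmax, tau, beta)
print(np.sqrt(np.diag(pcov))) # standard errors, sensitive to the sampling design
```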

  3. Sequential specification of time-aware stream processing applications

    NARCIS (Netherlands)

    Geuns, S.J.; Hausmans, J.P.H.M.; Bekooij, Marco Jan Gerrit

    Automatic parallelization of Nested Loop Programs (NLPs) is an attractive method to create embedded real-time stream processing applications for multi-core systems. However, the description and parallelization of applications with time-dependent functional behavior have not been considered in NLPs.

  4. Data Validation Package June 2016 Groundwater and Surface Water Sampling at the Old and New Rifle, Colorado, Processing Sites September 2016

    Energy Technology Data Exchange (ETDEWEB)

    Bush, Richard [USDOE Office of Legacy Management (LM), Washington, DC (United States); Lemke, Peter [Navarro Research and Engineering, Inc., Oak Ridge, TN (United States)

    2016-10-17

    Sampling Period: June 14–17 and July 7, 2016. Water samples were collected from 36 locations at the New Rifle and Old Rifle, Colorado, Disposal/Processing Sites. Planned monitoring locations are shown in Attachment 1, Sampling and Analysis Work Order. Duplicate samples were collected from New Rifle locations 0216 and 0855, and Old Rifle location 0655. One equipment blank was collected after decontamination of non-dedicated equipment used to collect one surface water sample. See Attachment 2, Trip Report, for additional details. Sampling and analyses were conducted as specified in the Sampling and Analysis Plan for U.S. Department of Energy Office of Legacy Management Sites (LMS/PRO/S04351, continually updated, http://energy.gov/lm/downloads/sampling-and-analysis-plan-us-department-energy-office-legacy-management-sites). New Rifle Site: Samples were collected at the New Rifle site from 16 monitoring wells and 7 surface locations in compliance with the December 2008 Groundwater Compliance Action Plan [GCAP] for the New Rifle, Colorado, Processing Site (LMS/RFN/S01920). Monitoring well 0216 could not be sampled in June because it was surrounded by standing water due to the high river stage from spring runoff; it was later sampled in July. Monitoring well 0635 and surface location 0322 could not be sampled because access through the elk fence along Interstate 70 had not been completed at this time. Old Rifle Site: Samples were collected at the Old Rifle site from eight monitoring wells and five surface locations in compliance with the December 2001 Ground Water Compliance Action Plan for the Old Rifle, Colorado, UMTRA Project Site (GJ0-2000-177-TAR).

  5. A real-time data acquisition and processing system for the analytical laboratory automation of a HTR spent fuel reprocessing facility

    International Nuclear Information System (INIS)

    Watzlawik, K.H.

    1979-12-01

    A real-time data acquisition and processing system for the analytical laboratory of an experimental HTR spent fuel reprocessing facility is presented. The on-line open-loop system combines in-line and off-line analytical measurement procedures including data acquisition and evaluation as well as analytical laboratory organisation under the control of a computer-supported laboratory automation system. In-line measurements are performed for density, volume and temperature in process tanks and registration of samples for off-line measurements. Off-line computer-coupled experiments are potentiometric titration, gas chromatography and X-ray fluorescence analysis. Organisational sections like sample registration, magazining, distribution and identification, multiple data assignment and especially calibrations of analytical devices are performed by the data processing system. (orig.)

  6. Event Processing and Variable Part of Sample Period Determining in Combined Systems Using GA

    Science.gov (United States)

    Strémy, Maximilián; Závacký, Pavol; Jedlička, Martin

    2011-01-01

    This article deals with combined dynamic systems and the use of modern techniques for handling such systems, focusing particularly on sampling period design, cyclic processing tasks and related processing algorithms in combined event management systems using genetic algorithms.

  7. Sampling phased array - a new technique for ultrasonic signal processing and imaging

    OpenAIRE

    Verkooijen, J.; Boulavinov, A.

    2008-01-01

    Over the past 10 years, the improvement in the field of microelectronics and computer engineering has led to significant advances in ultrasonic signal processing and image construction techniques that are currently being applied to non-destructive material evaluation. A new phased array technique, called 'Sampling Phased Array', has been developed in the Fraunhofer Institute for Non-Destructive Testing([1]). It realises a unique approach of measurement and processing of ultrasonic signals. Th...

  8. Sample pooling for real-time PCR detection and virulence determination of the footrot pathogen Dichelobacter nodosus.

    Science.gov (United States)

    Frosth, Sara; König, Ulrika; Nyman, Ann-Kristin; Aspán, Anna

    2017-09-01

    Dichelobacter nodosus is the principal cause of ovine footrot and strain virulence is an important factor in disease severity. Therefore, detection and virulence determination of D. nodosus is important for proper diagnosis of the disease. Today this is possible by real-time PCR analysis. Analysis of large numbers of samples is costly and laborious; therefore, pooling of individual samples is common in surveillance programs. However, pooling can reduce the sensitivity of the method. The aim of this study was to develop a pooling method for real-time PCR analysis that would allow sensitive detection and simultaneous virulence determination of D. nodosus. A total of 225 sheep from 17 flocks were sampled using ESwabs within the Swedish Footrot Control Program in 2014. Samples were first analysed individually and then in pools of five by real-time PCR assays targeting the 16S rRNA and aprV2/B2 genes of D. nodosus. Each pool consisted of four negative samples and one positive D. nodosus sample with varying amounts of the bacterium. In the individual analysis, 61 (27.1%) samples were positive in the 16S rRNA and the aprV2/B2 PCR assays and 164 (72.9%) samples were negative. All samples positive in the aprV2/B2 PCR assay were of the aprB2 variant. The pooled analysis showed that all 41 pools were also positive for D. nodosus 16S rRNA and the aprB2 variant. The diagnostic sensitivity for pooled and individual samples was therefore similar. Our method includes concentration of the bacteria before DNA extraction. This may account for the maintenance of diagnostic sensitivity. Diagnostic sensitivity in the real-time PCR assays of the pooled samples was comparable to the sensitivity obtained for individually analysed samples. Even sub-clinical infections could be detected in the pooled PCR samples, which is important for control of the disease. This method may therefore be implemented in footrot control programs, where it can replace analysis of individual samples.

  9. Chemical process to separate iron oxides particles in pottery sample for EPR dating

    Science.gov (United States)

    Watanabe, S.; Farias, T. M. B.; Gennari, R. F.; Ferraz, G. M.; Kunzli, R.; Chubaci, J. F. D.

    2008-12-01

    Ancient potteries usually are made of the local clay material, which contains relatively high concentrations of iron. The powdered samples are usually quite black due to magnetite and, although they can be used for thermoluminescence (TL) dating, it is easiest to obtain a good TL reading when a clearer natural or pre-treated sample is used. For electron paramagnetic resonance (EPR) measurements, the huge signal due to iron spin-spin interaction produces an intense interference overlapping any other signal in this range. The sample age is obtained by dividing the radiation dose, determined from the concentration of paramagnetic species generated by irradiation, by the natural dose rate; as a consequence, EPR dating cannot be used directly, since the iron signal does not depend on radiation dose. In some cases, the density separation method using a hydrated solution of sodium polytungstate [Na₆(H₂W₁₂O₄₀)·H₂O] becomes useful. However, sodium polytungstate is very expensive in Brazil; hence an alternative method for eliminating this interference is proposed. A chemical process to eliminate about 90% of the magnetite was developed. A sample of powdered ancient pottery was treated in a mixture (3:1:1) of HCl, HNO₃ and H₂O₂ for 4 h. After that, it was washed several times in distilled water to remove all acid matrixes. The original black sample becomes somewhat clearer. The resulting material was analyzed by plasma mass spectrometry (ICP-MS), with the result that the iron content is reduced by a factor of about 9. In EPR measurements a non-treated natural ceramic sample shows a broad spin-spin interaction signal; the chemically treated sample presents a narrow signal in the g = 2.00 region, possibly due to a radical of (SiO₃)³⁻, mixed with the signal of the remaining iron [M. Ikeya, New Applications of Electron Spin Resonance, World Scientific, Singapore, 1993, p. 285]. This signal increases in intensity under γ-irradiation. However, still due to the iron influence, the additive method yielded too…

  10. Effect of Preparation Method on Phase Formation Process and Structural and Magnetic Properties of Mn2.5Ge Samples

    Directory of Open Access Journals (Sweden)

    R. Sobhani

    2016-12-01

    In this paper, the phase formation process of Mn2.5Ge samples, prepared by mechanical alloying of Mn and Ge metal powders followed by annealing, has been studied. Results showed that in the milled samples the stable phase is the Mn11Ge8 compound with an orthorhombic structure and Pnam space group. The saturation magnetization increases with milling time from 0.2 up to 1.95 Am²kg⁻¹. The remanence of the samples increases with milling time while the coercivity decreases. Annealing of the 15-hour-milled sample results in the disappearance of Mn and Ge and the formation of the new phases Mn3Ge, Mn5Ge2, Mn5Ge3 and Mn2.3Ge. Mn3Ge is the main phase, with the DO22 tetragonal structure and I4/mmm space group, which is stable and dominant. The enhancement of saturation magnetization in the annealed sample is related to the formation of three new magnetic phases, and the increase of coercivity is due to the presence of the Mn3Ge compound with tetragonal structure. The study was replicated on samples made by the arc melting method to compare the results and to investigate the effect of the preparation method on the phase formation and the structural and magnetic properties of the materials. In these samples the saturation magnetization was in the range of 0.2 up to 1.95 Am²kg⁻¹, depending on the preparation method. Rietveld refinement shows that the Mn2.3Ge sample prepared from the arc-melted material annealed at 620 °C is single phase. Magnetic analysis of this sample shows a saturation magnetization of 5.252 Am²kg⁻¹ and a coercive field of 0.005 T.

  11. Real-Time Audio Processing on the T-CREST Multicore Platform

    DEFF Research Database (Denmark)

    Ausin, Daniel Sanz; Pezzarossa, Luca; Schoeberl, Martin

    2017-01-01

    Multicore platforms are nowadays widely used for audio processing applications, due to the improvement of computational power that they provide. However, some of these systems are not optimized for temporally constrained environments, which often leads to an undesired increase in the latency of the audio signal. This paper presents a real-time multicore audio processing system based on the T-CREST platform. T-CREST is a time-predictable multicore processor for real-time embedded systems. Multiple audio effect tasks have been implemented, which can be connected together in different configurations forming sequential and parallel effect chains, and using a network-on-chip for intercommunication between processors. The evaluation of the system shows that real-time processing of multiple effect configurations is possible, and that the estimation and control of latency ensures real-time behavior.

  12. Method of parallel processing in SANPO real time system

    International Nuclear Information System (INIS)

    Ostrovnoj, A.I.; Salamatin, I.M.

    1981-01-01

    A method of parallel processing in the SANPO real-time system is described. Algorithms for data accumulation and preliminary processing in this system, implemented as parallel processes using a specialized high-level programming language, are described, along with the hierarchy of elementary processes. It provides the synchronization of concurrent processes without semaphores. The developed means are applied to experiment automation systems based on SM-3 minicomputers.

  13. Data-aware remaining time prediction of business process instances

    NARCIS (Netherlands)

    Polato, M.; Sperduti, A.; Burattin, A.; Leoni, de M.

    2014-01-01

    Accurate prediction of the completion time of a business process instance would constitute a valuable tool when managing processes under service level agreement constraints. Such prediction, however, is a very challenging task. A wide variety of factors could influence the trend of a process

  14. On the time-homogeneous Ornstein–Uhlenbeck process in the foreign exchange rates

    Energy Technology Data Exchange (ETDEWEB)

    Fonseca, Regina C.B. da, E-mail: regina@quimica-industrial.com [Department of Mathematics, Instituto Federal de Goiás, Goiânia, Goiás 74055-110 (Brazil); International Center for Condensed Matter Physics, Instituto de Física, Universidade de Brasília, Caixa Postal 04455, 70919-970, Brasília, Distrito Federal (Brazil); Matsushita, Raul Y. [Department of Statistics, Universidade de Brasília, 70919-970, Brasília, Distrito Federal (Brazil); Castro, Márcio T. de; Figueiredo, Annibal [International Center for Condensed Matter Physics, Instituto de Física, Universidade de Brasília, Caixa Postal 04455, 70919-970, Brasília, Distrito Federal (Brazil)

    2015-10-02

    Since Gaussianity and stationarity assumptions cannot be fulfilled by financial data, the time-homogeneous Ornstein–Uhlenbeck (THOU) process was introduced as a candidate model to describe time series of financial returns [1]. It is an Ornstein–Uhlenbeck (OU) process in which these assumptions are replaced by linearity and time-homogeneity. We employ the OU and THOU processes to analyze daily foreign exchange rates against the US dollar. We confirm that the OU process does not fit the data, while in most cases the first four cumulants patterns from data can be described by the THOU process. However, there are some exceptions in which the data do not follow linearity or time-homogeneity assumptions. - Highlights: • Gaussianity and stationarity assumptions replaced by linearity and time-homogeneity. • We revisit the time-homogeneous Ornstein–Uhlenbeck (THOU) process. • We employ the THOU process to analyze foreign exchange rates against the US dollar. • The first four cumulants patterns from data can be described by the THOU process.
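
    As a toy illustration of the baseline model discussed here (the ordinary OU process, not the THOU extension itself), the sketch below simulates the process by Euler–Maruyama and estimates the first four cumulants of its increments; all parameter values are made up for the example:

```python
import numpy as np

# Euler-Maruyama simulation of an Ornstein-Uhlenbeck process
# dX = -theta*(X - mu)*dt + sigma*dW (illustrative parameters).
rng = np.random.default_rng(1)
theta, mu, sigma, dt, n = 2.0, 0.0, 0.3, 1e-3, 100_000
x = np.empty(n)
x[0] = 0.0
for k in range(n - 1):
    x[k + 1] = x[k] - theta * (x[k] - mu) * dt \
               + sigma * np.sqrt(dt) * rng.standard_normal()

# First four cumulants of the increments, the quantities the paper compares.
r = np.diff(x)
m, c2 = r.mean(), r.var()
c3 = ((r - m) ** 3).mean()
c4 = ((r - m) ** 4).mean() - 3 * c2 ** 2
print(m, c2, c3, c4)
```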

  15. Application of Bf-EVALPSN to Real-time Process Order Control

    International Nuclear Information System (INIS)

    Nakamatsu, Kazumi; Akama, Seiki; Abe, Jair M.

    2009-01-01

    We have already proposed a paraconsistent annotated logic program called EVALPSN. In this paper, EVALPSN is developed to deal with before-after relations between two processes (time intervals), and its application to real-time process order control based on logical safety verification is presented.

  16. SCADA based radioactive sample bottle delivery system for fuel reprocessing project

    International Nuclear Information System (INIS)

    Kaushik, Subrat; Munj, Niket; Chauhan, R.K.; Kumar, Pramod; Mishra, A.C.

    2011-01-01

    Radioactive samples of process streams need to be analyzed in a centralized control lab to measure the concentration of heavy elements as well as activity at various stages of reprocessing plants. The samples are taken remotely from biologically shielded process cells, through sampling blisters, in sample bottles. These are then transferred to the control lab, located about 50 meters away, using a vacuum transfer system. The bottle movement is tracked from origin to destination in a rich-HMI SCADA system using infrared non-contact proximity sensors located along the sampling line; these sensors are connected to a PLC in a fail-safe mode. The sample bottle travels at a speed of 10 m/s under the vacuum motive force, and the detection time is of the order of 1 ms. Flow meters have been used to monitor the air flow in the sampling line.

  17. Real-time digital signal processing fundamentals, implementations and applications

    CERN Document Server

    Kuo, Sen M; Tian, Wenshun

    2013-01-01

    Combines both the DSP principles and real-time implementations and applications, and now updated with the new eZdsp USB Stick, which is very low cost, portable and widely employed at many DSP labs. Real-Time Digital Signal Processing introduces fundamental digital signal processing (DSP) principles and will be updated to include the latest DSP applications, introduce new software development tools and adjust the software design process to reflect the latest advances in the field. In the 3rd edition of the book, the key aspect of hands-on experiments will be enhanced to make the DSP principle

  18. Time-Efficiency of Sorting Chironomidae Surface-Floating Pupal Exuviae Samples from Urban Trout Streams in Northeast Minnesota, USA

    Directory of Open Access Journals (Sweden)

    Alyssa M Anderson

    2012-10-01

    Collections of Chironomidae surface-floating pupal exuviae (SFPE) provide an effective means of assessing water quality in streams. Although not widely used in the United States, the technique is not new and has been shown to be more cost-efficient than traditional dip-net sampling techniques in organically enriched streams in an urban landscape. The intent of this research was to document the efficiency of sorting SFPE samples relative to dip-net samples in trout streams with catchments varying in amount of urbanization and differences in impervious surface. Samples of both SFPE and dip-nets were collected from 17 sample sites located on 12 trout streams in Duluth, MN, USA. We quantified the time needed to sort subsamples of 100 macroinvertebrates from dip-net samples, and of less than or greater than 100 chironomid exuviae from SFPE samples. For larger samples of SFPE, the time required to subsample up to 300 exuviae was also recorded. The average time to sort subsamples of 100 specimens was 22.5 minutes for SFPE samples, compared to 32.7 minutes for 100 macroinvertebrates in dip-net samples. The average time to sort up to 300 exuviae was 37.7 minutes. These results indicate that sorting SFPE samples is more time-efficient than traditional dip-net techniques in trout streams with varying catchment characteristics. doi: 10.5324/fn.v31i0.1380. Published online: 17 October 2012.

  19. Detecting oscillatory patterns and time lags from proxy records with non-uniform sampling: Some pitfalls and possible solutions

    Science.gov (United States)

    Donner, Reik

    2013-04-01

    Time series analysis offers a rich toolbox for deciphering information from high-resolution geological and geomorphological archives and linking the thus obtained results to distinct climate and environmental processes. Specifically, on various time-scales from inter-annual to multi-millenial, underlying driving forces exhibit more or less periodic oscillations, the detection of which in proxy records often allows linking them to specific mechanisms by which the corresponding drivers may have affected the archive under study. A persistent problem in geomorphology is that available records do not present a clear signal of the variability of environmental conditions, but exhibit considerable uncertainties of both the measured proxy variables and the associated age model. Particularly, time-scale uncertainty as well as the heterogeneity of sampling in the time domain are sources of severe conceptual problems that may lead to false conclusions about the presence or absence of oscillatory patterns and their mutual phasing in different archives. In my presentation, I will discuss how one can cope with non-uniformly sampled proxy records to detect and quantify oscillatory patterns in one or more data sets. For this purpose, correlation analysis is reformulated using kernel estimates, which are found superior to classical estimators based on interpolation or Fourier transform techniques. In order to characterize non-stationary or noisy periodicities and their relative phasing between different records, an extension of continuous wavelet transform is utilized. The performance of both methods is illustrated for different case studies. An extension to explicitly considering time-scale uncertainties by means of Bayesian techniques is briefly outlined.
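
    A minimal sketch of the kernel idea advocated here, assuming Gaussian weights and a schematic normalization (the bandwidth h, lag grid and synthetic series are illustrative choices, not taken from the presentation):

```python
import numpy as np

def kernel_xcorr(t_x, x, t_y, y, lag, h):
    """Gaussian-kernel estimate of the cross-correlation of two irregularly
    sampled, zero-mean series at a given lag (schematic normalization)."""
    d = t_y[None, :] - t_x[:, None] - lag        # all pairwise lag offsets
    w = np.exp(-0.5 * (d / h) ** 2)              # kernel weights
    return np.sum(w * np.outer(x, y)) / (np.sum(w) * x.std() * y.std())

rng = np.random.default_rng(2)
t1 = np.sort(rng.uniform(0, 100, 300))           # non-uniform sample times
t2 = np.sort(rng.uniform(0, 100, 250))
sig = lambda t: np.sin(2 * np.pi * t / 10.0)
x, y = sig(t1), sig(t2 - 1.5)                    # y lags x by 1.5 time units
lags = np.linspace(-5, 5, 41)
r = [kernel_xcorr(t1, x - x.mean(), t2, y - y.mean(), L, h=0.5) for L in lags]
print(lags[int(np.argmax(r))])                   # should recover a lag near 1.5
```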

  1. Virtual instrumentation technique used in the nuclear digital signal processing system design: Energy and time measurement tests

    International Nuclear Information System (INIS)

    Pechousek, J.; Prochazka, R.; Prochazka, V.; Frydrych, J.

    2011-01-01

    In this report, a computer-based digital signal processing system with a 200 MS s⁻¹ sampling digitizer is presented. Virtual instrumentation techniques are used to easily develop a system which provides spectroscopy measurements such as amplitude and time signal analysis, with a time-of-flight facility. Several test measurements were performed to determine the characteristics of the system. The presented system may find application in coincidence measurements, since it is usable with different types of detectors and is sensitive to decay lifetimes from tens of nanoseconds to seconds.

  2. Bidirectional Relationships Between Parenting Processes and Deviance in a Sample of Inner-City African American Youth

    Science.gov (United States)

    Harris, Charlene; Vazsonyi, Alexander T.; Bolland, John M.

    2016-01-01

    The current study assessed for bidirectional relationships among supportive parenting (knowledge), negative parenting (permissiveness), and deviance in a sample (N = 5,325) of poor, inner-city African American youth from the Mobile Youth Survey (MYS) over 4 years. Cross-lagged path analysis provided evidence of significant bidirectional paths among parenting processes (knowledge and permissiveness) and deviance over time. Follow-up multigroup tests provided only modest evidence of dissimilar relationships by sex and by developmental periods. The findings improve our understanding of developmental changes between parenting behaviors and deviance during adolescence and extended current research of the bidirectionality of parent and child relationships among inner-city African American youth. PMID:28316460

  3. Real-time Color Codes for Assessing Learning Process

    OpenAIRE

    Dzelzkalēja, L; Kapenieks, J

    2016-01-01

    Effective assessment is an important way for improving the learning process. There are existing guidelines for assessing the learning process, but they lack holistic digital knowledge society considerations. In this paper the authors propose a method for real-time evaluation of students’ learning process and, consequently, for quality evaluation of teaching materials both in the classroom and in the distance learning environment. The main idea of the proposed Color code method (CCM) is to use...

  4. Efficient Round-Trip Time Optimization for Replica-Exchange Enveloping Distribution Sampling (RE-EDS).

    Science.gov (United States)

    Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina

    2017-06-13

    Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, i.e., smoothness parameter(s) and energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE) or parallel tempering is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter choice problem could be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a certain replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate for the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single RE-EDS simulation 10 ns in length. Thus, RE-EDS presents an efficient method for the estimation of relative binding free energies.
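
    Round-trip optimization presupposes counting round trips from a replica's trajectory through the parameter ladder. A simplified sketch of that bookkeeping (the trajectory below is a toy random walk, not an actual RE-EDS exchange record, and the counting rule is an assumption about one reasonable definition):

```python
import numpy as np

def round_trips(positions):
    """Count completed bottom-to-top-and-back excursions for one replica,
    given its ladder position after every exchange attempt."""
    top, bottom = max(positions), min(positions)
    trips, phase = 0, None
    for p in positions:
        if p == bottom:
            if phase == "down":
                trips += 1          # back at the bottom: one full round trip
            phase = "up"            # now heading for the top
        elif p == top and phase == "up":
            phase = "down"          # touched the top: now heading back down
    return trips

rng = np.random.default_rng(3)
walk = np.clip(np.cumsum(rng.choice([-1, 1], 10_000)), 0, 7)  # toy 8-rung ladder
print(round_trips(list(walk)))
```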

  5. Real-time data acquisition and processing platform for fusion experiments

    International Nuclear Information System (INIS)

    Ruiz, M.; Barrera, E.; Lopez, S.; Machon, D.; Vega, J.; Sanchez, E.

    2004-01-01

    This paper describes the features of the hardware and low-level software of the PXI real-time data acquisition and processing system developed for the TJ-II device located in the Centro de Investigaciones Energeticas Medioambientales y Tecnologicas (CIEMAT) in Madrid. This system fulfills three objectives: (1) to increase processing capabilities of standard data acquisition systems by adding specific processing cards, (2) to acquire and process data in real time with a view to deployment on steady state fusion devices, and (3) to develop the data acquisition and processing applications using graphical languages like LabVIEW

  6. Rapid detection of Opisthorchis viverrini and Strongyloides stercoralis in human fecal samples using a duplex real-time PCR and melting curve analysis.

    Science.gov (United States)

    Janwan, Penchom; Intapan, Pewpan M; Thanchomnang, Tongjit; Lulitanond, Viraphong; Anamnart, Witthaya; Maleewong, Wanchai

    2011-12-01

    Human opisthorchiasis caused by the liver fluke Opisthorchis viverrini is an endemic disease in Southeast Asian countries including the Lao People's Democratic Republic, Cambodia, Vietnam, and Thailand. Infection with the soil-transmitted roundworm Strongyloides stercoralis is an important problem worldwide. In some areas, both parasitic infections are reported as co-infections. A duplex real-time fluorescence resonance energy transfer (FRET) PCR merged with melting curve analysis was developed for the rapid detection of O. viverrini and S. stercoralis in human fecal samples. Duplex real-time FRET PCR is based on fluorescence melting curve analysis of a hybrid of amplicons generated from two genera of DNA elements: the 162 bp pOV-A6 DNA sequence specific to O. viverrini and the 244 bp 18S rRNA sequence specific to S. stercoralis, and two pairs of specific fluorophore-labeled probes. Both O. viverrini and S. stercoralis can be differentially detected in infected human fecal samples by this process through their different fluorescence channels and melting temperatures. The detection limit of the method was as little as two O. viverrini eggs and four S. stercoralis larvae in 100 mg of fecal sample. The assay could distinguish the DNA of both parasites from the DNA of negative fecal samples and fecal samples with other parasite materials, as well as from the DNA of human leukocytes and other control parasites. The technique showed 100% sensitivity and specificity. The introduced duplex real-time FRET PCR can reduce labor time and reagent costs and is not prone to carry-over contamination. The method is important for simultaneous detection, especially in areas where the two parasites overlap in incidence, and is useful as a screening tool for returning travelers and immigrants to industrialized countries, where the number of samples in diagnostic units is increasing.

  7. Automated sampling and data processing derived from biomimetic membranes

    DEFF Research Database (Denmark)

    Perry, Mark; Vissing, Thomas; Boesen, P.

    2009-01-01

    Recent advances in biomimetic membrane systems have resulted in an increase in membrane lifetimes from hours to days and months. Long-lived membrane systems demand the development of both new automated monitoring equipment capable of measuring electrophysiological membrane characteristics and new data processing software to analyze and organize the large amounts of data generated. In this work, we developed an automated instrumental voltage clamp solution based on a custom-designed software controller application (the WaveManager), which enables automated on-line voltage clamp data acquisition applicable to long-time series experiments. We designed another software program for off-line data processing. The automation of the on-line voltage clamp data acquisition and off-line processing was furthermore integrated with a searchable database (DiscoverySheet™) for efficient data management.

  8. Structure and Randomness of Continuous-Time, Discrete-Event Processes

    Science.gov (United States)

    Marzen, Sarah E.; Crutchfield, James P.

    2017-10-01

    Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ε-machines of hidden semi-Markov processes) and new information-theoretic methods to stochastic processes.
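
    For intuition only: in the far simpler special case of an ordinary finite Markov chain, the entropy rate reduces to the textbook formula h = -Σᵢ πᵢ Σⱼ Pᵢⱼ log Pᵢⱼ; the unifilar hidden semi-Markov processes treated in the paper require the new machinery it introduces. A sketch of the simple case:

```python
import numpy as np

# Entropy rate of an ordinary finite Markov chain with transition matrix P,
# using its stationary distribution pi (illustrative 2-state example).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])   # eigenvector for eigenvalue 1
pi /= pi.sum()                                      # stationary distribution
h = -np.sum(pi[:, None] * P * np.log2(P, where=P > 0, out=np.zeros_like(P)))
print(h, "bits per step")
```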

  9. Multidimensional scaling analysis of financial time series based on modified cross-sample entropy methods

    Science.gov (United States)

    He, Jiayi; Shang, Pengjian; Xiong, Hui

    2018-06-01

    Stocks, as the concrete manifestation of financial time series with plenty of potential information, are often used in the study of financial time series. In this paper, we utilize stock data to recognize patterns through the dissimilarity matrix based on modified cross-sample entropy; three-dimensional perceptual maps of the results are then provided through the multidimensional scaling method. Two modified multidimensional scaling methods are proposed in this paper, that is, multidimensional scaling based on Kronecker-delta cross-sample entropy (MDS-KCSE) and multidimensional scaling based on permutation cross-sample entropy (MDS-PCSE). These two methods use Kronecker-delta based cross-sample entropy and permutation based cross-sample entropy to replace the distance or dissimilarity measurement in classical multidimensional scaling (MDS). Multidimensional scaling based on Chebyshev distance (MDSC) is employed to provide a reference for comparisons. Our analysis reveals a clear clustering both in synthetic data and in 18 indices from diverse stock markets. It implies that time series generated by the same model are more likely to have similar irregularity than others, and the differences between stock indices, which are caused by country or region and by different financial policies, are reflected in the irregularity of the data. In the synthetic data experiments, not only can the time series generated by different models be distinguished, but those generated under different parameters of the same model can also be detected. In the financial data experiment, the stock indices are clearly divided into five groups. Through analysis, we find that they correspond to five regions, respectively, that is, Europe, North America, South America, Asia-Pacific (with the exception of mainland China), and mainland China and Russia. The results also demonstrate that MDS-KCSE and MDS-PCSE provide more effective divisions in experiments than MDSC.
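
    A rough sketch of the pipeline with scikit-learn, using the Chebyshev-distance reference measure (MDSC) as the dissimilarity; the paper's entropy-based variants would simply supply a different precomputed matrix (the toy series below stand in for index returns):

```python
import numpy as np
from sklearn.manifold import MDS

def chebyshev_dissimilarity(series):
    """Pairwise Chebyshev distances between equal-length series (the MDSC
    reference measure); an entropy-based measure would replace this."""
    n = len(series)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = np.max(np.abs(series[i] - series[j]))
    return d

rng = np.random.default_rng(4)
series = [rng.standard_normal(500).cumsum() for _ in range(6)]  # toy "indices"
mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(chebyshev_dissimilarity(series))  # 3-D perceptual map
print(coords)
```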

  10. Abstractions for aperiodic multiprocessor scheduling of real-time stream processing applications

    NARCIS (Netherlands)

    Hausmans, J.P.H.M.

    2015-01-01

    Embedded multiprocessor systems are often used in the domain of real-time stream processing applications to keep up with increasing power and performance requirements. Examples of such real-time stream processing applications are digital radio baseband processing and WLAN transceivers. These stream

  11. Modular time division multiplexer: Efficient simultaneous characterization of fast and slow transients in multiple samples

    Science.gov (United States)

    Kim, Stephan D.; Luo, Jiajun; Buchholz, D. Bruce; Chang, R. P. H.; Grayson, M.

    2016-09-01

    A modular time division multiplexer (MTDM) device is introduced to enable parallel measurement of multiple samples with both fast and slow decay transients spanning from millisecond to month-long time scales. This is achieved by dedicating a single high-speed measurement instrument for rapid data collection at the start of a transient, and by multiplexing a second low-speed measurement instrument for slow data collection of several samples in parallel for the later transients. The MTDM is a high-level design concept that can in principle measure an arbitrary number of samples, and the low cost implementation here allows up to 16 samples to be measured in parallel over several months, reducing the total ensemble measurement duration and equipment usage by as much as an order of magnitude without sacrificing fidelity. The MTDM was successfully demonstrated by simultaneously measuring the photoconductivity of three amorphous indium-gallium-zinc-oxide thin films with 20 ms data resolution for fast transients and an uninterrupted parallel run time of over 20 days. The MTDM has potential applications in many areas of research that manifest response times spanning many orders of magnitude, such as photovoltaics, rechargeable batteries, amorphous semiconductors such as silicon and amorphous indium-gallium-zinc-oxide.

  12. Towards real-time remote processing of laparoscopic video

    Science.gov (United States)

    Ronaghi, Zahra; Duffy, Edward B.; Kwartowitz, David M.

    2015-03-01

    Laparoscopic surgery is a minimally invasive surgical technique where surgeons insert a small video camera into the patient's body to visualize internal organs and small tools to perform surgical procedures. However, the benefit of small incisions has a drawback of limited visualization of subsurface tissues, which can lead to navigational challenges in the delivering of therapy. Image-guided surgery (IGS) uses images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One particular laparoscopic camera system of interest is the vision system of the daVinci-Si robotic surgical system (Intuitive Surgical, Sunnyvale, CA, USA). The video streams generate approximately 360 megabytes of data per second, demonstrating a trend towards increased data sizes in medicine, primarily due to higher-resolution video cameras and imaging equipment. Processing this data on a bedside PC has become challenging and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical 30 frames per second (fps) rate, it is required that each 11.9 MB video frame be processed by a server and returned within 1/30th of a second. The ability to acquire, process and visualize data in real-time is essential for performance of complex tasks as well as minimizing risk to the patient. As a result, utilizing high-speed networks to access computing clusters will lead to real-time medical image processing and improve surgical experiences by providing real-time augmented laparoscopic data. We aim to develop a medical video processing system using an OpenFlow software defined network that is capable of connecting to multiple remote medical facilities and HPC servers.

  13. Bayesian inference for multivariate point processes observed at sparsely distributed times

    DEFF Research Database (Denmark)

    Rasmussen, Jakob Gulddahl; Møller, Jesper; Aukema, B.H.

    We consider statistical and computational aspects of simulation-based Bayesian inference for a multivariate point process which is only observed at sparsely distributed times. For specificity we consider a particular data set which has earlier been analyzed by a discrete time model involving unknown normalizing constants. We discuss the advantages and disadvantages of using continuous time processes compared to discrete time processes in the setting of the present paper as well as other spatial-temporal situations. Keywords: Bark beetle, conditional intensity, forest entomology, Markov chain Monte Carlo...

  14. A time-sorting pitfall trap and temperature datalogger for the sampling of surface-active arthropods

    Directory of Open Access Journals (Sweden)

    Marshall S. McMunn

    2017-04-01

    Nearly all arthropods display consistent patterns of activity according to time of day. These patterns of activity often limit the extent of animal co-occurrence in space and time. Quantifying when particular species are active and how activity varies with environmental conditions is difficult without the use of automated devices due to the need for continuous monitoring. Time-sorting pitfall traps passively collect active arthropods into containers with known beginning and end sample times. The trap described here, similar to previous designs, sorts arthropods by the time they fall into the trap using a rotating circular rack of vials. This trap represents a reduction in size, cost, and time of construction, while increasing the number of time windows sampled. The addition of temperature data collection extends functionality, while the use of store-bought components and inclusion of customizable software make the trap easy to reproduce and use.

  15. Sampling Methodologies for Epidemiologic Surveillance of Men Who Have Sex with Men and Transgender Women in Latin America: An Empiric Comparison of Convenience Sampling, Time Space Sampling, and Respondent Driven Sampling

    Science.gov (United States)

    Clark, J. L.; Konda, K. A.; Silva-Santisteban, A.; Peinado, J.; Lama, J. R.; Kusunoki, L.; Perez-Brumer, A.; Pun, M.; Cabello, R.; Sebastian, J. L.; Suarez-Ognio, L.; Sanchez, J.

    2014-01-01

    Alternatives to convenience sampling (CS) are needed for HIV/STI surveillance of most-at-risk populations in Latin America. We compared CS, time space sampling (TSS), and respondent driven sampling (RDS) for recruitment of men who have sex with men (MSM) and transgender women (TW) in Lima, Peru. During concurrent 60-day periods from June–August, 2011, we recruited MSM/TW for epidemiologic surveillance using CS, TSS, and RDS. A total of 748 participants were recruited through CS, 233 through TSS, and 127 through RDS. The TSS sample included the largest proportion of TW (30.7 %) and the lowest percentage of subjects who had previously participated in HIV/STI research (14.9 %). The prevalence of newly diagnosed HIV infection, according to participants’ self-reported previous HIV diagnosis, was highest among TSS recruits (17.9 %) compared with RDS (12.6 %) and CS (10.2 %). TSS identified diverse populations of MSM/TW with higher prevalences of HIV/STIs not accessed by other methods. PMID:24362754

  16. Safety, Liveness and Run-time Refinement for Modular Process-Aware Information Systems with Dynamic Sub Processes

    DEFF Research Database (Denmark)

    Debois, Søren; Hildebrandt, Thomas; Slaats, Tijs

    2015-01-01

    and verification of flexible, run-time adaptable process-aware information systems, moved into practice via the Dynamic Condition Response (DCR) Graphs notation co-developed with our industrial partner. Our key contributions are: (1) A formal theory of dynamic sub-process instantiation for declarative, event......We study modularity, run-time adaptation and refinement under safety and liveness constraints in event-based process models with dynamic sub-process instantiation. The study is part of a larger programme to provide semantically well-founded technologies for modelling, implementation......-based processes under safety and liveness constraints, given as the DCR* process language, equipped with a compositional operational semantics and conservatively extending the DCR Graphs notation; (2) an expressiveness analysis revealing that the DCR* process language is Turing-complete, while the fragment cor...

  17. Expert system and process optimization techniques for real-time monitoring and control of plasma processes

    Science.gov (United States)

    Cheng, Jie; Qian, Zhaogang; Irani, Keki B.; Etemad, Hossein; Elta, Michael E.

    1991-03-01

    To meet the ever-increasing demands of the rapidly growing semiconductor manufacturing industry, it is critical to have a comprehensive methodology integrating techniques for process optimization, real-time monitoring, and adaptive process control. To this end, we have accomplished an integrated knowledge-based approach combining the latest expert system technology, machine learning methods, and traditional statistical process control (SPC) techniques. This knowledge-based approach is advantageous in that it makes it possible for the tasks of process optimization and adaptive control to be performed consistently and predictably. Furthermore, this approach can be used to construct high-level, qualitative descriptions of processes and thus make process behavior easy to monitor, predict, and control. Two software packages, RIST (Rule Induction and Statistical Testing) and KARSM (Knowledge Acquisition from Response Surface Methodology), have been developed and incorporated with two commercially available packages, G2 (a real-time expert system) and ULTRAMAX (a tool for sequential process optimization).

  18. Gardening process of lunar surface layer inferred from the galactic cosmic-ray exposure ages of lunar samples

    International Nuclear Information System (INIS)

    Iriyama, Jun; Honda, Masatake.

    1979-01-01

    From the cosmic-ray exposure age data (time scale 10⁷–10⁸ years) of the lunar surface materials, we discuss the gardening process of the lunar surface layer caused by meteoroid impact cratering. At steady state, it is calculated that, in the region within 10–50 m of the surface, a mixing rate of 10⁻⁴ to 10⁻⁵ mm/yr is necessary to match the exposure ages. Observed exposure ages of the lunar samples could be explained by the gardening effect calculated using a crater formation rate which is slightly modified from the current crater population data. (author)

  19. Real-time PCR to supplement gold-standard culture-based detection of Legionella in environmental samples.

    Science.gov (United States)

    Collins, S; Jorgensen, F; Willis, C; Walker, J

    2015-10-01

    Culture remains the gold-standard for the enumeration of environmental Legionella. However, it has several drawbacks including long incubation and poor sensitivity, causing delays in response times to outbreaks of Legionnaires' disease. This study aimed to validate real-time PCR assays to quantify Legionella species (ssrA gene), Legionella pneumophila (mip gene) and Leg. pneumophila serogroup-1 (wzm gene) to support culture-based detection in a frontline public health laboratory. Each qPCR assay had 100% specificity, excellent sensitivity (5 GU/reaction) and reproducibility. Comparison of the assays to culture-based enumeration of Legionella from 200 environmental samples showed that they had a negative predictive value of 100%. Thirty-eight samples were positive for Legionella species by culture and qPCR. One hundred samples were negative by both methods, whereas 62 samples were negative by culture but positive by qPCR. The average log10 increase between culture and qPCR for Legionella spp. and Leg. pneumophila was 0·72 (P = 0·0002) and 0·51 (P = 0·006), respectively. The qPCR assays can be conducted on the same 1 l water sample as culture, and thus can be used as a supplementary technique to screen out negative samples and allow more rapid indication of positive samples. The assay could prove informative in public health investigations to identify or rule out sources of Legionella as well as to specifically identify Leg. pneumophila serogroup 1 in a timely manner not possible with culture. © 2015 The Society for Applied Microbiology.

  1. Improvements to sample processing and measurement to enable more widespread environmental application of tritium

    Energy Technology Data Exchange (ETDEWEB)

    Moran, James; Alexander, Thomas; Aalseth, Craig; Back, Henning; Mace, Emily; Overman, Cory; Seifert, Allen; Freeburg, Wilcox

    2017-08-01

    Previous measurements have demonstrated the wealth of information that tritium (T) can provide on environmentally relevant processes. We present modifications to sample preparation approaches that enable T measurement by proportional counting on small sample sizes equivalent to 120 mg of water and demonstrate the accuracy of these methods on a suite of standardized water samples. This enhanced method should provide the analytical flexibility needed to address persistent knowledge gaps in our understanding of T behavior in the environment.

  2. Control Charts for Processes with an Inherent Between-Sample Variation

    Directory of Open Access Journals (Sweden)

    Eva Jarošová

    2018-06-01

    A number of processes to which statistical control is applied are subject to various effects that cause random changes in the mean value. The removal of these fluctuations is either technologically impossible or economically disadvantageous under current conditions. The frequent occurrence of signals in the Shewhart chart due to these fluctuations is then undesirable and therefore the conventional control limits need to be extended. Several approaches to the design of the control charts with extended limits are presented in the paper and applied on the data from a real production process. The methods assume samples of size greater than 1. The performance of the charts is examined using the operating characteristic and average run length. The study reveals that in many cases, reducing the risk of false alarms is insufficient.
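
    One common way to extend the limits, consistent with the description above (though not necessarily the exact design the paper uses), is to widen the x-bar chart limits so they account for the between-sample variance component as well as the within-sample one; a sketch with illustrative numbers:

```python
import numpy as np

def extended_limits(xbar_grand, sigma_within, sigma_between, n, k=3.0):
    """Shewhart x-bar limits widened for inherent between-sample variation:
    subgroup means vary with sigma_b^2 + sigma_w^2/n, not sigma_w^2/n alone."""
    s = np.sqrt(sigma_between ** 2 + sigma_within ** 2 / n)
    return xbar_grand - k * s, xbar_grand + k * s

# Illustrative numbers: subgroups of size 5, within- and between-sample SDs.
lcl, ucl = extended_limits(xbar_grand=10.0, sigma_within=0.8,
                           sigma_between=0.5, n=5)
print(lcl, ucl)
```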

  3. Sampled-data-based vibration control for structural systems with finite-time state constraint and sensor outage.

    Science.gov (United States)

    Weng, Falu; Liu, Mingxin; Mao, Weijie; Ding, Yuanchun; Liu, Feifei

    2018-05-10

    The problem of sampled-data-based vibration control for structural systems with finite-time state constraint and sensor outage is investigated in this paper. The objective of designing controllers is to guarantee the stability and anti-disturbance performance of the closed-loop systems while some sensor outages happen. Firstly, based on matrix transformation, the state-space model of structural systems with sensor outages and uncertainties appearing in the mass, damping and stiffness matrices is established. Secondly, considering that most earthquakes and strong winds act over a very short time, and that it is often the peak responses that damage structures, the finite-time stability analysis method is introduced to constrain the state responses in a given time interval, and H-infinity stability is adopted in the controller design to make sure that the closed-loop system has a prescribed level of disturbance attenuation performance during the whole control process. Furthermore, all stabilization conditions are expressed in the forms of linear matrix inequalities (LMIs), whose feasibility can be easily checked by using the LMI Toolbox. Finally, numerical examples are given to demonstrate the effectiveness of the proposed theorems. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
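
    The stabilization conditions are LMIs, which in practice are handed to a semidefinite-programming solver. The sketch below checks a plain Lyapunov-stability LMI with CVXPY rather than the paper's full finite-time H-infinity conditions (the state matrix is an illustrative second-order structure, not from the paper):

```python
import cvxpy as cp
import numpy as np

# Feasibility check of the simplest structural LMI: find P = P^T > 0 with
# A^T P + P A < 0 (plain Lyapunov stability, standing in for the paper's
# more elaborate finite-time H-infinity conditions).
A = np.array([[0.0, 1.0],
              [-4.0, -0.5]])      # toy mass-spring-damper state matrix
P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(2),
               A.T @ P + P @ A << -eps * np.eye(2)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status, P.value)      # "optimal" means the LMI is feasible
```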

  4. Analysis of volatile organic compounds in compost samples: A potential tool to determine appropriate composting time.

    Science.gov (United States)

    Zhu, Fengxiang; Pan, Zaifa; Hong, Chunlai; Wang, Weiping; Chen, Xiaoyang; Xue, Zhiyong; Yao, Yanlai

    2016-12-01

    Changes in volatile organic compound contents in compost samples during pig manure composting were studied using a headspace, solid-phase micro-extraction method (HS-SPME) followed by gas chromatography with mass spectrometric detection (GC/MS). Parameters affecting the SPME procedure were optimized as follows: the coating was carbon molecular sieve/polydimethylsiloxane (CAR/PDMS) fiber, the temperature was 60°C and the time was 30 min. Under these conditions, 87 compounds were identified from 17 composting samples. Most of the volatile components could only be detected before day 22. However, benzenes, alkanes and alkenes increased and eventually stabilized after day 22. Phenol and acid substances, which are important factors for compost quality, were almost undetectable on day 39 in natural compost (NC) samples and on day 13 in maggot-treated compost (MC) samples. Our results indicate that the approach can be effectively used to determine the composting times by analysis of volatile substances in compost samples. An appropriate composting time not only ensures the quality of compost and reduces the loss of composting material but also reduces the generation of hazardous substances. The appropriate composting times for MC and NC were approximately 22 days and 40 days, respectively, during the summer in Zhejiang. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Power Spectrum Estimation of Randomly Sampled Signals

    DEFF Research Database (Denmark)

    Velte, C. M.; Buchhave, P.; K. George, W.

    algorithms; sample-and-hold and the direct spectral estimator without residence time weighting. The computer-generated signal is a Poisson process with a sample rate proportional to velocity magnitude that consists of well-defined frequency content, which makes bias easy to spot. The idea...
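
    A minimal sketch of the direct spectral estimator named above (no residence-time weighting): evaluate the Fourier sum at arbitrary frequencies straight from the irregular sample times; the normalization and the test signal are illustrative choices, not the study's:

```python
import numpy as np

def direct_spectrum(t, x, freqs):
    """Direct spectral estimate for randomly sampled data: Fourier sum
    evaluated at arbitrary frequencies, no resampling (schematic scaling)."""
    T = t.max() - t.min()
    F = np.exp(-2j * np.pi * np.outer(freqs, t))
    return (np.abs(F @ x) ** 2) / (len(t) ** 2) * T

rng = np.random.default_rng(5)
t = np.sort(rng.uniform(0, 100, 2000))       # Poisson-like random sample times
x = np.sin(2 * np.pi * 1.0 * t) + 0.1 * rng.standard_normal(len(t))
freqs = np.linspace(0.1, 3, 300)
S = direct_spectrum(t, x, freqs)
print(freqs[np.argmax(S)])                   # peak should sit near 1.0
```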

  6. Freeze core sampling to validate time-lapse resistivity monitoring of the hyporheic zone.

    Science.gov (United States)

    Toran, Laura; Hughes, Brian; Nyquist, Jonathan; Ryan, Robert

    2013-01-01

    A freeze core sampler was used to characterize hyporheic zone storage during a stream tracer test. The pore water from the frozen core showed tracer lingered in the hyporheic zone after the tracer had returned to background concentration in collocated well samples. These results confirmed evidence of lingering subsurface tracer seen in time-lapse electrical resistivity tomographs. The pore water exhibited brine exclusion (ion concentrations in ice lower than source water) in a sediment matrix, despite the fast freezing time. Although freeze core sampling provided qualitative evidence of lingering tracer, it proved difficult to quantify tracer concentration because the amount of brine exclusion during freezing could not be accurately determined. Nonetheless, the additional evidence for lingering tracer supports using time-lapse resistivity to detect regions of low fluid mobility within the hyporheic zone that can act as chemically reactive zones of importance in stream health. © 2012, The Author(s). GroundWater © 2012, National Ground Water Association.

  7. Note: A simple image processing based fiducial auto-alignment method for sample registration.

    Science.gov (United States)

    Robertson, Wesley D; Porto, Lucas R; Ip, Candice J X; Nantel, Megan K T; Tellkamp, Friedjof; Lu, Yinfei; Miller, R J Dwayne

    2015-08-01

    A simple method for the location and auto-alignment of sample fiducials for sample registration using widely available MATLAB/LabVIEW software is demonstrated. The method is robust, easily implemented, and applicable to a wide variety of experiment types for improved reproducibility and increased setup speed. The software uses image processing to locate and measure the diameter and center point of circular fiducials for distance self-calibration and iterative alignment and can be used with most imaging systems. The method is demonstrated to be fast and reliable in locating and aligning sample fiducials, provided here by a nanofabricated array, with accuracy within the optical resolution of the imaging system. The software was further demonstrated to register, load, and sample the dynamically wetted array.
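
    The paper's fiducial detection is implemented in MATLAB/LabVIEW; as an illustration of the same image-processing step, the sketch below locates circular fiducials and measures their centers and diameters with OpenCV's Hough transform. The file name and all detector parameters are hypothetical.

```python
import cv2
import numpy as np

img = cv2.imread("fiducial_view.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image
blurred = cv2.medianBlur(img, 5)              # suppress sensor noise before Hough
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=30, minRadius=10, maxRadius=200)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"fiducial center = ({x}, {y}), diameter = {2 * r} px")
        # the offset of the center from the optical axis drives iterative
        # alignment; the known physical diameter gives distance self-calibration
```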

  8. IDENTIFICATION ASPECT OF METHODOLOGY DESIGN OF CONTROL SYSTEM TIME-VARIANT PROCESS

    Directory of Open Access Journals (Sweden)

    M. M. Blagoveshchenskaia

    2014-01-01

    Full Text Available Summary. The specific nature of food manufacturing demands continual improvement of the automatic control systems for processes in its devices, units and installations. To create an adaptive control system for a food production process based on a model of the control object, an additional analysis is necessary to choose an identification algorithm using a sufficiently representative sample of input and output data. In this article, over 53 recurrent identification algorithms, plus the basic modifications of these algorithms, are analysed by simulation against 47 criteria for time-varying multivariable linear dynamic objects. On the basis of this analysis, several algorithms are recommended for engineering practice for the considered class of objects. The capabilities of the software suite offering the fullest set of parametric identification algorithms available to date are also discussed. For the given conditions of comparison, the most effective algorithms in the package for identifying stationary coefficients in the object equation were: Yzerman-1, Kaczmarz, Nagumo-Noda, Rastrigin, Kalman filter, forgetting factor, and Zipkin; for point-wise objects: Kaczmarz, Nagumo-Noda, and Kalman filter. The Nagumo-Noda algorithm showed the best overall identification result.
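
    One of the recommended algorithm families, recursive least squares with a forgetting factor, can be sketched in a few lines. The update equations below are the standard textbook form, not the article's specific implementation, and the time-varying test system is invented for illustration.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.98):
    """One update of recursive least squares with forgetting factor lam.
    theta: parameter estimate, P: covariance, phi: regressor, y: new output."""
    k = P @ phi / (lam + phi @ P @ phi)      # gain vector
    theta = theta + k * (y - phi @ theta)    # correct by prediction error
    P = (P - np.outer(k, phi @ P)) / lam     # covariance update with forgetting
    return theta, P

# identify a slowly drifting first-order model y[t] = a*y[t-1] + b*u[t-1]
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
theta, P = np.zeros(2), np.eye(2) * 100.0
for t in range(1, 500):
    a_true = 0.8 + 0.1 * np.sin(2 * np.pi * t / 500)   # time-varying coefficient
    y[t] = a_true * y[t-1] + 0.5 * u[t-1] + 0.05 * rng.standard_normal()
    theta, P = rls_step(theta, P, np.array([y[t-1], u[t-1]]), y[t])
print("final estimate [a, b]:", theta)
```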

  9. Data processing device

    International Nuclear Information System (INIS)

    Kita, Yoshio.

    1994-01-01

    A data processing device for use in a thermonuclear testing device comprises a frequency component judging section for analog signals, a sampling time selection section based on the result of the judgement, and a storing memory section for selecting the digital data memorized at the selected sampling time. Namely, the frequency components of the analog signals are detected by the frequency component judging section, and one of a plurality of previously set sampling times is selected by the sampling time selection section based on the result of the judgement of the frequency component judging section. Then, digital data obtained by A/D conversion are read and preliminarily memorized in the storing memory section. Subsequently, the digital data memorized at the sampling time selected by the sampling time selection section are selected and transmitted to a superior computer. The amount of data to be memorized can be greatly reduced, to reduce the cost. (N.H.)
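
    The selection logic described above can be sketched as follows: estimate the dominant frequency of the signal, then pick the coarsest preset sampling time that still resolves it. The presets and the margin below are hypothetical, not taken from the patent.

```python
import numpy as np

def select_sampling_time(signal, dt, presets=(1e-3, 1e-4, 1e-5)):
    """Pick the coarsest preset sampling time that still resolves the
    dominant frequency of the analog signal with a generous margin."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    f_dom = freqs[np.argmax(spectrum[1:]) + 1]   # dominant frequency, skipping DC
    for ts in presets:                           # presets ordered coarse -> fine
        if ts <= 1.0 / (10.0 * f_dom):           # 5x the Nyquist requirement
            return ts
    return presets[-1]                           # fall back to the finest preset

# example: a 400 Hz test tone analysed from a 100 kHz acquisition
t = np.arange(0, 0.1, 1e-5)
sig = np.sin(2 * np.pi * 400 * t)
print(select_sampling_time(sig, 1e-5))           # -> 1e-4 (10 kHz sampling)
```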

  10. The UK Biobank sample handling and storage protocol for the collection, processing and archiving of human blood and urine.

    Science.gov (United States)

    Elliott, Paul; Peakman, Tim C

    2008-04-01

    UK Biobank is a large prospective study in the UK to investigate the role of genetic factors, environmental exposures and lifestyle in the causes of major diseases of middle and old age. Extensive data and biological samples are being collected from 500,000 participants aged between 40 and 69 years. The biological samples that are collected and how they are processed and stored will have a major impact on the future scientific usefulness of the UK Biobank resource. The aim of the UK Biobank sample handling and storage protocol is to specify methods for the collection and storage of participant samples that give maximum scientific return within the available budget. Processing or storage methods that, as far as can be predicted, will preclude current or future assays have been avoided. The protocol was developed through a review of the literature on sample handling and processing, wide consultation within the academic community and peer review. Protocol development addressed which samples should be collected, how and when they should be processed and how the processed samples should be stored to ensure their long-term integrity. The recommended protocol was extensively tested in a series of validation studies. UK Biobank collects about 45 ml blood and 9 ml of urine with minimal local processing from each participant using the vacutainer system. A variety of preservatives, anti-coagulants and clot accelerators is used, appropriate to the expected end use of the samples. Collection of other material (hair, nails, saliva and faeces) was also considered but rejected for the full cohort. Blood and urine samples from participants are transported overnight by commercial courier to a central laboratory where they are processed and aliquots of urine, plasma, serum, white cells and red cells stored in ultra-low temperature archives. Aliquots of whole blood are also stored for potential future production of immortalized cell lines. A standard panel of haematology assays is

  11. Time Processing in Children with Tourette's Syndrome

    Science.gov (United States)

    Vicario, Carmelo Mario; Martino, Davide; Spata, Felice; Defazio, Giovanni; Giacche, Roberta; Martino, Vito; Rappo, Gaetano; Pepi, Anna Maria; Silvestri, Paola Rosaria; Cardona, Francesco

    2010-01-01

    Background: Tourette syndrome (TS) is characterized by dysfunctional connectivity between prefrontal cortex and sub-cortical structures, and altered meso-cortical and/or meso-striatal dopamine release. Since time processing is also regulated by fronto-striatal circuits and modulated by dopaminergic transmission, we hypothesized that time…

  12. Judgment sampling: a health care improvement perspective.

    Science.gov (United States)

    Perla, Rocco J; Provost, Lloyd P

    2012-01-01

    Sampling plays a major role in quality improvement work. Random sampling (assumed by most traditional statistical methods) is the exception in improvement situations. In most cases, some type of "judgment sample" is used to collect data from a system. Unfortunately, judgment sampling is not well understood. Judgment sampling relies upon those with process and subject matter knowledge to select useful samples for learning about process performance and the impact of changes over time. In many cases, where the goal is to learn about or improve a specific process or system, judgment samples are not merely the most convenient and economical approach, they are technically and conceptually the most appropriate approach. This is because improvement work is done in the real world in complex situations involving specific areas of concern and focus; in these situations, the assumptions of classical measurement theory neither can be met nor should an attempt be made to meet them. The purpose of this article is to describe judgment sampling and its importance in quality improvement work and studies with a focus on health care settings.

  13. Asymptotic Time Averages and Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Muhammad El-Taha

    2016-01-01

    Full Text Available Consider an arbitrary nonnegative deterministic process (in a stochastic setting {X(t), t ≥ 0} is a fixed realization, i.e., a sample path of the underlying stochastic process) with state space S = (−∞, ∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, is also discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results give them the choice to work with the time average of a process or its frequency distribution function and to go back and forth between the two under a mild condition.

  14. Spectral BRDF measurements of metallic samples for laser processing applications

    International Nuclear Information System (INIS)

    Vitali, L; Fustinoni, D; Gramazio, P; Niro, A

    2015-01-01

    The spectral bidirectional reflectance distribution function (BRDF) of metals plays an important role in industrial processing involving laser-surface interaction. In particular, in laser metal machining, absorbance is strongly dependent on the radiation incidence angle as well as on the finishing and contamination grade of the surface, and in turn can considerably affect processing results. Very recently, laser radiation has also been used to structure metallic surfaces in order to produce particular optical effects, ranging from high-level polishing to angular color shifting. Of course, full knowledge of the spectral BRDF of these structured layers makes it possible to infer reflectance or color for any irradiation and viewing angles. In this paper, we present Vis-NIR spectral BRDF measurements of laser-polished metallic, opaque, flat samples commonly employed in such applications. The resulting optical properties seem to be dependent on the atmospheric composition during the polishing process, in addition to the roughness. The measurements are carried out with a Perkin Elmer Lambda 950 double-beam spectrophotometer, equipped with the Absolute Reflectance/Transmittance Analyzer (ARTA) motorized goniometer. (paper)

  15. The time course of attentional modulation on emotional conflict processing.

    Science.gov (United States)

    Zhou, Pingyan; Yang, Guochun; Nan, Weizhi; Liu, Xun

    2016-01-01

    Cognitive conflict resolution is critical to human survival in a rapidly changing environment. However, emotional conflict processing seems to be particularly important for human interactions. This study examined whether the time course of attentional modulation on emotional conflict processing was different from cognitive conflict processing during a flanker task. Results showed that emotional N200 and P300 effects, similar to colour conflict processing, appeared only during the relevant task. However, the emotional N200 effect preceded the colour N200 effect, indicating that emotional conflict can be identified earlier than cognitive conflict. Additionally, a significant emotional N100 effect revealed that emotional valence differences could be perceived during early processing based on rough aspects of input. The present data suggest that emotional conflict processing is modulated by top-down attention, similar to cognitive conflict processing (reflected by N200 and P300 effects). However, emotional conflict processing seems to have more time advantages during two different processing stages.

  16. The effects of quantity and depth of processing on children's time perception.

    Science.gov (United States)

    Arlin, M

    1986-08-01

    Two experiments were conducted to investigate the effects of quantity and depth of processing on children's time perception. These experiments tested the appropriateness of two adult time-perception models (attentional and storage size) for younger ages. Children were given stimulus sets of equal time which varied by level of processing (deep/shallow) and quantity (list length). In the first experiment, 28 children in Grade 6 reproduced presentation times of various quantities of pictures under deep (living/nonliving categorization) or shallow (repeating label) conditions. Students also compared pairs of durations. In the second experiment, 128 children in Grades K, 2, 4, and 6 reproduced presentation times under similar conditions with three or six pictures and with deep or shallow processing requirements. Deep processing led to decreased estimation of time. Higher quantity led to increased estimation of time. Comparative judgments were influenced by quantity. The interaction between age and depth of processing was significant. Older children were more affected by depth differences than were younger children. Results were interpreted as supporting different aspects of each adult model as explanations of children's time perception. The processing effect supported the attentional model and the quantity effect supported the storage size model.

  17. Highly Reproducible Automated Proteomics Sample Preparation Workflow for Quantitative Mass Spectrometry.

    Science.gov (United States)

    Fu, Qin; Kowalski, Michael P; Mastali, Mitra; Parker, Sarah J; Sobhani, Kimia; van den Broek, Irene; Hunter, Christie L; Van Eyk, Jennifer E

    2018-01-05

    Sample preparation for protein quantification by mass spectrometry requires multiple processing steps including denaturation, reduction, alkylation, protease digestion, and peptide cleanup. Scaling these procedures for the analysis of numerous complex biological samples can be tedious and time-consuming, as there are many liquid transfer steps and timed reactions where technical variations can be introduced and propagated. We established an automated sample preparation workflow with a total processing time for 96 samples of 5 h, including a 2 h incubation with trypsin. Peptide cleanup is accomplished by online diversion during the LC/MS/MS analysis. In a selected reaction monitoring (SRM) assay targeting 6 plasma biomarkers and spiked β-galactosidase, mean intraday and interday coefficients of variation (CVs) for 5 serum and 5 plasma samples over 5 days were low, and samples repeated on 3 separate days had total CVs below 20%. Similar results were obtained when the workflow was transferred to a second site: 93% of peptides had CVs below 20%. An automated trypsin digestion workflow yields uniformly processed samples in less than 5 h. Reproducible quantification of peptides was observed across replicates, days, instruments, and laboratory sites, demonstrating the broad applicability of this approach.

  18. The effect of sample grinding procedures after processing on gas production profiles and end-product formation in expander processed barley and peas

    NARCIS (Netherlands)

    Azarfar, A.; Poel, van der A.F.B.; Tamminga, S.

    2007-01-01

    Grinding is a technological process widely applied in the feed manufacturing industry and is a prerequisite for obtaining representative samples for laboratory procedures (e.g. gas production analysis). When feeds are subjected to technological processes other than grinding (e.g. expander

  19. Asymptotic behaviour of time averages for non-ergodic Gaussian processes

    Science.gov (United States)

    Ślęzak, Jakub

    2017-08-01

    In this work, we study the behaviour of time-averages for stationary (non-ageing), but ergodicity-breaking Gaussian processes using their representation in Fourier space. We provide explicit formulae for various time-averaged quantities, such as mean square displacement, density, and analyse the behaviour of time-averaged characteristic function, which gives insight into rich memory structure of the studied processes. Moreover, we show applications of the ergodic criteria in Fourier space, determining the ergodicity of the generalised Langevin equation's solutions.

  20. Elimination of heparin interference during microarray processing of fresh and biobank-archived blood samples.

    Science.gov (United States)

    Hebels, Dennie G A J; van Herwijnen, Marcel H M; Brauers, Karen J J; de Kok, Theo M C M; Chalkiadaki, Georgia; Kyrtopoulos, Soterios A; Kleinjans, Jos C S

    2014-07-01

    In the context of environmental health research, biobank blood samples have recently been identified as suitable for high-throughput omics analyses enabling the identification of new biomarkers of exposure and disease. However, blood samples containing the anti-coagulant heparin could complicate transcriptomic analysis because heparin may inhibit RNA polymerase causing inefficient cRNA synthesis and fluorophore labelling. We investigated the inhibitory effect of heparin and the influence of storage conditions (0 or 3 hr bench times, storage at room temperature or -80°C) on fluorophore labelling in heparinized fresh human buffy coat and whole blood biobank samples during the mRNA work-up protocol for microarray analysis. Subsequently, we removed heparin by lithium chloride (LiCl) treatment and performed a quality control analysis of LiCl-treated biobank sample microarrays to prove their suitability for downstream data analysis. Both fresh and biobank samples experienced varying degrees of heparin-induced inhibition of fluorophore labelling, making most samples unusable for microarray analysis. RNA derived from EDTA and citrate blood was not inhibited. No effect of bench time was observed but room temperature storage gave slightly better results. Strong correlations were observed between original blood sample RNA yield and the amount of synthesized cRNA. LiCl treatment restored sample quality to normal standards in both fresh and biobank samples and the previously identified correlations disappeared. Microarrays hybridized with LiCl-treated biobank samples were of excellent quality with no identifiable influence of heparin. We conclude that, to obtain high quality results, in most cases heparin removal is essential in blood-derived RNA samples intended for microarray analysis. Copyright © 2014 Wiley Periodicals, Inc.

  1. Time-resolved influences of functional DAT1 and COMT variants on visual perception and post-processing.

    Directory of Open Access Journals (Sweden)

    Stephan Bender

    Full Text Available BACKGROUND: Dopamine plays an important role in orienting and the regulation of selective attention to relevant stimulus characteristics. Thus, we examined the influences of functional variants related to dopamine inactivation in the dopamine transporter (DAT1 and catechol-O-methyltransferase genes (COMT on the time-course of visual processing in a contingent negative variation (CNV task. METHODS: 64-channel EEG recordings were obtained from 195 healthy adolescents of a community-based sample during a continuous performance task (A-X version. Early and late CNV as well as preceding visual evoked potential components were assessed. RESULTS: Significant additive main effects of DAT1 and COMT on the occipito-temporal early CNV were observed. In addition, there was a trend towards an interaction between the two polymorphisms. Source analysis showed early CNV generators in the ventral visual stream and in frontal regions. There was a strong negative correlation between occipito-temporal visual post-processing and the frontal early CNV component. The early CNV time interval 500-1000 ms after the visual cue was specifically affected while the preceding visual perception stages were not influenced. CONCLUSIONS: Late visual potentials allow the genomic imaging of dopamine inactivation effects on visual post-processing. The same specific time-interval has been found to be affected by DAT1 and COMT during motor post-processing but not motor preparation. We propose the hypothesis that similar dopaminergic mechanisms modulate working memory encoding in both the visual and motor and perhaps other systems.

  2. Sparse-sampling with time-encoded (TICO) stimulated Raman scattering for fast image acquisition

    Science.gov (United States)

    Hakert, Hubertus; Eibl, Matthias; Karpf, Sebastian; Huber, Robert

    2017-07-01

    Modern biomedical imaging modalities aim to provide researchers with multimodal contrast for a deeper insight into a specimen under investigation. A very promising technique is stimulated Raman scattering (SRS) microscopy, which can unveil the chemical composition of a sample with very high specificity. Although the signal intensities are enhanced manifold, allowing faster image acquisition than standard Raman microscopy, there is a trade-off between specificity and acquisition speed. Commonly used SRS concepts either probe only very few Raman transitions, as the tuning of the applied laser sources is complicated, or record whole spectra with a spectrometer-based setup. While the first approach is fast, it reduces the specificity; the spectrometer approach records whole spectra, including energy differences where no Raman information is present, which limits the acquisition speed. Therefore, we present a new approach based on the TICO-Raman concept, which we call sparse-sampling. The TICO-sparse-sampling setup is fully electronically controllable and allows probing of only the characteristic peaks of a Raman spectrum instead of always acquiring a whole spectrum. By reducing the spectral points to the relevant peaks, the acquisition time can be greatly reduced compared to a uniformly, equidistantly sampled Raman spectrum, while the specificity and the signal-to-noise ratio (SNR) are maintained. Furthermore, all laser sources are completely fiber based. The synchronized detection enables a full resolution of the Raman signal, whereas the analogue and digital balancing allows shot-noise-limited detection. First imaging results with polystyrene (PS) and polymethylmethacrylate (PMMA) beads confirm the advantages of TICO sparse-sampling. We achieved a pixel dwell time as low as 35 μs for an image differentiating both species. The mechanical properties of the applied voice coil stage for scanning the sample currently limit even faster acquisition.

  3. Finite-Time Approach to Microeconomic and Information Exchange Processes

    Directory of Open Access Journals (Sweden)

    Serghey A. Amelkin

    2009-07-01

    Full Text Available The finite-time approach allows one to optimize regimes of processes in macrosystems when the duration of the processes is restricted. The driving force of the processes is the difference of intensive variables: temperatures in thermodynamics, values in economics, etc. In microeconomic systems, two counterflow fluxes appear due to the single driving force: fluxes of goods and money. Another possible case is two fluxes with the same direction. The processes of information exchange can also be described by this formalism.

  4. Processing implicit control: evidence from reading times

    Directory of Open Access Journals (Sweden)

    Michael McCourt

    2015-10-01

    Full Text Available Sentences such as The ship was sunk to collect the insurance exhibit an unusual form of anaphora, implicit control, where neither anaphor nor antecedent is audible. The nonfinite reason clause has an understood subject, PRO, that is anaphoric; here it may be understood as naming the agent of the event of the host clause. Yet since the host is a short passive, this agent is realized by no audible dependent. The putative antecedent to PRO is therefore implicit, which it normally cannot be. What sorts of representations subserve the comprehension of this dependency? Here we present four self-paced reading time studies directed at this question. Previous work showed no processing cost for implicit versus explicit control, and took this to support the view that PRO is linked syntactically to a silent argument in the passive. We challenge this conclusion by reporting that we also find no processing cost for remote implicit control, as in: The ship was sunk. The reason was to collect the insurance. Here the dependency crosses two independent sentences, and so cannot, we argue, be mediated by syntax. Our Experiments 1-4 examined the processing of both implicit (short passive and explicit (active or long passive control in both local and remote configurations. Experiments 3 and 4 added either three days ago or just in order to the local conditions, to control for the distance between the passive and infinitival verbs, and for the predictability of the reason clause, respectively. We replicate the finding that implicit control does not impose an additional processing cost. But critically we show that remote control does not impose a processing cost either. Reading times at the reason clause were never slower when control was remote. In fact they were always faster. Thus efficient processing of local implicit control cannot show that implicit control is mediated by syntax; nor, in turn, that there is a silent but grammatically active argument in passives.

  5. On the time-homogeneous Ornstein-Uhlenbeck process in the foreign exchange rates

    Science.gov (United States)

    da Fonseca, Regina C. B.; Matsushita, Raul Y.; de Castro, Márcio T.; Figueiredo, Annibal

    2015-10-01

    Since Gaussianity and stationarity assumptions cannot be fulfilled by financial data, the time-homogeneous Ornstein-Uhlenbeck (THOU) process was introduced as a candidate model to describe time series of financial returns [1]. It is an Ornstein-Uhlenbeck (OU) process in which these assumptions are replaced by linearity and time-homogeneity. We employ the OU and THOU processes to analyze daily foreign exchange rates against the US dollar. We confirm that the OU process does not fit the data, while in most cases the first four cumulants patterns from data can be described by the THOU process. However, there are some exceptions in which the data do not follow linearity or time-homogeneity assumptions.
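
    As a minimal illustration of the plain OU baseline discussed above: an OU process observed at a fixed interval is an AR(1) series, so its mean-reversion rate can be estimated from a linear fit. This sketch covers only the OU case, not the THOU extension, and all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
theta, mu, sigma = 2.0, 0.0, 0.3      # hypothetical OU parameters
dt, n = 1.0 / 252.0, 20000            # daily sampling, 20000 observations

x = np.zeros(n)
a = np.exp(-theta * dt)               # exact AR(1) coefficient of the sampled OU
s = sigma * np.sqrt((1 - a**2) / (2 * theta))
for t in range(1, n):                 # exact discretisation of the OU SDE
    x[t] = mu + (x[t - 1] - mu) * a + s * rng.standard_normal()

slope, intercept = np.polyfit(x[:-1], x[1:], 1)   # AR(1) regression
theta_hat = -np.log(slope) / dt
print(f"true theta = {theta}, estimated theta = {theta_hat:.2f}")
```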

  6. Materials processing issues for non-destructive laser gas sampling (NDLGS)

    Energy Technology Data Exchange (ETDEWEB)

    Lienert, Thomas J [Los Alamos National Laboratory]

    2010-12-09

    The Non-Destructive Laser Gas Sampling (NDLGS) process essentially involves three steps: (1) laser drilling through the top of a crimped tube made of 304L stainless steel (Hammar and Svensson Cr_eq/Ni_eq = 1.55, produced in 1985); (2) gas sampling; and (3) laser re-welding of the crimp. All three steps are performed in a sealed chamber with a fused silica window under controlled vacuum conditions. Quality requirements for successful processing call for a hermetic re-weld with no cracks or other defects in the fusion zone or HAZ. It has been well established that austenitic stainless steels (γ-SS), such as 304L, can suffer from solidification cracking if their Cr_eq/Ni_eq is below a critical value that causes solidification to occur as austenite (fcc structure) and their combined impurity level (%P + %S) is above ~0.02%. Conversely, for Cr_eq/Ni_eq values above the critical level, solidification occurs as ferrite (bcc structure), and cracking propensity is greatly reduced at all combined impurity levels. The consensus of results from studies by several researchers starting in the late 1970s indicates that the critical Cr_eq/Ni_eq value is ~1.5 for arc welds. However, more recent studies by the author and others show that the critical Cr_eq/Ni_eq value increases to ~1.6 for weld processes with very rapid thermal cycles, such as the pulsed Nd:YAG laser beam welding (LBW) process used here. Initial attempts at NDLGS using pulsed LBW resulted in considerable solidification cracking, consistent with the results of the work discussed above. After a brief introduction to the welding metallurgy of γ-SS, this presentation will review the results of a study aimed at developing a production-ready process that eliminates cracking. The solution to the cracking issue, developed at LANL, involved locally augmenting the Cr content by applying either Cr or a Cr-rich stainless steel (ER 312) to the top

  7. Real-Time and Post-Processed Georeferencing for Hyperspectral Drone Remote Sensing

    Science.gov (United States)

    Oliveira, R. A.; Khoramshahi, E.; Suomalainen, J.; Hakala, T.; Viljanen, N.; Honkavaara, E.

    2018-05-01

    The use of drones and photogrammetric technologies is increasing rapidly in different applications. Currently, the drone processing workflow is in most cases based on sequential image acquisition and post-processing, but there is great interest in real-time solutions. Fast and reliable real-time drone data processing can benefit, for instance, environmental monitoring tasks in precision agriculture and in forests. Recent developments in miniaturized and low-cost inertial measurement systems and GNSS sensors, together with real-time kinematic (RTK) position data, are offering new perspectives for comprehensive remote sensing applications. The combination of these sensors with light-weight and low-cost multi- or hyperspectral frame sensors in drones provides the opportunity of creating near real-time or real-time remote sensing data of a target object. We have developed a system with direct georeferencing onboard a drone to be used in combination with hyperspectral frame cameras in real-time remote sensing applications. The objective of this study is to evaluate the real-time georeferencing by comparing it with post-processed solutions. Experimental data sets were captured in agricultural and forested test sites using the system. The accuracy of the onboard georeferencing data was better than 0.5 m. The results showed that real-time remote sensing is promising and feasible in both test sites.

  8. Dual Source Time-of-flight Mass Spectrometer and Sample Handling System

    Science.gov (United States)

    Brinckerhoff, W.; Mahaffy, P.; Cornish, T.; Cheng, A.; Gorevan, S.; Niemann, H.; Harpold, D.; Rafeek, S.; Yucht, D.

    We present details of an instrument under development for potential NASA missions to planets and small bodies. The instrument comprises a dual ionization source (laser and electron impact) time-of-flight mass spectrometer (TOF-MS) and a carousel sample handling system for in situ analysis of solid materials acquired by, e.g., a coring drill. This DSTOF instrument could be deployed on a fixed lander or a rover, and has an open design that would accommodate measurements by additional instruments. The sample handling system (SHS) is based on a multi-well carousel, originally designed for Champollion/DS4. Solid samples, in the form of drill cores or as loose chips or fines, are inserted through an access port, sealed in vacuum, and transported around the carousel to a pyrolysis cell and/or directly to the TOF-MS inlet. Samples at the TOF-MS inlet are xy-addressable for laser or optical microprobe. Cups may be ejected from their holders for analyzing multiple samples or caching them for return. Samples are analyzed with laser desorption and evolved-gas/electron-impact sources. The dual ion source permits studies of elemental, isotopic, and molecular composition of unprepared samples with a single mass spectrometer. Pulsed laser desorption permits the measurement of abundance and isotope ratios of refractory elements, as well as the detection of high-mass organic molecules in solid samples. Evolved gas analysis permits similar measurements of the more volatile species in solids and aerosols. The TOF-MS is based on previous miniature prototypes at JHU/APL that feature high sensitivity and a wide mass range. The laser mode, in which the sample cup is directly below the TOF-MS inlet, permits both ablation and desorption measurements, to cover elemental and molecular species, respectively. In the evolved gas mode, sample cups are raised into a small pyrolysis cell and heated, producing a neutral gas that is electron ionized and pulsed into the TOF-MS. (Any imaging

  9. The outlier sample effects on multivariate statistical data processing geochemical stream sediment survey (Moghangegh region, North West of Iran)

    International Nuclear Information System (INIS)

    Ghanbari, Y.; Habibnia, A.; Memar, A.

    2009-01-01

    In a geochemical stream sediment survey of the Moghangegh Region in north-west Iran (sheet 1:50,000), 152 samples were collected. After analysis and processing of the data, it was revealed that the Yb, Sc, Ni, Li, Eu, Cd, Co and As contents in one sample were far higher than in the other samples. After identifying this sample as an outlier, its destructive effect on multivariate statistical data processing in geochemical exploration was investigated. Pearson and Spearman correlation coefficient methods and cluster analysis were used for the multivariate studies, and scatter plots of some elements, together with the regression profiles, are given for the cases of 152 and 151 samples, with the results compared. After examining the multivariate statistical results, it was realized that the presence of an outlier sample may produce the following relations between elements: - a true relation between two elements, neither of which has an outlier value in the outlier sample; - a false relation between two elements, one of which has an outlier value in the outlier sample; - a completely false relation between two elements, both of which have outlier values in the outlier sample
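
    The destructive effect described above is easy to reproduce numerically: a single sample that is extreme in two otherwise unrelated elements can fabricate a strong Pearson correlation, while the rank-based Spearman coefficient is far less affected. The element names and values below are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ni = rng.normal(50, 5, 151)      # hypothetical Ni contents, 151 samples
li = rng.normal(30, 3, 151)      # hypothetical Li contents, uncorrelated with Ni

ni_out = np.append(ni, 500.0)    # one outlier sample, extreme in both elements
li_out = np.append(li, 300.0)

print("Pearson  152 samples: %.2f" % stats.pearsonr(ni_out, li_out)[0])
print("Pearson  151 samples: %.2f" % stats.pearsonr(ni, li)[0])
print("Spearman 152 samples: %.2f" % stats.spearmanr(ni_out, li_out)[0])
```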

  10. Pico-litre Sample Introduction and Acoustic Levitation Systems for Time Resolved Protein Crystallography Experiments at XFELS

    Directory of Open Access Journals (Sweden)

    Peter Docker

    2017-07-01

    Full Text Available The system described in this work is a variant of traditional acoustic levitation, first described by Marzo et al. It uses multiple transducers, eliminating the requirement for a mirror surface and allowing for an open geometry, as the sound from multiple transducers combines to generate an acoustic trap configured to catch picolitres of crystal slurries. These acoustic traps also have the significant benefit of eliminating potential beam attenuation due to support structures or microfluidic devices. Additionally, they meet the need to eliminate sample environments when experiments are carried out using an X-ray Free Electron Laser (XFEL) such as the Linac Coherent Light Source (LCLS), as any sample environment would not survive the exposure to the X-ray beam. XFELs generate light a billion times brighter than the sun. The application for this system will be to examine turnover in beta-lactamase proteins, which are responsible for bacteria developing antibiotic resistance and therefore of significant importance to future world health. The system will allow diffraction data to be collected before and after turnover, allowing for a better understanding of the underlying processes. The authors first described this work at Nanotech 2017.

  11. Time reversal signal processing in acoustic emission testing

    Czech Academy of Sciences Publication Activity Database

    Převorovský, Zdeněk; Krofta, Josef; Kober, Jan; Dvořáková, Zuzana; Chlada, Milan; Dos Santos, S.

    2014-01-01

    Roč. 19, č. 12 (2014) ISSN 1435-4934. [European Conference on Non-Destructive Testing (ECNDT 2014) /11./. Praha, 06.10.2014-10.10.2014] Institutional support: RVO:61388998 Keywords: acoustic emission (AE) * ultrasonic testing (UT) * signal processing * source location * time reversal acoustics * acoustic emission * signal processing and transfer Subject RIV: BI - Acoustics http://www.ndt.net/events/ECNDT2014/app/content/Slides/637_Prevorovsky.pdf

  12. Automatic drawing and CAD actualization in processing data of radiant sampling in physics prospect

    International Nuclear Information System (INIS)

    Liu Jinsheng

    2010-01-01

    This paper discusses a method of processing radiant sampling data with a computer. With this method, the curve of the radiant sampling data can be obtained and explained; mineral masses can be combined, analysed and calculated, and the results recorded in a notebook. The method has many merits: it is easy to learn, simple to use and highly efficient. It is suitable for all sorts of mines. (authors)

  14. Emerging halogenated flame retardants and hexabromocyclododecanes in food samples from an e-waste processing area in Vietnam.

    Science.gov (United States)

    Tao, Fang; Matsukami, Hidenori; Suzuki, Go; Tue, Nguyen Minh; Viet, Pham Hung; Takigami, Hidetaka; Harrad, Stuart

    2016-03-01

    This study reports concentrations of selected emerging halogenated flame retardants (HFRs) and hexabromocyclododecanes (HBCDs) in foodstuffs sourced from an e-waste processing area in Vietnam and two reference sites in Vietnam and Japan. Concentrations of all target HFRs in the e-waste-impacted samples in this study significantly exceeded (p < 0.05) those in samples from the reference sites, indicating that e-waste processing activities exert a substantial impact on local environmental contamination and human dietary exposure. Significant positive linear correlations were found between concentrations of syn-Dechlorane Plus (DP) and anti-DP in soils and those in co-located chicken samples (p < 0.05), both at e-waste processing sites and in non-e-waste processing areas elsewhere.

  15. Characteristic time scales for diffusion processes through layers and across interfaces

    Science.gov (United States)

    Carr, Elliot J.

    2018-04-01

    This paper presents a simple tool for characterizing the time scale for continuum diffusion processes through layered heterogeneous media. This mathematical problem is motivated by several practical applications such as heat transport in composite materials, flow in layered aquifers, and drug diffusion through the layers of the skin. In such processes, the physical properties of the medium vary across layers and internal boundary conditions apply at the interfaces between adjacent layers. To characterize the time scale, we use the concept of mean action time, which provides the mean time scale at each position in the medium by utilizing the fact that the transition of the transient solution of the underlying partial differential equation model, from initial state to steady state, can be represented as a cumulative distribution function of time. Using this concept, we define the characteristic time scale for a multilayer diffusion process as the maximum value of the mean action time across the layered medium. For given initial conditions and internal and external boundary conditions, this approach leads to simple algebraic expressions for characterizing the time scale that depend on the physical and geometrical properties of the medium, such as the diffusivities and lengths of the layers. Numerical examples demonstrate that these expressions provide useful insight into explaining how the parameters in the model affect the time it takes for a multilayer diffusion process to reach steady state.
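
    The mean action time underlying this approach can also be evaluated numerically. The sketch below time-steps a two-layer diffusion problem with a coarse explicit finite-difference scheme and computes MAT(x) = ∫(u_inf − u) dt / (u_inf − u_0); the layer diffusivities, boundary values and grid are illustrative, and the paper's closed-form algebraic expressions are not reproduced here.

```python
import numpy as np

# grid and piecewise diffusivity for a two-layer medium on [0, 1]
nx = 101
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
D = np.where(x < 0.5, 1.0, 0.1)          # illustrative layer diffusivities
Dh = 0.5 * (D[:-1] + D[1:])              # diffusivity at half-nodes

u = np.zeros(nx)                         # initial state u0 = 0
u[0] = 1.0                               # boundaries: u(0) = 1, u(1) = 0
dt = 0.4 * dx**2 / D.max()               # explicit stability limit
S = np.zeros(nx)                         # accumulates the integral of u over time
T = 0.0
for _ in range(300_000):                 # slow but simple; runs well past steady state
    flux = Dh * np.diff(u) / dx          # conservative fluxes at half-nodes
    u[1:-1] += dt * np.diff(flux) / dx
    S += u * dt
    T += dt

u_inf = u.copy()                         # final profile taken as the steady state
# MAT(x) = int (u_inf - u) dt / (u_inf - u0); here u0 = 0, so denominator = u_inf
with np.errstate(invalid="ignore", divide="ignore"):
    mat = (T * u_inf - S) / u_inf
print("characteristic time scale ~ %.3f" % np.nanmax(mat[1:-1]))
```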

  16. Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size

    Directory of Open Access Journals (Sweden)

    Zhihua Wang

    2014-01-01

    Full Text Available Reasonable prediction is of significant practical value in the analysis of stochastic and unstable time series with small or limited sample sizes. Motivated by the rolling idea in grey theory and the practical relevance of very short-term forecasting or 1-step-ahead prediction, a novel autoregressive (AR) prediction approach with a rolling mechanism is proposed. In the modeling procedure, a newly developed AR equation, which can be used to model nonstationary time series, is constructed in each prediction step. Meanwhile, the data window for the next step-ahead forecast rolls on by adding the most recent derived prediction result while deleting the first value of the previously used sample data set. This rolling mechanism is an efficient technique, with the advantages of improved forecasting accuracy, applicability in the case of limited and unstable data situations, and little computational effort. The general performance, influence of sample size, nonlinear dynamic mechanism, and significance of the observed trends, as well as innovation variance, are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
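
    A minimal sketch of the rolling mechanism: fit an AR model on a fixed-length window, forecast one step, then roll the window forward by appending the forecast and dropping the oldest value. The window length, model order and test series are invented; the paper's specific AR equation for nonstationary series is not reproduced.

```python
import numpy as np

def ar_forecast_next(w, p=2):
    """Fit AR(p) to window w by least squares and return the 1-step forecast."""
    n = len(w)
    X = np.array([[w[t - 1 - j] for j in range(p)] for t in range(p, n)])
    y = w[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(coef @ [w[-1 - j] for j in range(p)])

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(0.2, 1.0, 40))   # short nonstationary sample
window = list(series[-20:])                    # fixed-length data window
forecasts = []
for _ in range(5):                             # multi-step via rolling 1-step-ahead
    nxt = ar_forecast_next(np.array(window))
    forecasts.append(nxt)
    window = window[1:] + [nxt]                # roll: drop oldest, append forecast
print(np.round(forecasts, 2))
```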

  17. Efficient Estimation for Diffusions Sampled at High Frequency Over a Fixed Time Interval

    DEFF Research Database (Denmark)

    Jakobsen, Nina Munkholt; Sørensen, Michael

    Parametric estimation for diffusion processes is considered for high frequency observations over a fixed time interval. The processes solve stochastic differential equations with an unknown parameter in the diffusion coefficient. We find easily verified conditions on approximate martingale...

  18. Effect of sintering time at low temperature on the properties of IGZO TFTs fabricated by using the sol-gel process

    International Nuclear Information System (INIS)

    Choi, Jun Hyuk; Shim, Jong Hyun; Hwang, Soo Min

    2010-01-01

    We explored the application of the sol-gel process technique to the fabrication of InGaZnO (IGZO) thin film transistors (TFTs). We fabricated IGZO TFTs by using the sol-gel method and evaluated the effect of the sintering time on the electrical properties of the IGZO system with an atomic ratio of In:Ga:Zn = 2:1:1. In the process, IGZO precursor solutions were prepared by mixing In nitrate, Ga nitrate, and Zn acetate and were then deposited on a p-type Si wafer covered with a thermally grown SiO₂ layer by spin-coating. The sintering process was performed for 3 h, 6 h or 12 h at 300 °C in the ambient atmosphere. The source/drain electrodes of the TFT devices were fabricated using Al thermal evaporation. For all of the samples, a low off current (∼10⁻¹¹ A) and an on-to-off current ratio of ∼5 × 10⁴ were obtained in their transfer curves. The saturation mobility increased with increasing sintering time: for the samples sintered for 3 h, 6 h and 12 h, the saturation mobilities were calculated to be 0.825 cm²/Vs, 1.65 cm²/Vs, and 2.06 cm²/Vs, respectively. Based on the XPS and TEM analyses, the enhancement of the mobility was attributed to the increase in the number of oxygen vacancies and in the nanocrystalline structure within the amorphous matrix with increasing sintering time. These results demonstrate the potential for application of sol-gel-processed IGZO devices on flexible polymer substrates.
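
    The saturation mobilities quoted above are conventionally extracted from the slope of √Id versus Vg in the saturation regime, where Id = (W·Ci·μ/2L)·(Vg − Vth)². The sketch below shows that extraction on made-up transfer-curve data; the geometry and capacitance values are assumptions, not the paper's.

```python
import numpy as np

W, L = 1000e-6, 100e-6        # channel width/length (m), hypothetical
Ci = 3.45e-4                  # gate capacitance per area (F/m^2), ~100 nm SiO2
vg = np.linspace(5, 20, 16)   # gate voltages (V)
id_ = 1e-6 * (vg - 4.0)**2    # made-up saturation-regime drain currents (A)

slope = np.polyfit(vg, np.sqrt(id_), 1)[0]   # d(sqrt(Id))/dVg
mu_sat = 2 * L * slope**2 / (W * Ci)         # mobility in m^2/Vs
print("saturation mobility: %.2f cm^2/Vs" % (mu_sat * 1e4))
```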

  19. Quantum supremacy in constant-time measurement-based computation: A unified architecture for sampling and verification

    Science.gov (United States)

    Miller, Jacob; Sanders, Stephen; Miyake, Akimasa

    2017-12-01

    While quantum speed-up in solving certain decision problems by a fault-tolerant universal quantum computer has been promised, a timely research interest includes how far one can reduce the resource requirement to demonstrate a provable advantage in quantum devices without demanding quantum error correction, which is crucial for prolonging the coherence time of qubits. We propose a model device made of locally interacting multiple qubits, designed such that simultaneous single-qubit measurements on it can output probability distributions whose average-case sampling is classically intractable, under similar assumptions as the sampling of noninteracting bosons and instantaneous quantum circuits. Notably, in contrast to these previous unitary-based realizations, our measurement-based implementation has two distinctive features. (i) Our implementation involves no adaptation of measurement bases, leading output probability distributions to be generated in constant time, independent of the system size. Thus, it could be implemented in principle without quantum error correction. (ii) Verifying the classical intractability of our sampling is done by changing the Pauli measurement bases only at certain output qubits. Our usage of random commuting quantum circuits in place of computationally universal circuits allows a unique unification of sampling and verification, so they require the same physical resource requirements in contrast to the more demanding verification protocols seen elsewhere in the literature.

  20. Assessment of Processes of Change for Weight Management in a UK Sample

    Science.gov (United States)

    Andrés, Ana; Saldaña, Carmina; Beeken, Rebecca J.

    2015-01-01

    Objective The present study aimed to validate the English version of the Processes of Change questionnaire in weight management (P-Weight). Methods Participants were 1,087 UK adults, including people enrolled in a behavioural weight management programme, university students and an opportunistic sample. The mean age of the sample was 34.80 (SD = 13.56) years, and 83% were women. BMI ranged from 18.51 to 55.36 (mean = 25.92, SD = 6.26) kg/m². Participants completed both the stages and processes questionnaires in weight management (S-Weight and P-Weight), and subscales from the EDI-2 and EAT-40. A refined version of the P-Weight consisting of 32 items was obtained based on the item analysis. Results The internal structure of the scale fitted a four-factor model, and statistically significant correlations with external measures supported the convergent validity of the scale. Conclusion The adequate psychometric properties of the P-Weight English version suggest that it could be a useful tool to tailor weight management interventions. PMID:25765163

  1. A Fault Sample Simulation Approach for Virtual Testability Demonstration Test

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yong; QIU Jing; LIU Guanjun; YANG Peng

    2012-01-01

    Virtual testability demonstration testing has many advantages, such as low cost, high efficiency, low risk and few restrictions. It brings new requirements to fault sample generation. A fault sample simulation approach for virtual testability demonstration testing based on stochastic process theory is proposed. First, the similarities and differences in fault sample generation between physical and virtual testability demonstration testing are discussed. Second, it is pointed out that the fault occurrence process subject to perfect repair is a renewal process. Third, the interarrival time distribution function of the next fault event is given, and the steps and flowcharts of fault sample generation are introduced. The number of faults and their occurrence times are obtained by statistical simulation. Finally, experiments are carried out on a stable tracking platform. Because a variety of life distributions and maintenance modes are considered and some assumptions are removed, the sample size and structure of the fault sample simulation results are more similar to the actual results and more reasonable. The proposed method can effectively guide fault injection in virtual testability demonstration testing.
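
    The core generation step, sampling a renewal process under perfect repair, can be sketched directly. The Weibull life distribution and its parameters below are illustrative stand-ins for the variety of life distributions the paper considers.

```python
import numpy as np

rng = np.random.default_rng(42)
shape, scale = 1.5, 800.0        # hypothetical Weibull life parameters (hours)
T_test = 5000.0                  # virtual demonstration-test duration (hours)

t, fault_times = 0.0, []
while True:
    t += scale * rng.weibull(shape)   # next interarrival time (perfect repair)
    if t > T_test:
        break
    fault_times.append(t)

print("number of faults:", len(fault_times))
print("occurrence times:", np.round(fault_times, 1))
```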

  2. Real-time information and processing system for radiation protection

    International Nuclear Information System (INIS)

    Oprea, I.; Oprea, M.; Stoica, M.; Badea, E.; Guta, V.

    1999-01-01

    The real-time information and processing system has as its main task to record, collect, process and transmit radiation level and weather data; it is proposed for radiation protection, environmental monitoring around nuclear facilities and civil defence. Such a system can offer information for mapping, databases, modelling and communication, and can help assess the consequences of nuclear accidents. The system incorporates a number of stationary or mobile radiation monitoring devices, a weather parameter measuring station, a GIS-based information processing center and the communication network, all running on a real-time operating system. It provides automatic on-line and off-line data collection, remote diagnostics, and advanced presentation techniques, including a graphically oriented executive support, which has the ability to respond to an emergency by geographical representation of the hazard zones on the map. The system can be integrated into national or international environmental monitoring systems, being based on local intelligent measuring and transmission units, simultaneous processing and data presentation using a real-time operating system for PC and a geographical information system (GIS). Such an integrated system is composed of independent applications operating on the same computer, which can improve the protection of the population and support decision makers' efforts by updating the remote GIS database. All information can be managed directly from the map through multilevel data retrieval and presentation, using on-line dynamic evolution of events, environment information, evacuation optimization, and image and voice processing

  3. Dependability of Data Derived from Time Sampling Methods with Multiple Observation Targets

    Science.gov (United States)

    Johnson, Austin H.; Chafouleas, Sandra M.; Briesch, Amy M.

    2017-01-01

    In this study, generalizability theory was used to examine the extent to which (a) time-sampling methodology, (b) number of simultaneous behavior targets, and (c) individual raters influenced variance in ratings of academic engagement for an elementary-aged student. Ten graduate-student raters, with an average of 7.20 hr of previous training in…

  4. Effect of light intensity and irradiation time on the polymerization process of a dental composite resin

    Directory of Open Access Journals (Sweden)

    Discacciati José Augusto César

    2004-01-01

    Full Text Available Polymerization shrinkage is a critical factor affecting the longevity and acceptability of dental composite resins. The aim of this work was to evaluate the effect of light intensity and irradiation time on the polymerization process of a photo-cured dental composite resin by measuring the Vickers hardness number (VHN) and the volumetric polymerization shrinkage. Samples were prepared using a dental manual light-curing unit. The samples were submitted to irradiation times of 5, 10, 20 and 40 s, using 200 and 400 mW/cm² light intensities. The Vickers hardness number was obtained at four different moments after photoactivation (immediately, 1 h, 24 h and 168 h). After this, volumetric polymerization shrinkage values were obtained through a specific density method. The values were analyzed by ANOVA and Duncan's test (p = 0.05). Results showed an increase in hardness values from the immediate reading to the 1 h and 24 h readings. After 24 h no changes were observed, regardless of the light intensity or activation time. The hardness values were always smaller for the 200 mW/cm² light intensity, except for the 40 s irradiation time. No significant differences were detected in volumetric polymerization shrinkage considering the light intensity (p = 0.539) and the activation time (p = 0.637) factors. In conclusion, the polymerization of the material does not terminate immediately after photoactivation, and an increase in irradiation time can compensate for a lower light intensity. Different combinations of light intensity and irradiation time, i.e., different amounts of energy given to the system, did not affect the polymerization shrinkage.

  5. Improvements to sample processing and measurement to enable more widespread environmental application of tritium.

    Science.gov (United States)

    Moran, James; Alexander, Thomas; Aalseth, Craig; Back, Henning; Mace, Emily; Overman, Cory; Seifert, Allen; Freeburg, Wilcox

    2017-08-01

    Previous measurements have demonstrated the wealth of information that tritium (T) can provide on environmentally relevant processes. We present modifications to sample preparation approaches that enable T measurement by proportional counting on small sample sizes equivalent to 120 mg of water and demonstrate the accuracy of these methods on a suite of standardized water samples. We identify a current quantification limit of 92.2 TU which, combined with our small sample sizes, correlates to as little as 0.00133 Bq of total T activity. This enhanced method should provide the analytical flexibility needed to address persistent knowledge gaps in our understanding of both natural and artificial T behavior in the environment. Copyright © 2017. Published by Elsevier Ltd.
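
    As a back-of-envelope check of the figures above, using the conventional conversion 1 TU ≈ 0.118 Bq per litre of water (an assumption, not stated in the abstract):

```python
# quantification limit expressed as total activity in a 120 mg water sample
tu_limit = 92.2                        # quantification limit (TU)
volume_l = 0.120 / 1000.0              # 120 mg of water ~= 1.2e-4 L
activity_bq = tu_limit * 0.118 * volume_l
print(f"total T activity ~ {activity_bq:.5f} Bq")
# prints ~0.00131 Bq, consistent with the 0.00133 Bq quoted above
```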

  6. Rapid process development of chromatographic process using direct analysis in real time mass spectrometry as a process analytical technology tool.

    Science.gov (United States)

    Yan, Binjun; Chen, Teng; Xu, Zhilin; Qu, Haibin

    2014-06-01

    The concept of quality by design (QbD) is widely applied in the process development of pharmaceuticals. However, the additional cost and time have caused some resistance about QbD implementation. To show a possible solution, this work proposed a rapid process development method, which used direct analysis in real time mass spectrometry (DART-MS) as a process analytical technology (PAT) tool for studying the chromatographic process of Ginkgo biloba L., as an example. The breakthrough curves were fast determined by DART-MS at-line. A high correlation coefficient of 0.9520 was found between the concentrations of ginkgolide A determined by DART-MS and HPLC. Based on the PAT tool, the impacts of process parameters on the adsorption capacity were discovered rapidly, which showed a decreased adsorption capacity with the increase of the flow rate. This work has shown the feasibility and advantages of integrating PAT into QbD implementation for rapid process development. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Measurement of time processing ability and daily time management in children with disabilities.

    Science.gov (United States)

    Janeslätt, Gunnel; Granlund, Mats; Kottorp, Anders

    2009-01-01

    Improvement is needed in methods for planning and evaluating interventions designed to facilitate daily time management for children with intellectual disability, Asperger syndrome, or other developmental disorders. The aim of this study was to empirically investigate the hypothesized relation between children's time processing ability (TPA), daily time management, and self-rated autonomy. Such a relationship between daily time management and TPA may support the idea that TPA is important for daily time management and that children with difficulties in TPA might benefit from intervention aimed at improving daily time management. Participants were children aged 6 to 11 years with dysfunctions such as attention-deficit/hyperactivity disorder, autism, or physical or intellectual disabilities (N = 118). TPA was measured with the instrument KaTid. All data were transformed to interval measures using applications of Rasch models and then further analysed with correlation and regression analysis. The results demonstrate a moderate significant relation between the parents' ratings of daily time management and TPA of the children, and between the self-rating of autonomy and TPA. There was also a significant relation between self-ratings of autonomy and the parents' rating of the children's daily time management. Parents' ratings of their children's daily time management explain 25% of the variation in TPA, age of the children explains 22%, while the child's self-rating of autonomy can explain 9% of the variation in TPA. The three variables together explain 38% of the variation in TPA. The results indicate the viability of the instrument for assessing TPA also in children with disabilities and that the ability measured by KaTid is relevant for daily time management. TPA seems to be a factor for children's daily time management that needs to be taken into consideration when planning and evaluating interventions designed to facilitate everyday functioning for children with

  8. On-site identification of meat species in processed foods by a rapid real-time polymerase chain reaction system.

    Science.gov (United States)

    Furutani, Shunsuke; Hagihara, Yoshihisa; Nagai, Hidenori

    2017-09-01

    Correct labeling of foods is critical for consumers who wish to avoid a specific meat species for religious or cultural reasons. Gene-based point-of-care food analysis by real-time polymerase chain reaction (PCR) is therefore expected to contribute to quality control in the food industry. In this study, we perform rapid identification of meat species with our portable rapid real-time PCR system, following a very simple DNA extraction method. Applying these techniques, we correctly identified beef, pork, chicken, rabbit, horse, and mutton in processed foods in 20 min. Our system was sensitive enough to detect the interfusion of about 0.1% chicken egg-derived DNA in a processed food sample. Our rapid real-time PCR system is expected to contribute to quality control in the food industry because it can be applied to the identification of meat species, and future applications can expand its functionality to the detection of genetically modified organisms or mutations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Automated force volume image processing for biological samples.

    Directory of Open Access Journals (Sweden)

    Pavel Polyakov

    2011-04-01

    Full Text Available Atomic force microscopy (AFM) has now become a powerful technique for investigating, on a molecular level, surface forces, nanomechanical properties of deformable particles, biomolecular interactions, kinetics, and dynamic processes. This paper specifically focuses on the analysis of AFM force curves collected on biological systems, in particular bacteria. The goal is to provide fully automated tools to achieve theoretical interpretation of force curves on the basis of adequate, available physical models. In this respect, we propose two algorithms, one for the processing of approach force curves and another for the quantitative analysis of retraction force curves. In the former, electrostatic interactions prior to contact between the AFM probe and the bacterium are accounted for, and mechanical interactions operating after contact are described in terms of a Hertz-Hooke formalism. Retraction force curves are analyzed on the basis of the Freely Jointed Chain model. For both algorithms, the quantitative reconstruction of force curves is based on the robust detection of critical points (jumps, changes of slope or changes of curvature) which mark the transitions between the various relevant interactions taking place between the AFM tip and the studied sample during approach and retraction. Once the key regions of separation distance and indentation are detected, the physical parameters describing the relevant interactions operating in these regions are extracted by making use of a regression procedure for fitting experiments to theory. The flexibility, accuracy and strength of the algorithms are illustrated with the processing of two force-volume images, which collect a large set of approach and retraction curves measured on a single biological surface. For each force-volume image, several maps are generated, representing the spatial distribution of the searched physical parameters as estimated for each pixel of the force-volume image.
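
    The post-contact mechanical analysis described above amounts to fitting a contact model to the indentation region of an approach curve. A minimal sketch of the Hertz part only (tip radius, modulus and noise level are assumed values, and the paper's full Hertz-Hooke treatment and critical-point detection are not reproduced):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
R = 20e-9   # assumed spherical AFM tip radius (m)

def hertz(delta, e_eff):
    # Hertz force for a spherical tip: F = (4/3) * E_eff * sqrt(R) * delta^(3/2)
    return (4.0 / 3.0) * e_eff * np.sqrt(R) * delta ** 1.5

# Hypothetical indentation/force points from the post-contact region.
delta = np.linspace(0, 50e-9, 25)                      # indentation (m)
force = hertz(delta, 2e6) + rng.normal(0, 2e-11, delta.size)  # force (N)

(e_fit,), _ = curve_fit(hertz, delta, force, p0=[1e6])
print(f"effective modulus ~ {e_fit / 1e6:.2f} MPa")
```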

  10. Model Checking Process Algebra of Communicating Resources for Real-time Systems

    DEFF Research Database (Denmark)

    Boudjadar, Jalil; Kim, Jin Hyun; Larsen, Kim Guldstrand

    2014-01-01

    This paper presents a new process algebra, called PACOR, for real-time systems, which deals with resource-constrained timed behavior as an improved version of the ACSR algebra. We define PACOR as a Process Algebra of Communicating Resources which allows one to express preemptiveness, urgentness...

  11. Design and realization of real-time processing system for seismic exploration

    International Nuclear Information System (INIS)

    Zhang Sifeng; Cao Ping; Song Kezhu; Yao Lin

    2010-01-01

    To solve real-time seismic data processing problems, a high-speed, large-capacity, real-time data processing system (DRPS) was designed based on FPGA and ARM. With its multi-processor architecture, DRPS provides high-speed data receiving, large-capacity data storage, protocol analysis, data splicing, conversion of data from time sequence to channel sequence, and dead-time-free ping-pong data storage. With an embedded Linux operating system, DRPS is both flexible and reliable. (authors)
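
    The time-sequence-to-channel-sequence conversion mentioned above is a de-interleaving step. A minimal software sketch (channel count and frame layout are assumptions; the real system performs this in FPGA logic):

```python
import numpy as np

# Acquisition interleaves one sample per channel each tick; processing
# wants one contiguous trace per channel.
n_channels, n_ticks = 8, 1024
stream = np.arange(n_channels * n_ticks, dtype=np.int16)  # stand-in ADC stream

# time-major frames: shape (tick, channel) -> transpose to (channel, tick)
by_channel = stream.reshape(n_ticks, n_channels).T.copy()
print(by_channel.shape)  # (8, 1024): one contiguous row per channel
```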

  12. Gamma processes and peaks-over-threshold distributions for time-dependent reliability

    International Nuclear Information System (INIS)

    Noortwijk, J.M. van; Weide, J.A.M. van der; Kallen, M.J.; Pandey, M.D.

    2007-01-01

    In the evaluation of structural reliability, a failure is defined as the event in which stress exceeds a resistance that is liable to deterioration. This paper presents a method to combine the two stochastic processes of deteriorating resistance and fluctuating load for computing the time-dependent reliability of a structural component. The deterioration process is modelled as a gamma process, which is a stochastic process with independent non-negative increments having a gamma distribution with identical scale parameter. The stochastic process of loads is generated by a Poisson process. The variability of the random loads is modelled by a peaks-over-threshold distribution (such as the generalised Pareto distribution). These stochastic processes of deterioration and load are combined to evaluate the time-dependent reliability of the structural component.
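
    The combination of the two stochastic processes can be illustrated with a small Monte Carlo sketch: gamma-distributed deterioration increments, Poisson load arrivals, and generalised Pareto load peaks. All parameter values below are assumptions for illustration, not values from the paper:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)

R0 = 100.0            # initial resistance
a, b = 0.8, 2.0       # gamma process: shape rate (per year) and scale
lam = 1.5             # Poisson load arrivals per year
xi, sigma = 0.1, 8.0  # generalised Pareto load peaks (shape, scale)
T, n_sim = 50.0, 20000

failures = 0
for _ in range(n_sim):
    n_loads = rng.poisson(lam * T)
    t_loads = np.sort(rng.uniform(0, T, n_loads))     # load arrival times
    # Independent gamma increments over successive inter-arrival intervals
    # give the deterioration accumulated up to each load arrival.
    dt = np.diff(np.concatenate(([0.0], t_loads)))
    deterioration = np.cumsum(rng.gamma(a * dt, b))
    loads = genpareto.rvs(xi, scale=sigma, size=n_loads, random_state=rng)
    if np.any(loads > R0 - deterioration):            # stress exceeds resistance
        failures += 1

print(f"P(failure before T={T:.0f}) ~ {failures / n_sim:.3f}")
```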

  13. Rigid Body Sampling and Individual Time Stepping for Rigid-Fluid Coupling of Fluid Simulation

    Directory of Open Access Journals (Sweden)

    Xiaokun Wang

    2017-01-01

    Full Text Available In this paper, we propose an efficient and simple rigid-fluid coupling scheme with scientific programming algorithms for particle-based fluid simulation and three-dimensional visualization. Our approach samples the surface of rigid bodies with boundary particles that interact with fluids. It consists of two procedures, surface sampling and sampling relaxation, which together ensure a uniform distribution of particles with fewer iterations. Furthermore, we present a rigid-fluid coupling scheme that integrates individual time stepping into rigid-fluid coupling, which gains an obvious speedup compared to previous methods. The experimental results demonstrate the effectiveness of our approach.

  14. Systematic sampling with errors in sample locations

    DEFF Research Database (Denmark)

    Ziegel, Johanna; Baddeley, Adrian; Dorph-Petersen, Karl-Anton

    2010-01-01

    Systematic sampling of points in continuous space is widely used in microscopy and spatial surveys. Classical theory provides asymptotic expressions for the variance of estimators based on systematic sampling as the grid spacing decreases. However, the classical theory assumes that the sample grid is exactly periodic; real physical sampling procedures may introduce errors in the placement of the sample points. This paper studies the effect of errors in sample positioning on the variance of estimators in the case of one-dimensional systematic sampling. First we sketch a general approach to variance analysis using point process methods. We then analyze three different models for the error process, calculate exact expressions for the variances, and derive asymptotic variances. Errors in the placement of sample points can lead to substantial inflation of the variance, dampening of zitterbewegung...
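
    The variance inflation can be reproduced numerically: estimate an integral by one-dimensional systematic sampling and jitter the sample locations. A minimal sketch (the test function and Gaussian error model are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sin(2 * np.pi * x) ** 2 + x   # test function on [0, 1]

def systematic_estimate(n, jitter_sd):
    """Integral estimate from an n-point systematic grid whose points are
    independently displaced by Gaussian placement errors (wrapped to [0,1])."""
    start = rng.uniform(0, 1 / n)                               # random start
    grid = (start + np.arange(n) / n + rng.normal(0, jitter_sd, n)) % 1.0
    return f(grid).mean()

for sd in (0.0, 0.02, 0.1):
    est = np.array([systematic_estimate(20, sd) for _ in range(5000)])
    print(f"jitter sd={sd:.2f}  variance={est.var():.2e}")
```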

  15. Mathematical methods in time series analysis and digital image processing

    CERN Document Server

    Kurths, J; Maass, P; Timmer, J

    2008-01-01

    The aim of this volume is to bring together research directions in theoretical signal and image processing developed rather independently in electrical engineering, theoretical physics, mathematics and the computer sciences. In particular, mathematically justified algorithms and methods, the mathematical analysis of these algorithms and methods, and the investigation of connections between methods from time series analysis and image processing are reviewed. An interdisciplinary comparison of these methods, drawing upon common sets of test problems from medicine and the geophysical/environmental sciences, is also addressed. This volume coherently summarizes work carried out in the field of theoretical signal and image processing. It focuses on non-linear and non-parametric models for time series as well as on adaptive methods in image processing.

  16. On the automation of periodic hard real-time processes: a graph-theoretical approach

    OpenAIRE

    Boode, Antoon Hendrik

    2018-01-01

    In certain single-core mono-processor configurations, e.g. embedded control systems like robotic applications comprising many short processes, process context switches may consume a considerable amount of the available processing power. Reducing the number of context switches decreases the execution time and thereby increases the performance of the application. Furthermore, the end-to-end processing time suffers from the idle time of the processor, because, for example, processes have...

  17. Continuous-time Markov decision processes theory and applications

    CERN Document Server

    Guo, Xianping

    2009-01-01

    This volume provides the first book entirely devoted to recent developments on the theory and applications of continuous-time Markov decision processes (MDPs). The MDPs presented here include most of the cases that arise in applications.

  18. Statistical sampling method for releasing decontaminated vehicles

    International Nuclear Information System (INIS)

    Lively, J.W.; Ware, J.A.

    1996-01-01

    Earth moving vehicles (e.g., dump trucks, belly dumps) commonly haul radiologically contaminated materials from a site being remediated to a disposal site. Traditionally, each vehicle must be surveyed before being released. The logistical difficulties of implementing the traditional approach on a large scale demand that an alternative be devised. A statistical method (MIL-STD-105E, "Sampling Procedures and Tables for Inspection by Attributes") for assessing product quality from a continuous process was adapted to the vehicle decontamination process. This method produced a sampling scheme that automatically compensates and accommodates fluctuating batch sizes and changing conditions without the need to modify or rectify the sampling scheme in the field. Vehicles are randomly selected (sampled) upon completion of the decontamination process to be surveyed for residual radioactive surface contamination. The frequency of sampling is based on the expected number of vehicles passing through the decontamination process in a given period and the confidence level desired. This process has been successfully used for 1 year at the former uranium mill site in Monticello, Utah (a CERCLA regulated clean-up site). The method forces improvement in the quality of the decontamination process and results in a lower likelihood that vehicles exceeding the surface contamination standards are offered for survey. Implementation of this statistical sampling method on Monticello Projects has resulted in more efficient processing of vehicles through decontamination and radiological release, saved hundreds of hours of processing time, provided a high level of confidence that release limits are met, and improved the radiological cleanliness of vehicles leaving the controlled site.
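
    The acceptance-sampling logic behind such a plan can be sketched with the binomial operating-characteristic curve: survey n randomly selected vehicles and accept the batch if at most c fail the survey. The plan values below are assumptions for illustration, not the plan actually used at Monticello:

```python
from scipy.stats import binom

# Illustrative single-sampling plan in the spirit of MIL-STD-105E.
n, c = 13, 0   # assumed sample size and acceptance number

for p in (0.01, 0.05, 0.10, 0.20):
    # Probability of acceptance when a fraction p of vehicles would fail:
    # one point on the plan's operating-characteristic curve.
    p_accept = binom.cdf(c, n, p)
    print(f"failure rate {p:.0%}: P(accept batch) = {p_accept:.3f}")
```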

  19. Data processing system for NBT experiments

    International Nuclear Information System (INIS)

    Takahashi, C.; Hosokawa, M.; Shoji, T.; Fujiwara, M.

    1981-07-01

    A data processing system for the Nagoya Bumpy Torus (NBT) has been developed. Since plasmas are produced and heated in steady state using high-power microwaves, data sampling and processing take place on a long time scale, on the order of one minute. The system, which consists of a NOVA 3/12 minicomputer and many data acquisition devices, is designed to sample and process a large amount of data before the next discharge starts. Several features of such a long-time-scale data processing system are described in detail. (author)

  20. Model checking process algebra of communicating resources for real-time systems

    DEFF Research Database (Denmark)

    Boudjadar, Jalil; Kim, Jin Hyun; Larsen, Kim Guldstrand

    2014-01-01

    This paper presents a new process algebra, called PACoR, for real-time systems, which deals with resource-constrained timed behavior as an improved version of the ACSR algebra. We define PACoR as a Process Algebra of Communicating Resources which allows one to explicitly express preemptiveness...

  1. Virtual sampling in variational processing of Monte Carlo simulation in a deep neutron penetration problem

    International Nuclear Information System (INIS)

    Allagi, Mabruk O.; Lewins, Jeffery D.

    1999-01-01

    In a further study of virtually processed Monte Carlo estimates in neutron transport, a shielding problem is examined. The use of virtual sampling to estimate the importance function at a certain point in phase space depends on the presence of neutrons from the real source at that point. But in deep penetration problems, few neutrons will reach regions far away from the source. In order to overcome this problem, two suggestions are considered: (1) virtual sampling is used as far as the real neutrons reach, and fictitious sampling is then introduced for the remaining regions, with fictitious sources distributed over all of those regions; or (2) a single fictitious source is placed where the real neutrons almost terminate, and virtual sampling is then used in the same way as for the real source. Variational processing is again found to improve the Monte Carlo estimates, performing best when one fictitious source is used in the far regions with virtual sampling (option 2). When fictitious sources are used to estimate the importances in regions far away from the source, the proportion of fictitious to real sources has to be optimized, weighing accuracy against computational cost. It was found in this study that the optimum number of cells to be treated by fictitious sampling is problem dependent, but as a rule of thumb, fictitious sampling should be employed in regions where the number of neutrons from the real source falls below a specified limit for good statistics.

  2. Unambiguous range-Doppler LADAR processing using 2 giga-sample-per-second noise waveforms

    International Nuclear Information System (INIS)

    Cole, Z.; Roos, P.A.; Berg, T.; Kaylor, B.; Merkel, K.D.; Babbitt, W.R.; Reibel, R.R.

    2007-01-01

    We demonstrate sub-nanosecond range and unambiguous sub-50-Hz Doppler resolved laser radar (LADAR) measurements using spectral holographic processing in rare-earth ion doped crystals. The demonstration utilizes pseudo-random-noise 2 giga-sample-per-second baseband waveforms modulated onto an optical carrier.
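
    The range side of such a measurement reduces to correlating the noisy return against the transmitted pseudo-random-noise code and locating the correlation peak. A minimal digital sketch (code length, delay and noise level are assumptions; the demonstration itself used analog spectral holographic processing rather than digital correlation):

```python
import numpy as np

rng = np.random.default_rng(5)

fs = 2e9                                       # sample rate (samples/s)
code = rng.choice([-1.0, 1.0], 4096)           # baseband PN waveform
delay = 137                                    # true round-trip delay (samples)
rx = (np.concatenate([np.zeros(delay), code])
      + 0.5 * rng.standard_normal(delay + code.size))  # delayed, noisy return

corr = np.correlate(rx, code, mode="valid")    # matched-filter correlation
est = int(np.argmax(corr))                     # peak location = delay estimate
print(f"estimated delay: {est} samples -> range {est / fs * 3e8 / 2:.3f} m")
```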

  3. Unambiguous range-Doppler LADAR processing using 2 giga-sample-per-second noise waveforms

    Energy Technology Data Exchange (ETDEWEB)

    Cole, Z. [S2 Corporation, 2310 University Way 4-1, Bozeman, MT 59715 (United States)]. E-mail: cole@s2corporation.com; Roos, P.A. [Spectrum Lab, Montana State University, P.O. Box 173510, Bozeman, MT 59717 (United States); Berg, T. [S2 Corporation, 2310 University Way 4-1, Bozeman, MT 59715 (United States); Kaylor, B. [S2 Corporation, 2310 University Way 4-1, Bozeman, MT 59715 (United States); Merkel, K.D. [S2 Corporation, 2310 University Way 4-1, Bozeman, MT 59715 (United States); Babbitt, W.R. [Spectrum Lab, Montana State University, P.O. Box 173510, Bozeman, MT 59717 (United States); Reibel, R.R. [S2 Corporation, 2310 University Way 4-1, Bozeman, MT 59715 (United States)

    2007-11-15

    We demonstrate sub-nanosecond range and unambiguous sub-50-Hz Doppler resolved laser radar (LADAR) measurements using spectral holographic processing in rare-earth ion doped crystals. The demonstration utilizes pseudo-random-noise 2 giga-sample-per-second baseband waveforms modulated onto an optical carrier.

  4. Systematic identification and robust control design for uncertain time delay processes

    DEFF Research Database (Denmark)

    Huusom, Jakob Kjøbsted; Poulsen, Niels Kjølstad; Jørgensen, Sten Bay

    2011-01-01

    A systematic procedure is proposed to handle the standard process control problem. The considered standard problem involves infrequent step disturbances to processes with large delays and measurement noise. The process is modeled as an ARX model and extended with a suitable noise model in order to reject unmeasured step disturbances and unavoidable model errors. This controller is illustrated to perform well for both set point tracking and disturbance rejection for a SISO process example of a furnace, which has a time delay significantly longer than the dominating time constant.
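
    The ARX identification step referred to above is a linear least-squares problem once the regressors are laid out. A minimal sketch for a first-order process with a long delay (all model values are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated first-order process with a long time delay d (illustrative values):
# y[k] = a*y[k-1] + b*u[k-1-d] + noise
a_true, b_true, d, N = 0.9, 0.5, 10, 500
u = rng.choice([-1.0, 1.0], N)                 # persistently exciting input
y = np.zeros(N)
for k in range(1, N):
    y[k] = (a_true * y[k - 1]
            + (b_true * u[k - 1 - d] if k >= d + 1 else 0.0)
            + 0.05 * rng.standard_normal())

# ARX least squares: regress y[k] on y[k-1] and u[k-1-d].
idx = np.arange(d + 1, N)
Phi = np.column_stack([y[idx - 1], u[idx - 1 - d]])
theta, *_ = np.linalg.lstsq(Phi, y[idx], rcond=None)
print(f"estimated a = {theta[0]:.3f}, b = {theta[1]:.3f}")
```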

  5. The relaxation time of processes in a FitzHugh-Nagumo neural system with time delay

    International Nuclear Information System (INIS)

    Gong Ailing; Zeng Chunhua; Wang Hua

    2011-01-01

    In this paper, we study the relaxation time (RT) of the steady-state correlation function in a FitzHugh-Nagumo neural system in the presence of multiplicative and additive white noises and time delay. The noise correlation parameter λ can produce critical behavior in the RT as a function of the multiplicative noise intensity D, the additive noise intensity Q and the time delay τ. Below the critical value of λ, the RT decreases as the noise intensities D and Q increase and increases as the time delay τ increases. Above the critical value, the RT first increases, reaches a maximum, and then decreases as D, Q and τ increase; that is, there exist a noise intensity D or Q and a time delay τ at which the time scales of the relaxation process are at their largest. In addition, the additive noise intensity Q can also produce critical behavior in the RT as a function of λ: below the critical value of Q, increasing λ first increases and then decreases the RT; above the critical value, increasing λ increases it.
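
    One common way to quantify an RT of this kind is the integral of the normalised steady-state autocorrelation function, T_r = (1/C(0)) ∫ C(t) dt. A minimal sketch on a simple Ornstein-Uhlenbeck stand-in signal (the delayed FitzHugh-Nagumo system of the paper is far richer):

```python
import numpy as np

rng = np.random.default_rng(3)

# Ornstein-Uhlenbeck stand-in; its theoretical relaxation time is tau.
dt, n, tau = 0.01, 100_000, 0.5
x = np.zeros(n)
for i in range(1, n):
    x[i] = x[i - 1] - (x[i - 1] / tau) * dt + np.sqrt(dt) * rng.standard_normal()

# Unbiased autocorrelation via zero-padded FFT, then integrate C(t)/C(0).
x -= x.mean()
spec = np.fft.rfft(x, 2 * n)
acf = np.fft.irfft(spec * np.conj(spec))[:n] / np.arange(n, 0, -1)
acf /= acf[0]
cutoff = np.argmax(acf < 0.01)                 # truncate before the noisy tail
print(f"relaxation time ~ {acf[:cutoff].sum() * dt:.3f} (theory: {tau})")
```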

  6. Process algebra with timing : real time and discrete time

    NARCIS (Netherlands)

    Baeten, J.C.M.; Middelburg, C.A.; Bergstra, J.A.; Ponse, A.J.; Smolka, S.A.

    2001-01-01

    We present real time and discrete time versions of ACP with absolute timing and relative timing. The starting-point is a new real time version with absolute timing, called ACPsat, featuring urgent actions and a delay operator. The discrete time versions are conservative extensions of the discrete

  7. Process algebra with timing: Real time and discrete time

    NARCIS (Netherlands)

    Baeten, J.C.M.; Middelburg, C.A.

    1999-01-01

    We present real time and discrete time versions of ACP with absolute timing and relative timing. The starting-point is a new real time version with absolute timing, called ACPsat, featuring urgent actions and a delay operator. The discrete time versions are conservative extensions of the discrete

  8. Development of a real-time multiplex PCR assay for the detection of multiple Salmonella serotypes in chicken samples

    Directory of Open Access Journals (Sweden)

    Whyte Paul

    2008-09-01

    Full Text Available Background: A real-time multiplex PCR assay was developed for the detection of multiple Salmonella serotypes in chicken samples. Poultry-associated serotypes detected in the assay include Enteritidis, Gallinarum, Typhimurium, Kentucky and Dublin. The traditional cultural method according to EN ISO 6579:2002 for the detection of Salmonella in food was performed in parallel. The real-time PCR based method comprised a pre-enrichment step in Buffered Peptone Water (BPW) overnight, followed by a shortened selective enrichment in Rappaport-Vassiliadis Soya broth (RVS) for 6 hours and subsequent DNA extraction. Results: The real-time multiplex PCR assay and the traditional cultural method showed 100% inclusivity and 100% exclusivity on all strains tested. The real-time multiplex PCR assay was as sensitive as the traditional cultural method in detecting Salmonella in artificially contaminated chicken samples and correctly identified the serotype. Artificially contaminated chicken samples resulted in a detection limit of between 1 and 10 CFU per 25 g sample for both methods. A total of sixty-three naturally contaminated chicken samples were investigated by both methods, and the relative accuracy, relative sensitivity and relative specificity of the real-time PCR method were determined to be 89, 94 and 87%, respectively. Thirty cultures blind tested were correctly identified by the real-time multiplex PCR method. Conclusion: Real-time PCR methodology can contribute to meeting the need for rapid identification and detection methods in food testing laboratories.
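
    The relative accuracy, sensitivity and specificity quoted above follow the usual definitions for an alternative method validated against a reference method. A minimal sketch of the arithmetic (the agreement counts below are hypothetical, not the study's data):

```python
# Hypothetical agreement counts between the PCR method and the reference
# cultural method (ISO 16140-style definitions).
both_pos, both_neg = 45, 40     # positive/negative agreements
pcr_only, cult_only = 5, 3      # PCR+/culture- and PCR-/culture+ deviations

total = both_pos + both_neg + pcr_only + cult_only
relative_accuracy    = (both_pos + both_neg) / total
relative_sensitivity = both_pos / (both_pos + cult_only)
relative_specificity = both_neg / (both_neg + pcr_only)
print(f"AC = {relative_accuracy:.0%}, SE = {relative_sensitivity:.0%}, "
      f"SP = {relative_specificity:.0%}")
```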

  9. Evaluation of statistical methods for quantifying fractal scaling in water-quality time series with irregular sampling

    Science.gov (United States)

    Zhang, Qian; Harman, Ciaran J.; Kirchner, James W.

    2018-02-01

    River water-quality time series often exhibit fractal scaling, which here refers to autocorrelation that decays as a power law over some range of scales. Fractal scaling presents challenges to the identification of deterministic trends because (1) fractal scaling has the potential to lead to false inference about the statistical significance of trends and (2) the abundance of irregularly spaced data in water-quality monitoring networks complicates efforts to quantify fractal scaling. Traditional methods for estimating fractal scaling - in the form of spectral slope (β) or other equivalent scaling parameters (e.g., Hurst exponent) - are generally inapplicable to irregularly sampled data. Here we consider two types of estimation approaches for irregularly sampled data and evaluate their performance using synthetic time series. These time series were generated such that (1) they exhibit a wide range of prescribed fractal scaling behaviors, ranging from white noise (β = 0) to Brown noise (β = 2) and (2) their sampling gap intervals mimic the sampling irregularity (as quantified by both the skewness and mean of gap-interval lengths) in real water-quality data. The results suggest that none of the existing methods fully account for the effects of sampling irregularity on β estimation. First, the results illustrate the danger of using interpolation for gap filling when examining autocorrelation, as the interpolation methods consistently underestimate or overestimate β under a wide range of prescribed β values and gap distributions. Second, the widely used Lomb-Scargle spectral method also consistently underestimates β. A previously published modified form, using only the lowest 5 % of the frequencies for spectral slope estimation, has very poor precision, although the overall bias is small. Third, a recent wavelet-based method, coupled with an aliasing filter, generally has the smallest bias and root-mean-squared error among all methods for a wide range of
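
    For readers wanting to try the Lomb-Scargle approach discussed above, a minimal sketch of spectral-slope estimation on an irregularly sampled series follows (synthetic white noise, so the true β is 0; as the abstract notes, this estimator tends to be biased for fractal signals, so the result is illustrative only):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(4)

t = np.sort(rng.uniform(0, 1000, 500))        # irregular sample times
y = rng.standard_normal(t.size)               # white noise: true beta = 0
y -= y.mean()

freqs = np.logspace(-2, 0, 100) * 2 * np.pi   # angular frequencies
power = lombscargle(t, y, freqs)

# beta is the negative slope of log(power) versus log(frequency).
slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
print(f"estimated beta = {-slope:.2f} (true value 0 for white noise)")
```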

  10. Academic Training: Real Time Process Control - Lecture series

    CERN Multimedia

    Françoise Benz

    2004-01-01

    ACADEMIC TRAINING LECTURE REGULAR PROGRAMME 7, 8 and 9 June, from 11:00 hrs to 12:00 hrs - Main Auditorium bldg. 500. Real Time Process Control, T. Riesco / CERN-TS. What exactly is meant by real-time? There are several definitions of real-time, most of them contradictory. Unfortunately the topic is controversial, and there does not seem to be 100% agreement over the terminology. Real-time applications are becoming increasingly important in our daily lives and can be found in diverse environments such as the automatic braking system on an automobile, a lottery ticket system, or robotic environmental samplers on a space station. These lectures will introduce concepts and theory such as basic timing constraints, task scheduling, periodic server mechanisms, and hard and soft real-time.

  11. Real-time risk monitoring in business processes : a sensor-based approach

    NARCIS (Netherlands)

    Conforti, R.; La Rosa, M.; Fortino, G.; Hofstede, ter A.H.M.; Recker, J.; Adams, M.

    2013-01-01

    This article proposes an approach for real-time monitoring of risks in executable business process models. The approach considers risks in all phases of the business process management lifecycle, from process design, where risks are defined on top of process models, through to process diagnosis,

  12. Laser-induced breakdown spectroscopy for the real-time analysis of mixed waste samples containing Sr

    International Nuclear Information System (INIS)

    Barefield, J.E. II; Koskelo, A.C.; Multari, R.A.; Cremers, D.A.; Gamble, T.K.; Han, C.Y.

    1995-01-01

    In this report, the use of laser-induced breakdown spectroscopy to analyze mixed waste samples containing Sr is discussed. The mixed waste samples investigated include vitrified waste glass and contaminated soil. Compared to traditional analysis techniques, the laser-based method is fast (i.e., analysis times on the order of minutes) and essentially waste free, since little or no sample preparation is required. Detection limits on the order of ppm Sr were determined. Detection limits obtained using a fiber optic cable to deliver laser pulses to soil samples containing Cr, Zr, Pb, Be, Cu, and Ni are also discussed.

  13. Development of a real-time PCR to detect Demodex canis DNA in different tissue samples.

    Science.gov (United States)

    Ravera, Ivan; Altet, Laura; Francino, Olga; Bardagí, Mar; Sánchez, Armand; Ferrer, Lluís

    2011-02-01

    The present study reports the development of a real-time polymerase chain reaction (PCR) assay to detect Demodex canis DNA in different tissue samples. The technique amplifies a 166 bp fragment of the D. canis chitin synthase gene (AB080667) and has been successfully tested on hairs extracted with their roots and on formalin-fixed, paraffin-embedded skin biopsies. The real-time PCR gave amplification from the hairs of all 14 dogs with a firm diagnosis of demodicosis and consistently failed to amplify from negative controls. Eleven of 12 skin biopsies with a morphologic diagnosis of canine demodicosis were also positive. Sampling hairs at two skin points (lateral face and interdigital skin), D. canis DNA was detected in nine of 51 healthy dogs (17.6%), a much higher percentage than previously reported in microscopic studies. Furthermore, it is foreseen that if the number of samples were increased, the percentage of positive dogs would probably also grow. Moreover, in four of the six dogs with demodicosis, samples taken from non-lesioned skin were positive. This finding, if confirmed in further studies, suggests that demodicosis is a generalized phenomenon in canine skin, due to proliferation of local mite populations, even though macroscopic lesions only appear in certain areas. The real-time PCR technique to detect D. canis DNA described in this work is a useful tool for advancing our understanding of canine demodicosis.

  14. Mathematical analysis of the real time array PCR (RTA PCR) process

    NARCIS (Netherlands)

    Dijksman, Johan Frederik; Pierik, A.

    2012-01-01

    Real time array PCR (RTA PCR) is a recently developed biochemical technique that measures amplification curves (like with quantitative real time Polymerase Chain Reaction (qRT PCR)) of a multitude of different templates in a sample. It combines two different methods in order to profit from the

  15. Efficient Processing of Multiple DTW Queries in Time Series Databases

    DEFF Research Database (Denmark)

    Kremer, Hardy; Günnemann, Stephan; Ivanescu, Anca-Maria

    2011-01-01

    In many of today's applications, however, large numbers of queries arise at any given time. Existing DTW techniques do not process multiple DTW queries simultaneously, a serious limitation which slows down overall processing. In this paper, we propose an efficient processing approach for multiple DTW queries...
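
    As context for the abstract, the core cost being multiplied across queries is the classic DTW dynamic program. A minimal single-query sketch (the paper's multi-query batching is not reproduced):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic-programming DTW distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

q = np.sin(np.linspace(0, 2 * np.pi, 50))         # query series
s = np.sin(np.linspace(0, 2 * np.pi, 70) + 0.3)   # time-warped candidate
print(f"DTW(q, s) = {dtw_distance(q, s):.3f}")
```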

  16. 48 CFR 852.271-72 - Time spent by counselee in counseling process.

    Science.gov (United States)

    2010-10-01

    Section 852.271-72 (Federal Acquisition Regulations System, DEPARTMENT OF...): Time spent by counselee in counseling process. As prescribed in 871.212, insert the following clause: Time Spent by Counselee in Counseling Process (APR 1984). The contractor agrees that no...

  17. An approach for sampling solid heterogeneous waste at the Hanford Site waste receiving and processing and solid waste projects

    International Nuclear Information System (INIS)

    Sexton, R.A.

    1993-03-01

    This paper addresses the problem of obtaining meaningful data from samples of solid heterogeneous waste while maintaining sample rates as low as practical. The Waste Receiving and Processing Facility, Module 1, at the Hanford Site in south-central Washington State will process mostly heterogeneous solid wastes. The presence of hazardous materials is documented for some packages and unknown for others. Waste characterization is needed to segregate the waste, meet waste acceptance and shipping requirements, and meet facility permitting requirements. Sampling and analysis are expensive, and no amount of sampling will produce absolute certainty of waste contents. A sampling strategy is proposed that provides acceptable confidence with achievable sampling rates

  18. A conceptual framework for intelligent real-time information processing

    Science.gov (United States)

    Schudy, Robert

    1987-01-01

    By combining artificial intelligence concepts with the human information processing model of Rasmussen, a conceptual framework was developed for real-time artificial intelligence systems which provides a foundation for system organization, control and validation. The approach is based on the description of system processing in terms of an abstraction hierarchy of states of knowledge. The states of knowledge are organized along one dimension which corresponds to the extent to which the concepts are expressed in terms of the system inputs or in terms of the system response. Thus organized, the useful states form a generally triangular shape, with the sensors and effectors forming the lower two vertices and the fully evaluated set of courses of action the apex. Within the triangle boundaries are numerous processing paths which shortcut the detailed processing by connecting incomplete levels of analysis to partially defined responses. Shortcuts at different levels of abstraction include reflexes, sensory-motor control, rule-based behavior, and satisficing. This approach was used in the design of a real-time tactical decision aiding system, and in defining an intelligent aiding system for transport pilots.

  19. Sensitive detection of porcine DNA in processed animal proteins using a TaqMan real-time PCR assay.

    Science.gov (United States)

    Pegels, N; González, I; Fernández, S; García, T; Martín, R

    2012-01-01

    A TaqMan real-time PCR method was developed for the specific detection of prohibited porcine material in industrial feeds. The assay combines the use of a porcine-specific primer pair, which amplifies a 79 bp fragment of the mitochondrial (mt) 12S rRNA gene, and a locked nucleic acid (LNA) TaqMan probe complementary to a target sequence lying between the porcine-specific primers. The nuclear 18S rRNA gene system, yielding a 77 bp amplicon, was employed as a positive amplification control to monitor the total content of amplifiable DNA in the samples. The specificity of the porcine primers-probe system was verified against different animal and plant species, including mammals, birds and fish. The applicability of the real-time PCR protocol to detect the presence of porcine mt DNA in feeds was determined through the analysis of 190 industrial feeds (19 known reference and 171 blind samples) subjected to stringent processing treatments. The performance of the method allows qualitative and highly sensitive detection of short fragments from porcine DNA in all the industrial feeds declared to contain porcine material. Although the method has quantitative potential, the real quantitative capability of the assay is limited by the existing variability in the composition and processing conditions of the feeds, which affect the amount and quality of amplifiable DNA.

  20. In-well time-of-travel approach to evaluate optimal purge duration during low-flow sampling of monitoring wells

    Science.gov (United States)

    Harte, Philip T.

    2017-01-01

    A common assumption with low-flow groundwater sampling is that formation water is not captured at the pump until inflow from the high hydraulic conductivity part of the screened formation has had time to travel vertically in the well to the pump intake. Therefore, the length of time needed for adequate purging prior to sample collection (called the optimal purge duration) is controlled by in-well, vertical travel times. A preliminary, simple analytical model was used to provide information on the relation between purge duration and capture of formation water for different gross levels of heterogeneity (contrast between low and high hydraulic conductivity layers). The model was then used to compare these time-volume relations to purge data (pumping rates and drawdown) collected at several representative monitoring wells from multiple sites. Results showed that the computed time-dependent capture of formation water (as opposed to capture of preexisting screen water), based on vertical travel times in the well, compares favorably with the time required to achieve field parameter stabilization. If field parameter stabilization is an indicator of the arrival time of formation water, as has been postulated, then in-well vertical flow may be an important factor at wells where low-flow sampling is the sampling method of choice.
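
    The in-well travel-time idea reduces to simple plug-flow arithmetic: at pumping rate Q the water column in the casing moves at roughly v = Q/A, so formation water entering a vertical distance L from the intake needs about L/v to arrive. A minimal sketch (all well dimensions and rates are assumed values for illustration):

```python
import math

Q = 0.2e-3 / 60   # pumping rate: 0.2 L/min converted to m^3/s
d = 0.051         # well casing inner diameter (m), roughly a 2-inch well
L = 1.5           # distance from dominant inflow zone to pump intake (m)

A = math.pi * (d / 2) ** 2   # casing cross-sectional area (m^2)
v = Q / A                    # mean vertical velocity in the casing (m/s)
t = L / v                    # in-well vertical travel time (s)
print(f"velocity = {v * 1000:.2f} mm/s, optimal purge duration ~ {t / 60:.1f} min")
```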